Self-hosted architecture solutions: Component tutorial

This page is part of a multi-part series that discusses hosting Looker, deployment methods, and best practices for the related components. This page introduces common practices for specific components of the Looker architecture and explains how to configure them in a deployment.

This series consists of three parts.

Looker has a number of dependencies, such as hosting the server, handling both ad hoc and scheduled workloads, and tracking iterative model development. On this page, these dependencies are referred to as components. Each component is discussed in detail in the following sections.

Host setup

OS and distribution

Looker runs well on the most common versions of Red Hat, SUSE, and Debian/Ubuntu. Derivatives of these distributions that are designed and optimized to run in a particular environment are generally fine. For example, the Google Cloud or AWS distributions of Linux are compatible with Looker. Debian/Ubuntu is the most widely used variety of Linux in the Looker community, and these are the versions that Looker Support is most familiar with. It is easiest to use Debian/Ubuntu, or an operating system for a specific cloud provider that is derived from Debian/Ubuntu.

Looker runs in a Java Virtual Machine (JVM). When choosing a distribution, confirm that it provides an up-to-date version of OpenJDK 8. Older Linux distributions may be able to run Looker, but the Java version and libraries that Looker requires for specific features must be current. If the JVM does not contain the Java libraries and versions recommended for Looker, Looker will not function normally. Looker currently requires Java HotSpot 1.8 update 161 or later, or OpenJDK 8 181 or later.
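
As a quick sanity check under these requirements, the JVM available on a host can be inspected from a shell; the exact output format varies by distribution:

# Print the installed Java version (should report 1.8.0_181 or later for OpenJDK 8)
java -version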

CPU and memory

Nodes of 4x16 (4 CPUs and 16 GB of RAM) are sufficient for a development system or a basic test Looker instance used by a small team. However, this is usually not enough for production use. In our experience, 16x64 nodes (16 CPUs and 64 GB of RAM) offer a good balance of price and performance. More than 64 GB of RAM can affect performance, because garbage collection events run single-threaded and halt all other threads while they execute.

Disk storage

100 GB of disk space is generally sufficient for a production system.

Cluster considerations

Looker runs in a Java JVM, and Java can have difficulty managing memory above 64 GB. As a general rule, if more capacity is needed, add additional 16x64 nodes to the cluster rather than increasing the node size beyond 16x64. A clustered architecture can also be used to improve availability.

Looker nodes in a cluster must share certain parts of the file system. The shared data includes the following:

  • LookML models
  • Developers' LookML models
  • Git server connections

Because the file system is shared and hosts a large number of Git repositories, handling concurrent access and file locking is critical. The file system must be POSIX-compliant. Network File System (NFS) is a well-understood option that is readily available on Linux. You should spin up an additional Linux instance and configure NFS to share the disk. A default NFS setup is a potential single point of failure, so consider a failover or high-availability configuration.
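
As a rough sketch of the shared-storage setup, assuming a hypothetical NFS server named nfs.example.com that exports /export/looker and a mount point of /mnt/looker-share on each node (adjust the names and paths for your environment):

# /etc/fstab entry on every Looker node (hypothetical server and paths)
nfs.example.com:/export/looker  /mnt/looker-share  nfs  defaults  0 0

# Mount the export and confirm that the Looker user can write to it
sudo mount /mnt/looker-share
sudo -u looker touch /mnt/looker-share/.write-test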

Looker's metadata must also be centralized, which means the internal database must be migrated to MySQL. This can be either a MySQL service or a dedicated MySQL deployment. For details, see the Internal (backend) database section on this page.

JVM configuration

Looker's JVM settings are defined in the Looker startup script. After any update, Looker must be restarted for the changes to take effect. The default settings are as follows:

java \
  -XX:+UseG1GC -XX:MaxGCPauseMillis=2000 \
  -Xms$JAVAMEM -Xmx$JAVAMEM \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/tmp/gc.log ${JAVAARGS} \
  -jar looker.jar start ${LOOKERARGS}

The resource settings are defined in the Looker startup script:

JAVAMEM="2300m"
METAMEM="800m"

The JVM memory allocation must account for the overhead of the operating system that Looker runs on. In general, the JVM can be allocated up to 60% of total memory, but there are caveats depending on machine size. For machines with a minimum of 8 GB of total memory, we recommend allocating 4 to 5 GB to Java and 800 MB to Meta. For larger machines, a smaller proportion of memory needs to be left for the operating system. For example, for a machine with 60 GB of total memory, we recommend allocating 36 GB to Java and 1 GB to Meta. Note that Java's memory allocation should typically scale with the machine's total memory, but 1 GB is sufficient for Meta.

In addition, because Looker shares system resources with other processes such as Chromium for rendering, the amount of memory allocated to Java should be chosen in light of the anticipated rendering load and the machine size. If the rendering load is expected to be low, Java can be given a larger share of total memory. For example, on a machine with 60 GB of total memory, Java could safely be allocated 46 GB, which is higher than the general 60% recommendation. The appropriate resource allocation differs for every deployment, so use 60% as a baseline and adjust based on usage.
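
As an illustration of the 60 GB example above, the corresponding startup-script values might look like the following; the numbers are a sketch to be tuned per deployment, not fixed recommendations:

# Machine with 60 GB of total memory and a typical rendering load:
# roughly 60% of memory to the core Java process and 1 GB to Meta
JAVAMEM="36000m"
METAMEM="1000m"

# If the node does little rendering, Java can take a larger share, for example:
# JAVAMEM="46000m"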

Garbage collection

Looker recommends using the most current garbage collector available for your Java version. By default, the garbage collection timeout is 2 seconds, but it can be changed by editing the following option in the startup configuration:

-XX:MaxGCPauseMillis=2000

On larger machines with multiple cores, the GC timeout can be shortened.

Logs

By default, Looker's GC logs are stored in /tmp/gc.log. This can be changed by editing the following option in the startup configuration:

-Xloggc:/tmp/gc.log

JMX

Managing Looker may require monitoring to help right-size its resources. To monitor JVM memory usage, we recommend using JMX.
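
A minimal sketch of exposing a JMX endpoint through the standard JVM system properties in lookerstart.cfg; the port and credential file paths shown here are hypothetical, and a production setup should require authentication (and ideally SSL):

# Expose a JMX endpoint for memory monitoring (hypothetical port and file paths)
JAVAARGS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.port=9910 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.password.file=/home/looker/.jmx/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/home/looker/.jmx/jmxremote.access"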

Looker startup options

Startup options are stored in a file called lookerstart.cfg. This file is sourced by the shell script that starts Looker.
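
A minimal lookerstart.cfg sketch showing the variables referenced throughout this page; the values are illustrative only:

# lookerstart.cfg - sourced by the shell script that starts Looker
JAVAMEM="2300m"
METAMEM="800m"
# Extra JVM flags (proxy settings, JMX, and so on)
JAVAARGS=""
# Looker application flags (scheduler threads, render jobs, clustering, and so on)
LOOKERARGS=""

Instructions later on this page to "add" a flag mean appending it to the LOOKERARGS (or JAVAARGS) string in this file.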

Thread pools

As a multithreaded application, Looker has a number of thread pools. These thread pools range from the core web server to specialized subservices such as scheduling, rendering, and external database connections. Depending on your business workflows, these pools may need to be changed from their default configuration. In particular, there are special considerations for the cluster topologies mentioned in Self-hosted infrastructure architecture patterns and best practices.

High scheduling throughput options

For all non-scheduler nodes, add --scheduler-threads=0 to the LOOKERARGS environment variable in lookerstart.cfg. Without scheduler threads, no scheduled jobs will run on these nodes.

For all dedicated scheduler nodes, add --scheduler-threads=<n> to the LOOKERARGS environment variable in lookerstart.cfg. By default, Looker starts 10 scheduler threads, but this can be increased to <n>. With <n> scheduler threads, each node will be capable of executing <n> concurrent scheduled jobs. It is generally recommended to keep <n> below twice the number of CPUs. The largest recommended host has 16 CPUs and 64 GB of memory, so the upper limit for scheduler threads should be kept below 32.
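
For example, on a hypothetical dedicated scheduler node with 16 CPUs, the flag might be set as follows; 24 is an illustrative value that stays under the 2x-CPU guideline:

# Dedicated scheduler node: up to 24 concurrent scheduled jobs
LOOKERARGS="--scheduler-threads=24"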

High rendering throughput options

For all non-render nodes, add --concurrent-render-jobs=0 to the LOOKERARGS environment variable in lookerstart.cfg. Without render threads, no render jobs will run on these nodes.

For all dedicated render nodes, add --concurrent-render-jobs=<n> to the LOOKERARGS environment variable in lookerstart.cfg. Looker starts with two render threads by default, but this can be increased to <n>. With <n> render threads, each node will be capable of executing <n> concurrent render jobs.

Each render job can utilize a significant amount of memory. Budget about 2 GB per render job. For example, if the core Looker process (Java) is allocated 60% of the total memory and 20% of the total memory is reserved for the operating system, that leaves the last 20% for render jobs. On a 64 GB machine, that is about 12 GB, which is enough for 6 concurrent render jobs. If a node is dedicated to rendering and is not included in the load balancer pool that handles interactive jobs, the core Looker process memory can be reduced to allow for more render jobs. On a 64 GB machine, one might allocate approximately 30% (20 GB) to the Looker core process. Reserving 20% for general OS use leaves 50% (32 GB) for rendering, which is enough for 16 concurrent render jobs.
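
As a sketch of the dedicated render node described above (a 64 GB machine removed from the interactive load balancer pool), the settings might look like this; the exact numbers should be adjusted to the observed rendering load:

# Dedicated render node: shrink the core Java process to roughly 20 GB
JAVAMEM="20000m"
METAMEM="800m"
# Allow 16 concurrent render jobs at about 2 GB each
LOOKERARGS="--concurrent-render-jobs=16"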

Internal (backend) database

The Looker server maintains information about its own configuration, database connections, users, groups, roles, folders, user-defined Looks and dashboards, and various other data in an internal database.

For a standalone Looker instance of moderate size, this data is stored within an in-memory HyperSQL database embedded in the Looker process itself. The data for this database is stored in the file <looker install directory>/.db/looker.script. Although convenient and lightweight, this database experiences performance issues with heavy usage. Therefore, we recommend starting with a remote MySQL database. If this isn't feasible, we recommend migration to a remote MySQL database once the ~/looker/.db/looker.script file reaches 600 MB. Clusters must use a MySQL database.

The Looker server makes many small reads and writes to the MySQL database. Every time a user runs a Look or an Explore, Looker will check the database to verify that the user is still logged in, the user has privileges to access the data, the user has privileges to run the Look or Explore, etc. Looker will also write data to the MySQL database, including the actual SQL that was run, the time the request started and ended, etc. A single interaction between a user and the Looker application could result in 15 or 20 small reads and writes to the MySQL database.

MySQL

The MySQL server should be version 8.0.x, and must be configured to use utf8mb4 encoding. The InnoDB storage engine must be used. The setup instructions for MySQL, as well as instructions for how to migrate data from an existing HyperSQL database to MySQL, are available on the Migrating the Looker backend database to MySQL documentation page.

When configuring Looker to use MySQL, a YAML file must be created containing the connection information. Name the YAML file looker-db.yml and add the setting -d looker-db.yml in the LOOKERARGS section of the lookerstart.cfg file.
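
A sketch of what looker-db.yml might contain, assuming the field names documented on the migration page (host, port, database, username, password, dialect) and illustrative values; refer to that page for the authoritative format:

# looker-db.yml - backend database connection (illustrative values)
dialect: mysql
host: mysql.example.com
port: 3306
database: looker
username: looker
password: <your-password>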

MariaDB

MySQL is dual-licensed, available both as open source and as a commercial product. Oracle continues to enhance MySQL, and MySQL has also been forked as MariaDB. The MariaDB versions equivalent to MySQL are known to work with Looker, but they are not developed for or tested by Looker's engineering teams; therefore, functionality is not supported or guaranteed.

Cloud versions

If you host Looker in your cloud infrastructure, it is logical to host the MySQL database in the same cloud infrastructure. The three major cloud vendors (Amazon AWS, Microsoft Azure, and Google Cloud) all offer managed MySQL services. These providers handle much of the maintenance and configuration for the MySQL database and offer services to help manage backups, provide rapid recovery, and so on. These products are known to work well with Looker.

System Activity queries

The MySQL database is used to store information about how users are using Looker. Any Looker user who has permission to view the System Activity model has access to a number of prebuilt Looker dashboards to analyze this data. Users can also access Explores of Looker metadata to build additional analysis. The MySQL database is primarily used for small, fast, "operational" queries. The large, slow, "analytic" queries generated by the System Activity model can compete with these operational queries and slow Looker down.

In these cases, the MySQL database can be replicated to another database. Both self-managed and certain cloud-managed systems provide simple configuration of replication to other databases. Configuring replication is outside the scope of this document.

In order to use the replica for the System Activity queries, you will create a copy of the looker-db.yml file, for example named looker-usage-db.yml, modify it to point to the replica, and add the setting --internal-analytics-connection-file looker-usage-db.yml to the LOOKERARGS section of the lookerstart.cfg file.
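
Assuming the file names above, the resulting flags in lookerstart.cfg might look like this:

# Primary backend connection plus a replica used only for System Activity queries
LOOKERARGS="-d looker-db.yml --internal-analytics-connection-file looker-usage-db.yml"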

The System Activity queries can run against a MySQL instance or a Google BigQuery database. They are not tested against other databases.

MySQL performance configuration

In addition to the settings required to migrate the Looker backend database to MySQL, highly active clusters may benefit from additional tuning and configuration. These settings can be made to the /etc/my.cnf file, or through the Cloud Console for cloud-managed instances.

The my.cnf configuration file is divided into several sections. The setting changes discussed below are made in the [mysqld] section.

Set the InnoDB buffer pool size

The InnoDB buffer pool size is the total RAM that is used to store the state of the InnoDB data files in memory. If the server is dedicated to running MySQL, the innodb_buffer_pool_size should be set to 50%-70% of total system memory.

If the total size of the database is small, it is allowable to set the InnoDB buffer pool to the size of the database rather than 50% or more of memory.

For this example, a server has 64 GB of memory; therefore, the InnoDB buffer pool should be between 32 GB and 45 GB. Bigger is typically better.

[mysqld]
...
innodb_buffer_pool_size=45G

Set the InnoDB buffer pool instances

When multiple threads attempt to search a large buffer pool, they could contend. To prevent this, the buffer pool is divided into smaller units that can be accessed by different threads without conflict. By default, the buffer pool is divided into 8 instances. This creates the potential for an 8 thread bottleneck. Increasing the number of buffer pool instances reduces the chance of a bottleneck. The innodb_buffer_pool_instances should be set so that each buffer pool gets at least 1 GB of memory.

[mysqld]
...
innodb_buffer_pool_instances=32

Optimize the InnoDB log file

When a transaction is committed, the database has the option to update the data in the actual file, or it can save details about the transaction in the log. If the database crashes before the data files have been updated, the log file can be "replayed" to apply the changes. Writing to the log file is a simple append operation. It is efficient to append to the log at commit time, then batch up multiple changes to the data files and write them in a single IO operation. When the log file is filled, the database has to pause processing new transactions and write all the changed data back to disk.

As a general rule of thumb, the InnoDB log file should be large enough to contain 1 hour of transactions.

There are typically two InnoDB log files. They should be about 25% of your InnoDB buffer pool. For an example database with a 32 GB buffer pool, the InnoDB log files should total 8 GB, or 4 GB per file.

[mysqld]
...
innodb_log_file_size=4G

Configure InnoDB IO capacity

MySQL will throttle the speed at which writes are recorded to the disk so as not to overwhelm the server. The default values are conservative for most servers. For best performance, use the sysbench utility to measure the random write speed to the data disk, then use that value to configure the IO capacity so that MySQL writes data more quickly.

On a cloud-hosted system, the cloud vendor should be able to report the performance of the disks used for data storage. For a self-hosted MySQL server, measure the speed of random writes to the data disk in IO operations per second (IOPS). The Linux utility sysbench is one way to measure this. Use that value for innodb_io_capacity_max, and one-half to three-quarters of that value for innodb_io_capacity. The example below shows the values we would use if we measured 800 IOPS.

[mysqld]
...
innodb_io_capacity=500
innodb_io_capacity_max=800
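
A minimal sysbench sketch for measuring random-write IOPS, assuming sysbench is installed and run from a directory on the MySQL data disk; flag names vary slightly between sysbench versions:

# Prepare test files, run a 60-second random-write test, then clean up
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndwr --time=60 run
sysbench fileio --file-total-size=4G cleanup

The measured writes-per-second figure can then be used for innodb_io_capacity_max, with one-half to three-quarters of it for innodb_io_capacity.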

Configure InnoDB threads

MySQL will open at least one thread for each client being served. If many clients are connected simultaneously, that can lead to a huge number of threads being processed. This can cause the system to spend more time swapping than processing.

Benchmarking should be done to determine the ideal number of threads. To test, set the number of threads between the number of CPUs (or CPU cores) on the system and 4x the number of CPUs. For a 16-core system, this value is likely between 16 and 64.

[mysqld]
...
innodb_thread_concurrency=32

Transaction durability

Setting innodb_flush_log_at_trx_commit to 1 forces MySQL to flush the log to disk for every transaction. If the server crashes, transactions won't be lost, but database performance will be impacted. Setting this value to 0 or 2 can improve performance, but it comes at the risk of losing up to a couple of seconds' worth of transactions.

[mysqld]
...
innodb_flush_log_at_trx_commit=1

Set the flush method

The operating system normally buffers writes to the disk. Because MySQL and the OS are both buffering, there is a performance penalty. Setting the flush method to O_DIRECT removes one layer of buffering, which can improve performance.

[mysqld]
...
innodb_flush_method=O_DIRECT

Enable one file per table

By default, MySQL will use a single data file for all data. The innodb_file_per_table setting will create a separate file for each table, which improves performance and data management.

[mysqld]
...
innodb_file_per_table=ON

Disable stats on metadata

This setting disables the collection of stats on internal metadata tables, improving read performance.

[mysqld]
...
innodb_stats_on_metadata=OFF

Disable the query cache

The query cache was deprecated in MySQL 5.7 and removed in MySQL 8.0. On MySQL 5.7, setting query_cache_size and query_cache_type to 0 disables it; on MySQL 8.0, these variables no longer exist and should be omitted.

[mysqld]
...
query_cache_size=0
query_cache_type=0

Enlarge the join buffer

The join buffer is used to perform in-memory joins that cannot use an index. Increasing join_buffer_size can improve the performance of certain operations.

[mysqld]
...
join_buffer_size=512K

Enlarge the temporary table and max heap sizes

The tmp_table_size and max_heap_table_size settings set reasonable caps on the size of temporary in-memory tables before they are forced to disk.

[mysqld]
...
tmp_table_size=32M
max_heap_table_size=32M

Adjust the table open cache

The table_open_cache setting determines the size of the cache that holds the file descriptors for open tables. The table_open_cache_instances setting breaks the cache into a number of smaller parts. There is a potential for thread contention in the table_open_cache, so dividing it into smaller parts helps increase concurrency.

[mysqld]
...
table_open_cache=2048
table_open_cache_instances=16

Git service

Looker is designed to work with a Git service to provide version management of the LookML files. Major Git hosting services are supported, including GitHub, GitLab, and Bitbucket. Git service providers offer additional value, such as a GUI for viewing code changes and support for workflows like pull requests and change approvals. If required, Git can also be run on a plain Linux server.

If a Git hosting service is not appropriate for your deployment because of security rules, many of these service providers offer versions that can be run in your own environment. GitLab, in particular, is commonly self-hosted and can be used as an open source product with no license cost or as a supported licensed product. GitHub Enterprise is available as a self-hosted service and is a supported commercial product.

The following sections list nuances for the most common service providers.

GitHub/GitHub Enterprise

The Setting up and testing a Git connection documentation page uses GitHub as an example.

GitLab/gitlab.com

Refer to the Using GitLab for version control in Looker Looker Community post for detailed setup steps for GitLab. If your repo is contained within subgroups, these can be added to the repo URL using either the HTTPS or SSH format:

https://gitlab.com/accountname/subgroup/reponame

git@gitlab.com:accountname/subgroup/reponame.git

Additionally, there are three different ways you can store Looker-generated SSH keys in GitLab: as a user SSH key, as a repository deploy key, and as a global shared deploy key. A more in-depth explanation can be found in the GitLab documentation.

Google Cloud Source

Refer to the Using Cloud Source Repositories for version control in Looker Community Post for steps to set up Git with Cloud Source Repositories.

Bitbucket Cloud

Refer to the Using Bitbucket for version control in Looker Community Post for steps for setting up Git with Bitbucket Cloud. As of August 2021, Bitbucket Cloud does not support secrets on deploy webhooks.

Bitbucket Server

To use pull requests with Bitbucket Server, you may need to complete the following steps:

  1. When you open a pull request, Looker will automatically use the default port number (7999) in the URL. If you are using a custom port number, you will need to replace the port number in the URL manually.
  2. You will need to hit the project's deploy webhook to sync the production branch in Looker with the repo's master branch.

Phabricator Diffusion

Refer to the Setting up Phabricator and Looker for version control Community Post for steps on setting up Git with Phabricator.

Network

Inbound connections

Looker web application

By default, Looker listens for HTTPS requests on port 9999. Looker uses a self-signed certificate with a CN of self-signed.looker.com. The Looker server can alternatively be configured to do the following:

  1. Accept HTTP connections from an SSL-termination load balancer/proxy, with the --ssl-provided-externally-by=<s> startup flag (see the sketch after this list). The value should be set either to the IP address of the proxy or to a host name that can be locally resolved to the IP address of the proxy. Looker will accept HTTP connections only from this IP address.
  2. Use a customer supplied SSL certificate, with the --ssl-keystore=<s> startup flag.
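
A sketch of the first option, assuming a hypothetical SSL-terminating load balancer at 10.0.0.10:

# Accept plain HTTP only from the SSL-terminating proxy at 10.0.0.10
LOOKERARGS="--ssl-provided-externally-by=10.0.0.10"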

Looker API

The Looker API listens on port 19999. If the installation requires access to the API, then the load balancer should have the requisite forwarding rules. The same SSL considerations apply as with the main web application. We recommend using a distinct port from the web application.

Load balancers

A load balancer is often used to accept an HTTPS request at port 443 using the customer's certificate, then forward the request to the Looker server node at port 9999 using the self-signed certificate or HTTP. If load balancers are using Looker's self-signed certificate, they must be configured to accept that certificate.

Idle connections and timeouts

When a user starts a large request in Looker, that could result in a query that could be expensive to run on the database. If the user abandons that request in any way — by shutting the lid on their laptop, disconnecting from the network, killing that tab in the browser, etc. — Looker wants to know and terminate that database query.

To handle this situation, when the client web application makes a request to run a database query, the browser will open a socket connection via a long-lived HTTP request to the Looker server. This connection will sit open and idle. This socket will get disconnected if the client web application is killed or disconnected in any way. The server will see that disconnect and cancel any related database queries.

Load balancers often notice these open idle connections and kill them. In order to run Looker effectively, the load balancer must be configured to allow this connection to remain open for as long as the longest query a user might run. A timeout of at least 60 minutes is suggested.

Outbound connections

Ideally, Looker servers have unrestricted outbound access to all resources, including the public internet. This simplifies many tasks, such as installing Chromium, which requires access to the package repositories for the Linux distribution.

The following are outbound connections that Looker may need to make.

Internal database connection

By default, MySQL listens for connections on port 3306. The Looker nodes must be able to initiate connections to MySQL on this port. Depending on how the backend database is hosted, you may need to traverse a firewall.

External services

Looker's telemetry and license servers are available via HTTPS on the public internet. Traffic from a Looker node to ping.looker.com:443 and license.looker.com:443 may need to be added to an allowlist.

Data warehouse connections

Cloud-hosted databases may require a connection via the public internet. For example, if you are using BigQuery, then accounts.google.com:443 and www.googleapis.com:443 may need to be added to an allowlist. If the database is outside of your own infrastructure, consult with your database host for network details.

SMTP services

By default, Looker sends outgoing mail via SendGrid. That may require adding smtp.sendgrid.net:587 to an allowlist. The SMTP settings can be changed in the configuration to use a different mail handler as well.

Action hubs, action servers, and webhooks

Many scheduler destinations, in particular webhooks and the ones that are enabled in the Looker Admin panel, involve sending data via HTTPS requests.

  • For webhooks, these destinations are specified at runtime by users, and may be contrary to the goal of firewalling outbound connections.
  • For an action hub, these requests are sent to actions.looker.com. Details can be found in our Looker Action Hub configuration documentation.
  • For other action servers, these requests are sent to the domains specified in the action server's configuration by administrators in the Looker Admin panel.

Proxy server

If the public internet cannot be reached directly, Looker can be configured to use a proxy server for HTTP(S) requests by adding a line like the following to lookerstart.cfg:

JAVAARGS="-Dhttp.proxyHost=myproxy.example.com
  -Dhttp.proxyPort=8080
  -Dhttp.nonProxyHosts=127.0.0.1|localhost
  -Dhttps.proxyHost=myproxy.example.com
  -Dhttps.proxyPort=8080"

Note that internode communications happen over HTTPS, so if you use a proxy server and your instance is clustered, you will usually want to add the IPs/host names for all the nodes in the cluster to the -Dhttp.nonProxyHosts argument.

Internode communications

Internal host identifier

Within a cluster, each node must be able to communicate with the other nodes. To allow this, the host name or IP address of each node is specified in the startup configuration. When the node starts up, this value will be written into the MySQL repository. Other members of the cluster can then refer to those values to communicate with this node. To specify the host name or IP address in the startup configuration, add -H node1.looker.example.com to the LOOKERARGS environment variable in lookerstart.cfg.

Since the host name must be unique per node, the lookerstart.cfg file needs to be unique on each instance. As an alternative to hardcoding the host name or IP address, the command hostname -I or hostname --fqdn can be used to find these at runtime. To implement this, add -H $(hostname -I) or -H $(hostname --fqdn) to the LOOKERARGS environment variable in lookerstart.cfg.
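
For example, a shared lookerstart.cfg might resolve the host name at startup rather than hardcoding it (a sketch; confirm that the chosen command returns a value the other nodes can reach):

# Identify this node to the rest of the cluster by its fully qualified domain name
LOOKERARGS="-H $(hostname --fqdn)"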

Internal ports

In addition to ports 9999 and 19999, which are used for the web and API servers, respectively, the cluster nodes communicate with each other through a message broker service, which uses ports 1551 and 61616. Ports 9999 and 19999 must be open to end-user traffic, while ports 1551 and 61616 only need to be open between cluster nodes.