This page describes best practices for using Datastream.
Change a stream's source database
In some cases, you may need to change the source database of a stream. For example, you may want the stream to replicate from a read replica instead of from the primary database instance.
- Create a connection profile for the replica instance.
- Create a stream, using the connection profile for the replica that you created and the existing connection profile for the destination.
- Start the stream with historical backfill disabled. When the stream starts, it replicates only the changes captured in the binary logs.
- Optional. After the stream is running, modify it to enable automatic backfill.
- Pause the stream that's reading from the primary instance.
- Optional. Delete the stream that was streaming data from the primary instance.
- Optional. Delete the connection profile for the primary instance.
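The steps above can be sketched with the gcloud CLI. This is an illustrative sketch for a MySQL source: the resource names, region, and the contents of the source and destination config files are assumptions you must adapt to your environment.

```shell
# Create a connection profile for the replica instance
# (hostname, credentials, and profile name are placeholders).
gcloud datastream connection-profiles create replica-profile \
    --location=us-central1 \
    --type=mysql \
    --mysql-hostname=REPLICA_HOST \
    --mysql-port=3306 \
    --mysql-username=DATASTREAM_USER \
    --mysql-password=PASSWORD \
    --display-name="Replica profile"

# Create a new stream that reads from the replica profile and writes to the
# existing destination profile, with historical backfill disabled.
gcloud datastream streams create replica-stream \
    --location=us-central1 \
    --display-name="Replica stream" \
    --source=replica-profile \
    --mysql-source-config=mysql_source_config.json \
    --destination=destination-profile \
    --gcs-destination-config=gcs_destination_config.json \
    --backfill-none

# Pause the stream that's reading from the primary instance.
gcloud datastream streams update primary-stream \
    --location=us-central1 \
    --state=PAUSED \
    --update-mask=state
```

The same operations are available in the Google Cloud console; check the current `gcloud datastream` reference for the exact flags supported by your gcloud version.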
Alert and monitor in Datastream
The Datastream dashboard surfaces detailed information that is helpful for debugging. Additional detail is available in the logs, which you can view in Cloud Logging.
No default alerts are set up for Datastream, but you can create alerts easily by clicking the "Create alerting policy" link for each metric in the Datastream UI. We recommend creating alerts for the following Datastream metrics:
- Data freshness
- Unsupported events
- Total latency
An alert on any of these metrics can indicate a problem with either the stream or the source database.
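As a sketch, an alerting policy on data freshness might look like the following Cloud Monitoring policy file, which could be deployed with `gcloud alpha monitoring policies create --policy-from-file=freshness-policy.json`. The metric type, resource filter, and threshold shown here are assumptions for illustration; copy the exact filter from the metric's "Create alerting policy" link in the Datastream UI.

```json
{
  "displayName": "Datastream data freshness too high",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Freshness above 15 minutes",
      "conditionThreshold": {
        "filter": "metric.type=\"datastream.googleapis.com/stream/freshness\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 900,
        "duration": "300s",
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "perSeriesAligner": "ALIGN_MAX"
          }
        ]
      }
    }
  ]
}
```

Attach a notification channel to the policy so that the alert reaches your on-call rotation rather than only appearing in the Monitoring console.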
How many tables can a single stream handle?
A single stream can handle up to 10,000 tables, and there's no limit on the size of the tables. However, business-logic considerations may lead you to split your source database across multiple streams. Examples of these considerations include gaining finer control over user access to data, and simplifying maintenance by dedicating a stream to each business flow.