This guide shows SAP LT Replication Server administrators, SAP data engineers, and other practitioners how to perform operational tasks, such as performance tuning and version updates, for version 2.8 (latest) of the BigQuery Connector for SAP.
Tuning replication performance
Replication performance can be affected by multiple factors. The specific factors that apply can be different from installation to installation and can change over time.
The following sections provide guidance on how to tune some of the more common factors that can impact performance.
For more information about replication performance with the BigQuery Connector for SAP, see Performance planning.
Set performance options for tables
In SAP LT Replication Server you can specify replication options for each table that affect performance.
In particular, the replication performance for large tables, which require more time and resources to replicate, can benefit from specifying ranges and increasing the maximum number of parallel replication jobs that can be used for the table.
Examples of tables that commonly grow large are MSEG, ACDOCA, and MATDOC, among others.
When you specify parallel replication jobs for large tables, you need to balance the number of parallel jobs that you allow for any given table against the total number of parallel jobs that are allowed in the mass transfer configuration. Your organization might also limit the number of parallel replication jobs that you can specify for a given server.
To set performance options for a table:
In the SAP GUI, enter SAP transaction LTRS.
On the Advanced Replication Settings screen, specify the ID of the mass transfer settings for the table.
In the Advanced Replication Settings folder hierarchy, click the Performance Options folder to display the tables that have performance options defined.
If the table you need isn't listed, right-click the Performance Options folder, and then select Add Table.
Specify a name for the Table.
Specify the following options as needed:
- Under General Performance Options:
- No. of Parallel Jobs, to set the maximum number of parallel replication jobs that can be used for the table.
- Sequence Number, to prioritize the replication of this table relative to other table replications.
- Under Initial Load Options:
- For Reading Type, select Reading Type 1 Range Calculation, if your table is not too large. For more information, see Performance and the LTRS Advanced Replication Settings.
- For Package Size, specify the size in bytes of the portions of records that are sent to SAP LT Replication Server.
- If you select a Reading Type that uses ranges, then define appropriate ranges.
- Under Replication Option:
- For Ranges for Logging Table, specify No Ranges for the most reliable option.
- If you select Specify Ranges Manually, then define appropriate ranges.
Click Save.
Baseline performance benchmark
To help you evaluate your replication performance, this section contains baseline performance numbers that were observed in Google Cloud test systems.
Due to the many different factors that affect performance, your performance numbers are likely to vary.
For example, if your SAP systems are not running on Google Cloud, your load and replication rates might be slower than the baseline rates due to factors such as network latency and the overhead associated with obtaining access tokens. If your source table has fewer columns or if you install SAP LT Replication Server on its own server in a standalone architecture, your rates might be faster because SAP LT Replication Server doesn't have to compete with the source system for resources.
Observed baseline performance numbers
The following performance numbers represent the baseline performance that was observed by Google Cloud for each source system type during testing. In each test system, SAP LT Replication Server was installed on the SAP source system in an embedded architecture on Compute Engine VMs. The SAP source system was running in the same Google Cloud region as the target BigQuery dataset.
For information about the configuration of the test systems, see Baseline performance test system configuration.
To see the performance numbers, click your source system type:
S/4HANA
- Table: ACDOCA
- 343 million records
- 477 columns
- Initial load
- Load rate: 350 million records per hour on average
- Load duration: 59 minutes on average
- Replication
- Source table change rate: 50 million records per hour on average
- Maximum replication rate: 50 million records per hour on average
ECC
- Table: MSEG
- 203 million records
- 188 columns
- Initial load
- Load rate: 385 million records per hour on average
- Load duration: 32 minutes on average
- Replication
- Source table change rate: 50 million records per hour on average
- Maximum replication rate: 69 million records per hour on average
The preceding performance numbers are the baselines that the Google Cloud testers observed.
The observed performance was better in test systems that had the following attributes:
- SAP LT Replication Server was installed on its own VM in a standalone architecture.
- For S/4HANA systems, a standalone architecture was observed to have an initial load rate approximately 42% faster than an embedded architecture due to the independent scaling of the SAP LT Replication Server processes.
- For ECC systems, a standalone architecture was observed to have an initial load rate approximately 10% faster than an embedded architecture due to the independent scaling of the SAP LT Replication Server processes.
- The source table had fewer columns.
- The overall byte size of the records was smaller.
For information about the system attributes that you can modify to improve performance, see:
Baseline performance test system configuration
The test systems described in this section produced the baseline performance numbers that are listed in the preceding section, Observed baseline performance numbers.
The test systems, including the SAP source system, the SAP LT Replication Server, and the BigQuery dataset, were all running on Compute Engine VMs in the same Google Cloud region.
In each system, the servers and the workload were designed to simulate a heavier workload and a higher replication volume than you are likely to find in many real-world installations.
To see the test system attributes, click your source system type:
S/4HANA
- SAP LT Replication Server installation architecture:
- Embedded architecture
- Source system servers:
- Two application servers, each on an N2-based Compute Engine custom machine type with the following specifications:
- vCPUs: 60
- Memory: 324 GB
- CPU platform: Intel Cascade Lake
- One SAP HANA server on an m1-ultramem-80 Compute Engine VM with the following specifications:
- vCPUs: 80
- Memory: 1,900 GB
- CPU platform: Intel Broadwell
- Software versions:
- S/4HANA 1909
- SAP LT Replication Server: S/4CORE 104 SP00
- Table size:
- Table name: ACDOCA, general ledger journal entry line item data
- Number of records: 343 million
- Number of columns: 477
- Work processes on each application server:
- 60 Dialog processes
- 220 Background processes
- Load settings in SAP LT Replication Server:
- Jobs: 99
- Reading type: 1 Range Calculation
- Auto-ranges
- Replication settings:
- Jobs: 99
- Use Key fields to calculate ranges for logging table
- 128 ranges
ECC
- SAP LT Replication Server installation architecture:
- Embedded architecture
- Source system servers:
- Two application servers, each on an n2-highmem-48 Compute Engine VM with the following specifications:
- vCPUs: 60
- Memory: 348 GB
- CPU platform: Intel Cascade Lake
- Software versions:
- SAP NetWeaver: 7.0 EHP2
- SAP LT Replication Server: DMIS 2011_1_700 SP17
- Table size:
- Table: MSEG, materials inventory management documents
- Number of records: 203 million
- Number of columns: 188
- Work processes on each application server:
- 60 Dialog processes
- 100 Background processes
- Load settings in SAP LT Replication Server:
- Jobs: 99
- Reading type: 5 Sender Queue
- Manual ranges
- Replication settings:
- Jobs: 99
- Ranges for Logging Table: Use Key fields to calculate ranges
- Number of ranges: 128
Dynamic chunk size
If you encounter errors because the byte size of a chunk exceeds the maximum byte size for HTTP requests that BigQuery accepts, then you need to reduce the byte size manually by reducing the chunk size. The dynamic chunk size feature lets you automatically reduce the chunk size and retry replication to BigQuery when the byte size of a chunk exceeds the maximum byte size for HTTP requests that BigQuery accepts. Dynamic chunk size helps you prevent most replication failures caused by a request exceeding the byte size limit. You might receive an error only if the chunk size reaches 1 but the byte size is still greater than the BigQuery limit on the number of bytes in each HTTP request.
You enable dynamic chunk size in the mass transfer configuration for a table by using transaction /GOOG/SLT_SETTINGS.
Dynamic chunk size is an optional setting. For information about how to enable dynamic chunk size, see:
- If SAP LT Replication Server is running on a Compute Engine VM, see Specify table creation and other general attributes.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, see Specify table creation and other general attributes.
When dynamic chunk size is enabled, the maximum chunk size that BigQuery Connector for SAP allows remains within the BigQuery quota limits, which is 50,000 records.
For more information about chunk size, see Portion size and chunk size.
How dynamic chunk size works
With dynamic chunk size, if the HTTP request with the initial chunk size exceeds the BigQuery limit on byte size, then BigQuery Connector for SAP reduces the chunk size and retries sending the data. BigQuery Connector for SAP continues reducing the chunk size and retrying until the data for a particular chunk is transferred successfully or until the chunk size reaches 1.
The final reduced chunk size for which the data transfer succeeded is then used as the chunk size for all the remaining chunks of that portion. You can find the final reduced chunk size that was successful for each portion in the SAP LT Replication Server application logs as an information message:
Dynamic chunking triggered. Chunk size reduced from INITIAL_CHUNK_SIZE_VALUE to FINAL_REDUCED_CHUNK_SIZE_VALUE.
For the subsequent portions and any subsequent replications, BigQuery Connector for SAP starts sending data to BigQuery with the chunk size configured in transaction /GOOG/SLT_SETTINGS, and reduces the chunk size again if dynamic chunking is triggered.
By default, the chunk size is reduced by 50% after each retry. If you want to reduce the chunk size by a lower or higher percentage, then modify the advanced settings parameters.
The following example shows how the chunk size is determined in the replication process when dynamic chunk size is enabled for a table. In this example, the SAP LT Replication Server portion size is larger than the BigQuery Connector for SAP chunk size, and a chunk size of 10,000 records is defined in transaction /GOOG/SLT_SETTINGS. BigQuery Connector for SAP replicates a portion to BigQuery as follows:
When replication starts for a portion that contains 20,000 records, the chunk size for the first chunk is 10,000 records. If the byte size of the HTTP request is greater than 10 MB, then BigQuery Connector for SAP reduces the chunk size by 50%, and the new chunk size becomes 5,000 records.
BigQuery Connector for SAP retries sending the chunk with 5,000 records. If the byte size of the HTTP request is still greater than 10 MB, then BigQuery Connector for SAP reduces the chunk size by a further 50%, and the new chunk size becomes 2,500 records.
BigQuery Connector for SAP retries sending the chunk with 2,500 records. If the byte size of the HTTP request for this chunk is now smaller than 10 MB, then the replication succeeds and the data is inserted into BigQuery.
The chunk size for all subsequent chunks becomes 2,500 records as long as the byte size of each HTTP request stays below 10 MB. If the byte size of the HTTP request for any subsequent chunk exceeds 10 MB, then BigQuery Connector for SAP again reduces the chunk size and retries sending the data to BigQuery, until the data is transferred successfully for that chunk. The reduced chunk size is used only for the current portion of the current replication.
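The following Python sketch illustrates this reduce-and-retry behavior with the default 50% reduction. It is illustrative only and not the connector's ABAP implementation; the send_chunk and request_size_bytes callables are hypothetical placeholders for the HTTP call to the BigQuery streaming API and for the request-size check.

```python
MAX_REQUEST_BYTES = 10 * 1024 * 1024   # BigQuery HTTP request size limit (about 10 MB)
REDUCTION_PERCENT = 50                 # default reduction; see the PERC_REDUC_DEF parameter

def replicate_portion(records, chunk_size, request_size_bytes, send_chunk):
    """Send one portion to BigQuery, shrinking the chunk size when a request is too large."""
    index = 0
    while index < len(records):
        chunk = records[index:index + chunk_size]
        if request_size_bytes(chunk) > MAX_REQUEST_BYTES and chunk_size > 1:
            # Dynamic chunking triggered: reduce the chunk size and retry the same records.
            chunk_size = max(1, chunk_size * (100 - REDUCTION_PERCENT) // 100)
            continue
        send_chunk(chunk)      # insert the chunk into the BigQuery table
        index += len(chunk)    # the reduced chunk size is reused for the rest of the portion
```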
Performance with dynamic chunk size
Dynamic chunk size can affect the performance of replication to BigQuery. For every chunk, BigQuery Connector for SAP calculates the number of records in the chunk and checks the byte size of the HTTP request. If the byte size is greater than 10 MB, then BigQuery Connector for SAP reduces the chunk size and retries sending the data to BigQuery, which increases the overall replication time.
Use dynamic chunk size only in specific situations where, even after you configure an ideal chunk size, the request size for some data records might exceed the BigQuery HTTP request limit and you don't want to receive chunk size errors. For example:
- Source tables with a large variance in the sparsity of data in their fields, that is, some records have few fields maintained while other records have many fields maintained.
- Source tables that contain long text fields such as EDID4-SDATA, VARI-CLUSTID, and REPOSRC-DATA.
You can also use dynamic chunk size during your testing phase to identify an ideal chunk size for a table that you can define in your production SAP system.
For more information about configuring the chunk size, see:
- If SAP LT Replication Server is running on a Compute Engine VM, see Specify table attributes.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, see Specify table attributes.
Transport mass transfer settings to production
To transport mass transfer settings to production, you first export the settings from a development system, and then import them into the production system.
You can optionally import three separate parts of the mass transfer settings into production:
- The advanced replication settings, which can be accessed by using the LTRS transaction.
- The client key settings from the /GOOG/CLIENT_KEY table, which can be accessed by using the SM30 transaction.
- The BigQuery Connector for SAP mass transfer settings, which can be accessed by using the /GOOG/SLT_SETTINGS transaction.
Export mass transfer settings from a development system
In the SAP LT Replication Server development system, export each part of the mass transfer settings:
Export the advanced replication settings:
- Run the LTRS transaction.
- Select the mass transfer records that you are transporting to production.
- From the File drop-down menu, select Export All Settings.
- In the Export Settings dialog, select a destination and click Save. The settings are saved in a compressed file in CSV format on your local workstation.
Export the BigQuery Connector for SAP mass transfer settings:
Run the /GOOG/SLT_SETTINGS transaction: /n/GOOG/SLT_SETTINGS
In the Settings Table field, select Mass Transfer.
Select the mass transfer records that you are transporting to production.
Click Transport Mass Transfer.
In the Prompt for Workbench request, enter the transport request number and click the Continue icon. For each selected mass transfer record, the settings from the following custom configuration tables are included in the transport:
- /GOOG/BQ_MASTR
- /GOOG/BQ_TABLE
- /GOOG/BQ_FIELD
The mass transfer settings are saved to a transport request.
Export the client key settings by manually including the contents of the /GOOG/CLIENT_KEY table in the transport request.
Save the files to your local workstation.
Import mass transfer settings into a production system
In the SAP LT Replication Server production system, import each part of the mass transfer settings:
Create an SAP LT Replication Server replication configuration for the mass transfer settings.
Import the advanced replication settings:
- Run the LTRS transaction.
- Select the mass transfer that you created in the first step.
- From the File drop-down menu, select Import All Settings.
- In the Choose File dialog, select the compressed file from your local workstation and click Open. The settings are imported as settings for the mass transfer.
Import the transport request that contains the mass transfer settings.
Run the SM30 transaction.
Update the client key settings as necessary for the production environment.
Run the /GOOG/SLT_SETTINGS transaction: /n/GOOG/SLT_SETTINGS
Verify that the correct mass transfers are displayed in the Mass Transfers screen.
In the Mass Transfer ID column, replace the mass transfer ID from the development system with the mass transfer ID from the replication configuration that you created in the first step.
In the subsequent Tables and Fields settings screens, update other values for the table and field mapping as necessary for the production environment.
Test the configuration by starting an initial load or replication. For information about starting an initial load or replication, see:
- If SAP LT Replication Server is running on a Compute Engine VM, Test replication.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, Test replication.
Update BigQuery Connector for SAP
Google Cloud delivers new releases of the BigQuery Connector for SAP as SAP transports.
SAP administrators can update BigQuery Connector for SAP by following these steps:
- Deactivate the configuration in SAP LT Replication Server.
- Import the new SAP transport request.
- After validating the successful import and object activation, activate the configuration in SAP LT Replication Server.
Update the gcloud CLI
You need to keep the Google Cloud CLI updated on the SAP LT Replication Server host.
For more information about managing the gcloud CLI, see Managing gcloud CLI components.
Monitoring
You can monitor several different points along the data path from the SAP data source to the target BigQuery table, including:
- Infrastructure - network, hardware and operating system
- The SAP database layer
- The SAP application layer
- BigQuery Connector for SAP
- BigQuery
Your options for monitoring at each of these points are presented in the following subsections.
Monitoring the infrastructure
On Google Cloud, you can install the Ops Agent on your host VMs for advanced monitoring and logging. The Ops Agent sends the data to Cloud Monitoring in the Google Cloud console.
For more information, see:
For systems that are not running on Google Cloud, you can also get server information by running SAP transactions, such as transaction ST06.
Monitoring the database layer
Use standard SAP transaction codes to monitor the health of the database.
The transaction code DBACOCKPIT is the most common transaction for monitoring the database. This transaction also provides detailed logs that you can use for troubleshooting errors.
For SAP HANA, you can use SAP HANA Studio for SAP HANA operations. You can install SAP HANA Studio on any frontend machine.
When troubleshooting performance or other issues, check the following things in the source database:
- Expensive SQL Statements
- Locks
- Load history
- Indexes
- Processes
Monitoring the application layer
You can use SAP application monitoring and troubleshooting tools to monitor and troubleshoot BigQuery Connector for SAP, because it runs in the application layer.
SAP application monitoring and troubleshooting can be further classified into the following:
- Standard SAP monitoring and troubleshooting
- BigQuery Connector for SAP monitoring and troubleshooting
For larger landscapes, you can use the SAP Solution Manager as a central monitoring tool.
You can use the SAP transaction codes in the following list to monitor and diagnose issues on individual SAP application systems:
- SLT configuration status: LTRC
- SLT errors and logs: LTRO and SLG1
- Internet Communication Manager (HTTP and HTTPS calls): SMICM
- Security and certificates: STRUST
- SAP transports: STMS
- RFC connections: SM59
- OS command: SM69
- Package check: SE80
- Authorization checks: SU53
- Background jobs: SM37
- System logs: SM21
Monitoring BigQuery
Use Cloud Monitoring to view BigQuery metrics and create charts and alerts. Each metric has a resource type, either bigquery_dataset, bigquery_project, or global, as well as a set of labels.
Use the resource types and labels to build queries in the Monitoring Query Language (MQL).
You can group or filter each metric by using the labels.
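As a minimal sketch of how such a query might be run programmatically, the following Python example uses the google-cloud-monitoring client library to execute an MQL query. The metric name and the grouping shown here are assumptions for illustration only; check the list of BigQuery metrics in Cloud Monitoring for the metric that you want to chart or alert on, and replace PROJECT_ID with your own project.

```python
# Minimal sketch, assuming the google-cloud-monitoring client library is installed.
# The metric and grouping below are illustrative assumptions, not a recommendation.
from google.cloud import monitoring_v3

client = monitoring_v3.QueryServiceClient()
mql = """
fetch bigquery_dataset
| metric 'bigquery.googleapis.com/storage/uploaded_row_count'
| group_by 1h, [row_count: sum(value.uploaded_row_count)]
"""
# Run the MQL query and print each returned time series.
for series in client.query_time_series(
    request={"name": "projects/PROJECT_ID", "query": mql}
):
    print(series)
```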
For more information about Monitoring, see the Cloud Monitoring documentation.
View the BigQuery Connector for SAP settings
To view the mass transfer settings of BigQuery Connector for SAP, in SAP GUI, execute the transaction /GOOG/SLT_SETT_DISP.
View the BigQuery Connector for SAP version
To view the version of the BigQuery Connector for SAP installed on your system, in SAP GUI, execute the transaction /GOOG/BQC_VERSION.
Create Table tool
For empty source tables in SAP, SAP SLT prevents the creation of target tables in BigQuery. If you need to create the target tables in the BigQuery dataset for empty source tables, then you can use the Create Table tool.
To run the Create Table tool, follow these steps:
In SAP GUI, execute the transaction /GOOG/CREATE_BQ_TAB preceded by /n: /n/GOOG/CREATE_BQ_TAB
On the Create target tables from BQ settings screen, provide values for the following fields:
- Mass transfer key: the mass transfer key that contains the SAP tables.
- SAP table name: the SAP table names that you need to create.
Click the Execute icon. The target tables are created in the BigQuery dataset.
Optionally, verify in the BigQuery dataset that the table has been created with the correct schema.
Mass Field Conversion tool
Although BigQuery Connector for SAP automatically suggests the BigQuery data types for most fields, you might need to map some fields manually. Rather than manually assigning the data type to each field, you can use the Mass Field Conversion tool to map the data type assignment for all fields in the field mapping screen of the /GOOG/SLT_SETTINGS transaction. The Mass Field Conversion tool converts all field mappings for a table to the STRING type in BigQuery.
If a table is already being replicated or is added for initial load in the LTRC transaction, then don't use the Mass Field Conversion tool for that table, because it can result in schema mismatch issues. You can use this tool only for SAP tables for which the initial load or replication has not started.
To run the Mass Field Conversion tool, follow these steps:
In SAP GUI, execute the transaction /GOOG/MASS_CNVT_FMAP preceded by /n: /n/GOOG/MASS_CNVT_FMAP
On the Mass field conversion screen, provide values for the following fields:
- Mass transfer key: the mass transfer key that contains the SAP tables.
- SAP table name: the SAP table names for which you need all the field mappings to be converted to the STRING type.
Click the Execute icon. For the selected tables, all the field mappings are converted to the STRING type.
Load Simulation tool
This section provides an overview of the Load Simulation tool and what you can do with it.
The Load Simulation tool is a support tool for BigQuery Connector for SAP that lets you simulate the replication of SAP data into BigQuery. The tool is part of the transport that Google Cloud provides for BigQuery Connector for SAP. You use the Load Simulation tool to replicate source SAP data into BigQuery by directly invoking the Business Add-In (BAdI) of BigQuery Connector for SAP. Because the Load Simulation tool does not use the underlying SLT framework, the SLT triggers are not impacted. Do not use the Load Simulation tool for data replication in production environments.
The Load Simulation tool provides a report that you can analyze to assess replication performance, identify potential problems, understand the root cause of issues, and resolve them prior to actual replication of SAP data into BigQuery by using BigQuery Connector for SAP.
The following are some common use cases where you can use the Load Simulation tool:
- Reproduce and troubleshoot any network connectivity, authorization, or authentication issues.
- Generate enhanced logs of the BigQuery API calls for troubleshooting issues.
- For any troubleshooting assistance from Cloud Customer Care, run the Load Simulation tool and provide logs to the Customer Care team.
- Measure performance metrics by providing the time taken for each step in the replication process.
- For SAP LT Replication Server in an embedded architecture, determine an optimal chunk size for the SAP tables.
Use the Load Simulation tool with a sample mass transfer configuration that you create by using the custom transaction /GOOG/SLT_SETTINGS. Do not use your production dataset and BigQuery tables for running the Load Simulation tool.
When your SAP LT Replication Server is in an embedded architecture, you run the Load Simulation tool with standard SAP tables such as MARA and T001.
When your SAP LT Replication Server is in a standalone architecture, you run the Load Simulation tool with the sample table /GOOG/TEST_REPL that Google Cloud provides with the BigQuery Connector for SAP. The Load Simulation tool does not support reading source tables from a remote system.
For more information about the architectures for SAP data sources on Google Cloud, see Installation architecture.
Prerequisites
Before you run the Load Simulation tool, make sure that the following prerequisites are satisfied:
BigQuery Connector for SAP is installed and configured. For information about the installation and configuration steps, see:
- If SAP LT Replication Server is running on a Compute Engine VM, see Install and configure BigQuery Connector for SAP on a Compute Engine VM.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, see Install and configure BigQuery Connector for SAP on a host outside of Google Cloud.
The intended users have access to the custom transaction /GOOG/LOAD_SIMULATE that is provided by Google Cloud. For more information, see:
- If SAP LT Replication Server is running on a Compute Engine VM, see Create SAP roles and authorizations for BigQuery Connector for SAP.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, see Create SAP roles and authorizations for BigQuery Connector for SAP.
How to run the Load Simulation tool
To run the Load Simulation tool, follow these steps:
In the SAP GUI, enter the /GOOG/LOAD_SIMULATE transaction preceded by /n: /n/GOOG/LOAD_SIMULATE
Click the Execute icon. The SLT Load Simulation screen displays.
In the Processing Options, make sure that the Execute Simulation option is selected.
In the Selection Options section, enter the following specifications:
- From the drop-down menu in the Google Cloud Partner field, select BigQuery.
In the Mass Transfer Key field, enter the mass transfer key for the mass transfer configuration.
Use a sample mass transfer configuration with the Load Simulation tool. Do not use your production dataset and BigQuery tables.
In the Table Name field, enter the name of the source SAP table that you provided in the sample mass transfer configuration.
Optionally, in the Where Condition field, enter a condition for data selection from the source table.
You can enter up to 255 characters. For example, if you run the Load Simulation tool for the SAP table MARA and you need to select material numbers from a specific range, then for Where Condition, specify a value like MATNR GE '000000000400000001' AND MATNR LE '000000000600000001'.
In the Cycle Count field, enter the number of processing cycles that the Load Simulation tool runs.
This is helpful when you need to compare the simulation reports across multiple cycles. The value must be greater than 1.
In the Record Count per cycle field, enter the number of records that you want to send to BigQuery in each processing cycle. The value must be greater than 1.
In the Portion size field, enter the number of records out of the Record Count per cycle that SAP LT Replication Server sends to the BAdI of BigQuery Connector for SAP in each portion.
Select one or more flags as appropriate:
Exact Records Count: Indicates that exactly the same number of records provided in the Record Count per cycle field are sent to BigQuery in each processing cycle. If the table does not have enough records, then the Load Simulation tool duplicates the existing records to achieve the required count. The records are duplicated only to insert data into BigQuery and not to insert data in the source table.
Use SLT Target Structure: Uses structure of the SLT logging table to get the source table fields. If this flag is not set, then fields are read directly from the source table to generate the target structure. For more information about SAP LT Replication Server data flow, see Detailed architectural view of the data flow.
Detailed Log: Indicates that the log records are created for all defined methods in the BigQuery Connector for SAP. If this flag is not set, then only important methods are logged.
Clear Previous Results: Clears previously created log records for the same mass transfer and SAP table. If the flag is not set, then the logs are appended to the previous results.
To run the Load Simulation tool, click the Execute icon.
After the load simulation is complete, in the Processing Options section, select the Display Report radio button.
In the Selection Options section, enter the following specifications:
- From the drop-down menu in the Google Cloud Partner field, select BigQuery.
- In the Mass Transfer Key field, enter the mass transfer key for the sample mass transfer configuration.
- In the Table Name field, enter the name of the source SAP table.
- Optionally, to view the report by load simulation execution date, specify a date range in the Report Date field.
- Optionally, to view the last executed report along with the current report, select the Last Execution Only flag.
To display the report, click the Execute icon.
The following table describes the columns displayed in the simulation report:
Name | Description |
---|---|
Transfer Key | The mass transfer key for the mass transfer configuration. |
SAP Table | The name of the SAP table that is being replicated to BigQuery. |
Execution Start Timestamp | The time when the execution started for a BigQuery Connector for SAP method. |
Completion Timestamp | The time when the execution completed for a BigQuery Connector for SAP method. |
Job Number | Unique job number for each completed execution that is automatically generated every time the Load Simulation tool is run. |
Cycle Numb | The sequence number of the processing cycle in which the report is generated. The Record Count per cycle provided in the simulation input is transferred to BigQuery for each cycle. |
Portion Numb | The sequence number of the portion. The Record Count per cycle provided in the simulation input is split into portions based on the specified portion size. The BAdI of BigQuery Connector for SAP is called for each portion. |
Class Name | The class name of the BigQuery Connector for SAP method. |
Method Name | The name of the BigQuery Connector for SAP method. The methods that are called by BigQuery Connector for SAP are logged in a sequence. If the Detailed Log flag is selected in the simulation input, then all methods are logged, or else only the important methods are logged. |
Invoked by Method | The last method that invoked the current BigQuery Connector for SAP method. |
Duration | The total time taken for the execution of a BigQuery Connector for SAP method. |
Recs Count | The number of records passed to a BigQuery Connector for SAP method. This is shown for only the methods to which the records are passed. |
URI Method | The name of the HTTP method, in case the ABAP method makes a BigQuery API call. |
URI String | The HTTP URL, in case the ABAP method makes a BigQuery API call. |
Token Source | The source of the authentication token that the Load Simulation tool is using. This is applicable only if token caching is activated in the /GOOG/CLIENT_KEY table. The possible values are: |
Expiration Time | The expiration time of the authentication token. This is applicable only when token caching is activated in the /GOOG/CLIENT_KEY table. |
Token Value | Value of the authentication token that the Load Simulation tool uses to access BigQuery. |
Return code | The return code of the method execution. The possible values are: |
Error Text | The error title, if any. |
Error Description | Detailed information about the error. |
Payload size | The HTTP payload size to BigQuery Insert API. If there is an error in method execution and payload size is greater than 10 MB, then you can adjust the chunk size to decrease the payload size. |
Information text | Any relevant information message raised by the BAdI of BigQuery Connector for SAP. For example, when dynamic chunking is triggered, the following information message is shown: Dynamic chunking triggered. Chunk size reduced from INITIAL_CHUNK_SIZE_VALUE to FINAL_REDUCED_CHUNK_SIZE_VALUE. |
Status | Status of the method execution. In case a method execution fails, see the BigQuery Connector for SAP troubleshooting guide to resolve the issue. |
Scheduling Load Simulation tool
You can schedule the Load Simulation tool to run automatically as a background job on the SAP LT Replication Server by using the program name /GOOG/R_LOAD_SIMULATION.
For more information from SAP about scheduling background jobs, see Scheduling Background Jobs.
Replication validation
If you select the Extra Fields Flag when you create the target BigQuery table with transaction /GOOG/SLT_SETTINGS, columns are added to the table schema for storing the type of change to each record that triggered replication and for a timestamp that reflects the time at which SAP LT Replication Server received the portion that contained the record.
You can use the change types and the timestamp to query the following types of record counts:
- The number of records that are loaded into a BigQuery table during an initial load.
- The number of records replicated on a specified day into a BigQuery table.
- The total number of unique records in a BigQuery table.
To get these counts, you can query the BigQuery table directly by submitting SQL queries in the Google Cloud console, or you can run the Replication Validation tool, which generates reports that compare the BigQuery record counts with SAP LT Replication Server statistics or record counts from the source table.
For an overview of the Extra Fields Flag, see Extra fields for record changes and count queries.
For information about how to specify the Extra Fields Flag, see:
- If SAP LT Replication Server is running on a Compute Engine VM, see Specify table creation and other general attributes.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, see Specify table creation and other general attributes.
SQL queries for record counts
On the BigQuery SQL Editor page in the Google Cloud console, you can run SQL queries to check the record counts in your BigQuery tables.
You can then compare the BigQuery record counts with the counts in the source table or in the SAP LT Replication Server statistics.
Query the count of records inserted in initial load mode
When a BigQuery table schema includes the optional operation_flag column, records that are inserted into the table in initial load mode include the L operation flag.
To get the count of records that were received by BigQuery during an initial load, execute the following query:
SELECT COUNT(*) FROM `PROJECT.DATASET.TABLE` WHERE operation_flag = 'L'
Query the number of records inserted in replication mode
When a BigQuery table schema includes the optional operation_flag column, records that are inserted into the table in replication mode include one of the following operation flags:
- I: the record was inserted into the source table.
- D: the record was deleted from the source table.
- U: the record was updated in the source table.
To get the count of records that were received by BigQuery in replication mode, run the following query:
SELECT COUNT(*) FROM `PROJECT.DATASET.TABLE` WHERE operation_flag IN ('I', 'D', 'U')
Query the total count of records in a BigQuery table
When a BigQuery table schema includes the optional recordstamp column, the corresponding recordstamp field of each record that is inserted into the table contains a timestamp that indicates when the record was sent by SAP LT Replication Server to BigQuery.
To get a total count of the records in a BigQuery table that you can compare with the total count of records in a source table, you can use the recordstamp and is_deleted fields to count the unique records in the BigQuery table that have not been deleted from the source table.
If the source table is being actively updated or replication is active when you query the records, the count of records in the source and target tables might not match exactly.
To get the current count of unique records in the BigQuery target table, run the following query:
SELECT COUNT(*)
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY KEY_FIELD_1, ..., KEY_FIELD_N ORDER BY recordstamp DESC) row_num
  FROM `PROJECT.DATASET.TABLE`
)
WHERE row_num = 1
  AND is_deleted = false
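If you prefer to run this check outside of the Google Cloud console, the following Python sketch runs the same query with the google-cloud-bigquery client library. The project, dataset, table, and key field names are placeholders that you replace with your own values.

```python
# Minimal sketch, assuming the google-cloud-bigquery client library is installed.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT COUNT(*) AS unique_records
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY KEY_FIELD_1 ORDER BY recordstamp DESC) AS row_num
  FROM `PROJECT.DATASET.TABLE`
)
WHERE row_num = 1 AND is_deleted = false
"""
# Run the query and print the unique record count for comparison with the source table.
for row in client.query(query).result():
    print(f"Unique records in BigQuery: {row.unique_records}")
```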
Replication Validation tool
This section provides an overview of the Replication Validation tool and what you can do with it.
The Replication Validation tool generates reports that compare the record counts in the BigQuery table with SAP LT Replication Server statistics and the record counts in the source table. If the counts don't match exactly, the tool flags the report with a red circle.
To count the records in BigQuery, the tool uses the SQL queries that are shown in the preceding section, SQL queries for record counts.
Run the Replication Validation tool periodically to validate that SAP LT Replication Server and the BigQuery Connector for SAP are replicating records to BigQuery as expected.
To run the Replication Validation tool, enter the custom transaction /GOOG/REPLIC_VALID preceded by /n in the SAP GUI. For step-by-step instructions, see:
- If SAP LT Replication Server is running on a Compute Engine VM, Run the Replication Validation tool.
- If SAP LT Replication Server is running on a host that is external to Google Cloud, Run the Replication Validation tool.
Replication validation reports
You can generate the following validation reports with the Replication Validation tool:
- Initial Load Counts: A comparison of the number of records that were sent by SAP LT Replication Server in load mode and the number of records that were loaded into BigQuery.
- Replication Counts: A comparison of the number of records that were sent by SAP LT Replication Server in replication mode and the number of records that were inserted into BigQuery on a specified day.
- Current Counts: A point-in-time comparison of the number of records that are in the source table and the number of unique records in BigQuery. The current count of the source table cannot display a number outside the 32-bit integer range (-2,147,483,648 to 2,147,483,647).
You can generate each report individually or, by selecting All Checks when you run the tool, you can generate all three reports in a single execution. With the Table Names field, you can generate the replication validation reports for specific tables in the mass transfer configuration.
Displaying Replication Validation reports
After you generate a report, you can display the report by selecting the Display Report radio button in the Processing Options section of the Replication Validation tool interface.
The information that the Replication Validation tool displays in each report differs slightly depending on the type of report.
All of the reports include the following types of information:
- Source record counts from SAP LT Replication Server statistics and the source table.
- Target record counts from the target BigQuery table.
- Any difference between the two counts. The difference is calculated by subtracting the BigQuery counts from the source record counts. A positive value indicates a likely problem, because it suggests that not all source records are making it into BigQuery.
- The difference in the counts displayed as a percentage of source record count.
- A visual indicator of whether the source and target counts are equal or different.
Unequal record counts
The Replication Validation tool includes a status field with each report it displays.
A green square in the status field means that the source record count is equal to the target record count in BigQuery.
A red circle in the status field means that the record counts are not equal.
An unequal record count does not always indicate a problem. The following indicators suggest a possible problem:
- For a Current Counts report, an unequal value always indicates a problem.
- For an Initial Load Counts or Replication Counts report, a positive value indicates a likely problem.
- A relatively low negative value is not a problem. The count in a target BigQuery table can sometimes be a little higher than the source record count due to events such as momentary connectivity disruptions that cause SAP LT Replication Server to resend data.
If you see an unequal count, rerun the report to make sure it was not caused by a transitory issue. An unequal record count can occur due to replication processing at the time the tool generated the report.
For a very large source table or a table that has filters set in SAP LT Replication Server for initial load or replication, the Replication Validation tool might not be able to count all of the records that are required for an equal count.
Scheduling validation checks
You can schedule the Replication Validation tool to run automatically at intervals by using the SAP background job feature.
Edit the BigQuery field map in a CSV file
The following sections describe how to export the default field mapping so that data engineers or BigQuery administrators can edit the target field values without requiring access to SAP LT Replication Server.
When editing the target field values, adhere to the following rules:
- Don't modify values in the SAP Table Name and SAP Field Name columns.
- In the Send Uncompressed Flag column, to disable record compression for a field, mark the field with only an X. Otherwise, leave the field blank.
Create a spreadsheet or text file of the default field mappings
To create a CSV file for editing outside of SAP LT Replication Server:
Run the /GOOG/SLT_SETTINGS transaction.
In the SLT Settings Maintenance screen, specify the following values:
- In the Settings Table field, specify Fields.
- In the Mass Transfer Key field, specify the ID of the mass transfer that you are updating.
- In the Table Name field, either leave the field blank to work with all fields from all tables or specify a table name to work with a specific table.
- Leave all other fields blank.
Click the Execute icon. The BigQuery Settings Maintenance - Fields screen displays.
On the BigQuery Settings Maintenance - Fields screen, hide all columns except for those in the following list by right-clicking on the column headings and selecting Hide from the drop-down menu:
- SAP Table Name
- SAP Field Name
- External Data Element
- External Field Name
- Field Description
- Send Uncompressed Flag
With the six remaining columns displayed, click the Export icon.
From the Export menu, select one of the following options:
- Spreadsheet
- Local file. For ease of converting the file contents to CSV format, we recommend saving the file in the Text with tabs format.
Save the default field mappings by clicking the Checkmark icon.
Convert the spreadsheet or text file to CSV format
To upload edited field mappings by using the custom transaction /GOOG/SLT_SETTINGS, the field mappings must be in CSV format.
If you are using a spreadsheet, save the spreadsheet as a CSV file before you upload the file.
If you are using a local file in a tab-separated format or any other format, you need to modify the file to conform to CSV format.
For example:
SAP Table,SAP Field Name,External Data Element,External Field Name,Field Description,Send Uncompressed Flag
SAP_TABLE_NAME,SAP_FIELD_NAME1,BIGQUERY_DATA_TYPE,BIGQUERY_FIELD_NAME1,BIGQUERY_FIELD_DESCRIPTION1,SEND_UNCOMPRESSED_FLAG1
SAP_TABLE_NAME,SAP_FIELD_NAME2,BIGQUERY_DATA_TYPE,BIGQUERY_FIELD_NAME2,BIGQUERY_FIELD_DESCRIPTION2,SEND_UNCOMPRESSED_FLAG2
SAP_TABLE_NAME,SAP_FIELD_NAME3,BIGQUERY_DATA_TYPE,BIGQUERY_FIELD_NAME3,BIGQUERY_FIELD_DESCRIPTION3,SEND_UNCOMPRESSED_FLAG3
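If you saved the export in the Text with tabs format, a small script can do the conversion. The following Python sketch is one way to do it; the file names are placeholders.

```python
# Minimal sketch: convert a tab-separated export to the CSV format expected by
# the /GOOG/SLT_SETTINGS upload. The file names are placeholders.
import csv

with open("field_map.txt", newline="", encoding="utf-8") as tab_file, \
     open("field_map.csv", "w", newline="", encoding="utf-8") as csv_file:
    reader = csv.reader(tab_file, delimiter="\t")
    writer = csv.writer(csv_file)
    for row in reader:
        writer.writerow(cell.strip() for cell in row)  # trim stray whitespace in each cell
```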
Upload the CSV file
To upload an edited CSV file:
Run the /GOOG/SLT_SETTINGS transaction.
In the SLT Settings Maintenance screen, specify the following values:
- In the Settings Table field, specify Fields.
- In the Mass Transfer Key field, specify the ID of the mass transfer that you are updating.
- Select the Upload from file checkbox.
Click the Execute icon. The Select File to Upload dialog opens.
In the Select File to Upload dialog, select the CSV file that contains the edited field values.
Click Open.
If you receive a security warning, click Allow. The file loads and the modified values in the file appear on the applicable rows in the BigQuery Settings Maintenance - Fields screen.
Click the Save icon.
To confirm the values are applied, compare the values in the CSV file with the values that are displayed by SAP LT Replication Server.
Handling errors in the source data
Upon receiving a chunk of records from BigQuery Connector for SAP, the BigQuery streaming API checks for data errors before inserting any records into the BigQuery table.
You can control how the BigQuery API and the BigQuery Connector for SAP respond when data errors are found by specifying the following flags in mass transfer settings:
- The Skip Invalid Records (SKIP) flag
- The Break at First Error (BREAK) flag
The SKIP flag
If you specify the SKIP flag, when the BigQuery API receives a chunk of records and finds a record with a data error, then the BigQuery API discards, or skips, the record with the error and continues inserting all other records from the chunk into the BigQuery table.
If you do not specify the SKIP flag, when BigQuery finds a record with a data error, BigQuery discards the entire chunk without inserting any records from it into the BigQuery table. This is the default behavior.
Specifying the SKIP flag is best for development and QA environments, and is not recommended for production environments.
You can specify the SKIP flag in the /GOOG/SLT_SETTINGS transaction when you are configuring replication. The specification is stored in the /GOOG/BQ_MASTR configuration table.
To see how SKIP specifications interact with BREAK specifications, see Matrix table for SKIP and BREAK interactions.
The BREAK flag
If you specify the BREAK flag, when BigQuery Connector for SAP is notified by the BigQuery API that a data error was found in a record, BigQuery Connector for SAP stops sending records to BigQuery and terminates the replication job. This is the default behavior.
If you do not specify the BREAK flag, when BigQuery Connector for SAP is notified by BigQuery that a data error was found in a record, BigQuery Connector for SAP continues sending records to BigQuery by sending the next chunk, and the replication job continues.
Specifying the BREAK flag is recommended in production environments.
You can specify the BREAK flag in the /GOOG/SLT_SETTINGS transaction when you are configuring replication. When you create a new mass transfer key, the BREAK flag is enabled by default.
The specification is stored in the /GOOG/BQ_MASTR configuration table.
To see how BREAK specifications interact with SKIP specifications, see Matrix table for SKIP and BREAK interactions.
Matrix table for SKIP and BREAK interactions
You can configure BigQuery Connector for SAP to handle data errors in the following ways:
SKIP flag | BREAK flag | Behavior |
---|---|---|
FALSE | TRUE | BigQuery discards the current chunk of records without inserting any records from the current chunk into the BigQuery table. BigQuery Connector for SAP sends no more chunks of records from the current portion and tells SAP LT Replication Server to terminate the replication job. This is the default and recommended setting. |
FALSE | FALSE | BigQuery discards the current chunk of records without inserting any records from the current chunk into the BigQuery table. BigQuery Connector for SAP sends any remaining chunks of records from the current portion and retrieves the next portion. BigQuery Connector for SAP does not tell SAP LT Replication Server to terminate the replication job. |
TRUE | TRUE | BigQuery discards only the record that contains the error and inserts the rest of the records in the current chunk into the BigQuery table. BigQuery Connector for SAP sends no more chunks of records from the current portion and tells SAP LT Replication Server to terminate the replication job. |
TRUE | FALSE | BigQuery discards only the record that contains the error and inserts the rest of the records in the current chunk into the BigQuery table. BigQuery Connector for SAP sends any remaining chunks of records from the current portion and retrieves the next portion. BigQuery Connector for SAP does not tell SAP LT Replication Server to terminate the replication job. |
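The following Python sketch restates the matrix as decision logic. The flag names and the helper function are illustrative only; the connector itself implements this behavior in ABAP.

```python
# Illustrative summary of the SKIP and BREAK interactions shown in the matrix above.
def behavior_on_data_error(skip: bool, break_at_first_error: bool) -> str:
    """Describe how a chunk with a data error is handled for a given flag combination."""
    insert = ("only the bad record is skipped; the rest of the chunk is inserted"
              if skip
              else "the whole chunk is discarded; nothing from it is inserted")
    job = ("no more chunks are sent and the replication job is terminated"
           if break_at_first_error
           else "the remaining chunks and portions continue to be sent")
    return f"{insert}; {job}"

print(behavior_on_data_error(skip=False, break_at_first_error=True))  # default behavior
```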
Table structure changes
This section explains how to modify the structure of an SAP source table for which an LTRC replication is already in progress.
Add a column to a source table
To add a new column to a source table, follow these steps:
Add a new column to the source table. As a result of this step, the replication status changes to Load/Replication blocked.
In your SLT system, reset the replication status using transaction LTRC. For more information from SAP about how to reset the replication status, see SAP Note 2204955 - SLT tables are in status 'Load /Replication blocked'.
Add, update, or delete an entry in the source table.
Validate the replication result in BigQuery.
Delete a column from a source table
To delete an existing column from a source table, follow these steps:
In your SLT system, suspend the replication using transaction LTRC.
Delete a column from the source table. As a result of this step, the existing SLT triggers are either deleted or changed to an inconsistent state.
In BigQuery, delete the column from the target BigQuery table, for example by using a DDL statement as shown in the sketch after this procedure. For more information about the steps to delete a column from an existing table, see the BigQuery documentation.
In your SLT system, resume the replication using transaction LTRC.
In your SLT system, recreate the SLT triggers. For more information from SAP about how to recreate SLT triggers, see SAP Note 2254376 - SLT trigger(s) in an inconsistent state.
If the replication status is Load /Replication blocked, then reset the replication status using transaction LTRC. For more information from SAP about how to reset the replication status, see SAP Note 2204955 - SLT tables are in status 'Load /Replication blocked'.
Clear the logs, if any.
Validate the replication result in BigQuery.
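For the BigQuery step in the preceding procedure, you can delete the column in the Google Cloud console or programmatically. The following minimal sketch assumes the google-cloud-bigquery client library and a column that BigQuery allows you to drop with DDL; the project, dataset, table, and column names are placeholders.

```python
# Minimal sketch, assuming the google-cloud-bigquery client library: drop the deleted
# source column from the target table with a DDL statement. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
ddl = "ALTER TABLE `PROJECT.DATASET.TABLE` DROP COLUMN COLUMN_NAME"
client.query(ddl).result()  # wait for the DDL statement to finish
```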
Change the data type of an existing column
When you change the data type of an existing column in an SAP source table, you need to follow specific steps depending on whether the new data type is compatible or non-compatible with the data type in the target BigQuery table.
A data type is compatible with the data type in the target BigQuery table when the existing data type and the new data type of an existing column map to the same data type in the target BigQuery table. For example, if the data type of a column is changed from INT1 to INT2 in a source table, then both data types are compatible with the data type INTEGER in the target BigQuery table.
For more information about data type mapping in BigQuery Connector for SAP, see Data type mapping.
Change the data type to a compatible data type
To change the data type of an existing column to a compatible data type, follow these steps:
Change the data type to a compatible data type in the source system. As a result of this step, the existing SLT triggers are either deleted or changed to an inconsistent state.
In your SLT system, recreate the SLT triggers. For more information from SAP about how to recreate SLT triggers, see SAP Note 2254376 - SLT trigger(s) in an inconsistent state.
If the replication status is Load /Replication blocked, then reset the replication status using transaction LTRC. For more information from SAP about how to reset the replication status, see SAP Note 2204955 - SLT tables are in status 'Load /Replication blocked'.
Clear the logs, if any.
Add, update, or delete an entry in the source table.
Validate the replication result in BigQuery.
Change the data type to a non-compatible data type
To change the data type of an existing column to a non-compatible data type, follow these steps:
- In your SLT system, stop the replication using transaction LTRC.
- In BigQuery, delete the target table.
- Change the data type in the source system.
- In your SLT system, start the replication using transaction LTRC.
For more information about table structure changes, see BigQuery Connector for SAP: Handle table structure changes like a pro.
Enhancement exits
BigQuery Connector for SAP provides several enhancement points in its code where an ABAP developer can insert code to add custom features.
The following table lists the functions that the enhancement points support, the methods, and the class that contains the enhancement point.
Function | Class | Method | Spot | Option |
---|---|---|---|---|
Update the mapping for a field, such as the external field name, the data type, and so on. | /GOOG/CL_IUUC_REPL_RUNTIME | CREATE_FLD_MAPPINGS | /GOOG/ES_IUUC_REPL_RUNTIME | /GOOG/UPDATE_FIELD_MAPPING |
Update the mapping for the field table by adding or removing fields. | /GOOG/CL_IUUC_REPL_RUNTIME | CREATE_FLD_MAPPINGS | /GOOG/ES_IUUC_REPL_RUNTIME | /GOOG/UPDATE_FIELD_MAPPINGS |
Change the value of a source field before the field is converted to a target field. | /GOOG/CL_IUUC_REPL_RUNTIME_BQ | FILL_TARGET_RECORDS | /GOOG/ES_IUUC_REPL_RUNTIME_BQ | /GOOG/CHANGE_SOURCE_FIELD |
After a source field is converted to a target field in the target table, change the value of the target field. | /GOOG/CL_IUUC_REPL_RUNTIME_BQ | FILL_TARGET_RECORDS | /GOOG/ES_IUUC_REPL_RUNTIME_BQ | /GOOG/FILL_TARGET_FIELD |
Add a field to the target table that does not exist in the source table during the source-to-target table conversion. | /GOOG/CL_IUUC_REPL_RUNTIME_BQ | FILL_TARGET_RECORDS | /GOOG/ES_IUUC_REPL_RUNTIME_BQ | /GOOG/FILL_EXTRA_FIELD |
Prepare a BigQuery schema field before the BigQuery table is created. | /GOOG/CL_GCP_CLIENT_BQ | PREP_BQ_TABLE_SCHEMA | /GOOG/ES_GCP_CLIENT_BQ | /GOOG/PREPARE_SCHEMA_FIELD |
To troubleshoot HTTP errors originating on the BigQuery server side, you can collect logging data after the HTTP calls to the BigQuery API. | /GOOG/CL_GCP_CLIENT_BQ_SLT | INSERT_TABLEDATA | /GOOG/ES_GCP_CLIENT_BQ_SLT | /GOOG/LOG_INSERT_ERROR |
To troubleshoot HTTP errors originating on the SAP client side, you can collect logging data. | /GOOG/CL_GCP_HTTP_CLIENT | RECEIVE_HTTP_RESPONSE | /GOOG/ES_GCP_HTTP_CLIENT | /GOOG/RECEIVE_HTTP_RESPONSE |
Advanced settings
Optionally, you can modify the advanced settings for BigQuery Connector for SAP. Google Cloud recommends that you modify the advanced settings parameters only after a comprehensive analysis of the impact of the new values on performance. You are responsible for ensuring that the new advanced settings for BigQuery Connector for SAP do not cause failures or performance issues.
Advanced settings for BigQuery Connector for SAP are applied at the system level and are common for all mass transfer keys. If the advanced settings parameters are not modified, then BigQuery Connector for SAP works with the default settings.
To modify advanced settings parameters, complete the following steps:
In the SAP GUI, enter the /GOOG/SLT_SETTINGS transaction preceded by /n: /n/GOOG/SLT_SETTINGS
From the Settings Table drop-down menu in the launch screen for the /GOOG/SLT_SETTINGS transaction, select Parameters.
Click the Execute icon. The BigQuery Settings Maintenance - Parameters screen displays.
Click the Insert Row icon.
In the displayed row, specify the following settings:
- In the Parameter Name field, enter the name of the parameter. The parameter description is automatically populated.
- In the Parameter Value field, enter a value.
For information about the advanced settings parameters, see Advanced settings parameters.
Click Save.
Your advanced settings are stored as a record in the /GOOG/BQ_PARAM configuration table, and the Changed By, Changed On, and Changed At fields are automatically populated.
Advanced settings parameters
The following table shows the advanced settings parameters for BigQuery Connector for SAP.
Parameter name | Description | Default value | Valid value |
---|---|---|---|
CHUNK_SIZE_DEF | This is the default chunk size that BigQuery Connector for SAP supports. If a chunk size is not maintained in the settings, the default chunk size is used. | 10,000 | The value must be within the BigQuery quota limits. |
PERC_REDUC_DEF | The chunk size percentage reduction. If dynamic chunk size is enabled, the chunk size is reduced by this percentage until an ideal chunk size is reached and the data in the chunk is transferred to BigQuery successfully. | 50 | The value must be from 1 to 99. |
CMD_EXEC_TRIES | For SAP systems that are not running on Google Cloud, if the operating system command that you created in the transaction SM69 fails to retrieve an access token from Google Cloud, then this is the number of times that BigQuery Connector for SAP retries the token retrieval. | 5 | The minimum value that you can assign to this parameter is 1. To facilitate at least one retry, set the value 2. The maximum value for this parameter must be set after analyzing the impact that token retrieval retries can have on replication performance. |
CMD_SECS_DEFLT | If you have enabled token caching, then this is the duration in seconds after which the cached token expires. | 3500 | The value must be from 1 to 3599. |