[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[[["\u003cp\u003eThis page details how to improve performance when using the Salesforce batch source in Cloud Data Fusion.\u003c/p\u003e\n"],["\u003cp\u003eEnabling PK chunking breaks down large datasets into smaller chunks, enhancing performance, reducing server load, and increasing scalability.\u003c/p\u003e\n"],["\u003cp\u003eUsers can configure PK chunking within the Salesforce source node's properties, setting the desired number of records per chunk with a default of 100,000, and a maximum of 250,000 records.\u003c/p\u003e\n"],["\u003cp\u003eUtilizing SObject query filters or SOQL queries can help reduce the number of API calls to Salesforce when retrieving records.\u003c/p\u003e\n"]]],[],null,["# Best practices for the Salesforce batch source\n\nThis page describes best practices for improving performance when you use a\n[Salesforce batch\nsource](/data-fusion/docs/how-to/configure-salesforce-batch-source) in\nCloud Data Fusion.\n\nImprove performance with PK chunking\n------------------------------------\n\nPK chunking breaks up large datasets into smaller datasets, or *chunks*.\n\nEnabling PK chunking in the Salesforce batch source plugin has the following\nbenefits:\n\n- It improves performance, especially for large datasets\n- It reduces the load on the server\n- It increases scalability\n\n| **Note:** Before enabling PK chunking, check that you're using an sObject that supports it. For more information about PK chunking and the supported sObjects, see the [Salesforce documentation](https://developer.salesforce.com/docs/atlas.en-us.234.0.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm).\n\nTo use PK chunking, follow these steps:\n\n1. [Go to the Cloud Data Fusion web interface](/data-fusion/docs/create-data-pipeline#navigate-web-interface) and open your pipeline on the **Studio** page.\n2. Optional: If you haven't added a Salesforce node in your pipeline, add one:\n 1. In the **Source** menu, click **Salesforce** . The Salesforce node appears in your pipeline. If you don't see the Salesforce source on the **Studio** page, [deploy the Salesforce plugins from the Cloud Data Fusion Hub](/data-fusion/docs/how-to/deploy-a-plugin).\n3. To configure the source, go to the Salesforce node and click **Properties**.\n4. Turn on **Enable PK chunking**.\n5. In the **Chunk size** field, enter the number of records per chunk. The default value is `100000` records. The maximum is `250000` records.\n6. Click **Validate**.\n\nUse SObject query filters or SOQL queries\n-----------------------------------------\n\nTo reduce the number of API calls in Salesforce, retrieve records with SObject\nquery filters or SOQL queries.\n\n- **SObject query filters** : configure the filter in the Salesforce plugin\n properties in the **SObject name** field. For more information, see\n [Configure the plugin](/data-fusion/docs/how-to/configure-salesforce-batch-source).\n\n- **SOQL queries** : configure the queries in the Salesforce plugin properties\n in the **SOQL query** field. 
What's next
-----------

- Learn about configuring the [Salesforce batch source](/data-fusion/docs/how-to/configure-salesforce-batch-source#properties) in Cloud Data Fusion.
- Work through a [Salesforce plugin tutorial](/data-fusion/docs/tutorials/connect-salesforce-to-bq).