# Plugins overview

| **Tip:** Explore the [Cloud Data Fusion plugins](/data-fusion/plugins).

Last updated: 2025-09-04 (UTC).

When you create a data pipeline in Cloud Data Fusion, you use a series of
stages, known as nodes, to move and manage data as it flows from source to sink.
Each node consists of a plugin, a customizable module that extends the
capabilities of Cloud Data Fusion.

To find plugins in the Cloud Data Fusion web interface, go to the
**Studio** page. For more plugins, click **Hub**.

Plugin types
------------

Plugins fall into the following categories:

- Sources
- Transformations
- Analytics
- Sinks
- Conditions and actions
- Error handlers and alerts

### Sources

Source plugins connect to the databases, files, or real-time streams from which
your pipeline reads data. You set up sources for your data pipeline through the
web interface, so you don't have to write low-level connection code.

### Transformations

Transformation plugins change data after it's ingested from a source. For
example, you can clone a record, convert the file format to JSON, or use the
JavaScript plugin to write a custom transformation.

### Analytics

Analytics plugins perform aggregations, such as joining data from different
sources, and run analytics and machine learning operations.

### Sinks

Sink plugins write data to destinations such as Cloud Storage,
BigQuery, Spanner, relational databases, file systems,
and mainframes. You can query the data written to a sink using the
Cloud Data Fusion web interface or the REST API.

### Conditions and actions

Use condition and action plugins to schedule actions during a workflow that
don't directly manipulate the workflow's data. For example:

- Use the Database plugin to schedule a database command to run at the end of your pipeline.
- Use the File Move plugin to trigger an action that moves files within Cloud Storage.

### Error handlers and alerts

When stages encounter null values, logical errors, or other error conditions,
use an error handler plugin to catch the errors. Use these plugins to find
errors in the output of a transformation or analytics plugin, and write the
errors to a database for later analysis.

What's next
-----------

- [Explore the plugins](/data-fusion/plugins).
- [Create a data pipeline](/data-fusion/docs/create-data-pipeline) with the plugins.
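The Transformations section mentions writing a custom transformation with the JavaScript plugin. As a rough illustration, a JavaScript transform is a user-supplied function that receives each input record and emits zero or more output records. The sketch below is a minimal, self-contained example of that pattern; the field names (`email`, `domain`) and the tiny emitter harness are hypothetical stand-ins for the pipeline runtime, not part of the product's API surface shown in this document.

```javascript
// Sketch of a per-record custom transform: derive a "domain" field
// from a hypothetical "email" field, then emit the enriched record.
function transform(input, emitter) {
  var record = Object.assign({}, input);           // don't mutate the input
  var at = record.email ? record.email.indexOf('@') : -1;
  record.domain = at >= 0 ? record.email.slice(at + 1) : null;
  emitter.emit(record);                            // pass the record downstream
}

// Tiny harness standing in for the pipeline runtime (illustration only).
var out = [];
var emitter = { emit: function (r) { out.push(r); } };
transform({ email: 'ana@example.com' }, emitter);
console.log(out[0].domain); // prints "example.com"
```

In the actual plugin, the runtime supplies the emitter and calls the function for each record; only the transform logic itself is yours to write.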