You can use the following techniques to capture snapshots of your Cloud Dataprep application work in progress.
Make a copy
You can make a copy of individual recipes and flows.
NOTE: Copied recipes and datasets are independent objects and do not continue to inherit any changes in the original.
To copy a flow, open it in Flow View, click the flow's context menu, and select Make a copy.
To copy a recipe, select it in Flow View. In the right panel, select Make a copy from the context menu. You can link the copy to the same inputs as the original or to no inputs.
NOTE: The copied recipe remains available to all users who have access to the flow. If needed, select Move to relocate the copied recipe to another flow to which those users do not have access.
Select the copied recipe and click Edit Recipe to begin working with the recipe in the Transformer page.
See Flow View Page.
Download Work in Progress
From the Recipe panel in the context panel, you can download your work in progress, including the recipe and the dataset sample as reflected at the currently selected recipe step.
Download Sample Data
From the Transformer page, you can download the dataset sample as it is currently reflected in the Transformer page.
NOTE: A sample downloaded from the Transformer page reflects all recipe steps up to the step that is currently selected. Steps that occur after the current one are not applied to the dataset sample.
Tip: If you select the final step of the recipe and the dataset sample spans the entire dataset, this download can serve as a work-in-progress backup.
From the Recipe panel, click the context menu and select Download Sample data as CSV.
The CSV file is written to your desktop.
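After downloading, you may want to confirm that the sample captured the state you expected. The sketch below, which assumes only that the download is a standard header-plus-rows CSV file (the path shown is hypothetical), reports the column and row counts of a downloaded sample:

```python
import csv

def summarize_sample(path):
    """Return (column_count, row_count) for a downloaded CSV sample.

    Assumes the first line of the file is a header row, which is the
    typical layout of a sample exported as CSV.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader, [])       # header row defines the columns
        rows = sum(1 for _ in reader)   # count the remaining data rows
    return len(header), rows

# Hypothetical path to a sample downloaded to your desktop:
# cols, rows = summarize_sample("my-dataset-sample.csv")
```

A quick check like this helps verify that the sample reflects the recipe step you had selected before relying on it as a backup.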
In the Recipe panel, click the context menu and select Download recipe as Wrangle.
The entire recipe is downloaded to your desktop as a text file.
Tip: If you want the downloaded recipe to match the work-in-progress state of the dataset sample, delete from the downloaded file any steps that occur after the currently selected step.
See Recipe Panel.
Backups of the Cloud Dataprep databases (flows, recipes, and other metadata) and of the source datastores (imported datasets) should be performed according to your enterprise requirements.