
How to use gen AI for better data schema handling, data quality, and data generation

February 18, 2025
Deb Lee

AI/ML Customer Engineer

Jaideep Sethi

Customer Engineer


In the realm of data engineering, generative AI models are quietly revolutionizing how we handle, process, and ultimately utilize data. For example, large language models (LLMs) can help with data schema handling, data quality, and even data generation. 

Building on the recently released Gemini in BigQuery data preparation capabilities, this blog showcases areas where gen AI models are making a significant impact in data engineering: automated schema management, data quality checks, and generation of synthetic and structured data from diverse sources, with practical examples and code snippets.

1. Data schema handling: Integrating new datasets

Data movement and maintenance are an ongoing challenge across all data engineering teams. Whether it's moving data between systems with different schemas or integrating new datasets into existing data products, the process can be complex and error-prone. This is often exacerbated when dealing with legacy systems; in fact, 32% of organizations cite migrating the data and the app as their biggest challenge, according to Flexera's 2024 State of the Cloud Report.

Gen AI models offer a powerful solution by assisting in automating schema mapping and transformation on an ongoing basis. Imagine migrating customer data from a legacy CRM system to a new platform, and combining it with additional external datasets in BigQuery. The schemas likely differ significantly, requiring intricate mapping of fields and data types. Gemini, our most capable AI model family to date, can analyze both schemas and generate the necessary transformation logic, significantly reducing manual effort and potential errors.

A common approach to data schema handling that we’ve seen from data engineering teams involves creating a lightweight application that receives messages from Pub/Sub, retrieves relevant dataset information from BigQuery and Cloud Storage, and uses the Vertex AI Gemini API to map source fields to target fields and assign a confidence score. Here is example code showing a FunctionDeclaration to perform the mapping-confidence task:

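A minimal sketch using the Vertex AI Python SDK follows; the project ID, model version, and schema fields are illustrative assumptions, not a full application:

```python
# A minimal sketch of the mapping-confidence task with Vertex AI function calling.
# The project ID, model version, and schema fields below are illustrative.
import vertexai
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Tool,
)

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

# Declare the structured result we want Gemini to return for each field mapping.
map_field_func = FunctionDeclaration(
    name="map_source_to_target",
    description=(
        "Map a source schema field to the best-matching target field "
        "and assign a confidence score."
    ),
    parameters={
        "type": "object",
        "properties": {
            "source_field": {"type": "string", "description": "Field in the legacy schema"},
            "target_field": {"type": "string", "description": "Best-matching target field"},
            "confidence_score": {
                "type": "number",
                "description": "Mapping confidence between 0 and 1",
            },
        },
        "required": ["source_field", "target_field", "confidence_score"],
    },
)

model = GenerativeModel(
    "gemini-1.5-pro",  # illustrative model version
    tools=[Tool.from_function_declarations([map_field_func])],
)

response = model.generate_content(
    "Source field: cust_tel_no (STRING). Target schema fields: customer_id, "
    "full_name, phone_number, email. Map the source field to the best target "
    "field and score your confidence."
)
# The function_call part carries the structured mapping, ready to store in BigQuery.
print(response.candidates[0].content.parts[0].function_call)
```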

As seen in the above prompt, Gemini assigns confidence levels to each mapping, which are then stored in BigQuery. Once these are in BigQuery, the data engineering team can validate high-confidence mappings (and eventually choose to fully automate these if they feel comfortable), and investigate the low-confidence mappings. This pipeline of gen AI tasks could be deployed in an event-driven architecture or could run on a batch basis. However, there’s usually a final step required, where a human approves the final output (this could eventually become fully automated over time, given the rapid release cadence of improvements in gen AI models). Here is an example architecture / workflow:

https://storage.googleapis.com/gweb-cloudblog-publish/images/image1_BlI8CAw.max-1900x1900.png

2. Data quality: Enhancing accuracy and consistency

In today's data-driven world, poor data quality can cost businesses millions. From inaccurate customer insights leading to misguided marketing campaigns, to flawed financial reporting that impacts investment decisions, the consequences of bad data are significant. Gen AI models offer a new approach to data quality, going beyond traditional rule-based systems to identify subtle inconsistencies that can wreak havoc on your data pipelines. For example, imagine a system that can automatically detect and correct errors that would typically require hours of manual review or a battery of complex regular expressions.

Gemini can augment your existing data quality checks in several ways:

  • Deduplication: Consider a scenario where you need to deduplicate customer profiles. Gemini can analyze various fields, such as names, addresses, and phone numbers, to identify potential duplicates, even when there are minor variations in spelling or formatting. For example, Gemini can recognize that "Robert Smith" and "Bob Smith" likely refer to the same individual, or that "123 Main St." and "123 Main Street" represent the same address. In contrast to traditional methods like fuzzy matching, which are cumbersome to code and don't always produce ideal results, using an LLM can provide a simpler and more effective solution; a minimal sketch of this approach follows this list.

  • Standardization: Gemini excels at standardizing data formats. Instead of relying on intricate regular expressions to validate data formats, Gemini can be used with prompt engineering, RAG, or fine-tuning to understand and enforce data quality rules in a more human-readable and maintainable way. This is particularly useful for fields like dates, times, and addresses, where variations in format can hinder analysis.

  • Subtle error detection: Gemini can identify subtle inconsistencies that might be missed by traditional methods. These include:

    • Variations in abbreviations (e.g., "St." vs "Street")

    • Different spellings of the same name (e.g., "Catherine" vs. "Katherine")

    • Use of nicknames (e.g., "Bob" vs. "Robert")

    • Incorrectly formatted phone numbers (e.g., missing area codes)

    • Inconsistent use of capitalization and punctuation
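As a sketch of the deduplication check mentioned above, here is how two records could be compared with the Vertex AI Python SDK; the project, model version, and sample records are illustrative:

```python
# A minimal dedup sketch: ask Gemini whether two customer records match.
# Project, model version, and sample records are illustrative.
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")  # illustrative model version

record_a = {"name": "Robert Smith", "address": "123 Main St.", "phone": "555-0100"}
record_b = {"name": "Bob Smith", "address": "123 Main Street", "phone": "(555) 0100"}

prompt = f"""Do these two customer records refer to the same person?
Record A: {record_a}
Record B: {record_b}
Respond as JSON: {{"is_duplicate": bool, "confidence": 0-1, "reason": string}}"""

response = model.generate_content(
    prompt,
    generation_config=GenerationConfig(response_mime_type="application/json"),
)
print(response.text)  # e.g. {"is_duplicate": true, "confidence": 0.95, ...}
```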

Let's illustrate this with a common example of address validation. We have a table named customer_addresses with the following format, and we want to check if the address_state column is a valid US state and convert it into the standard two-letter abbreviation:

https://storage.googleapis.com/gweb-cloudblog-publish/images/1a.max-1000x1000.jpg

Looking at the input data, you can easily identify some issues with the address_state column. For example, 'Pennsylvaniaa' is misspelled, and 'Texas' is written out instead of using the standard two-letter abbreviation. While these errors are obvious to a human, they can be challenging for traditional data quality tools to catch because they rely on exact matches or rigid rules, missing these subtle variations.

However, Gemini excels at understanding and interpreting human language, making it well suited for this task. With a simple prompt, Gemini can accurately identify these inconsistencies and standardize the state names into the correct format, going beyond rigid rules and adapting to nuances of the human language.

Here's how you can use Gemini in BigQuery to perform this task, using the BQML function ML.GENERATE_TEXT, which lets you perform gen AI tasks on data stored in BigQuery through a remote connection to Gemini hosted in Vertex AI:

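A minimal sketch of such a query, run from Python with the BigQuery client (the remote model and table names are illustrative assumptions):

```python
# A minimal sketch: validate and standardize address_state with ML.GENERATE_TEXT.
# The remote Gemini model and dataset/table names below are assumptions.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  ml_generate_text_llm_result AS validation_output,
  address_state
FROM
  ML.GENERATE_TEXT(
    MODEL `my_dataset.gemini_model`,  -- remote model pointing at Vertex AI Gemini
    (
      SELECT
        address_state,
        CONCAT(
          'Return a JSON object with fields input, standardized_output, ',
          'and is_valid. Is the following a valid US state? If so, give ',
          'its standard two-letter abbreviation: ', address_state
        ) AS prompt
      FROM `my_dataset.customer_addresses`
    ),
    STRUCT(0.0 AS temperature, TRUE AS flatten_json_output)
  )
"""

for row in client.query(query).result():
    print(row.address_state, "->", row.validation_output)
```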

This code sends each address_state value to Gemini with a prompt asking it to validate and standardize the input. Gemini then returns a JSON response with the original input, the standardized output, and a boolean indicating whether the state is valid:

https://storage.googleapis.com/gweb-cloudblog-publish/images/2a.max-1400x1400.jpg

In this instance, Gemini has automated and streamlined our data quality process and reduced the complexity of the code. The first column contains the validation output: with a simple prompt, we correctly identify the rows that have an invalid state value and convert the state column to a standard format. A more traditional approach would have required multiple SQL expressions, external API calls, or a join with a lookup table.

The above example is just a glimpse into how Gemini can improve data quality. But beyond basic validation and standardization, gen AI models also excel at more nuanced tasks. For instance, they can classify data errors by severity (low, medium, high) for prioritized action and effectively handle mixed-language text fields by detecting language discrepancies. For more detailed examples check out this code repo, which includes how to leverage gen AI models for semantic search in BigQuery that you could use to identify duplicate records.

Important considerations for large datasets:

When working with large datasets, sending individual requests to an LLM like Gemini can become inefficient and may exceed usage quotas. To optimize performance and manage costs, consider batching requests, and make sure your Google Cloud project has sufficient API quota.
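For example, here's a minimal batching sketch that validates many values per request instead of one call per row; the model version, batch size, and sample inputs are illustrative:

```python
# A minimal batching sketch: validate many states per request instead of
# one API call per row, to stay within quota. Names below are illustrative.
import json
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

states = ["Pennsylvaniaa", "Texas", "CA", "New Yrok", "FL"]  # sample inputs
BATCH_SIZE = 100  # tune to your prompt size and quota

for i in range(0, len(states), BATCH_SIZE):
    batch = states[i : i + BATCH_SIZE]
    prompt = (
        "For each value, return a JSON object with fields input, "
        "standardized_output (two-letter abbreviation), and is_valid:\n"
        + json.dumps(batch)
    )
    response = model.generate_content(
        prompt,
        generation_config=GenerationConfig(response_mime_type="application/json"),
    )
    print(response.text)
```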

3. Data generation: Unlocking insights from unstructured data

Unstructured data like images, videos, and PDFs holds valuable information that has historically been difficult to translate into structured data use cases. Gemini's multimodal capabilities and industry-leading context window of up to 2 million tokens allow us to extract structured data from these sources for downstream use.

However, some gen AI models can be unreliable and prone to hallucinations, posing challenges for consistent data processing. To address this in practice, you can use Gemini's system instructions, controlled generation, grounding, and Vertex AI evaluation services. System instructions guide the model's behavior, while controlled generation instructs the model to output in a specific format such as JSON, enforcing structured outputs that adhere to a predefined schema. Evaluation lets you automate the selection of the best response and provides associated quality metrics and explanations. Finally, grounding tethers the output to private or up-to-date public data, reducing the likelihood of the model inventing content. The model's structured output can then be integrated with BigQuery for downstream analysis and used in data pipelines and ML workflows, helping to ensure consistency and reliability in business applications.

Let's take a look at an example inspired by the YouTube ABCDs, where we use one of the latest Gemini models, Gemini 2.0 Flash, to analyze an ad video on YouTube and check whether it follows YouTube best practices, using the following prompt:

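A minimal sketch with the Vertex AI Python SDK; the system instruction, response schema, and placeholder video URL are illustrative, and the ABCD rubric is simplified:

```python
# A minimal sketch: analyze a YouTube ad against a simplified ABCD rubric.
# System instruction, schema, and the placeholder video URL are illustrative.
import vertexai
from vertexai.generative_models import (
    GenerationConfig,
    GenerativeModel,
    Part,
)

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel(
    "gemini-2.0-flash-001",
    system_instruction=(
        "You are a video ad analyst who evaluates ads against the "
        "YouTube ABCD best practices."
    ),
)

# Controlled generation: force JSON that matches a predefined schema.
response_schema = {
    "type": "object",
    "properties": {
        "attention": {"type": "boolean"},
        "branding": {"type": "boolean"},
        "connection": {"type": "boolean"},
        "direction": {"type": "boolean"},
        "notes": {"type": "string"},
    },
    "required": ["attention", "branding", "connection", "direction"],
}

# Public YouTube URLs can be passed as a file URI; this one is a placeholder.
video = Part.from_uri(
    "https://www.youtube.com/watch?v=YOUR_VIDEO_ID", mime_type="video/mp4"
)

response = model.generate_content(
    [video, "Evaluate this ad against the ABCD best practices."],
    generation_config=GenerationConfig(
        response_mime_type="application/json",
        response_schema=response_schema,
    ),
)
print(response.text)
```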

The resulting output can easily be ingested into BigQuery as structured data for further analytical and reporting uses:

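For example, a minimal sketch that streams the JSON response into a BigQuery table (the dataset and table names are illustrative, and the table must already exist):

```python
# A minimal sketch: stream the structured JSON output into BigQuery.
# The dataset and table names are illustrative assumptions.
import json
from google.cloud import bigquery

client = bigquery.Client()

row = json.loads(response.text)  # structured output from the previous step
errors = client.insert_rows_json("my_dataset.ad_abcd_results", [row])
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```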

There are also considerations for choosing the right model for the right task. For example, larger videos or unstructured content may require the 2M-token context window available with Gemini Pro, whereas other tasks may be fine with the 1M-token context window of Gemini Flash.

You can also use Gemini to generate synthetic data that mimics real-world scenarios, augmenting your datasets and improving model performance. Synthetic data is artificially generated data that statistically mirrors real-world data while preserving privacy by excluding personally identifiable information (PII). This approach enables organizations to develop robust machine learning models and data-driven insights without the limitations and risks associated with using real-world data. The growing interest in synthetic data stems from its ability to address privacy concerns, overcome data scarcity, and facilitate test data generation across various industries. To learn more about synthetic data generation using gen AI, check out our in-depth blog about Generating synthetic data in BigQuery with Gretel.

Going to production: DataOps and the LLM pipeline

Once you've successfully implemented LLM-powered data engineering solutions, you’re ready to integrate them into your production environment. Here are a few things you’ll need to address:

  • Scheduling and automation: Leverage tools like Composer or Vertex AI Pipelines to schedule and automate gen AI tasks, helping to ensure continuous data processing and analysis (see the example DAG after this list).

  • Model monitoring and evaluation: Implement an evaluation pipeline to monitor the performance of your gen AI models so you can track accuracy, identify potential biases, and trigger retraining when necessary.

  • Version control: Treat Gemini prompts and configurations as code, using version control systems to track changes and ensure reproducibility.
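As a sketch of the scheduling step, here's what a minimal Cloud Composer (Airflow) DAG could look like; the DAG ID, schedule, and stored procedure name are illustrative assumptions:

```python
# A hypothetical Composer (Airflow) DAG that re-runs a gen AI data quality
# query on a daily schedule. Names below are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="genai_data_quality_daily",  # illustrative DAG name
    schedule_interval="@daily",
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    # Re-run the ML.GENERATE_TEXT state-validation query from section 2.
    validate_states = BigQueryInsertJobOperator(
        task_id="validate_states",
        configuration={
            "query": {
                # Assumed stored procedure wrapping the validation SQL.
                "query": "CALL `my_dataset.validate_customer_addresses`()",
                "useLegacySql": False,
            }
        },
    )
```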

Addressing these areas will help you integrate gen AI models into your data engineering production pipelines and deliver robust, scalable, and reliable solutions.

Transform your data engineering processes with gen AI

Gen AI is transforming the data engineering landscape, offering powerful capabilities for schema handling, data quality improvement, synthetic data generation, and data generation from unstructured sources. By embracing these advancements and adopting DataOps principles, you can unlock new levels of efficiency, accuracy, and insight from your data. Start experimenting with Gemini in your own data pipelines to unlock greater consistency in data processing, insights from new data sources, and ultimately, better business outcomes.
