January 10, 2024
DATA TESTING, DATA QUALITY BEST PRACTICES, DBT

3 different ways to diff data

Data diffing is the process of comparing two datasets. See various ways to compare data at different levels of complexity.

Kenny Ning

[Guest blog by Kenny Ning, Freelance Analytics Engineer, and Kira Furuichi, Product Marketing at Datafold]

Data diffing involves comparing two datasets to identify the impact of code changes on the datasets' shape and content. It's crucial for any data transformation process, whether the datasets are CSV files on a local computer, database tables, or tables across multiple databases.

In this post, we’ll walk through different approaches to diff your datasets depending on what format they are in:

  • Local files: use `git diff`
  • Tables in a database: `dbt test`, Datafold Cloud's in-database data diffing
  • Tables across databases: Datafold Cloud's cross-database data diffing

Yillow: a fictional real estate listings company

Imagine you’re a data engineer at a real estate listings company called Yillow. The company ingests raw listings data from various Multiple Listing Services (MLSs) and unifies it into a standardized schema so that potential homebuyers can easily browse listings in a slick web UI.

It’s your first day and you’re pointed to an internal Python library that normalizes addresses. Different MLSs represent addresses in different ways, and this function takes an MLS source and a raw address and transforms the address into a standard format:


def normalize_address(mls_source, address):
    """
    Example usage: normalize_address("ca_mls", "947 HIGH ST, BURLINGTON, NJ, 08016")
    Example output: "947 High St | Burlington | NJ | 08016"
    """
    # lots of crazy if statements
    return formatted_address

You get word that the California MLS source is going to change its address delimiter from commas to semicolons. You adjust the `ca_mls` clause in the function to the following:


def normalize_address(mls_source, address):
    ...
    # handle ca_mls format
    if mls_source == 'ca_mls':
        return ' | '.join(part.strip() for part in address.split(';'))

After you merge your changes, someone notices that some California listings are missing from the app. You notice that `ca_addresses.csv` is piped to `normalize_address`, which in turn writes to an intermediate dataset called `cleaned_listings.csv`.

Local files: git diff

You take the following steps to debug:

  1. Check out the main branch
  2. Run the full pipeline
  3. Compare `ca_addresses.csv` and `cleaned_listings.csv`, which should contain the same number of rows

> git diff ca_addresses.csv ~/cleaned_listings.csv
diff --git a/ca_addresses.csv b/cleaned_listings.csv
index 648ab5d..7b5653e 100644
--- a/ca_addresses.csv
+++ b/cleaned_listings.csv
@@ -7,23 +7,18 @@ addr
-993 POST ST, IRVINE, CALIFORNIA, 92612
-989 POST ST, IRVINE, CALIFORNIA, 92612
-985 POST ST, IRVINE, CALIFORNIA, 92612
-975 POST ST; IRVINE; CALIFORNIA; 92612
+975 POST ST | IRVINE | CALIFORNIA | 92612
...

It looks like not all of the addresses in `ca_addresses` were migrated to semicolon delimiters after all. In the example above, the first three addresses still have comma delimiters and were subsequently dropped in the pipeline. You submit a small patch and shoot the California MLS an email. In this example, a local data diffing solution like `git diff` helped us identify a regression introduced by the code refactor.
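To make the failure mode concrete, here is a minimal runnable sketch; `looks_normalized` is a hypothetical stand-in for whatever validation step drops malformed rows in Yillow's pipeline:

def normalize_address(mls_source, address):
    if mls_source == 'ca_mls':
        # new logic: split on semicolons, rejoin with pipes
        return ' | '.join(part.strip() for part in address.split(';'))
    return address  # other sources elided

def looks_normalized(address):
    # hypothetical check: a normalized address has exactly three pipe delimiters
    return address.count('|') == 3

rows = [
    "993 POST ST, IRVINE, CALIFORNIA, 92612",  # still comma-delimited
    "975 POST ST; IRVINE; CALIFORNIA; 92612",  # migrated to semicolons
]
cleaned = [normalize_address('ca_mls', r) for r in rows]
print([a for a in cleaned if looks_normalized(a)])
# only the semicolon row survives; the comma-delimited row silently drops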

You can specify the `--stat` option to get a summary instead:


> git diff ca_addresses.csv cleaned_listings.csv --stat

ca_addresses.csv => cleaned_listings.csv | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

If you can’t assume your data is sorted, then you’ll need to first sort your datasets, e.g. `cat cleaned_listings.csv | sort -s`, and pass the output as arguments using process substitution `<(command)` syntax. Unfortunately `git diff` doesn’t work with process substitution, so you’ll have to use the more minimal `diff` command like so:


> diff <(cat ca_addresses.csv | sort -s) <(cat cleaned_listings.csv | sort -s)
98681d98680
< 985 POST ST, IRVINE, CALIFORNIA, 92612
99002d99000
< 989 POST ST, IRVINE, CALIFORNIA, 92612
99439c99437
< 975 POST ST; IRVINE; CALIFORNIA; 92612
---
> 975 POST ST | IRVINE | CALIFORNIA | 92612

The output is not as readable as `git diff`, but you can see how deletions are marked with `XXXdXXX` and changes with `XXXcXXX`. Experiment with the myriad options for these tools directly from the command line by typing `man git-diff` or `man diff`. For example, try pairing vanilla `diff` with the `--suppress-common-lines` and `--side-by-side` options.

Me whenever I read the man pages for Unix commands and see a million different options I never knew existed
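If you'd rather stay in Python than memorize `diff` flags, the standard library's `difflib` can produce the same kind of sorted comparison. A minimal sketch, reusing the file names from the example above:

import difflib

# Read and sort both files in memory (same caveat as before: data must fit in RAM)
with open("ca_addresses.csv") as f:
    before = sorted(f.readlines())
with open("cleaned_listings.csv") as f:
    after = sorted(f.readlines())

# Print unified-diff hunks, mirroring git diff's output style
for line in difflib.unified_diff(before, after,
                                 fromfile="ca_addresses.csv",
                                 tofile="cleaned_listings.csv"):
    print(line, end="")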

Right for you if:

  • Your data is in a flat file format
  • Your data fits in memory

You’ve outgrown it if:

  • Your data is stored in a database
  • Your pipeline takes a long time to run, so the development workflow described above takes too long to iterate

Tables in a database: dbt test

Now let’s assume Yillow has made the migration from flat files to the ⭐ modern data stack ⭐. You’ve traded in your Python scripts, which previously downloaded data from S3, for dbt jobs that run on BigQuery.

The California MLS source has decided that they don’t like semicolons after all and want to go back to using commas as a delimiter. You adjust the dbt job to the following:


-- cleaned_listings.sql

with clean as (
    select
        id,
        upper(replace(address, ',', ' |')) as formatted_address
    from ca_addresses
)

select *
from clean
where formatted_address like '% | % | % | %'

You make a pull request, but an error is thrown in your dbt Cloud CI run. It looks like a test called `test_normalize_address.sql` failed, which has the following code:


-- test_normalize_address.sql

select id from ca_addresses
except distinct
select id from cleaned_listings

You can then run this test using the `dbt test` command:


$ dbt test

20:31:20 Running with dbt=1.1.0
20:31:20 Found 3 models, 1 test, 0 snapshots, 0 analyses, 191 macros, 0 operations, 0 seed files, 0 sources, 0 exposures, 0 metrics
20:31:20
20:31:20 Concurrency: 1 threads (target='dev')
20:31:20
20:31:20 1 of 1 START test test_normalize_address .................................................... [RUN]
20:31:22 1 of 1 FAIL 1 test_normalize_address ........................................................ [FAIL 1 in 1.98s]
20:31:22
20:31:22 Finished running 1 test in 2.35s.
20:31:22
20:31:22 Completed with 1 error and 0 warnings:
20:31:22
20:31:22 Failure in test test_normalize_address (tests/test_normalize_address.sql)
20:31:22 Got 1 result, configured to fail if != 0
20:31:22
20:31:22 compiled SQL at target/compiled/yillow/tests/test_normalize_address.sql
20:31:22
20:31:22 Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1

As the dbt test output shows, a test fails if its query returns any rows ("configured to fail if != 0"), so the fact that this test failed means some addresses got lost in the `normalize_address` step.
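The mechanics are easy to emulate outside dbt: run the test query and fail if it returns anything. A toy Python sketch of the same semantics, with made-up ids standing in for the two tables:

ca_ids = {101, 102, 103, 104}    # ids in ca_addresses (made-up)
cleaned_ids = {101, 102, 104}    # ids in cleaned_listings (made-up)

# set difference mirrors the test's `except distinct`
missing = ca_ids - cleaned_ids

# dbt's rule from the log above: fail if the query returns any rows (!= 0)
if missing:
    raise AssertionError(f"test_normalize_address: lost ids {sorted(missing)}")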

The great thing about these kinds of SQL tests is that they are run directly on your data warehouse, so they scale well with data sizes. They can be run during development to test your local changes, but can also be run in a CI environment to catch errors before they reach production.

Right for you if:

  • Your data is stored in a relational database
  • You want an easy, fast way to test whether two datasets are the same

You’ve outgrown it if:

  • You want to compare differences across multiple databases
  • You’re looking for richer data diff reports than just a binary "yes/no" result
  • You want to compare data differences between two versions of a dataset (e.g., staging vs production)

Tables across databases: Datafold's cross-database diffing

A few months pass, and you find out that `normalize_address` isn’t the problem anymore. The scope of Yillow’s data team has expanded to much more than just managing the data warehouse. Now you’re responsible for the ingestion process as well; you have access to the raw California MLS data that sits in a Postgres instance and have written a pipeline to copy that data over into your BigQuery data warehouse.

You want to make sure that all the data from Postgres is making its way to BigQuery. Datafold Cloud has a slick UI for doing data diffs: value-level comparisons of tables and views in a database. Notably, Datafold allows you to do data comparisons on database objects across databases, something that none of the previous solutions can handle.

Let's take a look. Assuming you have a BigQuery project already set up and authenticated, first jump into the Datafold Cloud UI (or use the Datafold API) and set up a cross-database data diff:

Setting up a cross-database diff in Datafold Cloud

Here, you can see I’m running a comparison between an addresses table that exists in both Postgres and BigQuery. You can additionally set column mappings for columns that have been renamed, as well as egress limits and diff sampling for extremely large tables.

When the data diff is run, you can see summary and value-level differences that may exist between your Postgres and BigQuery tables, like in the image below.

Value-level differences of an addresses table across Postgres and BigQuery

Before Datafold, I’d jump between the SQL consoles of the source and target databases, running a lot of `select count(*) from table` queries and praying they lined up. If they didn’t, I’d try ordering both tables by timestamp and checking row by row. Or I’d pull down the latest events from each that could fit into memory on my laptop and run `git diff`. All this to say: having a diff tool that can compare datasets across databases is a huge time saver and a welcome addition to any data engineer’s development workflow.
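For reference, that manual check looked something like the sketch below, assuming `psycopg2` and `google-cloud-bigquery` are installed and credentials are configured; the connection string and table names are illustrative:

import psycopg2
from google.cloud import bigquery

# Count rows on the Postgres side
pg = psycopg2.connect("dbname=mls user=etl host=localhost")
with pg.cursor() as cur:
    cur.execute("select count(*) from addresses")
    (pg_count,) = cur.fetchone()

# Count rows on the BigQuery side
bq_client = bigquery.Client()
rows = bq_client.query("select count(*) as n from yillow.addresses").result()
bq_count = next(iter(rows)).n

print(f"postgres={pg_count}  bigquery={bq_count}  match={pg_count == bq_count}")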

Right for you if:

  • You need to compare data across databases, especially during ongoing data replication
  • You're seeking a comprehensive data-diffing solution with advanced features such as detailed visualizations for reporting

The bottom line

Hopefully you’re inspired by now to incorporate data diffing into your workflow. Start with the basics at your disposal like `git diff` and `dbt test`, but once you’re ready to take data quality seriously and are looking for a high-quality platform, look no further than Datafold.
