Data observability tools vs data quality tools
Discover how data quality tools preempt production issues and why observability is key to reactive problem-solving. Equip your team with the right tools.

Nobody likes getting "stuff is breaking in production" alerts. First of all, you have to figure out what's broken and why. Second, you have to identify the potential fix and implement it. Third, you have to restrain yourself from rage-quitting when you find out the issue was completely preventable.
Data teams waste so much time putting out production fires when there were three mini-fires leading up to the blaze that they had no visibility into. They had no tests, no standardization for their data quality tests, and no data diffing. No wonder they didn't catch the issue before it hit prod.
Without continuous integration or a standard set of practices, there's no guarantee that two engineers will test data the same way. Did they check that primary keys are the same between dev and prod? Did they check for statistical outliers? Were they expected to do so in the first place? Probably not; at least not from one pull request to another.
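A standardized check doesn't have to be elaborate. Here's a minimal sketch of a pre-merge primary key comparison between dev and prod; the table name and the fetch_keys() helper are hypothetical stand-ins for however your team actually queries its warehouse.

```python
# Hedged sketch: compare the set of primary keys in dev vs. prod for one table.
# fetch_keys() is a placeholder for a real warehouse query, e.g.
# SELECT order_id FROM {environment}.{table}.

def fetch_keys(environment: str, table: str) -> set[str]:
    # Hard-coded sample data stands in for a warehouse query.
    sample = {
        "dev": {"ord-1", "ord-2", "ord-3"},
        "prod": {"ord-1", "ord-2", "ord-3", "ord-4"},
    }
    return sample[environment]

def check_primary_keys(table: str = "orders") -> None:
    dev_keys = fetch_keys("dev", table)
    prod_keys = fetch_keys("prod", table)

    missing_in_dev = prod_keys - dev_keys  # rows the change would drop
    new_in_dev = dev_keys - prod_keys      # rows the change would introduce

    if missing_in_dev or new_in_dev:
        print(f"{table}: {len(missing_in_dev)} keys missing in dev, "
              f"{len(new_in_dev)} new keys in dev")
    else:
        print(f"{table}: primary keys match between dev and prod")

if __name__ == "__main__":
    check_primary_keys()
```

Run the same check on every pull request and you've removed one "did anyone actually look?" question from code review.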
When data quality is low, more problems rear their heads in production. Yet it's not always a "garbage in, garbage out" problem, which is why it's critical to understand the difference between data quality and data observability. What are they, what's the difference, and is it possible to end the 2AM wake-up calls?

Understanding data quality vs. data observability at a high level
Without solid definitions of data quality and data observability, the line between them can look fuzzy. But that's not quite right. The two approaches may overlap in features and techniques, but the intent behind each one is quite different.
Data quality starts in pre-production and focuses on the data itself. Some may argue that quality starts at the data source, but you usually have no control over whether Facebook changes the name of a column or LinkedIn changes a calculation in their CPM column. With the right tools, you can maintain high-quality data even when your data providers don't.
Pre-production environments are where you manage the eight dimensions of data quality with techniques like replication testing, automated regression testing, data lineage visualizations, and data diffing. Unless you want to keep finding issues the hard way, you need to test your data before something bad or incorrect gets deployed to production.
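As a rough illustration of what that pre-production testing can look like, here's a hedged sketch of an automated regression check that compares a headline metric between the staging build and current production before anything ships. The compute_metric() helper and the 2% tolerance are assumptions, not a prescribed standard.

```python
# Hedged sketch of a pre-deploy regression check: fail the build if a
# headline metric drifts beyond a tolerance between staging and prod.

def compute_metric(environment: str) -> float:
    # Placeholder for e.g. SELECT SUM(arr) FROM {environment}.finance_arr
    return {"staging": 1_020_000.0, "prod": 1_000_000.0}[environment]

def regression_check(tolerance: float = 0.02) -> bool:
    staging = compute_metric("staging")
    prod = compute_metric("prod")
    drift = abs(staging - prod) / prod

    print(f"staging={staging:,.0f} prod={prod:,.0f} drift={drift:.1%}")
    return drift <= tolerance

if __name__ == "__main__":
    if not regression_check():
        raise SystemExit("Metric drifted beyond tolerance; review before deploying")
```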
Data quality is really about testing and validation of the eight dimensions. When you change your data, you need to assess whether those changes accurately reflect the requirements for the change and the expected outcomes. For example, are the results in pre-production what you're expecting in production? If so, you're good to deploy, right?
Well, not always, which is why we need to talk about data observability.
Data observability is focused on the overarching health and status of your data and pipelines, often in your production environments. Think: monitoring and alerting of anomalies. You wouldn't typically use observability methods to manage the eight dimensions of data quality. Rather, you're more concerned with knowing whether something is awry as soon as it can be detected.
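For contrast, here's a minimal sketch of what that monitoring looks like in practice: watch a production signal, such as daily row counts, and alert when today's value is a statistical outlier versus recent history. The sample counts and the 3-sigma threshold are illustrative assumptions.

```python
# Hedged sketch of an observability-style monitor: flag today's row count
# if it deviates from recent history by more than a z-score threshold.
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

daily_row_counts = [10_120, 9_980, 10_050, 10_210, 9_900, 10_070, 10_150]
if is_anomalous(daily_row_counts, today=4_800):
    print("ALERT: today's row count is a statistical outlier -- check the pipeline")
```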
Finding the "garbage out" with data observability tools
Data observability tools often alert you to problematic activity. They might notify you about something that needs an immediate fix, or prompt you to take a closer look at your production environment or pipelines.
Observability tools don't typically tell you the specifics of the problem unless you've specified what to look for. It's usually just a "Hey, your ARR has jumped 200% today." Out of the box, they can point out blank fields, duplicate data, and other more obvious issues. But they often won't tell you why your revenue suddenly increased by an order of magnitude overnight.
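Those out-of-the-box checks are essentially context-free scans. A simplified sketch of what they look for, using made-up rows:

```python
# Illustrative only: context-free scans for blank fields and duplicate keys,
# the kind of checks an observability tool can run without business knowledge.

rows = [
    {"order_id": "ord-1", "customer": "acme", "amount": 120.0},
    {"order_id": "ord-2", "customer": "", "amount": 80.0},        # blank field
    {"order_id": "ord-2", "customer": "globex", "amount": 80.0},  # duplicate id
]

blank_fields = [(r["order_id"], col) for r in rows for col, val in r.items() if val == ""]

seen, duplicates = set(), []
for r in rows:
    if r["order_id"] in seen:
        duplicates.append(r["order_id"])
    seen.add(r["order_id"])

print("blank fields:", blank_fields)
print("duplicate order_ids:", duplicates)
```

What these scans can't tell you is whether a surprising number is wrong; that still takes domain context.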
By nature, data observability tools require considerable domain context. If an annual recurring revenue (ARR) value spikes for some reason, there's probably only a handful of people who can look at that number and say whether it's correct, normal, or expected. Was there a really great sales day? Did a dbt code merge change the ARR calculation? Did someone input something wrong into Salesforce?
The people most familiar with the underlying data are also likely the ones who wrote the change. The rest of the team won't have the transparency or confidence to solve the issue. Analytics engineers try to understand the data as much as possible, but when it comes to domain-specific knowledge (e.g., how the finance team interprets detailed numbers), you really have to rely on a subject matter expert to get to the root cause of a data quality issue found by an observability tool.

Where there's smoke, there's (probably) a data fire
Data observability tools can look for smoke in production. Data quality tools can prevent the flammable stuff from getting to prod in the first place.
You don't want to run a data ecosystem without data quality and observability tools. You need both, but it usually comes down to two questions:
- How much tooling do we need?
- How much tooling can we afford?
The answers are different for every team and every organization. Depending on where you are with your data stack's technical debt and the level of staff expertise, you might need much more than you can afford. In those situations, you end up doing much more firefighting, burning out, and wishing you could "accidentally" double your salary in the HR database without anyone noticing.
Data observability finds the issues too late
You can't be proactive if prod's already on fire. Sure, you can mitigate issues with some proactive protections, but let's be honest: that's just damage control. By the time you get an alert, bad (or anomalous) data has already reared its head in production. Now you have to decide whether to roll back or write a patch. Meanwhile, your stakeholders are breathing down your neck.
Data observability is a type of reactive protection. It's great that it can discover when things go sideways, but it won't do much for your data quality.
Which leads to the question: when do you want to find your problems? Would you rather wait until after they're in prod, when a stakeholder can find them? Do you enjoy the "drop everything and fix this" style of working?

Or, do you want to fix your data quality issues before they reach production? (Hint: we think this is the way.)
Data observability tools can't catch all your data quality issues
Data quality issues can happen whether you've got the world's greatest observability tools or not. You really need to employ data quality tools (i.e., tests and context awareness) everywhere there could be a problem:
- At the data source
- Before and after transformations
- In pre-prod and prod
- In your codebase
Putting data observability tools in the same places won't guarantee you'll catch the issues. However, data quality tools will tell you:
- "This table has 48% fewer rows than normal"
- "These data tests didn't pass"
- "The primary and foreign keys are no longer correct"
- "You should notify the engineering team"
Data observability tools won't tell you any of that until the issues have already happened.
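To make the contrast concrete, here's a minimal sketch of the kind of pre-merge assertions that produce those specific messages; the thresholds and sample values are hypothetical.

```python
# Hedged sketch: each check returns a specific, actionable failure message
# (or None), rather than a generic "something changed" alert.

def volume_check(current_rows: int, typical_rows: int, max_drop: float = 0.25) -> str | None:
    drop = 1 - current_rows / typical_rows
    if drop > max_drop:
        return f"This table has {drop:.0%} fewer rows than normal"
    return None

def referential_check(fk_values: set[int], pk_values: set[int]) -> str | None:
    orphans = fk_values - pk_values
    if orphans:
        return f"The primary and foreign keys are no longer correct ({len(orphans)} orphaned foreign keys)"
    return None

failures = [msg for msg in (
    volume_check(current_rows=5_200, typical_rows=10_000),
    referential_check(fk_values={1, 2, 3, 99}, pk_values={1, 2, 3}),
) if msg]

for msg in failures:
    print("FAILED:", msg)
```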
Making matters more complicated, most observability tools have lost their focus by bolting less-than-useful features and products onto their platforms. The result is alert fatigue, with false positives and low signal-to-noise ratios.
Data observability tools cause alert fatigue
Too much observability and not enough quality management are the main ingredients for alert fatigue. One of the biggest issues we hear about with observability tools is that they send too many alerts, leading to a game of whack-a-mole as teams try to figure out whether there's really an issue in production.
Signal-to-noise ratio may be the most difficult part of observability to get right. This is why data quality management is so crucial. High-quality data solves the "garbage in" problem, and effective, well-tuned observability helps you manage (and ultimately fix) the "garbage out" problem.
Poorly configured observability suffers from its own data quality problem: no one trusts the data when they receive too many false alerts.
Data quality or observability: why not both?
Some data products out there have gotten distracted. Nowadays, your observability tool comes with a data catalog and lineage capabilities, but very few products help you prevent data quality issues from entering your production pipelines in the first place.
Datafold Cloud provides full transparency into how your data is changing and gives you more context before code is merged in. Whereas an observability tool will catch a value going from 0 to 1M, Datafold will give you visibility into the granular details of your data through data diffing. We'll tell you exactly which rows, columns, and tables are affected by a change, whether you're comparing pre-prod to prod or performing a migration to another data platform.
You don't need to write tests for Datafold, either. When you open up a dbt or transformation code change, Datafold will automatically run a data diff on the dev and production versions of your data. If there's a difference, Datafold will identify the exact rows that are different, and your team can decide if these changes are expected (and accepted) or unexpected (and likely problematic).
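To illustrate the idea (this is a simplified sketch, not Datafold's actual implementation), a row-level data diff boils down to joining dev and prod on the primary key and reporting exactly which rows and columns changed:

```python
# Illustrative row-level diff: compare dev and prod rows keyed by primary key.
dev_rows = {
    1: {"status": "paid", "amount": 120.0},
    2: {"status": "refunded", "amount": 80.0},   # value changed in dev
    4: {"status": "paid", "amount": 55.0},       # row only exists in dev
}
prod_rows = {
    1: {"status": "paid", "amount": 120.0},
    2: {"status": "paid", "amount": 80.0},
    3: {"status": "paid", "amount": 40.0},       # row only exists in prod
}

added = dev_rows.keys() - prod_rows.keys()
removed = prod_rows.keys() - dev_rows.keys()
changed = {
    pk: [col for col in dev_rows[pk] if dev_rows[pk][col] != prod_rows[pk][col]]
    for pk in dev_rows.keys() & prod_rows.keys()
    if dev_rows[pk] != prod_rows[pk]
}

print(f"rows added in dev: {sorted(added)}")
print(f"rows removed in dev: {sorted(removed)}")
for pk, cols in changed.items():
    print(f"row {pk} changed in columns: {cols}")
```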
Most data observability tools will find large data discrepancies once they've entered your CFO's dashboard. Datafold will catch that before you've merged any code into your master/main branch. There could be a million reasons why a large discrepancy shows up in your ARR, and Datafold will show you exactly how that data will change before it actually does.
This is not to say that data quality and data observability tools can't work together. General alerting and monitoring at the data source, for example, can be incredibly helpful for catching issues upstream, such as a backend engineering bug or bad source data.
Do you want to be reactive or proactive?
Somewhere in the world right now, some poor soul is getting alerted about a data problem that could have been avoided in pre-production. Maybe that person was you just recently. And maybe you also discovered that the problem was avoidable, or maybe it was yet another false alarm. Do you really want to live that out again, losing precious time chasing root causes after the fact?
Or would you rather spend that time on newer data work?
We think life would be so much easier if software developers and data engineers could see exactly where problems originate in their code and their data. A data quality tool like Datafold can help prevent lost time. Instead of being a firefighter, you can be an architect and a builder, just like our friends at Trainual.
Datafold is designed to help you proactively avoid problems before they pop up in production. We show you how each code change is going to affect your data in a purposefully unopinionated set of results. We give you row-level change awareness, not just summaries and trends. We enable your team to find data quality issues before they actually happen.