Problem statement

The ongoing management of data quality poses significant challenges within a big data environment, particularly when it relies on human intervention. The sheer volume, velocity, and variety of data generated in such environments make it increasingly difficult for human teams to maintain data accuracy, consistency, and reliability. Current issues in the human management of data quality within a big data environment include:

  1. Scale and Complexity: The exponential growth of data in terms of volume and variety overwhelms traditional manual data management approaches. Human resources struggle to keep up with the sheer scale and complexity of data, leading to potential errors, inconsistencies, and delays in data quality management processes.
  2. Inconsistent Data Standards: Big data environments often involve data from diverse sources, each following different data standards, naming conventions and formats. Human intervention alone may not be sufficient to ensure consistent data standards across the entire data estate, resulting in data quality issues and interoperability challenges.
  3. Manual Data Profiling and Cleansing: Data profiling and cleansing, essential steps in data quality management, can be time-consuming and error-prone when performed manually. Human resources face difficulties in efficiently identifying and resolving data anomalies, inaccuracies, and duplications, leading to compromised data quality.
  4. Limited Scalability and Efficiency: Traditional human-based data quality management approaches may lack scalability and efficiency when dealing with large volumes of data. The time and effort required to manually manage data quality rule bases increase exponentially as the data grows, hindering organisations from maintaining consistent and reliable data across their big data environment.
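The profiling and cleansing work described in point 3 is a natural first candidate for automation. The sketch below (function and field names are illustrative, not taken from any specific tool) computes two of the basic metrics a manual profiling pass would produce, null rates and duplicate counts, for a small tabular dataset:

```python
from collections import Counter

def profile(rows, columns):
    """Compute basic data quality metrics for a list-of-dicts dataset:
    per-column null rates and the number of duplicated rows."""
    total = len(rows)
    null_rates = {
        col: sum(1 for r in rows if r.get(col) in (None, "")) / total
        for col in columns
    }
    # Duplicate detection on full-row signatures
    signatures = Counter(tuple(r.get(c) for c in columns) for r in rows)
    duplicates = sum(n - 1 for n in signatures.values() if n > 1)
    return {"rows": total, "null_rates": null_rates, "duplicates": duplicates}

rows = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": ""},
    {"id": 1, "name": "a"},  # exact duplicate of the first row
]
report = profile(rows, ["id", "name"])
```

Even a simple automated pass like this runs identically over three rows or three billion, which is precisely where the human approach breaks down.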

Choice of open source tooling

In response to the challenges faced in managing data quality within a big data environment, the open-source community has developed a range of tools to assist organisations. These open-source data quality tools provide cost-effective alternatives to proprietary solutions and offer functionalities that address various aspects of data quality.

Each solution has its own pros and cons, but in practice the choice of tooling shouldn’t matter: using AI for template code generation provides an abstraction over whichever framework the data engineering team prefers.

The use of AI to support data quality management

Artificial Intelligence (AI) has the potential to revolutionise data quality management within a big data environment by augmenting human capabilities. One significant application of AI is its ability to generate code for any target framework from a standardised input. By leveraging AI techniques such as machine learning and natural language processing, organisations can automate the generation of code snippets or scripts that improve data quality processes.

AI-driven code generation significantly accelerates and streamlines data quality tasks, reducing both the cost of implementing data quality controls and the human limitations of producing them at scale.
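As a rough illustration of what a "standardised input" can look like, the sketch below assembles a natural-language prompt for an LLM from a metadata structure describing an entity and its attributes. The metadata fields, entity and function names are assumptions made for this example, and the actual model inference call is omitted:

```python
def build_prompt(metadata: dict, framework: str) -> str:
    """Assemble a natural-language prompt asking an LLM to emit
    data quality checks for the given target framework (sketch only)."""
    attribute_lines = "\n".join(
        f"- {a['name']} ({a['type']}): {a['description']}"
        for a in metadata["attributes"]
    )
    return (
        f"Generate {framework} data quality checks for the entity "
        f"'{metadata['entity']}' ({metadata['description']}).\n"
        f"Attributes:\n{attribute_lines}"
    )

# Hypothetical annotated metadata for a customer entity
metadata = {
    "entity": "customer",
    "description": "core customer records",
    "attributes": [
        {"name": "customer_id", "type": "string", "description": "unique identifier"},
        {"name": "age", "type": "int", "description": "age in years, 0-120"},
    ],
}
prompt = build_prompt(metadata, "Deequ")
```

Because the input format is fixed, the same metadata can be used to prompt for any target framework, which is what makes the tooling choice above largely interchangeable.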

How bigspark leverages AI and open source tooling for data governance

Our CTO Chris Finlayson explains: “AI is not often considered for data governance use cases; it is more often viewed as a data governance challenge! It has been interesting to explore the contrarian position, and the results have been compelling. With the support of Large Language Models (LLMs), our engineering team is able to consistently generate high quality pipelines for data quality measurement, based only on provided schema metadata, minimal annotation and natural language inputs. We believe that AI can provide massive acceleration within the data governance research space and are excited to be innovating in this area.”

Our AI Data Quality solution

In this overall solution, we demonstrate how we can extract a schema from any common source format, generate a consistent metadata markup file, supply it to an inference service via a user interface, and generate data quality pipelines for the desired framework.
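To illustrate the first two steps, the sketch below extracts column names, types and comments from a simple DDL statement and emits a metadata markup structure. The DDL grammar handled, the markup field names and the example table are all assumptions for illustration, not the solution's actual formats:

```python
import json
import re

DDL = """
CREATE TABLE transactions (
    txn_id STRING COMMENT 'unique transaction id',
    amount DECIMAL(10,2) COMMENT 'amount in GBP',
    booked_at TIMESTAMP COMMENT 'booking time'
)
"""

def ddl_to_markup(ddl: str, entity_note: str) -> dict:
    """Extract column name, type and comment from a simple DDL statement
    and emit a consistent metadata markup structure."""
    table = re.search(r"CREATE TABLE (\w+)", ddl).group(1)
    # Match: <name> <TYPE[(precision,scale)]> COMMENT '<text>'
    cols = re.findall(r"(\w+)\s+([A-Z]+(?:\(\d+,?\d*\))?)\s+COMMENT\s+'([^']*)'", ddl)
    return {
        "entity": table,
        "annotation": entity_note,
        "attributes": [{"name": n, "type": t, "annotation": c} for n, t, c in cols],
    }

markup = ddl_to_markup(DDL, "card transaction events")
print(json.dumps(markup, indent=2))
```

The resulting markup carries exactly the inputs described above: schema metadata plus minimal annotation, with no access to the underlying data.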

For example, generated code artifacts can be compiled against Amazon Deequ and deployed into a dedicated data quality orchestration pipeline.

  • The solution has shown promise in creating data quality rulesets from a bare source schema and minimal annotation (entity and attribute descriptions), with no requirement for direct access to the source data
  • High degrees of automation are possible (compilation, orchestration) to industrialise the generation and deployment of data quality rules into a development environment, allowing for human refinement of the rules and verification of outputs prior to promotion
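To illustrate the shape of a generated artifact, the sketch below renders a Deequ-style check from metadata markup using a fixed template. This is a simplification: the solution described above uses LLM inference rather than fixed templates, and the markup fields shown are assumptions. The `isComplete` and `isUnique` constraints are standard Deequ `Check` methods:

```python
def render_deequ_check(markup: dict) -> str:
    """Render a Deequ-style Check definition (Scala source text) from
    metadata markup. Illustrative template; the real solution generates
    this via LLM inference rather than a fixed template."""
    lines = [f'Check(CheckLevel.Error, "{markup["entity"]} checks")']
    for attr in markup["attributes"]:
        # Completeness is asserted for every attribute; uniqueness only
        # where the annotation flags it
        lines.append(f'  .isComplete("{attr["name"]}")')
        if attr.get("unique"):
            lines.append(f'  .isUnique("{attr["name"]}")')
    return "\n".join(lines)

markup = {
    "entity": "transactions",
    "attributes": [
        {"name": "txn_id", "unique": True},
        {"name": "amount"},
    ],
}
print(render_deequ_check(markup))
```

The emitted text is then compiled and scheduled by the orchestration layer, with a human reviewing the ruleset in development before promotion.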

How bigspark can accelerate your data quality deliverables

  • Our engineering team has a strong pedigree in building data quality frameworks that perform at scale
  • We can demonstrate how to supercharge these frameworks with the use of the latest AI techniques, in a way which is secure, reliable and compliant with your requirements


Please contact us with the form below to arrange a discussion and detailed review of the solution!