Implications of AI regulation for data integrity

According to research, organisations should invest in training for those working with artificial intelligence (AI) to support data integrity assurance in AI applications.

A study published in the Asian Journal of Advanced Research and Reports reported “a strong positive correlation between higher levels of regulatory compliance and perceived effectiveness in artificial intelligence (AI) implementation, as well as between AI ethics awareness and data integrity assurance.”

The research highlighted the importance of regulatory frameworks and professional training in shaping AI development, specifically “dynamic, adaptable, and inclusive regulatory frameworks that can align AI practices with societal values and ethical norms”.

The authors stated that for AI regulation, the European Commission’s 2021 AI Act proposal “introduces a novel, risk-based framework for AI regulation. This framework categorises AI systems based on the potential risk they pose to safety and fundamental rights.”

Key challenges and opportunities

On the benefits side, “AI significantly enhances data integrity by reducing human error and increasing efficiency in data processing,” according to research highlighted in the paper.

With its ability to efficiently process and analyse large datasets, according to Oladoyinbo et al., AI has facilitated “breakthroughs in fields such as predictive analytics, personalised medicine, and autonomous systems”.

However, when AI systems are used, concerns around “… data accuracy, quality, privacy, and security [arise]. The integrity of AI decisions is directly linked to the integrity of the data it processes,” the authors asserted.

The capability of artificial intelligence systems also raises “significant ethical concerns, particularly regarding the integrity of the data AI systems rely on”, the paper explained.

“Instances of data manipulation, whether intentional or due to inherent biases in algorithms, pose serious questions about the reliability and fairness of AI-driven decision making,” the authors continued.

Implications for data integrity

Other research mentioned in the paper argued: “AI systems are only as good as the data they are fed and how they are programmed, thus a concern that if the input data is flawed or biased, AI will amplify these issues.” As such, there is a “need for transparency in AI algorithms to ensure data integrity.”

Therefore, by “setting specific parameters and continuously updating algorithms, AI can be used as a tool to promote fairness and objectivity”.

In conclusion, Oladoyinbo et al. recommended: “[P]olicymakers should focus on developing and refining comprehensive, adaptable regulatory frameworks for AI that emphasise privacy, transparency, and accountability… institutions and organisations should invest in continuous ethical training and awareness programs for AI practitioners. This would enable them to recognise and address the ethical implications of their work, thereby ensuring data integrity and fairness in AI applications.”