
The Next Wave of GRC: Harnessing the Power of Responsible AI

3 min
4/19/2024

We’re starting to see a meaningful acceleration across industries toward advanced AI adoption, the most recent example being JP Morgan, which details its activity in the space in its latest annual letter. Many organizations, including JP Morgan, have used AI for over a decade. In the letter, Jamie Dimon noted, “We have been actively using predictive AI and ML for years—and now have over 400 use cases in production in areas such as marketing, fraud, and risk—and they are increasingly driving real business value across our businesses and functions.” However, it’s only now, with the emergence of LLMs, that organizations are starting to explore the potential of generative AI. Dimon notes that some of the most interesting use cases are in software engineering, customer service and operations, and general employee productivity.

Generative AI has introduced a new, heightened sense of urgency around risk management, controls, governance, and compliance. One of the biggest reasons is that generative AI models can memorize private information from their training data and unintentionally reveal it. Moreover, integrating AI and LLMs requires constant adaptation, exposing organizations to significant risks, including inaccurate outputs, bias, misinformation, and potential unknowns. All of this underscores the need for organizations to use AI responsibly. Below are several areas where we’re seeing innovation in the GRC tech stack that companies are building to alleviate the concerns associated with advanced AI adoption.

Synthetic Data / Data Curation and Anonymization: Supporting data curation by automating cleaning and preprocessing, generating synthetic data to augment datasets, and anonymizing personally identifiable information while preserving data utility.
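To make the anonymization idea concrete, here is a minimal sketch using only Python’s standard library. The regexes, the pseudonym format, and the example text are all illustrative assumptions, not a production approach; real tools add entity recognition, format-preserving encryption, and re-identification risk scoring.

```python
import re

# Toy example: redact emails and US-style phone numbers from free text,
# replacing each with a stable synthetic token so joins across records still work.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def anonymize(text: str, mapping: dict[str, str]) -> str:
    """Replace each PII match with a consistent pseudonym, preserving data utility."""
    def pseudonym(match: re.Match) -> str:
        value = match.group(0)
        if value not in mapping:
            mapping[value] = f"<PII_{len(mapping):04d}>"
        return mapping[value]
    return PHONE_RE.sub(pseudonym, EMAIL_RE.sub(pseudonym, text))

mapping: dict[str, str] = {}
print(anonymize("Contact jane@corp.com or 555-123-4567.", mapping))
# -> Contact <PII_0000> or <PII_0001>.
```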

Version Control & Experiment Tracking: Automating code review, recommending optimizations during development, and ensuring the traceability and reproducibility of AI models to alleviate code integrity and transparency concerns.
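As a rough sketch of the traceability idea, the snippet below ties every experiment result to the exact code version and configuration that produced it. It assumes it is run inside a git repository; the record format is invented for illustration, and real stacks typically lean on purpose-built tools like MLflow or DVC.

```python
import hashlib
import json
import subprocess
import time

def log_experiment(config: dict, metrics: dict, path: str = "runs.jsonl") -> None:
    """Append an experiment record that links results to the code that produced them."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    record = {
        "timestamp": time.time(),
        "git_commit": commit,            # reproducibility: which code ran
        "config_hash": hashlib.sha256(   # traceability: which settings ran
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest(),
        "config": config,
        "metrics": metrics,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_experiment({"lr": 3e-4, "epochs": 10}, {"val_auc": 0.91})
```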

Private Learning Platforms: By optimizing communication and aggregation protocols, these platforms reduce communication overhead, improve convergence speed, and allow collaborative model training while preserving privacy during model aggregation. In addition, some training methods, like Federated Learning, enable organizations to train models without exposing their data or transferring it to a central server. This is particularly attractive for companies in highly regulated industries like healthcare, financial services, and even defense.
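For intuition, here is a minimal sketch of federated averaging (FedAvg) on a linear model with NumPy. The datasets and hyperparameters are made up for illustration, and production systems add secure aggregation and differential privacy on top, but the core loop looks roughly like this: clients train locally, and the server only ever sees model weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    """Each client runs a few gradient steps on its private data; raw data never leaves."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for linear regression
        w -= lr * grad
    return w

# Three clients, each holding a private dataset that is never sent to the server.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(20):
    # Server averages only the weights returned by each client (FedAvg).
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches true_w without any client sharing raw data
```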

AI Auditing / Governance Workflow and Documentation: Auditing tools analyze AI systems for biases and ethical implications, ensuring compliance with regulations and providing comprehensive documentation to address transparency and accountability concerns.  
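As a concrete example of one such check, below is a minimal sketch of a demographic-parity audit. The group labels and decisions are toy data, and the 80% rule is a common regulatory heuristic rather than a universal standard; real audit tooling covers many more fairness metrics and ties results into documentation workflows.

```python
import numpy as np

def demographic_parity_audit(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Compare positive-outcome rates across protected groups.

    A large gap flags potential disparate impact for human review.
    """
    rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
    worst, best = min(rates.values()), max(rates.values())
    ratio = worst / best if best > 0 else float("nan")
    return {
        "positive_rates": rates,
        "gap": best - worst,
        "ratio": ratio,
        "flag_80_percent_rule": ratio < 0.8,  # common rule-of-thumb threshold
    }

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model's approve/deny decisions
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_audit(preds, grps))
```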

Model Evaluation & Testing: Automating performance and reliability testing mitigates concerns about model inaccuracies across a range of tasks and conditions. Quickly identifying vulnerabilities and failure modes makes models more secure.
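One simple form of such testing is a perturbation check: compare a model’s accuracy on clean versus noise-corrupted inputs and gate the release on the degradation. This is a minimal sketch; the model, noise level, and tolerance below are stand-ins, and real test suites cover many more conditions (distribution shifts, adversarial inputs, edge cases).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def robustness_test(model, X, y, noise_scale=0.2, max_drop=0.15) -> bool:
    """Fail the release gate if accuracy degrades too much under input noise."""
    clean_acc = model.score(X, y)
    noisy_acc = model.score(X + rng.normal(scale=noise_scale, size=X.shape), y)
    print(f"clean={clean_acc:.3f} noisy={noisy_acc:.3f}")
    return (clean_acc - noisy_acc) <= max_drop

ok = robustness_test(model, X, y)
print("release gate:", "pass" if ok else "fail")
```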

Model Monitoring & Observability: Detecting anomalies in real time, forecasting model performance, and providing actionable insights for troubleshooting, addressing concerns about performance degradation and unexpected behaviors in production.
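A minimal sketch of one common monitoring signal, the population stability index (PSI), which measures how far a live feature distribution has drifted from its training-time baseline. The bin count and the 0.2 alert threshold are conventional rules of thumb rather than standards, and the data here is synthetic.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: drift of live data relative to the baseline."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clamp live values into the baseline's range so counts land in the same bins.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live = rng.normal(0.5, 1.2, 10_000)      # same feature in production
score = psi(baseline, live)
print(f"PSI={score:.3f}")                # > 0.2 commonly triggers an alert
if score > 0.2:
    print("drift detected: route to investigation / retraining")
```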

We’d love to chat if you're building in this space! Feel free to reach me at hannah@ansa.co
