New AI Ethics Research Released

AI Adoption Requires Strong Governance Through Ethical and Risk Management Frameworks, Says Info-Tech Research Group

AI governance will help organizations monitor, manage, and control all artificial intelligence activities.

As artificial intelligence (AI) and machine learning (ML) rapidly gain momentum as commonplace organizational functions, concerns about the ethical and responsible use of AI technology have also begun to surface. To help organizations build a framework for more transparent and unbiased AI, global IT research and advisory firm Info-Tech Research Group has published its new industry blueprint titled AI Governance.

Artificial intelligence is defined as a combination of technologies, often including machine learning, that perform tasks mimicking human intelligence, such as learning from experience and problem-solving. Most importantly, AI systems can make decisions without human intervention. Machine learning is an AI process in which systems learn from experience rather than from explicit instructions: patterns are learned from historical data and then used to make predictions about new cases.
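To make that distinction concrete, the short sketch below (assuming Python with scikit-learn; the synthetic dataset and model choice are illustrative only, not part of the Info-Tech research) shows a model learning patterns from past data and then predicting outcomes for unseen cases:

    # A model "learns" patterns from past data, then predicts outcomes for
    # new cases without being given explicit rules.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for historical data: features plus known outcomes.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = LogisticRegression()
    model.fit(X_train, y_train)           # patterns are learned from the data
    predictions = model.predict(X_test)   # predictions for previously unseen cases
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
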

Often used to support or replace human decision-making, AI and ML offer several potential benefits, including enhanced customer experience, improved operational efficiency, and automated business processes. While these benefits make incorporating AI and ML into routine organizational operations appealing, adoption must be approached with clear expectations and concise governance procedures in place.

"Since ML and AI technologies are constantly evolving, so too must AI governance and risk management frameworks to verify that the appropriate safeguards and controls are in place," says Irina Sedenko, advisory director at Info-Tech Research Group. "To ensure responsible, transparent, and ethical AI systems, organizations need to review existing risk control frameworks and update them to include AI risk management and impact assessment frameworks and processes."

Based on the timely research included in the AI Governance blueprint from Info-Tech, the firm recommends that organizations consider the following key components for an effective AI governance framework:

  • Monitoring – Monitoring compliance and risk of AI/ML systems or models in production (see the illustrative sketch after this list).
  • Organization – Structure, roles, and responsibilities of the AI governance organization.
  • Operating Model – How AI governance operates and works with other organizational structures to deliver value.
  • Risk & Compliance – Alignment with corporate risk management and ensuring compliance with regulations and assessment frameworks.
  • Policies/Procedures/Standards – Policies and procedures to support the implementation of AI governance.
  • Model Governance – Accountability and traceability for AI/ML models.
  • Tools & Technologies – Tools and technologies to support the AI governance framework implementation.

There is no one-size-fits-all AI governance structure. As such, Info-Tech encourages organizations to identify roles and responsibilities at strategic, tactical, and operational levels; establish an AI governance council; and identify all groups supporting AI initiatives. The organization's maturity, size, and enterprise governance arrangements will influence how AI governance is structured.

Building an effective AI governance framework and program will help organizations:
  • Define accountability and responsibility for AI.
  • Define the AI risk management framework.
  • Support the ethical, transparent, and fair use of AI.
  • Define a framework to support ML/AI model governance.
The AI governance framework can also be used to define a set of metrics and key performance indicators (KPIs) that can measure the success of the framework implementation.
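As one hedged example of what such metrics could look like, the sketch below computes two hypothetical KPIs over a small model inventory; the metric names and fields are assumptions made for illustration, not figures defined in the blueprint:

    # Illustrative governance KPIs over a model inventory; the fields and
    # metric names are assumptions, not prescribed by the blueprint.
    models = [
        {"name": "credit_scoring",  "owner_assigned": True,  "risk_reviewed": True},
        {"name": "churn_predictor", "owner_assigned": True,  "risk_reviewed": False},
        {"name": "chat_router",     "owner_assigned": False, "risk_reviewed": False},
    ]

    total = len(models)
    kpis = {
        "pct_models_with_accountable_owner": 100 * sum(m["owner_assigned"] for m in models) / total,
        "pct_models_risk_reviewed": 100 * sum(m["risk_reviewed"] for m in models) / total,
    }
    for name, value in kpis.items():
        print(f"{name}: {value:.0f}%")

Tracking such figures over time gives the governance council a simple signal of whether the framework is actually being adopted rather than merely documented.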

Source: Info-Tech Research Group media announcement