
May 31, 2024

Exploring Risk Management in IT Systems with a Focus on AI

Cybersecurity

Risk Management and ICT

Risk management plays an important role in Information and Communication Technology (ICT) services by supporting the identification, assessment, and prioritization of risks associated with technology usage. This strategic approach helps organizations allocate resources effectively to minimize, monitor, and control the probability and impact of unforeseen events. Successful risk management can lead to enhanced security, informed decision-making, improved operational efficiency, and compliance with regulatory standards.

As companies increasingly rely on ICT services to sustain their operations across various sectors such as banking and supply chain management, the European Commission has introduced the Digital Operational Resilience Act (DORA). This regulation aims to:
  • Comprehensively address ICT risk management within the financial services sector.
  • Harmonize ICT risk management regulations across individual EU member states.

However, with the democratization of Artificial Intelligence across various industries, including finance and manufacturing, the EU recognizes the need for more specific regulation that adopts a risk-based approach: the AI Act. The objective is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Importantly, AI systems should be overseen by humans rather than relying solely on automation, in order to prevent harmful outcomes.

 

Anticipating the Impact of the AI Act

The Artificial Intelligence Act is a proposal for an EU regulatory framework on artificial intelligence (AI) that was tabled by the European Commission in April 2021. This draft AI Act is the first-ever attempt to enact a regulation for AI.

The proposed legal framework focuses on the specific uses of AI systems and the risks associated with them. The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification of AI systems with different requirements and obligations, tailored to a ‘risk-based approach’ that is explained as follows:

Unacceptable risk: AI systems that are considered a threat to people and will be banned. This includes:
  1. AI systems that deploy harmful manipulative ‘subliminal techniques’;
  2. AI systems that exploit specific vulnerable groups (e.g., people with a physical or mental disability);
  3. AI systems used by public authorities, or on their behalf, for social scoring purposes;
  4. ‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.
High risk: AI systems that impact safety or fundamental rights, classified into two categories:
  1. AI systems in products covered by EU product safety legislation (e.g., toys, aviation, medical devices).
  2. AI systems in eight specific areas requiring registration in an EU database, including biometric identification and law enforcement. These systems undergo assessment before market entry and throughout their lifecycle.
Generative AI: Transparency requirements include disclosing that content is AI-generated, preventing the generation of illegal content, and publishing summaries of training data.
Limited risk: AI systems subject to minimal transparency requirements, ensuring users are aware that they are interacting with AI, especially where image, audio, or video content is manipulated.

The EU AI Act mandates legal obligations for high-risk AI systems, emphasizing risk management and data governance practices. These measures prioritize responsible and ethical AI use while safeguarding user rights and safety.
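To make the tiered approach more concrete, here is a minimal, illustrative sketch of how such a classification might be modeled in code. The tier names mirror the Act's categories, but the system attributes, the classification logic, and the default minimal-risk tier are simplifying assumptions for illustration, not a legal decision procedure.

```python
# Illustrative only: a simplified model of the AI Act's risk-based tiers.
# Attributes and logic are hypothetical simplifications, not legal criteria.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # assessment before market entry + EU registration
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # default tier for everything else (assumption)


@dataclass
class AISystem:
    name: str
    uses_subliminal_manipulation: bool = False
    used_for_social_scoring: bool = False
    safety_component: bool = False       # in a product covered by EU product safety law
    interacts_with_humans: bool = False  # e.g., chatbots, generated media


def classify(system: AISystem) -> RiskTier:
    """Map a system's attributes to a risk tier (simplified sketch)."""
    if system.uses_subliminal_manipulation or system.used_for_social_scoring:
        return RiskTier.UNACCEPTABLE
    if system.safety_component:
        return RiskTier.HIGH
    if system.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify(AISystem("medical device assistant", safety_component=True)))
# RiskTier.HIGH -> assessment before market entry and across the lifecycle
```

In practice, classification depends on the system's intended purpose and the areas listed in the Act's annexes, so any such mapping would need legal review rather than a few boolean flags.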

 

Key Points in the AI Act

  • Article 9 stands out as a pivotal risk management provision in the AI Act. It defines the regulatory concept, scope of application, specific risk management requirements, and enforcement mechanisms.
  • Article 9 primarily applies to providers of high-risk AI systems, helping them comply with the requirements described above.
  • The development of harmonized standards on AI risk management aligns closely with the objectives of the AI Act.
  • A risk management framework serves as a practical roadmap, striking a balance between managing AI-related risks and fostering innovation and efficiency.

This approach ensures that the development and deployment of AI technologies are done responsibly, ethically, and with minimal risk to users and society.

As the Luxembourgish ecosystem continues to evolve, it will rely on AI as much as on other ICT services to create value for its customers. It is therefore crucial to address challenges such as data privacy, security, fair competition, and regulatory compliance. Companies must anticipate the impact of the DORA and AI Act regulations on their activities and start implementing new processes, such as risk management, to ensure compliance with upcoming regulations.

 

How to Implement Risk Management to Ensure Compliant Use of AI in a Company

Risk management processes play a crucial role in ensuring the compliance of AI use in a company. Here are some key steps involved:

  • AI Risk Assessment: The first step is to conduct a comprehensive AI risk assessment. This involves identifying and evaluating the potential risks associated with the implementation or development of AI. The National Institute of Standards and Technology’s AI Risk Management Framework provides a comprehensive approach for this (a minimal scoring sketch follows this list).
  • AI by Design: Companies need to build risk management directly into their AI initiatives, so that oversight is constant and concurrent with internal development and external provisioning of AI across the enterprise.
  • AI Security Risk Management: An AI security risk assessment framework can be used to reliably audit, track, and improve the security of AI systems. This involves looking at the entire lifecycle of system development and deployment.
  • Compliance with Regulations: Continuously monitor current and upcoming EU regulations to anticipate changes and adapt processes, staying ahead of any changes in the market.
  • Integrated Audit Solutions: Integrated audit solutions are needed to manage existing and potential risks associated with Artificial Intelligence (AI).
  • Continuous Monitoring and Improvement: Regular monitoring and updating of risk management strategies is essential to keep up with evolving AI technologies and regulations.
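As a starting point for the assessment step above, here is a minimal risk-register sketch assuming a simple likelihood × impact scoring model. The Risk fields, the 1–5 scales, and the treatment threshold are illustrative assumptions; they are not taken from the NIST AI Risk Management Framework or the AI Act.

```python
# A minimal AI risk-register sketch with likelihood x impact scoring.
# Field names, scales, and the threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def prioritize(register: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return risks at or above the treatment threshold, highest score first."""
    flagged = [r for r in register if r.score >= threshold]
    return sorted(flagged, key=lambda r: r.score, reverse=True)


register = [
    Risk("Training data contains personal data without a legal basis", 4, 5,
         "Data-governance review; anonymization before training"),
    Risk("Model drift degrades decision quality over time", 3, 3,
         "Continuous monitoring with periodic revalidation"),
    Risk("Undisclosed AI-generated content shown to users", 2, 4,
         "Label AI outputs; log generation events for audit"),
]

for risk in prioritize(register):
    print(f"[score {risk.score}] {risk.description} -> {risk.mitigation}")
```

In practice, the register would also capture risk owners, controls, and review dates, and the treatment threshold would reflect the organization's risk appetite.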

In conclusion, risk management in the financial sector is currently heavily concentrated on Information and Communication Technology, with most major financial institutions having already integrated ICT risk management into their internal control framework. The emergence of Artificial Intelligence, however, is introducing a new dimension to organizational operations, particularly within risk management functions. Anticipated advancements suggest that AI has the potential to perform various risk management tasks, including risk mapping to identify and assess a company's risks and evaluate their financial, reputational, and other impacts. Moreover, AI is expected to be able to propose risk management strategies tailored to top management and aligned with their risk appetite. This evolution signifies a shift towards a more sophisticated and technology-driven approach to managing risks in the financial landscape.

 

Zineb is a seasoned consultant at Wavestone, specializing in risk management, project management, and compliance. With a sharp eye for identifying and mitigating potential risks, she has successfully guided numerous projects to completion, ensuring adherence to regulatory standards and minimizing exposure to compliance issues. Her expertise lies in developing strategic solutions that align with organizational goals, streamline operations, and foster a culture of continuous improvement. Zineb is dedicated to delivering high-quality, tailored advice that empowers clients to navigate complex challenges and achieve sustainable success.
