Risk management plays an important role in Information and Communication Technology (ICT) services by helping organizations identify, assess, and prioritize the risks associated with technology usage. This strategic approach allows organizations to allocate resources effectively, minimizing, monitoring, and controlling both the probability and the impact of unforeseen events. Successful risk management can lead to enhanced security, informed decision-making, improved operational efficiency, and compliance with regulatory standards.
However, with the democratization of Artificial Intelligence across industries, including finance and manufacturing, the EU has recognized the need for a more specific regulation that adopts a risk-based approach: the AI Act. Its objective is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Importantly, AI systems should be overseen by humans rather than left to automation alone, in order to prevent harmful outcomes.
The Artificial Intelligence Act is a proposed EU regulatory framework on artificial intelligence (AI) that was tabled by the European Commission in April 2021. This draft AI Act is the first-ever attempt to enact a comprehensive regulation for AI.
The proposed legal framework focuses on the specific uses of AI systems and the risks associated with them. The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification of AI systems with different requirements and obligations tailored to a 'risk-based approach', which distinguishes four tiers:
- Unacceptable risk: AI practices that threaten fundamental rights, such as social scoring, are prohibited outright.
- High risk: AI systems used in sensitive areas such as credit scoring, recruitment, or critical infrastructure are subject to strict obligations before they can be placed on the market.
- Limited risk: systems such as chatbots are subject to transparency obligations, so that users know they are interacting with AI.
- Minimal risk: all other systems, such as spam filters, face no additional obligations.
The EU AI Act mandates legal obligations for high-risk AI systems, emphasizing risk management and data governance practices. These measures prioritize responsible and ethical AI use while safeguarding user rights and safety.
This approach ensures that the development and deployment of AI technologies are done responsibly, ethically, and with minimal risk to users and society.
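To make the classification concrete, the following Python sketch shows how an organization might record this tiering in an internal AI inventory. The use-case names and their tier assignments are illustrative assumptions only, not a legal determination under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices under the AI Act
    HIGH = "high"                  # strict obligations (risk management, data governance, oversight)
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical, simplified mapping of example use cases to tiers;
# a real classification requires legal analysis of the Act's annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed AI Act risk tier for a known use case."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}; classify manually.")

if __name__ == "__main__":
    for case in ("credit_scoring", "customer_chatbot"):
        print(f"{case}: {classify(case).value} risk")
```

Keeping such an inventory in a structured form makes it straightforward to report which systems fall under the Act's high-risk obligations.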
As the Luxembourgish ecosystem keeps evolving, it will rely on AI as much as on other ICT services to create value for its customers, so it is crucial to address challenges such as data privacy, security, fair competition, and regulatory compliance. Companies must anticipate the impact of the DORA and AI Act regulations on their activities and start implementing new processes, such as risk management, to ensure compliance with upcoming requirements.
Risk management processes play a crucial role in ensuring the compliance of AI use in a company. The key steps involved are the following (illustrated in the sketch after this list):
- Risk identification: catalogue the AI systems in use and the threats they introduce, from data breaches to biased outcomes.
- Risk assessment: evaluate each risk's likelihood and impact to establish its severity.
- Risk mitigation: define and implement controls that reduce the most severe risks to an acceptable level.
- Monitoring and review: track risks and control effectiveness continuously, as AI systems and regulations evolve.
- Documentation and reporting: keep records that demonstrate compliance to regulators and to internal governance bodies.
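As an illustration of the assessment and prioritization steps, here is a minimal Python sketch of a risk register using the classic likelihood-by-impact scoring. The risk entries, scales, and mitigations are hypothetical examples, not prescriptions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Classic likelihood-by-impact scoring used in risk mapping.
        return self.likelihood * self.impact

# Hypothetical entries for an AI-related risk register.
register = [
    Risk("Training data privacy breach", likelihood=3, impact=5,
         mitigation="Data minimisation, pseudonymisation, access controls"),
    Risk("Model bias in credit decisions", likelihood=4, impact=4,
         mitigation="Bias testing, human review of rejections"),
    Risk("Vendor model outage", likelihood=2, impact=3,
         mitigation="Fallback rules engine, SLA monitoring"),
]

# Prioritise: highest score first, so treatment plans target
# the risks that exceed the organisation's risk appetite.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}  ->  {risk.mitigation}")
```

Scoring each risk the same way makes prioritization transparent and gives management a defensible basis for accepting or treating each risk.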
In conclusion, risk management in the financial sector is currently concentrated on Information and Communication Technology, and most major financial institutions have already integrated ICT risk management into their internal control frameworks. The emergence of Artificial Intelligence, however, is introducing a new dimension to organizational operations, particularly within risk management functions. Anticipated advancements suggest that AI could itself perform various risk management tasks, including risk mapping to identify and assess a company's risks and to evaluate their financial, reputational, and other impacts. Moreover, AI is expected to be able to propose risk management strategies tailored to top management and aligned with their risk appetite. This evolution signifies a shift towards a more sophisticated, technology-driven approach to managing risks in the financial landscape.