The AI Act: Do the risk groups make sense for trust?
- Luca Collina
- Dec 23, 2023
- 2 min read

The EU's AI Act aims to create clear rules for building, using, and selling artificial intelligence systems. It sorts AI into three risk groups - unacceptable, high-risk, and limited-risk - then sets rules for each.
For people, the Act tries to protect fundamental rights. It outright bans AI designed to manipulate people or exploit their vulnerabilities. Some groups still worry the rules leave room for AI that harms those rights. Telling users when an AI system is in use, though, gives them a measure of control.
For businesses, the Act provides solid guidelines for responsible AI. But compliance takes real effort, especially for smaller companies, and some AI opportunities are blocked outright. Still, a single set of standards helps focus investment on AI that benefits people.
Building Trust at All Levels
The AI Act's provisions for accountability and transparency can increase trust in AI across organisational levels - from executives setting strategy to managers overseeing systems and employees impacted by deployment.
Studies show many executives hesitated to invest in AI due to concerns over complexity, unintended consequences, and loss of human oversight.
The Act addresses these worries by imposing strict obligations on high-risk systems, including documentation, transparency, and human oversight. Restrictions on fully autonomous operation also help. Executives can use external audits to validate internal processes, data, and algorithms.
Managers gain clearly defined duties around monitoring, accuracy, risk management, and documenting AI system changes, which builds confidence that responsible governance is possible. By embedding such operational guidance into the systems development life cycle, managers should have standardised tools to assess progress and compliance.
While employees may readily use generative AI in their private lives, at work they often distrust AI as a tool for excessive tracking or a threat to job security. The Act's human-centric requirements, restrictions on problematic types of AI use, and guaranteed human oversight protect workers while leaving room for innovation.
Ultimately, the Act could also empower labour advocates to challenge the deployment of systems that still undermine workers' rights.
The Act reflects a societal consensus on an ethical way forward for AI. This helps persuade executives, managers, and employees alike of the intent behind the rules and eases their reservations. Overall, the focus on responsible development anchored in EU values signals that AI can be trusted.
Does the Act hit the right balance?
The ambitious AI Act is groundbreaking but still untested. Overall, it walks a careful line between caution and enabling innovation. Built-in feedback loops to refine the rules over time should keep steering it towards accountable AI progress in Europe, allowing innovation across industries while effectively protecting its many stakeholders.
As for the Act's impact, in a survey of 113 EU-based startups, 33% said their systems could be classified as high-risk under the regulation. This implies that regulatory readiness and internal governance will be essential.