EU Regulations Are Not Ready for Multi-Agent AI Incidents | TechPolicy.Press

In August this year, the European Commission’s guidelines for Article 73 of the EU AI Act will come into force. The guidelines require deployers and providers to report serious incidents involving AI systems in high-risk settings such as critical infrastructure. The release is timely. Nick Moës, Executive Director of The Future Society, warned during the flagship Athens Roundtable in London last December: “This may be one of the last years in which we can still prevent an AI disaster as defined by the OECD. The scale and growth of incidents we are witnessing is already concerning.”

Is the regulation we are developing fit for purpose? As we argued in our recent submission to the Commission’s consultation on the draft guidance and reporting template for serious AI incidents, the draft contains a worrying loophole that must be addressed. The guidelines focus on single-agent, single-occurrence failures and assume a simplistic one-to-one causal map for AI-related incidents. Yet some of the most serious risks are already emerging from interactions between AI systems, where multiple occurrences can produce cascading, cumulative effects.

An incident reporting framework that ensures accountability for these new risks must be part of the EU AI Act’s implementation. With the Article 73 guidelines set to become binding in August, the clock is ticking for the European Commission to embed these changes.



