When National Security Becomes a Shield for Evading AI Accountability

(Ashwin Prabu, Marlena Wisniak – Tech Policy Press)

As artificial intelligence becomes embedded in state security and surveillance across Europe, the legal safeguards meant to constrain its use are increasingly being left behind. EU member states are turning to AI to automate decision-making, expand surveillance, and consolidate state power. Yet many of these applications, particularly biometric surveillance and algorithmic risk assessments, remain largely unregulated where national security is concerned. Broad national security carve-outs and exemptions in existing AI legislation, including Article 2 of the EU AI Act and Article 3(2) of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, have created significant regulatory gaps. Compounding the problem, "national security" itself is so loosely defined that it allows states to bypass fundamental rights while deploying AI with minimal oversight.

Against the backdrop of a rapidly shifting geopolitical environment and rising authoritarianism, national security is becoming a convenient cover for unchecked surveillance and expanded executive authority. This dynamic sets a dangerous precedent. EU governments and candidate countries are invoking national security to justify AI deployments in ways that evade regulatory scrutiny, particularly in surveillance and counterterrorism. Upholding the jurisprudence of the Court of Justice of the European Union is critical: it provides a legal compass for defining national security and setting clear thresholds for when states may override fundamental rights. Without it, Europe risks building a security architecture powered by AI but shielded from accountability.