Dynamically Typed

The EU's Artificial Intelligence Act

The European Commission has released its proposal for the Artificial Intelligence Act, “the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.” The proposal covers software powered by anything from machine learning to more classical statistical and expert system approaches, and applies rules based on how risky it deems each application. Unacceptable-risk applications like broad, real-time facial recognition or automated social credit systems are completely forbidden, while limited-risk applications like emotion detection or biometric categorization systems only require that the person being analyzed is notified it’s happening. As noted on Twitter by Dr. Kate Crawford and in Andrew Ng’s DeepLearning.AI newsletter, there are certainly flaws in the proposal: on the one hand it could hinder innovation, on the other it leaves loopholes. Still, it could have an effect similar to GDPR’s in “drawing a line in the sand” and inspiring regulators in other big economies to create similar legislation. Creating such hand-holds for which AI applications we do and don’t accept as a society is a very good thing in my book.