Dynamically Typed

#58: Partnership on AI's AI Incident Database, and lots of productized AI

Hey everyone, welcome to Dynamically Typed #58! In today’s issue I wrote about the AI Incident Database, a new project by the Partnership on AI that’s meant to help future machine learning researchers and developers avoid repeating bad outcomes that AI-powered systems have historically caused.

Beyond that, I’ve got quite a few productized AI links today: Facebook launched a big update to their feature that helps blind and visually impaired people understand images in their newsfeed; an engineer at Dropbox explained how ML prioritization can help reduce compute costs; and a problem with Google Translate’s live speech translation feature proved to be an interesting case study of nonfunctional requirements for AI-powered products. And for ML research, I covered a new entry in the Distill Circuits thread on high-low frequency detectors. (Not too much was happening in climate AI and AI art these past weeks, so I don’t have any links for those sections today.)

Productized Artificial Intelligence 🔌

The Discover app of the AI Incident Database

The Partnership on AI to Benefit People and Society (PAI) is an international coalition of organizations with the mission “to shape best practices, research, and public dialogue about AI’s benefits for people and society.” Its 100+ member organizations cover a broad range of interests and include leading AI research labs (DeepMind, OpenAI); several universities (MIT, Cornell); most big tech companies (Google, Apple, Facebook, Amazon, Microsoft); news media (NYT, BBC); and humanitarian organizations (Unicef, ACLU).

PAI recently launched a new project: the AI Incident Database. The AIID mimics the FAA’s airplane accidents database and is similarly meant to help “future researchers and developers avoid repeated bad outcomes.” It’s launching with a set of 93 incidents, including an autonomous car that killed a pedestrian, a trading algorithm that caused a flash crash, and a facial recognition system that caused an innocent person to be arrested (see DT #43). For each incident, the database includes the news articles that reported on it: over 1,000 reports so far. It’s also open source on GitHub, at PartnershipOnAI/aiid.
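As a rough mental model of the structure described above (one incident grouping the many news reports that covered it), here’s a minimal Python sketch. The class and field names are my own, for illustration only; the AIID’s actual schema lives in the PartnershipOnAI/aiid repo.

```python
from dataclasses import dataclass, field

# Hypothetical model of an AIID-style incident record; names are
# illustrative, not the project's real schema.

@dataclass
class Report:
    title: str
    url: str
    source: str  # the news outlet that published the report

@dataclass
class Incident:
    incident_id: int
    description: str
    reports: list[Report] = field(default_factory=list)

def reports_per_incident(incidents: list[Incident]) -> float:
    """Average number of news reports per incident: 1,000+ reports
    across 93 launch incidents works out to roughly 11 each."""
    return sum(len(i.reports) for i in incidents) / len(incidents)
```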

Systems like this (and Amsterdam’s AI registry, for example) are a clear sign that productized AI is quickly maturing as a field, and that lots of good work is being done to manage its impact. Most importantly, I hope these projects will help us have more sensible discussions about regulating AI. Benedict Evans’ essays Notes on AI Bias and Face recognition and AI ethics are excellent reads on this; he compares calls to “regulate AI” to wanting to regulate databases — it’s not the right level of abstraction, and we should be thinking about specific policies to address specific problems instead. A dataset of categorized AI incidents, managed by a broad coalition of organizations, sounds like a great step in this direction.

Quick productized AI links 🔌

Machine Learning Research 🎛

Synthetic tuning curves: responses of six high-low frequency detectors to artificial stimuli. (Schubert et al., 2021)
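These synthetic tuning curves come from probing each detector with artificial images that place high spatial frequencies on one side of an oriented boundary and low frequencies on the other, then rotating that boundary. Here’s a minimal sketch of such a stimulus, my own construction for illustration rather than Schubert et al.’s exact setup:

```python
import numpy as np

def high_low_stimulus(size: int = 64, angle_deg: float = 0.0, seed: int = 0) -> np.ndarray:
    """Artificial image: high-frequency noise on one side of an oriented
    boundary, a smooth low-frequency gradient on the other. (Illustrative
    stand-in for the paper's synthetic stimuli.)"""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size] - size / 2
    theta = np.deg2rad(angle_deg)
    side = (x * np.cos(theta) + y * np.sin(theta)) > 0  # oriented boundary
    high = rng.uniform(-1, 1, (size, size))   # broadband pixel noise
    low = np.cos(2 * np.pi * x / size)        # one slow cycle across the image
    return np.where(side, high, low)

# A tuning curve then plots a unit's activation on high_low_stimulus(angle_deg=a)
# for a sweep of boundary angles a.
```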

I’ve also collected all 75+ ML research tools previously featured in Dynamically Typed on a Notion page for quick reference. ⚡️

Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.

If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🕘