Dynamically Typed

#48: Microsoft's deepfake detection app, the economics of AI startups, and 3x ML for climate change

Hey everyone, welcome to Dynamically Typed #48. Today’s newsletter is a bit later than usual because of an unexpected extra rowing practice (yesterday) and an unexpected last beach day of the year (today) — I’m trying to make the club’s eight, and big ocean waves are fun!

I wrote today’s feature story on Video Authenticator, Microsoft’s new app for detecting deepfake videos. Beyond that, I’ve also got a new a16z essay on the economics of ML startups and some progress on under-display selfie cameras for productized AI. For ML research, I found a useful guide for NLP data preparation and a not-so-BigGAN. And finally, on the climate change AI side, I have a triplet of quick links that range from prevention (slash-and-burn detection in the Amazon) to adaptation (flood prediction in India and Bangladesh) to measurement (ice sheet thickness detection in glaciers).

Productized Artificial Intelligence 🔌

Microsoft is launching Video Authenticator, an app that helps organizations “involved in the democratic process” detect deepfakes — videos that make people look like they’re saying things they’ve never said by superimposing automatically-generated voice tracks and face movements over real videos. Deepfakes are usually made using generative adversarial networks (GANs) like those in Samsung AI’s neural avatars project (see DT #15) and in the popular open-source DeepFaceLab app.
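As a quick refresher on the mechanics: a GAN trains a generator to synthesize faces that a discriminator can’t tell apart from real ones. Here’s a minimal sketch of one adversarial training step in PyTorch; the tiny fully-connected networks and flattened 784-pixel “faces” are toy placeholders, not DeepFaceLab’s actual architecture:

```python
import torch
import torch.nn as nn

# Toy stand-ins for real face-synthesis networks (hypothetical shapes).
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_faces):
    """One adversarial step; real_faces is a (batch, 784) tensor."""
    batch = real_faces.size(0)
    noise = torch.randn(batch, 64)

    # 1) Train the discriminator to separate real from generated faces.
    fakes = generator(noise).detach()  # don't backprop into the generator here
    d_loss = bce(discriminator(real_faces), torch.ones(batch, 1)) + \
             bce(discriminator(fakes), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into predicting "real".
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key design point is the alternation: each network’s improvement becomes the other’s harder training signal, which is why deepfake quality has climbed so quickly.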

Because of all the obvious ways in which deepfakes can be abused, this has been a popular research area for technology platform companies: a bit over a year ago, Facebook launched their deepfake detection challenge and Google contributed to TU Munich’s FaceForensics benchmark (#23). Microsoft has now productized these research efforts with Video Authenticator. The app checks photos and videos for the “subtle fading or greyscale elements” that may occur at a deepfake’s blending boundary — where the fake facial movements mix in with the real background media — and gives users a confidence score for whether a face is manipulated. This happens in real time, frame by frame, for videos, which I imagine will be particularly useful for detecting subtle fakery, like a mostly-real video with a few small tweaks that change its message.
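Microsoft hasn’t published Video Authenticator’s internals, but a frame-by-frame scoring loop could conceptually look like the sketch below. The `detect_faces` and `blending_artifact_score` helpers are hypothetical stand-ins for a face detector and a trained classifier that looks for fading/greyscale artifacts at the blending boundary:

```python
import cv2  # OpenCV, used here only to decode video frames

def score_video(path, detect_faces, blending_artifact_score):
    """Yield (frame_index, confidence) per frame, confidence in [0, 1].

    detect_faces(frame) -> list of face crops        (hypothetical helper)
    blending_artifact_score(crop) -> float            (hypothetical helper;
        higher means the crop looks more like a blended-in fake face)
    """
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        # Score every detected face; report the most suspicious one.
        scores = [blending_artifact_score(crop) for crop in detect_faces(frame)]
        yield frame_idx, max(scores, default=0.0)
        frame_idx += 1
    cap.release()
```

Reporting a per-frame score rather than a single verdict is what makes the “few small tweaks” case tractable: a handful of manipulated frames would spike even if the video-level average stays low.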

Video Authenticator initially won’t be made publicly available. Instead, Microsoft is privately distributing it to news outlets, political campaigns, and media companies through the AI Foundation’s Reality Defender 2020 program, “which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology.” This makes sense, since deepfakes represent a typical cat-and-mouse AI security game — new models will surely be trained specifically to fool Video Authenticator, and this limited release is an attempt to slow that down.

I’d be interested to learn how organizations integrate Video Authenticator into their existing workflows for verifying the authenticity of newsworthy videos. I haven’t really come across any examples of big-name news organizations getting fooled by deepfakes yet, but I imagine it’s much more common on social media, where videos aren’t vetted by journalists before being shared.

Quick productized AI links 🔌

Machine Learning Research 🎛

I’ve also collected all 70+ ML research tools previously featured in Dynamically Typed on a Notion page for quick reference. ⚡️

Artificial Intelligence for the Climate Crisis 🌍

Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.

If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 📓