#45: Al Gore's Climate TRACE coalition, Papers with Code's Methods, and lots of productized AI
Hey everyone, welcome to Dynamically Typed #45! I’ve got two feature stories for you today: one on Al Gore’s new Climate TRACE coalition for tracking greenhouse gas emissions using AI, and one on Methods, a useful new resource by Papers with Code. For productized AI, today’s quick links include an Amnesty International case study on using AI for human rights research, Google’s open-source release that I think will power its next generation of voice assistants, and follow-ups on the facial recognition saga in the United States. And finally, I found a few more useful ML research tools and a cool AI-powered animation software demo.
(By the way: Plumerai is hiring for several roles across our Amsterdam, London, and Warsaw offices—come work on making deep learning radically more efficient with me and my amazing colleagues!)
Productized Artificial Intelligence 🔌
- 🛰 Interesting case study from Amnesty International on using automated satellite image classification for human rights research in war zones: tens of thousands of volunteers labeled 2.6 million image tiles of the Darfur region in western Sudan as empty, human presence, or destroyed, which Amnesty and Element AI researchers used to train a model that could predict these labels at 85% precision and 81% recall on the test set. This model then allowed them to visualize and analyze different waves of destruction in the zone over time. The full case study is well worth a read: it includes detailed notes on the ethical tradeoffs they considered before starting the project—a contrast with the ethics sections in many recent ML papers that read like checkbox afterthoughts.
- 📱 Google open-sourced Seq2act, a new model that translates natural-language instructions (“How do I turn on Dutch subtitles on YouTube?”) into mobile UI action sequences (tap the video; tap the settings button; tap closed captions; select Dutch from the list). This isn’t quite productized yet, but who wants to bet that the next major version of Android will allow you to say “OK Google, turn on Dutch subtitles” in the YouTube app—as well as millions of other commands in other apps—and that the phone will just tap the right buttons in the background and do it for you? This is the stuff that makes me jealous as an iPhone user.
- 👁 Update on facial recognition in the United States, which big tech recently pulled out of (see DT #42), and which startups then doubled down on (#43): a group of senators has now proposed legislation to block the use of facial recognition technology by law enforcement. Good!
- 🦸♀️ Related: Fawkes is a new algorithm by Shan et al. that makes imperceptible changes to portrait photos to fool facial recognition: “cloaked images will teach the model a highly distorted version of what makes you look like you.” The University of Chicago researchers wrapped Fawkes into a Windows and macOS app, and they claim that it’s 100% effective against the state-of-the-art models powering commercially available facial recognition APIs. As my friends who study computer security tell me, though, this is always a cat-and-mouse game: at some point, someone will figure out how to make a facial recognition model that’s robust against Fawkes; and then someone else will make a Fawkes 2.0 that’s robust against that; and then… But, at least for a while, running your photos through Fawkes should make them unrecognizable to most facial recognition models out there. Probably.
Artificial Intelligence for the Climate Crisis 🌍
Former vice president Al Gore and Gavin McCormick of WattTime launched Climate TRACE, a project for Tracking Real-time Atmospheric Carbon Emissions. From the coalition’s launch post:
Our first-of-its-kind global coalition will leverage advanced AI, satellite image processing, machine learning, and land- and sea-based sensors to do what was previously thought to be nearly impossible: monitor GHG emissions from every sector and in every part of the world. Our work will be extremely granular in focus — down to specific power plants, ships, factories, and more. Our goal is to actively track and verify all significant human-caused GHG emissions worldwide with unprecedented levels of detail and speed.
Extracting information from satellite imagery is shaping up to be the killer app for climate change AI: we’ve previously seen it used for predicting electrical grid resilience (see DT #14), locating solar panels (#29), tracking deforestation (#25, #28, #39), and classifying farming land use (#41). At the NeurIPS 2019 panel on AI for climate change research (#30), former head of Google Brain Andrew Ng also mentioned that the ability to train models on small satellite datasets is one of the machine learning advances he was most excited about for climate projects.
All this is to say: I’m extremely excited to see such a broad coalition—its founding members include “Blue Sky Analytics, CarbonPlan, Carbon Tracker, Earthrise Alliance, Hudson Carbon, Hypervine, OceanMind, and Rocky Mountain Institute”—launch as an independent observer of greenhouse gas emissions. Their goals are certainly ambitious:
Through Climate TRACE, we will equip business leaders and investors, NGOs and climate activists, as well as international, domestic, and local policy leaders with an essential tool to fully realize the economic and societal benefits of a clean energy future, while ensuring that no one — corporation, country, or otherwise — will ever again have the ability to hide or fake their emissions data. Next year, every country in the world will gather in Glasgow, Scotland, to enhance their commitments to the Paris Agreement and raise collective ambition in line with what the world’s scientists tell us is necessary. We at the Climate TRACE coalition hope to support these COP26 climate talks with the most thorough and reliable data on emissions the world has ever seen.
The rest of the launch post touches on how their GHG emissions monitoring will work, but beyond mentioning that they’ll do sensor fusion on visible + infrared imagery and satellite + radar measurements, Gore and McCormick don’t go into much technical detail yet. They say that this will follow in future posts, which I’ll be sure to link to here when they come out.
For now, they’ve set up a number of online profiles to follow for updates (website, Twitter, GitHub, LinkedIn); David Roberts at Vox also wrote a nice feature about how the coalition came to be.
Machine Learning Research 🎛
Papers with Code’s Methods page for the residual block (cropped).
A few weeks ago, Papers with Code launched Methods, a knowledge graph of hundreds of machine learning concepts:
We are now tracking 730+ building blocks of machine learning: optimizers, activations, attention layers, convolutions and much more! Compare usage over time and explore papers from a new perspective.
I’ve started using Methods as my go-to reference for many things at work. Sitting at a more abstract level than the documentation for your ML library of choice, it’s an incredibly useful resource for anyone doing ML research or engineering. Each Methods page contains the following sections:
- A concise description of what the method is and how it works, including math and a diagram where relevant
- A chronological list of papers that use the method
- A breakdown of tasks from the site’s State-of-the-Art leaderboards for which the method is used
- A graph of how the method’s use changed over time, compared to other methods of the same category (for example, Adam vs. SGD for optimizers)
- A list of components: other methods that contribute to this method (for example, 1x1 convolutions and ReLUs are components of residual blocks; see the sketch below this list)
- A list of categories for the method
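To make that components relationship concrete, here’s a minimal sketch of a bottleneck residual block in PyTorch (my own illustration, not code from the Methods page), in which the 1x1 convolutions and ReLUs show up as building blocks:

```python
import torch
import torch.nn as nn

class BottleneckResidualBlock(nn.Module):
    """Simplified bottleneck residual block (batch norm omitted for clarity)."""

    def __init__(self, channels: int, bottleneck_channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck_channels, kernel_size=1)  # 1x1 conv component
        self.conv = nn.Conv2d(bottleneck_channels, bottleneck_channels, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(bottleneck_channels, channels, kernel_size=1)  # 1x1 conv component
        self.relu = nn.ReLU()  # ReLU component

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.reduce(x))
        out = self.relu(self.conv(out))
        out = self.expand(out)
        return self.relu(out + x)  # the residual (skip) connection around the stack
```

A full ResNet-style block would also include batch normalization after each convolution; I’ve left it out here to keep the component structure easy to see.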
I’ve found the last of those sections to be especially handy for answering those hard-to-Google “what’s the name of that other thing that’s kind of like this thing again?” questions. (Also see this Twitter thread by the project’s co-creator Ross Taylor for a few example uses of the other sections.) Methods launched just a month ago, and given how useful it already is, I’m very excited to see how it grows in the future.
One additional feature I’d find useful is the inverse of the components section: I also want to know which methods build on top of the method I’m currently viewing. Another thing I’d like to see is an expansion of code links for methods to also include TensorFlow snippets—but since Facebook AI Research bought Papers with Code late last year, I’m guessing that keeping these snippets exclusive to (FAIR-controlled) PyTorch may be a strategic decision rather than a technical one.
Quick ML research + resource links 🎛 (see all 69 resources)
- ⚡️ Gradio is an open-source Python library for generating quick web UIs around ML models: use it to “play around with your model in your browser by dragging-and-dropping in your own images (or pasting your own text, recording your own voice, etc.).” (See the sketch below these quick links.)
- ⚡️ Josh Meyer has created a handy markdown template for creating datasheets for datasets (see DT #41). This would’ve come in very handy a month ago when I was writing a dataset datasheet at work and copying over all the questions from the PDF of Gebru et al. (2018) by hand.
- ⚡️ Google has released its Model Card Toolkit, a library and JSON schema that make it easier to document the capabilities and gotchas of trained ML models (see DT #41).
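As promised above, here’s how little code Gradio needs to get a web UI running. This is a minimal sketch in which classify_image is a hypothetical stand-in for your own model:

```python
import gradio as gr

def classify_image(image):
    # Hypothetical stand-in for a real model: `image` comes in as a numpy array,
    # and the "label" output expects a dict mapping class names to confidences.
    return {"cat": 0.7, "dog": 0.3}

# The "image" input renders a drag-and-drop upload widget in the browser;
# launch() starts a local web server hosting the interface.
gr.Interface(fn=classify_image, inputs="image", outputs="label").launch()
```

Swap the body of classify_image for a call into your trained model and you’ve got a shareable demo.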
Cool Things ✨
- 🚶♂️ Kevin Parry—whose video wizardry you should really be following on Twitter—got to try RADiCAL’s software, which extracts 3D animation data from videos, on his 100 walks video, and the results are fantastic. Also check out the 3D view.
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🌞