Dynamically Typed

Language Interpretability Tool (LIT)

Also presented at EMNLP 2020, the Language Interpretability Tool (LIT) is an open-source platform for visualizing and understanding NLP models. It builds on Google’s earlier What-If Tool and supports “local explanations, including salience maps, attention, and rich visualizations of model predictions, as well as aggregate analysis including metrics, embedding spaces, and flexible slicing.” James Wexler and Ian Tenney introduce the tool in a post on the Google AI Blog, which also includes a few demos.
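
What makes LIT easy to pick up is that models and datasets are plain Python classes, so wiring your own model into the UI mostly comes down to implementing a couple of spec and predict methods. Here’s a minimal sketch based on the project’s README; the toy sentiment model and dataset are hypothetical stand-ins, and exact class or method names may differ between LIT versions:

```python
# Minimal sketch of serving a model in LIT. The toy sentiment model and
# dataset below are hypothetical; API names follow the 2020-era LIT
# README and may differ in later releases.
from absl import app

from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]


class ToyDataset(lit_dataset.Dataset):
  """A few hand-written examples, just enough to populate the UI."""

  def __init__(self):
    self._examples = [
        {"sentence": "A great movie.", "label": "positive"},
        {"sentence": "Terribly boring.", "label": "negative"},
    ]

  def spec(self):
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=LABELS),
    }


class ToyModel(lit_model.Model):
  """A trivial keyword 'classifier' standing in for a real NLP model."""

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

  def predict_minibatch(self, inputs):
    # A real model would run batched inference here; we fake a score.
    outputs = []
    for ex in inputs:
      score = 0.9 if "great" in ex["sentence"] else 0.1
      outputs.append({"probas": [1.0 - score, score]})
    return outputs


def main(_):
  lit_demo = dev_server.Server(
      models={"toy": ToyModel()},
      datasets={"toy_data": ToyDataset()},
      **server_flags.get_flags())
  lit_demo.serve()  # Serves the LIT UI, on localhost:5432 by default.


if __name__ == "__main__":
  app.run(main)
```

Because the frontend only talks to these specs, the same salience, attention, and slicing views come for free once a model conforms to the interface.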