Rigorous Play in Interpretable Machine Learning

An AI, in its least flattering light, is a giant pile of calculus plus whatever bias it was fed by the humans who created its dataset. And as AIs are increasingly enmeshed in our culture, economy, safety, and personal decision making, it's terrifying not to have a handle on how they work. The medicine, we are told, is interpretability.

But what is interpretability, exactly?

Are the inner workings of glass-fronted wristwatches (i.e., transparency), with their intricate gears whirring in plain sight, easier to mentally model than their opaque counterparts? Do diagrams of neural networks or convolutions (i.e., explanations) lead to understanding why a particular AI made a particular decision?

In Finale Doshi-Velez and Been Kim's 2017 paper "Towards A Rigorous Science of Interpretable Machine Learning" [1], they "define interpretability as the ability to explain or to present in understandable terms to a human" and "propose data-driven ways to derive operational definitions and evaluations of explanations, and thus, interpretability." But 'presenting in understandable terms' and 'explanation' sound a lot like experts handing down knowledge to lay humans in the form of textbook diagrams and specialized language, and we have plenty of that already. Words and pictures, even data-derived ones, just aren't enough. As both supplement and alternative, I propose play-driven ways for individuals to kindle their own embodied methods for discovering how AI functions.

But how do we create toys and games which, through undirected play, foster deep embodied understanding of this technology? Minecraft is an example of a toy in an adjacent genre. It enables people to build their engineering intuition via play and experimentation. It's a way to play with and kindle a love of collaboration, math, and problem-solving before some boring class manages to squeeze it all out of you. We don't currently have a class of toys for playing with AI.

Gradient Descent: The Carnival Game

My first attempt at creating this kind of toy was Gradient Descent: The Carnival Game. I wanted to play with the embodied interaction design principles I'd been developing in my first VS OS prototype to enable a direct experience with the internal functioning of an ML black box, and I landed on the idea of carnival games: each game in the carnival would embody a different part of a neural network's data processing. Take gradient descent as an example. It helps us with the hardest part of making a new model: fiddling with the parameters.

For those who haven't heard of gradient descent, it's basically what it sounds like: a ball rolling downhill until it settles in a local valley. Except, since we are talking about a multidimensional algorithm, the overall hill is invisible; the ball can only tell what is downhill from its current position, not the entire path. So, much like an early explorer in an unknown land, finding the valleys (this is called parameter optimization) on the slopes of an AI algorithm's "ground" requires moving step by step down an unmapped surface we cannot see as a whole.
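To make that step-by-step picture concrete, here is a minimal sketch of gradient descent on a toy two-parameter surface. The loss function, starting point, learning rate, and step count are all invented placeholders chosen for illustration, not anything from the actual prototype:

```python
import numpy as np

def loss(params):
    # A toy "hill": a bowl with a ripple, standing in for a real model's loss surface.
    x, y = params
    return x**2 + y**2 + 0.5 * np.sin(3 * x)

def gradient(params, eps=1e-6):
    # Numerically probe which way is downhill from where the ball currently sits.
    grads = np.zeros_like(params)
    for i in range(len(params)):
        nudge = np.zeros_like(params)
        nudge[i] = eps
        grads[i] = (loss(params + nudge) - loss(params - nudge)) / (2 * eps)
    return grads

params = np.array([2.0, -1.5])   # drop the ball somewhere on the invisible surface
learning_rate = 0.1

for _ in range(100):
    params -= learning_rate * gradient(params)  # one small step downhill

print(params, loss(params))     # where the ball came to rest, and how low that point is
```

Each iteration only ever looks at the slope directly under the ball, which is why the process can wander into a nearby valley rather than the lowest point on the whole surface.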

My initial instinct was to use the simplest embodied interaction we have in VR, throwing a ball, to give people a playful illustration of this part of the AI creation process. This game would then be situated in the context of an array of embodied VR carnival games, each exploring some component of the AI’s anatomy.

The above video is a playthrough of the game. The player sidles up to the play area, which has three components: a glowing white box with apparently nothing inside, a poster from which an infinite number of small blue balls can be grabbed using the controller, and a reset button off to the left. With either or both hands the player grabs the balls and throws, drops, tosses, and plunks them into the white box. The invisible curved surface in the box then lights up blue in the spots where it was touched and along the path the ball rolled downhill.

This prototype uses a gravity-esque physics simulation to mock up the ball moving across, and thus revealing, the surface. And yes, this only works because I am artificially constraining the surface to three dimensions, far fewer than even the simplest ML model has. But this is first and foremost a prototype intended to create embodied intuition through play, and the metaphors bodies use to create knowledge are shaped by the environment from which ideas like "down" and "up" are derived.
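For readers who want the mechanics spelled out, here is a rough sketch of what such a mock-up boils down to. This is an illustrative reconstruction in Python, not the actual VR code; the hidden height function, step size, and quantization are placeholders I chose for the sake of the example:

```python
import numpy as np

def height(x, z):
    # The hidden surface inside the white box (placeholder shape for illustration).
    return np.sin(x) * np.cos(z) + 0.1 * (x**2 + z**2)

def roll_ball(x, z, steps=200, step_size=0.05, eps=1e-4):
    """Roll a ball downhill from (x, z) and return every position it touched."""
    touched = [(x, z)]
    for _ in range(steps):
        # All the ball "knows" is the local slope under it, nothing else.
        dx = (height(x + eps, z) - height(x - eps, z)) / (2 * eps)
        dz = (height(x, z + eps) - height(x, z - eps)) / (2 * eps)
        x, z = x - step_size * dx, z - step_size * dz
        touched.append((x, z))
    return touched

# Each throw lights up the surface along the ball's path.
lit_spots = set()
for throw in [(2.0, -1.0), (-1.5, 2.5), (0.5, 0.5)]:
    for x, z in roll_ball(*throw):
        lit_spots.add((round(x, 1), round(z, 1)))  # quantize into glowing "pixels"

print(f"{len(lit_spots)} spots on the invisible surface are now glowing")
```

Enough throws from enough different spots and the shape of the hidden surface starts to emerge, which is the whole point of the game.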

Why this prototype is exactly wrong

While I found this prototype useful for launching my subsequent VR ML interface experiments, I haven't continued developing further carnival games. Not because I am unconvinced by the approach of using play for learning about AI, but because my research has led me to think that "anatomic" approaches like this one, which break a machine learning system down into its sub-components and explore each individually, are not how people currently best develop their intuition. Instead, my explorations of glitch, dataset intimacy, and interface experimentation with algorithmic black boxes (each of which will be the topic of a subsequent post) all seem more fruitful.

1. Finale Doshi-Velez and Been Kim, "Towards A Rigorous Science of Interpretable Machine Learning," 2017. https://arxiv.org/pdf/1702.08608.pdf