Ecology of Worries by Caitlin & Misha

Ecology of Worries asks whether we should teach a machine to worry for us, and is enabled by an archive of actual recorded worries we've been collecting from people since 2016. The video consists of hand-drawn critters. Some critters are driven by synthetic worries generated with the textgenrnn neural network library, trained on the transcribed worries archive. Other characters are driven to worry by a newer machine learning system, Generative Pre-trained Transformer 2 (GPT-2), which commentators dubbed the AI too dangerous to release (it was released anyway). The creatures' performance of the worries spans a gradient of intelligibility, reflecting on the evolution of machine learning systems. The resulting manifestations of the algorithms, with glitches intact, are presented in an audiovisual bestiary.
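For the curious, here is a minimal sketch of what this kind of training can look like, assuming the transcribed worries live in a plain-text file (the filename, epoch count, and temperatures are our hypothetical placeholders, not the project's actual pipeline):

```python
# A minimal sketch, not the project's code: fine-tune textgenrnn
# (Max Woolf's pretrained char-RNN library) on a plain-text archive
# with one transcribed worry per line.
from textgenrnn import textgenrnn

textgen = textgenrnn()

# Hypothetical filename and epoch count.
textgen.train_from_file('worries.txt', num_epochs=20)

# Sample synthetic worries. Lower temperatures yield safer, more
# repetitive text; higher temperatures yield stranger, glitchier output.
textgen.generate(5, temperature=0.5)
textgen.generate(5, temperature=1.0)
```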

By characterizing synthetic worries of varying sophistication as variously evolved creatures, we engage the viewers' empathy. It is one thing to watch a text-generating neural network fail into mode collapse, a state in which the system generates the same unchanging output no matter the input, e.g. a string of one repeating vowel. It is a whole other thing to watch a mode collapse personified by one of our critters: as we watch this creature struggling to get a word out, we can't stop ourselves from feeling that we should help it finish the sentence. The glitched mode-collapse output 'aaa aaaaaaa' becomes a living wail. The critters in Ecology of Worries appear alive not because of any sort of omniscience a tech evangelist might expect from a digital assistant, but because of their very real flaws. The creatures become uncanny through a juxtaposition of familiar and abstract concerns.
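To make the glitch concrete, here is a toy sketch (ours, not the piece's code) of sampling from a collapsed model: nearly all of the probability mass sits on one vowel, so every sample comes out as a variant of the same wail:

```python
# A toy illustration of mode collapse, not the artists' code: a
# character distribution that has collapsed onto 'a', so every
# sample degenerates into roughly the same string.
import random

# Hypothetical collapsed output distribution over the vocabulary.
collapsed_probs = {'a': 0.97, ' ': 0.02, 'e': 0.01}

def sample_collapsed(length=16):
    chars, weights = zip(*collapsed_probs.items())
    return ''.join(random.choices(chars, weights=weights, k=length))

print(sample_collapsed())  # e.g. 'aaa aaaaaaaa aaa'
```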

Training is the aspect of machine learning that engages the most important political dimension of this technology. Silicon Valley fraternity bros scrape photos of their classmates, or grab celebrity faces from the wild, to train their AI! Even well-intentioned people often do the easy thing without anticipating problems downstream. Biased data imbues the machines with the biases of their creators. The wide deployment of AI by US tech companies across much of the world has made the voices of Alexa, Siri, and Google Maps ever more recognizable. Ecology of Worries defamiliarizes the peppy digital-assistant voice by training these creatures to worry over our communal woes (albeit reinterpreted by a machine). If a machine takes up our worry, does that at least mean it has really heard us? If we bare our soul to machines, don't we risk them doing the same to us? The ELIZA effect, observed in the late 1960s, led people to perceive a chatbot as intelligent and worth confessing to. Ecology of Worries flips the dynamic: the machines confess to us, putting us in an awkward, thoughtful, and sometimes hilarious state of mind.