Notes from a children's playlab with DALL-E

September 20th, 2022

by Matteo Loglio

+++

Hey oio readers! It's been a while since we wrote anything here in our research journal - we've all been focused on a few ongoing projects and forgot to post updates from oio land, but hey, we are back. Back at the typing machine because last week we ran a singular activity, one involving children, machine learning algorithms and AI-generated play-doh sculptures - one that absolutely deserved this piece of documentation.

As part of our efforts to make technology accessible and fun, we run workshops and educational activities for students and the general public, often involving creative uses of technology. Our goal is to familiarise participants with these subjects, approaching complexity in a playful way, but most importantly to show how these technologies can become creative tools in our everyday lives.

Last week we were invited to organise a playlab with children in the stunning setting of the Triennale Design Museum.

Triennale Design Museum in Milan

As one of Milan's landmark cultural institutions, the museum offers a series of free educational activities for children, revolving around design and creativity. We were invited by Ludosofici, an organisation focused on bringing artistic and philosophical practices to young people through events in schools and other public places.

image by Ludosofici

Our initial idea was to run a series of activities with DALL-E 2, one of the most popular AI image generators (at the time of writing), in the hope of blowing their minds with what a computer can do these days. As the participants were as young as 6 years old, part of our role was to convert their prompts into machine-readable sentences. If a child said "Pizza princess", we had to turn it into something like "A drawing of a princess made of pizza, children illustration, hand drawn" to make it digestible for the algorithm.
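For the curious, here's roughly what that prompt-wrapping step looks like in code: a minimal sketch using the openai Python package's images API. The template, model name and parameters are illustrative assumptions, not exactly what we ran on the day.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def expand_prompt(child_prompt: str) -> str:
    """Wrap a child's idea in extra context so DALL-E gets a fuller sentence."""
    return f"A drawing of {child_prompt}, children illustration, hand drawn"

# "Pizza princess" becomes a machine-friendly sentence before generation.
response = client.images.generate(
    model="dall-e-2",
    prompt=expand_prompt("a princess made of pizza"),
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # link to the generated image
```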

a rainbow cat

We wanted to organise a Chinese Whispers session using DALL-E 2, following the same gameplay as Drawception: one child starts by telling a secret prompt to the AI, which in turn generates an image. The next child looks at the image and describes it in a sentence. That sentence becomes the prompt for the next image, and so on.
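The whispers loop itself is just a few lines. Here's a rough sketch reusing the hypothetical client and expand_prompt helper from the snippet above, with each child's description becoming the next prompt; the round count and sizes are placeholders.

```python
def whispers_round(description: str) -> str:
    """Generate one image from a child's description and return its URL."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=expand_prompt(description),
        n=1,
        size="512x512",
    )
    return response.data[0].url

prompt = input("First child, whisper your secret prompt: ")
for round_number in range(1, 6):  # e.g. a chain of five children
    print(f"Round {round_number}: show this image -> {whispers_round(prompt)}")
    prompt = input("Next child, describe what you see: ")
```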

another rainbow cat

Turns out, unsurprisingly, we were absolutely wrong about them getting excited by AI-generated images. Children in our group were not really impressed by the generative power of AI. First, the prompts: the little participants kept referencing YouTubers that DALL-E of course didn't know, so we tried to steer them towards fantastic combinations ("A mermaid panda"), which we know work better with DALL-E. Even then, they were not really impressed by the generated images, which were often somewhat distorted and looked like sketchy versions of the highly polished illustrations they normally get from their books and media.

children were not impressed by the AI

we really tried

After a few sessions we realised that the main reason they were not really engaged lay in the activity itself, rather than the tool. It was the lack of physical interaction. They are used to building things with wooden blocks, painting stuff, running and jumping around - you know, things that kids do instead of staring at a projected screen like boring adults.

children were not really impressed by the AI generated images, often looking like sketchy versions of the highly polished content they are used to

so we went into craft mode

So we changed our activity to something more hands-on, making use of the available resources, including a bunch of play-doh. We challenged the little participants to create a series of sculptures of fantastic creatures, then took pictures of the creations and fed them to the algorithm to generate AI variations of their sculptures. This time we kind of nailed it: they spent a lot of time and energy creating the sculptures with their hands, and just a few moments watching the generated images on the screen. They really liked the images this time, as they could relate them to their creations. The AI output felt novel and authentic, rather than random and disconnected.
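This step maps nicely onto DALL-E 2's image variations feature. A minimal sketch, assuming the same hypothetical client as above and a square PNG photo of a sculpture; the filename and parameters are placeholders.

```python
# Ask DALL-E 2 for a few variations of a photographed play-doh sculpture.
with open("playdoh_creature.png", "rb") as photo:  # square PNG, under 4 MB
    response = client.images.create_variation(
        image=photo,
        n=3,
        size="512x512",
    )

for item in response.data:
    print(item.url)  # links to the AI variations
```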

Here are some of the results:

these play-doh creatures do not exist

I don't think there are any groundbreaking insights to take away from this, probably just that children will be children. Don't expect them to grasp the depth of something as complex as artificial intelligence unless you connect it to an activity that feels real and practical to them. If we had to organise it again, we'd probably do something similar with drawings and sketches, possibly adding a challenge to solve to turn it into a game.

Thanks for reading 🌀

m