Maya Man

Seeing yourself in the data

April 30, 2021 / Interview by Alex Westfall | Photography by David Evan McDowell

For Maya Man, there was “not at all a single moment where it clicked” when it came to bringing together her passions for dance and creative code. The convergence just came naturally—and it has led to collaborations with choreography legend Bill T. Jones, motion capture work in Unreal Engine, and countless browser-based, interactive experiences.

Maya is now a Creative Technologist at the Google Creative Lab and continues to work on personal projects that address our intimate relationships with our devices and translate human movement into data in a beautiful, dynamic way. We speak here about telling stories through pose estimation, the politics of performativity, and making work that pushes past dance on screen.

How influential was your childhood to your creative development? 

The realization that I wanted to be an artist came late for me. I grew up in Central Pennsylvania, in suburbia. I was very interested in math and science—I loved the logic of it. It was so satisfying to me to problem-solve that way. For college, I decided I wanted to study physics; that’s what I applied as. During that whole process growing up, I was drawn to dance. I was in a company; I was in the competition dance world—if you’ve ever watched Dance Moms, that world without the drama… in my life at least [laughs].

I gravitated toward dance, naturally—I am always moving, and it was a good outlet for me. A couple of weeks before I went to college, my mom said, Hey, people who like math like computer science and coding. You should try it. I did a two-week coding bootcamp. We built one website, and I was completely hooked. It had the logic and problem-solving that I loved about math and physics, but I could actually output a creation of my own, conceptually make it my own thing.

I went to Pomona College and studied Computer Science and Media Studies. When I started studying CS, everyone around me was interested in becoming a software engineer at a big tech company—to generalize the trajectory most people followed.


I was dancing, excited about art and design, but they felt very separate in my life. Growing up, I always thought that artists were people who were really good at drawing and painting. I didn’t have a holistic sense of what art could be.

Going into my sophomore year of college, I met Lauren McCarthy and Taeyoon Choi at the first p5.js conference—a community of artists and educators who thought about code, art, and design. They really prioritized inclusivity in a tech-oriented space, which was, at the time, very radical.

I always felt like in tech spaces I needed to prove myself—that I was already being underestimated. This conference totally opened my mind to what you could do with a technical background in the realm of art and design. CS was technical and math-based, and Media Studies talked a lot about technology and its effect on society and culture. A lot of my intersectional education was DIY—it came from learning with resources that were free and online.

You have a family history in tech!

My grandmother was an AI researcher. She studied computer science at a time when it was very early, especially for a woman, to study those things. My uncle, Ken Goldberg, is a professor at Berkeley and works a lot at the intersection of robotics and art. Everyone in my family has been very driven in the specific thing they’re interested in, whether AI research or robotics.

Even having those role models growing up, I didn’t fully see myself doing it until I entered that Processing community. What excited me the most about working in this space was that I could envision a project and then bring it into existence myself with my coding background. 

I started looking at the available technologies—what was accessible to me—to explore dance. I was excited when I saw work that combined Processing with the Kinect, where people had things on screen reacting to their movement in space. Discovering PoseNet was exciting—it was this in-browser machine learning model that was able to understand where you were in space, where specific parts of your body were on screen. Whenever I am introduced to a new technology that enables me to make something reactive to my movement on the screen, it’s really exciting.


Your work challenges this conception of using your computer as a very passive experience. Does the physical act of dance and the physical act of programming feel different for you? 

Dance is such a nuanced, human, organic art form. Those are not words I would necessarily use to describe machines or programming. I struggle with the conflict between these things. Programming and dance are part of my own identity and things I like to engage with, but at the same time, sometimes they don’t work together in the way that I always want them to. They feel distinct to me, but that’s what makes it so exciting to try to combine them in a lot of my work. I’m not interested in just putting dance on screen.

I’m interested in what can be done when you introduce code that can’t be done when you watch dance offline. Nothing I make will ever replace the feeling of dancing or going to see dance in physical space. Still, there are many exciting ways to combine them that allow you to perceive movement or think about movement in a way that you just might not be able to without the augmentation of the program you’re building.

What was it like discovering PoseNet for the first time? 

The version that runs on TensorFlow.js is JavaScript-based, so it will run in your browser. Previously, any interaction with motion capture or body-tracking estimation technology was not as easy as this…now, I can click on a link on a website and just have it running on my computer. Before, I needed a Kinect camera, a wholly separate piece of hardware that I had to acquire. With PoseNet, I could try something out in the browser like I would click on any website.
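
To give a sense of how low that barrier is, here is a minimal sketch of in-browser pose estimation with the TensorFlow.js PoseNet model. It’s an illustrative fragment, not code from Maya’s projects—the video element ID and the confidence threshold are assumptions:

```javascript
// Minimal in-browser pose estimation with @tensorflow-models/posenet.
// Assumes an HTML page with a <video id="webcam"> element already
// streaming the camera.
import * as posenet from '@tensorflow-models/posenet';

async function trackPose() {
  const video = document.getElementById('webcam');
  const net = await posenet.load(); // downloads the model weights once

  async function onFrame() {
    // Each estimate returns 17 keypoints (nose, wrists, knees, ...)
    // with on-screen x/y positions and a confidence score.
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    for (const kp of pose.keypoints) {
      if (kp.score > 0.5) {
        console.log(kp.part, Math.round(kp.position.x), Math.round(kp.position.y));
      }
    }
    requestAnimationFrame(onFrame);
  }
  onFrame();
}

trackPose();
```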


From "Body, Movement, Language: AI Sketches with Bill T. Jones"

PoseNet has been trained on thousands and thousands of images of humans, but for the most part, humans doing very pedestrian things. They’re standing or sitting—it hasn’t been trained on dancers with their leg by their head. When dancers would do something outside the scope of normal human day-to-day movement, it would often get confused.

We also had to brace ourselves for it to glitch, not pick up on something, or fail to tell where a person was in space. Working with [choreographer] Bill T. Jones and his dancers, we wanted it to feel like an exploration, not like a final answer to the question of, How would you combine this new technology with dance? I brought the PoseNet Sketchbook to Bill and his team—they reacted to it in a way where they felt frustrated that they couldn’t see the nuance of the dancers’ movement in those points on the screen.

They said, Technically, it’s very cool, but where’s the meaning in this? What is the dancer’s role in this? Bill posed this question at the beginning, and we kept coming back to it: Can these dots on screen make somebody cry? Working in technology, I’m constantly around people who are like, Wow, machine-learning models running in the browser! That was the moment I realized that the point of making something with Bill, this incredible artist, is that his story and what he’s trying to say can come through. We stripped down the experiments that are part of the project with Bill a lot, so they’re very simple—they’ll use a couple of points on the wrists. And we really leaned into using language: in addition to the PoseNet model, Bill was interested in speech and how that can interact in a live performance setting. The dancers were able to say things that would appear on the screen, attach to their bodies somehow, and then they could move the words around.
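
As a rough illustration of that kind of interaction—a word pinned to a dancer’s wrist—here is a hypothetical p5.js/ml5.js fragment. The hard-coded word stands in for text arriving from speech recognition; none of this is the project’s actual code:

```javascript
// Hypothetical sketch: a spoken word follows the dancer's wrist.
// Requires p5.js and ml5.js loaded on the page.
let video, poseNet, pose;
let phrase = 'between'; // stand-in for a word captured from speech

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  poseNet = ml5.poseNet(video); // loads PoseNet under the hood
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (pose && pose.rightWrist.confidence > 0.5) {
    fill(255);
    textSize(32);
    // The text is "attached" to the body: it redraws at the wrist
    // position every frame, so moving the arm moves the word.
    text(phrase, pose.rightWrist.x, pose.rightWrist.y);
  }
}
```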

With any work I do now, I think about how the concept necessitates the technology I use. Am I using the technology in a way that doesn’t actually add meaning—more of a demo of the tech—or in service of a powerful concept?

When you are in front of the camera or AI technology, does it feel different as a dancer, compared to when you’re not aware of the technology?

Definitely, it feels different. Dancing in front of a computer with the intention of your movement showing up on a screen somewhere versus a stage performance—you form this reciprocal relationship with the technology you’re using. The technology responds to you and your own movement, but you are also responding to the technology. With Bill, we were doing experiments where they could say something and then hold the words they said on-screen between their hands.

They’re not just improvising and doing whatever movement they want. They’re really thinking about their hands—how they’re going to hold them in a way that will make the text legible on screen, or make it larger or smaller depending on how they move, according to the program or the intention.

It gets into your head and affects how you move, which I don’t think is a bad thing—especially in a situation like that, where that’s the intention. But it’s hard to completely separate your movement from the reason you’re performing it.

You have a piece in the latest Feral File show called Can I Go Where You Go. How were you thinking about this as an interactive piece?

I was excited about Feral File because it was an exhibition that was going to take place in the browser. I wanted the piece to be interactive and highlight the fact that the code was running in-browser. That’s what’s exciting about software-based work—it’s what distinguishes it. A lot of digital artwork exists as an image or video or a GIF, where the code isn’t actually running, so you can’t interact with it.

What would the process look like of someone interacting with my form on the screen? I was playing around—tethering my moving form to the person’s mouse, to its X and Y coordinates. My digital body moves wherever your mouse is moving on the screen. I was tethering myself to this anonymous person who would be viewing my work in the future. It connected me to them.

I was interested in risograph colors—they’re very vibrant laid out on a page. I made the colors these bright pink and purple hues. I filmed the video generating my form in a friend’s studio in Red Hook. I had to get the most perfect contour ever because I’m using background subtraction, which depends on the pixels’ colors. I bought a black suit that would mostly cover my body to dance in.
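
Under the hood, the two moves she describes—background subtraction and mouse tethering—might look something like this p5.js sketch. The filenames, threshold, and tint are assumptions, and it’s a simplification of the piece, not its source:

```javascript
// Simplified sketch: extract the dancer via background subtraction,
// then tether the extracted figure to the viewer's mouse.
let dance, bg;
const THRESHOLD = 60; // how different a pixel must be to count as "body"

function preload() {
  bg = loadImage('empty-studio.png'); // hypothetical: a frame with no dancer
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  dance = createVideo('dance.mp4'); // hypothetical filename
  dance.loop();
  dance.hide();
  bg.loadPixels();
}

function draw() {
  if (dance.width === 0) return; // video metadata not loaded yet
  // No background() call: past frames accumulate, so the viewer's
  // mouse path "draws" with the dancer's body.
  dance.loadPixels();
  const figure = createImage(dance.width, dance.height);
  figure.loadPixels();
  for (let i = 0; i < dance.pixels.length; i += 4) {
    const diff =
      abs(dance.pixels[i] - bg.pixels[i]) +
      abs(dance.pixels[i + 1] - bg.pixels[i + 1]) +
      abs(dance.pixels[i + 2] - bg.pixels[i + 2]);
    // Keep only pixels that differ from the empty studio, tinted pink.
    figure.pixels[i] = 255;
    figure.pixels[i + 1] = 105;
    figure.pixels[i + 2] = 180;
    figure.pixels[i + 3] = diff > THRESHOLD ? 255 : 0;
  }
  figure.updatePixels();
  image(figure, mouseX - figure.width / 2, mouseY - figure.height / 2);
}
```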


"can I go where you go?" is now on view in Feral File's Social Codes show.

My favorite thing about that piece was, after the launch, seeing other people use it. Each time, it’s different depending on how someone moves their mouse. Some people would move slowly and draw little squiggles. Some people would cover the whole screen. Some people drew hard. Some people drew straight lines. It was interesting to see how different it could feel and look depending on who was interacting with it.

The piece doesn’t really start until someone moves the mouse—in JavaScript, it’s called a user event. I wanted it to feel like me and this person on the other side of the screen were performing the piece together. That connection was important.
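
A minimal version of that gate—waiting on the first user event before anything renders—might look like this (an illustrative fragment, not the piece’s code):

```javascript
// The piece stays still until the viewer produces a user event.
let started = false;

// {once: true} removes the listener after the first movement.
window.addEventListener('mousemove', () => { started = true; }, { once: true });

function draw() {
  if (!started) return; // nothing is performed until the viewer moves
  // ...render the mouse-tethered figure here...
}
```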

You do choreography and movement work for music videos. How do these experiences filter into your larger creative process?

This past year I worked on 777 by Joji, directed by Saad Moosajee. That one was a really wild experience because you don’t see me in the video, but I did the motion capture for all of the characters in it. It was a real challenge for me as a choreographer and a dancer to think, how can I create movement vocabularies for these different types of characters and then convey them solely through the data captured in the motion capture studio? As a performer, I’m very expressive with my face.

For this project, I didn’t have any of that. I had to come across as these different characters depending on who I was trying to perform in each take. I would send videos to Saad of me trying something out to the song. We did two different days in the motion capture studio. The wildest part was one day when I was performing moments where the characters interact in the video.

Behind-the-scenes on Joji's 777 music video.

We used Unreal Engine in the motion capture studio. They would import movement I had already recorded; I would see that on screen—it was myself—and then I’d be in the suit reacting to that character on the screen in real time. It was a really strange experience knowing that this avatar on screen was me. But it was movement that I had generated, and the movement was pretty high fidelity. Compared to other ways I’ve worked, with the Kinect or especially with PoseNet, I could really see myself in the data and recognize my own movement, which doesn’t happen for me with some other motion capture technology.

In the end result, I’m seeing my movement on this muscular white male, or these bodies that don’t look like my body, but the movement looks like mine. It was a mind-blowing process. I thought about what it means to put myself on screen and to represent my own identity and appearance in pixels. That was a far-out experience—seeing myself represented by my own movement, but physically in a completely different way.

Your project Glance Back addresses a presence—your computer or your webcam capturing you in a moment—but there’s also a past to it…users create an archive of intimate moments with their devices. How did this project come to be?

I was thinking about that feeling when someone’s staring at you for a really long time, and you’re not staring back, but you can like feel them staring at you.

It’s awkward. After I started working, I thought, that’s how my computer feels—I’m just staring at it all day, and it’s not staring at me. I need to give it a chance to engage. At least once a day, we need to have a little check-in. I’m really interested in self-documentation. I would ideally love to be really consistent about journaling or taking a photo every day, but I often start those endeavors and fall off.

I’m pretty reliably in my browser on my computer every day, and Glance Back will go off automatically. I was really interested in the difference between the way we represent ourselves on-screen versus how we actually look while we’re looking at our screen—which is, at least for me, dead face, no expression. I was interested in that raw capture of how we usually appear when we’re looking at our screens versus how we tend to aspirationally render ourselves on social media.

Initially, I made it for myself for all those reasons. Once a day, it would take a photo of me and ask me what I was thinking about. Other people would see it on my computer screen and ask me if they could use it, so I released it publicly as a Chrome extension.

I’m interested in those moments of intimacy that we share with our computers but don’t usually share with the rest of the online audience we might have.
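
The core gesture of Glance Back—grab one webcam frame, ask a question, keep both—reduces to a few standard browser APIs. This is a sketch under those assumptions, not the extension’s actual source; a real extension would persist entries (for example with chrome.storage) rather than logging them:

```javascript
// Sketch of Glance Back's core gesture: one candid webcam frame
// plus one thought, captured together.
async function glanceBack() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  // Draw a single frame onto a canvas, then release the camera.
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  stream.getTracks().forEach(track => track.stop());

  const thought = window.prompt('What are you thinking about?');
  const photo = canvas.toDataURL('image/jpeg');

  // Hypothetical persistence: a real extension would store the entry.
  console.log({ date: new Date().toISOString(), thought, photo });
}

glanceBack();
```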

Stills from Glance Back.

The original version of Glance Back was made pre-pandemic? 

Way pre-pandemic. I launched it publicly in January 2019, but I had made it and was using it myself in the fall of 2018. So yeah, it’s funny—when I scroll through my archive of Glance Backs, it really has tracked things nicely. You can track my mental state, too.

A lot of people’s relationship with their computers and their phones is super intimate. In the pandemic, I’m constantly using mine as a portal to something less intimate—work calls, even hanging out with friends on a big Zoom, those types of things. This is not a unique feeling. I find it very draining because what I do on my computer feels internal.

To have to externalize so much via my computer and via these video calls is very tiring—just going into the office, I would always think of it like that. And so there’s this extra layer where the computer is mediating your interaction and you can actually see yourself. I find that really exhausting, because not only are you going through the exercise of performing the social interaction, but you’re also seeing yourself on screen, perceiving yourself. It’s this exhausting mental loop that I’ve found I need to step away from as much as I can afford to, especially this far into this time.

I don’t think performativity is inherently bad—it’s part of just living—but being on Zoom calls all the time exacerbates the situation.

How do you see the relationship between your work as a creative technologist at Google and your personal practice? Do you see them as separate? 

More and more, my personal work is really interested in performance and intimacy, and these things feel very precious to me—things I don’t feel ready to explore in a work context.

Working in an office setting, there are many dynamics in addition to just wanting to do your work. I like to keep my personal practice very separate from my engagement with work, especially at this point, but they definitely inform each other. One of my favorite things about working at the Creative Lab is that it’s not like working on an engineering team on a product, where I’d be with many other engineers. I’ll often be the only technologist on a project. I’ll be working with a writer, a designer, a team lead, and a producer, and they are all great at what they do, but what they do is very different from what I do.

I’m constantly learning from them and also pushing myself to grow technically. So it’s taught me a lot about figuring out how to build things myself with my own chosen technical stack.

That’s really valuable to me. That exercise is something I carry into my personal work a lot: I can think of an idea, and even if I don’t know how to build it, I feel like I have the tools to teach myself how to build it.

What else are you working on at the moment?

I’ve been working on this new project with Heidi Latsky, a choreographer in New York. She called me pretty early into the pandemic and said, “All of these dance companies are just putting up videos of an onstage performance.” She was not really interested in that.

She was thinking, how can I build an experience on screen that is unique to my dancers, my company—not just a YouTube video or Zoom recording? I was in the same place—the idea of translating what was meant to be an onstage or in-person performance onto a computer screen was not that interesting to me.

We both were really excited about the idea of building, from the ground up, a dance piece that was meant to be viewed through the browser. We’ve been working on a piece called Recessed. It’s browser-based—I built a system that allows us to choreograph pop-up windows.

When you click on certain dancers, pop-up windows appear with videos of them moving inside. The videos were filmed at home, in their apartments or local neighborhoods—it’s a very intimate look into wherever the dancers were quarantining, and into the type of movement they were doing at home during this time, rather than in a public performance setting. The entire experience is embedded on the internet. It’s exciting to think about building a dance piece native to that platform.
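
As a guess at the kind of primitive such a system might build on—assumed names, clips, and timings, not the actual Recessed code—each choreographic “cue” could open a window at a set position with a dancer’s video inside:

```javascript
// Hypothetical pop-up choreography primitive. Browsers only allow
// window.open in response to a user gesture, and pop-up blockers may
// still intervene, which is why a click kicks off the sequence.
function openDancerWindow(cue) {
  const features = `width=${cue.w},height=${cue.h},left=${cue.x},top=${cue.y}`;
  const win = window.open('', '_blank', features);
  if (!win) return; // blocked by the browser
  win.document.write(
    `<body style="margin:0">
       <video src="${cue.src}" autoplay loop muted
              style="width:100%;height:100%;object-fit:cover"></video>
     </body>`
  );
}

// Clicking a dancer plays their choreographed sequence of windows.
document.getElementById('dancer-1').addEventListener('click', () => {
  const cues = [ // hypothetical clips and positions
    { src: 'dancer1-kitchen.mp4', x: 80, y: 120, w: 320, h: 240 },
    { src: 'dancer1-street.mp4', x: 480, y: 300, w: 320, h: 240 },
  ];
  cues.forEach((cue, i) => setTimeout(() => openDancerWindow(cue), i * 2000));
});
```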