Transdisciplinary artist Stephanie Dinkins explores how artificial intelligence might be trained through care and community stories, challenges the biases built into AI systems, and advocates for shaping a more equitable technological future with inclusive data.
Transdisciplinary artist Stephanie Dinkins challenges us to rethink what we feed our machines—and asks what AI might become if it were trained on care.
Twelve years ago Stephanie Dinkins traveled to Vermont to meet a robot. Bina48, a humanoid bust with dark skin, was designed to hold conversations about memory, identity and consciousness. Dinkins, a photographer by training, wanted to understand how a Black woman had become the model for one of the world’s most advanced social robots—and whether she could befriend it.
What she found during that encounter launched a decade of work that has made Dinkins one of the most influential artists exploring artificial intelligence.
Dinkins grew up in Tottenville’s enclave of Black families at the southern tip of Staten Island. Her grandmother tended a flower garden with such care that even reluctant neighbors came to admire it and then stayed to talk. Dinkins has described this as her first lesson in art as social practice—using beauty to build community.
Today she asks a simple but revolutionary question: What might our machines become if they were trained on that same level of care and human experience? She challenges the ways that AI is often used, showing that data can be intimate, culturally rooted and deeply alive. Through public-facing art installations at places such as the Smithsonian Arts + Industries Building in Washington, D.C., and the Queens Museum in New York City, she encourages people to reflect on technology, power and responsibility.
You’ve described your first meeting with the Bina48 robot as a turning point in your career. What were you expecting to find, and what actually happened?
I thought that if I could befriend the robot, it would let me in on where it thought it fit between humans and technology. But as I spoke to Bina48, it became apparent that some of her answers felt flat next to her representative self. If I asked her about race, she didn’t have the deepest or most nuanced answers for a Black woman figure, and that scared me. If these people with really good intentions are producing something that seems flat, then what happens when people aren’t even concerned with these questions?
How did that realization shape your work?
It shaped everything. Here in New York [City], I lived in a neighborhood that was predominantly Black and brown. I was wondering if we knew what was coming, if people were thinking about what these systems would do in their world. At the time, ProPublica published an article on judges and AI-driven sentencing, on how courts would use sentencing software to determine how long someone would stay in jail. And that was built on biased data, the historical data of a historically biased system, the judicial system, which I equate to a “Black tax.” We have to figure out ways to contend with this because you’re automatically getting more time just by being Black, now, because a machine said so.
I made a project called Not the Only One, which is based on my family. It started as a memoir, really trying to pass down the knowledge from my grandmother so that people two or more generations removed from her would still have some touchpoints of her ethos. It’s an oral history project in which we recorded interviews with three women in my family, and then I was forced to find foundational data to support it. It was hard to find base data that didn’t feel violent, or that felt loving enough to put my family on top of.
How did you define violence in a dataset, and how did you solve for it?
When I think about violence in data, I think, really, about a linguistic violence, a kind of labeling or stereotyping that happens in our popular media. If we’re thinking about a dataset based on movies, the roles Black people could play in films were limited: servitude; the friend, always the supportive friend but not the protagonist; relegation to a background character instead of one who is the star of one’s own life. I think not being able to inhabit those roles is a sort of violence. So the challenge became to build a base set of language that I felt would actually buoy my family and not pull it down.
I finally wound up trying to make my own dataset. Not the Only One was based on a dataset of 40,000 lines of extra data beyond the oral histories, which is very small, so the piece is very wonky. It sometimes answers correctly, and sometimes it speaks in complete non sequiturs. I prefer that to just sitting my family’s history atop historic cruelty.
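Her point about scale maps onto a familiar trade-off: a model grounded in a tiny, hand-built corpus can respond aptly only when a question lands near something it has already seen. As a rough illustration, not a reconstruction of her system, the following Python sketch uses an invented three-line corpus and a naive word-overlap rule to show why small data yields both on-target replies and non sequiturs:

```python
import random
import string

# Toy illustration only: these corpus lines and this naive matching rule are
# invented for the example, not the actual data or model behind Not the Only One.
CORPUS = [
    "My grandmother said a garden is how you greet your neighbors.",
    "We kept every story, even the hard ones, because they were ours.",
    "Care is a kind of data too.",
]

def words(text: str) -> set:
    """Lowercase text and return its punctuation-free words as a set."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def reply(question: str) -> str:
    """Return the corpus line that shares the most words with the question."""
    best = max(CORPUS, key=lambda line: len(words(question) & words(line)))
    if words(question) & words(best):
        return best
    # With so little data, many questions overlap nothing: a non sequitur.
    return random.choice(CORPUS)

print(reply("What did your grandmother say about the garden?"))  # apt answer
print(reply("Explain quantum computing."))                       # non sequitur
```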
How did that project shape the next projects that you did?
That made me think about the value of small, community-minded data. We as humans have always told stories to orient ourselves, to tell ourselves what the values are. So what would happen if we gave—and really, I think about gifting—the AI world some of that information so it knows us better from the inside out? I created an app called The Stories We Tell Our Machines to let people do exactly that.
That’s my quest at the moment: convincing people that that’s a good idea, because what we hear out in the world is, “No, they’re taking our data. We’re being exploited,” which we are. But we also know that if we do not nurture these systems to know us better, they are likely using definitions that did not come from the communities being defined. The quest is truly: What would it look like if the data used mimicked the global population?
The next step is to take that data and start to make a dataset that can be widely distributed to help fine-tune or train other systems. I’m starting to talk to computer scientists about how we can do this in a way that does not denature the stories but makes them widely usable.
Can you give an example of how AI could offer opportunities to people who have historically been underprivileged?
I’m waiting for an underprivileged kid with not a lot of money to produce some spectacular film using a computer and AI tools that competes with a Hollywood movie. I think that’s possible.