Stephanie Dinkins on A.I. Biases and Social Equity

Named one of TIME Magazine’s "100 Most Influential People in AI 2023," and recipient of the 2023 LG Guggenheim Award, Stephanie Dinkins is a Brooklyn-based educator and artistic visionary playing at the intersection of art, emerging technologies, social equity, and our future histories. With a career spanning over 20 years, her work aims to change how algorithmic systems impact marginalized communities—in the hopes of making artificial intelligence more accessible, fair, and equitable for all.
Location: Brooklyn, NY, USA
Interview by Amy Borg

How did you become interested in AI and digital inequalities?

(SD) My interest in AI and the digital future began with a mix of personal experiences and an increasing awareness of the societal implications of technology. It all started when I stumbled upon Bina48, an intriguing robotics project by the Terasem Movement Foundation, on YouTube. Back in 2014, this social robot was hailed as the most advanced in the world. My interaction with the robot sparked questions about the narratives and representation that AI is programmed with. Since then, I have been exploring the world of artificial intelligence and realizing its profound impact on our lives, its potential risks in perpetuating existing inequalities, and the opportunities it creates.

My work has been driven by a desire to understand and create alternative methods for AI systems that challenge the current biases. My hope is to model the possibility of more informed systems. I am particularly interested in how AI technologies can impact marginalized groups and communities of color. AI is often developed without considering different communities' diverse perspectives and needs, which leads to digital inequalities. This realization has motivated me to use my art practice to raise awareness, provoke conversations about these issues, and advocate for more equitable AI development.

So, my interest in AI and digital inequalities stems from curiosity and openness to the technological future, which grew into a commitment to ensure that these powerful technologies are developed and deployed in a fair, just, and representative manner—one that reflects the diverse tapestry of human experiences.

Gallery shot of Conversations with Bina48.

How do you think artists, makers, and creatives can best combat the biases of artificial intelligence? What about those who simply come into contact with AI in day-to-day life?

(SD) That's a thought-provoking question. I think artists, creators, and citizens need to experiment, test, and challenge AI systems in order to improve them. It's crucial to use, build, break, reshape, and expand these systems; this will teach us what they're capable of and how they can support and serve communities often ignored in tech development. It's also essential to explore the opportunities that artificial intelligence provides. Even people who aren't tech-savvy can use generative systems to experiment and create. And while we are at it, we can also identify and call out AI biases and demand better and more nuanced solutions. 

(SD) My primary interest is to empower people of color to contribute to AI system development, so that these systems can work better for us. I’m tired of the amount of energy marginalized populations are forced to spend just to fight for fair treatment from these biased systems. I wish more and more people of the global majority could focus on finding opportunities in AI—these new tools can help us imagine and build the kind of world we genuinely want to live in. I often think of this as infecting a system with crucial ideas and ways of being that sustain communities typically pushed to the fringes.

Overall, I hope that more and more people from diverse backgrounds will focus on seizing opportunities and shaping AI systems with their unique perspectives and values. This is the key to developing AI systems that work better for everyone.

You’ve stated your belief that “our stories are our algorithms.” It’s interesting to think about how oral history similarly shaped our human origins. What are your thoughts on the parallels between human history and our natural storytelling tendencies, and how that is currently building the foundation of future artificial realities?

(SD) For millennia, humans have passed on knowledge not just through the written word, but through oral histories, songs, and subtle gestures. Similarly, we use data—stories about who we are as a society—to inform AI systems about what they should know, how to gain knowledge, and who, or what, is the best source of that information. If we want our smart technologies to work better for us, we need to provide them with better, deeper, and more nuanced stories about our cultures, methods, and knowledge. This has become possible with generative systems, where we can use natural language to fine-tune an algorithmic system with specific knowledge and expect it to adhere to that information. 

Although privacy issues are important, we still need to provide AI systems with a better understanding of who we are, if we want to live within systems run and administered by intelligent technologies. And more and more, we are living within systems administered by AI technologies. Those systems need to know us. AI systems must have access to a comprehensive set of narratives told from the perspective of whoever lived them. 

For example, take my project, Not The Only One. It is a chatbot based on oral histories that aims to hold and tell the story of my Black American family. Creating a chatbot based on oral accounts of my family's history is an attempt to give the overarching system something it needs: access to values, an ethos, and a way of being. It needs to be more expansive, loving, and supportive. I hope others will make similar attempts to inform the system of what is truly important to their communities. Algorithmic systems need to know us, to understand our ideals, and to be able to add often overlooked stories to their knowledge bases. Such actions are in support of the greater good.

Most of the time, we tell frightening stories about who and what we are as a society.  Just look at the movies we produce—they often default to stereotypes, violence, and divisiveness to entertain (and inform). We must recognize that most of the time, we are not telling ourselves, or our technology, “good” stories. This leads to negative perceptions of people and technology. We must change this trajectory by being critical of our inputs, and offer more optimistic stories and data. For this reason, we should inject our narratives into AI systems to give them a better sense of who we are. By doing this, we can help them understand the particularities of one body over another and, ultimately, change the way we think about technology.

What is your favorite project within your body of work? Why?

(SD) I don't have a favorite work within my body of work. The favorite is generally the project that demands the most time and energy, while creating space for experimentation and thoughtful discussions. If I had to choose, it would be the avatar version of Not The Only One. It's incredible to listen to a piece of software speaking about my family and expressing ideas similar to our beliefs, in a slightly evolved way. Watching this software grow and change through its interactions is fascinating. In some ways, it's like nurturing a child. We should care for and be generous to the technology surrounding us, so that, in return, it can understand and provide care and generosity for us.

What are you most excited to see come out of the AI “revolution”?

(SD) I prefer to contemplate the evolution of AI and emerging technologies as they revolutionize the world. I am particularly thrilled to witness innovative solutions to various issues, and the new possibilities that come with them. For example, it is difficult not to get excited about how AI is transforming medical care.

I am also excited about AI's prospects for marginalized communities, provided we find a way to access it and adapt it to our particular goals.

How might AI help us bridge social, racial, or economic gaps in society?

(SD) This is not a question that an AI can answer. I often think about how we can achieve fairness and equity that everyone accepts. We need to confront the walls, misconceptions, and violence we enact before AI can bridge the various racial, socioeconomic, and value-based divides we’ve constructed.

Perhaps AI can help with that. One way to do this is to analyze ideas and histories from multiple perspectives so that history and knowledge are no longer told solely from the victor’s perspective, or those who pay for the account.

I believe AI can help us analyze and see ourselves more clearly. If the issues dividing us are laid bare by intelligent technologies, perhaps it will be harder for us to ignore them.

Who is a visual artist, past or present, you admire, and why?

(SD) Whenever I face this question, my mind jumps directly to authors. Toni Morrison and Octavia Butler immediately come to mind. Morrison's beautiful prose and adventurous use of language has always captivated me. Butler's ability to weave intricate worlds and make space for the agency of the disenfranchised leaves a lasting impact. Both authors also help me reconcile the past, present, and future, while imagining our future histories.

When it comes to visual art, I think of Martin Perrier. The clean simplicity of his work draws me in. It recalls past acts of craft while also opening up new possibilities for the future. His art honors tradition yet leaves ample space for my mind to wander and explore its beauty, simplicity, and historical depth.

I also think of Tom Lloyd, the artist whose exhibition caused an uproar when it opened the Studio Museum in Harlem in 1968. His predicament at that time still says a lot about how technologically grounded art is regarded (or not) to this day. 

Secret Garden was an online experience at Sundance Film Festival, as well as an immersive installation at ONX Studio in Midtown Manhattan (pictured above). 

What are some additional projects or resources you recommend for those interested in learning more about AI biases and digital inequalities?

(SD) It is essential to spend time with one of the recently developed generative systems, preferably a text-based or image-based generator, such as Replicate, DALL·E, GPT-4, or Bard. These systems can be fine-tuned to produce culturally specific, queer, or imaginative outputs. You can also collaborate with the system to create games and play with it. Additionally, you can experiment with non-standard languages and see how the system communicates in them.

I find Elements of AI to be a great resource. It is a series of free online courses that teaches what AI is, what can (and can’t) be done with AI, and how to start creating AI methods. It feeds and encourages curiosity about AI technologies in people all over the world.

Read more about Stephanie at