Bjørn Karmann: Empowering User Agency Through Tech
Danish experience and interaction designer Bjørn Karmann is always seeking out ways to humanize tech while empowering its users. His body of work is characterized by an insatiable curiosity about our interactions with the machine-oriented world and a reimagining of “human-centric” technology. The result is a series of thought-provoking explorations that challenge the status quo set by mainstream tech, sometimes packaged in an odd design (like an AI camera inspired by a mole’s snout).
How did you become interested in interaction design?
(BK) I’ve always been fascinated by the convergence of design, art, and technology. From an early age, I found myself building drawing machines, experimenting with generative art, and crafting unusual inventions. Meanwhile, I was immersing myself in classical design theory and human-centric design, which at the time seemed separate. It was around this time I came upon “A Touch of Code,” which featured student projects from the Copenhagen Institute of Interaction Design (CIID). I was captivated by these projects, because they showcased a seamless integration of the two spaces I was exploring.
This led me to CIID, where I had the opportunity to dive into interaction design in a tangible and social way. To me, interaction design is the point where our human experiences meet the increasingly machine-oriented world around us. It’s about humanizing technology: making it more intuitive, bridging the physical with the digital, and making technology feel more like a natural extension of our human experience. But, too often, it is the other way around…
How do you find opportunities to design something new? Are there traditional products or scenarios in daily life that you feel could be radically transformed through artificial intelligence?
(BK) My approach to design is somewhat intuitive — it often begins with a sense of discomfort or dissonance with the current state of emerging technologies. Some of my projects, like The Objectifier, Project Alias, and Paragraphica, were born from a palpable sense that certain new or evolving technologies were misaligned with our collective well-being — creating futures that we might not desire, if given a choice. Technologies that are sold to us as “smart” don't seem so to me when they are designed to keep us engaged in a one-dimensional way — which often leads to passive consumption rather than active creation. Therefore, my design philosophy centers on reimagining these technologies in a way that prioritizes human-centric values.
With AI, I see a chance to flip the power dynamic. To make machines that don't just demand we speak their language, but actually understand what we're saying. Imagine a world where my grandma could whip up a program without breaking a sweat, just by saying what's on her mind. That's the kind of future I want to design for — a future where my projects aren't just gadgets, but conversation pieces that spark debate and propose a new path forward. Project Alias is a good example of that as, ideally, we shouldn't even need a tool like Alias. It's a workaround—a patch over a bigger problem. By existing, it shines a spotlight on the issues of privacy and user agency in mainstream tech.
Why did you create Paragraphica?
(BK) Paragraphica was born out of a mix of concern and curiosity. We're in an era where AI-manipulated photos are everywhere, creating unrealistic representations of what's really in front of us. As smartphones become our go-to cameras, they lack the physical hardware that traditional cameras have. So companies are turning to software — to algorithms and AI — to compensate. But this fix comes at a cost — it's getting harder to tell what's real from what's been digitally perfected. Essentially, photos are no longer a stamp of authenticity.
As for my curiosity — I've always been fascinated by how different forms of intelligence perceive the world. Just like animals that navigate by echolocation or through tactile senses, I’ve been thinking about how AI may perceive our reality. Systems like ChatGPT or Midjourney that are trained on data from the real world must have a unique perspective, right?
I became intrigued by the idea of a camera that not only captures images, but does so through the interpretive "vision" of AI. It's about juxtaposing the artificial perfection we're used to with a new, AI-infused way of seeing—offering a glimpse into how these systems might understand and portray the world around us. It's both a tool and a question, wrapped up in a device that looks oddly similar to a camera.
What is the most unconventional or unexpected place you’ve found inspiration for a creative project or idea?
(BK) I often find inspiration within the strange intelligences living among us. There are so many parallels and hidden stories within our natural surroundings that are full of potential for influencing emerging technologies and interactive experiences.
With Occlusion Grotesque, you inverted the typical human-as-designer role. What did this project teach you about human–nonhuman collaboration?
(BK) Occlusion Grotesque has been a fun one! With every visit to my parents' forest, I'm reminded of the unique partnership between my design intent and nature's will through the evolving typography on that tree. It's become more than just a tree; it's a living part of my portfolio, an organic co-creator of sorts. And while the tree thrives, the project has become a real-life study of the dynamics between human and nonhuman actors. Sure, I like to romanticize it as a symbiotic relationship where both parties benefit, but I won't shy away from the fact that it could also be seen as parasitic—after all, the tree didn't sign up for this. This interplay has been an intriguing philosophical question, but practically, it's taught me patience and to design with a broader life-centric perspective. It’s important to consider not just immediate outcomes, but long-term evolution and interaction with the environment. It's a humbling reminder that as designers, we're always initiating dialogues with nature in one way or another.
And with Deep Grotesque, you examined how AI could learn the stages of tree growth and predict future design developments. What kind of potential do you foresee this kind of artificial knowledge having?
(BK) This way of listening and simulating nature can be extremely useful if we want to understand how ecosystems might respond to different scenarios. By simulating the patterns of natural growth, we could predict and prepare for changes, helping us make more informed decisions about conservation and sustainability.
In a similar vein, we're already getting better at figuring out the signals and noises other creatures make, and one day, we may even understand what they are saying. AI is excellent at spotting and predicting patterns, so it may be our ticket to “communicating” with the animal kingdom someday.
Trees and plants are a different story, though. They don't make noises or run away from danger the way animals do, so it's not as clear when they're “talking" or what they're trying to say. But who knows? If AI can someday crack the code on how trees and plants do their thing, it could change how we think about nature — and how we design with, or for, it.
What are you most excited to see come out of the AI “revolution”?
(BK) I’m hoping to see a side of AI that empowers animals and the natural world, so that maybe they’ll gain more representation and importance in governments around the world.