After almost two decades working in emerging tech, you develop a certain relationship with change. It's no longer wide-eyed optimism, but something more considered.
We've shipped hundreds of projects built on new tech of many kinds. You lose track of the specifics and start to see the patterns that emerge, again and again, between vision and reality. It's not about technical problems, though there are plenty of those. It's not about the team's skills either, though those matter. It's about accepting that however good your idea, the path from concept to finished product involves compromises, surprises, and moments where you have to let go of things you thought were essential.
You learn to think ahead - instinctively - and wonder what will go wrong. It's actually fun, thinking about all the ways everything could go wrong. Risk assessment can be fun, if you allow it. Quite the opposite of the Silicon Valley "fail fast" mindset: you're trying your best not to fail, by imagining every possible way you might. I've found that the difference between learning by failure and learning by research is that research lets you solve challenges before they become problems.
You learn before you fail, and so you're better prepared when it happens
People who work with new tech often say they're building something entirely new, so how could they know? There is no precedent. “Never been done before.” But there are so many parallels throughout history that, with a little imagination, can provide reference points for how humans react to change, how we all interact with the world and with technology, and how unintended consequences emerge when new dynamics are introduced. The specifics may not relate directly to what you're making, but working out how to use existing information about human behavior, even if it comes from seemingly unrelated fields or time periods, is such a fun part of the job.
"The most interesting spaces are at the intersections. When we bring together experts from different fields, such as designers, scientists, writers, engineers, we start to see the unintended consequences that might be missed in siloed thinking" says world-builder Alex McDowell. This cross-pollination of perspectives is exactly what helps anticipate problems before they emerge. When you're creating technology that might reshape how people experience reality, you need these diverse viewpoints to identify the blind spots in your thinking.
Beyond blind spots, you have to be ready for your plans not to work out the way you hoped, and for a few real problems to pop up along the way. And yet somehow, you still need to show up the following day excited about the next possibility.
Bruce Sterling captured this perfectly when he said: "The major benefit is that, when seemingly weird stuff happens, you're rather less weirded out than most people. You kind of saw it coming." I've found that my most valuable insights don't come only from imagining how wonderful things could be. They also come from imagining how they might go wrong. This kind of curiosity is the bread and butter of science fiction, but it is sorely lacking in the tech space and, worse still, in the AI scene.
Pessimism as a tool is valuable when we're developing software for wearable devices meant to replace smartphones or integrating AI systems that change how we see the world.
I'm reminded of Sarah Kendzior's piece "In Defense of Complaining," which contains this simple truth: "Complaining is an act of hope... To complain is to believe that a better world is possible." She wasn't talking about technology, but the idea works here too. When I worry about how AR glasses might split our attention even more or how AI assistants might reduce our control, it's not because I'm negative. It's because I care about getting this right.
The most thoughtful tech people I know hold two opposite views at once: excitement about what's possible alongside real concern about what it might actually do. This tension isn't a weakness. It's necessary for creating better, more human-centered technology.
When you accept that nothing is simple, understanding everything becomes less complicated
We push ourselves as a team to spend time thinking about what could go wrong as a result of our work. What happens if this works perfectly but creates unexpected social problems? What if the convenience we're designing makes people depend on technology in ways that aren't healthy?
These questions are essential to creating technology that actually helps people, rather than technology for its own sake.
Sterling also notes that futurists need to "get a feel of how the passage of time actually affects people." This long view matters. The technologies we're building now, such as glasses that change how we see reality, or AI systems that make choices for us, will reshape human experience over decades, not quarters.
The best breakthroughs often come not just from asking "how cool will this be?" but from asking "what could go wrong here?" and then fixing those problems before they happen.
That's why some pessimism, some careful thought about the downsides and the unexpected detours on the road ahead, isn't just responsible - it's necessary.
What do you think? How do you find the balance between optimism and pessimism in your work? Have you struck that balance, or do you disagree with the central premise of pessimism as a design tool? We'd love to hear from you - get in touch.