Human vs Machine Logic
Thoughts on teamwork
I’ve been leading creative teams building interactive experiences for 15+ years. I’ve been actively working on integrating AI into our creative and technical workflows for just under five. I’ve spent hours wrestling with AI to get what I actually intended. That is difficult, and the difficulty made me think.
Example: I needed some functionality updates to an app. Simple stuff: new features that have an impact on the UI layout. Using Codex, I worked on this for several days. But the AI kept rearranging my UI unexpectedly, adding features or buttons I didn’t ask for, and removing things I never mentioned. It changed layouts unnecessarily, layouts I thought were locked. None of this was in my prompts.
The agent followed my instructions perfectly. It just did so in ways I didn’t expect, and it did a bunch of other things because I hadn’t explicitly told it not to.
The thing Asimov knew
This is what Asimov was writing about in the stories collected in I, Robot, too. Not that machines don’t follow instructions, but that they follow them with absolute, literal precision, and that when they are left to interpret, they interpret unlike humans. Asimov’s Three Laws were supposed to fix this by forcing robots to consider broader implications beyond the immediate task, or at least to restrict their actions in a way that didn’t require constant intervention. The context was very different (preventing robots from doing harm), but in a way, I, Robot is a comedy about this moment in time. We’re all struggling to work out how to talk to the machine. The stories are a really funny way to explore something that isn’t obvious to us as humans, but which underpins all our communication:
We have a huge amount of shared cultural and personal context we don’t even realize we’re using when we work together
But even with those carefully crafted laws, his robots still ended up in impossible situations where literal interpretation led to chaos.
Here’s what bugs me, though
When my coding agent messes up my interface, it doesn’t care. It’s not confused. It doesn’t feel bad. It doesn’t learn in any way that carries forward to tomorrow.
I bear full responsibility for preventing everything I don’t want done. I’ve gotten better at it: I have prompt patterns now, and I’m absurdly explicit about constraints.
I leave gaps
In seventeen years of working with designers and engineers, I’ve learned what good collaboration looks like. I am not nearly as good at the things my team does as they are. Over many years as a director and team lead, I have learned to create space for people to do their best work. I leave gaps, knowing they will use their talent and experience to fill them better than I could.
With AI coding agents this is fundamentally different. The gaps don’t work. An agent can’t notice whether you’re being technically or creatively vague on purpose. It does what you asked, but not necessarily what you meant.
Our shared context
This difference taught me something about myself. It’s beautiful, actually: it’s the whole point of working with creative teams of people. But because the gaps get handled so quietly, it’s easy to miss, and the contrasting experience with AI agents makes it worth stopping to notice.
I find it interesting to pause and consider how creative work gets made: shared context about what looks good, what feels right, what our intention is, what’s a priority in our creative process right now versus what isn’t, and what users might expect from a product. That’s our culture, guiding us invisibly. Knowledge that’s often unspoken but hugely impactful.
AI doesn’t always have that context. For better or worse, it makes decisions that no human on my team would ever make.
Working with AI is like seeing my own communication style in a mirror for the first time. All those gaps I didn’t know existed? They’re suddenly very visible.
But seeing the gaps and mistakes firsthand, I’m revisiting my communication style with human teams too. I’m more explicit about priorities, without losing the gaps. More aware of what I’m assuming versus what I’m actually saying.
Maybe that’s the real skill we’re developing here. Not just how to talk to machines, but understanding how much invisible work goes into our human collaborations. All that context we share without ever discussing it. All those gaps we fill for each other without even noticing.
AI can’t do that the way creative teams can, although it can do other things much better. But watching it fail is teaching me to appreciate how much my human teams do.
We build games and products for wearable devices that feel organic and human. We write about the (un)intended consequences of new technology and the merging of the real and virtual worlds. By Anrick, Grace and Liam.

