9 August, 2025
There’s a story I love to tell when I’m talking about enabling autonomy in teams. It was the first time I remember consciously letting the team plot their own course without either abdicating my role as the lead, or trying to “Jedi mind trick” them into thinking they had plotted their own course.
It was pretty early on in my management career, possibly the first major project I had been involved in from the start. It was time for the team to sit down, look at the problem, and start to formulate a solution. I was terrified, and had spent the previous week doing nothing but going over the context, the current implementation, and various bits of tech debt and bugs: effectively holding a dress rehearsal of the whole planning session ahead of time, by myself.
Why was I terrified? Despite having been at the company for five years before moving to management, this particular team worked in a domain that I had very little knowledge about. How was I supposed to lead them unless I knew as much as they did about everything? I didn’t want to let them down by being clueless.
I knew exactly how this project needed to go, exactly how it needed to be broken down. I even had a good idea who should work on what, based on skills, experience, upcoming holidays, even what kind of growth the team members had on their career maps.
All I needed to do was present it to the team. I was prepped, they would feel properly looked after, it would be great.
But as I walked into the meeting room to get the projector set up ahead of the start, something buzzed in the back of my brain: something my predecessor had said. Our job isn’t to stop them driving off the cliff; rather, it’s to be there to roll up our sleeves, help them pick up the pieces, and figure out what went wrong. There was a nagging feeling that despite all my prep, this was going to be a disaster.
Still, I had my documents all ready to present. Work breakdowns, maps of the code, Gantt charts, the full thing. I couldn’t just abandon that, could I?
People started to file in. We had one remote engineer dialling in, so we made sure they could see everything (we had a dedicated in-room buddy for every remote team member, so they joined on an iPad that their buddy could move around to follow what was going on), and so we began.
I pulled up the brief document which outlined the problem we were trying to solve, the constraints, how we were going to measure success, who the stakeholders were. All the starting points for the planning I had done.
I read through it, let them ask some questions, and was all ready to skip to the next tab: the one I had lovingly called “The Plan”.
And I paused. This was where the disaster would start. My gut told me to ask a question.
“So.” I turned to the room rather than talking to the projector. “Where should we start?”
There was a brief pause before our remote engineer spoke up. “Well, we obviously need to chat with our contacts management team: this is going to bump into a bunch of code they manage.”
I breathed a sigh of relief. This was exactly where my plan started, so this might work. They were going to reinvent the plan I had for them. I wouldn’t need to be a dictator; the Jedi mind trick had worked.
“Hang on”, another voice jumped in. “No, we don’t. The problem we’re actually solving has nothing to do with contacts. That’s just in the success metrics. I can see why it’s there: it’s the easiest thing for us to measure. But it’s not needed to solve the actual business problem”.
I stopped. Wasn’t it? I skimmed the problem statement again. No, we could bypass the contacts altogether. I had completely missed that, as had our stakeholder. We had veered off course. I fought the panic for a moment.
“Okay,” I started, terrified that I’d fucked up but also genuinely curious. “Say we bypass touching contacts. How do we measure the impact?”
More silence. Then: “Those are just a proxy for usage of this new feature. We could measure it directly if we added some telemetry here and here. Hang on, let me show you.”
The projector switched to another laptop screen and up came some code I had seen but not fully understood. “Look”.
The next ten minutes saw the team fully engage on this new idea. Code was pulled up, a quick diagram was sketched on our remote whiteboard, and suddenly we were starting to form a plan. I kept on asking insightful questions (only insightful because I was genuinely curious why things were different from my plan, but they didn’t know that), and the conversation flowed for a further hour.
Some of my plan (just over half of it) ended up being reinvented, but what we landed on in the end added up to about 60% of the effort I had originally projected, and had informally budgeted for when managing stakeholder expectations.
We had just manufactured four weeks of time. And all because I had the sense to keep my damn mouth shut.
See, what I realised later was that the mistake wasn’t doing the planning; the planning had been essential to making me the best possible coach in the moment. Having an idea of how to solve the problem, but not sharing it, gave me something concrete to compare against. I could ask helpful questions, not just dumb manager ones. And the shock of having a blind spot revealed to me so early helped me avoid poisoning the well by trying to steer them back to my plan.
I had context, useful knowledge, curiosity, and a genuine incentive to defer to their superior understanding of the existing implementations.
And by trusting that, the team now had a plan that they owned, that they felt genuinely invested in, that they understood and could adapt to changes, because it was their plan. Oh, and we had also managed to buy a month of refactoring at the end of the project.
It was at that point I resolved to hold back my own ideas until as late as possible in any conversation. I still fall into the trap of sharing too early, but holding back is a powerful technique when managing a team that has been deep in the code for long enough, and is more in need of guidance on process or business context.
In short, do your homework so you can ask good questions, rather than give good answers. Ask the questions. And then shut the fuck up.
21 July, 2025
One of the things I love about the span of time is stuff like the timeline of tool usage by humans. Homo sapiens evolved around 300,000 years ago, but there’s evidence of hominid tool usage dating back over 2 million years.
That, it seems fair to say, is a long time.
It’s also fair to say that we Homo sapiens know how to master our tools. We literally evolved alongside them, and have never, as a species, known a tool-free world.
Which makes it so surprising to me that we still seem so skeptical of new tools when they come along. The synthesiser was seen as the death of music, because why would anyone want to learn the cello when you can just press a key to make the perfect sound every time? The keyboard was seen as the death of handwriting, since why bother learning how to write? The calculator the death of arithmetic, the camera the death of painting, the bicycle the death of walking.
Even writing (writing!) was seen by some ancient Greek thinkers as the death of memory.
And yet the human capacity for integrating new tools into our (literal) toolbox remains undefeated. Rather than limiting human creativity and capability, in every instance the tools have been additive. The trick is to see the tool not in terms of what it replaces, but in terms of what it enables. Photography captured fidelity in a way that created a brand new branch of art using the camera, while also freeing painting from the need for realism. Electronic instruments allowed new, previously impossible speeds and accuracy, while also freeing traditional musicians to explore new areas of creativity inspired by their digital bandmates.
And yes, this is another post about AI. A reaction, this time, to the idea that the goal of AI is to somehow make everything effortless, and that by seeking to abolish effort, we somehow risk losing something essential about ourselves.
The idea goes that the hard work is the thing that makes the work itself capable of greatness. Remove the hard work, and the result will be bland. Unearned. Unoriginal. It will miss that human something. Further, our grit and our determination will atrophy, and we will find ourselves unable to create any more, subject only to the slop that AI can produce for us.
I argue that not only does our history with tools suggest this is nonsense, but also that it misses the point. Hard work is not the only signifier of endeavour. As a counter to this, consider the state of flow: that place where we find that we are tackling tasks with ease, effortlessly, our skills and our whims aligned to create what we want.
Is flow effortless? Effortlessness is one of its defining characteristics! Is it somehow bland and unearned? I would say not.
The “hard” part here is triggering that state. Flow can often feel like an accident.
But what if AI could be used as a tool to help, to make us “accident prone” as it were? What if AI could be used to coach us through the blank page, the fear of failure, the fear of success? What if it could be used to nudge rather than solve, to offer different ways of looking at a problem?
Sure, some will use AI to simply solve the problem — one of the many things we have evolved is a fine sense of calorie efficiency — but the creative ones among us should be able to find ways to use AI to enhance their abilities. Not to tell them new ways of looking at the world, but to prompt them into finding their own new ways of seeing the world.
As with the previous centuries of tools, though, those creatives that learn to harness this new tool may not be the ones who were proficient with the old tools. And that’s a shame, because it’s the same deep curiosity that drives both. The same desire when confronted with a new idea to figure out how to use it to do more of what we love, better, faster, brighter.
And if that tool allows more people to participate? If it can get more people to write, to take photos, to compose, to push past their inhibitions and create? Isn’t that part of the goal of humanity in the first place?
This is why I choose optimism. I choose to hope that we can find a way through this current inflection point, just as we have before, just as we have for our entire existence, and just as our ancestors did for literally millions of years.
Yes, AI is different, but so was everything else. Our history, our pre-history, and the history of our entire species, is one of bending tools to our will.
I choose to believe this fire will be tamed.
13 June, 2025
For the longest time, I was sure these LLM chatbots were the next crypto grift: ideas that had existed for decades, implemented in the worst possible way, but with a lick of paint and a shiny marketing campaign, designed to separate the gullible from their money. The best strategy, I thought, was to sit it out, watch others lose their shirts, and wait for it all to blow over.
But I may have to go back a little further to see history repeating.
I think now that the current LLM craze is this generation’s dotcom bubble.
You see, the internet in general was a transformational technology, but it was the web, built on top of it, that made it tangible: it felt like we had productised the internet. We had it in a box now, and all that was left was to come up with clever ideas, package them, and get rich.
So along came the speculators with their late-90s ideas that would revolutionise humanity! Besides being in cyberspace, the ideas all shared two common features.
Whether it was that people would be comfortable ordering clothes sight unseen, or typing credit card details into online forms, or watching movies on tiny screens, or listening to music on crappy speakers, or waiting three days to download software that they’d be faster driving to the store to get, the ideas all raised huge sums of money by focusing on the hype, and glossing over the real problems that needed to be solved.
And a lot of people bought into the hype. And they spent a lot of money. And then it all blew up and people lost their jobs, or their savings, or both. And they were angry, felt cheated, felt lost, and struggled to see a path forward. Many wrote the web off as a fad.
So what’s the analogue now? Just like the early dotcom companies, there’s a lot of easy hype money to be made by selling the future, then cashing out when people realise it isn’t here yet. It’s also easy to roll our eyes at how ridiculous the idea is that an LLM could do whatever it is that we’re being told it can do, or get angry at how expensive it all is to run, at how wasteful and energy hungry the technology is, at how immoral the training data is, at how dystopian the disruption of artists and writers and programmers and musicians and actors will be.
All of this is true. But look back at the list of dotcom problems above, and the funny thing is that all those problems eventually got solved so comprehensively that they seem positively archaic now. Back in 2000, boo.com was the poster child for how ridiculous it all was. As if people would ever buy clothes online! Hah.
Maybe the stakes are higher now, but LLMs, or whatever comes next, will soon be able to do the things they can’t do now. And like the web, that will both change everything and change nothing at all.
What I do know, though, is that the people who navigated the dotcom bubble to stay relevant were the ones who saw that whatever happened, the toothpaste was out of the tube. The web was here to stay, so they rolled up their sleeves and started working to address those problems, to build the future, rather than laugh at it or yell at it. I suspect the same will be true today — we can try to put the toothpaste back in the tube, or be angry at the ones who squeezed it out.
Or maybe we can roll up our sleeves once again, figure out how to take control of the technology, and use it to build the future we want. Maybe it’s impossible, or if it is possible it won’t last, but since when was that a reason not to try anyway?
21 May, 2025
There’s much to be said for using a well-made tool, whether it’s a sharp saw, a strong chisel, a mechanical keyboard, a sturdy van, or a light and compliant bike. Whatever it is, when it comes to evaluating tools, their quality can really be measured by only one thing: how well they do the job they were hired for.
As the saying goes, a person doesn’t buy a drill because they want a drill; they buy a drill because they want a hole.
The same is true of software tools, but those have an insidious side to them: some software tools invite massive amounts of customisation to “improve their effectiveness”. A text editor like Neovim, for example, promises to allow you to write “at the speed of thought”, and offers so much customisation that it can feel like a blade you can sharpen indefinitely.
That can be a trap: blades rarely need to be infinitely sharp. After a point, you’ve traded your job as a carpenter for a job as a sharpener.
The tool must be used. It doesn’t matter how perfectly your drill fits your hand if you never actually make any holes with it.
Most people who have worked with me for any length of time will know that I love to point out when people are “painting the hammer”: spending more time tweaking the tool than using it. As with most things like this, every accusation is a confession. I love painting hammers. It’s why I love the phrase so much: it’s a constant reminder to ensure I stop painting at some point and start hitting.
In the last few years, though, I feel like I’ve gone too far the other way. I pushed deep into the “just find the best tool and use it as it comes out of the box” way of working. Zero customisation, using the tool as designed rather than constantly tweaking it to be just so. This has a number of benefits, not least the insurance of being able, if needed, to replace the tool and be comfortable (and productive!) with the replacement immediately.
However, I think I’ve missed something. I’ve missed a key component to tool effectiveness, one that I only really noticed in its absence.
A blunt blade is useless, so to be effective it must be honed. It must be sharp. It must be “functional”. There are other functional improvements that can help increase its effectiveness too: the ergonomic handle on the saw, the addition of a second handle to allow the application of force in just the right balance.
But a saw on its own doesn’t cut anything. The user of the saw must also be effective. They must also be functional, knowing how to use the tool effectively.
Even so, I believe there’s still something more, something else that drives effectiveness.
Given two saws at equal peak functional effectiveness, and a skilful user, there can still be something about the first tool that gives it an edge over the second when paired with that user.
The first is the tool their spouse gave them. Or maybe it’s the one they apprenticed with. Or maybe it’s the first one they made by hand, or the one that their kid painted for them.
The missing component is joy. When a tool is joyful to use, it becomes more effective. Not simply because the user’s skill increases with each use, but because the user is more inclined to pick the tool up in the first place. Which means they’re more inclined to practice, find better ways to wield it, and simply get more done with it.
It becomes the reason to do that job they’ve been putting off.
So, while we should be careful of spending so much time painting the hammer that we forget to swing it, perhaps just a little paint, in just the right place, can make us seek out new nails to hit, just for the fun of it.
Joyful tools are effective tools.
Consider that when you sit down to work: what small thing could you add (or remove) that would make you smile the next time you visited the task?
14 May, 2025
It’s becoming accepted truth that iPhones are distraction machines. We’re constantly bombarded by notifications, with our favourite apps incentivised to model themselves after flypaper to maximise attention.
One diagnosis for this which makes sense to me is that the modern iPhone is a device without a clear purpose: a true Everything Box. The problem is that it’s incredibly hard to be intentional with something like that.
Stop me if this sounds familiar: you pick up your phone to make a quick note of something that caught your attention, but it’s all too easy to also “just quickly check” your email, or your insta, or reply to that message. Before you know it, half an hour has gone by as you stagger bleary-eyed back to the present moment. You were ambushed by the Everything Box, and you probably didn’t even manage to make that damn note in the first place.
So what’s the answer? I’ve seen a strong argument that it’s time to ditch the very idea of the Everything Box, that we’ve comfortably demonstrated that it’s a dead end, and that it’s time to go back to single purpose devices that do one thing exceptionally.
To take the notemaking example above, what a different experience I have when, instead of my phone, I pull a nice notebook and pen out of my pocket to make that note! Not only are there no distractions, but the pen has a chance to bring me joy, as does the notebook. Further, by carrying the notebook, the object itself is a reminder of a habit I want to cultivate: make more notes!
So, the theory goes, look at all the things the iPhone removed, and reintroduce them to your life in the name of intentionality: stop taking photos on the phone, take a nice camera with you! Stop listening to music on your phone, dig up an old iPod on eBay and choose the music you want to bring with you. Want something to read? Forget doomscrolling Facebook, make sure you have a magazine to hand. Stop hate-watching the news, buy a newspaper and read it cover to cover. Ditch X: if you want to connect with people, use your shiny new dumbphone and, you know, call them.
Which I guess is fine advice. Each of those things will do the same double duty as the notebook, providing an intentional, single purpose outlet for your need while also reminding you to do that thing in the first place.
But, frankly, I’m a little suspicious. Let’s replay that list again in slow motion, and look at how many new things I need to buy:
- A notebook and a nice pen
- A camera
- An iPod
- A magazine
- A newspaper
- A dumbphone
Now, I don’t think there’s some sort of global conspiracy to get us to re-buy all the things we ditched in favour of the iPhone, but I do think that this is an extreme position. Not to mention an expensive one if you already own an iPhone.
So what is the answer?
David Sparks has long been banging the drum about what he calls “contextual computing”, and I think this is a big part of the solution. The iPhone is an excellent tool for many of the things we want to do in the course of our lives, but as stated way back at the start, its very nature as an Everything Box makes it a dangerous tool for our primate attentions.
We deal with dangerous tools all the time, though. A lawnmower is a dangerous tool. A stove is a dangerous tool. The hot water tap can be a dangerous tool. We don’t then respond to these dangers by deciding the tool can no longer be used, or that we have to use some other tool. I’m not seeing anyone advocating for mowing lawns with scissors as they’re less likely to fling rocks out from under them into unsuspecting neighbours’ windows.
No, instead we look at the dangerous aspects and find ways (either physical or systemic) to mitigate that danger.
For an iPhone, the danger is lack of clear intention. The mitigation is to introduce an intentional interface to the iPhone.
Let’s go back to the note-making story from earlier. Now, instead of having to open our phone, find the notes app, open it up, type out the note, and then put the phone back in our pocket without going back to any of the shiny icons we saw on our way in, we instead:
- Press a single button on the lock screen
- Type the note
- Hit “done”, and put the phone away
Isn’t that better? Potentially better than the notebook (since it presumably brings with it all the benefits of digital notemaking, like syncing, search, perfect legibility and so on) with none of the drawbacks.
If you construct a list of such actions, or intentions, then those also serve as a reminder to do those things too.
So how does this work in practice? Well, one of the things I’ve done is to create a shortcut that simply pops up a text field and asks what’s on my mind. I type whatever I need into that box, and hit “done”.
Then, a menu pops up asking what I want to do with it. I can turn it into a list of tasks and add it to my task manager. Or maybe it’s a list of things to add to the shared shopping list for whoever in the family is next at the shops to pick up?
Or perhaps it’s a note to add to my journal, or maybe it’s a message to send to a family member. Those are all options too. I pick one… and that’s it. The text is added to wherever it needs to go, and the shortcut ends.
This is then connected to a button on my phone’s lock screen, so I don’t even have to unlock my phone to do this. No distractions, just whatever was on my mind handled. I even added an option to the bottom of the list to open the shortcut itself for editing while copying the input text to the clipboard, so that if I want to do something new with the text, I have an easy way to add that to the list rather than risk going into my phone.
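For the curious, here’s the shape of that shortcut, sketched in Python rather than in Shortcuts itself. To be clear, this is an illustration of the capture-then-dispatch pattern, not the shortcut as it actually exists: the destination functions below are hypothetical stand-ins for whatever your task manager, shopping list, or journal exposes.

```python
# A sketch of the capture-then-dispatch pattern behind the shortcut.
# The destination functions are hypothetical stand-ins; in the real
# shortcut these would be actions in your task manager, shopping
# list, journal, or messages app.

def add_to_tasks(text: str) -> None:
    print(f"[tasks] {text}")

def add_to_shopping_list(text: str) -> None:
    print(f"[shopping] {text}")

def append_to_journal(text: str) -> None:
    print(f"[journal] {text}")

DESTINATIONS = [
    ("Add to tasks", add_to_tasks),
    ("Add to shopping list", add_to_shopping_list),
    ("Append to journal", append_to_journal),
]

def capture() -> None:
    # Step 1: capture first, before anything can ambush you.
    text = input("What's on your mind? ").strip()
    if not text:
        return
    # Step 2: only then decide where the text should go.
    for i, (label, _) in enumerate(DESTINATIONS, start=1):
        print(f"{i}. {label}")
    choice = input("Send to: ")
    if choice.isdigit() and 1 <= int(choice) <= len(DESTINATIONS):
        _, handler = DESTINATIONS[int(choice) - 1]
        handler(text)
    # Step 3: done. No feeds, no icons, no half-hour detour.

if __name__ == "__main__":
    capture()
```

The design hinges entirely on the ordering: capture first, decide second, exit immediately. The Everything Box never gets a chance to offer you something shinier.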
Now, is this a perfect solution? Of course not. First, I don’t believe in perfect solutions, but second, I think all of these choices are tradeoffs. There’s undeniably something nice about a good notebook, or a solid, dedicated camera with a sharp prime lens on it. Am I saying you shouldn’t go and get these things? No, and I’ll admit to being damn tempted myself.
I’m just suggesting that we shouldn’t be so quick to write off the iPhone as a failed experiment. Like most tools, ultimately we are in charge of how we use them. That an iPhone comes out of the box in a way that encourages mindless usage does not mean that there is nothing to be done to transform it into a tool that serves our needs, rather than the other way round.
In fact, I’d argue that modern iPhones bend over backwards to help us ensure that they are “safe to use”, and that it’s our habits that lead to them becoming distraction machines. This is not about blame, but an encouragement to see that, if it’s our doing, it’s also something we can undo, while still retaining the truly magical powers that the iPhone can confer on us mere mortals.
I’ll be writing more about this, as I think that defending our attention is one of the most important actions we can take right now. You can’t fight injustice, push back against fascism, save the climate, or do anything else meaningful if you can’t direct your attention towards those things in the first place. But for right now, if writing your own shortcuts seems a bit daunting, I’ve found these to be simple but effective next steps:
- For each notification, ask: does this app really need to interrupt me? If not, turn its notifications off.
- For each app on your home screen, ask: do I open this because I need it, or because it catches my eye? If it’s the latter, move it off the home screen.
Even just applying these two questions to my own phone has made a huge difference to my experience of using it, turning it into something I go to when I need something, not when it tells me it needs something.
In short, it doesn’t take much to be reminded that I can be in control of where I spend my attention, and only a little more to start to reassert that control.