Novelty vs Nuance
Early in my engineering career I used to chase novelty - the shiny new idea that piqued my curiosity and let me explore solving new and exciting problems.
The issue with this, of course, is that you end up with a lot of half-baked, semi-developed projects. Once you've "solved" the core problem you were curious about, everything else suddenly feels like a bore and a chore - and you never build up the skills required to actually deliver something.
It's an easy thing to slip into when you're new to coding - you've been given a powerful hammer, and everything looks like a very hammerable nail.
Somewhere along the way, I stumbled upon some sage words that helped me break the cycle: "There's novelty in nuance." Rather than chase the shiny new thing, stick with the old thing and instead find the novelty in its details. This strategy helps you stay with a project and see it through to completion. As an aside, one of my long-time favorite blogs/books is 'The Old New Thing', which captures interesting nuance in the development of Windows as it's evolved.
As everyone in the world wrestles with how their roles will evolve in the age of AI and LLMs, I've found myself thinking back to those same words: Nuance over Novelty.
Whether you're a product manager, software engineer, marketer, or anything else - we're all exploring what AI can be used for and how it can make us faster and better. We're seeing the productivity gains while running into present-day limitations.
AI is great for the ideation stage (i.e. novelty). It cuts down research, challenges ideas, provides a sounding board, and helps us iterate quickly. But, for now, it struggles when the contextual details become increasingly specific (i.e. nuance). Whether it's writing code, generating images, video, or audio, or providing an analysis, when it comes to nuance AI butts up against a wall that (again, at least for now) can only be pierced through human creativity.
That's where we're headed: We'll define the goal, set the context of the problem space (the nuance of our novelty), and AI will help us progress toward that goal through Steps 1 to x. We'll pick back up at Step x+1.
Like Zeno's paradox, Step x will keep advancing down the long tail until we eventually hit an upper bound. That's where refining the solution to fit the nuance - and refining the nuance to improve the solution - will reign supreme.