Everything is interesting
If you dig deep enough into any topic, it can be interesting. Even if the topic itself doesn’t seem groundbreaking on its face, the people, the processes, and the history of that topic will always give it unimaginable depth.
I realized this in college when I read Junkyard Planet, a book about the global trash trade. How could trash be interesting? Discarded things are discarded because they aren’t worthwhile…right? But I quickly learned I was only thinking about the trash, not everything around the trash. (The trash itself is also interesting, though, once you realize each piece of trash tells a story.) And the people, systems, and consequences of the trash trade have more depth than even a ~300 page book can communicate.
This is not a new realization; academics and experts have been spiraling down rabbit holes for all of human history. New ways of distributing knowledge (the printing press, film, the internet, etc.) have only expanded our ability to rabbit hole. But very few of us are able to go down these rabbit holes…
Always out of time
Everyone is busy all of the time. It’s a cliché, but it’s true. Primarily because our employers expect us to be busy, but American society at large expects it too. Much has been said about the culture of 996, grinding, etc. Even the FIRE movement, which touts the idea that you should work hard today for a longer retirement, relies on the premise that for a short period of time you must be as busy as possible for some, often unreachable, future payoff. It’s no wonder we choose the easy solution and don’t think deeply. And it’s no wonder that throughout history those who were the most privileged were also the most likely to be able to rabbit hole, shaping the world for themselves.
Optimized thoughtlessness
“Don’t Make Me Think” is one of, if not the, most commonly recommended books for people in the UX field. It’s a great book, written for a field where user-centered thinking is often lacking. Over my career I’ve worked with colleagues who would rather not consider what the user needs to think about and instead build what they think is best, assuming what is best for them is best for everyone else. By putting in the work to reduce what users need to think about, you have to put yourself in their shoes. However, for every positive consequence there is always a potential negative one.
Making something intuitive is perfect for driving “conversions”, a word that does an amazing job of obscuring what it really means and stripping away the context for why it matters. If a conversion is donating to a charity, that’s incredible, but if it’s betting on the Steelers going to the Super Bowl, maybe making the user think is a good thing (although I personally hope this example ages poorly ⬛🟨). This is the negative side of the frictionless design the industry has been striving for. Stopping a user from thinking is ripe territory for exploitation, for nudging them towards inhuman actions that distract more than anything else.
What we're busy with isn't even human
Technology has made this busyness even worse because we’ve built our world around our devices. Many of the tasks we are doing aren’t very human to begin with. Particularly for office workers and people who work in technology, like I do, most of the day is spent translating human goals into technological contexts: describing a bug in software, translating behavior into code, writing performance reviews, creating slide decks. Even a concept as foundational as websites or apps is not how people inherently think. Before smartphones existed it would have been ridiculous to think “I need to open my bank app”. This isn’t specific to technology, of course; even thinking “I need to go to my bank” isn’t the real goal, which is more like “I need to see how much money I have access to”. Technology becomes another layer of abstraction on top of the ones we already have (banks, businesses, paper money, etc.).
AI as a tool to reduce distraction
One of the many reasons I hear for AI not being useful is that it doesn’t meet the level of quality that a human could. I completely agree with this for tasks that are inherently human or require strong judgment. But to me it’s the other types of tasks where AI becomes interesting: not because it’s particularly good at them, but because humans aren’t either. Maybe these tasks shouldn’t exist in the first place, but they do, and someone or something needs to do them.
There’s a commonly predicted version of the future of AI where teams shrink, every company becomes a team of one (or a few), and we shed all the layers that have made our organizations slow. Product Designers, Product Managers, Engineers, etc. all become one product person. But I’m not sold on this version of the future yet, primarily because AI’s limitations are very clear to the person who normally does that task. I have struggled for hours to get an AI tool to create the exact design I’m describing, even when my inputs are painstakingly clear and providing them to any designer would produce a better output. Meanwhile, I have created quite a bit of code that works but isn’t high quality and is immediately written off by an engineer, for good reason.
However, I have found a lot of value in using AI for small things: writing docs to explain a process, finding obvious patterns across research sessions, searching the web for design patterns. Frankly, the least human things. The tedious work that is more distraction than substance.
We have access to more information than any other humans in history. That was the promise of the internet: expanding what we have access to and who we can connect with. But that promise also came with learning many confusing and unnecessary interactions.
We don’t lose this access to information with AI tools; in many ways we gain more access and capabilities through tools like Consensus or Elicit. With AI tools, we can choose to automate the technical (or sometimes actively anti-human) tasks, and we can spend our time on more valuable things than writing tickets, defining processes, or coding solutions that purely exist for monetary gain. This assumes we can solve AI’s energy usage and sustainability problems, of course. Our progress could be in going deeper into topics to better understand them. Not generating more and more things we don’t need, but reading, learning, making, discovering what matters. Going down more and more rabbit holes…
