The most interesting concept I’ve learned about in the past month is “enshittification,” a term coined by Cory Doctorow. He uses it to describe the predictable decline of digital platforms: first they are good to their users; then they tilt toward business customers and advertisers; finally they extract maximum value for shareholders, degrading the experience for everyone else in the process. The term is blunt, but the framework is sharp. It gave language to something I’ve observed repeatedly in media and technology organizations: product decisions that begin as user-centered gradually become revenue-optimized, and eventually erode trust, quality, and long-term viability.
Related to that, I’ve been following Doctorow’s arguments about a potential AI bubble: his view that massive capital is flowing into AI in ways that may not be sustainable relative to the real, durable value being created. That perspective helped me separate two things that are often conflated: genuine technological breakthroughs and hype-driven overinvestment. AI is clearly transformative, but history shows that markets frequently overshoot practical adoption curves. The insight for me is not skepticism of AI itself, but caution about misaligned incentives, especially when product roadmaps are driven more by investor signaling than by validated user need.
Together, these ideas have sharpened how I think about product management and program leadership. They reinforce the importance of long-term trust, governance, and disciplined prioritization over short-term extraction and trend chasing. In an era of rapid innovation and intense competitive pressure, the real strategic advantage may lie in resisting the slide toward enshittification: protecting user value while staying sober about where investment genuinely compounds and where it simply follows the crowd.