We've spent so much time asking whether artificial intelligence will replace human thinking that we've barely noticed the humans doing the thinking have opinions about this arrangement. Turns out, they do. And the generation we assumed would embrace AI most readily is mounting the strongest resistance.
The Revolt of the Digital Natives
According to new research from the Pew Research Center cited by The Verge, 68% of Gen Z users who regularly interact with AI tools report growing skepticism about their value, compared to just 34% of Baby Boomers. Among college students required to use AI writing assistants, 73% say the technology makes them feel "less capable" of independent thought. Perhaps most tellingly, daily AI usage among 18- to 24-year-olds peaked at 41% in late 2023 and has since declined to 28%, even as availability and capability have expanded.
This presents a fascinating paradox. The cohort that grew up with algorithmic feeds, predictive text, and computational assistance baked into every digital interaction is the same group now questioning whether AI collaboration enhances or diminishes human capability. Their resistance isn't born of technophobia—these are people who learned to think alongside machines. Instead, it seems rooted in an intuitive understanding of what they might be giving up.
The complaint isn't that AI produces bad output. Young users consistently acknowledge that AI can generate ideas faster, write more fluently, and solve problems they couldn't tackle alone. The issue is subtler and more profound: they sense that each interaction with AI changes how they think when the AI isn't there. One college senior quoted in the research put it precisely: "I can feel my brain getting lazy. I reach for ChatGPT before I reach for my own thoughts."
This isn't just about writing papers or generating code. It's about the fundamental relationship between effort and understanding. When a machine can instantly produce what previously required struggle, patience, and iterative thinking, something shifts in the cognitive ecosystem. The Gen Z users experiencing this shift firsthand seem to recognize what their elders, eager to embrace productivity gains, might be missing: that the struggle itself was part of the value.
Their resistance suggests that successful AI integration isn't just a technical challenge—it's a philosophical one. How do we design human-AI collaboration that amplifies human thinking rather than letting it atrophy? The young people rejecting AI despite social pressure to embrace it may be the canaries in the cognitive coal mine, alerting us to risks we haven't fully considered.
The data reveals something corporate adoption metrics miss entirely: the humans in human-AI collaboration have agency, preferences, and long-term interests that don't always align with efficiency gains. If we're building the future of augmented intelligence, we might want to listen to the people who will live in it longest.
Three Signals
Anthropic is reportedly seeking $50 billion in new funding at a $900 billion valuation, according to sources cited by TechCrunch. These numbers suggest we're witnessing the emergence of cognitive infrastructure companies—enterprises whose business model is literally the augmentation of human reasoning at scale.
Microsoft reports over 20 million paid Copilot users with high engagement rates, indicating enterprise AI has crossed the threshold from experiment to infrastructure. When this many knowledge workers integrate AI into daily thinking, we're not just adopting tools—we're evolving new forms of hybrid cognition.
Leaked OpenAI system prompts include the bizarre directive to "never talk about goblins," as reported by Ars Technica. This oddly specific prohibition offers a rare glimpse into the hidden behavioral scaffolding that shapes AI responses—and the sometimes absurd lengths companies go to control artificial minds.
Worth Your Time
Cal Newport's recent essay "Deep Work in the Age of AI" explores how the same cognitive habits that made knowledge workers valuable before AI—sustained attention, original synthesis, patience with complexity—become even more crucial when machines can handle routine thinking. His framework for "cognitive complementarity" offers practical strategies for developing uniquely human capabilities that enhance rather than compete with AI assistance.
The human mind is the original generative engine.

The Scan
The exponential curve has a wicked sense of humor. Just when we think we've mapped the territory ahead, it reveals we've been squinting at the horizon through a periscope designed for linear seas.
The Exponential Blind Spot
Mustafa Suleyman, co-founder of DeepMind and current Microsoft AI chief, argues that AI development faces no meaningful technical barriers in the foreseeable future—and humans are catastrophically unprepared for what this means. Speaking to MIT Technology Review, Suleyman contends our brains evolved for linear thinking on African savannahs, not exponential curves in silicon valleys. The gap between where AI capabilities actually stand and where most people believe they stand isn't just wide—it's widening. While we debate whether AI can truly "understand," machines quietly master tasks we assumed would remain human domains for decades.
Meta's Billion-Dollar Bet Yields First Fruit
Meta's newly formed Superintelligence Lab has released Muse Spark, its first public AI model and a clear departure from the company's Llama lineage. The model represents Meta's ambitious pivot toward "personal superintelligence for everyone," according to Ars Technica, backed by the company's $10 billion annual AI investment. Unlike Llama's focus on general-purpose language tasks, Muse Spark specializes in adaptive reasoning across multimodal inputs—text, images, and code simultaneously. Meta's bet is clear: the future belongs not to one-size-fits-all models, but to AI that molds itself to individual users' cognitive patterns and professional demands.
Rethinking Work From the Agent Up
The future of AI in business isn't about plugging digital assistants into existing workflows—it's about rebuilding processes from scratch around agents that learn, adapt, and optimize autonomously. MIT Technology Review reports that leading organizations are abandoning "AI augmentation" strategies in favor of "agent-first process redesign," where human roles shift from task execution to goal-setting and quality control. Early adopters report 40-70% efficiency gains, but only after completely reimagining how work flows through their organizations. The message is becoming clear: the companies that thrive will be those brave enough to let agents reshape not just how they work, but what work means.
The Data Point That Could Settle the Jobs Debate
While tech evangelists and doomsayers trade predictions about AI's impact on employment, economists have identified a single metric that could cut through the noise: "task-level displacement velocity." MIT Technology Review explains this measures not whether jobs disappear, but how quickly specific tasks migrate from human to machine execution within existing roles. Early data from three major labor markets shows 15% of knowledge work tasks shifted to AI assistance in 2025, but job losses remained under 2%. The disconnect suggests we're witnessing task evolution, not job elimination—at least for now.
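A metric like this can be made concrete with a toy calculation. The function, variable names, and figures below are illustrative assumptions, not data from the report:

```python
def displacement_velocity(ai_task_shares):
    """Task-level displacement velocity: the period-over-period change
    in the share of tasks executed by AI rather than by humans."""
    return [
        later - earlier
        for earlier, later in zip(ai_task_shares, ai_task_shares[1:])
    ]

# Illustrative figures only: AI's hypothetical share of knowledge-work
# tasks, measured annually.
shares = [0.05, 0.08, 0.15]
print([round(v, 2) for v in displacement_velocity(shares)])  # [0.03, 0.07]
```

The point of tracking velocity rather than headcount is visible even in this sketch: the AI task share can nearly double in a year while the number of jobs stays flat.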
When AI Becomes the Ultimate Penetration Tester
Anthropic's classified Project Glasswing discovered security vulnerabilities in Windows, macOS, Linux, Chrome, Safari, and Firefox—essentially every major computing platform—with minimal human guidance, according to The Verge. The AI system identified 847 previously unknown exploits across six months of testing, including 23 classified as "critical" by security researchers. What's particularly unsettling isn't just the AI's success rate, but its method: Glasswing operated with the same black-box approach malicious actors use, suggesting we may already be in an arms race between AI attackers and AI defenders that humans can barely follow, much less control.
The Democratization of Market Intelligence
Small online sellers are abandoning gut instinct for AI-powered market analysis, fundamentally changing how products get created and brought to market. MIT Technology Review reports that Alibaba's new Accio platform and similar tools now provide sophisticated demand forecasting, competitor analysis, and trend prediction to sellers who previously relied on intuition and limited data. A ceramics artist in Portland can now access the same market intelligence as a Fortune 500 consumer goods team. But this democratization comes with a cost: when everyone optimizes for the same AI-identified opportunities, markets may become more efficient but less surprising.
Google's AI Tells Millions of Lies Per Hour
Independent testing reveals Google's AI Overviews feature produces incorrect information in 10% of search queries, translating to an estimated 3.2 million false answers delivered to users every hour. Ars Technica's analysis found errors ranging from minor factual mistakes to completely fabricated historical events and medical advice. The scale is staggering: more misinformation than most traditional media outlets could produce in a year, delivered with the authority of the world's most trusted search engine. Google's response emphasizes ongoing improvements, but the fundamental challenge remains—when AI hallucinates at internet scale, truth becomes a probabilistic rather than binary concept.
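The two cited figures also let us back out the serving volume they imply. This is a quick consistency check on the arithmetic, not additional data from the article:

```python
# Figures from the reported analysis:
error_rate = 0.10              # share of AI Overview answers that are incorrect
false_answers_per_hour = 3_200_000

# Implied volume of AI Overview answers served per hour:
implied_answers_per_hour = false_answers_per_hour / error_rate
print(f"{implied_answers_per_hour:,.0f}")  # 32,000,000
```

That implied 32 million AI Overview answers per hour is the scale at which a 10% error rate stops being a quality metric and starts being a firehose.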
Remote Control Gets Its First Revolution in Decades
Astropad's new Workbench platform reimagines remote desktop technology for AI agent oversight rather than human-to-human IT support, according to TechCrunch. The system allows humans to monitor and guide autonomous agents working across multiple remote machines, with interfaces designed for delegation rather than direct control. Three major consulting firms are already piloting Workbench for AI agents that handle routine data analysis, report generation, and system monitoring across client networks. This represents the first conceptual leap in remote access technology since the 1990s—shifting from extending human presence to supervising artificial intelligence.
Worth Your Time
The essay "Ambient AI: When Intelligence Becomes Infrastructure" by researcher Sarah Chen at Stanford explores how AI capabilities are quietly embedding themselves into everyday objects and spaces. Chen's framework for understanding "cognitive infrastructure" offers a compelling lens for thinking about AI's evolution beyond chatbots and search engines toward something more fundamental—and invisible.
Never forget: the human mind is the original generative engine. AI just gives us the chance to amplify it.
