A Year In Review: The Three Questions That Shaped Tech Strategy In 2025
What 2025 revealed about intelligence, power, and leading through uncertainty
2025 was loud.
If you are a business leader, a tech builder, or a founder, you have been bombarded by weekly tech breakthroughs, bold claims, aggressive postures, bigger models, and grander promises.
But what’s real? What’s just noise? And what actually matters for the decisions you’re making right now?
As I wrestled with these questions myself, the answers I found did not really satisfy me. So I searched for my own: reading, writing, talking, and sharing to better grasp what was truly going on.
This year, my work circled around three questions in particular. If you lead an organisation navigating technology, geopolitics, and uncertainty, you have likely been asking them too.
The Intelligence Question: What assumptions is your AI strategy built upon?
Every AI conversation in 2025 started with capability. What models can do. What’s coming next. What’s possible.
But capability isn’t the same as reliability. And language fluency isn’t understanding.
The more I read, the more I realised we were confusing pattern matching with reasoning, and mistaking sexy demos for enterprise-ready solutions.
So I researched further. I asked whether LLMs are truly the thinking machines we hope for, drawing on Chollet, Kahneman, Kurzweil, and others. My take: these tools are brilliant, but they are not minds.
The hype around autonomous agents was even louder. Experts promised that agentic AI would replace knowledge workers through autonomous job execution. But simple experiments have shown that even when LLMs excel at benchmarks, they fail miserably in real-life scenarios. And maybe that’s a good thing: reliability, security, judgement, and accountability remain non-negotiable for enterprise processes.
Game theory research revealed that simulating strategy is not the same as possessing intelligence: real-time performance creates an illusion of it. What’s most troubling, and worth remembering, is the opaque nature of closed-source models (GPT, Claude, Gemini, and others): training data is not published, model weights are closely guarded, and data labelling instructions for RLHF remain confidential.
They are black boxes. And companies are building their intelligence engines on top of them, arguably outsourcing their know-how, decision-making, and proprietary data to a third party.
So I mapped who actually wins and loses in the AI value chain, because understanding who captures value matters as much as understanding how the models work. Hint: it’s probably not you.
And I cautioned against the risks of humanising AI and of AI companionship, and their impact on cognitive and self-preservation skills. Sorry to say, but I don’t believe AI should be your therapist, lover, or best friend. AI is a mirror of your inputs, distorted by humanity’s historical collective data trails. No intent. No purpose. No personality. Hence, we need to be careful.
The takeaway: Before you scale your AI investments, stress-test your assumptions. What do these models actually do versus what you’ve been told they do? Who controls them? And what happens when they fail?
Now, defining intelligence is only half the problem. The deeper question is who gets to decide what models are, what they’re made of, and what values, ideologies, and cultures they carry. That leads somewhere most strategies ignore entirely.
The Power Question: Who really controls the technology your business depends on?
The AI conversation in 2025 was dominated by what AI might bring to enterprises and society. But far less attention went to understanding the powers at play and how politics are driving tech roadmaps.
I spent considerable effort mapping who really controls the AI stack (from rare earths to model governance) and what that concentration means for enterprises dependent on foreign infrastructure.
This matters because sovereignty is an operational issue. And your dependencies are someone else’s leverage on your business.
Through my Technopolitics webinar series in partnership with INITIATIK, I studied extensively why technology is no longer neutral infrastructure but a lever of power.
The powers behind each model and each breakthrough tell you more about trajectory than any product announcement: how dependencies are engineered to create stickiness; how control is asserted through infrastructure, not just contracts. In fact, reading the US National Security Strategy provided more insight into the future of tech than any press release or market research report.
AI is not abstract. It is physical, electrical, and increasingly constrained. Working with ENSSO, I explored how AI’s energy appetite is becoming a limiting factor for both climate goals and technological ambition. Grid capacity, power pricing, water access, and data centre geopolitics are strategic imperatives.
That reality now reaches across borders. I examined how extraterritorial regulations are enforced through infrastructure, finance, and code. Export controls, sanctions, and data governance now shape tech stacks, vendor choices, and risk exposure in ways most leaders underestimate.
With Asia Tech Lens, I analysed 25 national AI strategies to reveal where sovereignty narratives mask deep dependencies and where the real chokepoints lie. With the Digital Growth Collective, I proposed a roadmap for how boards and CIOs can secure control across clouds, models, data flows, and vendors.
At FIBEP’s World Media Intelligence Congress, I explored why truth itself has become fragmented and what that means for leaders trying to build trust in a technopolitical age. Interestingly, I also realised this had happened before: from Gutenberg’s press to incendiary pamphlets in 19th-century Paris (best captured in Balzac’s Lost Illusions), all the way to Instagram and TikTok. The medium changes. The dynamics don’t.
The takeaway: Map your dependencies before someone else exploits them. Energy, infrastructure, model governance, and jurisdictional exposure are strategic risks hiding in your tech stack.
The Leadership Question: How are you leading through uncertainty?
This was the question I carried into every founder conversation, every advisory session, every Friday morning when I sat down to write.
Because I think the leaders who will thrive aren’t the ones with the best predictions but the ones asking better questions and building organisations that can adapt when the answers change.
Watching Davos unfold early in 2025, I reflected on why global consensus around technology governance is fragmenting rather than converging. Cold, cynical, disillusioned: those in control have no interest in changing a world that works better for them. It’s paramount to understand their agenda: people make technology, not the other way around. With “Do You Hear the People Sing?” from Les Misérables in the background, I considered the failure of leadership and what principled leadership means.
Inside large organisations, the need to transform and growing economic pressures create new tensions but also new opportunities. I explored how HR and technology teams might co-design the future of work, asking a question many leaders wrestle with: how do you grow the business without growing the headcount?
I revisited the art of breaking walls and building bridges: what it takes to lead change from within, navigate resistance, and be a change agent in an established structure. Execution inside the machine is harder than starting fresh.
With founders, the challenge was rarely ambition or talent. It was blind spots: over-reliance on a single platform or API, enterprise readiness, assumptions about data access or residency when consuming AI models, business models built on fragile pricing power.
Those conversations reinforced something I now consider a core principle: innovation needs conviction, but good strategy is less about conviction and more about structured doubt. As an ultra-trail runner, I find that each race keeps me humble and grounded, helping me achieve bigger things one small step at a time.
I also kept returning to what leadership reveals about us. I thought about what happens when leaders confuse the role with the person and why being more than your job title matters. I asked why abrasive leaders seem to succeed while principled ones often remain in the shadows.
Every Friday for the past 30 weeks, I shared an idea about leadership, power, or technology. Responsibility versus liability. Visibility versus influence. Speed versus direction. Resilience when progress becomes uncomfortable. This weekly routine forces me to step back, observe, and think harder.
The takeaway: Leadership is not about having answers. It is about holding better questions and staying steady while the world keeps moving. The leaders who adapt aren’t smarter. Maybe they are just more honest about what they don’t know.
Reading More to Think Better
This year, I probably read more books than in the previous decade combined. Not so much to accumulate knowledge, but to see things differently and learn: Ray Kurzweil’s How to Create a Mind reminded me of the intricacies of intelligence and biology, grounding my analysis of AI models and what reasoning means. Chris Miller’s Chip War reshaped how I see semiconductors as strategic assets and a geopolitical power play. Patrick McGee’s Apple in China became a case study in technopolitics. Asma Mhalla’s Technopolitics and Cyberpunk gave language to the fusion of technology and power. Karen Hao’s Empire of AI traced how a handful of players came to dominate the field, and at what price. Dan Wang’s Breakneck offered an insider’s view of China’s push to engineer the future. And Sarah Wynn-Williams’ Careless People was a sharp reminder that behind every platform are choices, and sometimes people with unchecked power who look away.
These books now shape what I write. If you lead through complexity, they’re worth your time too!
Looking Ahead
Earlier this year, I gathered ten uncomfortable questions that I thought would keep leaders up at night, about power shifts, tech advancements, and the future of work. Some proved urgent and important. Others moved to the background, at least temporarily.
Thinking KoncentriK is not about having stronger opinions but about asking better questions before decisions harden into dependencies.
If any of these questions resonate, or challenge how you’re thinking about 2026, I’d love to hear from you!
The work continues.
Thanks for reading!
Damien
I am a Senior Technology Advisor working at the intersection of AI, business transformation, and geopolitics through RebootUp (consulting) and KoncentriK (publication): what I call Technopolitics. I help leaders turn emerging tech into business impact while navigating today’s strategic and systemic risks. Get in touch to know more: damien.kopp@rebootup.com


