Reading the U.S. National Security Strategy as a Technology Blueprint
A technopolitical reading of AI, infrastructure, standards, and enterprise risk
Why this document matters to business leaders
Last month, the U.S. administration released its National Security Strategy: a 33-page document describing the threats the U.S. perceives, as well as its priorities and plans to address them. The full document is available here.
Most executives will never read a national security strategy. They assume it belongs to diplomats, generals, or politicians.
I believe that assumption is now a liability.
When read carefully, the latest U.S. National Security Strategy is in fact not a political document but a technology and industrial strategy, expressed in the language of security.
It describes how infrastructure, energy, AI, standards, and supply chains are to be shaped over the coming decade, and how dependencies will be created, monitored, and enforced.
For enterprises, here is what is at stake:
where you place data and compute
which AI models you depend on
which standards shape your products
how portable your systems really are
how exposed you are to shifts in regulation, export controls, or infrastructure access
This article reads the strategy as a technopolitical document, not to debate intent or politics, but to understand how the operating environment for technology-driven organisations is being restructured.
My goal is to help leaders understand what is already changing, what will accelerate next, and how to respond with clarity rather than reaction.
Why your digital infrastructure is not neutral terrain
The document consistently treats technology as something that must be protected, hardened, and controlled, not optimised purely for efficiency or performance.
It states:
“We want a resilient national infrastructure that can withstand natural disasters, resist and thwart foreign threats, and prevent or mitigate any events that might harm the American people or disrupt the American economy. No adversary or danger should be able to hold America at risk.”
This framing matters. Infrastructure logic implies prioritisation, conditional access, and sovereign control. It moves technology out of the domain of neutral enterprise tooling and into the domain of strategic assets.
That logic is reinforced by how industrial and technological capacity are positioned:
“We want the world’s most robust industrial base. American national power depends on a strong industrial sector capable of meeting both peacetime and wartime production demands.”
“We want to remain the world’s most scientifically and technologically advanced and innovative country, and to build on these strengths. And we want to protect our intellectual property from foreign theft.”
This is a clear break from the assumption that global markets will naturally optimise technology outcomes. Instead, technology is something to be secured against risk, influence, and dependency.
What this means for corporate leaders
Cloud, AI, and data platforms are increasingly treated like utilities with strategic conditions attached.
Continuity of service can no longer be assumed to be purely contractual.
Technology choices are becoming long-term exposure decisions, not short-term optimisation plays.
Every company should treat its digital infrastructure as a strategic asset as well: digital resilience and infrastructure concentration risk should be fully integrated into the organisation's risk management plans.
Read my analysis of the Dependency Economy of AI comparing 25 National AI Strategies, or download the full 49-page report here.
Re-industrialisation and supply chains as instruments of control
One of the most explicit technopolitical moves in the document is the treatment of supply chains as security objects.
The strategy states:
“The United States must never be dependent on any outside power for core components—from raw materials to parts to finished products—necessary to the nation’s defense or economy.”
More importantly, it specifies how this will be operationalised:
“Moreover, the Intelligence Community will monitor key supply chains and technological advances around the world to ensure we understand and mitigate vulnerabilities and threats to American security and prosperity.”
It clearly formalises a reality already visible to multinationals: industrial roadmaps, sourcing strategies, and technology dependencies are now monitored and evaluated at state level.
Reindustrialisation is framed as a long-term structural shift:
“Reindustrialization – The future belongs to makers. The United States will reindustrialize its economy, ‘re-shore’ industrial production… with a focus on the critical and emerging technology sectors that will define the future.”
This resonates closely with Dan Wang’s Breakneck, which documents how industrial capacity and technological power are inseparable in modern competition, and with Chris Miller’s Chip War , which shows how semiconductor supply chains have become geopolitical fault lines.
Example in action:
In January 2025, the U.S. Department of Commerce implemented sweeping new export controls on AI chips and model weights, requiring companies to monitor computing power distribution and report vast amounts of information to retain export privileges (Sidley Austin LLP; U.S. Department of Justice).
By December 2025, federal authorities shut down a major smuggling network in “Operation Gatekeeper,” seizing over $50 million in Nvidia technologies after uncovering attempts to export $160 million worth of H100 and H200 GPUs to China through falsified shipping paperwork (U.S. Department of Justice).
The message is clear: supply chain monitoring is already operational and enforced.
What this means for corporate leaders
Supply-chain strategy is no longer separable from geopolitical risk management.
Vendor diversification is becoming a strategic requirement, not a resilience nice-to-have.
Long-term cost efficiency will increasingly be traded off against controllability and alignment.
It means organisations need to manage interdependencies within their digital supply chain rather than pursue naïve global optimisation based purely on technological and operational constraints.
Energy as the foundation of AI and compute power
The document draws a direct and unusually explicit line between energy policy and AI leadership, stating:
“Energy Dominance – Restoring American energy dominance… is a top strategic priority.”
And crucially:
“Cheap and abundant energy will… fuel reindustrialization, and help maintain our advantage in cutting-edge technologies such as AI.”
This matters because AI is not abstract software but a compute-energy system. Training a single large language model can consume as much electricity as hundreds of homes use in a year. Inference at scale requires sustained, reliable power. Data centers running AI workloads already account for significant portions of grid capacity in key regions.
The operational reality: Companies building AI strategies must now evaluate energy geography alongside technical capabilities.
Where can you reliably access cheap, stable power at the scale AI demands?
Which jurisdictions offer favorable energy policy for compute-intensive workloads?
As energy costs and availability diverge globally, so will the feasibility of AI deployment.
Explore my prior analysis of the AI/Energy paradox: whoever controls energy density and grid stability controls the feasible geography of AI.
What this means for corporate leaders
AI strategy must be evaluated alongside energy exposure and geography.
Compute-heavy workloads will increasingly cluster in regions with favourable energy policy.
Claims of “sovereign AI” without sovereign energy and compute access are structurally weak.
AI, quantum, and autonomy as strategic capabilities
The strategy does not treat AI as a general-purpose productivity tool. It consistently groups it with military and dual-use technologies.
It states:
“The United States must… invest in research to preserve and advance our advantage in cutting-edge military and dual-use technology… such as AI, quantum computing, and autonomous systems, plus the energy necessary to fuel these domains.”
This framing has predictable consequences:
differentiated access to advanced models
tighter export controls on hardware and software
increased compliance, logging, and usage constraints
AI capabilities treated as conditional, not universal
Example in action:
The January 2025 AI Diffusion Framework divided the world into three tiers for chip access, with compliance required by May 2025 (Sidley Austin LLP). For the first time, the framework established export controls on AI model weights themselves, not just hardware (Anthropic). Chinese AI companies like DeepSeek openly acknowledge that chip restrictions force them to use 2-4x more power to achieve results comparable to U.S. firms (AI Frontiers). The practical result: AI model choice now directly determines operational costs and capabilities.
This also resonates with what my review of Patrick McGee’s Apple in China illustrates in a different context: when technology becomes this strategic, access is never just commercial.
What this means for corporate leaders
AI model choice is becoming a strategic dependency decision.
Regulatory and contractual constraints around AI usage will increase, not decrease.
Portability across models, clouds, and jurisdictions becomes a core resilience capability.
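Portability is easier to reason about when it is an architectural decision rather than an aspiration. The sketch below shows one minimal pattern, under stated assumptions: enterprise code talks to a single routing interface, and concrete providers become swappable configuration. The provider names, jurisdictions, and routing policy are illustrative inventions, not real vendor APIs; the lambdas stand in for actual SDK calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable

@dataclass
class ModelRoute:
    provider: str        # hypothetical provider label, e.g. "us_cloud_a"
    jurisdiction: str    # where inference physically runs
    invoke: Callable[[str], str]  # stand-in for a real SDK call

class ModelRouter:
    """Route requests to an approved model, with an explicit fallback chain."""
    def __init__(self, routes: Dict[str, ModelRoute], fallback_order: list):
        self.routes = routes
        self.fallback_order = fallback_order

    def complete(self, prompt: str, blocked_jurisdictions: Iterable[str] = ()) -> str:
        # Skip providers whose jurisdiction is currently off-limits
        # (export controls, data-residency rules, outages, ...).
        for name in self.fallback_order:
            route = self.routes[name]
            if route.jurisdiction not in blocked_jurisdictions:
                return route.invoke(prompt)
        raise RuntimeError("No permissible model route available")

# Stub providers: in practice these would wrap different model vendors.
router = ModelRouter(
    routes={
        "primary": ModelRoute("us_cloud_a", "US", lambda p: f"[us_cloud_a] {p}"),
        "backup": ModelRoute("eu_hosted_b", "EU", lambda p: f"[eu_hosted_b] {p}"),
    },
    fallback_order=["primary", "backup"],
)

print(router.complete("summarise Q3 risks"))
print(router.complete("summarise Q3 risks", blocked_jurisdictions={"US"}))
```

The design point is that the fallback chain and the jurisdiction block-list are explicit, reviewable objects; when conditions change, the response is a configuration edit, not a re-architecture.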
Platforms as security infrastructure: the public–private fusion
One of the most revealing passages concerns the role of private technology platforms in national cyber posture:
“The U.S. Government’s critical relationships with the American private sector help maintain surveillance of persistent threats to U.S. networks, including critical infrastructure.”
And:
“This in turn enables the U.S. Government’s ability to conduct real-time discovery, attribution, and response… while protecting the competitiveness of the U.S. economy and bolstering the resilience of the American technology sector.”
This is a clear articulation of what Asma Mhalla describes as the liquefaction of sovereignty in Technopolitique and Cyberpunk: state functions increasingly operate through private digital infrastructures.
Example in action:
In 2024, CISA conducted 2,131 Pre-Ransomware Notifications to critical infrastructure entities, warning them of early-stage ransomware activity before encryption occurred (CISA). The agency used Administrative Subpoena authorities to identify and drive mitigation of over 1,200 vulnerable devices controlling critical infrastructure such as power plants and water utilities (CISA). This operational reality demonstrates how private infrastructure increasingly functions as a monitored security layer.
For enterprises, this fundamentally reshapes assumptions about data access, logging, telemetry, and governance.
What this means for corporate leaders
Cloud and SaaS providers are not neutral intermediaries.
Observability and logging are structural features, not optional add-ons.
Data governance must assume multi-layered visibility and jurisdictional complexity.
Standards as power: shaping markets without legislation
The strategy is explicit about standards as a lever of influence:
“We want to ensure that U.S. technology and U.S. standards—particularly in AI, biotech, and quantum computing—drive the world forward.”
In practice, standards are not only set in formal bodies. They are embedded in:
platform defaults
API design
open source software and models
safety and alignment constraints
identity and access frameworks
audit and compliance tooling
This is how technological leadership translates into normative power.
What this means for corporate leaders
Platform choices embed governance assumptions into products and operations.
Standards shape what is possible, auditable, and compliant across markets.
Regulatory exposure increasingly follows technical architecture.
Regulatory strength versus infrastructure vulnerability: the European trade-off
The strategy devotes attention to Europe’s trajectory, explicitly linking economic performance, regulation, and technological capacity.
From a technopolitics perspective, this highlights a structural tension: regulatory power without corresponding infrastructure depth.
Europe has established itself as a global standard-setter in data protection (GDPR), AI governance (EU AI Act), and digital services regulation. But regulatory frameworks alone do not generate compute capacity, energy density, or semiconductor manufacturing capability.
Standards influence what is permissible; infrastructure determines what is possible.
A jurisdiction can write excellent AI governance frameworks, but if it lacks the energy, chips, and compute to train and deploy models at scale, it becomes dependent on others who control those resources.
What this means for corporate leaders
Compliance excellence does not equal operational autonomy.
Geographic presence in regulated markets must be balanced against access to infrastructure.
Boards should evaluate: where do our capabilities depend on external infrastructure we don’t control?
This is the European dimension of a broader technopolitical reality: resilience is not achieved through regulation alone, but through control of the underlying technological stack.
Narrative power and the non-neutrality of platforms
One of the most revealing parts of the strategy is its explicit treatment of narrative influence as strategic infrastructure, particularly in its language on Europe.
The document does not describe Europe only in economic or security terms. It frames Europe as facing a deeper crisis:
“But this economic decline is eclipsed by the real and more stark prospect of civilizational erasure.”
It goes further, identifying the causes not merely as external threats, but as internal regulatory, cultural, and informational dynamics:
“The larger issues facing Europe include activities of the European Union and other transnational bodies that undermine political liberty and sovereignty… censorship of free speech and suppression of political opposition… and loss of national identities and self-confidence.”
The proposed response is not only economic or military. It explicitly frames narrative influence as a strategic tool:
“We want Europe to remain European, to regain its civilizational self-confidence, and to abandon its failed focus on regulatory suffocation.”
Whether one agrees with this characterization is irrelevant to the technopolitical analysis. What matters is that the strategy explicitly positions narrative influence as a policy instrument, and that instrument operates through commercial technology platforms (X, Meta, TikTok, Google, etc.) and is amplified by AI.
Digital platforms, cloud ecosystems, social networks, search engines, content moderation systems, and increasingly AI models themselves are the infrastructures through which:
speech norms are enforced
visibility is allocated
legitimacy is shaped
certain narratives are amplified while others are deprioritized
In the same strategy, the U.S. commits to deep coordination with its private technology sector:
“The U.S. Government’s critical relationships with the American private sector help maintain surveillance of persistent threats to U.S. networks, including critical infrastructure.”
Taken together, these passages illustrate a central technopolitical reality:
platforms are not politically or narratively neutral, even when operating as private companies.
AI systems trained on curated datasets, governed by specific safety frameworks, deployed through dominant cloud platforms, and aligned with particular standards inevitably carry embedded assumptions about acceptable speech, legitimate authority, and social norms.
This is not conspiracy theory but an explicit strategy. The document states openly that narrative influence is a policy tool to assert control and enforce alignment, and that tool operates through the technology platforms enterprises depend on.
What this means for enterprise leaders
For organisations operating in Europe and globally, this has concrete implications:
Platforms and AI tools embed normative assumptions that may differ across jurisdictions
Content moderation, model alignment, and governance frameworks increasingly reflect strategic priorities, not just ethical abstractions
Claims that AI or platforms are “neutral” are operationally misleading
For leaders, the question is not whether narrative influence exists, but whether they understand:
which narratives their technology stack implicitly supports
which jurisdictions shape their platforms’ governance models
how AI systems may amplify or suppress certain perspectives by design
In short: AI is not only an automation tool. It is a narrative infrastructure.
Infrastructure deployment as dependency creation
The strategy is unusually direct about infrastructure abroad:
“We should partner… to build scalable and resilient energy infrastructure, invest in critical mineral access, and harden existing and future cyber communications networks that take full advantage of American encryption and security potential.”
And even more explicitly:
“We should make every effort to push out foreign companies that build infrastructure in the region.”
This is infrastructure as alignment. Once embedded, infrastructure creates long-term technological and operational dependencies. Although not named in this passage, China is clearly the foreign builder the strategy seeks to displace.
What this means for corporate leaders
Infrastructure contracts signal long-term alignment, not just procurement decisions.
Vendor choice increasingly implies geopolitical positioning.
Exiting embedded infrastructure dependencies becomes harder over time.
In short, this is the weaponisation of digital infrastructure: using dependencies to assert control over other nations and regions.
Practical conclusion: what leaders should do now
Reading this strategy through a technopolitics lens leads to a small number of practical actions.
Map your dependency stack: Across energy, compute, chips, cloud, models, data, and standards. Identify single points of failure and jurisdictional exposure.
Design for portability and reversibility: Architect systems so workloads, data, and models can move when conditions change.
Elevate technology decisions to board-level risk: AI, cloud, and infrastructure choices now shape continuity, not just cost.
Treat standards as strategic inputs: Understand which norms, defaults, and governance models you are embedding through platforms.
Build technopolitical literacy: Not to debate geopolitics, but to understand how strategy documents like this translate into operational reality.
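The first action above, mapping the dependency stack, can start as something very simple: an inventory of (layer, vendor, jurisdiction) entries with two concentration checks. The layers, vendor labels, and thresholds below are illustrative assumptions, not recommendations; the point is that single points of failure and jurisdictional exposure become queryable facts rather than intuitions.

```python
from collections import Counter

# Hypothetical dependency inventory: (layer, vendor, jurisdiction) triples.
dependencies = [
    ("compute", "cloud_vendor_a", "US"),
    ("compute", "cloud_vendor_a", "US"),
    ("model",   "model_vendor_b", "US"),
    ("data",    "saas_vendor_c",  "EU"),
    ("chips",   "fab_vendor_d",   "TW"),
]

def single_points_of_failure(deps):
    """Layers served by exactly one vendor: no fallback if that vendor is cut off."""
    vendors_per_layer = {}
    for layer, vendor, _ in deps:
        vendors_per_layer.setdefault(layer, set()).add(vendor)
    return sorted(layer for layer, vendors in vendors_per_layer.items()
                  if len(vendors) == 1)

def jurisdiction_exposure(deps):
    """Share of dependencies under each jurisdiction: a crude concentration view."""
    counts = Counter(jurisdiction for _, _, jurisdiction in deps)
    total = sum(counts.values())
    return {j: round(n / total, 2) for j, n in counts.items()}

print(single_points_of_failure(dependencies))  # every layer in this toy inventory
print(jurisdiction_exposure(dependencies))
```

In a real organisation the inventory would be generated from procurement and architecture records, and the checks would extend to contractual exit clauses and data-residency constraints; the structure, however, stays this simple.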
A final thought
This strategy does not tell companies what to think. It tells them what kind of technological environment they will be operating in.
In that environment, the central leadership question is not: Which technology is best?
But: Which dependencies can we live with when the world reconfigures?
That is the real challenge enterprise leaders must now address.
Thanks for reading,
Damien Kopp
I am a Senior Technology Advisor who works at the intersection of AI, business transformation, and geopolitics through RebootUp (consulting) and KoncentriK (publication): what I call Technopolitics. I help leaders turn emerging tech into business impact while navigating today’s strategic and systemic risks. Get in touch to know more: connect with me on LinkedIn or send me an email at damien.kopp@rebootup.com