Your AI strategy has a landlord problem

This is part one of a seven-part series I'm writing on Sustainable AI. Not sustainable as in go plant trees around your local data center, but sustainable as in: if something big changes in your world, everything you've been working towards with AI is still there tomorrow.
Imagine this. You've built a product your customers adore. Your engineering team has spent 18 months integrating an AI model into your core workflows. It handles customer support end to end, it assists your developers with their daily tasks, it powers the recommendation engine that drives 30% of your revenue. Your CTO chose the provider carefully. The integration is deep. It works beautifully.
Then one Friday afternoon, the US government designates your AI provider a supply chain risk.
You certainly don't have to imagine this part. It happened on 27th February 2026. The Department of Defense (eek, sorry, 'War') designated Anthropic, the company behind Claude, a supply chain risk to national security. It's a label that has only ever been applied to two other organisations in US history: Huawei and ZTE. Federal agencies have been ordered to purge Claude from their systems within six months. The State Department's internal chatbot switched from Claude to GPT-4.1 overnight. And this will not be the last time, mark my words. Remember, we're just at the beginning of the AI arms race.
If your reaction is "well, I'm not the US government," you're missing the point, my friends. It's the mechanism that matters. A single policy decision, made for reasons entirely outside your control, can remove your AI provider from an entire market segment overnight. Anthropic holds 32% of enterprise LLM market share. One in three enterprise AI deployments runs on Claude. That's not a niche product. That's called infrastructure!
The Windsurf lesson bites harder
While everyone was debating the Anthropic sanctions, something else was happening in the AI coding tools market.
OpenAI offered $3 billion to acquire Windsurf, the AI coding editor that hundreds of thousands of developers relied on daily. Hundreds of thousands. The deal fell apart because Microsoft's partnership agreement with OpenAI created IP complications that the parties struggled, and ultimately failed, to resolve. Google then swooped in with a $2.4 billion licensing deal and hired Windsurf's CEO, co-founder, and key engineers. Three days later, Cognition bought whatever fragments were left.
In 72 hours, Windsurf went from independent company to being picked over by three competitors like a carcass in the Arizona sun. If you had built your development workflow on Windsurf, your weapon of choice was dismembered over a weekend. You had zero say. Zero warning.
Meanwhile, Cursor refused to sell to OpenAI, raised $900 million at a $9 billion valuation, and remained independent. Anyone running open-source tooling was completely unaffected. The lesson here writes itself.
This goes deeper than the cloud lock-in we know and hate

So, you're thinking: "We deal with vendor dependencies during cloud migrations. We can handle this." Well, I wish that were true.
Cloud lock-in can be painful but navigable. You can migrate a database from AWS to Azure. It's expensive, slow, and deeply unpleasant, but it's a well-understood process with established tooling to soften the blow.
AI lock-in is fundamentally different. It's not just in your infrastructure; it's in your business logic. Proprietary prompt architectures mean that applications written against vendor-specific syntax encode the dependency directly into the code. OpenAI's 'function-calling' format is different from Anthropic's 'tool-use' format. Migration isn't just swapping an API key. It's rebuilding your entire application.
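To make that concrete, here's a simplified sketch of one and the same tool defined in both formats. The shapes follow the two vendors' published schemas, but treat the details as illustrative rather than authoritative:

```python
# The same "get_weather" tool, declared twice.

# OpenAI's function-calling format: nested under "function",
# with the JSON schema in "parameters".
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Anthropic's tool-use format: flat, with the schema in "input_schema".
anthropic_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Same capability, incompatible shapes. Now multiply that by every tool, prompt template, and response parser in your codebase, and 'swap the API key' stops being a migration plan.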
This goes deeper still. Marketplaces. Anthropic launched a marketplace in March 2026 where enterprise customers can buy partner tools through their existing Anthropic budget. Convenient, yes. But every partner tool you purchase through that marketplace ties you more tightly to Anthropic, not to the actual software vendor. It's a very elegant strategy from Anthropic's perspective. It's a total dependency trap from yours.
A Panorays survey from January 2026 found that only 15% of CISOs have complete transparency over their AI software supply chains. That's not a lot. Nearly half of all employees have adopted AI tools without their employer's approval. I was certainly guilty of that too. Most companies can't even draw a dependency graph of where AI sits in their stack.
Most organisations don't know how deep their dependency goes. And that's the problem.
Landlords and tenants
Here's how I think about it. When you build on a single AI provider without any form of abstraction layer, you're a tenant, not an owner. Your landlord controls the rent (the pricing), the rules (the usage policies), the building maintenance (the model updates…and deprecations), and ultimately whether you can even stay (those wonderful terms of service we so diligently read ;)).
A good landlord keeps the building running and leaves you alone. But you are still subject to their decisions. If they decide to sell the building, you have to deal with the new owner. If they raise the rent, you have to pay it or move. If the government decides to condemn the building, then you're out.
The companies that survived the Anthropic sanctions most comfortably were the ones that had already built for multi-tenancy. They ran multiple models. They had abstraction layers and the ability to swap providers without rebuilding their entire product. They were owners, not tenants.
Survivability is in the abstraction

The good news is that the technical community isn't sitting on its hands; it's building the escape routes. Model Context Protocol (MCP), donated to the Linux Foundation, is creating a new interoperability standard for AI tools. We're building AI gateways and unified APIs like LiteLLM, Portkey, and others that let you route requests to different models through a single interface. The AI Bill of Materials Foundation, supported by Block, Anthropic, and OpenAI, aims to become the W3C of AI interoperability, and we need one.
The architecture seems straightforward in principle. Your application talks to an abstraction layer. The abstraction layer talks to the models. If you need to swap a model, you change the routing configuration, not the application code.
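Here's a minimal sketch of that shape, using the open-source LiteLLM client as the unified interface. The `MODEL_ROUTES` map, the `complete` helper, and the model names are all illustrative, not a prescription:

```python
import litellm  # unified, OpenAI-compatible client: pip install litellm

# Routing lives in configuration, not application code.
# Swapping a provider means editing this dict, nothing else.
MODEL_ROUTES = {
    "support-agent": "anthropic/claude-sonnet-4",
    "code-assistant": "openai/gpt-4.1",
    "recommendations": "ollama/llama3",  # self-hosted option
}

def complete(task: str, messages: list[dict]) -> str:
    """Application code asks for a task, never a vendor."""
    response = litellm.completion(model=MODEL_ROUTES[task], messages=messages)
    return response.choices[0].message.content

# Elsewhere in the app: no vendor names in sight.
reply = complete("support-agent", [{"role": "user", "content": "Where's my order?"}])
```

The point isn't this particular library. It's that the vendor's name appears in exactly one file, and that file is configuration.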
In practice, like most things, it requires discipline. Every time an engineering team hardcodes a vendor-specific prompt format, they're adding a brick to the wall between you and portability. It's hard to avoid when you're moving fast, because abstraction slows down development. But every time a team uses an AI vendor-specific feature without wrapping it in an abstraction layer, they're deepening the dependency risk.
The bus test for your AI stack
There's a simple mental model I use called the "bus test." (The traditional version being "what happens if someone gets hit by a bus?" Mine is the less morbid "what happens if the bus doesn't show up?")
For every AI provider in your stack, ask: what happens if this provider is unavailable for 30 days starting tomorrow? Not gone forever. Just unavailable. A pricing change you can't absorb. A terms of service (or political ideology) change you just can't accept. A government sanction. An acquisition that changes the product roadmap or makes you question their ethics.
If the answer is "we'd be fine, we'd switch to an alternative within a week," you're in good shape. If the answer is "we'd need to rebuild significant parts of our product," then you, my friend, have a landlord problem…
So, what should you be doing?
I don't believe in articles that diagnose problems without offering solutions. So here are three things you can do this week.
First, draw the dependency graph. Ask your teams for transparency, grant them clemency. List every AI model, tool, and service your company uses. Not just the official ones; include the 'shadow' AI tools too. Map which products, features, and workflows depend on each one. You'll be surprised how deep it goes.
Second, identify the single points of failure. Which AI dependencies, if removed, would break something critical? Those are your real landlord problems. Prioritise building abstraction layers around them. Both steps are sketched below.
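Even a hand-maintained inventory gets you most of the way here. A rough sketch, where the `DEPENDENCIES` map and the feature names are made up for illustration:

```python
# Feature -> the AI providers it depends on.
# Include the 'shadow' tools your survey turns up, not just the official ones.
DEPENDENCIES = {
    "customer-support-bot": ["anthropic"],
    "code-review-assistant": ["anthropic", "openai"],
    "recommendation-engine": ["anthropic"],
    "marketing-copy-drafts": ["openai"],  # shadow tool, adopted by one team
}

# Single points of failure: features that die with exactly one provider.
for feature, providers in DEPENDENCIES.items():
    if len(providers) == 1:
        print(f"{feature}: breaks if {providers[0]} disappears")
```

Crude, yes. But if you can't produce even this, you've just learned something important about your transparency problem.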
Third, start the abstraction. You don't need to rearchitect everything overnight. Start by picking your highest-risk dependency. Build a wrapper and route through an API gateway, as in the sketch below. Make sure that if you had to swap that provider next month, the application code wouldn't need to change. Then just do the next one.
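Continuing the hypothetical gateway from earlier, here's what passing the bus test can look like in code: an ordered fallback chain per task. The `complete_with_failover` helper and the model names are, again, made up for the sketch:

```python
import litellm

# Ordered fallback chain per task. If the primary fails the bus test,
# the gateway tries the next provider; no application code changes.
FALLBACK_ROUTES = {
    "support-agent": [
        "anthropic/claude-sonnet-4",  # primary
        "openai/gpt-4.1",             # secondary
        "ollama/llama3",              # self-hosted last resort
    ],
}

def complete_with_failover(task: str, messages: list[dict]) -> str:
    """Try each provider in order; raise only if the whole chain fails."""
    last_error = None
    for model in FALLBACK_ROUTES[task]:
        try:
            response = litellm.completion(model=model, messages=messages)
            return response.choices[0].message.content
        except Exception as err:  # provider down, sanctioned, or deprecated
            last_error = err
    raise RuntimeError(f"all providers failed for {task!r}") from last_error
```

Swapping a landlord becomes a one-line config change, made on your schedule instead of theirs.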
The race to adopt AI is real, and I'm not telling you to slow down. I am telling you to check who holds the keys to the building you're racing through at 100mph. If it's not you, that's a very real problem worth solving before the next crisis decides it for you.
This is part one of the Sustainable AI series. Next week: Shadow AI is your biggest security risk and your best innovation signal.