
How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence

While upstarts like Anthropic may have created powerful breakthrough tech, they still need Big Tech's money, data centers, and cloud computing resources to make it work.

(The Washington Post) -- In 2021, a group of engineers abandoned OpenAI, concerned that the pioneering artificial intelligence company had become too focused on making money. Instead, they formed Anthropic, a public-benefit corporation dedicated to creating responsible AI.

This week, the do-gooders at Anthropic threw in with a surprisingly corporate partner, announcing a deal with Amazon worth up to $4 billion.

The arrangement highlights how AI's insatiable need for computing power is pushing even the most anti-corporate start-ups into the arms of Big Tech. Before Anthropic announced Amazon as its "preferred" cloud partner, it boasted in February of a similar relationship with Google. (Anthropic's February blog post no longer has the word "preferred.")

Spokespeople for both companies said Google and Anthropic's relationship is unchanged.

The AI boom is widely seen as the next revolution in technology, with the potential to catapult a new wave of start-ups into the Silicon Valley stratosphere. But instead of breaking Big Tech's decade-long dominance of the internet economy, the AI boom so far appears to be playing into its hands.

Big Tech's warehouses of powerful computer chips are necessary to train the complex algorithms behind AI chatbots, giving Amazon, Google and Microsoft immense sway over the market. And while upstarts like Anthropic may have created powerful breakthrough tech, they still need Big Tech's money and cloud computing resources to make it work.

"To build AI at any kind of meaningful scale, any developer is going to have core dependencies on resources that are largely concentrated in only a few firms," said Sarah Myers West, managing director at the AI Now Institute, which researches the effects of AI on society. "There really isn't a path out of it."

Training "generative" AI systems like chatbots and image generators is hugely expensive. The technology behind them has to crunch through trillions of words and images before it can produce humanlike text and photorealistic pictures from simple prompts. That work requires thousands of specialized computer chips sitting in huge data centers that use enormous amounts of energy.

And demand is only rising. Northern Virginia - the most important region in the world for computer warehouses - added 20 percent to its overall capacity in 2022, according to real estate company CBRE. Still, vacancy rates at data centers in the region were less than 2 percent at the beginning of this year.

In January, OpenAI, the start-up that kicked off the AI boom by launching ChatGPT last year, announced a similar multibillion-dollar deal with Microsoft, giving the tech giant deep access to the new technology and allowing it to rush out a chatbot of its own. Anthropic's deal with Amazon doesn't tie the two companies as closely together, but it does let Amazon engineers use Anthropic models in their products, Amazon said in a press release announcing the deal.

Federal Trade Commission chair Lina Khan has said the agency is watching closely for signs of anticompetitive behavior. In March, the FTC opened an inquiry into cloud computing providers, asking whether AI products are dependent on the cloud provider they're built on. Regulators elsewhere are watching, too. The offices of Nvidia, which makes the computer chips and software necessary to train large language models, were raided Wednesday by French competition authorities, according to the Wall Street Journal.

"We need to be very vigilant to make sure this is not just another site for the big companies becoming bigger and really squelching their rivals," Khan said at the Spring Antitrust Enforcers Summit in March. "When you have these moments of technological transition, . . . you see the incumbents sometimes having to resort to anticompetitive tactics to protect their moats and protect their dominance."

Russell Wald, director of policy at Stanford University's Institute for Human-Centered AI, said competition does exist, but only among the small group of players with access to computing power. Wald, who organizes a program to teach congressional staffers about AI, worries that some regulatory proposals could make things worse: For example, he said, requiring companies to get their AI models licensed by the government could help bigger players and make it difficult for smaller start-ups to compete.

Some business leaders aren't as concerned about Big Tech's control over computing power, arguing that the cost of running AI models will inevitably go down as competition and efficiency rise.

"We're going to stop brute-forcing our AI progress," said Matt Calkins, chief executive of Appian, a publicly-traded software company that is building AI tools of its own. "I expect more efficiency."

When ChatGPT launched in November 2022, it sent shock waves through the technology world. Tech pundits speculated that Google's search business was in trouble because people could ask ChatGPT questions instead of Googling them. The Big Tech firms sprang into action, moving at a speed observers hadn't seen from them in years. Google told workers to stop sharing its AI research with the public. Microsoft pushed out a new chatbot, Bing, that immediately expressed hostility toward its users, raising questions about whether it was quite ready for prime time.

This month, a flurry of announcements from Google, Microsoft, Amazon and OpenAI illustrated the frenzied pace of competition. Google integrated its Bard chatbot into Gmail, Google Docs and some of its other products; users found the tool making basic mistakes. Amazon announced a new conversation mode for its Alexa speakers using cutting-edge chatbot tech; in an onstage demonstration, the tool lapsed into long pauses between answers.

But the ability to push AI tech to customers through existing products is a key advantage, said Myers West. ChatGPT rocketed to popularity through word of mouth, social media posts and news coverage, but after only a few months was already losing users, according to a report from web traffic monitoring firm SimilarWeb. Big Tech companies have billions of users coming to them every day.

"Ownership of the ecosystem matters," Myers West said.

The partnerships with Big Tech have triggered angst among some AI workers and researchers, said Manoj Vekaria, a software engineer in Seattle. AI labs like OpenAI and Anthropic may claim independence, he said, but it's hard to predict how long that will last.

"What if the leadership changes? What if Amazon gets a new CEO? What if Anthropic gets a new CEO?" Vekaria said. "When you're taking their money, you're selling your soul."

For now, Anthropic appears to be trying to keep its options open. In a statement announcing the Amazon deal, Amazon said "Anthropic plans to run the majority of its workloads on AWS." But despite switching its "preferred" status, Anthropic is still primarily using Google servers, according to a person familiar with the company's cloud computing setup who spoke on the condition of anonymity to discuss internal matters.

- - -

Nitasha Tiku and Cat Zakrzewski contributed to this report.
