According to Dylan Patel, chief analyst at SemiAnalysis, a chip industry research firm, running ChatGPT could cost OpenAI as much as $700,000 a day because the service runs on expensive computing infrastructure.
Estimated cost of running ChatGPT (GPT-4)
Patel pointed out that ChatGPT needs enormous computing power to generate responses to user prompts, such as writing cover letters, drafting lesson plans, and helping users polish their personal profiles. "Most of the cost comes from expensive servers," he said.
Also, while Patel's original estimate was based on OpenAI's GPT-3 model, ChatGPT is now likely to be more expensive to run after adopting the latest GPT-4 model.
OpenAI has not yet responded to these estimates.
Patel and Afzal Ahmad, another SemiAnalysis analyst, noted that while outside observers have estimated that training the large language model behind ChatGPT may cost hundreds of millions of dollars, the operating expenses, that is, the cost of AI inference, far exceed the training cost at any reasonable deployment scale. "Indeed, ChatGPT's inference costs exceed its training costs on a weekly basis," they pointed out.
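The relationship between query volume and serving cost can be made concrete with a back-of-envelope model. The figures below are purely hypothetical placeholders for illustration; they are not numbers from SemiAnalysis or OpenAI, and the function is an assumed simplification (real serving costs also depend on batching, utilization, and model size).

```python
def daily_inference_cost(queries_per_day: int,
                         gpu_seconds_per_query: float,
                         gpu_cost_per_hour: float) -> float:
    """Estimate daily serving cost from per-query GPU time.

    A deliberately simple model: total GPU-hours consumed per day,
    multiplied by an hourly GPU rental rate.
    """
    gpu_hours = queries_per_day * gpu_seconds_per_query / 3600
    return gpu_hours * gpu_cost_per_hour

# Hypothetical scale: 10M queries/day, 3 GPU-seconds per query,
# $3 per GPU-hour. All three numbers are illustrative assumptions.
daily = daily_inference_cost(10_000_000, 3, 3.0)
print(f"${daily:,.0f} per day")  # prints "$25,000 per day"
```

Even this toy model shows why inference dominates: a one-time training bill is paid once, while the daily serving cost above recurs indefinitely and scales linearly with traffic.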
Companies using OpenAI's language models have also been paying high prices for the past few years. Startup Latitude developed an AI dungeon game that generates a storyline based on user input. The company's chief executive, Nick Walton, said that the cost of running the model, together with the corresponding Amazon AWS cloud servers, reached $200,000 a month in 2021. Walton ultimately decided to switch to language software from AI21 Labs, which cut the company's AI costs in half, to $100,000 a month.
"We'll joke that we have human workers and AI workers and spend roughly the same amount on both," Walton said in an interview. "We're spending hundreds of thousands of dollars a month on AI. , and we’re not a big startup, so it’s a huge expense.”
To reduce the operating cost of generative AI models, Microsoft is reportedly developing an AI chip code-named "Athena", a project that began in 2019. That same year, Microsoft reached a $1 billion investment agreement with OpenAI that requires OpenAI to run its models exclusively on Microsoft's Azure cloud servers.
Two considerations lie behind Microsoft's chip project. According to people familiar with the matter, Microsoft executives realized they were behind Google and Amazon in developing in-house chips. Meanwhile, Microsoft is looking for cheaper alternatives to Nvidia's GPUs.
Currently, Microsoft has about 300 employees working on the chip. The chip could be released as early as next year for internal use by Microsoft and OpenAI, the sources said. Microsoft declined to comment for this story.
If successfully developed, the Athena chip is expected to reduce the cost of running AI models, which would benefit not only Microsoft's and OpenAI's own projects but potentially the entire AI industry. Lower costs would help popularize artificial intelligence, allowing more businesses and individuals to take advantage of the technology.
In short, with the development of Microsoft's "Athena" chip, more high-quality, low-cost AI products may appear in the future, further promoting the widespread adoption of AI technology across all areas of society.