23. How does the AI industry unfold?
2023-04-06
It seems AI is the hottest thing on the planet right now. Many people are breathlessly saying this is the most important moment in computing since [insert big thing here]. Personally, I’m very excited about it, but I also recognize that no one knows what they’re talking about. Analysts are reading tea leaves as the tea plant is sprouting.
With the caveat that I, too, lack a crystal ball, I want to think through what the dynamics of this market might look like in one, five, or ten years.
The shape of things to come
There are two AI businesses emerging: the consumer (b2c) business, and the developer and enterprise (b2b) business. The first develops and serves AI-enabled applications directly to end-users, like OpenAI’s ChatGPT and Anthropic’s Claude. The second develops and serves models, tools, and APIs to builders and purveyors of other software and services, enabling them to create AI-driven applications, much like OpenAI’s suite of APIs. The provider list runs longer here: OpenAI, Anthropic, Cohere, Google, StabilityAI, among others.
Historically, the gpt APIs have grown relatively fast; however, their growth is limited by 1) the number of developers using them, 2) the number of developers using them to create compelling end-user applications, and 3) the ability of those developers to distribute their compelling applications to end-users. In the last few months, though, the consumer business has absolutely exploded: ChatGPT became the fastest-growing consumer application in history in a matter of weeks. [1] The bottlenecks in this consumer equation are only 1) awareness of ChatGPT, 2) an internet connection, and 3) end-user imagination.
Clearly the demand curves of these two markets are distinct and, apart from the development of the underlying models, seemingly uncorrelated. Serving a Fortune 500 CIO or a tech company on the verge of IPO yields a very different product and company than does serving a college sophomore, a YC-backed founding team, or the owner of a nail salon.
AI as cloud utility
It feels likely that AI for businesses and developers becomes a commodity. We already see open source (even machine-local) alternatives to gpt models posting impressive performance numbers. OpenAI’s continual, aggressive price cuts seem to be getting ahead of, even precipitating, this. [2]
A meaningful analogy for “intelligence as commodity” is the cloud market. Where most individual Azure, AWS, GCP, Oracle, etc. offerings are commoditized, the full suite of each starts to look unique. Basic infrastructure like storage and compute, consumption pricing, and scalability are all table-stakes. Unique offerings like Google’s suite of ML services, integrations like Azure’s connections to the 365 Suite, compliance features like AWS GovCloud, discount structures, and developer experience impact customer decisions on the margin.
How might this map to AI offerings? From today’s offerings we can already start to see what features may shake out as future table-stakes:
Differentiation and “depth of suite” might happen on a few vectors:
These considerations have significant impact on the products providers build, the suites they add up to, the distribution strategies they undertake, and the success they have. (I might do some more writing on this soon.)
AI as consumer application
The dynamics of commercializing AI for end-users are different. First, there are fewer entrants tackling the consumer market, with ChatGPT, Claude, and Bing as the main players, and ChatGPT holding the majority of mindshare. While some small companies and developers offer chat interfaces on top of LLMs, they don’t have significant adoption, nor are they defensible.
The consumer market feels less concerned with price wars, broadening offerings, or exposing model complexity for power users. Instead, success here requires excellent user interactions, an emphasis on valuable use cases, and, for the moment, consistently delivering delight (and eventually, just consistently delivering).
Some key factors in this market:
Questions
The business and consumer markets stand mostly separate, apart from the fact that they are built on the same foundational technology. There’s a long list of strategic questions a provider might have, but a few bubble up for me:
- Can a single provider successfully serve and capture the entire opportunity of both the consumer and business markets at the same time? The distribution motions alone feel drastically different for these two businesses.
- Are there interactions between the b2b and b2c markets? For instance, does a developer’s experience of the Claude chatbot influence which API they choose when building a new feature?
- Which market grows faster? Which is more durable?
[1] Reuters 2023.
[3] This being said, I’m not sure that large foundational model providers could, or should, offer “verticalized” models (i.e., trained on a specific domain).
[4] It has always bothered me that I cannot see usage logs and basic statistics in OpenAI’s console, so I was very happy to see Anthropic offers this with Claude API access.
[5] OpenAI is reportedly seizing this opportunity with their “Foundry” product: TechCrunch 2023.
[7] 22. What language models are good at.
[8] I have been underwhelmed by the OpenAI plugins ecosystem at launch, but am confident the developer community will surface compelling offerings soon.
[9] Bloomberg 2022.