Outcome-based pricing in software, the price of God, and nobody likes Nvidia part III
8 September 2024 | Issue #28 - Mentions $ADBE, $GOOG, $MSFT, $META, $NVDA, $AVGO
Welcome to the twenty-eighth edition of Tech takes from the cheap seats. This is my public journal, where I aim to write weekly on tech and consumer news and trends that I find interesting.
Let’s dig in.
Software pricing
Longtime readers know that this newsletter keeps a close eye on software pricing given its importance in keeping the AI train chugging along. This week had a couple of stories worth noting.
From TechCrunch
Canva, the design platform, is increasing prices steeply for some customers. And it’s blaming the move in part on generative AI.
In the U.S., some Canva Teams subscribers on older pricing plans are seeing the sticker price for a five-person plan jump from $119.99 per year to $500 per year (with a 40% discount for the first 12 months). In Australia, meanwhile, the flat $39.99 AUS (about $26) per-month fee for up to five users has been raised to $13.50 AUS for each user.
On an individual user basis, Canva Teams prices are now $100 per person, or $10 a month per person, with a minimum of three people required for a Teams plan. Those prices were quietly changed earlier this year for new customers, but now the company is changing the price for customers who’d previously paid a lower price.
The price changes don’t apply to Canva’s Pro or Enterprise tiers.
In a statement to TechCrunch, a Canva spokesperson confirmed the new price points and pointed to the company’s growing suite of generative AI tools — including Magic Studio — as a reason for the adjustments. They also noted that some of Canva’s customers had been locked into lower prices that Canva no longer offers; Canva quietly changed its Teams pricing earlier this year to $10 per month for each user.
“Our original pricing reflected the early stage of this product and has remained unchanged for the last four years,” the spokesperson said. “We’re now updating the price for customers on this older plan to reflect our expanded product experience.”
While Canva cites increased costs from generative AI tools as a factor in its price hikes, I'd argue they're not the primary driver. A more likely scenario? Canva is gearing up for an IPO in the coming year. It's a common strategy for companies to boost their financial metrics pre-IPO to maximize valuation - nothing inherently wrong there, just business as usual. Let's not mistake this for a broader indicator of AI's value or a direct comparison to Adobe's pricing model. Even with the increase, Canva remains a bargain at roughly a third of Adobe Illustrator's cost, while offering far greater accessibility for non-pros. The reality is that Canva has been underpricing its product for years. This move is about capturing more of its true value and evolving as a company. AI tools conveniently serve as a shiny new feature to justify the price bump to frustrated customers. It's a savvy play, leveraging innovation to support a necessary business decision.
Value-based pricing, a concept that gained traction in healthcare, is now making its way into the software industry. Cloud-based systems revolutionized software deployment, making installation a breeze. This ease of deployment led to some interesting sales tactics: big discounts on long-term contracts became the norm, with salespeople eager to get a foot in the door. The strategy? Get customers to deploy widely, then hope they forget about the subscription. Once the initial discounted term expires, software companies can rake in full rates from a large user base. This approach thrived during the zero interest rate policy (ZIRP) era, when customers were less price-sensitive; money was cheap, and times were good. But as we've entered the age of efficiency, the game has changed. Every expense is now under the microscope, and "shelf-ware" (unused software licenses) has become a luxury companies can't afford.

Enter AI and the concept of outcome-based pricing. The idea is appealing: customers only pay for software that actually delivers on its promises. It's a model that aims to align the interests of software providers and their clients more closely. In a perfect world, this would solve the problem of wasted resources and ensure companies only invest in tools that truly add value. However, we're not living in a perfect world…
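To make the mechanics concrete, here's a minimal sketch in Python of how vendor revenue behaves under per-seat versus outcome-based pricing. All the prices and volumes below are made-up assumptions for illustration, not real vendor figures:

```python
# Hypothetical comparison of per-seat vs outcome-based pricing.
# All figures are illustrative assumptions, not real vendor prices.

SEAT_PRICE = 50.0         # $/seat/month, assumed
PRICE_PER_OUTCOME = 0.99  # $/successful resolution, assumed (Fin-like)

def per_seat_revenue(seats: int) -> float:
    """Vendor gets paid per licence, whether or not it's used ('shelf-ware')."""
    return seats * SEAT_PRICE

def outcome_revenue(resolutions: int) -> float:
    """Vendor gets paid only when the software completes the task on its own."""
    return resolutions * PRICE_PER_OUTCOME

# A 100-seat customer that actively uses only 30 seats still pays for 100...
print(per_seat_revenue(100))   # $5,000/month, regardless of usage
# ...while an outcome-based vendor's revenue tracks delivered value directly.
print(outcome_revenue(3000))   # $2,970/month for 3,000 bot resolutions
```

The per-seat model insulates the vendor from low utilisation; the outcome model puts that risk back on the vendor, which is exactly the alignment (and the revenue unpredictability) discussed below.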
From The Information
Last month, Zendesk, which makes customer support software aided by artificial intelligence, decided to sell it in a daring new way.
Instead of charging businesses based on how often they use the software—essentially an AI chatbot—to try to resolve customer problems, Zendesk began charging them only when the chatbot completed the task without needing employees to step in.
Traditional software fees don’t make sense in an increasingly automated world, said Nikhil Sane, Zendesk’s senior vice president of revenue acceleration.
“Just because I used your service doesn’t mean I got value,” he said.
The unusual pricing decision comes as Zendesk and other software providers are predicting AI will reliably automate certain roles in the workplace. If they’re right, that could mean fewer workers would subscribe to software plans that charge based on the number of users, also known as software as a service, which has been a bedrock of the enterprise software industry for more than 20 years.
Zendesk isn’t alone. Two of its rivals, Intercom and Forethought, have also started asking their customers to pay for AI-powered features only when they work well enough that customers can set them on autopilot.
The business model, called outcome-based pricing, also aims to entice more spending from customers during a period when many of them have tightened their IT budgets.
It’s too early to tell the impact of the new pricing on software providers’ growth and margins, and it still represents a small portion of their business. Publicly traded software firms haven’t adopted outcome-based prices.
But executives at the firms that have done so say the rest of the industry will follow as it increasingly uses AI to launch products that automate tasks such as customer support, sales and recruiting. That’s because software buyers are already growing wary of the high-priced AI features that are quickly becoming ubiquitous in software products, and they need to calculate how new purchases will affect their businesses’ bottom line.
One big risk of charging companies based on completing tasks rather than just for usage is that revenue might be more unpredictable—especially when the AI doesn’t work as intended—and the approach could lead to lower sales than traditional pricing schemes like subscriptions.
The article highlights RB2B, a software company that helps businesses pull the social-media profiles of potential customers who visit their websites, and its success with Intercom's Fin AI bot.
Clarke estimated that Fin saved his two-person support staff 142 hours of work in August, given that it takes 15 minutes on average for a human to resolve a ticket. The company paid 99 cents per resolution, compared with roughly $10 for a human to resolve an average ticket.
For Clarke, Fin significantly reduced the tedium of resolving mundane or repetitive customer queries.
“I was tired of answering the same questions over and over,” he said.
Other Intercom customers may feel similarly. Some 17% of purchases of Intercom software in the last six months included outcome-based pricing for Fin, up from 10% in the prior six months, said Ryan Neu, CEO of Vendr, a marketplace where companies can find and buy software.
The article presents counterarguments to outcome-based pricing, citing CEOs from competing AI chatbot companies who have chosen not to adopt this model.
Some software executives are skeptical of outcome-based pricing. Bhavin Shah, CEO of Moveworks, whose chatbot automates IT support or help desk inquiries, said he decided against adopting such pricing because he believes it would be hard to put a single price on the value of resolving an IT ticket.
Different Moveworks customers—and even different corporate departments within an individual customer—would feel differently about the value they got from resolving an IT ticket, he said. Moveworks customers include Instacart and Palo Alto Networks, and they pay a subscription fee based on the number of employees who get access to its chatbot.
“If you sit here with a business model that requires this perfect agreement on value, then you’re not going to grow very fast,” Shah said.
Another skeptic of outcome-based pricing, Decagon, sells an AI chatbot for customer support—similar to those from Intercom and Zendesk—and charges for each use of the chatbot, regardless of whether it resolves customer issues on its own, CEO Jesse Zhang said.
An outcome-based model might encourage buyers to use the product less, he said. For example, they might feel they shouldn’t have to pay for simpler queries and filter those out before sending harder ones to Decagon’s AI, he said.
“We don’t really believe in the pure per-resolution pricing because it creates weird incentives” that could hurt his business, he said.
Zhang's claim that outcome-based pricing might discourage product usage seems questionable. Intercom's Fin AI chatbot charges 99 cents per resolution, while this article and podcast estimates the average cost of a human-handled call at $10-13. At face value, this makes AI resolutions appear highly cost-effective and should logically drive increased AI usage. However, the $10-13 figure warrants scrutiny. Based on the average U.S. call center wage of $18/hour, this implies a 30-minute resolution time (factoring extra costs for benefits and training). Yet, typical call center metrics suggest 10-20 inbound calls per hour. Using the midpoint of 15 calls, the cost per query comes to just $1.33, assuming all are resolved. This narrows the cost advantage of AI significantly. The math becomes more favorable for AI if we assume only half of human-handled calls reach resolution. This would push the cost to $2.67 per resolved query, making Fin's 99-cent price more attractive. However, if we consider outsourcing to countries with lower wages (e.g., Philippines at $3/hour), the human option becomes even more competitive. These calculations lead me to believe that the room for price increases on a per-resolution basis may be more limited than expected. It’ll be interesting to see Intercom's financial performance as it’ll be a crucial indicator for the next generation of software companies adopting similar models.
The price of God
From The Information
How much would you be willing to pay for ChatGPT every month? $50? $75? How about $200 or $2,000?
That’s the question facing OpenAI, whose executives we hear have discussed high-priced subscriptions for upcoming large language models, such as OpenAI’s reasoning-focused Strawberry and a new flagship LLM, dubbed Orion. How much customers are willing to pay for the AI chatbot matters not only to OpenAI but rivals offering similar products, including Google, Anthropic and others.
In early internal discussions at OpenAI, subscription prices ranging up to $2,000 per month were on the table, said one person with direct knowledge of the numbers, though nothing is final. We have strong doubts the final price would be that high.
Still, it’s a notable detail because it suggests that the paid version of ChatGPT, which was recently on pace to generate $2 billion in revenue annually, largely from $20-per-month subscriptions, may not be growing fast enough to cover the outsize costs of running the service. Those costs include the expenses of a free tier used by hundreds of millions of people per month.
And more-advanced models such as Strawberry and Orion may be more expensive to train and run than prior models. For instance, we’ve reported that, when given additional time to think, the Strawberry model can answer more complicated questions or puzzles than OpenAI's current models can. That additional thinking, or processing time, could mean more computing power—and, therefore, more costs. If that’s the case, OpenAI may want to pass along some of the costs to customers.
Of course, a high price would also mean OpenAI believes its existing white-collar customers of ChatGPT will find these upcoming models a lot more valuable to their coding, analytics or engineering work.
Exponential rises in cost aren't limited to training; opex is climbing too. A $2,000 monthly fee for a chatbot might seem steep compared to current offerings, but it's not out of line with high-end enterprise software subscriptions. Take Bloomberg terminals, for instance, which run about $24,000 annually per user; a $2,000-per-month subscription annualises to the same figure. The value proposition hinges on the chatbot's capabilities: if it can boost an equity analyst's productivity comparably to a Bloomberg terminal, the price could be justified. This pricing strategy suggests that OpenAI will focus on enterprises to monetise its IP.
However, there are a few considerations OpenAI will need to mull over before it sets prices:
Competition: Multiple well-funded rivals are developing similar technologies.
Performance: To warrant premium pricing, these new models must significantly outperform existing models.
Industry influence: OpenAI has historically been the price setter in the chatbot market, so its moves could pave the way for competitors like Google and Anthropic to follow suit with their own price hikes.
If we do see widespread price increases, it may indicate just how substantial the computing costs of running inference on these new models are. Assuming OpenAI maintains its current ~50% gross margin, these price points suggest a significant step up in per-user inference spend.
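As a rough sketch of what those price points would imply for per-user compute spend, assuming the ~50% gross margin holds (every input here is an assumption):

```python
# What a given subscription price implies about per-user serving cost,
# assuming OpenAI holds its ~50% gross margin. Purely illustrative.

GROSS_MARGIN = 0.5  # assumed

def implied_serving_cost(monthly_price: float) -> float:
    """Max compute cost per user per month consistent with the margin."""
    return monthly_price * (1 - GROSS_MARGIN)

for price in (20, 200, 2000):
    print(f"${price}/mo subscription -> ~${implied_serving_cost(price):.0f}/mo of compute per user")

# $20 -> ~$10, $200 -> ~$100, $2,000 -> ~$1,000: the top of the rumoured
# range implies ~100x the per-user inference spend of today's ChatGPT Plus.
```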
Copilot apathy
While OpenAI is contemplating raising prices, Microsoft faces a different challenge: some of its customers are struggling to justify the $30 monthly subscription cost for 365 Copilot, finding it difficult to realize sufficient value from the AI-powered tool.
From The Information
Microsoft’s vision of using artificial intelligence to take some of the drudgery out of creating spreadsheets, documents and slide presentations is running into snags at businesses like Ascendion, a technology consulting firm.
The firm has about 100 employees testing a Microsoft AI feature known as 365 Copilot that automates tasks in Microsoft 365, the suite of applications, formerly known as Office, that includes Word, Excel and PowerPoint. The AI does a good job of summarizing recordings of meetings and drafting emails based on short written prompts, said Viral Tripathi, Ascendion’s chief information officer.
However, Tripathi said, 365 Copilot has fallen short in other areas, such as generating visuals and presentations in Excel and PowerPoint. The firm still plans to expand its use of the AI feature to all of its 3,500 employees globally, as long as it can demonstrate a return on its investment in the technology. “So far, we’ve had mixed results,” he said. “Most people don’t find it that valuable right now, but it’s a product that’s going to improve over time.”
Corporate technology managers are echoing that perspective about Microsoft’s efforts to remake its productivity applications—one of the most lucrative and longstanding software franchises—for the AI era. While there’s still optimism that 365 Copilot and other forms of AI will eventually deliver breakthroughs in productivity and other benefits, many businesses say they haven’t seen them yet and aren’t sure when they will.
Some of Copilot's weak performance is related to limited access to business data
According to some of Microsoft’s competitors, part of 365 Copilot’s weakness is that its intended use cases are too broad, rather than directly tied to a specific function. SAP offers its own AI assistant, which is embedded in its own sales and human resources software.
“The Microsoft Copilot is rather stupid because it knows nothing about the business context,” Thomas Saueressig, an executive board member at SAP, said in an interview.
Saueressig added that SAP and Microsoft inked a deal that would let people connect their SAP accounts with Microsoft’s applications so its AI assistant could draw on customers’ SAP data. (Microsoft also sells Viva Sales Copilot—an AI assistant separate from 365 Copilot—that connects to the company’s own customer relationship management software.)
Part of it comes down to operational process
One of the top priorities for the Microsoft 365 team in recent months has been getting the AI tools in Excel to work better with large spreadsheets. Currently, Excel struggles when people use the assistant to reorganize spreadsheets or create data visualizations, especially in large documents with more than a million rows, according to a current Microsoft employee involved in the effort.
To solve the issue, Microsoft engineers have devised fixes that involve configuring the AI software to break up each prompt a user types into 365 Copilot into several smaller tasks and instructing different AI models to complete each step individually, according to someone involved in the effort. They found that doing so helped the AI models carry out the tasks more reliably, without making mistakes that could derail processes such as visualizing data, this person said.
Microsoft’s Jared Spataro confirmed that this work is underway, adding that engineers are also building new features in the PowerPoint Copilot that similarly break down the process of creating AI-generated presentations into several steps. Users of the software will be able to select what they want the AI to do at each step, he said.
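This decomposition pattern is worth sketching, since it has become a standard way to make LLMs more reliable on multi-step tasks. The function names and steps below are hypothetical placeholders, not Microsoft's actual implementation:

```python
# Sketch of the prompt-decomposition pattern described above: split one
# broad request into small steps and hand each to a separate model call.
# Function names and prompts are hypothetical, not Microsoft's real code.

from typing import Callable

LLMCall = Callable[[str], str]  # takes a prompt, returns model text

def run_decomposed(prompt: str, planner: LLMCall, executor: LLMCall) -> list[str]:
    """Ask a planner model to break the prompt into steps, then run each step."""
    plan = planner(f"Break this spreadsheet task into small, ordered steps:\n{prompt}")
    steps = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]
    # Each step is executed independently, so a failure in one step (say,
    # building a chart) doesn't derail the whole pipeline.
    return [executor(f"Perform exactly this step: {step}") for step in steps]
```

The payoff is reliability: each model call gets a narrow, checkable job instead of one sprawling instruction over a million-row spreadsheet.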
Microsoft finds itself in a unique position, attempting to monetize a product still evolving in functionality. To reach its full potential, the company must make exponentially larger investments. This strategy ventures into uncharted territory, with success hinging on the validity of AI scaling laws.
Breaking free from Nvidia: An ongoing saga
From Yahoo
ChatGPT developer OpenAI has been musing over building its own AI chips for some time now but it looks like the project is definitely going ahead, as United Daily News reports the company is paying TSMC to make the new chips. But rather than using its current N4 or N3 process nodes, OpenAI has booked production slots for the 1.6 nm, so-called A16, process node.
The report from UDN (via Wccftech) doesn't provide any concrete evidence for this claim but the Taiwanese news agency is usually pretty accurate when it comes to tech forecasts like this. At the moment, OpenAI spends vast amounts of money to run ChatGPT, in part due to the very high cost of Nvidia's AI servers.
Nvidia's hardware dominates the industry, with Alphabet, Amazon, Meta, Microsoft, and Tesla spending hundreds of millions of dollars on its Hopper H100 and Blackwell superchips. While the cost of designing and developing a competitive AI chip is just as expensive, once you have a working product, the ongoing costs are much lower.
UDN suggests that OpenAI had originally planned to use TSMC's relatively low-cost N5 process node to manufacture its AI chip but that's apparently been dropped in favour of a system that's still in development—A16 will be the successor to N2, which itself isn't being used to mass produce chips yet.
As the leading LLM developer, OpenAI occupies a unique position in the AI landscape. The industry generally accepts that computing costs increase tenfold between model generations, quickly reaching astronomical figures (as evidenced by MS estimates). OpenAI is undoubtedly running similar calculations internally, grappling with the economic challenges of scaling its technology. One potential solution is developing custom chips, which could reduce costs by as much as a factor of 10 (comparing Google TPU prices to Nvidia's B200). However, OpenAI's late entry into chip development presents challenges. It may need to sacrifice some performance, given the necessity of building an ecosystem of libraries and toolsets to optimize its chips instead of leveraging Nvidia's established CUDA platform, and Nvidia maintains a significant lead in cost-performance ratio. Consequently, if OpenAI chooses to develop its own chips, it might need to temporarily slow its progress towards AGI. This "pit stop" in the race towards advanced AI could be a strategic necessity for long-term sustainability.
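Here's the compounding arithmetic that makes this calculus so stark, under the assumptions above: a ~10x cost increase per model generation and a ~10x saving from custom silicon. The $100M baseline is an illustrative placeholder, not a reported figure:

```python
# Compounding training-cost arithmetic under the assumptions in the text:
# ~10x cost per model generation, custom silicon cutting costs ~10x.
# The baseline and both multipliers are illustrative assumptions.

BASELINE_COST = 100e6    # assumed $100M for the current generation
GEN_MULTIPLIER = 10      # assumed 10x per generation
CUSTOM_CHIP_SAVING = 10  # assumed 10x saving (TPU-vs-B200 style comparison)

for gen in range(4):
    gpu_cost = BASELINE_COST * GEN_MULTIPLIER ** gen
    custom_cost = gpu_cost / CUSTOM_CHIP_SAVING
    print(f"gen +{gen}: GPUs ~${gpu_cost/1e9:.2f}B, custom silicon ~${custom_cost/1e9:.2f}B")

# Under these assumptions, a 10x chip saving effectively buys back one
# whole model generation of training cost.
```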
From The Information
Broadcom CEO Hock Tan said a long-term trend in the artificial intelligence chip industry is that “hyperscalers"—the biggest cloud providers and other tech giants operating huge platforms—are all moving toward creating their own custom chips. If such a trend accelerates, Broadcom, which helps tech giants like Google and Meta Platforms design custom AI chips, could become more competitive against Nvidia, the dominant AI chip supplier.
“Those few hyperscalers, platform guys, will create their own [AI chips] if they haven’t already done it, and start to train them on their large language models,” Tan said during a conference call with analysts after Broadcom reported its quarterly earnings. Broadcom’s revenue for the fiscal third quarter through Aug. 4 rose 47% to $13.1 billion. The company posted a net loss of $1.88 billion due in part to a one-time tax provision.
Most enterprise customers will continue to use general-purpose AI chips because they don’t have the capabilities or financial resources to create their own custom chips. But the sheer size of the demand for custom chips from the small number of hyperscalers will rival the demand for general-purpose AI chips from all other customers, Tan said.
Related to the previous story, it's worth noting a significant shift in industry perspective. Hock Tan, who previously believed general-purpose silicon would ultimately dominate based on semiconductor history, reversed his stance 3-6 months ago. He now predicts custom ASICs will gain market share from GPUs in AI applications. This shift aligns with current industry trends. Google employs its TPUs for both training and inference of Gemini, while Meta utilizes custom silicon for inference in ranking and content recommendation systems. However, training remains Nvidia's stronghold due to the critical need for raw performance. Their GPUs offer superior performance per cost and watt for these intensive tasks. Until custom ASICs can match this efficiency in training, Nvidia's customers will likely remain tethered to its ecosystem.
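One way to frame that stronghold is training throughput per lifetime dollar, capex plus electricity. A toy comparison, where every number is a hypothetical placeholder rather than a real benchmark:

```python
# Toy perf-per-TCO-dollar comparison between a GPU and a custom ASIC.
# Every number here is a hypothetical placeholder, not a real benchmark.

def perf_per_tco_dollar(throughput: float, capex: float, watts: float,
                        years: float = 4, dollars_per_kwh: float = 0.10) -> float:
    """Relative training throughput per lifetime dollar (hardware + power)."""
    energy_cost = watts / 1000 * 24 * 365 * years * dollars_per_kwh
    return throughput / (capex + energy_cost)

# Hypothetical: a GPU with 1.0 relative training throughput, $30k, 1,000W,
# vs an ASIC at 0.2 relative throughput (immature software stack), $10k, 600W.
print(perf_per_tco_dollar(1.0, 30_000, 1000))  # GPU:  ~3.0e-5
print(perf_per_tco_dollar(0.2, 10_000, 600))   # ASIC: ~1.7e-5

# With these toy numbers, the software-stack throughput penalty swamps the
# ASIC's hardware savings: the dynamic keeping training on Nvidia for now.
```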
That’s all for this week. If you’ve made it this far, thanks for reading. If you’ve enjoyed this newsletter, consider subscribing or sharing with a friend.
This is a free publication but if you’d like to support my work, please consider buying me a coffee. I welcome any thoughts or feedback, feel free to shoot me an email at portseacapital@gmail.com. None of this is investment advice, do your own due diligence.