Intel's woes, Is the future of GenAI LLMs or SLMs, TikTok's woes
29 October 2023 | Issue #13 - Mentions $NVDA, $MSFT, $INTC, $ARM, $INTU, $AMZN, $META, $NOW, $GOOG, TikTok, $SE, $GOTO
Welcome to the thirteenth edition of Tech takes from the cheap seats. This will be my public journal, where I aim to write weekly on tech and consumer news and trends that I thought were interesting.
It was a big week for Mega-cap tech earnings this week. There’s a ton of content online breaking down the results, but a couple I enjoy reading the most are from MBI Deep Dives and TMT Breakout. Rather than trying to replicate their excellent commentary I will stick to my usual routine of highlighting/analysing tech and consumer news. Without further ado…
Let’s dig in.
Intel’s doing it tough
From Reuters
Nvidia (NVDA.O) dominates the market for artificial intelligence computing chips. Now it is coming after Intel’s longtime stronghold of personal computers.
Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft’s (MSFT.O) Windows operating system and use technology from Arm Holdings(O9Ty.F), , two people familiar with the matter told Reuters.
The AI chip giant's new pursuit is part of Microsoft's effort to help chip companies build Arm-based processors for Windows PCs. Microsoft's plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers, according to preliminary third-quarter data from research firm IDC.
Advanced Micro Devices (AMD.O) also plans to make chips for PCs with Arm technology, according to two people familiar with the matter.
Nvidia and AMD could sell PC chips as soon as 2025, one of the people familiar with the matter said. Nvidia and AMD would join Qualcomm (QCOM.O), which has been making Arm-based chips for laptops since 2016. At an event on Tuesday that will be attended by Microsoft executives, including vice president of Windows and Devices Pavan Davuluri, Qualcomm plans to reveal more details about a flagship chip that a team of ex-Apple engineers designed, according to a person familiar with the matter.
Nvidia shares closed up 3.84%, and Intel shares ended down 3.06% after the Reuters report on Nvidia's plans. Arm's shares were up 4.89% at close.
It was convenient timing for this news to drop a few days before Intel’s results, which meant that we were able to get the company’s thoughts on how they’re thinking about the added competition in its cash cow. For added context, the article goes on to share that Microsoft executives have observed how successful Apple’s experience with ARM-based chips has been with their Macs, improving their battery life and performance, and want to replicate that in its Windows PCs.
In 2016, Microsoft tapped Qualcomm to spearhead the effort for moving the Windows operating system to Arm’s underlying processor architecture, which has long powered smartphones and their small batteries. Microsoft granted Qualcomm an exclusivity arrangement to develop Windows-compatible chips until 2024, according to two sources familiar with the matter.
Microsoft has encouraged others to enter the market once that exclusivity deal expires, the two sources told Reuters.
“Microsoft learned from the 90s that they don’t want to be dependent on Intel again, they don’t want to be dependent on a single vendor,” said Jay Goldberg, chief executive of D2D Advisory, a finance and strategy consulting firm. “If Arm really took off in PC (chips), they were never going to let Qualcomm be the sole supplier.”
Microsoft has been encouraging the involved chipmakers to build advanced AI features into the CPUs they are designing. The company envisions AI-enhanced software such as its Copilot to become an increasingly important part of using Windows. To make that a reality, forthcoming chips from Nvidia, AMD and others will need to devote the on-chip resources to do so.
There’s no guarantee that this will be a success either. Ars Technica has pointed out that ARM-based Windows PCs have been available for over a decade (this Microsoft Surface uses the ARM-based Nvidia Tegra 3 T30), which hasn’t put a dent in the laptop market. They believe performance hasn’t lived up to its promise because it lacks the same software and hardware integration as Apple Silicon Macs. Supposedly, the vast majority of software written for Intel Macs can run on Apple Silicon Macs completely unmodified, something that’s true to a lesser extent in Windows 11. I posit it’s also because of Apple’s integrated distribution model. The company was strongly profit motivated to switch away Intel chips (estimates online suggest a saving of ~70% on costs). Because Macs are only made by Apple, it was probably easier to make the switch. Windows may be struggling because of the range of third parties (that aren’t as profit motivated, because they still need to pay Qualcomm/AMD/Nvidia to switch from Intel) that need to coordinate together to boost adoption. Pat Gelsinger, the CEO of Intel, dismissed it as a threat at their latest earnings result.
“ARM and Windows client alternatives, generally, they've been relegated to pretty insignificant roles in the PC business. And we take all competition seriously. But I think history as our guide here, we don't see these as potentially being all that significant, overall.
Our momentum is strong. We have a strong road map: Meteor Lake launching this AI PC generation December 14, Arrow Lake, Lunar Lake. We're already demonstrated the next-generation product at Lunar Lake, which has significant improvements in performance and capabilities. We'll be signing Panther Lake, the next generation in the fab in Q1 and Intel 18A. We announced our AI Acceleration Program, which already has over 100 ISVs part of it.
We'll have, we expect in the next two years, over 100 million x86 AI-enhanced PCs in the marketplace. This is just an extraordinary amount of volume, the ecosystem benefits that that brings into the marketplace.
And thinking about other alternative architectures like ARM, we also say, wow, what a great opportunity for our foundry business. And given the results I referenced before, we see that as a unique opportunity that we have to participate in the full success of the ARM ecosystem or whatever market segments that may be as an accelerant to our foundry offerings, which are now becoming, we think, very significant around the ARM ecosystem with our foundry packaging and Intel 18A wafer capabilities as well.”
I would doubt it very much that investors would be happy if ARM PCs take off and Intel swaps x86 design revenue out for foundry revenue from ARM. I assume the gross margins for foundry are much less attractive, but I can’t blame him for trying to put a positive spin. The other thing I found interesting from Intel’s earnings call was that Gaudi2, the company’s AI data center chip competing with Nvidia (who’s performance is close to matching the H100, pre-Nvidia’s software release), is also experiencing demand outstripping supply
In addition, we expect to capture a growing portion of the accelerator market in 2024 with our suite of AI accelerators led by Gaudi, which is setting leadership benchmark results with third parties, like MLCommons and Hugging Face. We are pleased with the customer momentum we are seeing from our accelerator portfolio, and Gaudi in particular, and we have nearly doubled our pipeline over the last 90 days. As we look to 2024, like many others, we now are focused on having enough supply to meet our growing demand.
Dell is partnering with us to deliver Gaudi for cloud and enterprise customers with its next-generation PowerEdge systems featuring Xeon and Gaudi AI accelerators to support AI workloads ranging from large scale training to inferencing at the edge.
Together, with Stability.ai, we are building one of the world's largest AI supercomputers entirely on 4th Gen Xeon processors and 4,000 Intel Gaudi 2 AI accelerators. Our Gaudi road map remains on track with Gaudi 3 out of the fab, now in packaging and expected to launch next year. And in 2025, Falcon Shores brings our GPU and Gaudi capabilities into a single product.
….
“The interest that we've seen in Gaudi is a worldwide statement. So we have demand portfolio that really sort of matches the Intel balance across all geographies. We're also seeing a huge upsurge in the amount of DevCloud, where this provides the fast on-ramp to the Intel AI and all of our advanced silicon offerings.
…
But we also, as I said, in the first part of your question, we saw a worldwide demand for our activities there. And overall, we're now supply constrained on Gaudi and racing to catch up to that supply worldwide.”
Unfortunately for Intel though, the growth in the small segment of AI data center chips aren’t enough to offset the overall decline in revenue in its data center business, which fell 10% year on year to $3.8bn. As I’ve written previously, this year’s mad rush for capex on GenAI has meant that companies were substituting spend within their budgets away from non-AI servers (CPUs, Intel’s bread and butter) towards AI servers (GPUs). It’s possible this spend will come back over the next few years as companies digest their AI capex, though if Microsoft, Meta, Alphabet and Amazon’s capex commentary are anything to go by, 2024 (at least) should be another big year for AI servers. Thankfully for Intel, it isn’t completely shut out of this market due to its 2019 acquisition of Habana Labs (the designer of Gaudi).
AI content moderation
There has an open debate as to how fast GenAI gets adopted broadly by consumers and enterprises. Some argue that it may take many years due to the red tape that exists in corporations that creates hurdles such as data privacy and reliability. There is certainly inertia that exists, we only need to look at penetration rates of workloads on the cloud to see how slowly things move in corporates. On the other hand, we haven’t seen uptake happen this fast for any other application in history, and one that showed clear productivity and efficiency gains at that. Adoption should happen quicker than the move from on-premise to cloud given the infrastructure is mostly in place to be “turned on”. Indeed, recent checks from JPM with Microsoft partners indicated that it will take ~2.5 years for M365 Copilots to penetrate 60%+ of 160M eligible users, and thought adoption could reach 80-100% due to getting work “3-4x faster”. I thought this article from the WSJ gave some good context on some of the issues companies are thinking about as it relates to adopting AI tools.
Businesses weighing the risks and benefits of generative artificial intelligence are running up against a challenge social-media platforms have long wrestled with: preventing technology from being hijacked for malicious ends.
Taking a page from those platforms, business technology leaders are turning to a mixture of software-based “guardrails” and human moderators to keep its use within prescribed bounds.
AI models like OpenAI’s GPT-4 are trained on vast amounts of internet content. Given the right prompts, a large language model can generate reams of toxic content inspired by the Web’s darkest corners. That means content moderation needs to happen at the source—when AI models are trained—and on the outputs they churn out.
Intuit, the Mountain View, Calif.-based maker of TurboTax software, recently released a generative AI-based assistant that offers customers financial recommendations. Intuit Assist, which is currently available to a limited number of users, relies on large language models trained on internet data and models fine-tuned with Intuit’s own data.
The company is now planning to build a staff of eight full-time moderators to review what goes in and out of the large language model-powered system, including helping to prevent employees from leaking sensitive company data, said Atticus Tysen, the company’s chief information security officer.
“As we go into trying to make really meaningful, specific answers around financials, we just don’t know how well these models are going to do. So it was important to us that we built a human into the loop,” Tysen said.
Intuit’s homegrown content moderation system, which is currently in its early stages, uses a separate large language model to automatically flag what it considers objectionable content, like profanity, Tysen said. For instance, a customer asking questions unrelated to financial guidance, or trying to engineer a prompt injection attack, will also be automatically blocked by the system, he said. Those attacks could include tricking a chatbot into revealing customer data or how it works.
Human moderators will then be alerted to review the text and can send it to the model building team—improving the system’s ability to block or pick up on harmful content. Intuit’s customers will also be able to notify the company if they believe their prompts were wrongly flagged, and if they think the AI assistant has generated something inappropriate.
I have consistently thought that the winners from AI in software are the platforms that sit as the system of record with the most volume of data to train their LLMs on. This article helps to reinforce that notion and highlights the benefits that scale brings too. Much like the scenario witnessed in the realm of social media, where the largest companies reaped the rewards of heightened regulatory scrutiny, leading to increased content moderation, as we transition into an increasingly AI-centric world, this trend appears to be manifesting once more.
GenAI is expensive… continued
The other big factor that will act as a driver of adoption of GenAI products within consumer and enterprise is cost. I’ve written previously about how expensive it is for companies to enable AI in its products. They need to make a profit at the end of the day and so will need to make a margin on top of what its costing them. IT budgets don’t magically increase by 100% just because AI is cool and exciting (the O365 copilot add-on is almost double the $36/user/month price for 365 E3). Software vendors need to prove its return on investment or try to get the cost down for customers. I thought this article was good at touching on this issue.
OpenAI is no longer the only game in town when it comes to selling generative artificial intelligence. That’s beginning to affect the growth of its sales to corporate customers.
Less than a year after OpenAI launched ChatGPT and built a considerable consumer business, several big companies that were also early customers of its AI, such as Salesforce and Wix, say they are using less expensive alternatives. Some of those firms are paying for similar AI from competing providers that claim they can help the firms use AI more cheaply. Other customers are beginning to buy OpenAI’s software through Microsoft because they can bundle the purchase with other products. That’s a problem for OpenAI, as Microsoft keeps much of the OpenAI-related revenue it generates.
Take Salesforce as an example. Earlier this year the sales software firm was relying on OpenAI’s GPT-4 large language models for tools that automatically draft emails or summarize meetings. Salesforce still uses OpenAI but is trying to power more of its AI services with open-source models as well as those it has developed in-house, both of which can be less expensive, said Salesforce’s senior vice president of AI, Jayesh Govindarajan.
“We’re at the very beginning of this cost-reduction exercise in AI,” he said. “It’s only going to become more important as these AI products reach greater scale and we begin to focus on achieving cost effectiveness.”
The shift shows how fast the market for LLMs is changing, prompting industry pioneer OpenAI to make adjustments as it finalizes a tender offer that values the company at more than $80 billion. Companies including Google, Amazon Web Services, Anthropic and Cohere are selling competing services, and some startups and bigger companies are finding ways to customize open-source AI software for their own needs without paying for expensive proprietary models from the likes of OpenAI.
To address the shifting market, OpenAI is working to reduce the cost of its models, The Information has reported. Its revenue has skyrocketed in the past year, and it is on track to generate over $1.3 billion annually, but much of that revenue comes from ChatGPT subscriptions, meaning OpenAI still has to prove its bona fides as an enterprise software seller. Its sales team is nascent and teeny compared to Microsoft’s: Chief Operating Officer Brad Lightcap oversees those operations, and the company earlier this year hired former Stripe executive James Dyett to handle its largest accounts. See the OpenAI org chart here. (Spokespeople for OpenAI and Microsoft did not have comment for this article.)
…
One flagship OpenAI customer that’s exploring other options is Morgan Stanley, which was among the first companies to sign a large deal with the startup. In that deal, OpenAI agreed to build a bespoke version of its GPT-4 model trained on Morgan Stanley’s market research data to power a chatbot that could quickly answer questions for its wealth managers. The two companies began talking in late 2021, when Morgan Stanley’s tech executives scouted out OpenAI because they believed it was the only vendor that could build such a tool, Jeff McMillan, chief analytics, innovation and data officer in the company’s wealth management division, said in an interview earlier this year.
Morgan Stanley is still a direct OpenAI customer, but in recent months it has also started using some OpenAI models through Microsoft Azure. A Morgan Stanley spokesperson said the bank intends to be a paying customer of the Azure OpenAI service but didn’t say what the bank will use it for.
Meanwhile, some companies that initially saw OpenAI as the clear leader in AI have begun testing alternatives to save on costs. Wix, a web-design software company, primarily used OpenAI software starting last year to build tools that automatically generate text, images and layouts for websites. The company initially bought AI exclusively from OpenAI but now uses Microsoft’s Azure OpenAI Service to power some of its AI functions, and it is also testing open-source models and Google’s Vertex AI models as it looks to save on costs, according to Wix’s head of AI research, Eli Brosh.
Earlier this year, “a lot of this stuff was only available from OpenAI,” Brosh said. “Now things are changing so fast, and our goal is to [be able] to use any large language model, not necessarily just OpenAI,” he said.
By the same token, OpenAI in some cases is gaining business from customers of other AI services that want more choices. Mutual fund giant Fidelity Investments, for instance, has long been a customer of Amazon’s SageMaker platform, which lets companies run and tweak copies of open-source AI software on the AWS cloud, according to two people familiar with the situation. But more recently, Fidelity has begun paying for Microsoft’s Azure OpenAI Service to test OpenAI’s models, one of the people said. Fidelity’s spending on OpenAI and AWS SageMaker has not been previously reported.
However, OpenAI risks being outflanked on cost by open-source models, most of which are much smaller and less sophisticated than the startup’s larger AI models. Some developers have found that the open-source models can substitute for OpenAI models on less-sophisticated tasks. And larger companies like Microsoft are experimenting with swapping in open-source models to run products that previously relied on OpenAI’s models, to reduce costs, The Information previously reported.
For instance, Pete Hunt, founder and CEO of developer tools startup Dagster, recently started Summarize.tech, which automatically summarizes the contents of videos or audio. Hunt was using OpenAI’s GPT-3.5 model to power the service, which has roughly 200,000 monthly users, but recently switched to Mistral-7B-Instruct, an open-source model. As a result, Hunt’s service went from racking up around $2,000 per month on OpenAI costs to less than $1,000 per month, and users haven’t complained about any change in quality, he said.
“This wouldn’t have been possible with the open-source models that were available just a month or two ago,” Hunt said. Summarize.tech has always been profitable, he added, but it has for the most part generated “more than beer money, but not enough to pay my mortgage.” Now, as open-source models bring the cost down, he sees room for more profits.
“We might be getting closer to that mortgage payment,” he said.
The emerging trend that has become increasingly evident is that many companies are discovering more efficient and cost-effective use cases in smaller or open-source language models. As another article aptly phrased it, "Using (GPT-4) to summarize an email is like getting a Lamborghini to deliver a pizza." Furthermore, the article sheds light on Amazon's strategic approach, which involves diversifying its collaborations with LLM developers. Through its Bedrock service, customers can choose from a variety of leading foundational models, with the exception of OpenAI's. Microsoft, on the other hand, has seemingly opted for a deeper integration with OpenAI, (to be fair, they also have Meta’s Llama model as part of their offerings).
In BT's Microsoft earnings review, the advantages of this approach are discussed in detail. In brief, Microsoft can harness and monetize OpenAI's model across multiple products such as copilots, advertisements, and Azure, optimizing and fully utilizing its technology stack in ways that competitors with a more general-purpose approach cannot. While this holds true to a certain extent, it's worth noting that there may be premium or unnecessary costs associated with it, as mentioned in the aforementioned article. It will be fascinating to observe which approach ultimately prevails.
On a related note, a question that has been on my mind is where customers will source the additional budget required to deploy these tools. As mentioned earlier, IT budgets don't typically increase substantially in a vacuum, unless there is a significant boost in sales. Therefore, the necessary funds must be reallocated from existing resources. This question was posed to ServiceNow’s CEO Bill McDermott by an analyst from Jefferies during their recent earnings discussion.
SS: When you think about your board-level conversations, are you seeing that the budget that's been carved out for spend on GenAI, is that being taken away from other parts of the overall IT budget? Or is that, hey, this is a strategic imperative and we need to find the money, whether we're growing our IT budget or not?
BM: Yeah, Samad. The CEOs all have boards of directors, and they don't want to show up without a GenAI plan. So this is a CEO-level decision. And I think that is why we meet with so many CEOs and the C-suite is now completely embedded in the ServiceNow go-to-market plan, and it's working beautifully.
What they are doing is as follows, according to IDC, the IT budget this year would've been about 3.5% spend, and next year, it's expected to go to, instead of incrementally increasing 3.5%, which is your typical year, it's expected, according to IDC, to incrementally go up 7%; and that's the IT budget itself.
What I believe is going to happen, and based upon the CEO discussions that I'm having and also based on my own way of thinking, I would very much like to take the position of looking at the world through the customers' eyes and if I'm them, 7% may or may not get it done. I might look to G&A functions to further fuel this generative AI revolution, because this is really about business transformation and truly transforming the way you run your company, and it's not a nice-to-have IT project.
And I do think that is one of the – interesting question you have, because I think it's one of the reasons why I have said repeatedly, the IT strategy has become the business strategy, because digital transformation is an end-to-end imperative. Now generative AI across platforms that matter, and there's only a few, and we're one of them, is really to me, going to get a very nice tailwind investment in 2024 regardless of the macro.
It appears that the budget allocation is shifting away from solely residing within the Chief Information Officer’s (CIO) domain and extending to the involvement from the CEO and CFO. This shift could potentially result in various adjustments, such as reducing headcount additions or reallocating resources from other General and Administrative (G&A) expenditures to facilitate the incorporation of GenAI. This makes sense if the tools do improve productivity as planned.
Google is not a monopoly
This week came with more testimonies from the DOJ’s lawsuit against Google. SVP Prabhakar Raghavan, Google’s head of search and advertising, disclosed that the company paid $26.3bn in 2021 to make its search engine the default on most smartphones and browsers and is worried about losing users to TikTok.
A top Google executive on Friday revealed the internet giant paid $26.3bn in 2021 to make its search engine the default on most smartphones and browsers, after a judge forced the company to disclose what it claimed was highly sensitive commercial information.
The disclosure, which came in the US government’s antitrust trial against Google in Washington DC, marked the first time the company has revealed how much it pays to guarantee its search service will get pole position in front of users when they conduct a search.
The $26.3bn figure is slightly higher than estimates I’ve seen from various brokers (GS had it at ~$22bn for example).
Google’s top search executive said concerns about Amazon.com Inc. keep him awake at night as the company loses users to the online retail giant and newer apps such as ByteDance Ltd.’s TikTok.
Testifying as part of the Justice Department’s antitrust case against Google Thursday, Prabhakar Raghavan, a Google senior vice president, said young people in particular are using apps such as TikTok and Meta Platforms Inc.’s Instagram and WhatsApp, where they spend an average of four hours per day.
“I feel a keen sense not to become the next roadkill,” said Raghavan, one of the first employees the Alphabet Inc. unit has called in its defense. For young people, “Grandpa Google knows the answer and will help you with homework. But when it comes to doing interesting things, they like to start elsewhere.”
…
Raghavan’s testimony served as a rebuttal to the idea that Google could be a “one-stop shop” for internet search, an argument made earlier in the trial by Justice Department witnesses, including Microsoft Corp. Chief Executive Officer Satya Nadella. US v. Google, which is expected to last 10 weeks, is the government’s biggest tech monopoly trial of the last two decades.
Raghavan underscored that Google was at risk of losing market share to apps like TikTok and Instagram, particularly because engaging with them diverges so sharply from traditional online search behavior — going to a search engine via a web browser, and typing a query.
“The fastest growing section of queries is young people using their camera to point to things,” Raghavan said on the witness stand.
“Our user research shows where young people go, older users follow,” he added, explaining why Google has focused on TikTok. Raghavan made similar remarks in 2022, relying on internal Google research, stating that “something like almost 40% of young people, when they’re looking for a place for lunch, they don’t go to Google Maps or Search. They go to TikTok or Instagram.”
Pandu Nayak, a Google vice president for search, testified last week that Google has focused on TikTok of late to figure out how younger people are searching for information.
“Young people particularly are increasingly turning to TikTok for their information needs, and we want to understand what is it that they’re doing there, what are they finding useful, what should we do with Google to address that,” he said.
I agree that an increasing number of young people are turning to platforms like TikTok and Instagram as their primary apps for discovering new restaurants and places to visit. However, for more immediate, location-dependent decisions, it's likely that Google Maps will maintain its dominance. When looking for a place to dine, my typical approach would involve searching the specific location I'll be at using Google Maps as the starting point. For more exploratory ventures or places less reliant on immediate location, Instagram becomes a more suitable choice.
In light of the ongoing court case, Google faces the delicate task of highlighting its competition without unsettling investors. In terms of search engine query growth and market share trends, Google appears to be performing well. Nonetheless, the company may have a deeper level of data granularity that isn't readily apparent to external observers.
TikTok’s next moves in Indonesia
News came out this week indicating that TikTok was looking for ways to save its ecommerce business in Indonesia, which led to a sell-off in shares of Sea.
TikTok is boosting its resources to explore ways of saving its ecommerce business in Indonesia — such as building a new app or partnerships with local companies — as the Chinese-owned group battles to remain in its biggest marketplace.
Beijing-based ByteDance, owner of the viral TikTok video app, has put together product and technology teams in Singapore to discuss ideas after Jakarta imposed a ban. One suggestion has been to create an online commerce platform that would be separate to its video app in a bid to satisfy regulators in the south-east Asian economy, according to three people with knowledge of the matter.
Another source at TikTok said the situation was “fluid” and, although the company was not actively working on a separate app, all options were being considered.
…
Senior management, who have been spending time in Jakarta since the ban was imposed, have also held discussions with retail companies about partnerships, including with local technology champion GoTo. This would be another option they believe could allow them to continue ecommerce transactions. However, many attempts to meet more senior Indonesia ministers to discuss the issue have been unsuccessful, one of the people close to TikTok said.
While senior management are putting people and resources into building a second app, there are reservations over precedents being set in other markets.
“We are already worried about the contagion effect in other south-east Asian markets including Vietnam and Malaysia,” said one regionally focused executive for TikTok Shop. “If we separate Shop from the main TikTok app in Indonesia, we may then be put into a position where we are forced to also do that in the US. That would be disastrous.”
A second person close to TikTok said the group had yet to receive concrete assurance from the Indonesian government that a new shopping app would be allowed to operate. “They can build the app extremely fast if they want [to], but they don’t want to put in all that time if [the government] won’t allow that, either.”
The market's reaction appears to be an overreaction, in my opinion. It was always expected that TikTok would maintain a presence in Indonesia following the government's enforcement of separating social media and e-commerce. This move benefits Sea because TikTok can no longer share data from its dominant social app to efficiently acquire and target customers in e-commerce. However, this recent development underscores that TikTok's reentry into the market won't be straightforward, as it must do so in a manner that doesn't return it to its pre-regulation dominance.
Presently, customers can purchase items from TikTok videos by clicking on links to various e-commerce platforms. If this system remains in place, Sea stands as the clear beneficiary, being the most popular e-commerce platform for consumers and enjoying effectively free distribution. If TikTok were to strike an exclusive deal with Tokopedia, it could potentially lead to a less optimal shipping experience for consumers (although scale might help address this issue). The outcome largely depends on the economic terms that Tokopedia would accept. Considering that Tokopedia is currently operating at a loss, any deal would need to be exceptionally attractive. Given TikTok's significant cash burn rate, the negotiations would indeed be quite intriguing.
That’s all for this week. If you’ve made it this far, thanks for reading. If you’ve enjoyed this newsletter, consider subscribing or sharing with a friend
I welcome any thoughts or feedback, feel free to shoot me an email at portseacapital@gmail.com. None of this is investment advice, do your own due diligence.
Tickers: NVDA 0.00%↑ , MSFT 0.00%↑ , INTC 0.00%↑, ARM 0.00%↑, INTU 0.00%↑, AMZN 0.00%↑, META 0.00%↑, NOW 0.00%↑, GOOG 0.00%↑, TikTok, SE 0.00%↑, $GOTO