More than a token gesture: CSPs aim to profit from AI currency

A question for many communications service providers (CSPs) is whether tokens will be the currency of the AI era. And if they are, will CSPs become revenue-generating providers of them or paying consumers?
Some of the world’s largest telcos appear to have concluded that tokens are indeed set to be an important currency. And that it is wise to generate them. (A token is a small piece of text that forms part of a query to a large language model or part of its response. A short word may be one token, while longer words tend to be split into two or more tokens. Using tokens costs money, although those costs have been falling.)
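The way longer words split into multiple tokens can be illustrated with a toy sketch. This is not a real model tokenizer (production systems use learned subword vocabularies); it simply mimics the behaviour described above by keeping short words whole and chopping longer ones into pieces.

```python
# Toy illustration of subword tokenization (NOT a real model tokenizer):
# short words stay whole, longer words are split into smaller pieces.
def toy_tokenize(text, max_piece=4):
    tokens = []
    for word in text.split():
        # Chop each word into chunks of at most max_piece characters.
        for i in range(0, len(word), max_piece):
            tokens.append(word[i:i + max_piece])
    return tokens

print(toy_tokenize("cat"))             # a short word maps to one token
print(toy_tokenize("communications"))  # a longer word maps to several
```

Real tokenizers choose splits statistically rather than by fixed length, but the billing consequence is the same: longer, rarer words consume more tokens per query.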
Speaking at MWC in March, Mathew Oommen, President, Reliance, was clear about Jio’s role in token economics.
“The telecom industry of yesterday and the telecom industry of tomorrow is definitely not the same,” said Oommen. “The telecom currency is going to be rapidly changing from minutes to bytes to tokens, and we sincerely believe that at Jio, we will be one of the first scalable token service providers,” explained Oommen, adding “we are determined to deliver the lowest cost ... token per month.”
China Telecom, meanwhile, is already using what it calls its AI-native computing power foundation to drive token consumption, as Light Reading recently reported.
The operator, which has been investing in combining AI and network infrastructures “relied on its intelligent cloud system and leveraged the advantages of its channels ... to deploy OpenClaw through cloud computers, cloud hosts, and e-Surfing Smart Boxes, etc., driving more than 60,000 new cloud computer activations and a 10-fold increase in average daily token consumption.”
China Telecom also highlighted its Smart Ringback Tone business, which “received an enthusiastic market response, with over 4 million users creating content with AI and a 14-fold increase in average daily token consumption.”
And in the US, T-Mobile is already talking about the move from informational tokens to kinetic tokens, as it looks towards 6G. It describes kinetic tokens as “data constructs that do not just represent information, but initiate physical outcomes such as movement, control, adaptation or coordination in the real world.”
Being more than a token pipe
It is unclear today how much revenue AI token generation can produce. But Oommen’s aim is to avoid Jio becoming a token pipe while other companies take the lion’s share of value from AI.
“Let me reiterate that we do not want to be the LTP [or] largest token pipe,” stated Oommen.
“AI’s reckoning is going to fundamentally change your networks and your devices. They are not going to be the same. The question is, can we become the fabric of that AI infrastructure and become the owner of the tokenomics?” asked Oommen. “And that is the opportunity, so that we can become the largest token generator ... and not just be another token pipe.”
Tokens also represent a cost to telcos, with AI agents and employees consuming tokens each time they query language models.
One of the industry efforts to understand and manage the cost of token usage is taking place within TM Forum’s wider collaboration around Model-as-a-service (MODaaS). MODaaS defines how CSPs can source, operate and scale multiple AI models as enterprise‑grade services across cloud, edge and on‑premises environments, regardless of whether models are sourced from hyperscalers, vendors or developed in‑house.
In a TM Forum survey conducted last year for our AI Benchmark, telcos expressed concern about the costs associated with using AI. It found that, at the time, more than three-quarters of CSP respondents viewed cost as at least a moderate challenge to becoming AI-native, with nearly a third believing it to be a major challenge.
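The cost exposure here is simple arithmetic: agents multiplied by query volume multiplied by tokens per query. The sketch below is a back-of-the-envelope estimator; all of the figures in the example call (agent count, query rates, the $1-per-million-tokens price) are illustrative assumptions, not actual CSP data or real model pricing.

```python
# Back-of-the-envelope monthly token spend for a fleet of AI agents.
# All numbers used below are illustrative assumptions, not real prices.
def monthly_token_cost(agents, queries_per_agent_per_day,
                       tokens_per_query, price_per_million_tokens):
    # Total tokens consumed over a 30-day month.
    tokens_per_month = (agents * queries_per_agent_per_day
                        * tokens_per_query * 30)
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# e.g. 500 agents, 200 queries/day each, 2,000 tokens per query,
# at an assumed $1 per million tokens:
cost = monthly_token_cost(500, 200, 2000, 1.00)
print(f"${cost:,.2f} per month")  # prints "$6,000.00 per month"
```

The same formula shows why falling per-token prices matter less than volume: a tenfold rise in agentic query traffic swamps a halving of the unit price.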
Token pricing
But AI technology is changing swiftly, and with it so is the cost of using AI tokens. Andreas Lewitzki of Telenor points out in a recent LinkedIn post that Google’s TurboQuant promises to significantly cut the cost of AI inferencing.
As this article explains in some technical depth, Google’s TurboQuant reduces the memory needed to run an enterprise AI workload, making standard “hardware sufficient for workloads that previously demanded cloud scale.”
And that could alter how telcos view agentic AI.
“Lowering the cost of tokens won’t decrease demand, it’s going to explode it. We’re moving from ‘budgeting’ AI responses to building massive, real-time agentic workflows,” according to Telenor’s Lewitzki.
