
Tencent cites AI chip shortages as cloud growth bottleneck

  • Company says it will lower 2025 capex guidance compared with previous forecasts, though spending will still exceed 2024 levels
  • Tencent posted Q3 revenue of 192.9 billion yuan, buoyed by a 15% rise in domestic gaming revenue and a 43% surge in global gaming, as well as a 21% jump in advertising driven by AI-enhanced targeting.

Tencent Holdings said on Thursday that a shortage of advanced artificial intelligence chips is limiting the expansion of its cloud business, as China’s tech giants feel the strain of ongoing US export restrictions.

Following strong third-quarter results, Tencent President Martin Lau acknowledged that supply constraints mean the company is allocating available AI compute power first to internal AI initiatives, rather than renting it out to external clients.

“One constraint of the cloud business growth is the availability of AI chips. When AI chips are actually in short supply, we actually prioritise internal use,” Lau said in a post-earnings call.

He added, “If there is not an AI chip supply constraint, our cloud revenue should be growing more,” highlighting the impact of tightened US rules affecting supplies from top vendors such as Nvidia.

Tencent posted Q3 revenue of 192.9 billion yuan ($27.08 billion), buoyed by a 15 per cent rise in domestic gaming revenue and a 43 per cent surge in global gaming, as well as a 21 per cent jump in advertising driven by AI-enhanced targeting.

Bets big on AI

Net profit climbed well above analyst estimates to 63.1 billion yuan. The company did not break out individual cloud results, but the broader FinTech and Business Services segment, which includes cloud, grew 10 per cent.

Tencent’s capital expenditure for the quarter totalled 13 billion yuan ($1.83 billion), down 24 per cent year-on-year. The company signalled it will lower 2025 capex guidance compared with previous forecasts, though spending will still exceed 2024 levels. AI-focused investments are expected to account for a “low teens” percentage of revenue next year.

Facing intensifying local competition, Tencent has bet big on AI as the next growth engine, integrating advanced models—including DeepSeek’s—across platforms like WeChat and launching Yuanbao, a leading ChatGPT-style AI assistant.

The company’s efforts underscore both the transformative promise and logistical hurdles of large-scale AI adoption in the current geopolitical environment.

Baidu unveils two AI chips amid US export curbs

  • Expected to offer Chinese firms greater control over their computing infrastructure as global tensions reshape the technology supply chain.
  • Baidu is positioning these latest chips as alternatives to US-designed hardware subject to tightened export rules.
  • Baidu also revealed two “supernode” products leveraging advanced networking to connect hundreds of processors for high-performance AI workloads.

Baidu has introduced two new artificial intelligence semiconductors, the M100 and M300, in a bid to supply Chinese enterprises with powerful and cost-effective AI compute options amid continued US export restrictions.

Announced Thursday at the Baidu World technology conference, these domestically developed chips are expected to offer Chinese firms greater control over their computing infrastructure as global tensions reshape the technology supply chain.

The M100, focused on AI inference, will launch in early 2026, while the more versatile M300, designed for both training and inference, is slated for early 2027. Having developed its own processors since 2011, Baidu is now positioning these latest chips as alternatives to US-designed hardware subject to tightened export rules.

Domestic innovation

In addition to the new AI chips, Baidu revealed two “supernode” products leveraging advanced networking to connect hundreds of processors for high-performance AI workloads.

The Tianchi 256 supernode, built from 256 of Baidu’s P800 chips, will debut in the first half of next year, with a 512-chip version to follow in the second half. Supernodes are seen as a way to compensate for individual chip limitations and compete with industry leaders—including Huawei’s CloudMatrix 384 and Nvidia’s recently released GB200 NVL72.

Baidu also showcased the latest version of its Ernie large language model, highlighting expanded capabilities in text, image, and video analysis, as the company races to stay at the forefront of China’s competitive AI landscape.

The moves come as China accelerates efforts to localise its tech supply chain, bolstered by state and industry pressure on domestic innovation. Baidu’s announcements signal both resilience and ambition in the face of ongoing chip trade restrictions, as the company aims to establish itself as a key supplier of next-generation AI silicon for the Chinese market.

Tuta warns users against installing OpenAI’s Atlas AI browser

  • Atlas can access, read, and remember activity across logged‑in sites—including email and banking—by building persistent “memories” of browsing sessions when users grant permissions.
  • Atlas is useful for automation but potentially over‑permissive in data capture and susceptible to malicious page‑level instructions.

German encrypted email provider Tuta urged users to avoid installing OpenAI’s new Atlas AI browser, arguing the ChatGPT‑integrated app amasses extensive behavioural data and introduces novel attack surfaces that could outweigh its convenience features.

In a detailed advisory, Tuta said Atlas can access, read, and remember activity across logged‑in sites—including email and banking—by building persistent “memories” of browsing sessions when users grant permissions.

The company said those capabilities make it difficult for consumers to control what is stored or forgotten, and warned that “Incognito” mode is not truly private because interactions may still be visible to ChatGPT and third parties, with chats retained for 30 days for abuse detection.

Tuta also highlighted OpenAI’s US jurisdiction and temporary data retention even after deletion, and pointed to “Agent mode” as an additional risk area given prompt‑injection and phishing vulnerabilities observed in agentic browsers.

Data collection

OpenAI introduced Atlas as an AI‑powered alternative to mainstream browsers that can summarise content, compare products, analyse data, and execute tasks directly on web pages. Tuta framed those features as a double‑edged sword—useful for automation but potentially over‑permissive in data capture and susceptible to malicious page‑level instructions.

The firm consolidated its objections into five primary reasons to “think twice” before using Atlas until stronger safeguards and clearer data controls are in place.

Tuta referenced industry research suggesting agent browsers may be more vulnerable to phishing than traditional clients and cited demonstrations indicating AI agents can retain sensitive contextual information from browsing sessions. The company also cautioned that future product changes, such as advertising, could expand the use of collected data.

OpenAI has positioned Atlas as a productivity tool that personalises the web experience by remembering user preferences and completing tasks on their behalf.

The company says Atlas is not intended to store sensitive credentials. Tuta, however, argues current guardrails are insufficient and that users cannot reliably constrain what AI agents remember in practice.

What’s next

  • User adoption and enterprise policies: The warning may prompt privacy‑conscious users and regulated organisations to pause deployment pending clearer controls and third‑party audits.
  • Regulatory scrutiny: Atlas’s data practices and “agent mode” could draw attention from EU data protection authorities and consumer watchdogs, particularly around consent, retention, and cross‑border transfers.
  • Competitive responses: Browser makers with privacy positioning may seek to differentiate with stricter permissions, on‑device memory, or agent isolation by default.

Xiaomi recruits DeepSeek wunderkind Luo Fuli to turbocharge MiMo

  • Luo, known domestically as an “AI prodigy,” rose to prominence after prolific research contributions and roles at Alibaba and High-Flyer Quant/DeepSeek before helping build DeepSeek-V2.

Luo Fuli, a prominent developer behind DeepSeek’s frontier AI models, said she has joined Xiaomi to work on artificial general intelligence, confirming months of speculation in a WeChat post.

“Intelligence will ultimately step beyond language into the physical world. I’m at Xiaomi MiMo… striving toward the AGI we envision,” she wrote, signalling a push to embed advanced AI across Xiaomi’s devices and vehicles.

Luo’s move follows reports that Xiaomi CEO Lei Jun personally courted her with a multimillion-dollar package, as the company looks to elevate its MiMo large language model and compete with leading Chinese and global AI systems.

Luo’s track record at DeepSeek—where models matched or beat top systems at lower cost—positions Xiaomi to accelerate on-device intelligence for phones and its expanding EV platform. Local industry coverage highlights that Luo’s appointment aligns with Xiaomi’s strategy to advance MiMo and AGI-oriented research in-house.

Xiaomi claims its MiMo-7B has outperformed larger peers on selected benchmarks, a sign the company aims to optimise smaller, efficient models for real-world applications.

Luo, known domestically as an “AI prodigy,” rose to prominence after prolific research contributions and roles at Alibaba and High-Flyer Quant/DeepSeek before helping build DeepSeek-V2, experience that could help Xiaomi close the gap with rivals in both cloud and edge AI deployment.

Anthropic to invest $50b in US data centres with Fluidstack

  • Expected to create roughly 800 permanent jobs and 2,400 construction jobs as facilities come online through 2026.

Anthropic said it will invest $50 billion to build custom data centres in the United States in partnership with infrastructure provider Fluidstack, starting with sites in Texas and New York and additional locations to follow.

The buildout, designed around the company’s Claude AI models, is expected to create roughly 800 permanent jobs and 2,400 construction jobs as facilities come online through 2026.

The spending plan adds to a surge of AI infrastructure investment across US tech, with Anthropic citing alignment with the federal AI policy push to bolster domestic capacity. The Google- and Amazon-backed startup was valued at about $183 billion in early September, underscoring investor confidence as enterprises accelerate AI adoption; the company says it now serves more than 300,000 enterprise customers.

Record capex investments

Anthropic’s facilities will be custom-built to support training and inference at scale for its Claude family, reflecting industry momentum to secure compute capacity amid tight supply of advanced chips and power.

The Texas and New York projects mark the first phase of a broader US expansion in data centre infrastructure, developed jointly with Fluidstack to optimise performance and efficiency for frontier models.

The company framed the outlay as a long-term commitment to US-based AI leadership and domestic technology infrastructure, with initial sites slated to begin coming online in 2026. The initiative follows a year of record AI-related capex announcements across Big Tech and hyperscalers as demand for model training and enterprise AI workloads surges.

IBM unveils experimental ‘Loon’ quantum computing chip

  • The approach reduces the burden on purely quantum error-correction codes but requires more complex chip designs, with qubits supplemented by additional quantum interconnects.
  • IBM believes the Nighthawk chip could outperform classical computers on select tasks by the end of next year.
  • To accelerate validation, IBM is collaborating with startups and academic researchers to openly share code and benchmarks.

IBM announced “Loon,” an experimental quantum computing chip that the company says marks a key step toward building useful, error-managed quantum computers before the end of the decade.

Quantum computers promise to tackle problems that would take classical systems thousands of years to solve, but fragile quantum states make today’s machines highly error-prone.

Tech giants including Alphabet’s Google and Amazon are racing alongside IBM to tame those errors and demonstrate quantum advantage in real-world tasks.

IBM has been pursuing a hybrid error-correction strategy it proposed in 2021, adapting algorithms originally designed to improve cellular signals and running them across both quantum processors and classical chips.

The approach reduces the burden on purely quantum error-correction codes but requires more complex chip designs, with qubits supplemented by additional quantum interconnects.

Quantum advantage

Jay Gambetta, director of IBM Research and an IBM Fellow, said access to the Albany NanoTech Complex in New York—equipped with tools on par with leading-edge semiconductor fabs—was critical to integrating the new quantum connections into Loon’s architecture.

The company did not disclose when external users will be able to test Loon, which remains in early stages. IBM also unveiled “Nighthawk,” a separate chip slated to be available by year-end, which it believes could outperform classical computers on select tasks by the end of next year.

To accelerate validation, IBM is collaborating with startups and academic researchers to openly share code and benchmarks.

“We’re confident there’ll be many examples of quantum advantage,” Gambetta said. “But let’s take it out of headlines and papers and actually make a community where you submit your code, and the community tests things, and they select out which ones are the right ones.”