
Plaud sees UAE as stepping stone for Mideast expansion

  • Personalised AI note-taker converts raw speech into prioritised and actionable intelligence.
  • Aims to become the nervous system for organisations — capturing institutional memory reliably, securely and in ways that scale human judgement.

Plaud, the world’s No. 1 AI note-taking brand, has officially launched its Plaud Note Pro in the UAE. Purpose-built for high-impact professionals, this all-in-one AI note-taker transforms conversations into instant insights with professional-grade accuracy, security, and ease of use.

Kamel Ouadi, Head of Plaud Global Brand Centre.

Already trusted by over one million users globally, Plaud is rapidly becoming the go-to AI work companion for professionals in the GCC. Its recent rise in regional media reflects its relevance—from wearable productivity to AI’s role in economic transformation.

With the UAE’s growing focus on AI-driven productivity and digital transformation under the National AI Strategy 2031, the arrival of Plaud Note Pro could not be more timely. Engineered to support professionals across law, healthcare, finance, and education, the device enables seamless human-AI collaboration, highlighting what matters most in real time.

Kamel Ouadi, Head of Plaud Global Brand Centre, shares the Plaud philosophy, the company’s commitment to innovation, and highlights of Plaud Intelligence here.

  • How does Plaud’s philosophy, “From Data to Insights: Intelligence, Amplified,” shape the design and capabilities of your latest products?

Our philosophy drives everything: we don’t just capture audio — we design systems that convert raw speech into prioritised, actionable intelligence. Note Pin’s hardware and firmware are tuned for natural, hands-free capture while the cloud and on-device models immediately extract decisions, action items and concise summaries so users can act faster. This human-centred pipeline — capture, align, distil — is how we amplify intelligence for everyday work, not merely produce logs.

  • What makes your products, Note Pro and Note Pin, truly differentiated from other AI voice recorders?

Wearability is about form and function. Note Pin is engineered as a multi-wear device (clip, pin, necklace, wrist) with studio-grade microphones and speech enhancement so it’s discreet yet reliable in meetings or on the move. Our product design has been recognised by industry awards and the device pairs on-device UX (instant highlights) with server-grade AI pipelines that produce searchable, structured notes — not just raw transcripts.

The new Note Pro adds a “press-to-highlight” human alignment feature and an expanded mic range for real-time prioritisation, which sets it apart. Plaud Note Pro reflects our vision of empowering professionals in high-context, high-value environments—ensuring every conversation becomes a source of productivity.

  • How important is the Middle East market for you and why?

The Middle East is a strategic priority for us. We design our technologies to enhance economies and elevate quality of life, focusing on achieving true human–AI alignment—where machines extend human capability rather than replace it.

Countries like the UAE are fostering a culture of innovation, supported by the federal government’s vision. For us, the UAE is more than a market; it’s an ideal proving ground for innovation, with its forward-looking policies, diverse talent pool, and culture of rapid adoption. Showcasing our work here allows us to demonstrate how thoughtful AI can improve daily life and economic resilience, and serves as a springboard for deeper collaboration and expansion across the wider Middle East.

The region’s multilingual professional communities are tech-savvy and appreciate how technology can enhance quality of life.

  • What are the biggest opportunities you expect in the wearable AI note-taking space in the near future?

Three big opportunities: first, seamless multimodal integration (voice + images + documents) to create richer context; second, enterprise workflow embeds where insights flow directly into CRMs and knowledge systems; third, real-time human-AI alignment (flagging, prioritizing) that turns meetings into immediate outcomes. Devices will get smaller and smarter, and the real win will be improving decision velocity — turning conversations into verifiable actions within hours, not days.

  • Looking ahead, how do you envision Plaud’s AI leadership evolving?

We’ll continue to lead where device design, human interaction and AI intersect. Expect deeper multimodal models, stronger privacy and compliance features for regulated industries, and platform integrations that let teams query collective meeting history as easily as asking a colleague. Our aim: become the nervous system for organisations — capturing institutional memory reliably, securely, and in ways that scale human judgement.

Alphabet taps debt markets amid rising AI and cloud demand

  • Intends to use the raised funds for general corporate purposes, which may include repaying portions of its current outstanding debt.
  • Major tech firms are facing capacity constraints as surging demand for cloud and artificial intelligence services drives significant new investment.

Alphabet Inc, the parent company of Google, is making a strategic move in the capital markets with a multi-tranche senior unsecured notes offering denominated in both US dollars and euros.

According to a report by Moody’s Ratings, Alphabet intends to use the raised funds for general corporate purposes, which may include repaying portions of its current outstanding debt.

This marks Alphabet’s first foray into fresh debt since April, when it issued €6.75 billion ($7.87 billion) of euro-denominated bonds.

The company’s peers are adopting similar strategies: Oracle raised $18 billion in new debt in September, while Meta tapped the bond market for a substantial $30 billion last month.

Emile El Nems, a senior credit officer with Moody’s Ratings, noted that major tech firms are facing capacity constraints as surging demand for cloud and artificial intelligence services drives significant new investment.

“Layer on top of that the potential demand that could be coming in from AI computing…and you say to yourself, there is something there,” he remarked, highlighting a trend of heavyweights seeking fresh funding to stay ahead in the AI race.

Financial health

Despite their aggressive funding activities, Alphabet, Oracle, and Meta maintain relatively modest leverage ratios compared to other large corporations, according to Moody’s Ratings. This prudent financial management gives them room to manoeuvre as AI and cloud technologies reshape the competitive landscape.

Alphabet continues to dominate multiple digital sectors, led by its search engine—which now prominently features the Gemini AI platform—and its powerhouse positions in online advertising and YouTube.

As the company channels new capital into enhancing its AI and cloud infrastructure, its market leadership and robust balance sheet place it in a strong position to meet the sector’s explosive future demand.

OpenAI makes bold AI move with $38b AWS deal

  • Willing to make historic bets to lead the AI revolution in a bid to cement its status as a central player in one of the most transformative industries of the decade.
  • Amazon is set to deploy advanced Nvidia GB200 and GB300 AI accelerators in custom-built clusters dedicated to supporting OpenAI’s workload.

In a landmark transaction set to reshape the artificial intelligence landscape, OpenAI has signed a seven-year, $38 billion agreement to purchase cloud services from Amazon Web Services (AWS). The deal, which comes on the heels of OpenAI’s high-profile restructuring, marks the company’s most significant bet yet to expand its AI capabilities and operate more independently.

Under the agreement, AWS will provide OpenAI with access to hundreds of thousands of Nvidia’s cutting-edge AI graphics processors—essential for training and running next-generation artificial intelligence models like ChatGPT. The new partnership positions Amazon as a critical infrastructure provider for OpenAI, intensifying competition among cloud giants as they vie for dominance in the booming AI sector.

Massive ambitions

OpenAI CEO Sam Altman has declared ambitions rarely seen in tech, stating plans to commit $1.4 trillion toward developing 30 gigawatts of computing power—enough to supply electricity to roughly 25 million US homes. “Scaling frontier AI requires massive, reliable compute,” Altman said.

“Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
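As a rough sanity check on the figure above, the 30-gigawatt-to-25-million-homes equivalence holds if an average US household draws about 1.2 kW continuously (roughly 10,500 kWh per year — an illustrative assumption on my part, not a figure from the article):

```python
def homes_equivalent(capacity_gw: float, avg_home_kw: float = 1.2) -> float:
    """Number of average homes a given continuous capacity could supply.

    avg_home_kw is an assumed average US household draw (~1.2 kW), not a
    figure taken from the article.
    """
    # Convert GW to kW, then divide by the per-home continuous draw.
    return capacity_gw * 1e6 / avg_home_kw

print(round(homes_equivalent(30) / 1e6, 1))  # ≈ 25.0 (million homes)
```

At that assumed household draw, 30 GW works out to almost exactly the 25 million homes the article cites.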

The partnership is scheduled to roll out immediately, with all planned capacity coming online by the end of 2026, and further expansion possible in the years that follow. Amazon is set to deploy advanced Nvidia GB200 and GB300 AI accelerators in custom-built clusters dedicated to supporting OpenAI’s workload.

The news sent Amazon shares to record highs, adding nearly $140 billion to the company’s market value as Wall Street celebrated the cloud unit’s growing momentum.

OpenAI’s move is also reshaping relationships among Big Tech rivals. While the company had previously relied heavily on Microsoft for cloud power, recent changes have reduced that exclusivity. OpenAI has now secured cloud arrangements with Google, Oracle, and, as of last week, restructured a $250 billion Azure deal with Microsoft—signaling an industry-wide shift toward multi-cloud strategies.

Financial risks loom

Despite OpenAI’s expected $20 billion annualised revenue run rate by year-end, the company’s colossal cloud expenditures—totaling well over a trillion dollars—have sparked concerns about rising losses and the potential for an AI investment bubble.

Some analysts and investors are questioning how the fast-growing firm will sustainably finance such ambitious commitments as it positions for a possible $1 trillion IPO.

China cuts data centre energy costs to rev up homegrown chip adoption

  • Enhanced incentives target tech giants including ByteDance, Alibaba and Tencent to reduce dependence on US suppliers.

China has ramped up subsidies that reduce electricity bills by up to 50 per cent for its leading data centres, in a significant push to promote domestic semiconductor usage and reduce the country’s dependence on US suppliers like Nvidia, according to the Financial Times.

The enhanced incentives by local governments target tech giants including ByteDance, Alibaba, and Tencent—companies that have faced rising electricity costs following Beijing’s ban on the purchase of Nvidia’s advanced artificial intelligence chips.

The new subsidies are intended to offset the higher operating costs associated with Chinese-made processors from firms such as Huawei and Cambricon, which industry experts say consume 30–50 per cent more power than Nvidia’s most advanced H20 chips.

Provinces with dense data centre clusters—such as Gansu, Guizhou, and Inner Mongolia—have unveiled plans to slash power bills by up to half for large data centres, provided they exclusively use approved Chinese semiconductors. Data centres running foreign chips, including those from Nvidia, are explicitly excluded from these programs, reinforcing China’s drive for self-sufficiency in advanced technology.

Local officials confirmed that aggressive energy subsidies, in some cases large enough to cover a data centre’s entire operating cost for about a year, are now part of fierce interprovincial competition to attract the next wave of AI infrastructure projects.

Industrial power rates in these regions are already about 30 per cent below those in eastern coastal provinces, and the new incentives lower them further to around 0.4 yuan (about 5.6 US cents) per kilowatt-hour. By comparison, comparable industrial electricity in the United States averages roughly 9.1 cents per kWh, with prices varying widely by state due to a fragmented grid.
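To see how far the subsidised rate goes, here is a rough cost comparison under the rates cited above. The 100 MW facility size and the 40 per cent extra power draw for domestic chips (the midpoint of the 30–50 per cent range the article reports) are illustrative assumptions:

```python
HOURS_PER_YEAR = 8760

def annual_power_cost(it_load_mw: float, usd_per_kwh: float,
                      power_overhead: float = 0.0) -> float:
    """Annual electricity cost in USD for a facility running continuously
    at it_load_mw. power_overhead is a fractional increase in draw
    (e.g. 0.4 means the hardware consumes 40% more power)."""
    kwh = it_load_mw * 1000 * HOURS_PER_YEAR * (1 + power_overhead)
    return kwh * usd_per_kwh

# Hypothetical 100 MW facility: domestic chips at the subsidised Chinese
# rate vs Nvidia-class draw at the average US industrial rate.
cn = annual_power_cost(100, 0.056, power_overhead=0.4)
us = annual_power_cost(100, 0.091)
print(f"China (subsidised rate, +40% draw): ${cn / 1e6:.1f}M/yr")
print(f"US (average industrial rate):       ${us / 1e6:.1f}M/yr")
```

Under these assumptions, even a 40 per cent power penalty leaves the subsidised Chinese facility cheaper to run than a US one at the average rate, which is the trade-off the subsidy programme is designed to enable.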

While Chinese chips still lag behind Nvidia’s in single-chip compute performance, industry leaders such as Huawei have adopted clustering strategies to bridge the gap. This approach increases total power consumption but enables tech companies to keep pace with surging demand for AI applications.

Despite higher energy needs, China’s more centralised and resource-rich grid offers both cheaper and greener electricity than the US, ensuring no imminent supply shortages for rapidly scaling data centre operations. Mega-project incentives underscore the country’s determination to accelerate semiconductor self-reliance and compete head-to-head with the US in the global race for AI supremacy.

Microsoft to invest $15b in UAE AI data centres by end of 2029

  • Company has already accumulated the equivalent of 21,500 Nvidia A100 GPUs in the UAE, using a mix of A100, H100, and H200 chips.

Microsoft announced plans to ramp up its total investment in the United Arab Emirates (UAE) to a remarkable $15 billion by the end of 2029, with a significant portion dedicated to the expansion of AI data centres across the country.

The tech giant has also secured approval from the US Trump administration to export Nvidia’s advanced chips for use in its UAE-based facilities, a senior Microsoft executive told Reuters.

“The biggest share of [the investment], by far, both looking back and looking forward, is the expansion of AI data centres across the UAE,” Microsoft Vice Chair and President Brad Smith stated at the ADIPEC energy conference in Abu Dhabi.

“From our perspective, it’s an investment that is critical to meet the demand here for the use of AI.”

The UAE has been aggressively investing to position itself as a global artificial intelligence powerhouse, leveraging strong diplomatic relations with Washington to secure access to US technology, including state-of-the-art AI processors. Microsoft’s latest initiative underscores the country’s ambition and the global competition to supply cloud and AI infrastructure.

Stargate UAE

Last year, Microsoft invested $1.5 billion to acquire a minority stake in Abu Dhabi’s leading AI company G42, securing a seat on the board now held by Smith.

However, G42’s historical ties to China prompted concern in US political circles, with lawmakers scrutinising potential risks of advanced US semiconductors reaching Beijing via the UAE.

G42 has since pledged to work closely with American partners and authorities to ensure AI development complies with US regulatory standards, and Smith noted “enormous progress” in meeting these requirements.

Microsoft, under US government licenses approved last year, has already accumulated the equivalent of 21,500 Nvidia A100 GPUs in the UAE, using a mix of A100, H100, and H200 chips. The White House approved additional exports in September, covering chips equivalent to 60,400 Nvidia A100 GPUs, including next-generation GB300 processors.

These chips are expected to ship “in a matter of months,” Smith confirmed, and will be deployed in Microsoft’s own UAE data centres.

By the end of 2025, Microsoft will have invested $7.3 billion in the UAE since 2023, with a further $7.9 billion allocated through 2029 for ongoing AI and cloud infrastructure expansion, according to Smith’s blog post.

Notably, none of the $15.2 billion investment covers “Stargate UAE”— the $10 billion first phase of what will become one of the world’s largest data centre hubs, recently announced during US President Donald Trump’s Gulf visit in May.

Political reaction in Washington remains mixed. US House Select Committee on China Chairman Rep. John Moolenaar welcomed closer UAE-US technology ties, while cautioning that the transfer of advanced technology should be contingent on the UAE “verifiably and irreversibly choosing America” amidst its ongoing relations with China.


Microsoft strikes five year AI infrastructure deal worth $9.7b with IREN

  • Company is reinforcing its position at the forefront in the race to meet surging demand for generative AI applications and advanced cloud services.
  • Microsoft will rapidly expand its AI infrastructure without the delays and capital expense of constructing new sites or securing additional grid power.

Microsoft has entered into a landmark $9.7 billion agreement with data-centre operator IREN, securing access to Nvidia’s next-generation chips in a bid to overcome the computing shortfall preventing the tech giant from fully capitalising on the artificial intelligence surge.

News of the five-year partnership sent IREN shares soaring as much as 24.7 per cent to a record high on Monday, before closing up nearly 10 per cent. Dell Technologies also saw its stock rise about 1 per cent as it will supply IREN with Nvidia’s cutting-edge GB300 AI processors and related infrastructure, roughly $5.8 billion of which Microsoft is slated to use.

The deal highlights the intensifying scramble for AI computing power—a message echoed in recent earnings reports from top technology players, which noted that capacity constraints were tempering their ability to exploit the current AI boom.

By partnering with IREN, which operates data centres across North America with a combined capacity of 2,910 megawatts, Microsoft will rapidly expand its AI infrastructure without the delays and capital expense of constructing new sites or securing additional grid power.

The move also allows Microsoft to avoid locking up funds in chips that risk obsolescence as newer, more advanced models hit the market.

Competitive AI space

Much of the hardware will be deployed at IREN’s 750-megawatt Childress, Texas campus, where Nvidia processors and new liquid-cooled data centres are scheduled for phased installation through 2026—ultimately providing around 200 megawatts of critical IT capacity.

IREN, now valued at $16.52 billion after its stock price rose more than sixfold this year, said funding from Microsoft would help finance its $5.8 billion contract with Dell. But the agreement includes a provision that allows Microsoft to walk away if IREN cannot meet strict delivery timelines.

This accelerated investment in so-called “neocloud” providers comes on the heels of Microsoft’s recent $17.4 billion infrastructure deal with Nebius Group, reflecting a broader trend of top tech firms seeking innovative partners—including CoreWeave—to keep pace in the competitive AI space.

Separately, AI infrastructure startup Lambda also announced a multibillion-dollar agreement with Microsoft to roll out Nvidia-powered AI systems, further underscoring the sector’s growing focus on high-performance computing.