Saturday, May 17, 2025

    AMD expects $1.5b revenue hit from US’ new round of export controls

    • AMD projects second-quarter revenue between $7.1b and $7.7b.

    Advanced Micro Devices (AMD), a leading player in the semiconductor industry, recently disclosed the financial implications of newly imposed US export controls on its revenue, particularly with regard to shipments of advanced artificial intelligence (AI) processors to China.

    According to AMD’s finance chief, Jean Hu, these restrictions are expected to result in a substantial $1.5 billion revenue hit in 2025. The announcement sheds light not only on the challenges AMD faces amid evolving geopolitical and regulatory landscapes but also on the broader implications for the global semiconductor sector.

    In an earnings call, AMD revealed that the latest round of export controls, instituted in April under tightened US government policies, mandates that advanced AI chips destined for China require export licenses.

    Optimistic outlook

    The regulatory hurdle effectively restricts AMD’s ability to ship key products, such as the MI308 AI processor, to one of its largest markets. The company anticipates that these curbs could incur a charge of approximately $800 million related to inventory adjustments and purchase commitments.

    Despite this headwind, AMD projects second-quarter revenue between $7.1 billion and $7.7 billion—figures that surpass Wall Street’s expectations and are likely buoyed by accelerated chip purchases ahead of the tariff implementations.

    AMD’s chief executive officer, Lisa Su, provided a cautiously optimistic perspective on the situation. She underscored the resilience of the company’s differentiated product portfolio and consistent operational execution, which, despite the “dynamic macro and regulatory environment,” position AMD well for strong growth in 2025.

    Su further noted that the majority of the financial impact from the export curbs is expected within the second and third quarters, yet she maintains confidence in the company’s ability to achieve “strong double-digit” growth in AI chip revenue from its data centre business.

    Increased volatility

    This optimism resonates with AMD’s ongoing commitment to supplying advanced processors to major cloud infrastructure providers such as Microsoft and Meta Platforms, which continue to allocate significant investments toward AI capabilities.

    The ramifications of US export controls extend beyond AMD, reflecting a broader strategic effort by the Biden and Trump administrations to limit China’s access to cutting-edge semiconductor technology.

    These measures aim to impede China’s development of advanced AI models and applications that the US government views as potential national security risks.

    Notably, Nvidia, AMD’s chief competitor, has faced even larger financial repercussions—a $5.5 billion charge—stemming from similar restrictions. Both companies now require export licenses to sell AI chips in China, signaling a profound shift in supply chain and market dynamics.

    This heightened regulatory environment has contributed to increased volatility among AI-related stocks, as market participants reassess growth prospects amid fears of overhype and geopolitical uncertainty.

    For instance, recent developments such as DeepSeek’s announcement—demonstrating high-performance AI models using less advanced chips—have further pressured chip valuations.

    Moreover, the semiconductor industry faces additional trade challenges from tariffs exceeding 145 per cent on products manufactured in China, Vietnam, and Malaysia, with potential further duties looming under the US Commerce Department’s Section 232 investigation.

    How deepfakes are changing reality—and can we halt them?

    • Completely stopping deepfakes may be an unrealistic goal given the pace of technological innovation and the complexity of underlying issues.
    • A multifaceted strategy, encompassing technological defenses, regulatory frameworks, platform accountability, and public education offers the best prospect for mitigating their negative impacts.

    In recent years, deepfakes have emerged as one of the most provocative and challenging phenomena in the digital age.

    These hyper-realistic synthetic media—often videos, images, or audio recordings portraying people saying or doing things they never actually did—have transitioned from niche technological curiosities to pervasive tools impacting politics, entertainment, misinformation, and even personal reputations.

    The rapid proliferation of deepfakes raises pressing questions: Why have deepfakes become so widespread, and more critically, can they be effectively stopped?

    Rise and proliferation of deepfakes

    The advent of deepfakes is rooted in advancements in artificial intelligence (AI), particularly in machine learning techniques such as generative adversarial networks (GANs).


    GANs pit two neural networks against each other—one generating synthetic images or videos, the other discerning real from fake—to progressively improve the quality of fabricated media. This innovation has democratised the ability to produce highly convincing fake content.

    What once required expert knowledge in graphics design and video editing can now be accomplished with user-friendly software, often freely accessible on the internet.
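The adversarial training loop described above can be sketched in a few lines. The toy example below is purely illustrative: it pits a two-parameter generator against a logistic discriminator on one-dimensional data, whereas real deepfake GANs use deep networks on images. Every name and number here is an assumption chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Logistic classifier: probability that a sample x is "real".
    return 1.0 / (1.0 + np.exp(-(w[0] * x + w[1])))

def generator(z, theta):
    # Maps random noise z to a synthetic sample via an affine transform.
    return theta[0] * z + theta[1]

# "Real" data: samples from N(4, 1). The generator starts out producing N(0, 1).
real = rng.normal(4.0, 1.0, size=256)
theta = np.array([1.0, 0.0])   # generator parameters
w = np.array([0.0, 0.0])       # discriminator parameters
lr = 0.05

for step in range(500):
    z = rng.normal(size=256)
    fake = generator(z, theta)

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    grad_w0 = np.mean((1 - discriminator(real, w)) * real) - np.mean(discriminator(fake, w) * fake)
    grad_w1 = np.mean(1 - discriminator(real, w)) - np.mean(discriminator(fake, w))
    w += lr * np.array([grad_w0, grad_w1])

    # Generator ascent: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = discriminator(fake, w)
    common = (1 - d_fake) * w[0]          # chain rule through D
    theta += lr * np.array([np.mean(common * z), np.mean(common)])

# After training, the generator's output distribution has drifted toward the real one.
final_mean = float(np.mean(generator(rng.normal(size=1000), theta)))
```

As the two networks alternate, the generator's output mean is pulled from 0 toward the real data's mean of 4, which is the "progressive improvement" the article describes.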

    Several factors have contributed to the ubiquity of deepfakes:

    • Technological accessibility and advancement: As AI models become more sophisticated and computational power cheaper and more accessible, the barriers to creating deepfakes dissolve. Open-source code and tutorials, combined with powerful consumer-grade GPUs, enable amateurs and professionals alike to create compelling deepfake content.
    • Social and political incentives: In an era characterised by intense political polarisation and a 24-hour news cycle, deepfakes serve as potent tools for disinformation campaigns. They can be used to undermine public trust, manipulate elections, defame adversaries, or create confusion during crises. The incentive to produce and disseminate such videos, whether for ideological, financial, or malicious reasons, fuels their spread.
    • Entertainment and satire: Beyond malicious use, deepfakes have cultural and commercial appeal. Filmmakers use them to resurrect deceased actors or de-age performers, while comedians create satirical content. This legitimate usage normalises and promotes the technology, further embedding it into the digital ecosystem.
    • Viral dynamics of social media: Platforms designed to maximise engagement inadvertently incentivise sensational content. Deepfake videos, by their shocking and convincing nature, are more likely to be shared widely before their veracity can be assessed. This virality exacerbates their reach and reinforces their presence in the public consciousness.

    Challenges in stopping deepfakes

    Given their mounting prevalence, the notion of “stopping” deepfakes seems both a social imperative and a practical challenge. However, several obstacles complicate mitigation efforts:

    Technical complexity and evolution: As detection technology improves, so too do deepfake methods. Techniques to evade detection, such as improving frame consistency or embedding subtle artifacts, evolve continuously. This technological arms race makes it difficult to develop foolproof detection algorithms.

    Legal and ethical ambiguities: Legislators struggle to keep pace with technological innovation. Defining the legal status of deepfakes, distinguishing between permissible satirical use and harmful disinformation, and establishing jurisdiction across borders proves complicated. Privacy laws, freedom of speech principles, and varied national regulations create a fragmented legal landscape.

    Scale and speed: The volume of digital content produced daily is staggering. Manual verification is impossible at scale, and automated detection systems are still fallible. Real-time monitoring is likewise a daunting prospect, given the diversity of platforms and channels.

    Differentiation between harmful and harmless deepfakes: Since deepfakes have legitimate uses in art, education, and entertainment, blanket bans or heavy-handed regulation may stifle innovation or censor free expression.

    Prospects for combating deepfakes

    Despite these challenges, several approaches offer hope in managing the deepfake phenomenon:

    Technological solutions: Advances in AI can be harnessed to detect and flag deepfakes. Deepfake detection tools analyse inconsistencies in facial movements, unnatural blinking, or anomalies in lighting. Blockchain-based content authentication and digital watermarking can also verify source authenticity. However, these require widespread adoption and integration into social media infrastructures.

    Platform responsibility and policy enforcement: Social media companies wield significant power. By refining content moderation policies, expanding fact-checking partnerships, and deploying proactive detection systems, platforms can limit the spread of malicious deepfakes. Transparency reports and user education initiatives can further empower the public to critically evaluate digital content.

    Legislation and regulation: Governments can enact targeted legislation criminalising malicious deepfakes, especially those intended to defraud, defame, or incite violence. Regulatory frameworks incentivising transparency, such as mandating disclosure of synthetic media, can also deter misuse. International cooperation is essential given the borderless nature of the internet.

    Public awareness and media literacy: Equipping individuals with critical thinking skills to recognise and question suspicious content is vital. Media literacy campaigns, integrated into education systems, can reduce the societal impact of deepfakes by fostering a more discerning audience.

    Tools to detect deepfakes

    In response to growing public anxiety, developers have created AI-powered deepfake detection tools that combine machine learning, biometric analysis, and computer vision to spot manipulations within digital media.

    Here are some of the tools:

    • OpenAI’s Deepfake Detector
    • Attestiv Deepfake Video Detection Software
    • FaceForensics++
    • Pindrop Security
    • Cloudflare Bot Management
    • Hive AI’s Deepfake Detection
    • Intel’s FakeCatcher
    • Sensity
    • Reality Defender
    • AI Voice Detector
    • Microsoft’s Video Authenticator
    • Deepware Scanner

    Collaboration between researchers, technology companies, and policymakers is essential to advance detection capabilities and implement appropriate regulatory frameworks.

    TECOM Group reports strong performance in first quarter

    • Records a 6% increase in its customer base, now exceeding 12,000 clients, and attracts global companies and top-tier talent.

    Dubai-based TECOM Group has demonstrated remarkable financial and operational performance in the first quarter of 2025, reporting a 21 per cent year-on-year (YoY) increase in revenues to AED680 million. Simultaneously, the company’s net profit rose by 23 per cent YoY, reaching AED361 million during this period.

    The robust growth reflects TECOM Group’s effective management of its diverse business portfolio and its unwavering commitment to fostering Dubai’s knowledge economy.

    A deeper look into the company’s financial health reveals that earnings before interest, taxes, depreciation, and amortisation (EBITDA) also experienced a significant boost, increasing by 23 per cent YoY to AED540 million.

    The EBITDA margin expanded to an impressive 79 per cent, highlighting not only increased revenues but also enhanced operational efficiencies across all business sectors. These results signal a strong start for TECOM Group in 2025, evidencing the success of its strategic initiatives aimed at sustainable growth.

    A leading curator

    Abdulla Belhoul, Chief Executive Officer of TECOM Group, said that the company’s steadfast performance is a testament to the strength of its diverse asset portfolio and its critical role in attracting global companies and top-tier talent.

    “TECOM’s contribution to Dubai and the UAE’s knowledge economy remains pivotal, with its ecosystems fostering growth across six key strategic sectors. This performance serves to reinforce TECOM’s position as a leading curator of Dubai’s most dynamic business districts, which are designed to nurture innovation, entrepreneurship, and economic diversification.”

    From an operational perspective, TECOM Group’s first quarter also recorded a six per cent increase in its customer base, now exceeding 12,000 clients. The growth is underpinned by sustained demand for commercial and industrial assets, as well as land leasing, affirming the ongoing attractiveness of TECOM’s offerings to a broad spectrum of businesses.

    Notable developments during the period further illustrate TECOM Group’s momentum and influence. In February, Epson launched its state-of-the-art Innovation Centre at Dubai Production City.

    Strategic partnerships

    The new facility aims to provide critical local insights to Epson’s global teams, fostering the development of next-generation technologies. Similarly, Dubai Internet City confirmed its substantial impact, accounting for 65 per cent of Dubai’s technology sector GDP, according to a recent study conducted with Accenture.

    Dubai Industrial City also made headlines with strategic partnerships and significant investments. Fabtech Engineering’s collaboration with France’s Groupe M is set to accelerate innovation in the UAE’s nuclear and sustainable energy sectors.

    Meanwhile, the area attracted over AED350 million in investments from food and beverage companies in 2024, underscoring its prominence as an industrial hub.

    The influence of TECOM’s various districts extends beyond technology and industry. Dubai Science Park welcomed biopharmaceutical giant MSD and hosted the Middle East’s inaugural Longevity Science Semester Symposium, reflecting its commitment to life sciences and health innovation.

    Meanwhile, Dubai Design District’s hosting of the Autumn/Winter 2025-26 Dubai Fashion Week edition reinforces Dubai’s emergence as a global fashion destination.

    Microsoft shifts towards end of the password era

    • Unlike passwords, which remain susceptible to brute-force attacks and phishing scams, passkeys are resistant to such tactics due to their cryptographic basis.
    • By partnering with industry leaders like the FIDO Alliance and innovating user experiences, the company is fostering a more secure and accessible digital future.
    • As cyber threats continue to evolve, passkeys offer a robust defense mechanism, positioning us on the cusp of a safer interconnected world where remembering a string of characters is no longer a prerequisite for digital access.

    In a significant stride towards enhanced digital security, Microsoft announced a transformative shift in its authentication protocols by making passwordless login the default option for all new user accounts.

    The bold move, unveiled around World Password Day, was humorously rebranded by the company as “World Passkey Day,” underscoring the growing importance of passkeys as the future of secure and user-friendly authentication.

    Microsoft’s new approach allows new users to forgo the traditional password setup entirely, instead opting for a range of passwordless login methods that leverage biometric data or secure device-specific PINs.

    Existing users are also empowered to unlink and remove passwords from their accounts, further embedding this security paradigm shift. By adopting passkeys, Microsoft aligns itself with a wider industry movement championed by the FIDO Alliance—an organisation dedicated to reducing the global over-reliance on passwords through the development of open authentication standards.

    Passkeys represent an evolution in digital security by fundamentally addressing the vulnerabilities intrinsic to passwords. Unlike passwords, which remain susceptible to brute-force attacks and phishing scams, passkeys are resistant to such tactics due to their cryptographic basis.

    This makes them not only more secure but also more convenient. Users can authenticate themselves effortlessly using biometric identifiers such as facial recognition or fingerprints, or through secure PINs tied to their devices.
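The core idea is challenge-response authentication: the secret never leaves the device, and only a one-time response to a fresh challenge crosses the wire. Real passkeys (WebAuthn/FIDO2) use asymmetric signatures, where the server stores only a public key; since Python's standard library has no asymmetric primitives, this sketch substitutes HMAC purely to illustrate the flow.

```python
import hashlib
import hmac
import secrets

# ILLUSTRATIVE ONLY: real passkeys use public-key signatures. HMAC stands in
# here to show the key properties: no reusable password is ever transmitted,
# and a fresh random challenge defeats replay of a phished response.

class Device:
    def __init__(self):
        # Created at registration; in real WebAuthn this never leaves the authenticator.
        self._key = secrets.token_bytes(32)

    def register(self) -> bytes:
        # Stand-in for enrolment; real WebAuthn would send only the PUBLIC half.
        return self._key

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, credential: bytes):
        self._credential = credential

    def new_challenge(self) -> bytes:
        # Fresh randomness per login: an intercepted response is useless later.
        return secrets.token_bytes(16)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._credential, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

device = Device()
server = Server(device.register())
challenge = server.new_challenge()
assert server.verify(challenge, device.sign(challenge))
# A replayed response fails against a fresh challenge:
assert not server.verify(server.new_challenge(), device.sign(challenge))
```

This is why phishing a passkey login yields nothing reusable: the attacker captures only a response bound to one expired challenge.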

    Surge in password-based cyberattacks

    Microsoft’s assertion that “hundreds of websites, representing billions of accounts,” now support passkeys illustrates the rapid adoption and scalability of this technology.

    An alarming statistic that Microsoft highlighted is the surge in password-based cyberattacks, which reached an unprecedented 7,000 attempts per second last year, more than doubling the rate from the previous year.

    Such figures demonstrate why a transition away from passwords is not just advantageous but necessary. Cybercriminals continue to exploit the weaknesses inherent in passwords, and by making passkey authentication standard, Microsoft aims to mitigate this risk comprehensively.

    Enhancing security

    The company’s strategy also emphasises usability. Rather than overwhelming users with multiple sign-in options, Microsoft’s system intelligently selects the most secure and convenient method available on each account, gradually encouraging users to enroll passkeys.

    Early trials revealed that this streamlined approach accelerated login times and reduced traditional password usage by over 20 per cent, indicating both efficiency gains and positive user reception.

    The adoption of passkeys is still in its early stages globally, with notable uptake in regions such as China and promising adoption rates internationally. However, as the technology matures and more services integrate passkey support, the vision of a truly passwordless digital ecosystem comes closer to realisation.

    Microsoft’s leadership in this transition underscores the critical balance between enhancing security and maintaining user convenience.

    Google introduces emoji reactions in Gmail

    • Google appears to be positioning email not merely as a formal communication channel but as a versatile, user-friendly platform capable of supporting nuanced interpersonal interactions.
    • Emoji reactions are disabled for emails sent via Google Group aliases or those involving Google Groups in the recipient list.

    Google has introduced a new feature in Gmail that allows users to respond quickly to emails using emoji reactions.

    Officially launched on April 29th, 2025, this feature enables users to express emotions, acknowledgment, or appreciation in a succinct and visually engaging manner. The gradual rollout of emoji reactions to all Gmail users is underway, although Workspace administrators retain the authority to disable the feature by default through the Google Admin console.

    Google characterises the new emoji reactions as a tool for users to “quickly respond, acknowledge receipt of an email, and express themselves more authentically.”

    The emphasis on authentic expression aligns with the broader trend toward more personalised and emotionally nuanced digital communication, a domain traditionally dominated by social media and instant messaging platforms such as WhatsApp and Slack.

    Unicode Consortium emojis

    By integrating emoji reactions into Gmail, Google appears to be positioning email not merely as a formal communication channel but as a versatile, user-friendly platform capable of supporting nuanced interpersonal interactions.

    The emoji reactions feature includes the entire set of Unicode Consortium emojis, incorporating recent additions like the fingerprint and the tired face with bags under the eyes.

    Google offers examples to illustrate the practical applications of these reactions: sending a gratitude emoji to thank a colleague, using food emojis to vote on team outing options, or employing celebratory emojis to commend a client’s achievement.

    These scenarios highlight the potential for emoji reactions to foster a sense of community and engagement within professional email exchanges, thus subtly bridging the gap between conventional email formality and the expressiveness of informal messaging.

    Practical limitations

    Despite its potential benefits, the feature comes with certain restrictions and caveats. Emoji reactions are disabled for emails sent via Google Group aliases or those involving Google Groups in the recipient list.

    Additionally, reactions are unavailable for emails distributed to more than twenty recipients or when recipients are blind copied (BCC). There is also a rate limit of twenty reactions per message from a single user to prevent misuse or overuse.
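The eligibility rules above are simple enough to express as a small helper. The function below is a hypothetical illustration of the stated restrictions, not the actual Gmail API.

```python
MAX_RECIPIENTS = 20
MAX_REACTIONS_PER_USER = 20

def reactions_allowed(recipients, has_bcc, involves_google_group):
    """Return True if emoji reactions are available for a message,
    per the restrictions described in the article (hypothetical helper)."""
    if involves_google_group:
        return False          # Google Group alias or Group in recipient list
    if has_bcc:
        return False          # reactions unavailable when recipients are BCC'd
    if len(recipients) > MAX_RECIPIENTS:
        return False          # disabled for more than twenty recipients
    return True

def can_add_reaction(user_reaction_count):
    """Each user may add at most twenty reactions to a single message."""
    return user_reaction_count < MAX_REACTIONS_PER_USER
```

A client could call `reactions_allowed` once per message to decide whether to render the reaction picker at all, and `can_add_reaction` on each tap.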

    Another practical limitation arises for users accessing Gmail through third-party email applications; in these cases, emoji reactions may manifest as separate emails bearing links such as “[Name] reacted via Gmail,” which could contribute to inbox clutter and reduce convenience.

    The rollout, spanning from April 29th through the end of May 2025, is available to all Google Workspace customers, Workspace Individual Subscribers, and users holding personal Google accounts. It is noteworthy that, within Workspace environments, the feature is initially disabled by default and must be manually enabled by administrators.

    The approach offers organisations the discretion to determine the appropriateness of emoji reactions within their professional culture and compliance frameworks.

    Getting started 

    • Admins: This feature will be OFF by default and can be enabled at the domain level by going to the Admin console > Apps > Gmail > End User Access > Emoji reactions. Visit the Help Center to learn more about managing Gmail settings for your users. 

    WhatsApp gives users advanced AI tools without compromising privacy

    • By leveraging sophisticated encryption methods and confidential computing infrastructure, the platform deftly balances the utility of AI-powered features with rigorous privacy protections.

    WhatsApp has introduced an innovative feature called Private Processing, designed to enable users to leverage advanced artificial intelligence (AI) tools without compromising their privacy.

    The new technological advancement represents a significant step forward in harmonising the benefits of AI with the stringent privacy expectations that users have come to associate with WhatsApp.

    Private Processing operates on the principles of confidential computing, utilising a Trusted Execution Environment (TEE) to create a secure, private cloud environment. This infrastructure ensures that users can interact with AI-driven functionalities—such as summarising unread messages or receiving writing suggestions—while maintaining full control over their personal data.

    According to WhatsApp’s official communication, the aim is to “enable AI capabilities with the privacy that people have come to expect,” assuring users that neither Meta nor WhatsApp can access the content of their AI interactions.

    Robust privacy framework

    The technical sophistication of Private Processing lies in its multi-layered privacy-preserving mechanisms. Initially, the system verifies user identity through anonymous credentials obtained via the WhatsApp client, thereby safeguarding user anonymity from the outset.

    Subsequently, WhatsApp retrieves encryption keys from an external content delivery network (CDN), further preventing any direct linkage between AI requests and individual users by Meta or WhatsApp.

    The user’s device then establishes an oblivious HTTP (OHTTP) connection through a third-party relay to a Meta gateway, effectively masking the requester’s IP address from the service provider.

    Once the secure connection is in place, the establishment of a Remote Attestation plus Transport Layer Security (RA-TLS) session between the user and Meta’s TEE ensures that AI requests are encrypted end-to-end.

    The processing occurs within a confidential virtual machine (CVM), where the data is ephemeral—no messages or requests are stored post-processing. Finally, the AI-generated results are securely transmitted back to the user’s device using encryption keys accessible exclusively to the user and the Private Processing server.
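The privacy property of the relay hop described above—the relay learns who is asking but not what, while the gateway learns what is asked but not by whom—can be modelled in a few lines. In this toy sketch the "encryption" is a one-time-pad XOR placeholder standing in for the real OHTTP and RA-TLS layers; the class names and addresses are invented for illustration. The point is who sees what, not the cipher.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Placeholder cipher; the real system uses OHTTP encapsulation + RA-TLS.
    return bytes(a ^ b for a, b in zip(data, key))

class Gateway:
    """Stands in for Meta's TEE: it holds the session key (negotiated via
    RA-TLS directly with the client), so it can decrypt the request, but it
    only ever sees the relay's address, never the client's IP."""
    def __init__(self, session_key):
        self._key = session_key
        self.seen_ips = []
    def handle(self, sender_ip, ciphertext):
        self.seen_ips.append(sender_ip)            # this is the RELAY's address
        request = xor(ciphertext, self._key)
        return b"summary of: " + request           # processed in memory, nothing stored

class Relay:
    """Third-party relay: it learns the client's IP but holds no key,
    so the request body is an opaque blob to it."""
    ip = "relay.example"
    def __init__(self, gateway):
        self._gateway = gateway
        self.seen = []
    def forward(self, client_ip, ciphertext):
        self.seen.append((client_ip, ciphertext))  # IP + ciphertext only
        return self._gateway.handle(self.ip, ciphertext)

request = b"summarise my unread messages"
session_key = os.urandom(len(request))   # placeholder for the RA-TLS session key
gateway = Gateway(session_key)
relay = Relay(gateway)
reply = relay.forward("203.0.113.7", xor(request, session_key))
```

Because the key is established end-to-end between client and enclave, inserting the relay adds unlinkability without giving any single party both the identity and the content.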

    The comprehensive protocol guarantees a robust privacy framework, empowering users to interact with AI services without fear of data exposure.

    An opt-in feature

    Importantly, Private Processing is an opt-in feature, underscoring WhatsApp’s commitment to user autonomy. By not enabling it by default, the platform allows individuals to decide when and how to employ the feature, thus respecting diverse preferences regarding privacy and AI assistance.

    Although the rollout will be gradual, this measured release strategy is indicative of WhatsApp’s cautious approach to integrating AI within a privacy-centric communication platform.

    This development follows closely on the heels of another recent privacy enhancement—Advanced Chat Privacy. This complementary feature provides users with additional safeguards, such as the ability to block chat participants from exporting conversations, prevent automatic media downloads, and restrict the use of messages for AI training purposes.

    Such measures offer users greater assurance that their in-chat communications remain confidential, reinforcing trust within the digital messaging ecosystem.
