
http/2 surely not simpler?


I feel like securing against request smuggling is simpler with http/2. That is of course only one aspect.

Ultimately though, it's not like this is getting rid of HTTP/1.1 in general, just DNS over HTTP/1.1. I imagine the real reason is simply that nobody was using it. Anyone not on the cutting edge is using normal DNS, and everyone else is using HTTP/2 (or 3?) for DNS. It's an extremely weird middle ground to use DNS over HTTP/1. I'm guessing the Venn diagram was empty.


Request smuggling is an issue when reverse proxying and multiplexing multiple front-end streams over a shared HTTP/1.1 connection on the backend. HTTP/2 on the front-end doesn't resolve that issue, though the exploit techniques are slightly different. In fact, HTTP/2 on the front-end is a deceptive solution to the problem, because HTTP/2 is more complex (the binary framing doesn't save you; you still have to deal with unexpected headers--a client can still send Content-Length, for example) and the exploits are less intuitive.
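
To make that concrete, here's a toy CL.TE sketch in Python (my own illustration, not code from any real proxy): one side frames the body by Content-Length, the other by Transfer-Encoding: chunked, and they disagree about where one request ends and the next begins.

    # Toy sketch: two naive HTTP/1.1 framers that disagree about where a body
    # ends. A front-end trusting Content-Length and a back-end trusting
    # Transfer-Encoding: chunked desync on the same bytes (classic CL.TE).
    raw = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 13\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"SMUGGLED"
    )

    def body_by_content_length(stream: bytes):
        """Frame the body the way a Content-Length-only front-end would."""
        head, _, rest = stream.partition(b"\r\n\r\n")
        headers = dict(line.split(b": ", 1) for line in head.split(b"\r\n")[1:])
        length = int(headers[b"Content-Length"])
        return rest[:length], rest[length:]          # (body, leftover bytes)

    def body_by_chunked(stream: bytes):
        """Frame the body the way a chunked-only back-end would."""
        _, _, rest = stream.partition(b"\r\n\r\n")
        body = b""
        while True:
            size_line, _, rest = rest.partition(b"\r\n")
            size = int(size_line, 16)
            if size == 0:
                # terminal chunk: skip the empty trailer line; anything left
                # over gets parsed as the *start of the next request*
                return body, rest.partition(b"\r\n")[2]
            body += rest[:size]
            rest = rest[size + 2:]                    # chunk data + trailing CRLF

    print(body_by_content_length(raw))  # (b'0\r\n\r\nSMUGGLED', b'')
    print(body_by_chunked(raw))         # (b'', b'SMUGGLED')  <- "next request"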

HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) What's problematic is trying to build multiplexing tunnels across it, because buggy or confused handling of the line-delimited framing between ostensibly trusted end points opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.
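
For a sense of how little machinery the 1:1 case needs, a toy pipelined header splitter (illustrative only; bodies and error handling omitted):

    # Toy sketch: splitting pipelined HTTP/1.1 requests (headers only) out of
    # one connection's byte stream. In a 1:1 setup a framing bug here breaks
    # only this connection; it only becomes "smuggling" once the same stream
    # carries requests for many different front-end clients.
    def split_pipelined(buffer: bytes):
        """Yield (request line, headers) for each complete request in the buffer."""
        while b"\r\n\r\n" in buffer:
            head, _, buffer = buffer.partition(b"\r\n\r\n")
            request_line, *header_lines = head.split(b"\r\n")
            headers = dict(line.split(b": ", 1) for line in header_lines)
            yield request_line, headers
        # a real reader would keep the remaining bytes and prepend them to the
        # next recv() -- that buffer is essentially the whole framing state

    stream = (
        b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n"
    )
    for request_line, headers in split_pipelined(stream):
        print(request_line, headers)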

Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and then ossify the proliferation of protocol violations. I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what Quicktime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that--you're still handed a blob to parse, same as in HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's common to reject folded headers in HTTP/1.x implementations anyway--something you can't do in e-mail stacks, unfortunately.
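
A small illustration of the header-body point (again a toy of mine, not code from any of those stacks): even an ordinary comma-separated list header needs a little state machine once quoted strings are allowed, and that work is identical whether the value arrived over HTTP/1.1 or HTTP/2, since HPACK only compresses the bytes, it doesn't parse them.

    # Toy sketch: a naive comma split breaks as soon as a quoted string
    # contains a comma; you need a tiny state machine for the header *value*
    # regardless of which HTTP version delivered it.
    def split_list_header(value: str) -> list[str]:
        """Split a comma-separated header value, respecting double-quoted strings."""
        items, current, in_quotes = [], [], False
        i = 0
        while i < len(value):
            ch = value[i]
            if ch == '"':
                in_quotes = not in_quotes
                current.append(ch)
            elif ch == "\\" and in_quotes and i + 1 < len(value):
                current.append(ch)
                current.append(value[i + 1])   # keep escaped char inside quotes
                i += 1
            elif ch == "," and not in_quotes:
                items.append("".join(current).strip())
                current = []
            else:
                current.append(ch)
            i += 1
        items.append("".join(current).strip())
        return items

    header = 'text/html;q=0.9, application/json, example/x;note="a, b"'
    print(header.split(","))           # naive split mangles the quoted "a, b"
    print(split_list_header(header))   # keeps the quoted item intact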


Arguably, the complexity issue is not only the protocols themselves but also the fact that, thanks to the companies pushing HTTP/2 and 3, there are now multiple (competing/overlapping/incompatible) protocols in play.

For example, people pass requests received by HTTP/2 frontends to HTTP/1.1 backends.
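
Roughly what that downgrade step looks like (hypothetical proxy code in Python, not any particular implementation): the proxy re-serializes HTTP/2 header fields into HTTP/1.1's line-delimited form, so anything forwarded unvalidated becomes framing on the back-end connection.

    # Hypothetical sketch of the HTTP/2 -> HTTP/1.1 downgrade in a reverse
    # proxy. HTTP/2 carries headers as binary (name, value) pairs with no line
    # framing, so a value containing "\r\n" is just bytes -- until the proxy
    # re-serializes it into HTTP/1.1, where those bytes *are* the framing.
    def downgrade_to_http1(pseudo: dict[str, str], headers: list[tuple[str, str]]) -> bytes:
        """Re-serialize HTTP/2-style header fields as an HTTP/1.1 request."""
        lines = [f"{pseudo[':method']} {pseudo[':path']} HTTP/1.1",
                 f"Host: {pseudo[':authority']}"]
        for name, value in headers:
            # a careful proxy rejects CR/LF here (and strips connection-level
            # headers like Transfer-Encoding); this naive one forwards verbatim
            lines.append(f"{name}: {value}")
        return ("\r\n".join(lines) + "\r\n\r\n").encode()

    evil = [("x-note", "hi\r\nContent-Length: 0\r\n\r\nGET /admin HTTP/1.1\r\nHost: internal")]
    print(downgrade_to_http1(
        {":method": "GET", ":path": "/", ":authority": "example.com"}, evil
    ).decode())
    # the back-end sees *two* requests on this connection, the second one
    # injected entirely through a "header value"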


Having to support http/1.1 and http/2 is definitely not simpler.


HTTP/2 is basically HTTP/1.1, just over some custom binary protocol bolted on top of TLS.



