Should we support looser header name validation? #113
So in general, the maxim is "Be liberal in what you accept and conservative in what you send" (or something close to that). I definitely think there's value in not raising an exception here. That said, these should probably be "quarantined", for lack of a better turn of phrase. I think urllib3 might drop these on the floor (or has a few cases where that happens by virtue of using the standard library's http client), and that's also surprising. A way to signal to users "Hey, these are ... weird, maybe be careful with them" would probably be valuable.
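One way to picture that "quarantine" idea: partition incoming headers by whether the name matches the RFC 7230 token grammar. This is a minimal sketch only — `partition_headers` is an invented helper, not anything h11 or urllib3 actually provides:

```python
import re

# RFC 7230 "token" grammar for field names: one or more tchar characters.
_TOKEN_RE = re.compile(r"^[-!#$%&'*+.^_`|~0-9A-Za-z]+$")

def partition_headers(headers):
    """Split (name, value) pairs into spec-valid and 'quarantined' ones.

    Hypothetical helper, purely to illustrate the signal-to-users idea:
    valid names are passed through, weird ones are kept but flagged.
    """
    valid, quarantined = [], []
    for name, value in headers:
        (valid if _TOKEN_RE.match(name) else quarantined).append((name, value))
    return valid, quarantined

valid, quarantined = partition_headers([
    ("Content-Type", "text/html"),
    ("get-ban-to-cache-result/portal.php", "1"),  # '/' is not a tchar
])
print(quarantined)  # the slash-containing name ends up quarantined
```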
So I think the issue you're pointing out is the header named `get-ban-to-cache-result/portal.php`. It looks like urllib3 silently allows this through:

```python
In [7]: [h for h in urllib3.PoolManager().request("GET", "https://club.huawei.com").headers if "/" in h]
Out[7]: ['get-ban-to-cache-result/portal.php']
```

It would be nice to know what browsers do here. In Firefox's network debugging tab, it seems to show this header as being silently discarded. If that's what browsers do, then that's a pretty strong argument that we should do it as well. I don't know if there's a WHATWG spec for how headers handle this... does anyone else?
```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, _ := http.Get("https://club.huawei.com")
	headerName := "get-ban-to-cache-result/portal.php"
	for key := range resp.Header {
		if key == headerName {
			fmt.Printf("resp.Header.Get(\"%s\") = \"%s\"\n", headerName, resp.Header.Get(headerName))
		}
	}
}
```

When run, it produces output showing the header, so it looks like Go allows you to receive that leniently.
I also poked around in the WHATWG fetch spec, and it doesn't seem to have any details at all on header parsing yet. This is the relevant subsection: https://fetch.spec.whatwg.org/#http-network-fetch — but the text there doesn't address how invalid header names should be handled.
So I guess there are two axes that we have to make a decision on. First, which characters we're going to handle specially — for example, we clearly need to be more tolerant of `/`. And second, for each bad header name, we have a menu of choices for how to handle it: raise a hard error, silently discard the header, pass it through as-is, or pass it through but flag it as suspect.
There are also concerns about "request smuggling". When different HTTP implementations handle edge cases differently, then you can end up in a situation where e.g. your firewall interprets your request as harmless and passes it on to your backend, but then the backend interprets it as something harmful. (This is apparently why RFC 7230 is so worried about trailing whitespace -- I don't know the details, but my guess is that some implementations used to strip the whitespace and other implementations treated it as part of the header name.)

But here's something promising: it looks like nodejs's http parser unconditionally discards invalid header names. (Except that they have a "non-strict" mode that allows embedded spaces in header names, because of something involving how IIS 6 used to work. But hopefully we don't have to care about that any more.) So that's evidence that there might not be much demand for seeing these invalid header names.

One possibility: continue to hard-error on trailing whitespace, but for the other invalid cases silently discard the header. Another option: only do that in client mode; in server mode, continue to hard-error on all invalid headers.
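Those two possibilities could be sketched as a per-role policy. This is illustrative only — `handle_header` and the mode names are invented for this sketch, and this is not h11's actual behaviour:

```python
import re

# RFC 7230 "token" grammar for field names.
_TOKEN_RE = re.compile(r"^[-!#$%&'*+.^_`|~0-9A-Za-z]+$")

def handle_header(name, role):
    """Decide what to do with a header name, per the options above.

    role is "client" or "server". Hypothetical policy: whitespace around
    the name always hard-errors (the smuggling-sensitive case); otherwise
    clients silently discard invalid names, while servers hard-error.
    """
    if name != name.strip():
        raise ValueError(f"whitespace around header name: {name!r}")
    if _TOKEN_RE.match(name):
        return "accept"
    if role == "client":
        return "discard"
    raise ValueError(f"invalid header name: {name!r}")

print(handle_header("get-ban-to-cache-result/portal.php", "client"))  # discard
```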
BTW if anyone wants to play with nodejs's http parser, there's a python wrapper here: https://github.com/pallas/pyllhttp
I would urge you to make a decision and go down the path of urllib3 and other libraries that pass parsable headers through even if they don't precisely follow RFC 7230. Often users can't control what response headers the server is sending, but they would still like to process the data. The choice to hard-error is currently made on the basis of safety, but people are now using a workaround and directly overwriting the library's internal validation pattern.

I think that discarding invalid headers is worse than passing them through: it still creates the same problem for users who have to access that part of the request. Even an opt-in option is likely to be inaccessible to the end user who is utilising h11 through other libraries, which might not implement the option.

I think the decision should be made soon. It's a bad idea to start bypassing security features by modifying the library's internal variables, but the current state leaves users with no other choice.
For reference, Python's http.client parses all incoming headers according to RFC 2822.
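That leniency is easy to demonstrate with the standard library alone, without any network access — here parsing a raw header block containing the slash-containing name from this thread (the value `some-value` is made up for the example):

```python
import io
import http.client

# A raw header block like the one the misbehaving server sends, ending with
# the blank line that terminates the header section.
raw = (
    b"Content-Type: text/html\r\n"
    b"get-ban-to-cache-result/portal.php: some-value\r\n"
    b"\r\n"
)

# http.client.parse_headers reads header lines from a binary file-like
# object and hands them to the email-package (RFC 2822 style) parser,
# which accepts anything up to the first ':' as the field name.
msg = http.client.parse_headers(io.BytesIO(raw))
print(msg["get-ban-to-cache-result/portal.php"])  # the name is accepted as-is
```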
This is quite the hurdle for me at the moment. I played around with patching the pattern in h11's internals, and I feel a bit uneasy about doing this, but I can't control what the server is sending to me.

Edit: I tried monkey-patching the abnf token value just before sending a request to the misbehaving server, but that didn't take (I guess it's too late to patch it temporarily by then). It did work when I changed the token pattern in the library code itself, but that makes me even more uneasy.
HTTP defines the syntax of field names as a token (RFC 7230, Section 3.2.6):

```
field-name = token
token      = 1*tchar
tchar      = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "."
           / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
```

There are characters which are obviously unsafe to allow, like ':', but the comment from @njsmith above gets to the real issue here -- response smuggling. HTTP is designed to allow messages to be handled by multiple implementations, and when those implementations handle messages differently, it can be exploited by attackers. So, this is a security issue.

And, while the choice of allowable characters for field names is somewhat arbitrary, it's important: implementations need to align on it. Aligning on the standard is not only the most straightforward thing, it's also the safest, because it's unambiguous, stable, and conservative (being more strict is good here).

So my suggestion would be to follow the RFC but allow explicit loosening, with appropriate warnings about how it can cause security vulnerabilities.
Closely related to #97.
Prompted by encode/httpx#1363 (comment)
So, h11 currently has stricter-than-urllib3 rules on header name validation...
Which is occurring because the response looks like this...
That's not all that unexpected, since it's simply due to h11 being a wonderfully thoroughly engineered package that does a great job of following the relevant specs. However, we might(?) want to follow a path of as-lax-as-possible-while-still-parsable on stuff that comes in from the wire, while keeping the constraints on always ensuring spec-compliant output. (?)

In practice, if httpx is failing to parse responses like this, then at least some subset of users are going to see behaviour that, from their perspective, is a regression vs. other HTTP tooling.
What are our thoughts here?