Why Make `fetch`'s Response Body a Stream?
Émile Bergeron asked on Twitter why `fetch`'s `Response` body is a stream:
> Can someone explain to me what's the advantage of a readable stream for the response body?
>
> In my mind, parsing the body as JSON should be idempotent, but there's probably a good reason behind that API choice that I'm not seeing today. 🤔
>
> @tjcrowder maybe? #JavaScript #wat
```js
const response = await fetch(SOME_URL);
response.json(); // fine
response.json(); // TypeError: Failed to execute 'json' on 'Response':
                 //            body stream is locked
```
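(As a side note, a `Response` will tell you whether its body has already been consumed via its `bodyUsed` flag; a quick sketch, again with `SOME_URL` standing in for a real URL:)

```js
const response = await fetch(SOME_URL);
console.log(response.bodyUsed); // false
await response.json();          // reads (and locks) the body stream
console.log(response.bodyUsed); // true — another json()/text() call will fail
```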
I don't have any special knowledge on this, but I've always assumed it's an efficiency thing. I could see that in a couple of ways. Making it a stream means:
- In theory, the browser can avoid loading the entire body into memory and then doing something with it (parsing as JSON, loading into an `ArrayBuffer`, etc.); instead, depending on the method you call, the browser can feed the response's network stream directly to the JSON parser, or `ArrayBuffer` builder, or `Blob` builder, etc. (You can also read the stream yourself; see the sketch after this list.)
- The parsing/etc. can happen in concert with the read (in theory that would mean an early error in the JSON would allow it to avoid continuing to read the body).
- The `Response` object doesn't have to keep a copy of the body after you've consumed it in case you call `json()` or such again; keeping the result is the job of the code using the response.
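For instance, nothing stops you from consuming the body as a stream yourself. Here's a minimal sketch (with `SOME_URL` standing in for a real URL) that processes the body chunk by chunk as it arrives rather than buffering the whole thing first:

```js
const response = await fetch(SOME_URL);
if (!response.ok) {
    throw new Error("HTTP error " + response.status);
}
// response.body is a ReadableStream; read it chunk by chunk
const reader = response.body.getReader();
let received = 0;
for (;;) {
    const { done, value } = await reader.read();
    if (done) {
        break;
    }
    received += value.length; // value is a Uint8Array of raw bytes
    console.log(`${received} bytes received so far...`);
}
```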
Émile pointed out that you can `clone()` the `Response` and then use `json()` on both; from his follow-up tweet:
> I'm aware of some alternative to avoid the error, like cloning the response.
```js
const response = await fetch(SOME_URL);
response.clone().json(); // fine!
response.json();         // fine as well
```
...and that's true, but following through the specs (1, 2, 3) I'd probably want to avoid doing that when possible, since if I'm reading right, it triggers a read of the body and stores two copies of it (one for the clone, one for the original), at least until you consume them. So I think I'd use `text()` and then `JSON.parse` twice:
```js
const response = await fetch(SOME_URL);
if (!response.ok) { // Don't forget the HTTP result check!
    throw new Error("HTTP error " + response.status);
}
let text = await response.text();
const copy1 = JSON.parse(text);
const copy2 = JSON.parse(text);
text = null; // drop the reference so the (possibly large) string can be garbage-collected
```
I could easily be wrong about specific motivations, but using streams rather than passing big blocks of data around is a fairly popular thing, and it makes sense to me to ensure that only the consumer of the API keeps the data around.
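For instance, a small wrapper in the calling code can parse once and hand the result to whatever needs it (just a sketch of the pattern; `fetchJSON`, `useItOverHere`, and `useItOverThere` are hypothetical names):

```js
async function fetchJSON(url) {
    const response = await fetch(url);
    if (!response.ok) { // Don't forget the HTTP result check!
        throw new Error("HTTP error " + response.status);
    }
    return response.json(); // consumes the body exactly once
}

const data = await fetchJSON(SOME_URL);
useItOverHere(data);  // the parsed result is just a normal object now,
useItOverThere(data); // kept and shared by the calling code
```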
Happy Coding!
Have a question or comment about this post? Ping me on Mastodon at @tjcrowdertech@hachyderm.io!