
A common question I get from devs new to JS is: how do I make “parallel” HTTP/API calls with JS?

Note that I put “parallel” in quotes because the solution here works in a single thread, without involving a web worker. What they are really asking for is concurrency rather than true parallelism.

The question is also invariably tied to Promises in JS, so let's tackle that topic first.

Promises

Let's say you have a function named pause that returns a Promise which resolves after a given number of milliseconds, used like the following:

// Returns a Promise that resolves after the given delay
function pause(milliseconds) {
  return new Promise((resolve) => {
    setTimeout(resolve, milliseconds);
  });
}

console.log('Wait for 1 second');
await pause(1000);
console.log('Done');

What if we add two pause(1000) calls one after the other?

await pause(1000);
await pause(1000);

The above code takes 2 seconds to complete.

But what if you do the following? How long would it take to complete?

await Promise.all([
  pause(1000),
  pause(1000),
]);

Answer

1 second

Why? Because both Promises (and thus both setTimeouts) were created at the same time. Since the timeouts started together, they resolve together 1 second later (timeouts are handled concurrently by the JS engine's event loop).
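You can verify the difference yourself with console.time (a minimal sketch, reusing the pause function from above):

console.time('serial');
await pause(1000);
await pause(1000);
console.timeEnd('serial'); // serial: ~2000ms

console.time('concurrent');
await Promise.all([
  pause(1000),
  pause(1000),
]);
console.timeEnd('concurrent'); // concurrent: ~1000ms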

Ok! How about the following code? How long would this take to complete?

const promise1 = pause(1000);
const promise2 = pause(1000);

await promise1;
await promise2;

Answer

1 second

How? I created the second pause Promise without waiting (“awaiting”) for the first one to complete; I await both of them later. The timeouts therefore complete at the same time, and the second await resolves almost instantly after the first one. The serial awaits make a negligible difference.
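The same trick works with any Promise-returning API; you don't even need Promise.all to get concurrency (a sketch with placeholder URLs):

// Both requests start here, before any await.
const userPromise = fetch('https://example.com/user');
const postsPromise = fetch('https://example.com/posts');

// Both requests are already in flight; these awaits
// just wait for them to finish.
const userResponse = await userPromise;
const postsResponse = await postsPromise;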

Concurrent API Calls

Now let's get into API calls. How does the following work?

await Promise.all([
  fetch('https://example.com/path1'),
  fetch('https://example.com/path2'),
]);

It works like one would expect. Both fetch()es run concurrently, and the Promise.all resolves when both fetch()es complete.
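In a real app you usually also want the response bodies, which leads to two rounds of Promise.all (a minimal sketch, assuming both endpoints return JSON; the URLs are placeholders):

const [res1, res2] = await Promise.all([
  fetch('https://example.com/path1'),
  fetch('https://example.com/path2'),
]);

// Reading the bodies is asynchronous too, so those
// Promises can also be awaited together.
const [data1, data2] = await Promise.all([
  res1.json(),
  res2.json(),
]);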

But have you thought through how this works at the network level? If I create 100 fetch()s inside the Promise.all, will the browser make 100 concurrent calls? What about node.js? Do the browser and node.js use threads internally? Is it heavy on memory consumption? How about file system calls from node.js?

The answer is different for browsers than for node.js. Browsers cap the number of parallel requests they will make. For HTTP/1.1 the cap is low, typically 6 concurrent connections per origin (it varies across browsers, but is mostly under 10). For HTTP/2, server implementations limit the number of concurrent streams to prevent DoS attacks. In either case, once the browser hits the maximum, it queues the remaining requests and waits until one of the in-flight requests completes before dispatching the next one from the queue.
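So a burst like the following is safe to write in the browser; requests beyond the connection limit simply wait in the queue (a sketch; https://example.com/items/ is a placeholder endpoint):

// Fire 100 requests "at once". The browser dispatches up to its
// per-origin connection limit and queues the rest automatically.
const responses = await Promise.all(
  Array.from({ length: 100 }, (_, i) =>
    fetch(`https://example.com/items/${i}`)
  )
);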

node.js doesn’t limit the number of I/O calls that can be made concurrently, but of course it will consume resources if you make an unbounded number of calls. For network calls, node.js doesn’t need multiple threads internally, because operating systems support non-blocking I/O. Some file system APIs don’t have a non-blocking mode at the operating system level; for those, node.js uses a thread pool internally. So memory consumption doesn’t blow up with the number of calls. I’ve made 5000 concurrent network calls without stressing node.js. It is up to you to measure and limit the number of concurrent calls you make; libraries like p-map help in those cases.
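For example, capping requests at 10 in flight with p-map could look like this (a minimal sketch; the URL list is a placeholder):

import pMap from 'p-map';

const urls = Array.from(
  { length: 5000 },
  (_, i) => `https://example.com/items/${i}`
);

// At most 10 requests are in flight at any moment;
// the rest wait their turn.
const responses = await pMap(
  urls,
  (url) => fetch(url),
  { concurrency: 10 }
);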

A note on error handling: with Promise.all, if one of the calls fails, Promise.all rejects immediately without waiting for the rest of the requests to complete (the other requests keep running, but their results are discarded). If you don't want this behavior, use Promise.allSettled instead.
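Promise.allSettled waits for every Promise and reports each outcome instead of rejecting (a minimal sketch; the URLs are placeholders):

const results = await Promise.allSettled([
  fetch('https://example.com/path1'),
  fetch('https://example.com/bad-path'),
]);

for (const result of results) {
  if (result.status === 'fulfilled') {
    console.log('response status:', result.value.status);
  } else {
    console.log('failed:', result.reason);
  }
}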

That’s all for this article. Thanks for reading.