10: Non-blocking Requests

(This page is not yet complete)

Because Bee Client is a wrapper for HttpURLConnection, it relies on the standard Java I/O libraries. Thus, the calling thread will block on each request until the corresponding response has been received.
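
For illustration, here is roughly what that blocking call looks like at the HttpURLConnection level. This is a minimal sketch using the standard library directly rather than Bee Client’s own API, and the fetchBlocking name exists only for this example:

    import java.net.{HttpURLConnection, URL}
    import scala.io.Source

    // The calling thread waits inside getInputStream/mkString until the
    // whole response has been received (or the connection times out).
    def fetchBlocking(url: String): String = {
      val connection = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
      connection.setRequestMethod("GET")
      try {
        Source.fromInputStream(connection.getInputStream, "UTF-8").mkString
      } finally {
        connection.disconnect()
      }
    }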

Nowadays, it is popular to assert that blocking I/O such as this is somehow evil. This assertion is based on incorrect assumptions; nevertheless, there are times when non-blocking I/O is more suitable.

Fortunately, Bee Client can be used for non-blocking requests too. This is done by wrapping requests in futures.

Before we look at how this is done, let’s first think through when it should be done.

Will I Benefit from Non-blocking Requests?

  • Is excess parallelism* already present? If so, blocking I/O is not harmful and is simpler to understand. Suppose you’ve got a many-threaded webserver (e.g. Tomcat) with 250 threads running on a four-core server. That’s plenty of excess parallelism; no need for futures.
  • Will adding futures cause thread-starvation problems? You may not know the answer to this until you try, but adding futures carelessly can seriously slow down an application in some cases (example case history). If you’re not sure, make some measurements. Maybe try other optimisation approaches first. (One common mitigation, running blocking calls on a dedicated pool, is sketched at the end of this page.)
  • Am I affected by Amdahl’s law? Just because something is expressed in a parallel way does not mean it will run faster. Often, the rate-limiting step is elsewhere, so don’t optimise the wrong things.

*Excess parallelism exists when there are many more runnable threads than there are physical processors. The theoretical basis for this concept was developed by Valiant and others in the ‘90s. Unfortunately, on the JVM the cost of thread context switching remains a source of inefficiency.

If you’ve considered all the above, then maybe the next step is to look at decoupling your HTTP requests using futures.
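
For example, a request can be decoupled from the calling thread by wrapping it in a Scala Future. The sketch below reuses the fetchBlocking helper shown earlier and is illustrative only; in real code the body of the Future would be a Bee Client request.

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.util.{Failure, Success}

    // The request now runs on a pool thread; the calling thread carries on
    // immediately, and the Future completes when the response arrives.
    def fetchAsync(url: String): Future[String] =
      Future {
        fetchBlocking(url)   // the blocking helper sketched earlier
      }

    fetchAsync("http://www.example.org/").onComplete {
      case Success(body) => println(s"Received ${body.length} characters")
      case Failure(err)  => println(s"Request failed: ${err.getMessage}")
    }

Composition via map and flatMap (or a for-comprehension) then lets you chain further work onto the response without blocking the calling thread.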

(To be completed. For background on Scala’s futures and promises, see http://docs.scala-lang.org/sips/pending/futures-promises.html)
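
Relating back to the thread-starvation question above: one common mitigation is to run blocking HTTP calls on their own bounded pool rather than on the application’s main execution context, so that slow requests cannot starve unrelated work. A minimal sketch follows; the pool size and names are illustrative, not a recommendation.

    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future, blocking}

    // A dedicated, bounded pool reserved for blocking HTTP calls.
    val httpContext: ExecutionContext =
      ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(20))

    def fetchOnHttpPool(url: String): Future[String] =
      Future {
        // blocking {} is a hint that this task will block; it matters most
        // when running on the global fork-join pool, and is harmless here.
        blocking {
          fetchBlocking(url)   // the blocking helper sketched earlier
        }
      }(httpContext)

The underlying executor should be shut down when the application stops, and the pool size should be tuned to your own workload.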