Historical context

Stateless HTTP

HTTP was designed as a stateless protocol: each time a client sends a request, the server processes all the information required – usually reading from or writing to a database and sending a web page in response – and closes the connection.

Any two requests are treated independently by the server, even if they come from the same client. This stateless nature makes the protocol simple and scalable, easing the development of proxies and load balancers among other things.

The stateless nature of the web is also one of its earliest weaknesses:

  • Because there is no tracking of clients by default, when there is a need for authentication, each request message needs to prove to the server that it is being sent on behalf of a legitimate user. That is why cookies exist.
  • Once a page is served, the communication is done. There is no way for the server to push new, updated data to the client.

This project tries to address the latter issue.

Polling with Meta Refresh

The first solution widely used to get updated data from a web site is polling, that is, resending the request at periodic intervals. This is usually known as refreshing.

This approach is by far the simplest one out there, and it’s used by many web applications (e.g. Jenkins). Polling is also used in other stateless protocols when there is an occasional need to fetch updated data, e.g. POP3 or IMAP.

HTTP makes it really easy to implement this strategy: adding this tag to a web page is enough to make the browser refresh it every 3 seconds:

<meta http-equiv="refresh" content="3">

There are many downsides to this approach:

  • Not real time: Since we are polling, we only get new data each time a refresh happens, and we need to choose the refresh interval. The smaller we make it, the sooner we will have new data on the screen, but the more we will suffer from the following issues.

  • Performance: Each refresh requests the entire page from the server. This incurs a considerable amount of traffic for the client, especially on slow or metered mobile connections.

    Since the first load and successive refreshes are exactly the same request, they incur the same overhead, which may be overwhelming depending on the number of users, the average time a user keeps a page open and the speed at which requests are served.

    If our server receives 10000 page loads each hour, each user on average keeps the page open for 10 minutes and we refresh every 3 minutes, our server will need to endure 40000 requests per hour (one for the initial load plus three refreshes per visit).

  • Bad user experience: A page refresh often makes the browser blank the page until all its contents have been rendered, turns the tab icon into a spinner during the download and may produce a jump if the user had scrolled in the meantime. Because of the flash of content, it may be hard for the user to notice what has really changed, which can be a source of annoyance and confusion on its own.
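The request arithmetic in the performance point above can be sketched as a quick calculation (all numbers are the ones from the example):

```python
# Quick check of the example figures above.
page_loads_per_hour = 10_000
minutes_open = 10        # average time a user keeps the page open
refresh_interval = 3     # minutes between automatic refreshes

# Refreshes fired while the page stays open (at minutes 3, 6 and 9).
refreshes_per_visit = minutes_open // refresh_interval

requests_per_hour = page_loads_per_hour * (1 + refreshes_per_visit)
print(requests_per_hour)  # prints: 40000
```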

AJAX polling

Starting with Internet Explorer 5.0 (1999), and soon followed by every other browser, web applications’ scripts gained the capability to send arbitrary requests back to the server and process the responses on the client side.

This feature, popularized a few years later, is the base of most modern web applications. The requests sent with this technology are called AJAX requests [1], AJAX being an acronym for Asynchronous JavaScript And XML, although they do not necessarily need to contain or expect XML in the response. Indeed, as a data serialization language XML has been largely replaced by JSON, a simpler alternative.

AJAX applications can send specialized requests to the server tailored to their needs. In order to perform polling we no longer need to request the full page: we can send smaller, more specific requests which require less traffic towards the client and impose less overhead on the server.

For example, if we ran a local on-line newspaper, instead of asking the server every 5 minutes “What is the news in Salamanca?”, we could ask that only the first time the page is loaded and from then on, every 5 minutes, just ask “Is there any news in Salamanca since 11:00?”

While the former would require querying the database and dumping a set of articles into the response, the latter would most of the time be just an existence query (resolved much faster) followed by an empty or small response, consuming much less traffic.
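The difference between the two questions can be sketched in a few lines of Python; the in-memory article list and the function names are made up for illustration, standing in for the real database and the AJAX endpoints:

```python
from datetime import datetime

# Hypothetical article store, standing in for the newspaper database.
ARTICLES = [
    {"title": "New bridge opens", "published": datetime(2014, 5, 2, 10, 30)},
    {"title": "Local team wins", "published": datetime(2014, 5, 2, 11, 15)},
]

def full_query():
    """First page load: dump every article into the response."""
    return {"articles": ARTICLES}

def incremental_query(since):
    """Later polls: a cheap existence check; often an empty response."""
    new = [a for a in ARTICLES if a["published"] > since]
    return {"articles": new}

# The first load fetches everything; later polls usually transfer nothing.
print(len(full_query()["articles"]))                               # 2
print(incremental_query(datetime(2014, 5, 2, 12, 0))["articles"])  # []
```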

The AJAX pattern does not specify how the data received in the response is translated into UI elements. For this purpose, JavaScript code specific to the application is required. Often this code decodes the response and creates, deletes or updates HTML elements.

AJAX polling solves many of the issues of basic refreshing: efficiency improves greatly for both client and server, and so does the user experience. The polling interval is still an issue, though.

There is also a new downside: client-side HTML manipulation gets complex very easily. As a consequence, AJAX applications tend to carry a lot of complexity on the client side, often requiring more code than the server side.


Comet

In order to avoid the polling interval problem and offer real-time data pushed from the server to the browsers, web developers came up with a myriad of ideas. These are commonly referred to as Comet [2].

One of these techniques is known as long polling. Long polling works similarly to AJAX polling, with one important difference: when there is no interesting data to push to the client, the server holds the request unresolved, waiting for interesting data to become available. The server resolves the request either once interesting data has come into existence or once a specific timeout has been reached.

The client script sends a long polling request once the page has been loaded and sends another one every time the previous one finishes.

This way we really get pushed data with low latency: our request is answered as soon as data becomes available, without having to wait for a polling interval. If the request is not answered after a considerable time we will need to send a new one, but the choice of this timeout does not penalize our data latency.
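The core of long polling can be sketched with asyncio primitives; the Channel class and its method names are ours, not taken from any particular framework:

```python
import asyncio

class Channel:
    """Holds a long polling request open until data arrives or a timeout."""

    def __init__(self):
        self._event = asyncio.Event()
        self._message = None

    def publish(self, message):
        # Called when interesting data comes into existence.
        self._message = message
        self._event.set()

    async def long_poll(self, timeout):
        # Keep the request unresolved until data arrives or the timeout hits.
        try:
            await asyncio.wait_for(self._event.wait(), timeout)
            return self._message
        except asyncio.TimeoutError:
            return None  # the client will simply issue a new long poll

async def demo():
    channel = Channel()
    # Simulate data becoming available shortly after the request starts.
    asyncio.get_running_loop().call_later(0.05, channel.publish, "new data")
    return await channel.long_poll(timeout=1.0)

print(asyncio.run(demo()))  # prints: new data
```

When the timeout fires, long_poll returns an empty result and the client immediately reissues the request, which is why the timeout value does not affect latency.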

One use case of Comet is on-line chat. For example, it was used in the web messenger Meebo. The open instant messaging standard XMPP has defined an official extension known as BOSH [3] that allows streaming over HTTP, using long polling as an alternative to raw TCP; it is used not only to provide web chat interfaces but also to access the service from desktop clients inside restricted networks.

On the other hand, Comet technologies are harder to implement. They somewhat abuse the protocols and, in some cases, the internals of browsers. Things get harder when there is a need to make cross-domain requests. One technique used in that case is to load an everlasting HTML document in which the server sends a script tag for each message it wants to deliver to the client and then keeps the connection pending for the next one. It works, but it is somewhat hacky.

The C10K problem

The latency improvement of Comet comes at the cost of server-side complexity. With purely poll-based systems the server queried the database at request time and responded immediately. The Comet methodology requires a publish-subscribe (often called PubSub) approach instead.

A PubSub server tracks a large number of connections from users who subscribe to notifications. Each time a relevant event occurs, such as a change in a database, a notification is published and delivered to the subscribed clients.

PubSub servers have to maintain a great number of open connections simultaneously, although most of them will remain idle most of the time (awaiting notifications).
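A PubSub broker can be sketched in a few lines; the class and method names are illustrative and not taken from any particular server:

```python
from collections import defaultdict

class PubSub:
    """Minimal broker: topics map to the callbacks of subscribed clients."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A connected client registers interest in a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A relevant event (e.g. a database change) notifies every subscriber.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two idle clients wake up when an article notification is published.
received = []
broker = PubSub()
broker.subscribe("articles", lambda m: received.append(("client-1", m)))
broker.subscribe("articles", lambda m: received.append(("client-2", m)))
broker.publish("articles", "new article")
```

In a real server each callback would write a notification to a held-open client connection rather than append to a list.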

This is somewhat different from the way traditional HTTP servers and applications were developed, in which each request is assigned a system thread that processes and resolves it as fast as possible. This is the way Apache, PHP, Tomcat and many others work.

Although a PubSub server can be built on the traditional approach by means of waits and synchronization primitives like message passing, efficiency is greatly harmed.

Traditional web servers were not designed to have many requests running simultaneously, but only a few, with the rest waiting in a queue. As a consequence, each active request consumes a fair amount of resources, such as memory and operating system primitives.

Threads are by themselves a relatively scarce resource. Too many threads quickly reach the operating system’s limits, throwing away server responsiveness. This problem is popularly known as the C10K problem, after a popular article about the difficulty of writing a server that handles ten thousand connections using one thread each.

Since the C10K article was written in 2003, threads have become more efficient and computers more powerful, but the most successful solution to the C10K problem is to ditch threads completely or limit them to a few, usually one thread per processor core in order to take advantage of multiprocessor systems.

Asynchronous programming

In order to manage concurrent connections without threads, the asynchronous programming model is used instead. In it, the kernel is freed from the heavy lifting of switching context between worker threads. Instead, this job is taken over by the server application, which can control in a fine-grained way how many resources it allocates to each request (often only a few bytes in data structures, whereas a thread requires many kilobytes for its call stack at the very minimum).

Asynchronous servers feature an event loop which is used to choose the next request to be attended, in a similar way to the operating system scheduler. In order to be notified of incoming connections and data, they use efficient socket readiness APIs from the operating system, like epoll (Linux), kqueue (FreeBSD) and I/O Completion Ports (Windows).

Once the server decodes a request it starts processing it. While a request is being processed the server is not doing anything else, so it is crucial not to make any blocking system calls. A request can be delayed, though: instead of sending a response, it returns to the event loop, and the delayed request can resume its processing in response to other events like timeouts or messages from other clients.
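The payoff of this model can be sketched with Python’s asyncio, whose event loop is built on the selection APIs mentioned above. Here ten thousand simulated requests sit parked on a single thread awaiting an event, something that would exhaust resources with one thread per request (the handler and its names are made up for illustration):

```python
import asyncio

async def handle_request(ready, results, i):
    # Delayed request: parked on the event loop, costing only a small
    # Python object instead of a full thread call stack.
    await ready.wait()
    results.append(i)

async def main(n=10_000):
    ready = asyncio.Event()
    results = []
    tasks = [asyncio.create_task(handle_request(ready, results, i))
             for i in range(n)]
    await asyncio.sleep(0)   # let every handler reach its await point
    ready.set()              # one event resumes all parked requests
    await asyncio.gather(*tasks)
    return len(results)

handled = asyncio.run(main())
print(handled)  # prints: 10000
```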

Asynchronous programming has been successfully exploited in many recent developments, including the nginx and lighttpd HTTP servers, which achieve great performance for file serving and for use as reverse proxies. Newer asynchronous frameworks like Tornado and Node.js provide a solid programming base for creating highly concurrent web servers with ease.


The Snorky server also uses asynchronous programming. It is based on Tornado.

Asynchronous programming is different from synchronous programming. On one hand, asynchronous code is more deterministic: using only one thread, we can assert that the server is never executing code to handle more than one request at a time. This avoids the need for synchronization primitives like locks and mutexes, common sources of complexity and performance degradation. On the other hand, some operations are harder to perform in a non-blocking fashion, which is crucial for an asynchronous server to perform well. File reads are an example of this [4].


At the time of writing, Snorky is not intensive in file writing.


The WebSocket protocol

While Comet techniques worked, there was a motivation to develop a better technology and make it standard. The result of this effort was the WebSocket protocol, standardized in 2011.

The WebSocket protocol provides streaming two-way communication between a client and a server with minimal overhead. WebSocket works over TCP or TLS (for security) and also provides framing.

A WebSocket connection is initiated with an HTTP request, usually to the standard ports 80 or 443, therefore working through restrictive firewalls without additional configuration.
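The opening handshake is an ordinary HTTP request carrying an Upgrade header; the sample key and accept token below are the illustrative values from the RFC 6455 specification:

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the same TCP connection carries WebSocket frames in both directions.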

Although the user needs a compatible browser in order to use WebSocket, most browsers already are. Internet Explorer before version 10 is the only major desktop browser that lacks WebSocket support while still retaining a significant user base.

Furthermore, the simplicity of the WebSocket API, with only three methods (open, send, close) and their counterpart events (on open, on message, on close), has powered an effort to create compatibility layers for those browsers that do not support WebSocket. One of them is SockJS, which uses several Comet techniques as fallbacks for WebSocket behind an extremely simple API, both on the client and on the server.


Snorky supports SockJS as an alternative to WebSocket.

Client-side frameworks

As web applications have become more and more intensive on the client, their complexity has increased.

We are slowly reaching a point where the server no longer sends a fully formatted page: it only provides the raw data, and the code that formats the data into something legible is executed on the client side. Application responsiveness is greatly improved, feeling more like traditional desktop applications.

In order to deal with the complexity of these interfaces, several projects have been started. They usually provide some templating mechanism to render the data and implement patterns based on Model-View-Controller in order to respond to user interaction and update representations of the data with few lines of code.

Examples of these projects are the AngularJS framework, Ember.js, React, Knockout.js and many others.

This project

Client-side frameworks are game changers: being able to display and update data easily in a declarative way paves the road for better interfaces and greater user experiences.

But our interfaces are still disconnected from the server most of the time: even with all those fancy things, we are still moving AJAX requests there and back again. If the user loads a page with the messages they have to moderate in a forum, or they are waiting for an auction to close, often their only way to get fresh information is to press the F5 key over and over, just because push technology is hard and few people go through the trouble of implementing it.

This project tries to bridge that gap, bringing easy data synchronization between servers and clients while requiring the least effort from developers, especially those who want to build collaborative applications.