Fogetti



Event Driven Modelling in Javascript - Part III

Posted on July 11th, 2016 in html5, canvas, event driven, simulation, modeling, web-workers, web, workers, request-animation-frame, elastic, collision

In the last post we took a look at the implementation details of the collision system, and eventually we saw it running in HTML5.

That's all well and good, but the animation had some issues. First of all, it was slow and jerky. And second, it wasn't visually appealing at all.

So in this post we will speed things up a little bit and try to add some eye candy to our animation.

Our goal is to achieve a smooth animation and to add a tail effect to the particles, making them look like colorful comets by using an effect similar to motion blur.

So our particles are moving, but every now and then they seem to stop or slow down, and at other times they seem to speed up quickly just to slow down again.

Root cause analysis: what might be the reason for this? Well, to understand the cause, we have to understand how animation works in general. What we really want to achieve here is to slice every passing second into a given number of slices and refresh the canvas once per time slice. This is what time-sliced modeling was designed to solve in the first place, and there is a tremendous amount of information on the internet about how to achieve it.

So to do this in JavaScript, one naive approach might be to use the setInterval feature and do all the prediction, resolution, and animation drawing in one call. But this approach has at least two drawbacks.
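A minimal sketch of that naive approach could look like this (the predict, resolve, and draw method names are placeholders, not the collision system's real API):

```javascript
// Naive time-sliced loop: do everything in one setInterval callback.
// `system` is assumed to expose predict/resolve/draw methods.
function startNaiveLoop(system, fps) {
    var sliceMs = 1000 / fps; // length of one time slice in milliseconds
    return setInterval(function () {
        system.predict(); // compute upcoming collisions (heavy)
        system.resolve(); // resolve collisions that are due (heavy)
        system.draw();    // repaint the canvas
    }, sliceMs);
}
```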

Prediction and resolution are computationally heavy operations. Even if you start calculating the physics part precisely at the beginning of each time slice, there is no guarantee that the calculation will finish before the next time slice begins. Don't underestimate this calculation: it's not easy to get right at first, and it can give underwhelming results if implemented incorrectly.

Also, because JavaScript's concurrency model is based on an event loop whose semantics prescribe "run-to-completion" for each scheduled message, setInterval doesn't give us any solid guarantee about when precisely our function will run in the future.
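A tiny experiment illustrates this (the busy-wait here stands in for any long-running message occupying the event loop):

```javascript
// A 10 ms timer cannot fire while a long task blocks the event loop:
// run-to-completion means the current message must finish first.
function demonstrateDrift(blockMs, callback) {
    var scheduled = Date.now();
    setTimeout(function () {
        callback(Date.now() - scheduled); // actual delay, not the 10 ms we asked for
    }, 10);
    var start = Date.now();
    while (Date.now() - start < blockMs) { /* busy-wait blocks the loop */ }
}
```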

So what to do then? Are all hopes lost? Not so fast! Browser vendors luckily recognized this shortcoming of JavaScript and quickly came to our rescue with a method called requestAnimationFrame. If we take a closer look, we will see that its function signature is very similar to that of setInterval, except that the interval parameter is missing. And that's the point! The clever thing about requestAnimationFrame is that it executes our callback function precisely at the time when the next frame is about to be drawn to the screen! Instead of our code calling into the JavaScript engine to do something on our behalf, the engine calls our code at the perfect time to ask if there is any work to do. Neat!
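In its simplest form, a requestAnimationFrame-driven loop looks something like this (a minimal sketch, not the demo's actual drawing code):

```javascript
// Let the browser schedule each repaint: the callback runs right before
// the next frame is drawn, so we never have to guess the interval ourselves.
function startAnimationLoop(system) {
    function frame(timestamp) {          // timestamp is supplied by the browser
        system.draw();                   // repaint with the latest state
        requestAnimationFrame(frame);    // ask to be called again for the next frame
    }
    requestAnimationFrame(frame);
}
```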

So now that we know how to solve the jerkiness, let's add some cool afterburner effect to our particles!

The desired effect that we want to achieve is that each particle leaves a fading trace along the path it takes. So let's think about this for a minute first. What is motion blur in this case? Isn't it just simply a fading trace of the particle? And what does fading mean in this context for that matter?

Well, it turns out that fading is basically just the same object with a different alpha value (given that we use RGBA color codes). Which means that if we keep a trace of a particle, where the current position has the highest alpha value and the oldest position has the lowest, we have produced a trace! So let's do that!

First of all, we should notice that producing the trace of each particle one by one has the same overall net effect as reducing the alpha value of the whole image at once. Putting all this information together, we get a simple algorithm:

#1 Copy the last canvas of the moving particles into a buffer canvas

#2 Move those particles on the buffer canvas by the prediction algorithm

#3 Combine the buffer canvas and the visible canvas

CollisionSystem.prototype.redraw = function() {
    // Step #1: copy the last visible canvas into a buffer canvas
    var canvas2 = document.createElement("canvas");
    canvas2.width = this.c.canvas.width;
    canvas2.height = this.c.canvas.height;

    var cx2 = canvas2.getContext("2d");
    cx2.fillStyle = "rgba(0,0,0,0.01)";
    cx2.fillRect(0, 0, this.c.canvas.width, this.c.canvas.height);
    cx2.drawImage(this.c.canvas, 0, 0);

    this.c.clearRect(0, 0, this.c.canvas.width, this.c.canvas.height);

    // Step #2: draw the particles at their newly predicted positions
    for (var i = 0; i < this.particles.length; i++) {
        this.particles[i].draw(cx2);
    }

    // Step #3: darken the visible canvas slightly, then combine it with
    // the buffer; older positions keep fading a little more each frame
    this.c.fillStyle = "rgba(0,0,0,0.08)";
    this.c.fillRect(0, 0, this.c.canvas.width, this.c.canvas.height);
    this.c.drawImage(canvas2, 0, 0);

    // Schedule the next redraw event until the time limit is reached
    if (this.t < this.limit) this.pq.insert(new Event(this.t + 1.0 / this.hz, null, null));
};

This last step is essentially our motion blur step. Since the combined canvas will contain the latest particle positions with the highest intensity value, plus all prior particle positions with gradually fading intensity values, we have basically created an afterburner effect. Cool, isn't it?

So let's see what this looks like in a live demo.

See the Pen Event Driven Collision II by Gergely Nagy (@fogetti) on CodePen.

You can check out the code here: event-driven-part-2

Note that the jerkiness is still present and yet to be fixed. So let's continue by removing it.

Now that we understand how the event loop and requestAnimationFrame work, we can fix the jerkiness, right? Well, not so fast! It's definitely true that requestAnimationFrame will execute our code precisely when the next frame is about to show up on the screen, but we still have to halt the event loop every time a new animation frame is scheduled to run. This is not going to work out well. The code now interleaves the drawing and prediction logic in a nondeterministic way. Even if the browser is ready to show a new frame, our event loop might not have anything to show yet. Or computing the next prediction might take too long and overlap with the next frame.

So let's decouple these two separate concerns. Let's run our animation frame drawing logic in one thread and the prediction algorithm in another. Now the two parts run in parallel, and the frame updating logic performs a much easier task: simply pulling predicted particles off a queue and drawing their new positions to the canvas. And on the other side of the queue: let's predict the new positions of the particles and put them into the queue!

This is much better than the previous solution, since the work performed when the browser schedules a new frame is limited only by the speed of the HTML5 canvas operations.

So let's take a look at the code:

DrawImage.prototype.simulate = function() {
    ...
    var self = this;
    var worker = new Worker(...); // Create worker
    worker.postMessage({
        url: document.location.href,
        0: this.canvas.width,
        1: this.canvas.height
    }); // Send the canvas dimensions to the worker

    requestAnimationFrame(function() {
        self.animate();
    });

    // Register a handler to get the worker's response
    worker.onmessage = function(e) {
        self.particleQ.push(e.data);
    };
};

Easy enough. First we create a new web worker for the collision system. This will be our producer. Then we register an onmessage handler to consume the particles produced by the worker, and we register our animate method to draw the particles on each new frame request.
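The animate method itself is not shown here; a minimal sketch of what such a consumer loop could look like follows (the redraw call and the stand-in constructor are illustrative, the real method lives in the repo):

```javascript
// Minimal stand-in for the real DrawImage: only the queue matters here.
function DrawImage() {
    this.particleQ = []; // filled by the worker's onmessage handler
}

// Consumer loop: take the oldest predicted state off the queue,
// draw it, then ask the browser for the next frame.
DrawImage.prototype.animate = function () {
    var self = this;
    if (this.particleQ.length > 0) {
        var particles = this.particleQ.shift(); // oldest prediction first
        this.redraw(particles);                 // illustrative draw call
    }
    requestAnimationFrame(function () {
        self.animate();
    });
};
```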

One note on the collision worker: some of the code is omitted deliberately. You can find the full code in my GitHub repo. But note that this code instantiates the worker from a blob stored in the HTML DOM tree. This step is usually not necessary. It's usually much easier to put all the JavaScript code into the same folder as the caller and create the worker by simply calling new Worker('js/collision-system.js'). The code in the repository is more complicated for technical reasons imposed by CodePen, which we are not going to discuss here.
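For the curious, the blob-based construction goes roughly like this (the element id is an assumption for illustration; the actual id is in the repo):

```javascript
// Build a worker from script text embedded in a non-executing <script>
// tag in the page, instead of loading it from a separate .js file.
function createBlobWorker(scriptElementId) {
    var source = document.getElementById(scriptElementId).textContent;
    var blob = new Blob([source], { type: "text/javascript" });
    return new Worker(URL.createObjectURL(blob));
}
```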

The collision system is similarly simple:

CollisionSystem.prototype.postResult = function() {
    postMessage(this.particles); // Send the predicted particles to the main thread
    if (this.t < this.limit) this.pq.insert(new Event(this.t + 1.0 / this.hz, null, null));
};

We post the predicted particles to the consumer on the main thread and keep the prediction algorithm running.

So let's take a look at the live demo:

See the Pen Event Driven Collision III by Gergely Nagy (@fogetti) on CodePen.

Nice and smooth!

That's all folks.

Make sure to check the code in my repo: event-driven-part-3