
2018: 120fps and no jank

The first consumer devices with variable refresh rates up to 120Hz are reaching the market, and they make for a notably different user experience. Websites, however, often seem to struggle to reach 60fps — how can we even talk about 120fps then?

Until I met Paul Lewis, I had no appreciation for 60fps. The human eye can only see like 18 frames per second anyways, right? Well now I can’t help but spot a janky animation on the phone of the person on the other side of the street on a foggy London day.

My colleague Ian Vollick recently told me that the new iPad Pros have 120Hz displays. But why bother? The human eye surely can’t tell the difference between 60fps and 120fps, right? RIGHT? I went out and gave them a try: It makes one hell of a difference. More natural motion blur and lower latency are not only noticeable (especially during scrolling) but make the device so much more enjoyable.

Performance junkie that I am, my first thoughts were about how the web was going to perform in a world of 120Hz displays (spoiler alert: not well), and I thought I’d share them. You know, for once pretend to be a thought leader or taste maker or whatever word the kids use these days. The problem is that we are still struggling to consistently hit 60fps on the web. So while we can certainly talk about 120fps, it’s more of a thought experiment and an exploration. I don’t think it’s realistic to expect every app to run comfortably at 120fps by the end of 2018.

Why are we struggling with 60fps?

A DevTools recording showing a janky main thread

Most screens run at 60Hz, and having your animations run at that rate gives them a very smooth feeling. It’s literally easy on the eye. In my experience, there are two major issues that prevent developers from reaching that goal:

  1. Animating anything but opacity or transform. I know it’s tempting to animate height or margin-top because it’s so easy to implement, but in most cases it will make things slow. Potentially very slow, like 3fps slow.
  2. Using JavaScript for your app’s logic as well as to drive animations (rather than using CSS Transitions, CSS Animations or the Web Animations API). Keep in mind: Running at 60fps means you have a frame budget of roughly 16ms, some of which will be required by the browser itself to update the display.

The first problem is mostly an educational one. While it might be tempting to go down the easier route and animate background-position to implement a parallax effect, it also means the browser has to repaint on every frame, which is rather slow and will destroy the effect. It’s worth your time to figure out the math and use CSS 3D transforms for parallaxing. It’s our job as DevRels and DevTools engineers to catch these kinds of mistakes as early as possible. So we’ve been adding warnings to DevTools and Lighthouse to inform developers when they are animating slow properties. Paul even did an entire Udacity course on Browser Rendering Optimization.
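To make that concrete, here is a minimal sketch (the element and timings are made up) of the difference between animating a layout property and animating a transform:

```js
// Hypothetical element, purely for illustration.
const panel = document.querySelector('.panel');

// Slow: animating `height` forces the browser to re-layout and repaint
// on every single frame of the animation.
panel.animate(
  [{ height: '0px' }, { height: '300px' }],
  { duration: 300, easing: 'ease-out' }
);

// Fast: animating `transform` (or `opacity`) can be handed off to the
// compositor, so the main thread doesn't have to do per-frame work.
panel.animate(
  [{ transform: 'scaleY(0)' }, { transform: 'scaleY(1)' }],
  { duration: 300, easing: 'ease-out' }
);
```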

The second problem, however, can be tackled from multiple angles, and I decided that those will be my main concerns for 2018.

Angle 1: Stop using JavaScript for animations

Before I start bickering about all the reasons why “using JavaScript for your animations” is not ideal, I should clarify what that actually means: JavaScript-driven animations are animations where you write code using requestAnimationFrame (or rAF) to set an element’s transform on every frame. I wrote a small demo where you can inspect the code to see a comparison between a CSS animation and a JS animation.
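If you can’t click through to the demo right now, the JS-driven variant boils down to something like this (simplified, with made-up names; not the actual demo code):

```js
// JS-driven animation: a rAF loop computes and applies the transform on
// every frame. If the main thread is busy, this callback runs late and
// the animation janks.
const box = document.querySelector('.box'); // hypothetical element
const duration = 1000;
const start = performance.now();

function tick(now) {
  const progress = Math.min((now - start) / duration, 1);
  box.style.transform = `translateX(${progress * 300}px)`;
  if (progress < 1) requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```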

The reason some animation libraries use JS-driven animations is simple: It’s much harder (or sometimes even impossible) to do complex animations using just CSS Animations and CSS Transitions. In JavaScript you have an entire programming language at your disposal to imperatively define timelines and animation paths.

However, as long as an animation is JavaScript-driven, it will be at the mercy of consistent rAF scheduling and will only update when your app’s tab gets JavaScript processing time. That means you have to sacrifice a portion of your already limited and precious frame budget, limiting the amount of app logic you can execute in the remainder of the time. If your app logic takes too long, your animation will degrade in quality. To simulate that, check the checkbox in the demo above.
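To get a feel for that degradation without the demo, you can artificially block the main thread; something along these lines on every frame eats most of a 16ms budget:

```js
// Crude simulation of heavy app logic: busy-wait for ~12ms per frame,
// leaving almost nothing for rAF-driven animation work.
function fakeAppLogic() {
  const start = performance.now();
  while (performance.now() - start < 12) { /* spin */ }
  requestAnimationFrame(fakeAppLogic);
}
requestAnimationFrame(fakeAppLogic);
```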

Another reason you will want to avoid JS-driven animations in the future is that — at least currently — Safari on the iPad Pro keeps scheduling rAF at 60Hz even though the display is running at twice that refresh rate. I repeat: JS-driven animations are currently capped to 60fps, while CSS-driven animations will run at 120fps (or higher if the device allows). So if you want to tap into the new smoothness, JS-driven animations won’t get you there.

Note: I was wondering why they were capping rAF at 60Hz and asked Simon Fraser, who is one of Apple’s WebKit engineers and representatives on the CSS Working Group (Tweets 1, 2, 3). The TL;DR is that there’s too little to gain from running rAF at 120Hz to justify both the increased battery drain and the compat risk.

Solutions for 2018

There are two closely related APIs to solve this problem: the Web Animations API (WAAPI) and Houdini’s AnimationWorklet.

The Web Animations API has been around for a long time, but browser support has always been lacking, which is why it never took off. Recently, Apple has started working on implementing the API, and it’s currently available in Safari TP and behind a flag in iOS Safari. Animations done with WAAPI run on the compositor and are therefore immune to a janky main thread. The API is incredibly powerful, with support for multiple timelines and effects. The majority of animations you end up doing in a web app should be covered by WAAPI.
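For reference, a WAAPI animation is a single call to element.animate() that returns an Animation object you can script against (element and keyframes made up for illustration):

```js
const toast = document.querySelector('.toast'); // hypothetical element
const animation = toast.animate(
  [
    { transform: 'translateY(100%)', opacity: 0 },
    { transform: 'translateY(0)', opacity: 1 },
  ],
  { duration: 250, easing: 'ease-out', fill: 'forwards' }
);

// The animation itself can run on the compositor, but you keep full
// scripted control over playback from the main thread.
animation.pause();
animation.currentTime = 125; // jump to the middle
animation.play();
```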

For the animations that are not easily done with WAAPI, we have Houdini’s AnimationWorklet. It’s a new way to write JavaScript-driven animations. The genius part is that your piece of JavaScript runs on a different thread and is therefore just as immune to main-thread jank as WAAPI animations. An experimental AnimationWorklet implementation will land in Chrome early next year, so get ready to get your hands dirty. Sadly, consistent cross-browser support will probably take quite a bit longer.
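Just to sketch the shape of the API (this is the experimental proposal and very likely to change, so treat it as an illustration rather than documentation): an animator is a small class whose animate() method maps time onto an effect, and that code runs inside the worklet, off the main thread.

```js
// animator.js: runs inside the worklet, off the main thread.
registerAnimator('snap', class {
  animate(currentTime, effect) {
    // Map input time onto the effect however you like, e.g. snapping
    // the animation to 100ms increments.
    effect.localTime = Math.round(currentTime / 100) * 100;
  }
});

// main.js: load the worklet and hand it a KeyframeEffect to drive.
// (hypothetical element and keyframes)
CSS.animationWorklet.addModule('animator.js').then(() => {
  new WorkletAnimation(
    'snap',
    new KeyframeEffect(
      document.querySelector('.box'),
      [{ transform: 'translateX(0)' }, { transform: 'translateX(500px)' }],
      { duration: 1000 }
    ),
    document.timeline
  ).play();
});
```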

Sneak peek: Offscreen Canvas & Houdini’s Custom Paint

I’m not going to go into much detail on either of these technologies, but OffscreenCanvas and Houdini’s Custom Paint API both expose a <canvas>-like drawing API that can run off the main thread. OffscreenCanvas is a better fit for long-running image generation, while Custom Paint is better for responsive layouts that are tied to CSS. Both are scheduled to land in Chrome stable soon and will help you make sure that paint operations won’t eat away at your frame budget.
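As a rough sketch of the OffscreenCanvas flow (the worker file name is made up), the main thread transfers control of a <canvas> to a worker, which then draws to it directly:

```js
// main.js: hand control of a <canvas> over to a worker.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('paint-worker.js'); // hypothetical file
worker.postMessage({ canvas: offscreen }, [offscreen]);

// paint-worker.js: draw without ever touching the main thread.
self.onmessage = (event) => {
  const ctx = event.data.canvas.getContext('2d');
  ctx.fillStyle = 'hotpink';
  ctx.fillRect(0, 0, 100, 100);
};
```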

Angle 2: The main thread is the UI thread

The web trying to catch up with the capabilities of native apps has been an ongoing theme for years now. But only recently has it occurred to me that when it comes to animations and performance, native apps do one thing drastically differently: Their UI thread — the thread that does animations and handles input — is exclusively for the UI. Any kind of logic runs in a different thread. Both iOS and Android enforce this behavior, although, as far as I know, with different levels of strictness. The web, on the other hand, has historically just had the main thread, where everything happens. If we want to even think about 120fps (where our frame budget is 8ms, half of which is probably used by the browser itself), we should look into adopting a similar pattern.

Solutions for 2018

Enter: WebWorkers. WebWorkers are the web’s way of introducing real parallelism: they let you spawn a truly separate thread, run your JavaScript code there and communicate with your main window. For example, you could transcode a 4K video in a worker and your main thread would remain jank-free. The API is surprisingly old and supported in all major browsers — even IE10!
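The core API really is tiny; a minimal sketch (file name made up):

```js
// main.js: spawn a worker and talk to it via messages.
const worker = new Worker('worker.js');
worker.postMessage({ a: 40, b: 2 });
worker.onmessage = (event) => console.log('Result:', event.data); // 42

// worker.js: runs on its own thread, so heavy work here won't jank the UI.
self.onmessage = (event) => {
  const { a, b } = event.data;
  self.postMessage(a + b);
};
```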

⚠️ Confusion warning: WebWorkers are not ServiceWorkers.

One of my goals for 2018 is to show that you can use the main thread exclusively for UI work by utilizing WebWorkers. My plan is to only run Custom Elements, state management and some scripts for class shuffling on the main thread. Everything else goes in a worker. Since communication with a worker is not only asynchronous but also has to happen via a rather limited channel, I have been working on Comlink. It’s an RPC library that helps you make WebWorkers and the main thread work together without having to cry yourself to sleep at night.
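The shape of a Comlink setup is roughly this (a sketch, not copied from the docs, and the exact API surface has shifted between versions): the worker exposes an object, and the main thread gets an asynchronous proxy to it.

```js
// worker.js: expose an object to the outside world.
importScripts('comlink.js'); // assumes a local copy of Comlink
const api = {
  counter: 0,
  increment() {
    return ++this.counter;
  },
};
Comlink.expose(api);

// main.js (Comlink loaded via a <script> tag or import):
// every call on the proxy returns a promise and runs in the worker.
const remote = Comlink.wrap(new Worker('worker.js'));
remote.increment().then((value) => console.log(value)); // 1
```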

Diagram showing an app architecture that has state logic and app logic in a worker and only components on the main thread.

Angle 3: Yielding to the browser

Once we have put all our animations on the compositor and all our logic in a worker, it’s still possible (not to say “easy”) to clog up the main thread. Appending a large number of new children to an element requires the browser to recalculate the layout of the page and draw all those new children. Especially on mid- to low-range mobile devices this can take a long time: this demo I wrote (WARNING: lots of flashing!) repaints 100 children on every frame. A high-end MacBook Pro just about manages to keep up. My Pixel phone barely makes it to 10fps.
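The demo boils down to something like this (not the actual code, just to give an idea of the workload):

```js
// Replace 100 children on every frame, forcing style recalculation,
// layout and paint each time.
const container = document.querySelector('.container'); // hypothetical
function thrash() {
  container.innerHTML = '';
  for (let i = 0; i < 100; i++) {
    const child = document.createElement('div');
    child.textContent = i;
    child.style.background = `hsl(${Math.random() * 360}, 80%, 60%)`;
    container.appendChild(child);
  }
  requestAnimationFrame(thrash);
}
requestAnimationFrame(thrash);
```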

What I am saying is that the main thread can be busy in multiple ways, not just with executing JavaScript. Calculating the effective styles after adding a class, laying out all the DOM elements and repainting them accordingly all happen on the main thread, too. This explains why the frame budget I keep talking about is such a scarce and precious resource.

Solutions for 2018

The solution here is a change in best practices and discipline: Yield to the browser. Often. Do your work in small chunks. This way the browser has the chance to ship a frame in between your little chunks of work. One helpful puzzle piece here is requestIdleCallback, or rIC for short.

rIC takes a function that will be called when the browser is idle. You can queue up multiple callbacks and the browser will invoke them all in the same frame, as long as there’s time left to do so before the next frame has to be shipped. Once the frame budget is used up, the browser will get busy shipping a frame and then continue processing the queue of rIC-scheduled functions afterwards.
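In practice that looks something like this (the task queue is made up; the important part is checking deadline.timeRemaining() before each chunk):

```js
// A simple cooperative task queue: only process tasks while the browser
// says there is idle time left before the next frame has to be shipped.
const tasks = []; // hypothetical queue of small, self-contained functions

function processTasks(deadline) {
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    const task = tasks.shift();
    task();
  }
  // If there is work left, ask for the next idle period.
  if (tasks.length > 0) requestIdleCallback(processTasks);
}

requestIdleCallback(processTasks);
```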

The support matrix for rIC is currently pretty bad, so I’d like to see that change in the next year. In the meantime, there’s a crude but effective (and tiny) rIC shim by our Paul Lewis that you can use.
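The general shape of such a fallback (not Paul's exact code) is a setTimeout that pretends there is always a bit of idle time available:

```js
// Fallback for browsers without requestIdleCallback: run the callback on
// a timeout and report up to ~50ms of pretend idle time.
window.requestIdleCallback = window.requestIdleCallback || function (callback) {
  const start = Date.now();
  return setTimeout(() => {
    callback({
      didTimeout: false,
      timeRemaining() {
        return Math.max(0, 50 - (Date.now() - start));
      },
    });
  }, 1);
};
```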

Jank-free in 2018

My goal in 2018 is to eliminate jank on the main thread — or rather: Teach developers how to do so. As I said earlier, I can’t (and won’t) expect every app to run at 120fps, but we can make sure we are running as fast as the current device lets us. There’s lots of work to be done in terms of tools, libraries, best practices and architecture, but I won’t be the only one exploring this space. Looking at the frameworks panel at CDS, the very first thing that is brought up is the desire to use WebWorkers to move things off the main thread. I feel like Workers, Comlink and Custom Elements are a set of tools that can get us very far in this space already.

If you have made efforts in a similar area or have any kind of feedback, I’d love to hear about it! After all, I’m trying to plan my year here ;)