Time Moves Sideways: Using mapped time displacement on timelapse video footage

On July 24th, 2022, I sacrificed a good night’s sleep to take 2,348 photos of the downtown Vancouver skyline as the sun rose behind it.

My camera peered out through the sunroof of my Mom’s Kia Soul toward downtown, and I set it to work, taking a photo every 6 seconds. I watched the new season of Stranger Things in the back of the car, on a blanket.

My goal was to capture the scene in a timelapse. I had done a few timelapses of this kind before, with less experience as a photographer. It was time to see if I’d improved.

My camera peering out a car’s sunroof, shooting the city at sunrise

^ The ✨gear✨ used for those who care: Panasonic S5, Canon 70-200mm L IS USM @ 150mm, f/7.1, AmazonBasics tripod (long-since discontinued), USB battery bank.

The shoot went well. But later, when I processed the photos, I noticed that there were a few issues with my approach.

The most obvious problem was that over time, as I shifted my weight in the back of the car, I inadvertently moved the camera. Oops.

The second problem was that my vantage point wasn’t entirely clear. Vancouver is designed in such a way that only the rich get nice views of the city, while the middle class are underhoused to make way for those nice views. I carefully positioned the car on the side of the road, so the camera could peer between two very expensive houses. The vantage point was so narrow, I caught the roof of one of those houses in my shot. It couldn’t be helped.

The third problem was some annoying exposure flickering that was visible only in the last few hundred frames. I can’t account for this, but I think it may have been due to glare from tree shadows moving in the wind.

After some colour correction and conversion from raw sensor data to YUV video files, I set my sights on correcting for camera movement. The point and planar trackers in Fusion didn’t seem to like my images: the tracking points drifted a few pixels in the +Y direction on every frame after the first few hundred, and before the trackers could even reach the end of the sequence, the points had slid entirely off-screen. After scratching my head for a while, I gave up and switched to Mocha Pro’s planar tracker. Mocha had no problem accurately tracking my shot, which let me convincingly cancel out all camera movement with a simple X-Y transform and crop.
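For the curious, here’s roughly what that last step boils down to. This is a minimal sketch in Python/OpenCV, not my actual Fusion comp, and it assumes you already have per-frame tracker offsets (roughly what a planar track like Mocha’s provides): shift each frame by the opposite of its tracked drift, then crop away the wobbly borders.

```python
import cv2
import numpy as np

def stabilize(frames, offsets, crop=40):
    """Cancel camera drift with a simple X-Y transform and crop.

    frames  -- list of H x W x 3 arrays
    offsets -- list of (dx, dy) tracked drift per frame, relative to frame 0
    crop    -- pixels trimmed from every edge to hide the moving borders
    """
    h, w = frames[0].shape[:2]
    out = []
    for frame, (dx, dy) in zip(frames, offsets):
        # Translate by the negative of the drift so the tracked feature stays put.
        m = np.float32([[1, 0, -dx],
                        [0, 1, -dy]])
        shifted = cv2.warpAffine(frame, m, (w, h))
        out.append(shifted[crop:h - crop, crop:w - crop])
    return out
```

The transform is the easy part; getting trustworthy offsets was the whole battle.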

Once all camera movement was eliminated, painting out the house roof was as simple as using a paint node with the clone stroke tool. Exposure flickering was effectively taken care of with the Color Stabilizer effect in Resolve’s Edit page (which is inexplicably not available in Fusion, ugh). Then, any frames that contained passing obstructions (cars, birds, etc.) were removed.
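I have no idea what the Color Stabilizer does internally, but the usual deflicker trick is the same basic idea: nudge each frame’s brightness toward a rolling average of its neighbours. Here’s a rough sketch of that idea, assuming frames are float arrays normalized to [0, 1] (this is a stand-in, not the Resolve effect):

```python
import numpy as np

def deflicker(frames, window=15):
    """Suppress exposure flicker by matching each frame's mean brightness
    to a rolling average over its neighbours."""
    means = np.array([f.mean() for f in frames])
    out = []
    for i, frame in enumerate(frames):
        lo, hi = max(0, i - window), min(len(frames), i + window + 1)
        target = means[lo:hi].mean()         # local average brightness
        gain = target / max(means[i], 1e-6)  # per-frame correction factor
        out.append(np.clip(frame * gain, 0.0, 1.0))
    return out
```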

Next, I did a couple of passes for colour correction. I broke the timelapse up into several time segments marked by distinct lighting changes. I tweaked each of those segments independently with basic colour wheels and curves. Some segments with large changes in exposure got further application of the Color Stabilizer effect. Then the segments were faded together to produce smooth transitions.

This is as far as I had expected the project to go: a basic timelapse of a stunning landscape view. But I was underwhelmed with the result.

It’s fine, I guess! All of my post-processing fixes worked pretty well. But it lacked the wow factor I’d hoped would push me to post it to social media.

It was at this moment that I remembered this reel. In it, user @dylanzchen split their cityscape timelapse shot into several vertical slices, each of which was offset in time. The result was striking and original, and I considered stealing the idea. In the moment immediately following that thought, my brain served up a memory-stew containing this TikTok of some cats distorted by a “slit scan” effect, and the intro of this old Vsauce video, which used a time displacement map that probably blew my 12-year-old self’s mind.

That led me down a rabbit hole, where I found a way to replicate that old Vsauce effect in Fusion, with an effect called “Time Mapper” included in this freeware plugin pack, developed by a person known to me only as Raf.

^ An early test of spatial time mapping in action.

The old Vsauce video used a time displacement of about 100 frames. If I applied the same effect across all of my frames, I could create a timelapse where time moves sideways across the landscape: each pixel’s time offset is set by a linear gradient, and that offset gets added on top of the current frame number.
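If it helps to see that as code: below is a bare-bones numpy re-creation of the idea (the actual work was done by the Time Mapper node in Fusion, not this). Every pixel of output frame t samples from an earlier input frame, and how far back it reaches is set by the gradient’s value at that pixel.

```python
import numpy as np

def time_displace(frames, gradient, max_offset=100):
    """Slit-scan-style time displacement.

    frames     -- array of shape (N, H, W, 3)
    gradient   -- H x W array in [0, 1]: 0 = no delay, 1 = maximum delay
    max_offset -- how many frames back the bright end of the gradient reaches
    """
    n, h, w, _ = frames.shape
    delay = np.round(gradient * max_offset).astype(int)  # per-pixel delay, in frames
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.empty_like(frames)
    for t in range(n):
        src = np.clip(t - delay, 0, n - 1)  # clamp at the start of the sequence
        out[t] = frames[src, ys, xs]
    return out
```

Crank max_offset up toward the full length of the sequence and you get the “time moves sideways” look. Wrap around with a modulo instead of clamping and you get a hard border where the timelapse restarts, which will come up again later.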

Black-to-white

This is the gradient I used, which was generated in Affinity Photo in 16-bit precision (we’ll soon explore why) with the linear gradient generator. I had to use Affinity Photo, because it turns out that the “linear” gradient generator in Photoshop isn’t linear at all!
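If you’d rather not fight a photo editor’s gradient tool at all, a few lines of numpy will produce an unarguably linear 16-bit ramp. This isn’t what I did (Affinity Photo worked fine), just an alternative; tifffile is one library that writes 16-bit TIFFs, and the resolution here is a placeholder.

```python
import numpy as np
import tifffile

W, H = 5120, 2880  # placeholder, match your footage

# A single row ramping linearly from 0 to 65535, repeated down the frame.
ramp = np.linspace(0, 65535, W).astype(np.uint16)
gradient = np.tile(ramp, (H, 1))

tifffile.imwrite("gradient_16bit.tif", gradient)
```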

Feed that gradient and my frames into the Time Mapper node, and tada! Nothing happens! 🎉

Turns out that if you try to cache two-thousand-three-hundred-and-forty-eight 5.3K frames in 16-bit precision into RAM, you need a lot of it. Like, way more than the 96GB I have in my personal machine, and still way more than either Amazon or Microsoft would give me if I wanted to run this workload in the cloud.
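The back-of-the-envelope math, assuming my 5.3K frames are roughly 5312 × 3536 pixels (a guess; I don’t remember the exact dimensions) with three channels at 2 bytes each:

```python
frames   = 2348
width    = 5312   # rough guess at what "5.3K" works out to
height   = 3536   # 3:2, before any cropping
channels = 3
bytes_per_channel = 2  # 16-bit

per_frame_mb = width * height * channels * bytes_per_channel / 1e6
total_gb = frames * per_frame_mb / 1e3
print(f"{per_frame_mb:.0f} MB per frame, {total_gb:.0f} GB total")
# -> roughly 113 MB per frame, roughly 265 GB total
```

And that’s a lower bound; an alpha channel or float intermediates only make it worse.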

One easy way to optimize this workload is to shorten the sequence. Not much happens in the first couple hundred frames, so I dropped two out of every three of those to get to the action quicker. Similar story with the last few hundred frames: those are even less dramatic, so I cut them altogether.
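The culling itself is nothing clever; it amounts to picking which frame indices to keep. A sketch with illustrative cut points (not my exact numbers):

```python
total = 2348
intro_end  = 300   # illustrative: roughly where the action starts
tail_start = 1700  # illustrative: roughly where the light stops changing

keep = [i for i in range(total)
        if (i >= intro_end or i % 3 == 0)  # keep one in three of the slow intro
        and i < tail_start]                # drop the tail entirely
print(len(keep))  # ~1,500 frames
```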

Now I was left with around 1,500 frames, which were still too big to fit into memory on the biggest cloud machine Amazon would give me keys to. But I wasn’t done yet. I cropped into a 16:9 frame, which reduced file sizes by half. I had initially planned to do this as a final step before posting to social media anyway.

If you’re following along with the math at home, you may have already noticed another potential optimization: reducing the colour depth of the images from 16-bit to 8-bit, thereby cutting the data in half again. But the trouble was that I needed 16 bits (or at least 11 bits) of precision in the gradient image, so that the number of frames in the timelapse didn’t exceed the number of colour steps in the gradient, and so the time mapping animation could move smoothly. As far as I know, Fusion processes a node using the same colour depth as its input, so reducing the bit depth of the timelapse would be effectively the same as reducing the depth of the gradient image. So that was not an option.
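The arithmetic behind that claim: with roughly 1,500 frames, each frame needs its own distinct grey level in the gradient, and 8 bits only gives you 256 of them.

```python
import math

frames = 1500
print(2 ** 8, 2 ** 16)               # 256 vs 65536 colour steps
print(math.ceil(math.log2(frames)))  # 11 bits is the minimum that fits
```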

Anyway, even with 16-bit colour depth, rendering this on a cloud-hosted machine was finally within the realm of possibility.

At this point in the story, I’m going to gloss over the numerous trials I went through to get Fusion running in the cloud. You see, Fusion needs an OpenGL-accelerated desktop to render the viewport. It will happily render frames on the CPU! But the GUI will not open without OpenGL!

Amazon eventually granted me access to a VM with 224GB of RAM and two Nvidia Tesla M60 GPUs; less than I’d asked for, but enough after optimizing the workload. I got Fusion’s GUI running over TightVNC by following the steps in this video by Craft Computing on YouTube, plus this StackOverflow answer on the location of the nvidia-smi utility (which differed in my case from the video tutorial). Once Fusion was running in a hardware-accelerated desktop, I could switch from VNC to RDP for a nice latency reduction.

It occurs to me as I write this that I probably could have used the command line and saved myself some trouble… oh well.

At this point I also need to give a huge shout out to Raf, the developer of the Krokodove toolset that contains the Time Mapper tool I used, who graciously upped the frame limit from 1,000 to 2,000 at my request! I would have had to significantly scale back my ambition without this change.

Add a blur node for the border where the timelapse restarts, render for an hour, et voilà! I present the culmination of this project!

^ (duration shortened for this post)

The final render ended up being about 50 seconds long (at 30 frames per second). With some frame blending, we can slow it down even further, which looks pretty good as a screensaver.

As far as screensavers go, anyway. I’m not sure what I’ll do with the result. I don’t actually like screensavers.
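The frame blending I mentioned above is the simplest retiming trick there is: synthesize in-between frames as weighted mixes of their neighbours. A minimal 2x slowdown looks something like this (real retime tools do something smarter, but this is the gist):

```python
import numpy as np

def blend_slowdown(frames):
    """Roughly double the frame count by inserting a 50/50 blend
    between every pair of neighbouring frames."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append((a.astype(np.float32) + b.astype(np.float32)) / 2)
    out.append(frames[-1])
    return out
```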

Key takeaways:

  1. Don’t shoot a timelapse from inside a car which also has moving things (i.e. me) in it.
  2. Check framing before shooting. This is obvious in hindsight — I paid more attention to the obstacles I was shooting around than the frame itself. If I had done some test shots, I would have noticed that my composition wasn’t ideal, and probably would have tried to capture more of the landscape to the east (which would have included the sun itself).
  3. Don’t shoot under tree shadows. It couldn’t have been avoided in this case, but I could save some time in post by avoiding this mistake in future. Or by using a matte box. The lens hood just wasn’t enough.
  4. Fusion is very picky about file formats. Uncompressed TIFFs are way faster than QuickTime + CineForm.
  5. Mocha Pro is very worth it.