Outdoor Photography and Videography

Tobi Wulff Photography


Fast Editing and Color Grading with a Gaming Mouse

There are many control surfaces out there to help with editing and colour grading but they are mostly geared towards professionals and are very expensive. Examples are surfaces from Tangent and Blackmagic.

Sunset in the North Routeburn, Mount Aspiring National Park, NZ. Cropping, exposure and contrast adjustments, and monochrome processing in Darktable.

For amateurs and enthusiasts there are multiple cheaper options. I've written about my DIY controller for Davinci before, but there are also consumer devices that can greatly speed up editing and grading. Because I have to use a mouse anyway (for lack of a complete, fully-featured control surface), the Logitech G700s gaming mouse is one of my favourite tools. Its main feature is the four thumb buttons on the left side that can be assigned arbitrary actions or shortcuts through the Logitech software. The configuration is stored in the mouse itself, so once set up the Logitech software is no longer needed. This means the settings will work the same on any computer and any operating system. The mouse wheel is special on this mouse: it can be pushed left or right (very useful for scrolling on a timeline), and it can be put into a free-spinning mode with the button next to it. This is useful for browsing websites or scrolling through long documents such as the Davinci manual (PDF).

Edit

For editing I use three of the buttons to switch between Davinci's three edit modes (pointer [shortcut A], trim [T], razor blade [B]). The fourth button is used to toggle snapping on or off since I constantly find myself switching between shifting clips around (in this case I want them to stick to the next clip so that there is no gap) and making fine adjustments to the length of clips or exact cuts. Another good option for those four buttons are the clip modes Insert, Overwrite, Replace, and Place on Top.

The top buttons aren't used as heavily because they are a bit awkward to reach while holding the mouse. At the moment I have the three buttons on the top left set up to set in and out points and to toggle video/audio linking for the selected clip. I almost find the I and O keys on the keyboard easier to reach but I also miss them sometimes when I don't look down. The two buttons in the center of the mouse switch through the profiles and turn the free-spinning mouse wheel on or off.

Colour

The most common actions when colour correcting and grading, and the central piece of Davinci's colour page, are the nodes. So I set up the four buttons on the side of the mouse to quickly add serial, parallel, and layer nodes, and also to add a serial node before the currently selected one. I haven't found a specific use for the three top buttons yet: there are many possible shortcuts, but none of them are used as often as handling nodes. Maybe I'll go for undo/redo, or for handling clip versions or the gallery.

Summary

Using a gaming mouse together with a few keyboard shortcuts or a simple control surface (I'm looking forward to seeing what the Tangent Ripple can do) can greatly speed up your editing and colour grading work. If you've only been using a normal three-button mouse so far, I highly recommend giving a gaming mouse with 7-10 additional buttons a go.

Please use the comment section below or head over to Google+ or Twitter @tobiaswulff to discuss this article or any of my photography and videography work. My Flickr, 500px and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

Preparing for a Documentary Shoot with Blackmagic and GH4 - Part 1

The actual production time for my first proper documentary is coming up in a few weeks so I want to start writing about the pre-production process and my experiences as each shooting day happens.

Upskilling

There are so many areas you have to cover as a mostly one-man-band when making a short feature and there is always more to learn. Most of those areas also go hand in hand so even if you want to hand something off to someone else it still pays to learn the basics and get into the editor's head, or the audio guy's head, and of course the camera man's/DP's head.

Over the last year I slowly learned to use Davinci Resolve 12 to edit and color grade. Even though color grading (and, to some extent, editing - although luckily a documentary isn't 100% creativity, and some things just fall into chronological order) probably takes decades or a lifetime to master, I am slowly getting the hang of matching shots and giving a film a certain look. So while I think there is lots to learn on an actual big project, I've got the basics to tackle a short film. Really useful tutorials I used to learn these skills are the YouTube videos of Casey Faris and Miesner Media. The official Resolve manual is very content-rich and well written. It is definitely worth a read if you're serious about using the program to produce films - at the very least it is handy as a reference document.

The recent mountain timelapse was a good exercise in dealing with RAW timelapses and turning them into an edited and color graded video.

Equipment

I've slowly gathered all (most? acquiring equipment never ends) of my equipment over the last few months and am now ready to shoot a variety of scenes in different weather and lighting conditions. The cameras and how I plan to use them:

Blackmagic Pocket Cinema Camera with Metabones speedbooster and full-frame or APS-C lenses (from 11mm to 105mm, some of that with optical stabilisation): use whenever feasible because it produces the best image, but it won't shoot slow motion. It is also too heavy to go running with (unless stripped way down) and therefore won't work on my Roxant stabilizer.

GH4 with speedbooster and full-frame or APS-C lenses: due to the different crop factor gives slightly different focal lengths than the BMPCC. Can shoot 4K and slow motion so will be used when those features are necessary. Is also more rugged (see my test in the rain here) and works on the stabilizer with a small MFT lens. The GH4 has decent audio input (as long as the pre-amps are turned down) so I don't necessarily need a separate audio recorder - something that is absolutely required with the BMPCC.

GoPro Hero 3: I don't like the image of the GoPros that much but it is a great little camera and can do super wide-angle shots, good slow motion, and fit in tight corners where other cameras won't go. I plan to use it for timelapses with a tiny rig (e.g. on a Gorilla pod) and to leave it outside for longer periods without having to worry about it too much.

Either one of those or a photography camera like the Olympus E-M1 will also be used to shoot timelapses without using any of the precious video equipment.

I've experimented a lot with rigs from Smallrig and will write a post at some point about the specific parts. At the moment I'm still swinging back and forth between more parts and attachments and a smaller rig, so I don't want to finalise it just yet. What I can say, though, is that a minimal cage works best for small cameras like the BMPCC and GH4, and Nato rails and top handles are amazing.

What's next

The next step is to shoot a daytrip in the outdoors where we prepare the course for the event. It will involve using the stabiliser and trying to record good audio while being on the move.

Until then, here again is the clip from last year's event:



RAW Timelapse Workflow with Darktable and Davinci Resolve

I shot a new timelapse in the mountains, this time exclusively recording all the frames in full resolution and RAW (unlike my previous outdoor timelapse). It was recorded with the Olympus OM-D E-M1 and the Olympus 12-40mm F2.8 PRO lens.

Out in the field

Basic outdoor timelapse 101: all manual settings, that is ISO, white balance, aperture, shutter speed. White balance was obviously daylight and I kept the ISO at its minimum (200). Shutter speed should be set to something "video-like". Video and film cameras usually use something called a 180 degree shutter which essentially means that the shutter speed is 1/(2 x frame rate). So for a 24 fps video that means 1/48 or (because photo cameras usually don't offer this setting) 1/50. Anything faster than that runs the risk of making the timelapse feel jittery and too sharp. For fast movements, like people or clouds, I like to go even slower and aim for something like 1/20 - 1/40. This gives the video a more dreamy and pleasing look.
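The 180-degree rule is easy to capture in a tiny helper; a sketch in Python (nothing here is camera-specific):

```python
def shutter_speed(frame_rate, shutter_angle=180.0):
    """Shutter speed in seconds for a given frame rate and shutter angle.
    A 180-degree shutter gives 1 / (2 x frame rate)."""
    return (shutter_angle / 360.0) / frame_rate

# 24 fps with a 180-degree shutter -> prints "1/48 s"
# (use 1/50 on a stills camera that doesn't offer 1/48)
speed = shutter_speed(24)
print(f"1/{1 / speed:.0f} s")
```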

I record every frame in RAW. I like to store JPEGs as well so I can generate a quick timelapse when I get home without having to go through the RAW workflow (described below) first.

To do the actual timelapse recording, there are several options depending on your circumstances and your equipment:

  • Using the camera's in-built timelapse function: most compact solution and works well on the E-M1 except when you want faster than 1 second intervals;
  • using a remote shutter release or remote timer: works great but you have to dial in the intervals using the anti-shock functionality and it's an extra cable flapping in the wind;
  • a slider or panning head triggering the camera: whenever the E-M1 sits on the panning head (see next section), it receives its shutter releases from the Genie. The result: accurate intervals perfectly timed with the stops between motions of the moving parts of the timelapse setup.

For filters I often use a graduated ND filter to make the bright sky and the darker ground a bit more even. This is particularly important at sunrise and sunset because the ground will be really dark. I also have a circular polarizer that lives on my lens 95% of the time: vegetation looks more lush, colours more vibrant, and annoying reflections of leaves or glaring surfaces disappear. It can also cut through a lot of haze and mist on a more cloudy day. Time in Pixels just released an excellent article about filters for video with many visual examples.

Getting moving

I've written about my DIY slider before and it is actually undergoing some major upgrades right now to make it more usable and flexible. However, I don't usually take it very far because it is heavy and big. In order to have something that always fits in even the smallest bag, is compact and rugged (not weather-proof, though) and "just works", I got myself a Genie Mini, which is actually being developed here in NZ. It's controlled from a smartphone via Bluetooth, so setting it up takes a few minutes (my phone is usually off when I'm in the outdoors - no reception anyway), but it's very intuitive and flexible (watch the videos on their website). All the shots in the video at the top of the page that have some side-to-side movement are done with the Genie Mini.

RAW workflow

The out of camera JPEGs are alright but (especially for landscapes) don't look nearly as good as they could when I develop my own final images from RAW: better colours, more dynamic range, more wiggle room in the highlights (and some in the shadows). This is particularly important when photographing sunsets, sunrises, or rapidly changing lighting conditions because the exposure can be adjusted so much in post. I load all my RAWs from one scene into Darktable, then do all my adjustments on one of them (shadows, highlights, general exposure, Velvia/saturation filter, contrast, noise reduction, but no cropping - I can always do that later when editing the video). Then I copy the settings to all the other RAWs and export everything to bitmap files with a high bit depth, such as 16-bit PNG or TIFF. In theory, one could also make fine adjustments to individual frames at this stage.
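If you'd rather script the export than click through it, Darktable's command-line front end darktable-cli can batch-develop the frames, picking up the copied .xmp sidecar edits automatically. A minimal sketch that builds one call per frame (the folder name and .ORF extension are just examples):

```python
import pathlib


def export_cmd(raw, out_dir):
    """Build a darktable-cli call that develops one RAW frame to a TIFF.
    darktable-cli applies the .xmp sidecar (the copied edit) when it sits
    next to the RAW file; the output bit depth follows the export settings."""
    return ["darktable-cli", str(raw), str(out_dir / (raw.stem + ".tif"))]


# Run each command with e.g. subprocess.run(cmd, check=True);
# here we only print them ("scene01" is a hypothetical folder).
for raw in sorted(pathlib.Path("scene01").glob("*.ORF")):
    print(" ".join(export_cmd(raw, pathlib.Path("scene01_tiff"))))
```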

Editing

The last step is to edit the photos into a timelapse video and maybe add some music and sound effects. I mainly use Davinci Resolve for editing because it also has colour grading built in but the colours should already be fairly correct and good looking from the last step. Davinci can directly import image sequences (i.e. individual files) and display them as video clips.


Shooting on the River

A few weeks ago I was shooting photos and a short film on and next to a river while also doing grade 3 whitewater action. This is a quick summary of some gear I used and some helpful tips to keep your gear dry and in good working condition. But first, here is the finished short film about the trip:

Equipment

First and foremost, your equipment has to be stored in a safe way. This means a good water-proof (even when submerged) and to some degree crush-proof case, ideally with foam inserts to protect against shock. Pelican cases have a great reputation and I use the Pelican 1200 Case because it's just the right size for a MFT camera and a decent sized lens plus some accessories. It also fits perfectly behind the seat cushion of the river bug that we are using. Pelican makes cases in a lot of different sizes, from memory cards and small electronic devices to cameras and big carry-on roller cases that themselves already weigh 6-8 kg. I did a few rolls in the river (as you can see at the end of the video) and everything stayed perfectly dry. The main thing to watch out for is that absolutely nothing gets in between the rubber seal and the lid when closing the case. Many a camera has been lost in the past due to a bit of cloth or a camera strap preventing the seal from working correctly.

I also keep a small micro-fibre towel in the case to absorb puddles and to be able to wipe off any water on gear or your hands. As usual with any photography/video gear, there is also a small lens cloth to wipe off moisture and clean the lens. One lesson I learned - even though nothing bad happened - is to always put small and vulnerable things like batteries and microphones into zip-lock bags. Sooner or later your camera will get a bit wet or water will drip from you or your equipment into the case, so extra protection is required for things that should never ever get wet. It also keeps everything more organised and things can't fall out as easily.

Action cameras (such as my GoPro HERO3 Black Edition) are of course the main work horses of all thrill seekers and outdoor enthusiasts. I put a piece of paper towel or toilet paper inside the case to absorb moisture - that's basically all the more expensive GoPro absorption papers do. It also helps you verify if everything stayed tightly shut at the end of the day.

My main camera was the Olympus OM-D E-M1, but in the future it will definitely be the Panasonic LUMIX GH4 with the Metabones Canon EF to BMPCC Speed Booster because of its slow-motion ability and better codecs and video features (unless I need a smaller footprint or the IBIS of the E-M1). I couldn't use a good microphone, but I think the sound of the river is still ok, and luckily there weren't many other sounds to record on location. The E-M1 and PRO lens performed flawlessly in the heavy rain, which was great because for basically any other video camera I would've needed an umbrella or camera bag handy whenever I took it out of the case.

Post-processing

Because I only used the on-camera microphone, and because a GoPro in its water-proof housing doesn't record sound very well, the major "trick" to improve the finished video was to make a good wild river soundtrack and apply it to all the tracks where it made sense, that is, where there are rapids and whitewater. I also left the GoPro sounds in on a separate track (they are more like clicks when hit by a wave) because they give more immediate feedback on what is happening visually. The same is true for the sound of the rain on the camera: it doesn't sound great, because it's the built-in mic recording rain hitting the camera housing, but it helps with audience immersion.

Color grading has taken another big step forward in this project: I used Davinci Resolve 12 Lite and the color themes are cool/high-contrast for the bad weather scenes and a warm slightly teal/orange look for the good weather scenes. The GoPro footage was graded to fit well into the surrounding clips but it also has a more realistic and less stylised look. For the supermarket indoor shots I added glow around the highlights to give it a slightly dreamy feel because it is so different to the harsh outdoor and action shots. I will post some before and after color grading shots soon.


Use a MIDI Controller as a Video Editing and Color Grading Surface

Control surfaces can greatly speed up editing and color grading work and also help avoid issues like carpal tunnel syndrome, because your hand can move around more freely instead of clutching a mouse all day. However, like most things in the video/film world, they can be very expensive. At the top end there are full suites like Blackmagic's Davinci control surface (multiple $10,000s), the Avid Artist line (many $1,000s) and smaller devices like the Shuttle Pro V.2 ($129). While they are often very well made and can be worth it if your profession is to produce multimedia every day as efficiently as possible, the cost of the functional parts is actually much lower. So I decided to build something similar to the Shuttle Pro but with a few key differences:
  • Sends MIDI messages instead of registering as a keyboard
  • Fewer buttons but a shift function that doubles the number of functions, including the jog/shuttle wheels
  • Different placement so that the hand can rest on the left and easily access buttons on the top and the right

MIDI

The MIDI protocol has been around for decades and is primarily used in the audio and lighting world for input devices and synthesizers. As opposed to an input device that emulates a keyboard, MIDI has the advantage that the incoming messages can be translated to keyboard shortcuts on the computer, whereas an Arduino emulating a keyboard will always send the same shortcuts. This gives much more flexibility, e.g. when changing programs or modes within a program (think media, edit, and colour pages in Davinci Resolve).

Within Resolve, shortcuts can be configured (or are configured by default) for pretty much all functionality apart from curves and the lift/gamma/gain colour wheels - without a dedicated control surface one still has to use the mouse to modify these parameters. To process incoming MIDI messages on Linux I use mididings. I actually gave a talk at KiwiPyCon 2014 about using mididings to control photography (or any) software on Linux. On Windows I wrote my processing code in C++ and used the rtmidi library, which is easy to compile (I use MinGW gcc) and comes with many excellent examples.
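As a sketch of why the MIDI indirection helps: on the computer side the translation boils down to a lookup from (current page, incoming note) to a shortcut, so the same physical button can do different things per page. The note numbers and key names below are made up for illustration:

```python
# Map (active Resolve page, MIDI note number) -> shortcut to emit.
# These bindings are hypothetical; the point is that one physical
# button (one note) can mean different things depending on the page.
SHORTCUTS = {
    ("edit",   60): "a",       # pointer/selection mode
    ("edit",   61): "t",       # trim mode
    ("colour", 60): "alt+s",   # add serial node
    ("colour", 61): "alt+p",   # add parallel node
}


def translate(page, note):
    """Return the shortcut for a note on the given page, or None."""
    return SHORTCUTS.get((page, note))


print(translate("edit", 60))    # -> a
print(translate("colour", 60))  # -> alt+s
```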

Assembly

The physical parts come down to a few buttons (cents to a few dollars), LEDs (cents) and the jog/shuttle made by ALPS ($15-20). Figuring out the pins on the jog/shuttle was pretty straightforward, but this article goes through the process in more detail and might be useful to anyone trying to get a similar part working. I already had the plastic case and an Arduino to power the project lying around. To turn the Arduino into a MIDI device you'll have to replace the firmware on the ATmega used to communicate with the computer via USB (which is different from the main ATmega on an Arduino Uno!). The firmware and detailed instructions can be found on the HIDUINO Github page.

The shuttle controls forward and backward play at different speeds (J, K and L in Resolve) and the jog dial advances or rewinds the playhead one frame at a time (left and right arrow keys).

I still need to figure out a way to make an outer wheel for the shuttle and a knob or inner wheel for the jog rotary encoder. 3D printing might be the best way but first I'll have to learn how to create the virtual parts for it. At least the shuttle and jog wheels have sturdy grooves that should make it fairly easy to attach knobs or wheels to it.

At the moment, the Arduino is connected to each button, LED and the ALPS shuttle/jog through cables that go into the female headers on the little green board. Eventually, the Arduino will have to move into the enclosure, and more sturdy, soldered connections will be made between its pins and the components.


Long-run timelapses across multiple seasons - Part 2

In the first part of my timelapse blog series I wrote about different types and techniques for longer and really long (seasonal) timelapse movies. In this article I want to describe a few specific techniques and tools that are useful (and often mandatory) to finalize those timelapses.

But first I'll lead into this timelapse article with my new video "Mountains and Clouds 2015" that I've shot over the course of about one year in the South Island of New Zealand on various trips.

One thing I wish I had done from the start is shoot all the frames for timelapses in full resolution and as RAW files. Always keep future use of your files in mind! Unless I got lucky and a scene was really well exposed, I couldn't correct the JPEG frames as much as I could have when grading from RAW, and thanks to the higher resolution the final video would also have looked sharper. I take bets on Twitter (@tobiaswulff) on which scene was the one shot in RAW ;) .

Batch Processing

The first step in processing timelapse RAW frames is to "develop" them from a digital negative into a bitmap photo file. When exporting the image it is important to choose a format that retains the full bit depth of the original RAW file: options are TIFF or 16-bit PNG. JPEGs should not be used since they only store 8 bits of color depth, which will not fare well in color correction and grading later.

There is another way, though: keeping the files in their RAW format and using video editing software that can deal with RAW footage, such as Davinci Resolve 12. For Resolve to ingest the RAW files they have to be converted to DNG. My photo management software of choice, digiKam, has an in-built DNG converter, and I believe so do Apple Aperture, Adobe Bridge and/or Lightroom. Working with RAW video is pretty neat, but because this is not a recognized format from a video camera (such as RED), the possibilities to adjust the image are limited, and a proper RAW photo program such as Darktable or Lightroom is much better suited for the job. Nevertheless, you are importing 10+ bit image data into the editing/color grading software, which gives much, much more room for colour and exposure adjustments.

Compile Videos

As described earlier, Resolve can ingest DNG RAW files so it is possible to do editing and color grading with the source material. For the timelapse video posted at the beginning of this article, however, I compiled each sequence into a video first, which can be done using one of the following two Linux programs: Blender or ffmpeg (CLI tool). This use of "baked" videos will make the edit smoother because the program doesn't have to deal with as much data.

ffmpeg is a command-line tool, so it is easy to batch-process or automate converting image file sequences into video files. ffmpeg supports all the usual video file formats and containers, including ProRes, which is a 10 to 12-bit 4:2:2 or 4:4:4 codec and the preferred format for Resolve. However, I found that the picture didn't quite come out the way I wanted: in particular, darker areas got too dark, so stars in night-time sequences (like the one at the end of my video) almost disappeared. I'm sure this can be adjusted using the codec settings, but for now I have turned to Blender.
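A typical invocation for this step might look like the following sketch, which builds the ffmpeg call; the frame naming pattern and frame rate are assumptions, and prores_ks with profile 3 is one of ffmpeg's ProRes encoders (the 422 HQ flavour):

```python
def prores_cmd(pattern, fps, out):
    """Build an ffmpeg call that compiles an image sequence into ProRes.
    prores_ks profile 3 is ProRes 422 HQ; yuv422p10le keeps 10-bit 4:2:2,
    which is what Resolve prefers to ingest."""
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "prores_ks", "-profile:v", "3",
            "-pix_fmt", "yuv422p10le", out]


# Frames named frame0001.png, frame0002.png, ... (hypothetical names);
# run the list with subprocess.run(cmd, check=True).
print(" ".join(prores_cmd("frame%04d.png", 24, "timelapse.mov")))
```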

Blender is first and foremost a 3D modelling and animation program. However, in recent years it also became a more and more powerful video editor and VFX pipeline, and it can be used to turn any bitmap sequence into various video formats. Once an image sequence has been imported, it can be modified (scaled, rotated, color corrected, composited with a 3D scene, etc) using nodes as shown in this screenshot:

For exporting I chose AVI RAW since it gave the best quality and could be converted to ProRes for Resolve, again without any loss in quality. It might be possible to export directly to ProRes or to use ffmpeg under the hood but I haven't explored the export and encoder settings too deeply yet.

Long Timelapse Processing Techniques

The biggest problems with long-term timelapses in the outdoors (i.e. outside a controlled environment) are changing lighting conditions and that it's basically impossible to get the camera set up 100% exactly the same way every time: the tripod will be positioned slightly differently, a zoom lens will make the focal length setting inaccurate, pointing the camera at the same spot will still be a millimeter or two off ... Luckily, both issues can be dealt with fairly successfully in software as described below.

Stabilizing and Aligning Photos

To make the transition from one frame to the next look as smooth as possible, non-moving objects really shouldn't move or jump around between frames. Therefore, it is necessary to either align all the photos before they are compiled into a sequence (or video), or to stabilize the final video. There are at least three very different ways of achieving this, and they differ greatly in terms of time and effort, and in the quality of the end result.

1) Align photos automatically using Hugin. Hugin is an HDR and panorama toolkit but it can also be used to align a sequence of photos without exposure bracketing or stitching them into a panorama. There are several algorithms to choose from when aligning photos (I usually use "Position(y, p, r)" but its results are not perfect). The algorithm will look at all the photos that are next to each other in the sequence and find common control points in the picture that it uses to align (translate, rotate and scale) them. Control points can also be manually added, removed and shifted to improve the alignment. In terms of speed this is the easiest and fastest approach. I usually roughly follow this tutorial - something you will need because it's a complex piece of software!

2) Align photos by hand in GIMP (or Photoshop): there are various plugins for those image editing programs that allow a user to specify two common control points in two pictures and the plugin will then do the alignment. The results are near-perfect but it will take a long time because you have to do it for each frame in your sequence (what's that - 60, 90, 200?). A professional suite like Adobe's probably contains automatic tools similar to Hugin as well.

3) Stabilize the final video: all professional editing and/or VFX programs such as Resolve, Hitfilm, After Effects and also Blender have built-in stabilizers. I haven't tested this method yet but because they work on a frame-by-frame basis they should be able to stabilize the footage very well. However, as described in the next step, I like to blend (or blur) my frames which will definitely make stabilizing the video more inaccurate so doing it before the video is compiled seems more robust to me.
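Approach 1 can also be scripted: Hugin ships a command-line helper, align_image_stack, that does the same alignment without the GUI. A sketch that builds the call (as I understand the flags, -a sets the prefix for the aligned output copies and -C crops them to the common overlap):

```python
def align_cmd(frames, prefix="aligned_"):
    """Build an align_image_stack (Hugin tools) call for a frame sequence.
    -a writes aligned copies named <prefix>0000.tif, <prefix>0001.tif, ...;
    -C crops all outputs to the area shared by every frame."""
    return ["align_image_stack", "-a", prefix, "-C", *frames]


# Hypothetical frame names; run with subprocess.run(cmd, check=True).
print(" ".join(align_cmd(["f001.tif", "f002.tif", "f003.tif"])))
```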

Blending Photos

As shown in some of the examples from other photographers in the first timelapse article, we often simply blend images or videos together to make for a smoother (and longer) final product. This can either be done between sequences (say you show a locked down timelapse of autumn, then blend it into another timelapse of the same spot in winter) or between every single frame. Using the free toolkit ImageMagick, this can be done with one command:

convert frame_a.jpg frame_b.jpg -evaluate-sequence mean frame_ab.jpg

Ideally, your original source files would be organized to have odd sequence numbers (frame001, frame003, ...) and the generated blended images will fill the gaps (frame002, frame004, ...). This way you'll end up with all the frames for the timelapse video, now much smoother because even though lighting conditions and your camera setup change dramatically between each frame, there is now a frame in between that combines both conditions and makes it look much more pleasant on the eyes.
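Scripted over a whole sequence, the blending step becomes one convert call per gap. A sketch following the odd/even numbering described above (filenames are hypothetical):

```python
def blend_cmds(n_frames):
    """Originals sit at odd numbers (frame001, frame003, ...); build one
    ImageMagick convert call per even gap, averaging its two neighbours."""
    cmds = []
    for i in range(2, 2 * n_frames - 1, 2):  # even slots between originals
        a, b, out = (f"frame{j:03d}.jpg" for j in (i - 1, i + 1, i))
        cmds.append(["convert", a, b, "-evaluate-sequence", "mean", out])
    return cmds


# 3 originals -> 2 blended in-between frames; run each list with
# subprocess.run(cmd, check=True).
for cmd in blend_cmds(3):
    print(" ".join(cmd))
```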

Conclusion

As you can see, there is quite a difference between the blended and the original image sequence - apart from the speed that it loops at of course since one has twice the frame-count of the other. Note that this is not a perfect alignment and also that I haven't done any RAW processing yet (these are simply out-of-camera JPEGs) so for a final product I would first process all the frames so that they are similar in exposure and saturation. Left is original (rough and unpleasant), right is blended (smoother):
