Outdoor Photography and Videography

Tobi Wulff Photography


Experience shooting TWALK 2015

Earlier this year I shot a video at TWALK 2015, the Canterbury University Tramping Club's annual 24-hour orienteering event. Bringing my camera (an Olympus E-M5) and an older GoPro Hero along was a last-minute decision, so I made no shooting plans and had no ambitions to produce a polished, finished product.


If you want to read more about the event, check out the CUTC TWALK website.

I ran in a group of six, and because we could share the tasks of navigating and searching for the hidden checkpoints, I had enough time to run ahead and film my team as well as many of the other competitors. The busiest and most crowded time was, of course, at the start; the field thinned out as the race went on.

After about 3-4 hours we finished the first leg, had a quick rest and a decent amount to eat from the free 24-hour warm buffet, then headed out again for the evening/night leg. Unfortunately, none of my cameras were particularly good in low light, so as soon as the sun disappeared that was it for filming, apart from a few shots of headlamps. An A7S would have been amazing, but it also started to rain and get really cold while we were exposed on the tops, so anything other than a GoPro or a really well weather-sealed camera could have been damaged.

Equipment

As mentioned above, the main camera was an Olympus OM-D E-M5 (de) with an old manual Olympus OM ZUIKO 28mm f2.8 lens. The reason I used an OM lens rather than a more modern zoom was the great manual focus ring, which is absolutely necessary for rack focus shots. It is also great value: almost as sharp as my best MFT prime at a quarter of the price. Finally, 28mm on the MFT sensor translates to a 56mm full-frame equivalent or "normal lens", a really natural-looking focal length for documentary-style shooting. To rig the camera up for better stability and handling on the indoor shots I used various parts from SmallRig, such as the Quick Release Handle with NATO Rail (de) and aluminium rods.

The second camera, used whenever I needed a wider angle or more mobility, was the original GoPro Hero (de), which I have since upgraded to a Hero 3 (that would of course have delivered much better image quality for this video, particularly in the low-light shots). It was mounted on a cheap monopod to keep it steady and enable smooth panning shots. While this cost a little mobility (it is still a very lightweight kit), it improved image quality dramatically by making very stable, steady shots possible, and I highly recommend a light monopod to anyone shooting hand-held with a GoPro. It also keeps the horizon level almost automatically, because the heavier monopod leg always points down; keeping a bare GoPro level in your hand is almost impossible. I specifically wanted to avoid the typical point-of-view helmet/chest-mounted GoPro shots, so I tried to use it like a traditional film camera, getting close to the ground or high above people's heads as much as possible. The opening shot of the competition, when everyone starts running, is also the GoPro on the extended monopod.

For audio I used the Zoom H5 Portable Recorder (de) with the stereo microphone module for sound effects and room tone of the woolshed.

Editing and Grading

For editing and colour grading on Linux I used Kdenlive. While it worked and all the basic editing features and effects are available for free, it is not a hassle-free experience, especially once the project grows past a certain size. I also started to get random shifting of some clips on the timeline, or it would suddenly use a different version of the same clip. For colour grading I had to go into every single clip and adjust curves and levels to get the right look. While colour correction and grading should always be done on every clip individually, Kdenlive unfortunately requires a lot of repetitive steps to get there, whereas other programs make it much easier and faster (it is one middle-mouse-button click in Resolve). In the end I got the project done without any major hiccups, and I recommend Kdenlive as a first, free editing program if you are on Linux. If you are serious about your projects, though, I would step up to something like DaVinci Resolve, which is much more stable, more feature-rich and also free.

Please head over to Google+ or Twitter @tobiaswulff (see links on top of the page) to discuss this article or any of my photography and videography work. My Flickr, 500px and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

Smart Watches for Photographers

About a decade ago smartphones entered the market and gave us photographers and videographers many new tools to make our daily lives easier (has that ever actually worked out?). Examples include notes for screenwriting or location scouting, ephemeris apps to figure out where the sun and moon will be at a certain time, remote controls for cameras and GoPros, and of course the incredibly fast turnaround from shooting something at professional quality to getting it onto social media. Can you even remember that all of this previously had to be done on big, heavy laptops or even on paper?

From there, the next step is smartwatches. Not only do they add a second display to your mobile computing setup, they also enable new functionality for automation, remote control and reminders that wasn't possible before. And we all know it won't stop there; the next "smart" device category is already just around the corner (whether it will be Google Glass or something very different).

In this article I want to show what, in my opinion, a smartwatch like the Pebble Steel (de) or the new Pebble Time Round (de) can do for us photo and video people. Some of these features might not apply to Android Wear devices or the Apple Watch, because Pebble takes its own, different approach to the whole smartwatch thing. The Pebble watches are waterproof, which means they can be used in bad weather or wet conditions where the phone should stay safely in a pack or dry bag.

Watchfaces

Custom watchfaces are essentially very simple "apps" whose only task is to display the time and some additional information such as the weather. They are not very interactive, i.e. you cannot easily switch modes or go into submenus to trigger actions.

For photography my favourite watchfaces are 24-hour clocks that represent day and night graphically on the face. The most popular and fully functional ones in the Pebble store are "Sunset Watch", "Twilight-Clock" and "SunTime Pro" - all free. "Sunset Watch" has the cleanest face but takes a few seconds to retrieve or calculate sunrise and sunset times every time you switch to it. "SunTime Pro" displays the most information on the screen, including the inclination of the sun and battery and Bluetooth status; however, it does not show the current phase of the moon. "Twilight-Clock" sits somewhere in between and is the watchface I currently use when I want sunlight information: not only does it show when the sun rises and sets, it also graphically displays when the different twilight periods (civil, nautical, astronomical) occur and how long they last.

Some watchfaces, and some of the more complicated apps that act mostly like watchfaces with extra functionality (the most popular being Glance), can display upcoming calendar events. If you are shooting an event and have to know what is happening every hour and where you have to be, a quick look at your watch gives you all this information, and notifications (which make the watch vibrate) ensure you don't miss anything important.

Pebble + Tasker

If the "precooked" watchfaces and apps described above are not enough and you're not afraid of either searching for existing profiles or creating your own (essentially very simple programming using a graphical interface) then Tasker can turn your Android+Pebble into a gadget of truly limitless possibilities.

Use PebbleTasker to take photos remotely: PebbleTasker is a Pebble app that can directly run Tasker tasks on the phone. These tasks can be anything you want, and they can contain one or many actions (change volume or screen brightness, send a text, play music, lock the screen, etc). Used with a task that takes a photo (I'm sure video options exist as well), your smartwatch becomes something like a GoPro remote: you can set the phone up in one place, then trigger it from up to about 10m away. Of course, Tasker can also implement various self-timers so that the photo is taken 2, 5 or 12 seconds after pressing the button.

Use AutoPebble and Tasker's geolocation features to bring location-aware menus onto your watch: Tasker can trigger tasks when the phone is in a certain location (determined by cell tower, Wifi network or GPS). AutoPebble can be used to push selection menus or lists of options to the watch. To do this, the Tasker task first has to open the AutoPebble app on the watch, then show a list of items. Each item in the list can be programmed to send a code back to the phone on a normal and on a long press. Each code can then in turn trigger another Tasker task via an Event Profile that listens for that code.

Say you want to record the ideal time for a photograph at a certain location while out scouting: when you get there, Tasker will vibrate the watch and display a list of actions, one of them being "record time and orientation". When the button for this item is pressed, Tasker can create a note (in Google Docs, Evernote or any other note-taking app with Tasker integration) with the current time and the orientation from the compass in the Pebble or the phone (I'm not 100% sure the Pebble's compass is accessible from Tasker, so the phone's GPS journey direction might have to suffice). A similar setup would be useful for film photographers: present a list of aperture values and then, once one has been selected, a list of shutter speeds, to record data about photos taken on a film camera without having to take the phone out, and with automatic geotagging.

Highly functional Watchfaces

Finally, the two concepts of simple watchfaces and complex apps can be tied together by apps that mostly act like a watchface but can show additional information or integrate other apps via direct button actions or menus. My current main app, on the watch 90% of the time, is Glance. It displays time, weather and missed texts/calls in a clean, nice-looking watchface, and the buttons bring up a list of notifications, past text messages, appointments and a PebbleTasker page for sending commands to the phone (as described in the previous sections).

Conclusion

The possibilities are endless and my examples are only a few of the scenarios where the phone+watch combination would come in handy. Since I've just started using a Pebble, I'm sure I'll discover many more use cases in the future, some really useful, some more gimmicky. I'd be very interested in what other photographers have come up with.


New Olympus OM-D Firmware (4.0)

Olympus has just released firmware version 4.0 for the OM-D E-M1 (de) and version 2.0 for the OM-D E-M5 Mark II (de). I haven't seen an announcement on their website yet but the updater software can already download and install both camera and lens updates. One tip that took me a few tries to figure out: when connecting the camera to the computer, select "Storage" from the camera screen, otherwise the Olympus updater software won't be able to see the device.

It's a free upgrade and brings a lot of exciting new features to both of those cameras. Unfortunately for me as an E-M1-only owner, some of those are for the E-M5II only. Before upgrading keep in mind that the upgrade process will wipe all your settings from the camera!

Update: After having played with the new firmware for a day now, I've updated the sections below with some observations and new discoveries (in bold). I will also try to put a sort of E-M1 guide page together with useful settings and little quirks.

Electronic Shutter

The E-M1 is finally getting an electronic shutter mode, which will be great for situations where the loud mechanical shutter is not appropriate. I don't know who made that decision, but the heart symbol for the silent shutter (next to the familiar rhombus for anti-shock mode) is kind of cute. However, be aware of the limitations: rolling shutter (if panning while taking a shot) is worse, and flickering light sources such as fluorescent lights or projector lamps can make photos shot with the electronic shutter nearly unusable.

Update: The electronic shutter setting is in the second camera menu (menu button, then on the second page down). Select Anti-Shock/Silent, then pick a silent delay (0 seconds for no delay but no mechanical shutter), then half-press the shutter button to go back to photo shooting mode and select drive mode Single Silent (heart). This is a lot of setup but once it's done you can quickly switch between mechanical and silent shutter using the drive/HDR button on the top left of the camera body. I'm looking forward to using the electronic shutter in my timelapses to go a bit easier on the mechanical shutter mechanism.

Focus Stacking and Bracketing

The biggest feature additions are the new modes for focus stacking and bracketing. Both do essentially the same thing: they take a series of pictures with identical exposure settings but slightly different focus points. This is particularly useful in macro photography, where the depth of field is usually very small. It works with compatible auto-focus lenses such as Olympus's M.ZUIKO PRO series (de) by automatically shifting the focus point after each photo.

With focus bracketing you end up with all of those photos and can post-process them however you like (similar to Panasonic's new Lytro-like focus-later technology). With focus stacking, the camera does all the magic internally and produces one photo out of 8 individual ones, each with a slightly different focus point. This should result in a macro shot where the whole subject is in focus.

Update: It works - as long as nothing in the frame moves. Focus stacking only works with the electronic shutter, and it's so quick that it can easily be done hand-held. I don't have a dedicated macro lens, so I couldn't really shoot any meaningful examples, but it turns a long focal length at f2.8 into "everything is in focus", which is pretty cool. When focus stacking is selected, it also keeps all 5 individual files on the card so you can post-process them later. Some of the small but great improvements that I didn't mention in the original article are:

  • the menu system remembers where you left off last time so you can quickly play with settings without having to go through pages and pages to find it again,
  • not only are there more colours for focus peaking (red or yellow is so much better than black or white!) but the intensity can also be changed,
  • histogram, level gauge and over/under-exposure indicators can now be displayed at the same time: this is huge, because previously I had to cycle through all the different options with the Info button to first get my camera level and then get the exposure right; you can now configure two different custom modes to cycle through with the Info button and select which elements appear on each screen - the settings are under Menu - Gear D - Info Settings - LV-Info.

Simulated Optical Viewfinder

The S-OVF mode disables some of the "live view" features in the viewfinder, such as boosting the light levels. This means it won't assist the photographer in bad lighting conditions, but on the other hand you'll see exactly what a true optical viewfinder would show, which depends entirely on the currently selected aperture of your lens. White-balance compensation is also turned off for a "truer" image. Most of the time I appreciate the assisting features of the EVF, and I use the histogram to accurately judge my exposure, so I can't see myself using this mode much; still, it's a free new feature that could come in handy in certain situations (e.g. when not using the histogram for some reason).

Update: I probably didn't get it fully right in the paragraph above because I didn't know how optical viewfinders used to work. When S-OVF is selected, exposure compensation is completely disabled so you see pretty much exactly what your eye would see outside the camera. If you want to judge exposure you have to go by the metering number - the histogram doesn't help at all because it only turns what's currently in the EVF into a graph which means it won't change as you alter ISO, aperture or shutter speed (because the OVF doesn't change). To see the photo as it will turn out when you press the shutter you have to do two things: 1) go back into normal EVF mode, and 2) turn off Live View Boost under Menu - Gear D - second page. This is my preferred setting because it gives the least unwanted surprises, and I've mapped the S-OVF to Fn2 so I can change to it if I want a more realistic view.

Video

There are a few upgrades that apply only to video, such as a new picture profile (E-M5II only) and synchronised recording with an Olympus audio recorder. I own neither, so sadly video won't receive any useful improvements for me (I was really hoping for focus peaking during recording, but at least they are adding more colours to choose from for the outlines). Another minor addition is the slate tone generator, which I assume can be assigned to a button. Using it probably looks more professional than snapping your fingers in front of the camera when recording audio with an external recorder.

For the E-M1 there is one more update that is both good and bad news for video: a new framerate. It's great that Olympus has added 24p, but the camera is still missing 60p, which it needs to become a useful sports and documentary video camera (something the rugged, weatherproof body and the in-body stabilisation otherwise make it perfectly suited for). I don't quite understand why Olympus adds features like timecode (and those awful movie effects) before improving on the essentials.

Lenses

The PRO lenses will also receive new firmware adding support for disabling the MF clutch. I usually only use the clutch to switch to true manual focus while shooting video. When shooting photos, I have previously pressed my back-button-focus button only to find it didn't do anything because the clutch was still set to manual focus. This update might help in those situations.

So overall it's a great update and we should keep in mind that not all manufacturers release such improvements for free. However, there are still features missing that I'm sure the camera would be capable of handling. They might arrive in the future with another free upgrade despite the E-M1 Mark II probably not being too far away anymore. I'm optimistic because in this upgrade Olympus has added features to the E-M1 that at first looked like they were for the E-M5II only.


Long-run timelapses across multiple seasons - Part 2

In the first part of my timelapse blog series I wrote about different types and techniques for longer and really long (seasonal) timelapse movies. In this article I want to describe a few specific techniques and tools that are useful (and often mandatory) to finalize those timelapses.

But first I'll lead into this timelapse article with my new video "Mountains and Clouds 2015" that I've shot over the course of about one year in the South Island of New Zealand on various trips.

One thing I wish I had done from the start is shoot all the frames for my timelapses at full resolution and as RAW files. Always keep future uses of your files in mind! Unless I got lucky and a scene was really well exposed, I couldn't correct it as much as I could have when grading from RAW, and with the full resolution the final video would also have looked sharper. I take bets on Twitter (@tobiaswulff) on which scene was the one shot in RAW ;) .

Batch Processing

The first step in processing timelapse RAW frames is to "develop" them from a digital negative into a bitmap image file. When exporting, it is important to choose a format that retains the full bit depth of the original RAW file: the options are TIFF or 16-bit PNG. JPEG should not be used, since it only stores 8 bits of colour depth, which will not fare well in colour correction and grading later.
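To see why the extra bits matter, here is a back-of-the-envelope sketch in Python. The numbers are illustrative only (a real grade is more than a linear stretch): pushing the shadows two stops spreads roughly the bottom quarter of the input range across the whole output range, so the number of distinct input levels you start with decides whether the gradient bands.

```python
# Illustrative arithmetic only: a 2-stop shadow push multiplies pixel
# values by 4, so roughly the bottom quarter of the input range gets
# stretched across the full output range.
def levels_surviving_push(bit_depth, stops):
    """Distinct input levels left to work with after pushing `stops` stops."""
    return 2 ** bit_depth // 2 ** stops

jpeg_levels = levels_surviving_push(8, 2)    # 8-bit JPEG:  64 levels -> visible banding
tiff_levels = levels_surviving_push(16, 2)   # 16-bit TIFF: 16384 levels -> smooth
```

Sixty-four remaining levels in an 8-bit source is where the banding in pushed skies comes from.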

There is another way, though: keeping the files in their RAW format and using video editing software that can deal with RAW footage, such as DaVinci Resolve 12. For Resolve to ingest the RAW files they have to be converted to DNG. My photo management software of choice, digiKam, has a built-in DNG converter, and I believe so do Apple Aperture, Adobe Bridge and/or Lightroom. While working with RAW video is pretty neat, because this is not a recognized video-camera format (such as RED's), the possibilities to adjust the image are limited, and a proper RAW photo program such as Darktable or Lightroom is much better suited for the job. Nevertheless, you are importing 10+ bit image data into the editing/colour grading software, which gives much, much more room for colour and exposure adjustments.

Compile Videos

As described earlier, Resolve can ingest DNG RAW files, so it is possible to edit and colour grade the source material directly. For the timelapse video posted at the beginning of this article, however, I compiled each sequence into a video first, which can be done with one of two Linux programs: Blender or ffmpeg (a CLI tool). Using such "baked" videos makes the edit smoother because the program doesn't have to deal with as much data.

ffmpeg is a command-line tool, so it is easy to batch-process or automate the conversion of image sequences into video files. It supports all the usual video formats and containers, including ProRes, a 10- to 12-bit 4:2:2 or 4:4:4 codec and the preferred format for Resolve. However, I found that the picture didn't quite come out the way I wanted; in particular, darker areas got too dark, so stars in night-time sequences (like the one at the end of my video) almost disappeared. I'm sure this can be adjusted in the codec settings, but for now I have turned to Blender.
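For reference, here is a sketch of the kind of ffmpeg invocation meant above, assembled as an argument list in Python so it is easy to batch over many sequences. The encoder name prores_ks and the profile/pixel-format flags are standard ffmpeg options, but treat the exact values (frame pattern, framerate, profile 3 = HQ) as a starting point rather than my final settings.

```python
# Build an ffmpeg command that turns a numbered TIFF sequence into a
# ProRes HQ .mov suitable for Resolve. Run it with subprocess.run(cmd).
def prores_command(pattern, fps, out):
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,               # e.g. "frame%04d.tif"
        "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
        "-profile:v", "3",           # 3 = ProRes 422 HQ
        "-pix_fmt", "yuv422p10le",   # keep 10-bit 4:2:2 colour
        out,
    ]

cmd = prores_command("frame%04d.tif", 25, "sequence.mov")
```

Wrapping the command in a function makes it trivial to loop over a directory of sequences from a small batch script.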

Blender is first and foremost a 3D modelling and animation program. In recent years, however, it has also become a more and more powerful video editor and VFX pipeline, and it can turn any bitmap sequence into various video formats. Once an image sequence has been imported, it can be modified (scaled, rotated, colour corrected, composited with a 3D scene, etc) using nodes, as shown in this screenshot:

For exporting I chose AVI RAW since it gave the best quality and could be converted to ProRes for Resolve, again without any loss in quality. It might be possible to export directly to ProRes or to use ffmpeg under the hood but I haven't explored the export and encoder settings too deeply yet.

Long Timelapse Processing Techniques

The biggest problems with long-term timelapses in the outdoors (i.e. outside a controlled environment) are changing lighting conditions and that it's basically impossible to get the camera set up 100% exactly the same way every time: the tripod will be positioned slightly differently, a zoom lens will make the focal length setting inaccurate, pointing the camera at the same spot will still be a millimeter or two off ... Luckily, both issues can be dealt with fairly successfully in software as described below.

Stabilizing and Aligning Photos

To make the transition from one frame to the next look as smooth as possible, non-moving objects really shouldn't move or jump around between frames. Therefore it is necessary either to align all the photos before they are compiled into a sequence (or video), or to stabilize the final video. There are at least three very different ways of achieving this, and they differ greatly in time and effort as well as in the quality of the end result.

1) Align photos automatically using Hugin. Hugin is an HDR and panorama toolkit, but it can also align a sequence of photos without exposure bracketing or stitching them into a panorama. There are several algorithms to choose from (I usually use "Position (y, p, r)", though its results are not perfect). The algorithm looks at adjacent photos in the sequence and finds common control points, which it uses to align (translate, rotate and scale) them. Control points can also be manually added, removed and shifted to improve the alignment. In terms of speed this is the easiest and fastest approach. I usually roughly follow this tutorial - something you will need, because it's a complex piece of software!

2) Align photos by hand in GIMP (or Photoshop): various plugins for these image editors let you specify two common control points in two pictures, and the plugin then does the alignment. The results are near-perfect, but it takes a long time because you have to do it for each frame in your sequence (what's that - 60, 90, 200?). A professional suite like Adobe's probably contains automatic tools similar to Hugin as well.

3) Stabilize the final video: all professional editing and/or VFX programs such as Resolve, Hitfilm, After Effects and also Blender have built-in stabilizers. I haven't tested this method yet, but because these stabilizers work frame by frame they should handle the footage very well. However, as described in the next step, I like to blend (or blur) my frames, which will definitely make stabilizing the video less accurate, so doing the alignment before the video is compiled seems more robust to me.
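As a toy illustration of what the control-point approaches above boil down to (a real solver like Hugin's also handles rotation and scale, not just translation), estimating the shift between two frames from matched points can be sketched like this:

```python
# Toy aligner: estimate the translation between two frames as the mean
# offset of matched control points (point i in frame A corresponds to
# point i in frame B). Coordinates are (x, y) pixel positions.
def mean_offset(points_a, points_b):
    dxs = [bx - ax for (ax, _), (bx, _) in zip(points_a, points_b)]
    dys = [by - ay for (_, ay), (_, by) in zip(points_a, points_b)]
    return (sum(dxs) / len(dxs), sum(dys) / len(dys))

# Hypothetical control points picked in two consecutive frames:
frame_a_points = [(100, 50), (400, 60), (250, 300)]
frame_b_points = [(103, 48), (403, 58), (253, 298)]

shift = mean_offset(frame_a_points, frame_b_points)
# Translating frame B by the inverse of `shift` lines it up with frame A.
```

Averaging over several points is what makes the estimate robust to a single badly placed control point.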

Blending Photos

As shown in some of the examples from other photographers in the first timelapse article, we often simply blend images or videos together to make for a smoother (and longer) final product. This can either be done between sequences (say you show a locked down timelapse of autumn, then blend it into another timelapse of the same spot in winter) or between every single frame. Using the free toolkit ImageMagick, this can be done with one command:

convert frame_a.jpg frame_b.jpg -evaluate-sequence mean frame_ab.jpg

Ideally, your original source files are numbered with odd sequence numbers (frame001, frame003, ...) so that the generated blended images can fill the gaps (frame002, frame004, ...). This way you end up with all the frames for the timelapse video, now much smoother: even though lighting conditions and camera setup change dramatically between frames, there is now an in-between frame that combines both conditions and is much easier on the eyes.
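To make the blending step concrete, here is the same mean operation sketched in Python on tiny greyscale "frames" (nested lists standing in for real images), showing what -evaluate-sequence mean computes per pixel:

```python
# Per-pixel mean of two equally sized greyscale frames - the same
# operation ImageMagick's `-evaluate-sequence mean` performs.
def blend_mean(frame_a, frame_b):
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# Two tiny 2x2 "frames" standing in for frame001 and frame003:
frame_001 = [[100, 200], [50, 0]]
frame_003 = [[200, 100], [150, 255]]

frame_002 = blend_mean(frame_001, frame_003)  # the generated in-between frame
```

A small loop over the odd-numbered files, writing each blend to the even-numbered slot, automates the renumbering scheme described above.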

Conclusion

As you can see, there is quite a difference between the blended and the original image sequence - apart from the speed that it loops at of course since one has twice the frame-count of the other. Note that this is not a perfect alignment and also that I haven't done any RAW processing yet (these are simply out-of-camera JPEGs) so for a final product I would first process all the frames so that they are similar in exposure and saturation. Left is original (rough and unpleasant), right is blended (smoother):


DIY Motorised Dolly Slider

Motorised sliders (or dollies) bring motion and a nice parallax effect to timelapse shots. Commercial sliders have the advantage of being well built (hopefully roughly in proportion to the amount of money spent) and easy to set up. On the other hand, rails and cart alone can cost several hundred dollars, and adding motors and a control unit quickly pushes the total past $1000.

A home-made DIY slider can be built for $150-200 in materials plus an Arduino board or other microcontroller, depending on what you already have lying around. I'd like to take mine hiking more often, but it wasn't built with minimal weight in mind, so this is definitely a point where a good commercial carbon-fibre slider and a cart with less metal could come into play one day.

There are many different designs out there but the biggest distinctions between them are:

  • continuous motion vs stepper motion
  • two rails vs monorail
  • motor mounted on end truss vs motor mounted on the cart

Continuous motion is cheaper because a very simple motor can be used and the expensive timing belt can be replaced with a wire pulling the cart. However, this does not work for longer exposures, since the camera has to be absolutely still while the shutter is open, so I opted for a stepper motor right from the start. Keeping a cart stable on a monorail requires more engineering than two rails, but it can cut down on weight and makes it easier to mount on a single tripod via a screw hole in the centre of the rail. I didn't know how to build that, so I went with two rails and wheels on either side to keep the cart stable.
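When sizing a stepper setup, it helps to work out how far the cart travels per (micro)step, and from that how many steps each frame of the timelapse needs. The drivetrain numbers below are hypothetical, not the ones in my build (GT2 belt, 20-tooth pulley, 1.8-degree stepper, 16x microstepping); substitute your own parts:

```python
# Hypothetical drivetrain - replace with your own parts' specs:
steps_per_rev = 200     # 1.8-degree stepper motor
microsteps = 16         # driver microstepping factor
belt_pitch_mm = 2.0     # GT2 timing belt tooth pitch
pulley_teeth = 20       # teeth on the motor pulley

mm_per_rev = belt_pitch_mm * pulley_teeth                 # cart travel per motor revolution
mm_per_step = mm_per_rev / (steps_per_rev * microsteps)   # travel per microstep

# Microsteps to move between frames when sweeping a 1 m rail over 240 frames:
rail_mm = 1000
frames = 240
steps_per_frame = round(rail_mm / mm_per_step / frames)
```

With these numbers each microstep moves the cart 0.0125 mm, fine enough that the motion between exposures is invisible in the final video.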

At first I thought I could keep the cart lighter by mounting the stepper motor at the end of the rails. While that is true, once you add up the weight of the metal cart itself, a camera body and lens, and a ball head, a single stepper motor doesn't make much of a difference any more. Having the motor on the cart has several advantages: everything from the camera to the motor to the control unit is in one place, so you don't need to run cables all over the place, and only half the length of timing belt is required since it doesn't loop around the ends. Here is a link to a good example of a DIY monorail motor-on-cart slider.

Here is a quick test video I shot using the slider. Unfortunately, I bumped it a bit towards the end, but you get the idea. There will be more exciting timelapses in the future that actually use the motion/parallax effect in a meaningful way.

Material List

  • Arduino: $5-20 depending on original vs clone and capabilities - it's easier if it fits a shield
  • Motor driver shield for Arduino: $20
  • Battery pack and switching regulator: $10
  • 12V NEMA-size stepper motor and mount: $23 - alternatively a smaller stepper motor
  • Timing belt and pulleys: $40 - one could probably find much cheaper spare parts elsewhere
  • Aluminium rails: $10-20 - I can't remember exactly
  • Steel or aluminium cart and ball bearings: maybe $20 - I can't remember exactly, and I had the ball bearings already
  • Cheap, small to medium-sized ball head: $20-30 - don't use the really small ones like the Giottos Mini Ball Head if you have anything bigger than a compact point&shoot, because it will wobble a lot and adjusting it will be very hard

    In the photo above - once you look past the rat's nest of wires - you can see the motor driver shield sitting on the Arduino. All connections come out of the shield (they are fed through from the Arduino): the rainbow ribbon cable goes to the rotary encoder, some wires carry LED, ground and 5V signals, and the stepper motor itself is hooked up to the left-hand 4-pin screw terminal. The two "things" encased in plastic in-line with the wires are a fuse and a switching regulator that brings the input voltage (9-12V) down to 5V for the Arduino. A linear regulator like the one on the Arduino would work too but might generate too much heat in a closed-up enclosure.

    Photo above: a cheap but decent-sized ball head that unfortunately wobbles a little but does the job. Since all ball heads (and all decent tripods) use the same screw sizes, you can mount whatever you want, small and cheap or big and fancy. I like that this ball head has a two-way bubble level built in.

    Programming

    On the Arduino platform I use the Adafruit_Motorshield library. To move the stepper motor the minimal distance and as smoothly as possible, I run:

    #include <Wire.h>
    #include <Adafruit_MotorShield.h>

    Adafruit_MotorShield *motor_shield = new Adafruit_MotorShield();
    motor_shield->begin();
    // 200 steps per revolution; the second argument is the shield port (1 or 2)
    Adafruit_StepperMotor *motor = motor_shield->getStepper(200, 2);
    motor->setSpeed(250); // RPM
    motor->step(distance, FORWARD, MICROSTEP);

    I also set up an LCD and a rotary button using the SoftwareSerial and ClickEncoder libraries, respectively. Text can be written to the 16x2 LCD directly over the serial line, and there are some special characters that move the cursor, clear the screen, and so on. The ClickEncoder uses up one of the Arduino's timers, and unfortunately it is the same one used by the MotorShield library, so I can't use both at the same time. This is OK because I only use the rotary encoder to set up the timelapse parameters, and once the slider is moving and the camera is taking pictures I don't want to touch it again anyway. It's basically two separate programs: first the menu/settings, then the timelapse.

    Issues

    I found that the 200 steps per revolution that the stepper motor provides aren't quite enough for super smooth and slow motion: after about 15-20 minutes at one frame every second, the cart will already reach the end of the rails. It is possible that I haven't configured the motor correctly yet, but it is set to micro-steps in the code and as far as I know this is the smallest possible rotation. I use micro-steps instead of normal steps because they provide smoother movement: a normal step would yank the cart and make the heavy camera wobble too much. Micro-stepping also makes sure that the motor is always engaged in case the rails are on an angle, so the cart can't slide back down. To solve the problem of step sizes being too big I might incorporate some model kit plastic gears, but for now I have avoided that since it isn't easy to get everything lined up correctly without making the whole construction incredibly flimsy. There are also stepper motors out there that provide twice or more the number of steps per full revolution, usually through internal gears (see the alternative smaller motor I've listed above).

    Currently I'm running the whole setup off eight AA batteries, specifically Panasonic Eneloop AA Ni-MH Rechargeable Batteries (de). However, since the stepper motor requires 12V and eight rechargeable AAs only provide a maximum of 9.6V, it does have issues climbing an incline steeper than about 20 degrees. On the flat it works great, though, and it lasts for many hours. In the future I might upgrade to a 12V battery or boost the voltage with a converter, maybe using LiPo batteries for their amazing energy density.

    Future Developments

    I've got a little micro-switch that I want to mount at the end of the rails to detect the cart hitting the end and stop the timelapse. I also want to refine the menu system and hopefully improve the software side of driving the motors, i.e. better speed and step control.

    In case I've already exhausted all possibilities regarding the motors, I might actually have to add two differently-sized gears to the system to bring the speed down so that I can take hour-long timelapses with many hundreds of frames.

    And finally, panning while moving sideways would make my timelapses look much more impressive, so adding a second motor is high up on the agenda. However, I'm not sure yet how to fit it between the camera and the top of the ball head (glue it to the quick-release plate?). Alternatively it could live under the ball head, mounted beneath the cart, which would be a very clean-looking solution, but there I'm worried about stability. Either mounting point will give a very different result when the rig is on an incline, and depending on the subject it can work well or look really out of place.

    The most important improvement to be made, however, is getting the slider off the ground. At the moment grass or plants can easily get caught in the wheels, and there are no points to screw in a tripod quick-release plate (the end trusses do hook quite nicely into the top of my Manfrotto BeFree Travel Tripod (de), though). Often a shot can look good at eye level, but having to put the slider all the way down on the ground limits my possibilities, so a good 1/4" screw hole at either end would make it immensely more useful and stable in vegetation.

    Please head over to Google+ or Twitter @tobiaswulff (see links on top of the page) to discuss this article or any of my photography and videography work. My Flickr and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

    Finding Music for your Video Productions

    Google for it! Seems obvious. However, there is an overwhelming amount of stock music out there, and the quality, price, and usability of many of these websites vary widely. Maybe my Google-fu isn't that good, but it took me several months to put together a good list of sites that offer what I need at a reasonable price.

    The first time I became actually aware of great stock music and where to get it from was through Dave Dugdale's Youtube channel. I found the track "Hummer" great for the energetic videos I was producing and consequently put it in the first half of my TWALK video (watch below). Premiumbeat is one of the more pricey sites out there, but their content is all high quality, so there's no weeding through a lot of so-so music. Production value on all tracks is top notch. The big advantage from a licensing point of view is that you can reuse a track in as many free and commercial pieces of work as you want. When buying a track, the user can choose between the full song, loops, or individual samples.

    Audiojungle takes the second place in my ranking. It's got huge variety, and the prices are much lower than on Premiumbeat at $15-20 per track or around $30 for music packs. Because Premiumbeat is curated it feels like it's got its own sound, whereas Audiojungle can be a bit different and refreshing. However, one big downside of Audiojungle is that a track or music pack can only be used in one (free or commercial) piece of work. I'm not an expert and this is definitely not legal advice, but it seems like a song can still be used in multiple videos as long as they belong to the same project, for example the main video, a remixed trailer, and maybe an alternative cut or follow-up video or making of/behind the scenes. In this case, $15-20 for 2-3 videos isn't bad value.

    So if you intend to use a track only once, Audiojungle will be cheaper. If, on the other hand, you want to reuse the same track in the future, the $40 for a Premiumbeat track (edit: I wrongly said $60 in a previous version of this article) can quickly pay for itself.

    Vimeo also has a stock music store. While the prices are very low (one or a few dollars per track), the quality is unfortunately much lower as well, and so far I haven't found anything that I wanted to use in my videos. I'm sure there are great songs, and something on Vimeo might work much better for a particular video than another song from Audiojungle or Premiumbeat, but if it's going to take hours to find, it's not worth the saved money. If you like sorting through a lot of tracks, it is worth a look. Another nice feature is the close integration with the Vimeo video site.

    On the free side of things there are sites like the Free Music Archive, Incompetech and Musopen. Many tracks on these sites sound very different from your usual stock music, so they are definitely worth a look, but again, I find it hard or even impossible to find the right kind of music with good production value for a trailer or a short video. I can imagine they are very good for background music on narratives, especially since you can probably find just about any kind of music on those sites.

    Also check out lists of useful resources on reddit and one of the many, many videography blogs out there. There are a lot more music and sound repositories, both free and paid, that I haven't mentioned yet.

    UPDATE July 2016: Adding a link to www.jukedeck.com because it sounds really interesting and is free (seen on the Fenchel & Janisch Youtube channel).


    Backup strategy for photos and videos on Linux

    This is a continuation of my previous Workflow article, where I linked to Chase Jarvis' excellent videos and blog posts about working with digital media and keeping it safe. I highly recommend watching them if you are interested in a more in-depth look at workflow and backups.

    When I started getting into photography beyond simple point-and-shoot or cellphone snapshots, I realised that leaving all my valuable photos in one location without a decent backup plan would be too risky. I've never had a hard drive suddenly die on me, but I'm sure that one day it will happen - suddenly or, maybe even worse, gradually. It's not that my old snapshots weren't valuable, but RAW photography suddenly involved much more data, as well as more high-quality prints, competitions, and a growing portfolio. Of course, I had done backups to an external hard drive, and that is as far as most people take it. This prevents data loss due to a sudden one-disk failure. However, even my older photos, taken during some of the most memorable and important periods of my life, were only ever backed up to that other drive in the same room. A violent power surge while it is plugged in, a fire, water damage, or theft could easily render all those important files inaccessible.

    Requirements

    The requirements were easy to jot down:

    • Full system protection against a one-disk failure so I don't get stopped dead in my tracks if something happens to my hardware,
    • off-site backups for the most important data so that even my house being swallowed off the surface of the earth doesn't lead to significant data loss,
    • ability to take snapshots of folders or partitions so that I can experiment with files without the risk of corrupting or losing any data (this also helps with taking backups),
    • protection against data corruption and bit flips, which do happen.

    Redundancy

    Multi-terabyte hard disks have become pretty affordable, and if your prized digital possessions are shot with expensive cameras, there's no good reason not to invest another $100-200 to make your data, and ideally your whole system, fully redundant. A RAID1 mirrors all data across two disks, so if one disk suddenly fails the other one can keep running. It is recommended to use disks from different batches - or even models - so that a manufacturing issue does not affect both drives. RAID1 can easily be done in software by the Linux kernel, and there are no complicated algorithms that could lead to issues recovering data further down the road. It is as simple as: both disks have exactly the same information stored on them.
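    As a sketch of what the mirrored setup can look like (device names are placeholders and mkfs will erase whatever is on them - this is an illustration, not a recipe): instead of a kernel md RAID1, btrfs can also mirror data and metadata across two drives itself, which is what enables the self-healing described below.

```shell
# Illustration only: /dev/sdX and /dev/sdY stand in for the two NAS drives.
# Mirror both data (-d) and metadata (-m) across the pair:
mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY
# Either device can be named in the mount command; btrfs finds its twin:
mount /dev/sdX /media/data
```

    The alternative is a classic mdadm RAID1 with a single filesystem on top; the btrfs-native route has the advantage that checksum errors on one drive can be repaired from the other.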

    There are also a few other benefits of RAID1. First, while writes are no faster, reads can be nearly twice as fast because they can be served from both disks. Once the photo or video data has been offloaded from the memory cards, editing software can take advantage of that for a quicker and more fluid workflow. Additionally, when using a file system like btrfs, any data corruption on one disk can be repaired with the data from the other disk. I'll talk about this further down.

    For hardware I use two Western Digital Red 4TB NAS Hard Drives (de). WD has an excellent reputation, and the Red drives are designed for workstations and Network Attached Storage (NAS) systems where disks can run many hours at a time (think heavy editing or transcoding) or even 24/7. For a great large-scale reliability study of current hard drives, check out this Backblaze article about the drives in their data centre - personally, I'd stay away from Seagate.

    Off-site backups

    All your important data must be stored securely off-site. While it is luckily quite unlikely that you fall victim to a house-destroying catastrophe or serious theft, it is still possible, and it is the one point where complete data loss could happen when it is expected the least. I use two identical external drives, and one always lives off-site in a safe location. I swap them every one to two weeks, depending on the flow of new photos, videos, and editing files.

    For my big backup disks I use the more affordable Western Digital Green 3TB Hard Drives (de) which are not designed to run for long periods of time so they shouldn't be used in workstations or servers. For backups they are ideal because they are only spun up every other week and it means cheaper storage: all of my data with room to spare for at least the next half year for under $100. To attach the backup disk to my computer I use a cheap and fast Sabrent USB 3.0 to SATA External Hard Drive Docking Station, then rsync to copy new and changed data from my workstation to the disk.

    Since I don't really need any of my older, smaller drives for my workstation (if I run out of space it is easier to buy new big multi-TB disks rather than reuse old sub-TB ones and assemble them into RAID0s and RAID1s), I'm also planning to copy some of my finished projects and previous years of photography to those smaller drives and to keep them in yet another off-site location as an archive. My only concern is that after a year of not spinning them up and running a btrfs scrub, data corruption might start to get noticeable.

    Data integrity

    The Linux file system btrfs provides multiple features that come in very handy for keeping data safe and running backups. For starters, btrfs scrub start /media/data will start a check of all my multimedia files and fix any issues found. Btrfs keeps checksums in its metadata, separate from the actual data, so any damage to the data can be detected, and in a RAID1 system it can then be repaired with the intact copy from the other disk, which shouldn't show the same (random) corruption.

    The next great feature is snapshots: btrfs subvolume snapshot photography photography-backup creates a snapshot of all my photos and keeps changes that happen from now on separate (through a mechanism called copy-on-write). So if I accidentally delete a file in my photography/ folder I can get it back from the backup folder, yet it doesn't use up any additional space on the drive if files stay the same. This can easily be automated using cronjobs to create and rotate snapshots on a daily or weekly basis. It is also handy to create a snapshot before copying files off to an external hard disk so that I can keep working on my photos and videos while the backup is running in the background.
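    Automating the snapshots and scrubs with cron might look like the following (paths and schedule are just my illustration; note that % has to be escaped in crontabs):

```shell
# Example crontab entries (illustrative paths).
# 02:30 daily: read-only snapshot of the photography subvolume, named by date
30 2 * * * btrfs subvolume snapshot -r /media/data/photography /media/data/snapshots/photography-$(date +\%F)
# 03:00 on the 1st of each month: scrub the whole filesystem
0 3 1 * * btrfs scrub start /media/data
```

    A small companion script can then delete snapshots older than, say, 30 days to keep the rotation bounded.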

    Other, smaller backups

    I also have rsnapshot running every hour to take backups of the XMP sidecar files that get created by Darktable, my RAW photo processing application. Since Darktable automatically saves any changes made to a photograph as they happen, it is quite possible to accidentally delete the whole editing history (the RAW file itself is never modified, by the way). I have to say that this hasn't happened to me even once in almost two years of using the program. Still, keeping the small XMP files around for up to a month is an easy, inexpensive way to give me peace of mind.

    Conclusion

    Just today, Caleb Pike released a video on his workflow and backup strategy as well. While his approach is fairly different owing to different tools and computer expertise, the main principles and requirements stay the same, and I recommend watching his take on the topic, especially if you found mine too technical or Unix-centric.


    Long-run timelapses across multiple seasons - Part 1

    Some links to products in this blog post are Amazon Affiliate links that earn me a few cents or dollars if a reader buys any product on Amazon through this link. The price of the product does not increase so it is a free way to support this site by using the links provided. The main product link goes to Amazon.com and the "(de)" leads to Amazon.de.

    This is the first part of a longer series of blog posts about timelapses. I have started planning for and taking long-run timelapses that span many weeks and months, and I want to talk about how these ideas and visions can be accomplished in a reasonably efficient workflow by an amateur photographer. I say "reasonably" because processing timelapses from RAW files and working on such long running sequences will always involve a lot of work.

    Who, and why

    Apart from an article on Photo Sentinel, there aren't many interesting articles or howtos available. I highly recommend reading that article if you're interested in timelapses, because it showcases different techniques and links to some great videos in each category. However, it belongs to a company that sells specialised long-term timelapse equipment which does not really fit the kind of subjects I'm shooting. On Youtube and Vimeo there are only a couple of videos that portray certain subjects in nature over the course of many months, but there are some amazing and award-winning short videos and films that I will link to further down.

    The most impressive executions of this sort of timelapse - the aforementioned howto talks about them as well - are several features by the BBC such as The British Year and of course Planet Earth. The team that shot the timelapses for The British Year talks at length about planning, shooting, editing and various tips in a blog post. I highly recommend Chad's blog and all the content on his website, as it is a wealth of timelapse stories, workflow tips, and kit reviews.

    How-to

    The obvious but most time-intensive way to shoot a timelapse across multiple seasons is to take individual photos of the same subject under similar lighting conditions and from the same spot over a long, long time. Another technique is to take multiple "normal" timelapses, that is, sequences of an hour or a few hours, and then blend them together, as in the Youtube video "4 Seasons 1 Tree". Unfortunately, the blending is very obvious, and there also isn't much movement or change within the individual sequences themselves. On the other hand, there is no flickering due to abrupt changes in lighting or weather. The technique could be enhanced with some masking and selective blending so that some areas of the image change before others, which can also be seen in the video as the ground changes before the tree does.

    The easiest way to accomplish a long running timelapse is to have a camera that can be left in a fixed spot and orientation. The photo above is actually a blend of two individual frames, one with different lighting and more leaves on the tree. It shows that blending and aligning photos on the computer can produce a very smooth result even if the original photos are totally unaligned and taken in completely different conditions. In amateur nature photography, it usually isn't an option to have an absolutely fixed camera spot because the locations are too exposed to the elements. Even in urban environments you wouldn't leave your camera or tripod anywhere except inside your own house or apartment - and then you wouldn't be able to take it somewhere else.

    Therefore - unfortunately - we have to re-set up the camera and point it at the same spot every single time. This gets very complicated if movement of the camera is involved, but even with a static shooting position there will be slight variations due to uneven ground, zoom lens variations (zooms are not "clicked" after all) and inaccuracies when pointing the camera at the subject. A very sturdy tripod is important, but because I usually travel on foot or bike and also take my equipment on hikes into the mountains, I couldn't just go for the sturdiest one out there. So my tripod is the Manfrotto BeFree Compact Aluminum Travel Tripod, which I love because it fits into a normal day pack, yet it extends to eye level and is reasonably rigid. However, pushing down on it will bend the legs in their joints, so it is tricky to get it set up 100% exactly the same way every time.

    So there will be variations in tripod position, tripod height, camera attitude and focal length. Luckily, those issues can be resolved almost completely in post-production, and I will talk about methods and tools to align and blend photos in the next blog post in this series. Apart from dedicated software and plugins for Lightroom, there are also a bunch of free tools available that do a very good or even perfect job at the expense of a maybe not so polished user interface or some efficiency.

    Ongoing work and ongoing articles

    Something like the video "Fall" from NYC Central Park is probably the closest inspiration to what I am planning to achieve. I didn't know the video when I started my project. There is also a year-long timelapse from the Canadian Rocky Mountains which employs some really nice blending and obviously beautiful outdoor scenery.

    As I shoot individual frames and sequences for my own long-run timelapse video, I will add more parts to this series talking about specific shooting tips and releasing some more snippets of the ongoing work. Towards the end I'm sure it will all become fairly editing and video post-production heavy.


    Photoshoot Experience: KiwiPyCon Conference

    A few weeks ago I shot my first event which was KiwiPyCon 2015 at the university campus in Christchurch, New Zealand. KiwiPyCon is an annual programming and software development conference organised by the New Zealand Python User Group (NZPUG). It consisted of talks and tutorials on Friday, and talks (plus many morning tea, lunch, and afternoon tea breaks) on Saturday and Sunday. All photos can be found on the Flickr page I created for the event.

    I came to the event with two cameras and two lenses: my Olympus OM-D E-M1 (de) with the Olympus 12-40mm f/2.8 (de), and a borrowed Panasonic G5 with my Olympus 45mm f/1.8 (de) prime. The lecture theatres were fairly dark so I expected to use the prime lens quite often during the talks.

    The first challenge, which much to my frustration didn't even have anything to do with photography itself, was to get photos up onto the Internet as fast as possible, ideally while a talk or conference segment was happening. New photos and updates could be announced on the official @NZPUG Twitter feed. I tried the Olympus OI Share app on my phone because I figured it would be easiest to select files from my camera via Wifi and share them directly to a Twitter app. This didn't work at all: loading photos from the camera was very slow, and my phone often switched back to the conference Wifi, losing connectivity with my camera. When I finally managed to load a full photo from the camera, I had problems sharing it to Twitter due to connection timeouts, probably caused by the slow university Wifi or Internet on the "visitor" network we were assigned. After trying for half an hour while all sorts of activity with people streaming in, signing up, chatting and getting ready for talks was happening around me, I gave up and tried using a tablet which can read SD cards from the camera, and the Android Flickr app.

    The Flickr app wasn't working either: I couldn't use my existing account (Yahoo said something about an inactive account even though it works fine from a PC), and creating a new account and logging in also failed. So while reading SD cards directly and uploading to Flickr looked like the way to go, I wouldn't be able to use my tablet (with its long battery life) and eventually had to resort to my old trusty Thinkpad laptop (with its 40-minute battery life). Finally, after deciding to use a proper computer, everything worked as expected: I pulled the photos from the SD card to the laptop, put them into a folder for each day and camera, and uploaded them to Flickr via the website. No apps, no camera Wifi, no sharing or APIs: just memory cards and HTTP. I also decided not to shoot RAW and downsized the images to around 5 Megapixels in camera so that the slow and sometimes unreliable visitor Wifi network would be able to handle all the uploads in a timely manner.

    One big problem I ran into with the silent electronic shutter of the Panasonic G5 was with the fluorescent lights: while not a problem with the normal shutter, the slower readout (technically the readout is the same, but the exposure across the whole frame happens much faster with a mechanical shutter) caused horizontal line artefacts to appear in the final photo:

    I'm not sure why it wasn't a problem with some of the earlier portraits of presenters, since I had used the electronic shutter quite a lot on the first day, but after this experience I quickly changed back to the mechanical shutter. While a little more annoying for the audience, as an event photographer you aren't completely invisible anyway while running around the stage, and luckily neither the E-M1 nor the G5 has a loud shutter.

    A similar technical issue was around camera settings, specifically white balance. Being primarily a landscape photographer, I like to use manual mode with manual ISO and manual white balance. However, I quickly learned to use a more automatic mode like aperture priority, and set ISO and WB to auto. There are still a few photos where the white balance is way off, all taken before I put the camera into a more automatic mode. After I learned and switched, most photos came out really well. The E-M1, despite being a m4/3 camera and therefore not a very good low-light performer, looks great up to ISO 1600, and together with the f/1.8 prime it was enough to capture action and speakers unless they waved their hands around really furiously. In that case I just had to wait a few seconds for them to calm down and then quickly get the shots.

    Over the three days I extensively used my Peak Design camera clip (de) I talked about last week, attached to my belt, to carry and work with two cameras, and during breaks to grab some food and have conversations without holding a camera in my hands all the time. It performed flawlessly and kept even the fairly heavy (for a m4/3 camera) E-M1 + 12-40mm zoom lens secure on my hip. The built-in lock functionality was good to have because, worn on a belt, the clip opens to the side, so in theory bumping the spring-loaded unlock button and nudging the camera could drop it from hip height onto a potentially very hard floor.

    One of the bigger challenges was to capture the moment during prize-givings when a book, voucher or other gadget was handed to the lucky winner. Because the lecture theatre was fairly big and the prize-giving was happening so fast I wasn't always close enough or didn't have enough time to get a good focus. Ideally as a photographer you would want more time and a bit of a pose, and the person bringing the prize to the winner shouldn't stand between the camera and the person receiving the prize. I'm not sure what would work better in the future apart from having more than one photographer so that we can spread out in the theatre.

    Please head over to Google+ or Twitter @tobiaswulff to discuss this article and let me know how you handle event shoots like these. I ran into a few problems and challenges so any tips for the future are greatly appreciated. My Flickr and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

    Peak Design Camera Clips and Straps for the Outdoors

    Ever since I started using my first Peak Design camera accessory, I have been a big fan, and the system gets better and more useful every time I take it to an event or on a trip outdoors. It keeps your hands free and your gear secure.

    Obvious disclaimer that I'm not affiliated with Peak Design (I wish my blog was that successful) - I just love those products ever since I got them, and a few people have asked me about them so I decided to write it all up. Some links to products in this blog post are Amazon Affiliate links that earn me a few cents or dollars if a reader buys any product on Amazon through this link. The price of the product does not increase so it is a free way to support this site by using the links provided. The main product link goes to Amazon.com and the "(de)" leads to Amazon.de.


    Peak Design started making the Capture Camera Clip (de) pictured above in 2010 and has been expanding their product range with new, exciting products ever since. This is a great article on SmugMug that goes over the idea, the history, the people and the Kickstarter campaign behind the clip. I used to have my old camera, an Olympus OM-D E-M5 with kit zoom lens, strapped onto the shoulder strap of my backpack using a Maxpedition Janus. This worked reasonably well as long as I didn't have to jump off boulders or duck under trees that had fallen over onto the track. Given the wrong angle or too much force, the camera and lens could fall out of the bungee cord strap that held it in place. For a small compact camera or a light camera with a fairly long lens I still think that this is a very good and cheap system.

    I saw a friend use the Capture Camera Clip and decided it would fit my use case perfectly. It became a necessity when I upgraded to an Olympus OM-D E-M1, since that camera is bulkier and its zoom lens no longer fit into the strap I had been using. One great feature is that the plate attached to the camera's tripod mount slides into the clip in any of the four 90-degree orientations, so you can have the camera facing down, sideways or even up, for instance to change lenses. One thing I don't like so much is that you need a hex key to fasten and loosen the plate. It can also be done by rotating the whole plate, which is what I do most of the time, but after a few times it is hard on the skin of your fingers.

    I went for the standard version, which is constructed from aluminium and "glass-reinforced nylon". So far it seems built to last, although I can see how the all-metal Pro version could have the edge after many years of abuse. The plate that screws onto the camera, as well as the parts that hold the plate in place, are mostly metal, so I'm confident it will always hold the camera securely. The main weak points are the nylon screw holes on either side, which could conceivably snap one day if I clamp the clip too tightly onto a thick strap or belt. The Pro version also comes with a plate that doubles as a quick release plate for Arca-style and RC2 tripod heads.

    The Capture clip works great on a backpack shoulder strap (sliding the camera in from above), on a belt at events (sliding the camera in sideways), or the shoulder strap of a messenger bag (works either from the top or the side).

    My next purchases from Peak Design were the Cuff (de) wrist strap, to secure my camera to my arm or to my backpack via a carabiner, and the Anchor Links (de), to convert my original Olympus neck strap into a more useful detachable strap. What's great about a lot (though not necessarily all) of their accessories is that you get quite a few spares: I've attached one anchor link to the PD plate and one to the right-hand side of my camera, and I still have four (!) anchor links left for when the first two wear out. I'm not sure how quickly that will happen, but I'm sure the supply will last me a long time.

    Another bonus of the clip/unclip system, as opposed to a fixed neck strap or something like a Blackrapid shoulder strap, is that I can quickly switch between wrist cuff, safety leash and shoulder strap. When I use my original Olympus strap as a shoulder strap, with anchor links on the bottom and right-hand side of the camera, the camera sits much more securely and comfortably than with a Blackrapid (clone), where the strap only attaches to the bottom of the camera. The only disadvantage is that you lose the sliding action up and down the strap, but I haven't found that to be a problem yet. I've also heard great things about the Slide shoulder strap; however, I'm quite happy with the original Olympus strap. From what I can see, the Peak Design Slide mainly offers much quicker length adjustments, and I can't say I need to change the length of my neck or shoulder strap on the fly very often.

    Now they are diving into the realm of camera bags with the Everyday Messenger Bag, which was designed together with New Zealand's Mr. HDR Landscapes, Trey Ratcliff. While I have no need for more shoulder bags or backpacks at the moment, I'll keep an eye on their developments because I'm sure whatever comes next will be exciting, functional and very well made.
