Outdoor Photography and Videography

Tobi Wulff Photography


Challenges Adjusting Time in JPGs and RAWs

I recently found myself in a situation where I had to adjust the date and time on all my photos from overseas, JPEGs and RAWs alike. Lesson learned: it is much easier to change the setting on the camera when you are switching time zones (if you remember to).


It is fairly easy to change EXIF and IPTC metadata in JPEGs because pretty much all the tools support it. Apart from writing values directly, most of the tools I looked at (and luckily there are many options) also allow for automatic and intelligent date/time adjustments, so you only have to specify the offset in minutes, hours or whatever unit you require, and they will shift the date and time accordingly. In the end this means the choice of a specific program comes down to personal preference. On Linux, there are several options, both for the CLI and as a GUI.

In digikam, the time adjustment can be found in the batch processing editor. To get there, select the photos you want to adjust, then hit B. You can select the individual destinations for the adjusted times; I usually go with all the EXIF tags and the digikam timestamp (IPTC wasn't set when the files came out of the camera). After the adjustments have been made to the files, it is important to re-read the photos back into digikam: select the photos again, then go to the Item menu and click "Reread Metadata".

On the CLI, the job is much easier in my opinion (as is often the case). To get a console for the album you want to edit, right-click on the album in the album view (sorry, this technique can't work when you want to edit photos based on tags or other filters; in that case you have to use the GUI method described above) and select "Open in Terminal". Now we can use (if installed) several programs to fix the date/time:

- exiftool: Its date/time adjustment syntax is more verbose than the tools below, so for JPEGs I would not use it (it does become essential for RAWs, though - see below)
- exiv2: Can read and write all the tags in JPEGs (and other formats, but not all RAW formats, see below) and has a handy date/time adjust function: "exiv2 ad -a -10 *.JPG" will subtract 10 hours from the EXIF timestamps. It can also be used to rename the files according to the timestamp ("exiv2 mv") but I like to use digikam for that (it can make filenames unique automatically if necessary).
- jhead: Functionality around timestamps and renaming is similar to exiv2 so it comes down to personal taste and specific use cases: "jhead -ta-10:00 *.JPG" will subtract 10 hours.
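To batch-apply one offset without remembering each tool's syntax, the two adjust commands above can be wrapped in a small script. This is only a sketch that assumes whole-hour offsets; the function names are hypothetical, but the exiv2 and jhead invocations match the ones above.

```shell
# Shift EXIF timestamps on all JPEGs in the current directory by a
# signed whole-hour offset, using whichever of exiv2/jhead is installed.

# Build jhead's -ta argument (format: -ta<+|->HH:MM) from an hour offset.
jhead_arg() {
  case "$1" in
    -*) printf -- '-ta-%d:00' "${1#-}" ;;
    *)  printf -- '-ta+%d:00' "$1" ;;
  esac
}

shift_jpegs() {
  offset=$1   # e.g. -10 subtracts ten hours
  if command -v exiv2 >/dev/null 2>&1; then
    exiv2 ad -a "$offset" ./*.JPG
  elif command -v jhead >/dev/null 2>&1; then
    jhead "$(jhead_arg "$offset")" ./*.JPG
  else
    echo "neither exiv2 nor jhead installed" >&2
    return 1
  fi
}

# usage: shift_jpegs -10
```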


This is where things get a bit trickier, and depending on your camera's RAW format some of the programs will not work: e.g. exiv2 supports ORF but not RW2, and the GUI alternatives (digikam or UFRaw) didn't offer any options to write arbitrary metadata. exiv2 can work on some formats as described above (which is nice because it is the shortest and simplest command) but failed to write RW2 (Panasonic). What did work was exiftool. One slight quirk: while exiftool displays pretty field names when you print all the metadata in a file (no arguments, just "exiftool file.RW2"), it requires the arguments for time adjustment to use the technical, compressed names of the individual fields you want to write, so: exiftool -"ModifyDate"-=10 -"DateTimeOriginal"-=10 -"CreateDate"-=10 *.RW2
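As a sketch of how those three arguments line up, the tag list can be built programmatically; exiftool also offers -AllDates, its built-in shortcut for exactly these three tags (the actual exiftool call below is guarded so it only runs where the tool is installed):

```shell
# Compose the long exiftool command from the three timestamp tags.
TAGS="ModifyDate DateTimeOriginal CreateDate"
args=""
for t in $TAGS; do
  args="$args -$t-=10"
done
echo "exiftool$args *.RW2"
# prints: exiftool -ModifyDate-=10 -DateTimeOriginal-=10 -CreateDate-=10 *.RW2

# Equivalent shorthand; only run if exiftool is actually installed:
command -v exiftool >/dev/null 2>&1 && exiftool -AllDates-=10 ./*.RW2 || true
```

Note that exiftool keeps backups as *.RW2_original by default; add -overwrite_original to modify the files in place once you've verified the result.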

I hope someone else who is on the search for the right tool to adjust their photos' metadata will find this information useful. I'll keep it as a reference for the future because I'm sure I will forget to set my camera to the right time zone again.

Please use the comment section below or head over to Google+ or Twitter @tobiaswulff to discuss this article or any of my photography and videography work. My Flickr, 500px and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

Best of 2015 Photography Portfolio

2015 has been my second year of picking up photography. I did a Best of portfolio for last year as well, just never published it, but this year I decided to write a quick blurb about each picture and make my progress over time public.

The Process

I got the idea from the Martin Bailey Photography Podcast. Martin talks about how important it is to develop the skills of narrowing all your photos and great moments of the year down to just the very best 10 pictures. In the end I couldn't get below 12 and figured it's a good number because there are 12 months (my photos are not strictly by month though). There is also a bonus photo which has been my wallpaper for over a year and I still love it - both on my desktop and on my phone. I've divided the 12 best photos up into three panels and will talk very briefly about each photo, starting at the top left, going around clockwise.

12 Best Photos of 2015

Rolling hills: shot in central Otago, NZ, on the Pisa range near sunset. Probably my strongest light-and-shadow photo so far which is why I like it so much. The composition could probably be improved but the light was changing fast and this one was the best of the lot. The moon adds a really nice central accent.
Butterfly: Shot at Singapore airport, again at sunset. Since the butterfly garden is in an airport with buildings on 3 sides of it, this was a very lucky moment and the light was gone just 5 minutes later. There are many butterflies in the garden and they constantly feed on fruit nectar and flowers so one doesn't have to wait long to get a shot with a beautiful insect like that. However, it was the only spot in the gardens at that time that received the golden sunlight onto the nice contrasting white flowers.
Pine tree: This is a shot from the Black Forest in Germany. I was walking through swampy marshlands (on wooden boardwalks) and noticed that the trees had those pollen containers (Biologists please send me the correct terminology) primed and ready to go at the slightest touch. So I shook the twig and took a rapid burst of photographs as the branch swayed back and forth and shook out the cloud of pollen. This photo was the best of the lot where framing, sharpness, and the swirl in the pollen cloud worked out best.
Ominous bridge: This is taken crossing the Top Butler river on the West Coast of NZ. When I took the picture I thought it just looked nice because a swingbridge high over a wild torrent always looks good. But at home I noticed how ominous and dark everything looked and after fiddling with the sliders in Darktable for a while I came up with this photo that (in my opinion) captures the harsh and mysterious conditions when hiking on the rough West Coast really well. Unfortunately, it comes out quite dark on some monitors and so far I haven't decided if I should just up the exposure or work on the darker bits individually which would ruin some of the high contrast look.

Beach: another sunset (must be a theme among photographers) taken in Sumner, Christchurch, NZ. I like that there is so much activity on the beach: people walking, surfing, paragliding, birds flying - and you can see the spray from the ocean really well.
Power lines: This one is taken among some wheat fields in Germany while out on a bike ride. That part of Germany consists of continuous rolling hills yet in this photo it's got a neat "Great Plains" look. I also like the colours and the very limited colour palette.
Snowy ridge: Back in NZ in the middle of winter. As the photographer I know exactly what the scene looked like in real life but I think the photograph successfully plays some tricks on the eye in terms of perspective. Because we happened to come down the steep part of this mountain at just the right time, less than an hour before sunset I think, we got this really strong shadow line on the ridge. The hiker sort of (but not completely) gives the scene a sense of scale.
Tops hut: Not a particularly hard or lucky photo but it's got all the elements of a picture-perfect NZ backcountry scene: some snow, clouds, a cozy hut, and just a very pleasant to look at mix of colours.

Waterfall: I'm not a huge practitioner of blurred water and waterfall photography but when you are out in the mountains you do spot a lot of pretty looking waterfalls. This one seemed to work best in black & white because it's got a lot of texture but not really any colour that would be missed (the water wasn't glacier blue or anything like that). My only complaint about it is that the rocks on the right are a bit too shiny. I find it quite tricky to balance my circular polarizer effect when doing long exposures of water because you want to take the glare off the rocks but the agitated water actually looks better with the reflections left in.
Sand columns: Another one where it's nearly impossible to get a good sense of scale. These are actually very small pebbles and other debris that prevent the river sand on a bank from getting eroded by rainfall. It's quite a unique photo and perspective which is why I chose it for my Best of collection.
Swirly clouds sunset: Taken from one of the Canterbury (NZ) foothills at sunset while low clouds got blown around our mountain from the West. It was actually a really strong wind but we camped behind a ridge line where we had this great view just above the tree line.
Dragonfly: This was the first trip after buying the Olympus 45 mm prime lens for MFT and I was blown away by its detail and sharpness. This scene might be a little bit busy but it's a neat scene with a clear subject in its environment with some really nice colours and detail. A fairly lucky shot for sure.


As promised, here is the link to the wallpaper in high resolution. This is one of my all time favourite photos because I love the perspective, the colours, and even the (yes, gimmicky) diorama effect. I hope someone else will also enjoy it as their wallpaper. If you want to redistribute the wallpaper, please link back to this blog post or to the photo in my gallery or on Flickr.


RAW Timelapse Workflow with Darktable and Davinci Resolve

I shot a new timelapse in the mountains, this time exclusively recording all the frames in full resolution and RAW (unlike my previous outdoor timelapse). It was recorded with the Olympus OM-D E-M1 (de) and Olympus 12-40mm F2.8 PRO (de) lens.

Out in the field

Basic outdoor timelapse 101: all manual settings, that is ISO, white balance, aperture, shutter speed. White balance was obviously daylight and I kept the ISO at its minimum (200). Shutter speed should be set to something "video-like". Video and film cameras usually use something called a 180 degree shutter which essentially means that the shutter speed is 1/(2 x frame rate). So for a 24 fps video that means 1/48 or (because photo cameras usually don't offer this setting) 1/50. Anything faster than that runs the risk of making the timelapse feel jittery and too sharp. For fast movements, like people or clouds, I like to go even slower and aim for something like 1/20 - 1/40. This gives the video a more dreamy and pleasing look.
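The 180-degree rule above boils down to one division. A throwaway helper (hypothetical, using awk for the arithmetic) makes the mapping from frame rate to shutter speed explicit:

```shell
# Target shutter speed under the 180-degree rule: 1/(2 x fps).
shutter_for_fps() {
  awk -v fps="$1" 'BEGIN { printf "1/%d\n", 2 * fps }'
}

shutter_for_fps 24   # prints 1/48 (pick 1/50 on most still cameras)
shutter_for_fps 25   # prints 1/50
shutter_for_fps 30   # prints 1/60
```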

I record every frame in RAW. I like to store JPEGs as well so I can generate a quick timelapse when I get home without having to go through the RAW workflow (described below) first.
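For that quick JPEG timelapse, a tool like ffmpeg can stitch the out-of-camera files straight into a video. This is a sketch rather than my exact command, and the output filename is arbitrary:

```shell
# Assemble all JPEGs (glob-sorted) into a 24 fps preview clip.
FPS=24
cmd="ffmpeg -framerate $FPS -pattern_type glob -i '*.JPG' -c:v libx264 -pix_fmt yuv420p preview.mp4"
echo "$cmd"
# Only run it when ffmpeg is installed (and JPEGs are present):
command -v ffmpeg >/dev/null 2>&1 && eval "$cmd" || true
```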

To do the actual timelapse recording, there are several options depending on your circumstances and your equipment:

  • Using the camera's in-built timelapse function: most compact solution and works well on the E-M1 except when you want faster than 1 second intervals;
  • using a remote shutter release or remote timer: works great but you have to dial in the intervals using the anti-shock functionality and it's an extra cable flapping in the wind;
  • a slider or panning head triggering the camera: whenever the E-M1 sits on the panning head (see next section), it will receive its shutter releases from the Genie. The result: accurate intervals perfectly timed with the stops between motions of the moving parts of the timelapse setup.

For filters I often use a graduated ND filter to make the bright sky and the darker ground a bit more even. This is particularly important at sunrise and sunset because the ground will be really dark. I also have a circular polarizer that lives on my lens 95% of the time: vegetation looks more lush, colours more vibrant, and annoying reflections of leaves or glaring surfaces disappear. It can also cut through a lot of haze and mist on a more cloudy day. Time in Pixels just released an excellent article about filters for video with many visual examples.

Getting moving

I've written about my DIY slider before and it is actually undergoing some major upgrades right now to make it more usable and flexible. However, I don't usually take it very far because it is heavy and big. In order to have something that always fits in even the smallest bag, is compact and rugged (not weather-proof, though) and "just works", I got myself a Genie Mini which is actually being developed here in NZ. It's controlled from a smartphone via Bluetooth so setting it up takes a few minutes since my phone is usually off when I'm in the outdoors (no reception anyway) but it's very intuitive and flexible (watch the videos on their website). All the shots in the video at the top of the page that have some side-to-side movement are done with the Genie Mini.

RAW workflow

The out of camera JPEGs are alright but (especially for landscapes) don't look nearly as good as they could when I develop my own final images from RAW: better colours, more dynamic range, more wiggle room in the highlights (and some in the shadows). This is particularly important when photographing sunsets, sunrises, or rapidly changing lighting conditions because the exposure can be adjusted so much in post. I load all my RAWs from one scene into Darktable, then do all my adjustments on one of them (shadows, highlights, general exposure, Velvia/saturation filter, contrast, noise reduction, but no cropping - I can always do that later when editing the video). Then, I copy the settings to all the other RAWs and export everything to bitmap files with a high bit depth, such as 16-bit PNG or TIFF. In theory, one could also make fine adjustments to individual frames at this stage.
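The copy-and-export step can also be scripted. The sketch below assumes darktable-cli is installed, that the tuned frame's sidecar is called reference.RW2.xmp (a hypothetical name), and that the camera writes RW2 files; darktable-cli applies the sidecar's edit history to each RAW and exports a 16-bit TIFF:

```shell
# Map a RAW filename to its TIFF output name.
out_name() { printf '%s.tif' "${1%.RW2}"; }

for raw in ./*.RW2; do
  [ -e "$raw" ] || continue                     # no RAWs in this directory
  command -v darktable-cli >/dev/null 2>&1 || break
  darktable-cli "$raw" reference.RW2.xmp "$(out_name "$raw")" --bpp 16
done
```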


The last step is to edit the photos into a timelapse video and maybe add some music and sound effects. I mainly use Davinci Resolve for editing because it also has colour grading built in but the colours should already be fairly correct and good looking from the last step. Davinci can directly import image sequences (i.e. individual files) and display them as video clips.


My (rather late) Guide to some Olympus OM-D E-M1 Features and Quirks

During my first big outdoor trip with the Olympus OM-D E-M1 (de) I found a few new ("hidden") features and quirks that I want to share.

HDR modes

In the dense Fiordland forests I experimented with the E-M1's HDR modes and found that, unfortunately, the camera itself doesn't really produce any useful, good looking HDR images (JPEGs). However, whether HDR1/2 modes or one of the multi-frame +/- EV modes from the HDR menu are selected, the camera will also automatically engage the high-framerate interval shooting mode so that the shutter only has to be pressed down once to capture all the exposures at a (theoretical) speed of 10 fps. If you want to make use of this feature you shouldn't set the "H" framerate to anything lower than the maximum. Note that in the modes HDR1 and HDR2 you won't get the individual exposures as RAWs - you only end up with one HDR image produced by the camera.

As described in the next section, the last used HDR mode cannot be toggled on and off with a press of the HDR button on the left-hand side of the camera the way you can toggle the other bracketing modes. Therefore, I had to sacrifice another Fn button and chose the one on the Olympus PRO lens since this is the only lens I regularly use for this kind of photography. My smaller lenses that lack the L button are typically used for portraits, wildlife or timelapses.

Quickly toggle bracketing

As opposed to the HDR functionality (which annoyingly has to be switched on and off with a separate Fn button), bracketing can be toggled with a quick press of the HDR button when the button lever is in position 2 and "lever2+left buttons" (the last item in the gear B menu) is enabled. The really cool thing is that a long press (more than 1 second) of the same button takes you to the bracketing menu where you can select between different modes (ISO bracketing, which uses different ISO levels; AE bracketing, which is similar but varies the shutter speed as well; ART bracketing; and the new focus bracketing, among others). Why can't toggling HDR work exactly the same way? It's basically the same thing.

ISO bracketing

I think this is my new favourite HDR mode for handheld shooting because it takes one exposure but delivers 3 different RAWs at +/- 1 EV (e.g. at ISOs LOW, 200 and 400). For many situations, that is enough coverage unless you're dealing with extremely high-contrast scenes. However, this also means that there is only one exposure and the camera delivers 3 different files with different light amplification, so it is not the same as actually producing different exposures with longer or shorter shutter speeds. Anything other than the base ISO of the camera (200 on the E-M1) will introduce noise and/or limit colours and dynamic range. Here are some interesting, yet slightly confusing discussions around this topic on Photography StackExchange.
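The EV arithmetic here is simple doubling: each stop up doubles the ISO, each stop down halves it. A tiny helper (hypothetical, awk for the math) shows where the LOW/200/400 triple comes from:

```shell
# ISO for a given base ISO and EV offset: base * 2^ev.
iso_at_ev() {
  awk -v base="$1" -v ev="$2" 'BEGIN { printf "%d\n", base * 2^ev }'
}

iso_at_ev 200 -1   # prints 100 (shown as "LOW" on the E-M1)
iso_at_ev 200 0    # prints 200
iso_at_ev 200 1    # prints 400
```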

One more note on AE bracketing which is similar to ISO bracketing because it does vary the ISO but also takes multiple true exposures at different shutter speeds: contrary to the E-M1's HDR modes, high-framerate interval shooting does not engage automatically so you'll have to set this separately if you don't want to press the shutter multiple times.

Interval Quirks

This, to me, is clearly a bug (although I can see why Olympus did it), and it undermines the precision one might expect from the built-in interval timers used for timelapses and multiple self-timer exposures. Which features exactly do I mean? First, there is a self-timer mode where the user can enter a custom value (in seconds) for the camera to wait, plus how many pictures to take at which intervals (also in seconds). The other feature is the timelapse mode, where the user can set the interval between shots. Unfortunately, both modes suffer from one common problem: the interval timer only starts once the image (RAW, JPEG, or both) has been written to the SD card. Not only does this introduce an additional delay, it's also inconsistent because writing to the card does not always take the same amount of time.

I can see why Olympus did this, at least for continuous shooting like it is done in the timelapse mode: if it takes a while to write all data to the card, shooting more photos during that time will result in the internal buffer slowly filling up until the camera won't be able to take any more pictures. However, I didn't actually find this to be a problem, and here is both how I tested it and how I work around it when I want to shoot timelapses with interval times faster than the normally achievable 2-3 seconds.

You'll have to use a remote trigger that plugs into the camera. These can be bought for very little money on sites like Amazon. Then, enable the H framerate anti-shock mode (the one with the rhombus) and set your desired interval time (which can be a fraction of a second) as the anti-shock delay time. Unfortunately, you'll have to go into the main menu to do this. One option to access this mode more quickly is to save it as a MySet and make it accessible through a Fn button or a position on the mode dial. However, to change the interval time you'll still have to do a lot of navigating in the gear menus. Once the shutter remote is pushed down and locked, your camera will happily shoot timelapses at interval times below 1 second. This has worked fine with a reasonably fast and large SD card like the Transcend 64 GB UHS-3 Flash Memory Card (de), down to 1/2 and 1/4 s for many hundreds of shots.

Quickly access focus peaking settings

While the viewfinder or LCD is displaying focus peaking (either because assist is enabled and you've turned the focus ring, or because you've pushed a Fn button that has been assigned to peaking), a press of the INFO button will pop up a small menu that lets you adjust peaking colour and strength without having to trawl through the main menu. I wouldn't be surprised if this quick-access feature works for other things as well but I haven't noticed any yet.


Smart Watches for Photographers

About a decade ago smartphones entered the market and gave us photographers and videographers many new tools to make our daily lives easier (has that ever actually worked out?). Such tools are for example: notes for screenwriters or location scouting, ephemerides to figure out where the sun and moon will be at a certain time, remote controls for cameras and GoPros, and of course the incredibly fast turnaround from shooting something at professional quality to getting it onto social media. Can you even still remember that previously all of this had to be done on big heavy laptops or even on paper?

From there, the next step is smartwatches. Not only do they add a second display to your mobile computing setup, they also enable functionality for automation, remote control and reminders that wasn't possible before. And we all know it won't stop there: the next "smart" device category is already just around the corner (whether it will be Google Glass or something very different).

In this article I want to show what, in my opinion, a smartwatch like the Pebble Steel (de) or the new Pebble Time Round (de) can do for us photo and video people. Some of those features might not apply to other Android Wear devices or the Apple watch because the Pebble takes its own, different approach to the whole smartwatch thing. The Pebble watches are waterproof which means they can be used in bad weather or wet conditions where the phone should stay safely in a pack or dry bag.


Custom watchfaces are essentially very simple "apps" whose only task is to display the time and some additional information such as the weather. They are not very interactive, i.e. you cannot easily switch modes or go into submenus to trigger actions.

For photography my favourite watchfaces are 24-hour clocks that represent day and night time graphically on the face. The most popular and fully functional ones in the Pebble store are "Sunset Watch", "Twilight-Clock", and "SunTime Pro" - all free. "Sunset Watch" has the cleanest watchface but takes a few seconds to retrieve or calculate sunrise and sunset times every time you switch to it; "SunTime Pro" displays the most information on the screen, including the inclination of the sun and battery and bluetooth status, but it does not show the current phase of the moon. "Twilight-Clock" is somewhere in between and it's the watchface I'm currently using if I want sunlight information. Not only does it show when the sun rises and sets, it also graphically displays when and how long the different twilight periods (civil, nautical, astronomical) are.

Some watchfaces and some of the more complicated apps that act mostly like watchfaces with extra functionality (the most popular being Glance) can display upcoming calendar events. If you are shooting an event and need to know what's going to happen every hour and where you have to be, a quick look at your watch can give you all this information, and notifications (which make the watch vibrate) ensure you don't miss anything important.

Pebble + Tasker

If the "precooked" watchfaces and apps described above are not enough and you're not afraid of either searching for existing profiles or creating your own (essentially very simple programming using a graphical interface) then Tasker can turn your Android+Pebble into a gadget of truly limitless possibilities.

Use PebbleTasker to take a photo remotely: PebbleTasker is a Pebble app that can directly run Tasker tasks on the phone. These tasks can be anything you want, and they can contain one or many different actions (change volume, screen brightness, send a text, play music, lock the screen, etc.). If used with a task that takes a photo (I'm sure video options exist as well), your smartwatch becomes something like a GoPro remote: you can set the phone up in one place, then trigger it from up to about 10 m away. Of course, Tasker can also be used to implement various self-timers so that the photo is taken 2 or 5 or 12 seconds after pressing the button.

Use AutoPebble and Tasker's geolocation features to bring location-aware menus onto your watch: Tasker can trigger tasks when the phone is in a certain location (determined by cell phone tower, Wifi network, or GPS). AutoPebble can be used to push selection menus or lists of options to the watch. To do this, the Tasker task first has to have an item that opens the AutoPebble app on the watch, then shows a list of items. Each item in this list can be programmed to send a code back to the phone on normal and on long press. Each code can then in turn trigger another Tasker task using the Event Profile that listens for a code.

Say you want to record the ideal time for a photograph in a certain location while out scouting: when you get to the location, Tasker will vibrate the watch and display a list of actions, one of them being "record time and orientation". When the button for this item on the watch is pressed, Tasker can then create a note (in Google Docs or Evernote or any other note-taking app with Tasker integration) with the current time and the orientation using the compass in the Pebble or the phone (I'm not 100% sure if the compass information is accessible within Tasker, so the phone compass for GPS journey direction might have to suffice). Another similar possibility would be useful for film photographers: present a list of aperture values and then, when one has been selected, a list of shutter speed values, to record data about photos taken on a film camera without having to take the phone out and with automated geotagging.

Highly functional Watchfaces

Finally, both concepts of simple watchfaces and complex apps can be tied together using apps that mostly act like a watchface but can show additional information or integrate other apps using direct button actions or menus. My current main app that is on the watch 90% of the time is Glance. It displays time, weather and missed texts/calls in a clean and nice looking watchface, and the buttons bring up a list of notifications, past text messages, appointments and a PebbleTasker page to send commands to the phone (as described in the previous sections).


The possibilities are endless and my examples are only a few of the scenarios where the phone+watch combination would come in handy. Since I've just started using a Pebble, I'm sure I'll discover many more use cases in the future, some really useful, some more gimmicky. I'd be very interested in what other photographers have come up with.


New Olympus OM-D Firmware (4.0)

Olympus has just released firmware version 4.0 for the OM-D E-M1 (de) and version 2.0 for the OM-D E-M5 Mark II (de). I haven't seen an announcement on their website yet but the updater software can already download and install both camera and lens updates. One tip that took me a few tries to figure out: when connecting the camera to the computer, select "Storage" from the camera screen, otherwise the Olympus updater software won't be able to see the device.

It's a free upgrade and brings a lot of exciting new features to both of those cameras. Unfortunately for me as an E-M1-only owner, some of those are for the E-M5II only. Before upgrading keep in mind that the upgrade process will wipe all your settings from the camera!

Update: After having played with the new firmware for a day now, I've updated the sections below with some observations and new discoveries (in bold). I will also try to put a sort of E-M1 guide page together with useful settings and little quirks.

Electronic Shutter

It looks like the E-M1 is finally getting an electronic shutter mode, which will be great for situations where the loud mechanical shutter is not appropriate. I don't know who made that decision but the heart symbol for the silent shutter (next to the familiar rhombus for the anti-shock mode) is kinda cute. However, be aware of the limitations: rolling shutter (if panning while taking a shot) is worse, and flickering lighting such as fluorescent lights or projector lamps can make photos shot with the electronic shutter nearly unusable.

Update: The electronic shutter setting is in the second camera menu (menu button, then on the second page down). Select Anti-Shock/Silent, then pick a silent delay (0 seconds for no delay but no mechanical shutter), then half-press the shutter button to go back to photo shooting mode and select drive mode Single Silent (heart). This is a lot of setup but once it's done you can quickly switch between mechanical and silent shutter using the drive/HDR button on the top left of the camera body. I'm looking forward to using the electronic shutter in my timelapses to go a bit easier on the mechanical shutter mechanism.

Focus Stacking and Bracketing

The biggest feature additions are the new modes for focus stacking and bracketing. Both do essentially the same thing, that is taking a whole bunch of pictures with the same exposure settings but slightly different focus points. This is particularly useful in macro photography where the depth of field usually is very small. It works with compatible auto-focus lenses such as Olympus's M.ZUIKO PRO lens series (de) by automatically shifting the focus point after each photo.

In focus bracketing you will end up with all of those photos and you can post-process them however you like (similar to Panasonic's new Lytro-like focus-later technology). However, when focus stacking is selected, the camera will do all the magic inside and produce one photo out of 8 individual ones, all with slightly different focus points. This should result in a macro shot where the whole subject is in focus.

Update: It works - as long as nothing in the frame moves. Focus stacking only works with the electronic shutter and it's so quick that it can easily be done hand-held. I don't have a dedicated macro lens so I couldn't really shoot any meaningful examples, but it turns a long focal length at f/2.8 into "everything is in focus", which is pretty cool. When focus stacking is selected, it also keeps all 5 individual files on the card so you can post-process them later. Some of the little but great improvements that I haven't mentioned in the original article are:

  • the menu system remembers where you left off last time so you can quickly play with settings without having to go through pages and pages to find it again,
  • not only are there more colours for focus peaking (red or yellow is so much better than black or white!) but the intensity can also be changed,
  • histogram, level gauge and over/under exposure indicators can now be displayed at the same time: this is huge because previously I had to jump through all the different options with the Info button to get my camera level and then get the exposure right; you can select two different custom modes to cycle through using the Info button and choose which parts you want on each screen - the settings are under Menu - Gear D - Info Settings - LV-Info.

Simulated Optical Viewfinder

The S-OVF mode disables some of the "live view" features in the viewfinder, such as boosting the light levels. This means that it won't assist the photographer in bad lighting conditions, but on the other hand you'll see exactly what a true optical viewfinder would show, that is, the image depends entirely on the currently selected aperture on your lens. White-balance compensation is also turned off for a "truer" image. Most of the time I appreciate the assisting features of the EVF and I use the histogram to accurately determine whether my exposure is good, so I can't see myself using this mode too much, but it's still a free new feature that could come in very handy in certain situations (e.g. when not using the histogram for some reason).

Update: I probably didn't get it fully right in the paragraph above because I didn't know how optical viewfinders used to work. When S-OVF is selected, exposure compensation is completely disabled so you see pretty much exactly what your eye would see outside the camera. If you want to judge exposure you have to go by the metering number - the histogram doesn't help at all because it only turns what's currently in the EVF into a graph which means it won't change as you alter ISO, aperture or shutter speed (because the OVF doesn't change). To see the photo as it will turn out when you press the shutter you have to do two things: 1) go back into normal EVF mode, and 2) turn off Live View Boost under Menu - Gear D - second page. This is my preferred setting because it gives the least unwanted surprises, and I've mapped the S-OVF to Fn2 so I can change to it if I want a more realistic view.


There are a few upgrades that apply to video only, such as a new picture profile (E-M5II only) and synchronised recording with an Olympus audio recorder. I don't own either, so sadly video won't receive any useful improvements for me (I was really hoping for focus peaking during recording, but at least they are adding more colours to choose from for the outlines). Another minor addition is the slate tone generator, which I assume can be assigned to a button. Using this probably looks more professional than snapping your fingers in front of the camera when recording audio with an external recorder.

For the E-M1 there is another good-and-bad update for video: a new framerate. It's great that Olympus has added 24p, but the camera is still missing 60p to become a useful sports and documentary video camera (a role its rugged, weatherproof body and in-body stabilisation would otherwise make it perfectly suited for). I don't quite understand why Olympus adds features like timecode (and those awful movie effects) before improving on the essentials.


The PRO lenses will also receive a new firmware which will add support for disabling the MF clutch. I only usually use the clutch to switch to a true manual focus while shooting video. When shooting photos I have previously pressed my back-button-focus button just to find it didn't do anything because the clutch was still on manual focus. This update might help in those situations.

So overall it's a great update and we should keep in mind that not all manufacturers release such improvements for free. However, there are still features missing that I'm sure the camera would be capable of handling. They might arrive in the future with another free upgrade despite the E-M1 Mark II probably not being too far away anymore. I'm optimistic because in this upgrade Olympus has added features to the E-M1 that at first looked like they were for the E-M5II only.

Please head over to Google+ or Twitter @tobiaswulff (see links on top of the page) to discuss this article or any of my photography and videography work. My Flickr, 500px and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.

Long-run timelapses across multiple seasons - Part 2

In the first part of my timelapse blog series I wrote about different types and techniques for longer and really long (seasonal) timelapse movies. In this article I want to describe a few specific techniques and tools that are useful (and often mandatory) to finalize those timelapses.

But first I'll lead into this timelapse article with my new video "Mountains and Clouds 2015" that I've shot over the course of about one year in the South Island of New Zealand on various trips.

One thing I wish I had done from the start is shoot all the frames for timelapses in full resolution and as RAW files. Always keep future use of your files in mind! Unless I got lucky and the scene was really well exposed, I couldn't correct as much as I could have when grading from RAW, and with the higher resolution the final video would also have looked sharper. I take bets on Twitter (@tobiaswulff) on which scene was the one shot in RAW ;) .

Batch Processing

The first step in processing timelapse RAW frames is to "develop" them from a digital negative into a bitmap photo file. When exporting the image it is important to choose a format that retains the full bit depth of the original RAW file: options are TIFF or 16-bit PNG. JPEGs should not be used since they only store 8 bits of color depth, which will not fare well in color correction and grading later.
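This developing step can be scripted. Below is a minimal sketch of batch-developing RAW frames into 16-bit TIFFs with darktable-cli (Darktable's command-line exporter); the folder names and the Olympus .ORF extension are my assumptions, and the `--conf` key shown is the setting that forces the TIFF exporter to 16 bits per channel.

```python
# Sketch: batch-develop a folder of RAW frames to 16-bit TIFF with
# darktable-cli. "frames/" and the .ORF extension are assumed names.
import glob
import pathlib
import subprocess

def develop_command(raw_path, out_dir="developed"):
    """Build one darktable-cli call: RAW in, 16-bit TIFF out."""
    stem = pathlib.Path(raw_path).stem
    return [
        "darktable-cli", raw_path, f"{out_dir}/{stem}.tif",
        "--core", "--conf", "plugins/imageio/format/tiff/bpp=16",
    ]

if __name__ == "__main__":
    for raw in sorted(glob.glob("frames/*.ORF")):
        subprocess.run(develop_command(raw), check=True)
```

Because every frame gets the same treatment, a run like this keeps exposure and colour consistent across the whole sequence, which matters later when the frames are compiled into video.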

There is another way, though: keeping the files in their RAW format and using video editing software that can deal with RAW footage, such as DaVinci Resolve 12. For Resolve to ingest the RAW files they have to be converted to DNG. My photo management software of choice, digikam, has a built-in DNG converter, and I believe Apple Aperture, Adobe Bridge and/or Lightroom do too. While working with RAW video is pretty neat, because this is not a recognized format from a video camera (such as RED), the possibilities to adjust the image are limited, and a proper RAW photo program such as Darktable or Lightroom is much better suited for the job. Nevertheless, you are importing 10+ bit image data into the editing/color grading software, which gives much, much more room for colour and exposure adjustments.

Compile Videos

As described earlier, Resolve can ingest DNG RAW files so it is possible to do editing and color grading with the source material. For the timelapse video posted at the beginning of this article, however, I compiled each sequence into a video first, which can be done using one of the following two Linux programs: Blender or ffmpeg (CLI tool). This use of "baked" videos will make the edit smoother because the program doesn't have to deal with as much data.

ffmpeg is a command-line tool, so it is easy to batch-process or automate converting image file sequences into video files. ffmpeg supports all the usual video file formats and containers, including ProRes, which is a 10 to 12-bit 4:2:2 or 4:4:4 codec and the preferred format for Resolve. However, I found that the picture didn't quite come out the way I wanted: in particular, darker areas got too dark, so stars in night-time sequences (like the one at the end of my video) almost disappeared. I'm sure this can be adjusted using the codec settings, but for now I have turned to Blender.
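For reference, this is roughly what the ffmpeg invocation for turning a numbered TIFF sequence into a ProRes file looks like. The frame pattern, frame rate and output name are my assumptions; prores_ks is ffmpeg's ProRes encoder.

```python
# Sketch: compile frame0001.tif, frame0002.tif, ... into a ProRes .mov
# that Resolve can ingest. All filenames here are placeholders.
import shutil
import subprocess

def prores_command(pattern="frame%04d.tif", fps=25, out="sequence.mov"):
    return ["ffmpeg",
            "-framerate", str(fps),     # frame rate of the image sequence
            "-i", pattern,              # printf-style input frame pattern
            "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
            "-profile:v", "3",          # profile 3 = ProRes 422 HQ
            "-pix_fmt", "yuv422p10le",  # 10-bit 4:2:2
            out]

if __name__ == "__main__":
    if shutil.which("ffmpeg"):  # only run if ffmpeg is installed
        subprocess.run(prores_command(), check=True)
```

The codec settings (profile and pixel format) are exactly the knobs I mention above that would need experimenting with to keep the dark areas from being crushed.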

Blender is first and foremost a 3D modelling and animation program. In recent years, however, it has also become an increasingly powerful video editor and VFX pipeline, and it can be used to turn any bitmap sequence into various video formats. Once an image sequence has been imported, it can be modified (scaled, rotated, color corrected, composited with a 3D scene, etc.) using nodes as shown in this screenshot:

For exporting I chose AVI RAW since it gave the best quality and could be converted to ProRes for Resolve, again without any loss in quality. It might be possible to export directly to ProRes or to use ffmpeg under the hood but I haven't explored the export and encoder settings too deeply yet.

Long Timelapse Processing Techniques

The biggest problems with long-term timelapses in the outdoors (i.e. outside a controlled environment) are changing lighting conditions and that it's basically impossible to get the camera set up 100% exactly the same way every time: the tripod will be positioned slightly differently, a zoom lens will make the focal length setting inaccurate, pointing the camera at the same spot will still be a millimeter or two off ... Luckily, both issues can be dealt with fairly successfully in software as described below.

Stabilizing and Aligning Photos

In order to make the transition from one frame to the next look as smooth as possible, non-moving objects really shouldn't move or jump around between frames. Therefore, it is necessary to either align all the photos before they are compiled into a sequence (or video), or to stabilize the final video. There are at least three very different ways to achieve this, and they differ greatly in the time and effort required and in the quality of the end result.

1) Align photos automatically using Hugin. Hugin is an HDR and panorama toolkit but it can also be used to align a sequence of photos without exposure bracketing or stitching them into a panorama. There are several algorithms to choose from when aligning photos (I usually use "Position(y, p, r)" but its results are not perfect). The algorithm will look at all the photos that are next to each other in the sequence and find common control points in the picture that it uses to align (translate, rotate and scale) them. Control points can also be manually added, removed and shifted to improve the alignment. In terms of speed this is the easiest and fastest approach. I usually roughly follow this tutorial - something you will need because it's a complex piece of software!

2) Align photos by hand in GIMP (or Photoshop): there are various plugins for those image editing programs that allow a user to specify two common control points in two pictures and the plugin will then do the alignment. The results are near-perfect but it will take a long time because you have to do it for each frame in your sequence (what's that - 60, 90, 200?). A professional suite like Adobe's probably contains automatic tools similar to Hugin as well.

3) Stabilize the final video: all professional editing and/or VFX programs such as Resolve, Hitfilm, After Effects and also Blender have built-in stabilizers. I haven't tested this method yet but because they work on a frame-by-frame basis they should be able to stabilize the footage very well. However, as described in the next step, I like to blend (or blur) my frames which will definitely make stabilizing the video more inaccurate so doing it before the video is compiled seems more robust to me.
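Approach 1 can also be scripted: the Hugin toolkit ships a CLI tool, align_image_stack, that does the control-point matching and alignment without the GUI. A sketch, with assumed filenames; as far as I know `-a` sets the output prefix for the aligned images and `-C` crops them to the common area.

```python
# Sketch: align a developed frame sequence with Hugin's align_image_stack.
# The "developed/" folder name is an assumption.
import glob
import subprocess

def align_command(frames, prefix="aligned_"):
    # -a <prefix>: write aligned output images with this prefix
    # -C: auto-crop the results to the area covered by all frames
    return ["align_image_stack", "-a", prefix, "-C"] + list(frames)

if __name__ == "__main__":
    frames = sorted(glob.glob("developed/*.tif"))
    if frames:  # skip if there is nothing to align
        subprocess.run(align_command(frames), check=True)
```

For very long sequences it may be worth aligning in overlapping chunks rather than one giant call, since the matching time grows with the number of frames.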

Blending Photos

As shown in some of the examples from other photographers in the first timelapse article, we often simply blend images or videos together to make for a smoother (and longer) final product. This can either be done between sequences (say you show a locked down timelapse of autumn, then blend it into another timelapse of the same spot in winter) or between every single frame. Using the free toolkit ImageMagick, this can be done with one command:

convert frame_a.jpg frame_b.jpg -evaluate-sequence mean frame_ab.jpg

Ideally, your original source files would be organized to have odd sequence numbers (frame001, frame003, ...) and the generated blended images will fill the gaps (frame002, frame004, ...). This way you'll end up with all the frames for the timelapse video, now much smoother because even though lighting conditions and your camera setup change dramatically between each frame, there is now a frame in between that combines both conditions and makes it look much more pleasant on the eyes.
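The odd/even numbering scheme above can be automated. In this sketch, original frame i sits in odd slot 2i-1 and the blend of frames i and i+1 fills the even slot 2i between them; the file names are assumptions, and the convert call is the ImageMagick command shown above.

```python
# Sketch: generate the in-between blended frames with ImageMagick.
# src001.jpg, src002.jpg, ... are assumed source names.
import os
import subprocess

def blend_plan(n_frames):
    """(source i, source i+1, even output slot 2*i) for each neighbouring pair."""
    return [(i, i + 1, 2 * i) for i in range(1, n_frames)]

def blend_command(a, b, out):
    # Averages the two input frames, exactly as in the command above
    return ["convert", a, b, "-evaluate-sequence", "mean", out]

if __name__ == "__main__":
    for i, j, slot in blend_plan(4):
        a, b = f"src{i:03d}.jpg", f"src{j:03d}.jpg"
        if os.path.exists(a) and os.path.exists(b):
            subprocess.run(blend_command(a, b, f"frame{slot:03d}.jpg"),
                           check=True)
```

After copying the originals into their odd slots, the combined frame001, frame002, frame003, ... sequence is ready to be compiled into video.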


As you can see, there is quite a difference between the blended and the original image sequence - apart from the speed that it loops at of course since one has twice the frame-count of the other. Note that this is not a perfect alignment and also that I haven't done any RAW processing yet (these are simply out-of-camera JPEGs) so for a final product I would first process all the frames so that they are similar in exposure and saturation. Left is original (rough and unpleasant), right is blended (smoother):


DIY Motorised Dolly Slider

Motorised sliders (or dollies) bring motion and a nice parallax effect to timelapse shots. Commercial sliders have the advantage of being well built (hopefully roughly proportional to the amount of money spent) and easy to set up. On the other hand, rails and cart alone can cost several hundred dollars, and adding motors and a control unit quickly pushes the total past $1000.

A home-made DIY slider can be built for $150-200 in materials plus an Arduino board or other micro-controller, depending on what you already have lying around. I'd like to take mine hiking more often, but it wasn't built with minimal weight in mind, so this is definitely where a good commercial carbon-fiber slider and a cart with less metal could come into play one day.

There are many different designs out there but the biggest distinctions between them are:

  • continuous motion vs stepper motion
  • two rails vs monorail
  • motor mounted on end truss vs motor mounted on the cart

Continuous motion is cheaper because a very simple motor can be used and the expensive timing belt can be replaced with a wire to pull the cart. However, this does not work for longer exposures since the camera has to be absolutely still while the shutter is open so I opted for a stepper motor right from the start. Keeping a cart stable on a monorail requires more engineering than with two rails but it can cut down on weight and makes it easier to mount to a single tripod with a screw hole in the centre of the rail. I didn't know how to build this so I went with two rails and wheels on either side to keep it stable.

At first I thought I could keep the cart lighter by mounting the stepper motor at the end of the rails. While this is true, once you add up the weight of the metal cart itself, a camera body and lens, and a ball head, a single stepper motor doesn't make much of a difference any more. Having the motor on the cart has several advantages: everything from the camera to the motor to the control unit is in one place, so you won't need to run cables all over the place, and only half the length of timing belt is required since it doesn't loop around the ends. Here is a link to a good example of a DIY monorail motor-on-cart slider.

    Here is a quick test video I shot using the slider. Unfortunately, I bumped it a bit towards the end but you get the idea. There will be more exciting timelapses in the future that actually use the motion/parallax effect in a meaningful way.

    Material List

    • Arduino: $5-20 depending on original vs clone and capabilities - it's easier if it fits a shield
    • Motor driver shield for Arduino: $20
    • Battery pack and switching regulator: $10
    • 12V NEMA-size stepper motor and mount $23 - alternatively a smaller stepper motor
    • Timing belt and pulleys $40 - one could probably find much cheaper spare parts somewhere else
    • Aluminium rails $10-20 - I can't remember exactly
    • Steel or aluminium cart and ball bearings - can't remember how much it was, maybe $20; I had the ball bearings already
    • Cheap, small to medium-sized ball head - don't use the really small ones like the Giottos Mini Ball Head if you have anything bigger than a compact point&shoot because it will wobble a lot and adjusting it will be very hard: $20-$30

    In the photo above - once you look past the rat's nest of wires - you can see the motor driver shield sitting on the Arduino. All connections come out of the shield (they are fed through from the Arduino): the rainbow ribbon cable is for the rotary encoder, there are some wires for the LED, ground and 5V, and the stepper motor itself is hooked up to the left-hand 4-pin screw terminal. The two "things" encased in plastic in-line with the wires are a fuse and a switching regulator that brings the input voltage (9-12V) down to 5V for the Arduino. A linear regulator like the one on the Arduino would work too but might generate too much heat inside a closed-up enclosure.

    Photo above: a cheap but decent-sized ball head that unfortunately wobbles a little bit but does the job. Since all ball heads (and all decent tripods) use the same screw sizes, you can mount whatever you want, small and cheap or big and fancy. I like that the ball head has got a two-way water bubble level built-in.


    On the Arduino platform I use the Adafruit_Motorshield library. To move the stepper motor the minimal distance and as smoothly as possible I run:

    // Note: in the Adafruit_Motorshield (V2) library, getStepper() takes the
    // steps per revolution and the shield port (1 or 2), not a speed
    Adafruit_MotorShield *motor_shield = new Adafruit_MotorShield();
    Adafruit_StepperMotor *motor = motor_shield->getStepper(200, 2);
    motor_shield->begin();                      // initialise the shield (I2C)
    motor->setSpeed(10);                        // speed in rpm
    motor->step(distance, FORWARD, MICROSTEP);  // microstep for smooth motion

    I also set up an LCD and a rotary button using the SoftwareSerial and ClickEncoder libraries, respectively. Text can be written to the 16x2 LCD directly over the serial line, plus there are some special characters that move the cursor, clear the screen, and so on. The ClickEncoder uses up one of the timers of the Arduino and unfortunately it is the same used by the MotorShield library so I can't use both at the same time. This is ok because I only use the rotary encoder to set up all the timelapse parameters, and once the slider is moving and the camera is taking pictures I don't want to touch it again anyway. It's basically two separate programs: first the menu/settings, then the timelapse.


    I found that the 200 steps per revolution that the stepper motor provides aren't quite enough for super smooth and slow motion, so after about 15-20 minutes at one frame every second the cart will already reach the end of the rails. It is possible that I haven't configured the motor correctly yet, but it is set to micro-steps in the code and as far as I know this is the smallest possible rotation. I use micro-steps instead of normal steps because they provide smoother movement: a normal step would yank the cart and make the heavy camera wobble too much. It also makes sure that the motor is always engaged in case the rails are on an angle, so the cart can't slide back down. To solve the problem of step sizes being too big I might incorporate some model kit plastic gears, but for now I have avoided that since it isn't easy to get everything lined up correctly without making the whole construction incredibly flimsy. There are also other stepper motors out there that provide twice or more the number of steps per full revolution, usually through internal gears (see the alternative smaller motor I've listed above).
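The back-of-the-envelope maths here is simple. All the mechanical figures below (a GT2 belt with 2 mm tooth pitch, a 20-tooth pulley, 1/16 microstepping) are assumptions for illustration, not the specs of my build; plug in your own belt and pulley numbers.

```python
# Sketch: how far the cart travels per microstep, and how long a run lasts,
# under assumed hardware (GT2 belt, 20-tooth pulley, 1/16 microstepping).

def travel_per_microstep_mm(steps_per_rev=200, microsteps=16,
                            pulley_teeth=20, belt_pitch_mm=2.0):
    travel_per_rev = pulley_teeth * belt_pitch_mm  # mm of belt per revolution
    return travel_per_rev / (steps_per_rev * microsteps)

def minutes_until_end(rail_mm=1000.0, interval_s=1.0, steps_per_frame=15):
    """Runtime until the cart reaches the end of the rail, assuming one
    movement of steps_per_frame microsteps after each frame."""
    frames = rail_mm / (steps_per_frame * travel_per_microstep_mm())
    return frames * interval_s / 60.0
```

With these assumed figures each microstep moves the cart 0.0125 mm, so the trade-off is direct: fewer microsteps per frame means slower, smoother motion but the same rail length still sets a hard limit on total runtime.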

    Currently I'm running the whole setup off 8 AA batteries, specifically Panasonic Eneloop AA Ni-MH Rechargeable Batteries (de). However, since the stepper motor requires 12V and 8 rechargeable AAs only provide a maximum of 9.6V, it does have issues climbing an incline steeper than about 20 degrees. On the flat it works great, though, and it lasts for many hours as well. In the future I might upgrade to a 12V battery or boost the voltage with a converter, maybe using a LiPo battery for its amazing energy density.

    Future Developments

    I've got a little micro-switch that I want to mount at the end of the rails so that it detects the cart hitting the end. This will eventually stop the timelapse. I also want to refine the menu system and hopefully improve the software side of driving the motors, i.e. better speed and step control.

    In case I've already exhausted all possibilities regarding the motors, I might actually have to add two differently-sized gears to the system to bring the speed down so that I can take hour-long timelapses with many hundreds of frames.

    And finally, panning while moving sideways would make my timelapses look much more impressive, so adding a second motor is high up on the agenda. However, I'm not sure yet how to fit it between the camera and the top of the ball head (glue it to the quick-release plate?), or whether it could or should live under the ball head, in which case I'm worried about stability. The latter would see the motor mounted under the cart, however, which would be a very clean-looking solution. Either mounting point will give a very different result when the rig is on an incline, and depending on the subject it can work well or look really out of place.

    The most important improvement to be made, however, is getting it off the ground. At the moment grass or plants can easily get caught in the wheels and there are no points to screw in a tripod quick-release plate (the end trusses hook quite nicely into the top of my Manfrotto BeFree Travel Tripod (de), though). Often something can look good at eye level, but having to put it all the way down on the ground limits my possibilities, so a good 1/4" screw hole at either end would make it immensely more useful and stable in vegetation.


    Long-run timelapses across multiple seasons - Part 1

    Some links to products in this blog post are Amazon Affiliate links that earn me a few cents or dollars if a reader buys any product on Amazon through this link. The price of the product does not increase so it is a free way to support this site by using the links provided. The main product link goes to Amazon.com and the "(de)" leads to Amazon.de.

    This is the first part of a longer series of blog posts about timelapses. I have started planning for and taking long-run timelapses that span many weeks and months, and I want to talk about how these ideas and visions can be accomplished in a reasonably efficient workflow by an amateur photographer. I say "reasonably" because processing timelapses from RAW files and working on such long running sequences will always involve a lot of work.

    Who, and why

    Apart from an article on Photo Sentinel, there aren't many interesting articles or howtos available. I highly recommend reading that article if you're interested in timelapses because it showcases different techniques and links to some great videos in each category. However, it belongs to a company that sells specialised long-term timelapse equipment, which does not really fit the kind of subjects I'm shooting. On Youtube and Vimeo there are only a couple of videos that portray certain subjects in nature over the course of many months, but there are some amazing and award-winning short videos and films that I will link to further down.

    The most impressive executions of this sort of timelapse - and the aforementioned howto talks about it as well - are several features by the BBC such as The British Year and of course Planet Earth. The team which shot the timelapses for The British Year talk at length about planning, shooting, editing and various tips in a blog post. I highly recommend Chad's blog and all the content on his website as it is a wealth of timelapse stories, workflow tips, and kit reviews.


    The obvious but most time-extensive way to shoot a timelapse across multiple seasons is to take individual photos of the same subject under similar lighting conditions and from the same spot over a long, long time. Another technique is to take multiple "normal" timelapses, that is, sequences of an hour or a few hours each, and then blend them together as in the Youtube video "4 Seasons 1 Tree". Unfortunately, the blending will be very obvious, and there also isn't much movement or change within the individual sequences themselves. On the other hand, there is no flickering due to abrupt changes in lighting or weather. This could be enhanced with some masking and selective blending to change some areas of the image before others, which can also be seen in the video as the ground changes before the tree does.

    The easiest way to accomplish a long running timelapse is to have a camera that can be left in a fixed spot and orientation. The photo above is actually a blend of two individual frames, one with different lighting and more leaves on the tree. It shows that blending and aligning photos on the computer can produce a very smooth result even if the original photos are totally unaligned and taken in completely different conditions. In amateur nature photography, it usually isn't an option to have an absolutely fixed camera spot because the locations are too exposed to the elements. Even in urban environments you wouldn't leave your camera or tripod anywhere except inside your own house or apartment - and then you wouldn't be able to take it somewhere else.

    Therefore - unfortunately - we have to re-set up the camera and point it at the same spot every single time. This gets very complicated if movement of the camera is involved, but even with a static shooting position there will be slight variations due to uneven ground, zoom lens variations (zooms are not "clicked" after all) and inaccuracies when pointing the camera at the subject. A very sturdy tripod is important, but because I usually travel on foot or by bike and also take my equipment on hikes into the mountains, I couldn't just go for the sturdiest one out there. My tripod is the Manfrotto BeFree Compact Aluminum Travel Tripod, which I love because it fits even into a normal day pack, yet extends to eye level and is reasonably rigid. However, pushing down on it will bend the legs in their joints, so it is tricky to get it set up 100% exactly the same way every time.

    So there will be variations in tripod position, tripod height, camera attitude and focal length. Luckily, those issues can be resolved almost completely in post-production, and I will talk about methods and tools to align and blend photos in the next blog post in this series. Apart from dedicated software and plugins for Lightroom, there are also a bunch of free tools available that do a very good or even perfect job, at the expense of a perhaps not so polished user interface or some efficiency.

    Ongoing work and ongoing articles

    Something like the video "Fall" from NYC Central Park is probably the closest inspiration to what I am planning to achieve. I didn't know the video when I started my project. There is also a year-long timelapse from the Canadian Rocky Mountains which employs some really nice blending and obviously beautiful outdoor scenery.

    As I shoot individual frames and sequences for my own long-run timelapse video, I will add more parts to this series talking about specific shooting tips and releasing some more snippets of the ongoing work. Towards the end I'm sure it will all become fairly editing and video post-production heavy.


    Photoshoot Experience: KiwiPyCon Conference

    A few weeks ago I shot my first event which was KiwiPyCon 2015 at the university campus in Christchurch, New Zealand. KiwiPyCon is an annual programming and software development conference organised by the New Zealand Python User Group (NZPUG). It consisted of talks and tutorials on Friday, and talks (plus many morning tea, lunch, and afternoon tea breaks) on Saturday and Sunday. All photos can be found on the Flickr page I created for the event.

    I came to the event with two cameras and two lenses: my Olympus OM-D E-M1 (de) with the Olympus 12-40mm f/2.8 (de), and a borrowed Panasonic G5 with my Olympus 45mm f/1.8 (de) prime. The lecture theatres were fairly dark so I expected to use the prime lens quite often during the talks.

    The first challenge, which much to my frustration didn't even have anything to do with photography itself, was to get photos up onto the Internet as fast as possible, ideally while a talk or conference segment was happening. New photos and updates could be announced on the official @NZPUG Twitter feed. I tried the Olympus OI Share app on my phone because I figured it would be easiest to select files from my camera via Wifi and share them directly to a Twitter app. This didn't work at all: loading photos from the camera was very slow, and my phone often switched back to the conference Wifi, thereby losing connectivity with my camera. When I finally managed to load a full photo from the camera, I had problems sharing it to Twitter due to connection timeouts, probably caused by the slow university Wifi or Internet on the "visitor" network we were assigned. After trying for half an hour while all sorts of activity - people streaming in, signing up, chatting and getting ready for talks - was happening around me, I gave up and tried using a tablet which can read the camera's SD cards, together with the Android Flickr app.

    The Flickr app wasn't working either: I couldn't use my existing account (Yahoo said something about an inactive account even though it works fine from a PC), and creating a new account and logging in also failed. So while it looked like reading SD cards directly and uploading to Flickr was the way to go, I wouldn't be able to use my tablet (with its long battery life) and eventually had to resort to my old trusty Thinkpad laptop (with its 40-minute battery life). Finally, after deciding to use a proper computer, everything worked as expected: I pulled the photos from the SD card to the laptop, put them into a folder for each day and camera, and uploaded them to Flickr via the website. No apps, no camera Wifi, no sharing or APIs: just memory cards and HTTP. I still decided not to shoot RAW, and I also downsized the images to around 5 megapixels in camera so that the slow and sometimes unreliable visitor Wifi network would be able to handle all the uploads in a timely manner.

    One big problem I ran into with the silent electronic shutter of the Panasonic G5 was fluorescent lighting: while not an issue with the mechanical shutter, the slower traversal of the frame (technically the readout speed is the same, but with a mechanical shutter the exposure sweeps across the whole frame much faster) caused horizontal banding artefacts to appear in the final photo:

    I'm not sure why it wasn't a problem with some of the earlier portraits of presenters, since I had used the electronic shutter quite a lot on the first day, but after this experience I quickly switched back to the mechanical shutter. While it is a little more annoying for the audience, as an event photographer you aren't completely invisible anyway while running around the stage, and luckily neither the E-M1 nor the G5 has a loud shutter.
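    The banding appears because the lamp flicker is sampled row by row as the sensor reads out: every flicker cycle that passes during the readout leaves one bright/dark stripe. A back-of-the-envelope sketch (the 1/10 s readout time is an illustrative assumption, not a measured value for the G5):

    ```python
    # Fluorescent tubes on 50 Hz mains (as in New Zealand) flicker at
    # twice the mains frequency, i.e. 100 Hz.
    MAINS_HZ = 50
    FLICKER_HZ = 2 * MAINS_HZ

    # Assumed full-frame readout time of the electronic shutter.
    READOUT_S = 1 / 10

    # Flicker cycles swept during readout = number of visible bands.
    bands = READOUT_S * FLICKER_HZ
    print(f"about {bands:.0f} dark/bright bands across the frame")  # → about 10
    ```

    A mechanical shutter exposes the whole frame within a few milliseconds, well under one flicker period, which is why it shows no banding under the same lights.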

    A similar technical issue was camera settings, specifically white balance. Being primarily a landscape photographer, I like to use manual mode with manual ISO and manual white balance. However, I quickly learned to use a more automatic mode like aperture priority, and to set ISO and WB to auto. There are still a few photos where the white balance is way off, all taken before I switched to the more automatic settings; afterwards, most photos came out really well. The E-M1, despite being a m4/3 camera and therefore not a great low-light performer, looks great up to ISO 1600, and together with the f/1.8 prime that was enough to capture action and speakers unless they waved their hands around really furiously. In that case I just had to wait a few seconds for them to calm down and then quickly get the shots.

    Over the three days I made extensive use of the Peak Design camera clip I talked about last week, attached to my belt, to carry and work with two cameras, and during breaks to grab some food and have conversations without holding a camera in my hands the whole time. It performed flawlessly and kept even the fairly heavy (for a m4/3 camera) E-M1 + 12-40mm zoom lens secure on my hip. The built-in lock was good to have: worn on a belt, the clip opens to the side, so in theory bumping the spring-loaded unlock button while nudging the camera could drop it from hip height onto a potentially very hard floor.

    One of the bigger challenges was to capture the moment during prize-givings when a book, voucher or other gadget was handed to the lucky winner. Because the lecture theatre was fairly big and the prize-giving happened so fast, I wasn't always close enough or didn't have enough time to get good focus. Ideally, as a photographer, you would want more time and a bit of a pose, and the person bringing the prize to the winner shouldn't stand between the camera and the person receiving it. I'm not sure what would work better in the future, apart from having more than one photographer so that we can spread out in the theatre.

    Please head over to Google+ or Twitter @tobiaswulff to discuss this article and let me know how you handle event shoots like these. I ran into a few problems and challenges so any tips for the future are greatly appreciated. My Flickr and Vimeo pages also provide some space to leave comments and keep up to date with my portfolio. Lastly, if you want to get updates on future blog posts, please subscribe to my RSS feed. I plan to publish a new article every Wednesday.
