In order to make simulated multiple exposures I wrote some Python code to average several images. Here’s a little more about how I made the images.
I thought about how to build the program so that it most accurately simulated a multiple exposure made in a film camera. When making a double exposure, a photographer adjusts the available camera settings so that the two exposures together properly expose the film. Say that for a given camera and scene the proper exposure time would be one fiftieth of a second. One way to make a double exposure would be to take one image with an exposure time of one hundredth of a second, rewind the film one frame, and take another image with the same exposure time.
That’s one way to do it. But a photographer needn’t split the exposure time among images equally. Besides that, part of the fun of making multiple exposures is that one doesn’t necessarily know how it’s going to turn out. It could even be that the photographer has so much experience making multiple exposures that they know exactly how to adjust the camera to create a specific image.
With all that in mind I decided that any weighting applied to individual images is purely an artistic choice—this gave me some freedom about how to implement multiple exposures in code.
To specify which images should be combined into a multiple exposure, I put the desired group of images in a folder on my computer's filesystem and had the script get a list of all the images in it. The images aren't guaranteed to have the same dimensions, and I didn't want to hard-code output dimensions, so the script loops through all the files in the folder and records each file's pixel dimensions and exposure time. I made a function that, based on an input argument, finds either the smallest or largest dimension in the group of photos. Before processing, each photo is either cropped to a square with the smallest dimension or padded to a square with the largest dimension, which ensures there are no dimension-mismatch errors when it comes time to add everything together.
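That preprocessing step might be sketched like this (the function names are my own, not from the actual script; the crop is centered and the padding is black, which is one reasonable choice):

```python
import os
from PIL import Image

def to_square(img, side, mode="crop"):
    """Center-crop to a side x side square, or pad onto a black side x side canvas."""
    w, h = img.size
    if mode == "crop":
        left, top = (w - side) // 2, (h - side) // 2
        return img.crop((left, top, left + side, top + side))
    canvas = Image.new("RGB", (side, side))                  # black padding
    canvas.paste(img, ((side - w) // 2, (side - h) // 2))
    return canvas

def target_side(folder, use="smallest"):
    """Scan a folder of images and return the smallest (or largest) dimension in pixels."""
    dims = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            with Image.open(os.path.join(folder, name)) as im:
                dims.extend(im.size)
    return min(dims) if use == "smallest" else max(dims)
```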
I used the Python Imaging Library (PIL) to crop, pad, and do math on the images. When PIL loads an image, each pixel has red, green, and blue components that are stored as integers. I learned that the ImageMath module only operates on single-channel data, so the image's split() method is needed to store each color channel separately. It's also necessary to convert each channel's pixel data to floats before multiplying by a fraction.
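A toy example of that split-then-float step (the actual script does the arithmetic with ImageMath; per-channel convert("F") plus point, shown here, performs the same conversion and scaling):

```python
from PIL import Image

img = Image.new("RGB", (4, 4), (100, 150, 200))  # a solid test image
r, g, b = img.split()                  # three single-channel "L" images
r_f = r.convert("F")                   # integer pixel data -> 32-bit floats
half = r_f.point(lambda p: p * 0.5)    # now multiplying by a fraction is safe
```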
In a loop, the script loads each image one-at-a-time, splits the current image into its three color channels, multiplies each channel by some fraction and then adds the result to three variables containing the red, green and blue channel for the new composite image. Finally, all three color channels are converted back to the integer domain, the channels are merged into one RGB image, and the image is written to disk.
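The loop above can be sketched as follows. This is a minimal version, not the actual script: it takes already-opened, same-size RGB images instead of file paths, and it accumulates each channel in plain Python float lists rather than using ImageMath, which keeps the sketch dependency-light:

```python
from PIL import Image

def average_images(images):
    """Naive average of same-size RGB images, accumulated per channel in floats."""
    n = len(images)
    size = images[0].size
    # one float accumulator per color channel (red, green, blue)
    acc = [[0.0] * (size[0] * size[1]) for _ in range(3)]
    for img in images:
        for band, ch in zip(acc, img.convert("RGB").split()):
            for i, p in enumerate(ch.getdata()):
                band[i] += p / n                   # scale each image by 1/n
    out = []
    for band in acc:
        ch = Image.new("L", size)
        ch.putdata([round(v) for v in band])      # back to the integer domain
        out.append(ch)
    return Image.merge("RGB", out)                # merge into one RGB image
```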
I tried a couple of different ways to combine the images, including scaling each image by its actual exposure time and taking a naive average. I also thought about compensating for ISO, but like, nah. I settled on the naive average this time around.
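The exposure-time weighting amounts to normalizing each frame's exposure time so the weights sum to one (a sketch; `exposure_weights` is a hypothetical helper, and in practice the times would come from each file's metadata):

```python
def exposure_weights(times):
    """Give each image a weight equal to its share of the total exposure time."""
    total = sum(times)
    return [t / total for t in times]
```

With the naive average I settled on, every weight is simply 1/n instead.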
When processed as described above, the resulting image is often very dark. The result is stark and stunning, but lots of interesting details are muddied by the lack of dynamic range. I tried a few different things to bring out details and balance the image. When I generated the images for my avg day gallery, I applied a simple histogram equalization to each channel just before the three color channels were combined into one image. That means the equalization was performed in the integer domain, which seemed to really exaggerate the nuance in the composite images. And, by definition, the histogram equalization algorithm spans the available dynamic range. The effect is interesting, but certainly not subtle.
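The per-channel equalization step can be sketched with PIL's ImageOps.equalize, applied to each 8-bit channel right before the merge (the helper name is mine):

```python
from PIL import Image, ImageOps

def equalize_and_merge(r, g, b):
    """Equalize each integer ("L" mode) channel, then merge into one RGB image."""
    return Image.merge("RGB", [ImageOps.equalize(ch) for ch in (r, g, b)])
```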
Having finished the image processing code, I needed the photos published to my 2016 dailies Flickr set sorted into twelve folders, one per month. I wrote another script to query the Flickr API, download all the photos in the set, and sort them into folders by the month they were taken. Then the downloading script called the image combination script once for each of the twelve folders it had just created.
I had a ton of fun working on this project. If you’d like to take a look at the code it’s posted on GitHub. I’m looking forward to working on different ways to bring out all the interesting details—and replace that simple histogram equalization—a little further down the road.