A few weeks back I went to a meetup where people, uh, meet to talk about artistic projects they’re working on and critique each other’s work. I took some prints of my AVG DAY project from last year and got some really good feedback.
Here is the question that stood out most to me: what do multiple exposures look like as images are added to the exposure? I heard that same question in some form throughout the year as I showed people the project, and I wondered this as well: how are the puzzling elements of these images formed?
One easy way to answer this question would be to build a multiple exposure for each day that had multiple photos associated with it.
Script Mechanics
The code I described last year served as a foundation for this new goal. I just needed to do a little metadata reading and bookkeeping to organize the photos from an entire month into groups of photos for each day. Once that was done it was trivial to call the averaging function on each group of images individually. Cool.
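A minimal sketch of that grouping-and-averaging step, assuming a `read_date` helper that pulls a `YYYY-MM-DD` string out of each photo’s metadata (the helper name and the averaging details are my illustration, not the original script):

```python
from collections import defaultdict

import numpy as np
from skimage import io

def group_by_day(paths, read_date):
    """Bucket image paths by the day recorded in their metadata."""
    groups = defaultdict(list)
    for path in paths:
        groups[read_date(path)].append(path)  # read_date -> 'YYYY-MM-DD'
    return groups

def average_images(paths):
    """Average a stack of same-sized images into one multiple exposure."""
    stack = np.stack([io.imread(p).astype(np.float64) for p in paths])
    return stack.mean(axis=0)

# one multiple exposure per day that has more than one photo:
# exposures = {day: average_images(ps)
#              for day, ps in group_by_day(paths, read_date).items()
#              if len(ps) > 1}
```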
Another bit of feedback I received concerned whether the color of the multiple exposure images changed as the year went on. The way I processed the images last year made it impossible to answer that question. By performing averaging/histogram equalization on each color channel individually I completely destroyed the color balance present in each individual image. Rookie mistake!
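To make the mistake concrete (my illustration, not code from last year’s script): equalizing each channel independently gives each channel its own remapping, which shifts the hue of every pixel.

```python
import numpy as np
from skimage import exposure

def equalize_per_channel(rgb):
    # the rookie mistake: each channel gets its own histogram remapping,
    # so the R:G:B ratios -- the color balance -- are destroyed
    return np.stack([exposure.equalize_hist(rgb[..., c]) for c in range(3)],
                    axis=-1)
```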
I tried several different approaches to the histogram equalization (converting to a different colorspace and performing equalization only on the luminance or value channel, using a contrast-limited adaptive histogram equalization function), but in general those methods created images with too much contrast and displeasing color.
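For reference, here is roughly what those attempts look like with scikit-image (a sketch; the HSV colorspace choice and the clip limit are my assumptions):

```python
from skimage import color, exposure

def equalize_value_channel(rgb):
    # equalize only the value channel in HSV, leaving hue and saturation alone
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = exposure.equalize_hist(hsv[..., 2])
    return color.hsv2rgb(hsv)

def clahe_value_channel(rgb, clip_limit=0.02):
    # contrast-limited adaptive histogram equalization on the value channel
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = exposure.equalize_adapthist(hsv[..., 2], clip_limit=clip_limit)
    return color.hsv2rgb(hsv)
```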
I settled on a much simpler technique: what scikit-image calls “contrast stretching.” Essentially, calculate where the vast majority of the dynamic range in the multiple exposure lies and then force it to span the dynamic range of its storage data type. I know I’ll anger some pedants by making the following analogy, but it’s kind of like applying a compressor to a vocal track.
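In scikit-image terms that’s a percentile-based rescale; a sketch (the 2nd/98th percentile cutoffs are my assumption, borrowed from the scikit-image docs):

```python
import numpy as np
from skimage import exposure

def stretch_contrast(img, lo=2, hi=98):
    # find where the bulk of the image's values live...
    p_lo, p_hi = np.percentile(img, (lo, hi))
    # ...and stretch that interval to span the output dtype's full range
    return exposure.rescale_intensity(img, in_range=(p_lo, p_hi))
```

Computing the percentiles over all three channels at once, rather than per channel, is what keeps the color balance intact this time.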
This seemed to make the multiple exposures with fewer than eight component images look the way I expected, some with stunning results. Using this technique to generate a multiple exposure for an entire month resulted in browns and other muted tones, much as you might expect from mixing a bunch of colors together. While this result is arguably “more correct,” I am undecided about whether it is an improvement over the method employed last year.
One thing I didn’t do (but really should) is convert from sRGB or AdobeRGB to linear RGB before performing any image-altering calculations, then convert back afterwards.
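For the plain sRGB case, that round trip is just the standard transfer function (a sketch; AdobeRGB would need its own gamma and primaries):

```python
import numpy as np

def srgb_to_linear(srgb):
    # standard sRGB decoding; expects floats in [0, 1]
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045, srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    # standard sRGB encoding, the inverse of the above
    lin = np.asarray(lin, dtype=np.float64)
    return np.where(lin <= 0.0031308, lin * 12.92,
                    1.055 * lin ** (1 / 2.4) - 0.055)

# average in linear light, then re-encode:
# avg = linear_to_srgb(np.mean([srgb_to_linear(im) for im in images], axis=0))
```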
Finishing Touches
After I settled on the image-processing approach I took a beat and thought about what I should do with all the images that would be generated. The natural place for them is my image portfolio, and the way to get them there is to upload them to Flickr with carefully crafted metadata.
I have a few cusswords about dealing with image metadata in Python. In order to group the images by day I needed to get at the machine tags I added to the source images, and those tags are stored in IPTC metadata. It seems there are no Python 3-ready libraries that will compile on a Mac and reliably read IPTC data. The approach I ended up taking was to install exiv2 (using Homebrew), make calls to exiv2 from the shell, and parse standard output. To write metadata I generated a text file containing exiv2 commands and then called exiv2 from the shell. Painful, but reliable for my use case.
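A sketch of that shell round trip, assuming the machine tags live in `Iptc.Application2.Keywords` and using exiv2’s `-pi` (print IPTC) and `-m` (apply a command file) options; the tag value shown is illustrative, not the project’s actual scheme:

```python
import subprocess

def read_iptc_keywords(path):
    # `exiv2 -pi` prints one IPTC dataset per line, e.g.
    #   Iptc.Application2.Keywords   String   21   avgday:date=2015-06-12
    result = subprocess.run(["exiv2", "-pi", path],
                            capture_output=True, text=True, check=True)
    keywords = []
    for line in result.stdout.splitlines():
        parts = line.split(None, 3)  # key, type, size, value
        if len(parts) == 4 and parts[0] == "Iptc.Application2.Keywords":
            keywords.append(parts[3])
    return keywords

def write_iptc_keywords(path, keywords):
    # `exiv2 -m` applies a file of modify commands to the image
    with open("commands.txt", "w") as cmd_file:
        for keyword in keywords:
            cmd_file.write(f"add Iptc.Application2.Keywords String {keyword}\n")
    subprocess.run(["exiv2", "-m", "commands.txt", path], check=True)
```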
Putting this project together was fun and the results are gratifying. A survey of the output goes a long way towards answering the general question posed above. It’s also instructive about what photographic elements combine to make compelling multiple exposures. This has already influenced the daily photos I’ve taken since.