Over the past month I have made some further improvements to make setting up the specimen and taking the stacks easier.
Finally, a simple manual lab jack arrived from China, which gives me 5mm of vertical adjustment. It is so small that it fits neatly on the stage, with a footprint of just 25mm x 25mm, and it weighs next to nothing.
On top of this I have mounted a thin steel plate. All my plastic specimen mounting cards now have a very thin button magnet glued to them, which holds them tightly to the top of the jack while allowing rotation in the horizontal plane and x-y movement.
At the camera end of the business there is now a second rail, mounted laterally, which allows fine adjustment in this direction to frame the image. This came from one of the many suppliers of such things on eBay. It is not a precision item like the lab jack, but it will do the job required at very reasonable cost.
There is an additional light fitted at the front which illuminates the stage during setting up, providing enough light for the camera to see the specimen and enabling fine tuning of the camera and specimen positions. The main lights are now fixed to the equipment chassis rather than the underframe, which makes adjusting the position of the equipment chassis relative to the camera much easier.
More software changes have been incorporated to speed up and automate certain functions. For example, there are now three options that can be pre-selected to decide what happens at the end of the shot sequence: stay put, go back to park, or return to the first image. And there is now no need to go back to the park position to commence a shoot. I had put that into the software to force a reset of the counter and so allow for errors in positioning; however, I have found the setup to be repeatable in this respect, so that precaution was unnecessary and just time consuming.
All that remains is to make the control of the lights easier from the working position behind the camera, which entails a little more construction work and some wiring alterations. Then we will be done (perhaps!).
By now you will have guessed that I get as much fun out of the construction of the rig as I do from taking pictures!
Repeatability – (of an experiment, etc) producing or capable of producing the same result again
Predictability – consistent repetition of a state, course of action, behaviour, or the like, making it possible to know in advance what to expect
………………………………..wouldn’t that be nice?
Last Tuesday at Tetbury Camera Club we had an inspiring talk by Jay Myrdal, who explained how he made stunning commercial images back in the day before Photoshop, when it all had to be done in camera – quite often a plate camera. The techniques he explained were fascinating and the results were truly stunning. Some of his images involved action shots, such as an exploding light bulb – this would be just one part of the image, but he would have to shoot it over and over again to get all the timings of the effects just right. He built a rig to automate and adjust the various factors involved so he could tweak the important parameters, knowing exactly what he had done beforehand, and so converge on an optimal setup. He also scrupulously calculated all the angles involved in multi-layered superimposed images to get the perspective looking right.
This got me thinking that I needed to apply his philosophy to my macro photography, which up to now I have been carrying out on a rather hit-and-miss basis in terms of the choice of lens configuration and the adjustment of stacking parameters. So I decided to apply some of Jay's thinking to the matter.
My rig is pretty much automated already, but the key issue here is recording all the parameters that I have used for each stack in enough detail to enable evaluative decisions to be made after processing (which may be some time later) so that the parameters for the next stack can be improved or consolidated. I already have lots of bits of paper lying around with scribblings of what I have done but no real way of tying these back to the images I have processed. And not everything is written down anyway. It needed organisation and preferably automation.
I was already in the process of updating my Raspberry Pi Python software for my focus stacker to add refinements based on the last few months' shooting. As a result, my mind was in tune (as much as it ever will be!) with Python 2.7 and Tkinter, and then two and two came together and for once made four! I decided to automate the logging of my focus stacks using my RPi.
I won’t bore you with the details of how I did it – programming and more programming is all you need to know! But this is what it now does.
There are some user interface windows to enter information that the RPi doesn't know, such as what camera is being used, the lens configuration, the subject matter, and the camera settings. You can also enter a note for each stack. The software gathers up all the info it already knows about the shoot, such as the near and far focus points, the focus increment and so on; it date and time stamps the log, gives it the next shoot number, and then, when (and only when) the shoot sequence button is pressed, the whole lot is appended to a .csv log file.
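The heart of the logging can be sketched in a few lines of Python. The field names and the split between user-entered and rig-known values below are illustrative placeholders, not my exact code:

```python
import csv
import datetime
import os

def append_stack_log(path, shoot_number, user_info, rig_info):
    # Merge the user-entered details (camera, lens configuration, subject,
    # settings, note) with what the rig software already knows (near/far
    # focus points, focus increment, ...), stamp the record with date/time
    # and the shoot number, then append it as one row of the .csv log.
    row = {"shoot": shoot_number,
           "timestamp": datetime.datetime.now().isoformat()}
    row.update(user_info)
    row.update(rig_info)
    new_file = not os.path.exists(path)
    with open(path, "a") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(row))
        if new_file:
            writer.writeheader()  # column headings the first time round
        writer.writerow(row)
```

Calling this only from the shoot-sequence button's handler gives the "then, and only then" behaviour.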
The .csv file can be read by Excel, and as long as the camera and the RPi are telling the same time I can figure out which log goes with which image stack.
One hiccup in proceedings is that the RPi is not on the internet, and so it cannot find out automatically what time it is. That's because it doesn't have its own real-time clock and battery (apparently to keep the cost down) and relies on the internet to reset its clock each time it is turned on. With no internet, the clock just restarts from where it left off the last time it was on! Not terribly useful. At the moment I have to enter the time manually into the RPi when I turn it on – it is very easy [sudo date -s "Sat, October 18, 2014 17:21:00"] for example. But I might forget.
What would be better is to get the laptop to tell the RPi what the time is. That is all within the realms of possibility. I would have to set up the laptop as an NTP server, which Windows 7 does support. It seems to require some mining in the registry, something I am not too keen on doing. There is plenty of advice on the internet, but I will have to nerve myself up before trying this. Then you have to rummage in the RPi's software settings too. I may have to get my (no-longer) resident expert to help me with this some time.
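As an alternative to configuring a full NTP client, the RPi could ask any reachable time server directly; the SNTP request is simple enough to do by hand in Python. This is only a sketch of the standard 48-byte packet exchange, not something tested against my setup:

```python
import socket
import struct

NTP_TO_UNIX = 2208988800  # seconds between the NTP epoch (1900) and Unix (1970)

def parse_sntp_reply(data):
    # The server's transmit timestamp (seconds field) sits at bytes 40-43
    # of the reply, big-endian, counted from 1900; convert to Unix time.
    ntp_seconds = struct.unpack('!I', data[40:44])[0]
    return ntp_seconds - NTP_TO_UNIX

def query_sntp(server, port=123, timeout=5.0):
    # Mode 3 (client) request: first byte 0x1b = leap indicator 0,
    # version 3, mode 3, followed by 47 zero bytes.
    request = b'\x1b' + 47 * b'\0'
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, (server, port))
        data, _ = sock.recvfrom(48)
    finally:
        sock.close()
    return parse_sntp_reply(data)
```

The returned Unix time could then be handed to the date command, replacing the manual sudo date step.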
Finally, I have also made some measurements to establish and check the exact magnifications of my various lens configurations, and have produced a chart to enable me to select the best lens configuration for a given subject size. It requires measuring the subject height to get it right first time – something I had not been bothering to do. The chart is shown below.
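Since the chart boils down to "magnification against subject height", the selection can also be expressed in code. The configuration names and magnification figures here are made-up placeholders, and a 15.6mm APS-C sensor height (as on the NEX7) is assumed:

```python
SENSOR_HEIGHT_MM = 15.6  # APS-C sensor height, e.g. Sony NEX7

# Hypothetical lens configurations with measured magnifications
CONFIGS = {
    "50mm reversed": 1.0,
    "50mm + 36mm tube": 1.7,
    "reversed 28mm": 2.5,
}

def best_config(subject_height_mm, configs=CONFIGS):
    # Magnification needed for the subject to just fill the frame height.
    needed = SENSOR_HEIGHT_MM / subject_height_mm
    # Pick the largest magnification that does not exceed what is needed,
    # so the whole subject fits with the least wasted frame.
    fitting = dict((k, m) for k, m in configs.items() if m <= needed)
    if not fitting:
        # Subject is bigger than any configuration's field of view;
        # fall back to the lowest magnification available.
        return min(configs, key=configs.get)
    return max(fitting, key=fitting.get)
```

Measuring the subject height first then makes the choice mechanical rather than hit and miss.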
This image of the Milky Way was taken a few weeks ago in Pembrokeshire from the car park at Whitesands Beach at about 10pm. On the far right you can see the South Bishop Light, which stands on its own small island west of Ramsey Island, about 9 km away to the south-west. You can see it in my Astro Gallery.
The final image is made from 14 identical 30-second exposures at ISO 1600, taken in sequence using a Sony NEX7 camera with a Samyang 12mm lens at f/2.0. I had in-camera noise reduction turned on. The lens was manually focussed, and I had the camera set to manual exposure to keep everything the same between successive images. Once the images were imported into Lightroom, I adjusted the colour balance of one of them and then synced that across to the other 13. The 14 images were then exported to Photoshop CC14 as layers in one image. Below is the method I used:
You need to create two separate sets of 14 layers – one for the foreground and background, in fact "the ground", and one for the sky and stars. So the first step is to duplicate all 14 layers and convert the 14 copies into a smart object, which you can name "ground".
In each successive image the stars will have shifted, and for what we want to do with the "sky" set of 14 we need them all aligned. We can get Photoshop to do this automatically, but we need it to ignore the ground – we do that by roughly masking out the ground in each image so that Photoshop just has stars to work with.
So, hide all the layers except one sky layer to work on; create a layer mask for the visible layer and then thoroughly mask out the ground and any fixed lights, etc. It doesn't matter if you mask out some of the sky, but make sure you mask out all of the ground. You can then copy that mask to each of the other sky layers.
Unhide the sky layers, select all 14 and Auto-Align the layers; then delete all 14 layer masks. With the 14 sky layers still selected, convert them to a smart object, which you can name "sky".
What you now have is two smart objects: a "sky" with the stars lined up, and a "ground" which was already lined up – unless you accidentally kicked your tripod!
Here is the clever bit – it does require the extended version of PS, or CC14. We are going to use Layer > Smart Objects > Stack Mode > Median to perform some clever processing which will substantially reduce the noise, particularly colour noise. Where a pixel at a particular position has the same value in each of the 14 images, that same value is used in the final result – e.g. a star. Where a pixel's value changes randomly from image to image, the median value is used instead – e.g. noise and plain sky. As the noise is random, and provided there are enough images to work on as a sample, the result of this process is to remove a large amount of the noise.
So we apply Layer > Smart Objects > Stack Mode > Median to each of our two smart objects.
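For the curious, the effect of Stack Mode > Median can be sketched in NumPy: per pixel, values that are stable across the 14 frames pass straight through, while values that jump about are replaced by their median.

```python
import numpy as np

def median_stack(frames):
    # frames: a list of aligned exposures as equal-shaped arrays.
    # Stacking on a new axis and taking the median along it gives the
    # per-pixel median across all frames, analogous to what Photoshop's
    # Stack Mode > Median does on a smart object.
    return np.median(np.stack(frames, axis=0), axis=0)
```

With 14 samples per pixel, a single noisy outlier in any one frame has no effect on the median at all.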
Finally, select the ground, add a layer mask and paint out the blurry sky to reveal the noise-free sky layer – here, more care is needed in the masking to get the boundary between ground and sky just the way you want it.
After that it is “just” a matter of processing to bring out the details in the Milky Way, etc. I do this back in Lightroom.
Each of the 14 files from the NEX7 is a 25MB DNG file – that's a lot of pixels for PS to handle when it has all 14 loaded up as layers in one file. So the processing is not instantaneous, particularly when converting to smart objects and running the median filter. I noticed that the processor was running at about 60% but that 97% of my 8GB of RAM was in use, so perhaps it would be quicker with more RAM. There are also issues with saving TIFF files over 2GB. In any event, I flattened the layers after the PS processing was complete: there was no need to save the layers, as it could all be done again if necessary, and my final processing was in LR, which is non-destructive anyway.
This is the first time I have used this method and I am very pleased with the noise reduction. What would I do differently next time? I would use a higher ISO – 3200 – and a shorter exposure time to cut the star trails.
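How much shorter? The rule-of-thumb "500 rule" (500 divided by the full-frame-equivalent focal length) gives a rough ceiling. For the 12mm Samyang on the APS-C NEX7 that works out at about 28 seconds, so the 30-second exposures were just over the line:

```python
def max_exposure_500(focal_length_mm, crop_factor=1.5):
    # "500 rule": approximate longest exposure (seconds) before star
    # trails become visible, for a lens of the given focal length on a
    # sensor with the given crop factor (1.5 for the NEX7's APS-C sensor).
    return 500.0 / (focal_length_mm * crop_factor)

# max_exposure_500(12) is roughly 27.8 seconds
```

This is only a rule of thumb – pixel pitch and how closely you inspect the result both matter – but it supports shaving the exposure down and trading up to ISO 3200.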