Lower Orion Constellation

Just when you think you have a good “recipe” to process astronomy images taken with your gear, things don’t quite work out and you end up spending three evenings trying different settings, techniques and steps because you know there’s a better image waiting to be teased out.

M42 and Lower Orion Constellation – Benoit Guertin

The image above (click for the full frame) is as much as I can stretch out of the lower half of the Orion constellation and its nebula, using 20-second ISO 800 exposures with an 85mm f/5.6 Canon lens from my light-polluted backyard.

Below is a sky chart of the same area showing the famous Orion Nebula (blue and red box) and Orion's belt with its three bright stars Alnitak, Alnilam and Mintaka.  What is unfortunate is that there are lots of interesting deep-sky nebula structures that glow in the hydrogen-alpha spectral line, at the red end of the spectrum near the infrared, but stock photographic cameras have IR-cut filters over the sensor that block much of that light.  That is why many people modify their cameras to remove the filter, or get dedicated astro-imaging cameras.

Sky Chart – Lower Orion with nebula and open star clusters

Now, back to the main topic of processing this wide-field image.  I had various issues getting the background sky uniform; other times the color just disappeared and I was left with an essentially grey nebula, the distinctive red and greenish hues from the hydrogen and oxygen emission gone.  There was also the constant hassle of removing noise from the image, as I was stretching it a fair bit.  And I had to be careful because I was using different software tools, and they don't all read/write image files the same way; some formats would cause bad re-sampling or clipping, killing the dynamic range.

Below is a single 20-second exposure at ISO 800.  The Orion Nebula (M42) is just barely visible over the light pollution.

Original image – heavy light pollution in a 20-second exposure

The sky fog (light pollution) already reaches halfway up the light levels.  Yes, there are also utility lines in the frame.  As these would slightly “move” with every shot as the equatorial mount tracked, I figured I could make them numerically disappear.  More on that later…

Light levels of a 20 second exposure due to “sky fog”

The longer you expose, the more light enters the camera and the fainter the details that can be captured.  However, when the background level already peaks midway up the histogram, longer exposures won't reveal fainter details; they will simply give you a brighter light-polluted background.  So I had to go with quantity of exposures, ideally reaching at least 30 minutes of total exposure time, and therefore programmed 100 exposures.
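To get a feel for why quantity helps, here is a minimal Python sketch of plain frame averaging (the file names are placeholders, and DSS handles the real registration and stacking): the signal adds up coherently while the random noise only grows as the square root of the number of frames.

```python
import numpy as np
import tifffile  # handles 16-bit TIFFs; an assumption, any image reader would do

# Hypothetical list of already-registered 20 s frames (names are placeholders).
frame_files = [f"light_{i:03d}.tif" for i in range(1, 101)]

# Sum the frames in floating point, then average.  For 100 frames the
# signal-to-noise ratio improves by roughly a factor of 10 (sqrt(100)).
stack = np.zeros_like(tifffile.imread(frame_files[0]), dtype=np.float64)
for name in frame_files:
    stack += tifffile.imread(name).astype(np.float64)

mean_frame = stack / len(frame_files)
tifffile.imwrite("stack_mean.tif", np.clip(mean_frame, 0, 65535).astype(np.uint16))
```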

Once the 100 exposures were completed, I finished with dark, flat and offset frames to help with the processing.  So what were the final steps to reach the final result above?   As mentioned, I used three different software tools, each for a specific set of tasks: DSS for registration and stacking, IRIS for color calibration and gradient removal, and finally GIMP for levels and noise removal.

  1. Load the light, dark, flats and offset images in Deep Sky Stacker (DSS).
  2. Perform registration and stacking.  To get rid of the utility lines as well as any satellite or airplane tracks, the Median Kappa-Sigma stacking method yields the best results (a rough sketch of the idea follows this list).  Essentially, anything that falls outside the norm gets replaced with the norm, so aircraft navigation lights that show up in only one frame out of 100 get replaced with the average of all the other frames.  That also means the utility lines, which moved in every frame as the mount tracked, vanish in the final result.
  3. Since my plan was to use IRIS to calibrate colors, where I can select a specific star for the calibration, I set DSS to perform no background or RGB color calibration.
  4. The resulting file from DSS is saved in 16-bit TIFF format (by default DSS saves in 32-bit, but that can't be opened by IRIS).  I didn't play with the levels or curves in DSS; that is dealt with later, a bit in IRIS, but mostly in GIMP.
  5. I use IRIS to calibrate the background sky to black by selecting the darkest part of the image and using the “black” command.  This offsets each RGB channel to read ZERO for the portion of sky I selected.  The reason is that the next steps work best when black is truly ZERO.  While IRIS works in 16-bit, it actually uses a signed range of -32,768 to +32,767 for each RGB channel; if your “black” has an intensity of -3404, the color calibration and scaling won't be good.
  6. The next step requires finding a yellow, Sun-like star to perform color calibration.  Since a white piece of paper under direct sunlight is “white”, a star with a similar spectral color is best.  Sky chart software can help you with that (Carte du Ciel or C2A is what I use).  Once the star is located and selected, the “white” command will scale the RGB channels accordingly.
  7. The final step is to remove the remaining sky gradient so that the background becomes uniform.  Below is the image before using the sky gradient removal tool in IRIS.
    Image before removal of sky gradient in IRIS

  8. Once the sky gradient is removed, the work in IRIS is complete; save the file in BMP format (will be 16-bit) for the next software: GIMP.
  9. The first step in GIMP is to adjust the light curves and levels.  This is done before any of the filters or layer techniques are applied.
  10. Then I played with saturation and Gaussian blur for noise reduction.  As you don't always want the transformations to apply to the entire image, using layers is a must.
  11. For the final image above, I created two duplicate layers, where I could play with color saturation, blurring (to remove the background noise) and levels until I got the desired end result.  Masks are very helpful for selecting which portion of the image should be transparent to the other layers.  For example, I wanted a strong blur to blend away the digital processing noise, but didn't want a blurry final night sky.
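Here is the sketch referenced in step 2.  It is not DSS's exact Median Kappa-Sigma implementation, just the general idea of sigma clipping on a stack of registered frames held in a numpy array: pixels that deviate too far from the per-pixel statistics (an aircraft light, a utility line) are rejected before combining.

```python
import numpy as np

def kappa_sigma_stack(frames: np.ndarray, kappa: float = 2.5, iterations: int = 3) -> np.ndarray:
    """Combine registered frames of shape (N, H, W) while rejecting outliers.

    Pixels further than kappa * sigma from the per-pixel mean are masked out,
    so a trail present in only a few frames never reaches the final image.
    (DSS's Median Kappa-Sigma replaces outliers with the median; this
    simplified variant just averages the surviving values.)
    """
    data = np.ma.masked_invalid(frames.astype(np.float64))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        sigma = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * sigma, data)
    return data.mean(axis=0).filled(fill_value=0.0)

# Usage sketch: "frames" loaded elsewhere as a (100, H, W) array.
# result = kappa_sigma_stack(frames, kappa=2.0)
```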

Processing RAW Cassini Spacecraft Images

Did you know that you can access the latest RAW images from the Cassini spacecraft directly from the NASA and JPL website?  You get a first look at some stunning images of Saturn, its rings and its moons, like this one below from January 16th.  Click the image below for more information from NASA/JPL on that specific photo.

Daphnis making waves – Cassini spacecraft Jan. 16, 2017 – JPL/NASA

But you can also download raw images to try your luck at processing.  For this exercise I selected this series of pictures of the strangely perfect hexagon-shaped storm at Saturn's north pole.

Downloaded raw image set

These images were taken with different filters by the wide-angle camera, and I noted in an Excel file some information on each image, most importantly which filter was used.  Both the narrow- and wide-angle CCD cameras on Cassini operate with two filter wheels, hence each image will always list two filters.  For those surprised by the rather “small” 1-megapixel camera, keep in mind the spacecraft was launched nearly 20 years ago, and development started in the 1980s.

There is a very detailed document on how to use, calibrate and process the images, listed in the references below.  But for what I wanted (quick processing) I only needed to find which filters were closest to an RGB setup.

Cassini ISS Broadband Filters

Luckily this is well documented, and I found them: the BL1, GRN and RED filters.

The image below is a quick combination of those three images assigned to the red, green and blue channels.  The resulting image should be somewhat near the real colours, but I did not take any time to calibrate, so they are probably a little off…

Saturn with normal RGB assignment (close to real colours)
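For anyone who wants to reproduce that quick combination, here is a rough Python sketch.  The file names are placeholders, it assumes the three filtered frames have been converted to TIFF, and, as noted above, no calibration is applied.

```python
import numpy as np
import tifffile

# Placeholder names for the three Cassini frames (RED, GRN and BL1 filters).
red = tifffile.imread("cassini_red.tif").astype(np.float64)
green = tifffile.imread("cassini_grn.tif").astype(np.float64)
blue = tifffile.imread("cassini_bl1.tif").astype(np.float64)

def normalize(channel):
    """Stretch a single channel to the 0-1 range (no photometric calibration)."""
    return (channel - channel.min()) / (channel.max() - channel.min())

# Stack the three filtered frames into an RGB cube and save a 16-bit result.
rgb = np.dstack([normalize(red), normalize(green), normalize(blue)])
tifffile.imwrite("saturn_rgb.tif", (rgb * 65535).astype(np.uint16))
```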

I also decided to try something that would provide a little more contrast and dive a little deeper into the atmosphere, and went with an IR-Red-Blue assignment for the RGB channels by using one of the narrow-band filters.

Cassini ISS Narrow Band Filters

Saturn with IR, Red and Blue for RGB assignment

Neither image above has been calibrated, stretched or adjusted, other than combining the raw images from Cassini.

The NASA/JPL site even has a section for amateurs to submit their photos, and hosts a gallery where you can see what others have done.

References:
Cassini NASA/JPL site
Cassini Imaging Science Subsystem (ISS) Data User Guide

DeepSkyStacker – Faster and Better Results (updated)

I tried DeepSkyStacker and I think I've found a better and faster way of processing my images.

I had been using IRIS for the better part of the last six years, and I remember how impressed I was with the results compared to early versions of Registax for deep-sky images.  While IRIS is quite manual and command-line based, it nevertheless got the job done and allowed me to experiment with different methods.  But I decided it was time to move on to something a little more modern.  I looked at what others were using, and came across DeepSkyStacker.

DeepSkyStacker

While IRIS offers a complete package, from image acquisition to pre/post-processing and analysis tools, DeepSkyStacker only performs registration and stacking.  But it does so in a faster and more efficient way: DeepSkyStacker can fully utilise RAM and multi-core processing, hence what took 30 minutes in IRIS is now down to 5 minutes in DeepSkyStacker.

It also automates many steps, and you can even save the process and create batches.  So it comes down to loading all your files, then one click to register and stack.

DeepSkyStacker – Processing Files

I tried it with some wide-field views I had taken back in September, and the resulting image appeared to be better.  I still have to use IRIS, as I like how it removes the sky background gradient and adjusts the colors.  And GIMP is still required for the final adjustments.  So here are the main steps that gave me good results:

  1. Load the light, dark, offsets and flat frames (I had no flats or bias/offsets in my trial run, but that didn’t appear to cause an issue)
  2. Ensure that all pictures are checked and select to Register the checked pictures
  3. For the stacking, I found that selecting RGB Channels Background Calibration provided good color, and used the Kappa-Sigma clipping to remove noise.
  4. After stacking, DSS creates an Autosave.tif (32-bit TIFF file).  I need to convert this into another format without losing the dynamic range.  My current solution is to use Microsoft Photo Gallery to open it and save another copy as JPEG.  Finally, I did a quick stretch of the RGB levels to keep good dynamic range when saving to 16-bit TIFF; 16-bit TIFF appears to be the only format that opens correctly in IRIS (a rough conversion sketch follows this list).
  5. Once the image is loaded in IRIS, remove the background sky gradient, then save it in BMP format for import into GIMP.  Yes, I know, yet another file format, but so far it's what I find works best; GIMP converts FITS and TIFF to 8-bit, causing incorrect color depth.
  6. Final adjustments with levels, light curves, saturation, noise filtering, etc. are done in GIMP.
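As a scripted alternative to the Photo Gallery detour mentioned in step 4, here is a rough sketch of the conversion: rescale the 32-bit Autosave.tif onto the full 16-bit range before handing it to IRIS.  The tifffile module is an assumption; only the file name matches the DSS default.

```python
import numpy as np
import tifffile

# Load the 32-bit stack produced by DeepSkyStacker (Autosave.tif is the default name).
data = tifffile.imread("Autosave.tif").astype(np.float64)

# Stretch onto the full 16-bit range so none of the dynamic range is wasted,
# then save as 16-bit TIFF, which IRIS can open.
lo, hi = data.min(), data.max()
scaled = (data - lo) / (hi - lo) * 65535.0
tifffile.imwrite("Autosave_16bit.tif", scaled.astype(np.uint16))
```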

Now for a little more playing around, and trying it on some of my older pictures.

UPDATE:
DeepSkyStacker saves files in 32-bit TIFF by default.  After stacking many images the dynamic range is quite large, and this is not data we want to lose.  But the problem was finding a program able to correctly handle the 32-bit file format.  The next release of GIMP (version 2.10) will handle 32-bit files, but GIMP 2.8 was limited to 16-bit, and even there it would convert the image to 8-bit for manipulation (GIMP 2.9.2 and up might work, but needs to be compiled on your computer – it's a development package).  Not good…  Before downloading yet another photo imaging program I tried some of my current ones and found that Microsoft Photo Gallery for Windows 10 does a great job of handling 32-bit TIFF files.  Once the image is open, under File – Make a Copy I save a version as JPEG.  Yes, I know, not ideal, but I avoid a lot of the quantization error and I'm able to continue my processing in IRIS and GIMP.

 

Layers and Blurring

We spend lots of money on expensive optics and hours trying to get the focus spot-on or the mount alignment/guiding perfect for smooth tracking to avoid blurry and stretched stars.  So why would you want to blur your final image?

Consider the images below.  The one on the left is softer and more pleasing to the eye, yet the stars remain sharp.

Side-by-side compare of blurred and the original image

One way to obtain this effect is by creating copies of the image, applying a different amount of blur to each, and then combining them from heaviest to lightest blur using the Lighten only layer mode.

Take your original image and duplicate it as required (in my example I blurred two layers, hence a total of three identical layers is needed).

Original image (centered on Constellation Vulpecula)

Apply a heavy blur to the bottom layer.  At the same time, reduce the color saturation and adjust the levels to get nice dark blacks, such that the general shape of the cloud-like structures appears through the bright and dark zones.  In this example, the blur was applied at a level of 80 pixels.

Heavy blur to the bottom layer, and reduced color saturation

Repeat the same for the middle layer, but with less blur (a level of 20 pixels).  If you want the colors of the stars to pop out, increase the color saturation; it will create an effect of nebulosity around bright stars.  Once again, adjust the levels as required.

Medium blur to create nebulosity effect

Finally, for the top layer don't apply any blur; adjust the curves to suppress the faint portions of the image, as you only want to keep the nice bright stars.  The dim structures are kept in the lower two blurred layers.

Adjust the % blending between the layers to get the desired effect.  The pixel intensity from the bottom (most blurred) layer to the top is kept only if it is brighter than the layer below.  The sharp and bright stars come from the top layer, while the overall dim structures come from the blurred lower layers.

Final result after blending the 3 layers
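The same idea can be sketched outside GIMP.  Below is a minimal Python version using Pillow and numpy, where the per-pixel maximum plays the role of the Lighten only mode; the file name is a placeholder, the blur radii mirror the 80- and 20-pixel values above, and the curves/levels adjustments are left out for brevity.

```python
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

original = Image.open("vulpecula.tif").convert("RGB")  # placeholder file name

# Bottom layer: heavy blur and reduced saturation (the broad cloud-like glow).
bottom = ImageEnhance.Color(original.filter(ImageFilter.GaussianBlur(80))).enhance(0.5)
# Middle layer: medium blur and boosted saturation (nebulosity around bright stars).
middle = ImageEnhance.Color(original.filter(ImageFilter.GaussianBlur(20))).enhance(1.5)

# "Lighten only": for every pixel keep the brightest value across the layers.
layers = [np.asarray(im, dtype=np.uint8) for im in (bottom, middle, original)]
result = np.maximum.reduce(layers)
Image.fromarray(result).save("vulpecula_blended.png")
```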

Turn the various layers on and off to see the contribution of each.  It's a lot of trial and error, depending on what you want to accentuate versus what you want to fade into the background.  Play with the amount of blur, the curves and the % layer blending until you get the effect you desire.

For more information on the original image, see my post on Vulpecula.

Composition with Landscape

I've mentioned before that you don't need a fancy telescope and tracking equatorial mount to get into astrophotography.  A camera on a tripod with a short focal length lens can do wonders, especially with the high ISO settings of new cameras.  A single 10-second exposure can reveal lots of stars; however, a longer exposure to capture more photons is not the answer, as the stars will become streaks.  But one can easily improve the image and get a better signal-to-noise ratio by stacking multiple images.

However, there is one drawback to stacking multiple exposures if you decide to also capture the landscape: Earth rotates, so the sky moves while the landscape stays still.  If you align the images using the stars, the landscape becomes a blur; not the end result we want.  Luckily, a quick composition with two layers and a mask solves everything.

Below is a single 10-second exposure at ISO 800 with a 17mm f/4 lens; you have the landscape with city lights and the stars above.  Yes, that is Orion…

Single 10sec exposure (ISO 800)

In order to improve my signal, I used IRIS to align and stack 5 frames.  This revealed many more stars, but also amplified the light pollution.

Aligning and stacking 5 images. More stars appear.

Luckily, IRIS has a function to remove the sky gradient.  The algorithm takes a series of sample points and attempts to make the sky uniform.  Not bad; the images are not a hopeless case.

Removing the sky gradient with IRIS
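IRIS's own algorithm isn't spelled out here, but the general idea can be sketched in Python: fit a smooth surface through background sample points (a second-order polynomial in this assumed version) and subtract it from the image.

```python
import numpy as np

def remove_gradient(image: np.ndarray, samples: list) -> np.ndarray:
    """Flatten the sky of a single-channel image.

    'samples' is a list of (y, x) background points.  A 2D quadratic surface
    is fitted through their values and subtracted, keeping the mean sky level.
    """
    ys = np.array([p[0] for p in samples], dtype=np.float64)
    xs = np.array([p[1] for p in samples], dtype=np.float64)
    values = image[ys.astype(int), xs.astype(int)].astype(np.float64)

    # Design matrix for a 2D quadratic: 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)

    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(np.float64)
    background = (coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
                  + coeffs[3] * xx**2 + coeffs[4] * xx * yy + coeffs[5] * yy**2)
    return image.astype(np.float64) - background + background.mean()
```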

As mentioned above, the alignment was performed on the stars, hence the landscape is now blurred.  Below is a close-up.

But when aligning on stars, the landscape blurs.

That is with just 5 images; stack a much larger quantity, or with more time between frames, and it will only get worse.  It becomes pointless to include the landscape if the end result is blurry.  Luckily, working with layers in a photo editor easily solves the issue: we want to keep the stars from the stacked image, but the landscape from a single frame.  Follow these easy steps:

  1. Load one of your single frames into your base layer.  This is what will be used for the landscape.
  2. Load your stacked image into a new layer.  As the stacked image contains more and brighter stars, select Lighten Only instead of normally adding both layers.  You can play with the brightness of the stacked layer, and/or darken the base layer, to get the desired blending.
  3. Create a mask on the stacked layer so that the blurred landscape is not allowed to show through.  See the image below: I simply grabbed the airbrush and blackened the landscape area in the mask so that it will not show through the layer.  Note that I only edited the mask, not the image itself.  (A scripted sketch of this masked blend follows the result image further below.)
Creating a mask for my layer: white is transparent, black will block

The end result is an improved image of the sky, and a landscape that is still sharp.

Both layers added with the mask
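For completeness, here is a rough scripted equivalent of the three steps above, using numpy and Pillow.  The file names are placeholders, and the mask is assumed to be a grayscale image where white lets the stacked sky show through and black protects the landscape.

```python
import numpy as np
from PIL import Image

# Placeholder inputs: one sharp single frame, the stacked sky, and the mask.
single = np.asarray(Image.open("single_frame.png").convert("RGB"), dtype=np.float64)
stacked = np.asarray(Image.open("stacked_sky.png").convert("RGB"), dtype=np.float64)
mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float64) / 255.0

# "Lighten only" keeps the brighter of the two layers, but the mask decides
# where the stacked layer is allowed to contribute at all.
lighten = np.maximum(single, stacked)
result = mask[..., None] * lighten + (1.0 - mask[..., None]) * single

Image.fromarray(result.astype(np.uint8)).save("composite.png")
```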

Below is a comparison of the composition with stacking and layers (left) and a single shot (right).  We achieve both of our goals: more stars (more signal) while keeping the landscape from becoming a blur.

Comparing the composition with layers (left) and single shot (right)

And why not take some time to identify some key features in the image.

Constellations Orion and Taurus above the landscape. (Click to open)

Planetary Imaging with a Webcam

Planetary imaging is usually where everyone starts.  The targets are bright objects in the sky that don't require long exposures: the Moon and the planets Venus, Mars, Jupiter and Saturn.  And because there are no long exposures, there is no need for a mount that tracks.  The electronics of a webcam allow between 5 and 60 frames per second (fps), more than enough to get a good image with any size of telescope, and the result is an AVI movie that can be easily processed.

There are two ways to use the webcam:

  1. Prime focus: the original webcam lens is removed and the telescope becomes the lens, like swapping lenses on an SLR camera.  Magnification is provided by the focal length of the telescope and the optional use of a Barlow lens.
  2. Eyepiece projection: the webcam replaces the eye and the magnification is provided by the ratio of telescope focal length to eyepiece focal length.

In my case I went with the prime focus solution, hence I needed a webcam whose original lens could be removed and replaced with a 1.25″ adapter to fit into the telescope's focuser.

Philips Vesta Pro 680K webcam modified for use on telescope

The camera sensor, be it CMOS or CCD, is sensitive to a wider spectrum than the human eye, so most cameras have a built-in UV/IR filter, either on the lens or on the sensor.  As this filter was on the original webcam lens, I purchased a Baader UV-IR Rejection 1.25″ #2459207 filter for use with the adapter.  Refractors have a hard time bringing all colours to focus at the same spot, and even with an APO scope, what falls in the UV and IR range will generally appear out of focus; best to keep it out with a filter.

Today a good planetary imager can be purchased for under $200, but when I started, most astronomy imaging devices were in the $1000+ camp.  The Philips Vesta 680K was rather popular because a wonderful man by the name of Steve Chambers figured out how to easily modify the webcam electronics to get much longer exposures.  The Vesta was also equipped with a CCD sensor, more sensitive than the CMOS technology used in most webcams.  These modified webcams came to be known as the Vesta-SC.

I've spotted Jupiter, can I take a photo?  Actually, you should take a video.  The reason is that there is a great deal of turbulence in the atmosphere, which causes the image to blur and jiggle about.  By taking a video you are doing two things:

  1. Capturing a large quantity of images which can be later processed
  2. Possibly catching a brief period of atmospheric stability

Here is a 30-second segment of Jupiter with my setup.

I recommend taking a few videos with different settings so that afterwards you can see which provided the best results.  Select an uncompressed format such as AVI to avoid compression artifacts; AVIs are also easily broken into individual image frames.

Software such as IRIS or REGISTAX can be used to process the video.  REGISTAX is actually quite good and painless at this.  Don't be intimidated by the large number of settings and parameters; you can get great results out of the box with the default settings.

The process breaks down into 5 steps:

  1. Select your target (what you want the software to track on)
  2. Filter the frames by image quality, keeping only those that are sharp and resemble each other (a rough sketch of steps 2 to 4 follows this list)
  3. Align (register) the individual images
  4. Stack the individual images
  5. Wavelet analysis and final brightness/colour balance
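Here is the rough sketch mentioned in step 2.  It is not what REGISTAX does internally, just an illustration of steps 2 to 4 using OpenCV: score each frame's sharpness with the variance of the Laplacian, keep the best ones, and average them (frame-to-frame alignment is omitted for brevity; the video file name is a placeholder).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("jupiter.avi")  # placeholder capture file
frames, scores = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian: a simple per-frame sharpness score.
    scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    frames.append(frame.astype(np.float64))
cap.release()

# Keep only the sharpest 20% of frames, then average them.  A real pipeline
# would first register each frame on the planet before stacking.
keep = np.argsort(scores)[-max(1, len(frames) // 5):]
stacked = np.mean([frames[i] for i in keep], axis=0)
cv2.imwrite("jupiter_stacked.png", np.clip(stacked, 0, 255).astype(np.uint8))
```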

Because of the high number of images, you can actually improve image resolution by up-sampling or drizzling the image prior to stacking.  The end result is often an image that can be scaled up by 2x while maintaining resolution.

Wavelet analysis is a type of sharpening, similar to an unsharp mask, but it treats each level of granularity as a different “frequency”.  While an unsharp mask is tuned to a specific size of detail, wavelets treat the various levels of detail as different layers of the image and add the results together.
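A crude way to mimic that behaviour (not REGISTAX's actual wavelet code) is to build difference-of-Gaussians layers at several blur scales and add a weighted amount of each back to the image, as in this Python sketch.

```python
import cv2
import numpy as np

def multiscale_sharpen(image, sigmas=(1, 2, 4, 8), gains=(1.0, 0.8, 0.5, 0.3)):
    """Treat each blur scale as a 'frequency' layer: the detail lost between
    consecutive blurs is extracted and added back with its own gain."""
    img = image.astype(np.float64)
    result = img.copy()
    previous = img
    for sigma, gain in zip(sigmas, gains):
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        result += gain * (previous - blurred)  # detail at this scale
        previous = blurred
    return np.clip(result, 0, 255).astype(np.uint8)

# Usage sketch on the stacked planetary image:
# sharpened = multiscale_sharpen(cv2.imread("jupiter_stacked.png"))
```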

End result:

Jupiter – April 11th, 2015
Benoit Guertin

 

Creating Diffraction Spikes with GIMP 2.8

Updated procedure to use a transparent background for the brush pattern. Also broke down certain steps into more details.


Updated on November 3rd, 2014

Photos of open star clusters always appear more pleasant when the stars have diffraction spikes.  But if your telescope does not have support vanes for a secondary mirror, you are out of luck.  One solution is to simply tape some string or fishing line in a cross pattern over the dew shield.  Or you can turn to digital enhancement.  Below is a procedure to enhance your photos by digitally adding diffraction spikes using GIMP 2.8, in 8 easy steps!  No special plugin or filter required.

Let's try it with my image of M45 – Pleiades, taken with a Skywatcher 80ED.

M45 – Pleiades
Benoit Guertin

The first step is to create a new “brush” in the shape of diffraction spikes.  To do this, start with a new canvas with a transparent background.  In the screenshot below, a new 1000 x 1000 pixel image was created with Fill with: Transparency.
