Is it nebulosity or an artefact?

  • Stargazing
  • Thread starter: sophiecentaur
In summary, the image may show some nebulosity around the star, but it is hard to tell because the image is small.
  • #1
sophiecentaur (Science Advisor, Gold Member)
I bought a 'nebula filter' at some great cost, and the attached was the result of four two-minute exposures on a recent clear night. I'm clearly struggling with getting the optimal exposure and ISO setting on my DSLR, but the Orion Nebula came out fairly convincingly. There's plenty of detail in the main nebula that I don't see on the straight snapshot with similar (obviously lower) exposures, so I believe the picture. So what is going on around the star down and to the right (Hatsya?)? There's a definite disc around it which is not present on other stars of equal apparent magnitude. Looking at similar images I have of the Pleiades, I can see the same effect on some stars but not on others. What's the opinion as to what the image is showing?
[Attached image: stack 1 green.jpg]
 

  • #2
sophiecentaur said:
Looking at similar images I have of the Pleiades, I can see the same effect on some stars but not on others.

The Pleiades are surrounded by nebulosity.

For your photo above, it would be good to see a closer-in pic of the star in question:
a same-sized pic, but zoomed in on that star ... I assume this current pic was significantly reduced in size?
The overall pic is too small to tell whether you have a processing halo or true nebulosity.
The nebulosity in the Orion region is quite expansive, and my pick is that it is showing nebulosity.

Dave
 
  • #3
I will sort that out. The JPEG I started with is big but, by the time it got to you, it had lost a lot of detail. Dunno why. :frown:
It seems to vary from star to star. I have seen pics of the Pleiades, but I have not yet captured that vast cloudiness that can be obtained.
Just cooking dinner but I will try to get back after the Beef Hotpot. Yum yum.
 
  • #4
sophiecentaur said:
So what is going on around the star down and to the right (Hatsya?)? There's a definite disc around it which is not present on other stars of equal apparent magnitude. ...

Hatsya

This image from Wikimedia Commons shows plenty of nebulosity. It is rotated about 90 degrees from your picture.
Orion_Nebula


I suspect you are looking at doubly ionized oxygen. Your "nebula filter" might be optimized for that purpose. I do not know whether Iota Orionis ionizes the oxygen in the Orion Nebula or whether that is Iota's own stellar wind. If it is more blue than green, it is reflected light.

Pleiades have nebulosity:
Dust that forms a faint reflection nebulosity around the brightest stars was thought at first to be left over from the formation of the cluster (hence the alternative name Maia Nebula after the star Maia), but is now known to be an unrelated dust cloud in the interstellar medium, through which the stars are currently passing.

sophiecentaur said:
...is not present on other stars of equal apparent magnitude...

There are no stars in your picture with magnitude equal to Iota. Wikipedia puts Iota's apparent visual magnitude at 2.7. The Trapezium is listed as 4.0, and the entire Orion Nebula is also listed as 4.0. The display-screen pixel will be saturated, so it is not an accurate measure of magnitude.

sophiecentaur said:
... I can see the same effect on some stars but not on others...
Compare to Vega. If the effect originates in your camera, then Vega should show it too.
 
  • #5
sophiecentaur said:
...
It seems to vary from star to star. I have seen pics of the Pleiades, but I have not yet captured that vast cloudiness that can be obtained.
...

[Attached image: 800px-Pleiades_large.jpg]
 

  • #6
[Attached image: is this nebulosity.jpg]


This is the relevant small part of the image. I guess the pale disc must be an artefact.
The NASA image of the Pleiades is very impressive and subjectively nice to look at, but the artefacts, imo, spoil it. The diffraction spikes and circles don't help anyone to understand what's actually there. They're trying to show the nebulosity at the same time as very bright stars, and the contrast is too great.
There is so much to get right for good astro images. I am working at it.
PS The hotpot was smashing. I can hardly move now!
PPS The filter is an OIII filter, 8 nm bandwidth; very revealing of some details.
 

  • #7
sophiecentaur said:
This is the relevant small part of the image. I guess the pale disc must be an artefact.

I agree. I often generate that artifact myself during the compression of the original 16-bit (or 32-bit) per channel image down to an 8-bit-per-channel image, by overdoing the gamma correction (using gamma values << 1) in an effort to selectively boost the low-intensity parts of an image.
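
As a rough illustration of the mechanism (not Andy's actual workflow), here is a minimal NumPy sketch in which an aggressive gamma stretch lifts a star's otherwise invisible skirt well above the 8-bit floor, turning it into a pale disc. The Gaussian profile and the gamma value are assumptions chosen purely for illustration.

Code:
import numpy as np

# Synthetic star: a Gaussian radial profile spanning a large dynamic range,
# standing in for the linear high-bit-depth data. Illustrative numbers only.
r = np.linspace(0, 50, 501)            # radius in pixels
star = np.exp(-(r / 4.0) ** 2)         # bright core, very faint skirt

gamma = 0.2                            # gamma << 1, used to boost faint detail
stretched = star ** gamma              # the gamma "stretch"

linear_8bit = np.round(star * 255)     # straight squish to 8 bits
stretched_8bit = np.round(stretched * 255)

# At r = 12 px the linear signal rounds to 0 (invisible), but after the
# stretch it sits roughly 40 levels above black: the faint skirt becomes a
# pale disc with a visible edge where the curve finally drops below one
# display level.
i = 120                                # index corresponding to r = 12 px
print("linear 8-bit value at r = 12 px   :", linear_8bit[i])     # 0
print("stretched 8-bit value at r = 12 px:", stretched_8bit[i])  # ~42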
 
  • #8
Andy Resnick said:
I agree. I often generate that artifact myself during the compression of the original 16-bit (or 32-bit) per channel image down to an 8-bit-per-channel image, by overdoing the gamma correction (using gamma values << 1) in an effort to selectively boost the low-intensity parts of an image.
How does this argument sound? Limitations of the sensor mean that the luminance is clipped to form a white disc, with skirts falling off where the sensor can still handle them. According to the Airy pattern (see link), the first sidelobe maximum is at a level of about 0.0175 relative to the peak, which corresponds to a magnitude difference of about 4. In my image, if the disc around that star corresponds to the first sidelobe of the Airy pattern and the luminance level is clipped at 255 255 0, then the skirts of the main peak fall away about half way out to the edge of the disc artefact. The magnitude difference between that star and the other stars which are only just visible could be about 4, and that is also the magnitude difference between the peak and the level of the first Airy sidelobe. So the apparent visibility of this artefact is about the same as the visibility of the faintest stars in the picture. Could that mean that the contrast range the sensor can reproduce (for pixels which are close together) is only about 1.75%? Or could it be some extra characteristic imperfection of the lens (supposed to be ED apochromatic)?
The original image was a DNG, and the artefact was visible in PS before it was turned into a JPEG.
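
For reference, the 0.0175 sidelobe level and its conversion to a magnitude difference can be checked numerically. A minimal sketch, using SciPy's Bessel function of the first kind for the Airy intensity pattern:

Code:
import numpy as np
from scipy.special import j1

# Airy intensity pattern I(x) = (2 J1(x) / x)^2, normalised to 1 at the centre.
x = np.linspace(1e-6, 10, 100_000)
airy = (2 * j1(x) / x) ** 2

# The first dark ring is at x ~ 3.83; the first bright ring peaks just beyond it.
ring = airy[x > 3.8317]
first_ring_peak = ring.max()                 # ~0.0175 of the central peak

# Convert that intensity ratio to a magnitude difference.
delta_m = -2.5 * np.log10(first_ring_peak)   # ~4.4 magnitudes

print(f"first ring peak = {first_ring_peak:.4f}")
print(f"equivalent magnitude difference = {delta_m:.1f}")

This returns about 4.4 magnitudes, consistent with the "about 4" figure quoted above.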
 
  • #9
sophiecentaur said:
So the apparent visibility of this artefact is about the same as the visibility of the faintest stars in the picture. Could that mean that the contrast range the sensor can reproduce (for pixels which are close together) is only about 1.75%? Or could it be some extra characteristic imperfection of the lens (supposed to be ED apochromatic)?
The original image was a DNG, and the artefact was visible in PS before it was turned into a JPEG.

I've gone around and around on this question, and manage to confuse myself every time, so I'll reply as best I can and highlight where I fall short:

The question is simple: What is the maximum obtainable dynamic range of my (stacked and processed) image? In other words, what is the maximum range of magnitudes I can obtain in a single (stacked and processed) image?

It comes down to the noise floor and the number of bits available (discretization of the signal). Let's consider a single channel to keep this 'simple'. Displays are 8-bit and RAW files are (say) 14-bit. If the signal is 100% of full scale at magnitude 0 (Vega), this translates to a signal-to-noise ratio of 1 being reached at magnitude 6 (8 bits) or 10.5 (RAW). This assumes there is no noise in the system: the faintest signal has an intensity value of '1' in either case. If we allow for background light, thermal noise, etc., and say SNR = 1 at 5% of full scale (which is way better than I get), the minimum magnitudes are about the same.
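
The bit-depth figures quoted here follow from Δm = 2.5·log10(number of levels), under the same idealised, noise-free assumption. A quick sketch of the arithmetic:

Code:
import numpy as np

# Magnitude span between "full scale" and "value 1" for a given bit depth,
# assuming a perfectly linear, noise-free signal chain (the idealisation above).
def magnitude_span(bits: int) -> float:
    return 2.5 * np.log10(2 ** bits - 1)

for bits in (8, 14, 22, 24):
    print(f"{bits:2d} bits -> {magnitude_span(bits):4.1f} magnitudes")

# 8 bits  ->  6.0 magnitudes  (display)
# 14 bits -> 10.5 magnitudes  (typical RAW)
# 22 bits -> 16.6 magnitudes  (~300 stacked frames)
# 24 bits -> 18.1 magnitudes  (1024 stacked frames)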

Caution: I'm not sure I believe this. It's not clear to me why adding bits would seem to make the sensor 'more sensitive', because the incoming intensity hasn't changed. It seems that I can more finely reject the noise floor with more bits available.

Stacking and averaging many frames generates an image with more bits; the most I have ever generated is a 24-bit image (1024 14-bit images). This, according to the above, means I have an image that can potentially span 18 magnitudes, which seems to strain credibility. That said, I can reliably obtain clear images of magnitude-15 stars once I have accumulated about 300 frames (a 22-bit image; the calculation returns a 16.5-magnitude floor).

So as a practical matter, it seems that I can generate images containing a range of up to 18 stellar magnitudes. There's obviously a bottom end based on the received intensity, but I have a really hard time calculating it, even starting with '1 photon per frame'.

The artifacts occur when I 'squish' the 22-bit image into 8 bits. Ideally, I'd like to map the 18-magnitude span (brightness ratio 1:14552000) into a 1:255 scale, but as you can see, there will be artifacts. One critical consideration to minimize the 'skirt' is to avoid clipping at both the top and bottom: keep your noise floor just barely above 'zero', and only a few of the brightest stars should be at 100% scale.

Imaging Orion is particularly difficult due to the dynamic range present- I typically have to choose between blowing out the Trapezium or not getting the full glorious nebula.

Does that help?
 
  • #10
Andy Resnick said:
It's not clear to me why adding bits would seem to make the sensor 'more sensitive',
It's just that the noise level after stacking is lower than the noise peaks that occur on a single frame. The 'median' option takes the median of the samples across all the frames, which reduces the peak-to-peak noise excursion and improves the SNR. The actual signal from n frames is n times the signal from one frame, but the noise (depending on the algorithm) grows much more slowly than n times. The 'sensitivity' increase that you refer to comes from the summation of many samples; it's a form of temporal filtering / bandwidth reduction.
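
A quick numerical check of this, with synthetic frames and an assumed per-frame noise level (not the actual stacking pipeline): averaging 16 frames cuts the random noise by about √16 = 4, and a median stack does nearly as well for Gaussian noise while also rejecting outliers such as satellite trails or hot pixels.

Code:
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0                 # "true" pixel value (arbitrary units)
sigma = 20.0                   # assumed per-frame noise
n_frames = 16

# Many independent pixels, each measured once per frame.
frames = signal + sigma * rng.normal(size=(10_000, n_frames))

single = frames[:, 0]
mean_stack = frames.mean(axis=1)
median_stack = np.median(frames, axis=1)

print("single-frame noise :", single.std())        # ~20
print("mean-of-16 noise   :", mean_stack.std())     # ~20 / 4 = 5
print("median-of-16 noise :", median_stack.std())   # ~6, slightly worse than the mean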
Andy Resnick said:
to avoid clipping at both the top and bottom
Only the top is 'clipped'; the peak value in a single frame is limited, but the bottom doesn't 'clip' as such: if you increase the number of frames, the result averages over the random 0 and 1 values across the total number of frames.
Andy Resnick said:
Imaging Orion is particularly difficult due to the dynamic range present- I typically have to choose between blowing out the Trapezium or not getting the full glorious nebula.
You can ameliorate the problem by playing with the gain (curves) over the input range. AP is more of a pig than most regular daylight photography, where, when contrast is a problem, people use fill-in flash or reflector screens to lift the shadows. They used to use 'soft' film for high-contrast subjects, but we have a knob to twiddle nowadays.
 
  • #11
Andy Resnick said:
Imaging Orion is particularly difficult due to the dynamic range present- I typically have to choose between blowing out the Trapezium or not getting the full glorious nebula.

The best way (actually, pretty much the only way) to get a good full image is to expose for the Trapezium and the nebula separately and blend the images in Photoshop or similar.
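
In Photoshop this is done with layers and a mask; below is a minimal NumPy sketch of the same idea, with hypothetical arrays short_exp and long_exp standing in for the two registered, linearly scaled exposures (in practice the short exposure would first be multiplied by the exposure-time ratio so the two frames share the same photometric scale).

Code:
import numpy as np

def blend_exposures(short_exp: np.ndarray, long_exp: np.ndarray,
                    threshold: float = 0.8, softness: float = 0.1) -> np.ndarray:
    """Blend a short exposure (Trapezium not blown out) into a long exposure
    (full nebula) wherever the long exposure approaches saturation.
    Both inputs are assumed registered and scaled to the range 0..1."""
    # Weight goes to 1 where the long exposure is nearly saturated, 0 elsewhere,
    # with a soft transition so the seam is not visible.
    w = np.clip((long_exp - threshold) / softness, 0.0, 1.0)
    return (1.0 - w) * long_exp + w * short_exp

# Usage sketch:
# hdr = blend_exposures(short_exp, long_exp)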
 
  • #12
I would also suggest that you remove the filter and clean the camera lens then try to take another image (weather permitting) and see if the artifact disappears. Halos can be caused by a speck of dust or oil on the filter or the camera lens at that particular location in the image.
 
  • #13
NFuller said:
I would also suggest that you remove the filter and clean the camera lens then try to take another image (weather permitting) and see if the artifact disappears. Halos can be caused by a speck of dust or oil on the filter or the camera lens at that particular location in the image.
This raises another issue. Photographers are only too happy to consider cleaning lenses (carefully, of course); astronomers seem against the whole idea of touching the lens. I recently washed the 10-inch mirror on my Newtonian and it was NO BIG DEAL. I took care and avoided using a scrubbing brush or pan scourer. The result was that a load of dust went away, although the surface was still a bit 'patchy'. Nobody died. A visual inspection (LED torch at night) of my 80 mm ED lens looks fine, so I won't touch it yet. Why should I be scared to clean it?
I am capable of cleaning my DSLR sensor with the appropriate pads and liquid. Every time a lens is changed, there is a chance of grot getting onto the surface, so cleaning becomes essential.
The thing about this particular artefact seems to be that it's either there on a particular star or not there on the surrounding stars. I guess it's very magnitude dependent.
 
  • #14
sophiecentaur said:
Only the top is 'clipped'; the peak value in a single frame is limited, but the bottom doesn't 'clip' as such: if you increase the number of frames, the result averages over the random 0 and 1 values across the total number of frames.

What I mean by 'clipping at the bottom' is to avoid having any pixels with value '0' in the image. For me, this happens during post-processing, trying to get rid of that last bit of non-uniform background.
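
As a concrete (and entirely hypothetical) version of that rule, a levels-style conversion can place the dimmest pixel on a small non-zero pedestal and let only the very brightest pixels reach 255; the percentile and pedestal values below are arbitrary choices, not Andy's actual workflow.

Code:
import numpy as np

def to_8bit(img: np.ndarray, pedestal: int = 2, white_pct: float = 99.98) -> np.ndarray:
    """Map a high-bit-depth image to 8 bits without crushing the bottom:
    the dimmest pixel lands on a small non-zero pedestal, and only the very
    brightest pixels (top ~0.02% here) are allowed to reach 255."""
    lo = img.min()
    hi = np.percentile(img, white_pct)
    scaled = (img - lo) / (hi - lo) * (255 - pedestal) + pedestal
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)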
 
  • #15
davenn said:
The best way (actually, pretty much the only way) to get a good full image is to expose for the Trapezium and the nebula separately and blend the images in Photoshop or similar.

I think you are right...
 
  • #16
Andy Resnick said:
What I mean by 'clipping at the bottom' is to avoid having any pixels with value '0' in the image. For me, this happens during post-processing, trying to get rid of that last bit of non-uniform background.
I see what you mean now. But it's basically a subjective issue as to where you decide to cut the 'grass' at the bottom. The random noise gets lower and lower (relative to white level) as you filter more and more (more and more frames). There will always be some noise, and always some stars that are only just discernible above it, and you will lose them below your chosen black level. You really are making the system more sensitive with the processing. It's a bit like a single-bit ADC which, sampled at a high enough rate, can deliver as many quantised levels as you like. Oversampling makes things better.
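
The single-bit-ADC analogy is easy to demonstrate with a toy model (assumed numbers, nothing to do with the actual sensor): with dither noise present, averaging many 1-bit decisions recovers the underlying level to far better than one quantisation step.

Code:
import numpy as np

rng = np.random.default_rng(1)

true_level = 0.17              # the signal, in units of the quantisation step
n = 100_000                    # number of oversampled 1-bit decisions

# Uniform dither of +/- half a step, then a 1-bit comparator at zero.
noise = rng.uniform(-0.5, 0.5, size=n)
bits = (true_level + noise > 0).astype(float)

# With this dither, P(bit = 1) = 0.5 + true_level, so averaging the bits
# recovers the level to a precision far below one quantisation step.
estimate = bits.mean() - 0.5
print(f"estimate = {estimate:.4f}  (true value {true_level})")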
davenn said:
The best way (actually, pretty much the only way) to get a good full image is to expose for the Trapezium and the nebula separately and blend the images in Photoshop or similar.
I have a slight problem with that approach because AP'ers produce a vast range of versions of any given astronomical object. It's more like the Impressionist School of AP. But then I have to admit that I never present a normal photograph without tinkering somewhat with the gain, levels and curves (and colour balance, probably). Also, I cannot resist removing zits and blotches from faces. PS is so clever at that sort of "repair" that it's hard to know where to stop. But I do like to think that the scene / face will still be very recognisable, whereas many AP images have false colour in order to show the features better. Google "Orion Images" and you will get every colour imaginable in the selection that you are given.
 
  • #17
sophiecentaur said:
I have a slight problem with that approach because AP'ers produce a vast range of versions of any given astronomical object.

I don't understand your problem: when you view it optically through your scope, you can see the nebula and the Trapezium clearly.
Cameras have great difficulty with the huge dynamic range that the Orion Nebula presents. All the merging of two images with different exposure times does is give a view like the one you would see with your eyes, where the core / Trapezium isn't blown out.

sophiecentaur said:
But I do like to think that the scene / face will still be very recognisable, whereas many AP images have false colour in order to show the features better. Google "Orion Images" and you will get every colour imaginable in the selection that you are given.

I'm not talking about colour / false-colour renditions. Forget the colour: just do two exposures of different lengths in grey scale. I'm talking purely about getting an image that is well exposed across the range from the brightest areas to the darker areas :smile:

Dave
 
  • #18
I guess I was introducing another issue with false colour but it is true that published pictures of many DSOs tend to vary wildly. Only a few of those pictures are like what you can actually see in your eyepiece. Perhaps you, Dave, make an effort to produce images like that but most others definitely do not.

davenn said:
Forget the colour: just do two exposures of different lengths in grey scale.
What PS tool do you use for that? I use the stacking tool with just the Median stacking rule. I have tried to merge a straight image of M42 with an OIII-filtered image, which has some extra detail, but, using layers, I couldn't get a 'better' picture. Is it done by using masks, or is it already done for you by some bolt-on?
I know that with a TV display contrast ratio of, say, 5000:1 it is possible to produce higher-contrast images than a single RAW camera image. I just haven't.
EDIT: PS, when I turn the gain down on my OIII-filtered image, I can actually see the Trapezium but, as you say, the faint details get lost and the impact of the shot goes with them.
 
  • #19
sophiecentaur said:
but it is true that published pictures of many DSOs tend to vary wildly. Only a few of those pictures are like what you can actually see in your eyepiece

Ohhhh, for sure. I see some really awesome ones with a well-balanced colour spread,
and I see some dreadful ones that are so very overcooked in post-processing.
In those cases what really annoys me is that those people are trying to pass that look off as natural
... nothing could be further from the truth ... a nice look for art, maybe, but definitely not a good rendition of the object.

sophiecentaur said:
Perhaps you, Dave, make an effort to produce images like that but most others definitely do not.

I do try. I am not a perfectionist, but I do try to have them look as natural as possible (broad colour range).

Your OIII filter will always produce an unnatural green rendition, as above; that can't be helped, it's just the way they are.

The Orion Nebula has quite a wide range of colours compared to most other nebulae, which are predominantly red.
There's only a handful that have two or more colours, the Trifid being one of them, with strongly contrasting red and blue.

Looking back through your posts, I didn't see any other imaging details apart from the 4 x 2-minute exposures.
A total of 8 minutes of exposure time is quite reasonable, and if you had done that much without the filter
you would have captured much more of the finer detail of the nebula.

sophiecentaur said:
I have tried to merge a straight image of M42 with an OIII-filtered image, which has some extra detail, but, using layers, I couldn't get a 'better' picture.

Difficult with 2 very different images where one of them is at the green end of the spectrum :wink:

Layers or masks could be used ... I would normally use layers, blend the images, and then use the Dodge and Burn tools
to bring out the parts of the wanted layer.
Dave
 
  • #20
davenn said:
Dodge and Burn
What a good idea. I can cope with D&B!

That OIII-filtered image was obviously not a serious contender for realism - lol. My question was more specifically about that artefact.
On the same night, I took the attached straight image, which I have only attacked with curves and levels. It is pretty convincing, I think, but I can see the need for longer exposure time with my ED80 scope.

As for 'realism', I guess that only a few, if any, observers actually see nebulae as the AP images suggest. So AP (from me up to Hubble) is really a world of its own. It's a disappointment for almost every budding astronomer that they look in their shiny new scope and see fuzzy and largely colourless images. But, of course, there's nothing quite like the fantastic buzz of seeing the Jovian moons or a star cluster through the window of your own personal spaceship in your own back garden. That experience is at least as good as your best final photoshopped version of an hour's (day's?) worth of exposures of some faint DSO.
[Attached image: Stacked 1.jpg]
 

  • #21
sophiecentaur said:
On the same night, I took the attached straight image, which I have only attacked with curves and levels. It is pretty convincing, I think, but I can see the need for longer exposure time with my ED80 scope.

That is a really nice image :smile:

The only other suggestion I would make for processing that image is to alter the colour temperature to reduce the 'bluish' tint it has.
I usually use Lightroom for most edits like that and only use Photoshop for curves/levels stretching or blending.

I'm assuming it is a single exposure, or is it stacked?
Exposure time? Camera ISO? Focal length?

Dave
 
  • #22
Thanks. Mostly down to the sky behaving itself for once, I think. Yes, I know about the blue sky; I did a fair bit of correcting but not enough! The colour of the nebula seems to agree with most of the images online. I was pleased with the initial impression - the focus and the lack of trails.
davenn said:
I'm assuming it is a single exposure, or is it stacked?
Exposure time? Camera ISO? Focal length?
The story is that I had decided to take the advice of a local expert at the Astro Society and jacked the gain up to about ISO 3200 from the 800 I have been using, just to see the effect. It was more of a sighting shot than anything. The frames (only four of them) were 2 min each at 500 mm focal length. The histogram showed a broad peak about halfway up, and the frames were very pale and washed out compared with an equivalent daytime picture. I lowered the levels and did some 'curves' on one frame and, mainly because the stars were circular (for once!), I proceeded with 'inadequate data' and stacked the four frames. Comparing the stack with a single frame, I can just see a difference, but of course I would need at least ten frames to get a significant noise improvement. Polar alignment is always hard in my garden because Polaris always seems to be a bit misty and the polar scope is hard to see through. Perhaps it's a dark-adaptation thing, and the angle at 52°N kills my neck.
After my four exposures, I did four more with my OIII filter (four minutes this time) and the levels looked better, so my green image was hardly processed at all.

But this artefact thing is still an issue for me. The first sidelobe maximum of the Airy pattern is at around 2% of the peak, which is significant; it corresponds to a magnitude difference of between 4 and 5. (I always wonder whether there should be a square factor in that percentage, but brightness is power and not amplitude - isn't it?) The radius of that halo seems about right to be due to the first sidelobe. What do you think? Could it be a quantisation problem which is accentuating the step around the edge of the halo?
 
  • #23
Could it be internal/ghost reflections between the filter and lens elements (or between the lens elements themselves even without the filter)?
 
  • #24
Drakkith said:
Could it be internal/ghost reflections between the filter and lens elements (or between the lens elements themselves even without the filter)?
I've seen the effect with only the objective and sensor. :frown:
Could it just be an effect of charge distribution on the sensor, which is driven into limiting over a significant area for the brightest stars? It's the sort of thing that a daytime photographer would arrange never to come across but, on the other hand, it would not surprise anyone to find artefacts in the vicinity of a burned out highlight.
 
  • #25
sophiecentaur said:
I've seen the effect with only the objective and sensor. :frown:
Could it just be an effect of charge distribution on the sensor, which is driven into limiting over a significant area for the brightest stars? It's the sort of thing that a daytime photographer would arrange never to come across but, on the other hand, it would not surprise anyone to find artefacts in the vicinity of a burned out highlight.

I don't think so, but I'm not sure. I'm still betting on some sort of internal reflection instead of a sensor artifact.
 
  • #26
Perhaps you could use an "artificial star", of the kind typically used for telescope collimation. It could also be adapted to get an idea of the camera sensor's overload characteristics (blooming) by providing a point source without the telescope. With a stable source and no atmospherics to get in the way, you could likely get a beam profile across the image using PS.

A Google search for "telescope star simulator" turned up this site with some explanations and links: http://www.hubbleoptics.com/artificial-stars.html
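
If you do shoot an artificial star (or any isolated bright star), the beam profile can also be extracted outside Photoshop. A minimal sketch, assuming a monochrome NumPy array img and an approximate star centre (cx, cy), both hypothetical:

Code:
import numpy as np

def radial_profile(img: np.ndarray, cx: float, cy: float, n_bins: int = 50):
    """Azimuthally averaged intensity as a function of radius from (cx, cy).
    Useful for seeing where the profile clips, where the skirt starts, and
    whether a halo is a smooth wing or a hard-edged step."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    vals = img.ravel()
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    which = np.digitize(r, bins) - 1
    prof = np.array([vals[which == i].mean() if np.any(which == i) else np.nan
                     for i in range(n_bins)])
    centres = 0.5 * (bins[:-1] + bins[1:])
    return centres, prof

# Usage sketch:
# radii, profile = radial_profile(star_crop, cx=64.0, cy=64.0)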
 
  • #27
Drakkith said:
I don't think so, but I'm not sure. I'm still betting on some sort of internal reflection instead of a sensor artifact.
Hmm. Any reflection would involve the shape of the back end of the objective, which is not flat and, moreover, is expensively coated.
 
  • #28
Tom.G said:
Perhaps you could use an "artificial star", of the kind typically used for telescope collimation. ...
It would need to be a pretty high-powered little star, though, as this effect only happens on a few stars; the other stars in my images above don't show it. The ones in the Trapezium are such a jumble that the halo effect wouldn't be spotted, I think. I will soon be able to try it with the bottom-of-the-range ZWO guide camera I have ordered and see if I get the effect there. CMOS may be different again.
 
  • #29
sophiecentaur said:
It would need to be a pretty high-powered little star
Think laser pointer at whatever distance is appropriate, perhaps shining through a pinhole in some aluminium foil.

sophiecentaur said:
CMOS may be different again.
Very noisy, low sensitivity.
 

Related to Is it nebulosity or an artefact?

1. What is nebulosity and how is it different from an artifact?

In astronomical imaging, nebulosity refers to a genuine diffuse cloud of gas and dust in space, which appears as a hazy or cloudy glow in an image. An artifact, on the other hand, is an unintended or unwanted element in an image, often introduced by the optics, the sensor, or image processing. In short, nebulosity is really there on the sky, while artifacts are instrumental or processing effects.

2. How can I determine if an image contains nebulosity or artifacts?

One way to determine if an image contains nebulosity or artifacts is to carefully examine the image and look for any patterns or inconsistencies. Nebulosity tends to be more organic and smooth, while artifacts often appear more geometric or pixelated. Additionally, checking the image's metadata or consulting with experts in the field can also help in identifying nebulosity or artifacts.

3. Can nebulosity and artifacts coexist in an image?

Yes, it is possible for nebulosity and artifacts to coexist in an image. For example, an image of a star cluster may have both natural nebulosity in the background as well as artifacts introduced during the image processing stage.

4. Are there any techniques to reduce artifacts in images?

Yes, there are various techniques that can be used to reduce or eliminate artifacts in images. Examples include using specialized filters during image capture, applying noise reduction algorithms during image processing, and utilizing advanced image stacking techniques.

5. Is there a way to distinguish between different types of artifacts in an image?

Yes, there are different types of artifacts that can occur in images, such as hot pixels, star trails, or lens flares. These can often be distinguished by their appearance and characteristics. For example, hot pixels tend to be small and bright, while star trails appear as elongated streaks. Consultation with experts and careful analysis can also help in differentiating between different types of artifacts.
