Diffraction Effects and Artifacts in Telescopes like the JWST

In summary: I believe we are in agreement on this. There was a question about diffraction artifacts, and an example of a star in the image showing those artifacts. Those hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings. That's all I meant to say.
  • #36
sophiecentaur said:
It's very misleading because, in a real image there is not a point source and also there are a number of other sources in the vicinity of the low level parts of a diffraction spike. This constitutes a 'floor' which can be significantly above the least significant step in the ADC.

If a pixel lies along the path of a diffraction spike, and is dim enough that the expected value for that pixel is less than 1 analog-to-digital converter unit (ADU) after compensating for noise (dark frame subtraction), there is still a finite probability that the pixel will register 1 or more ADU. For example, for a given exposure time, in a dim section of a diffraction spike, a pixel might only have a 50% chance of registering a single ADU or more. In an even dimmer section of the spike, the probability drops to 25%. This is due to the probabilistic nature of shot noise, which is inherently part and parcel of the signal. In other words, if you look along a diffraction spike in the vicinity of 50% probability, half of the pixels in that region will register at least 1 ADU above the noise (i.e., ~1 ADU after dark frame subtraction).

The point is that even if the expected value of a pixel that lies in a diffraction spike is less than the least significant step in the ADC (i.e., 1 ADU), it does not guarantee that the pixel will not register a signal. The signal still has an effect on the pixel registration in a probabilistic manner.
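
To make that concrete, here's a quick toy simulation (numbers are my own, assuming roughly unity gain of ~1 e-/ADU, with dark/bias already subtracted and read noise ignored): a pixel whose expected signal is only ~0.69 photoelectrons per exposure still registers at least 1 ADU about half the time, purely from Poisson shot noise.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy numbers of my own (roughly unity gain, ~1 e-/ADU, dark/bias already
# subtracted, read noise ignored): a stretch of spike where the expected
# signal is ~0.69 e- per exposure, i.e. well under 1 ADU.
mean_signal = 0.69          # expected photoelectrons per pixel per exposure
n_pixels = 100_000          # pretend pixels along that faint stretch of spike

counts = rng.poisson(mean_signal, size=n_pixels)   # shot noise is Poisson
frac_registering = np.mean(counts >= 1)            # fraction that read >= 1 ADU

print(f"fraction of pixels registering >= 1 ADU: {frac_registering:.3f}")
print(f"theory, 1 - exp(-mean)                 : {1.0 - np.exp(-mean_signal):.3f}")  # ~0.50
[/code]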

[Edit: and if a particular pixel is along the intersection of diffraction spikes/artifacts, say from two or three or more different stars, the probabilistic contributions add together linearly, even if the expectation value of any one of the spikes/artifacts, or all of them, is less than 1 ADU in that region.]

And you don't need a point source for this. As I've essentially stated in post #404, the interference pattern applies to all photons that pass through the optics and reach the detector, whether those photons originate from stars, nebulosity, accretion disks, anything. Any photon that passes through the telescope's optics is subject to an interference pattern before that photon reaches the sensor, if it reaches the sensor at all. It matters not what the source of the photons is; all photons from any distant source that reach the sensor are subject to an interference pattern before reaching the sensor.

There's nothing misleading about this. Diffraction patterns and interference patterns and the probabilistic nature of quantum mechanics are not "misleading." It's just how the universe works.
 
  • #37
I decided to put a piece of window screen in front of my objective lens for a diffraction test…

[attached photos: the window screen mounted in front of the objective lens]

I took a single 5 minute exposure of Polaris at 600 mm focal length, f/9, ISO 100, with the window screen in front of the lens…

[attached image: the 5 minute Polaris exposure]


When I adjust the RAW conversion settings of half the 14 bit image (with the identical exposure / image data), the brighter Polaris diffraction pattern appears the same shape and size as the dimmer star in the upper right…
[attached image: the two stars after adjusting the RAW conversion]
 
  • #38
collinsmark said:
There's nothing misleading about this. Diffraction patterns and interference patterns and the probabilistic nature of quantum mechanics are not "misleading." It's just how the universe works.
Yes, true, but there comes a point where, even with photon statistics etc., the probability of detecting a photon is FAPP zero within the length of the exposure. No amount of signal processing can pull the signal from the noise.

If this were not the case there would be no lower limit to the faintness of stars we could detect and no need for larger telescopes.

Regards Andrew
 
  • #39
Devin-M said:
When I adjust the RAW conversion settings of half the 14 bit image (with the identical exposure / image data), the brighter Polaris diffraction pattern appears the same shape and size as the dimmer star in the upper right…
Not sure what you are trying to show other than by signal processing you can manipulate how an image looks.

Can you tell me how you measured the sizes of the images to be the same, as your earlier claim was that they are not finite but extend off the image?

Regards Andrew
 
  • #40
sophiecentaur said:
I'm afraid there are too many points in your post to be addressed.

OK, I already accepted that the sensor itself is linear over a wide range and that stacking will allow a substantial increase in effective bit depth by averaging out random noise. I also know that the maths of diffraction goes on and on, as far down as you like. Real life is not like that. We always run out of range because noise and interference are present.

Do you have a reference about 32bit ADCs on the JWST? Wherever I have found a bit depth of the sensor arrays mentioned, it's been 16 bits. Stacking can be achieved in many ways and they are all based on non-linear processing to reject spurious data so I have to assume that, in fact, the linearity / purity of the images is restricted to 16 bits. JWST doesn't need defending and 16 bit data has been quite adequate as a source of fantastic and revealing images.

I'm pretty sure the bit depth of the sensor itself is 16 bit, since the full-well value of any of the sensors in NIRCam's sensor array is less than [itex] 2^{16} = 65536 [/itex]. (See https://jwst-docs.stsci.edu/jwst-ne...detector-overview/nircam-detector-performance)

But each pixel value is stored in 32 bit floating point format before or during the steps where calibration is applied and subframes are stacked. This allows for sub-ADU resolution of each pixel. As a matter of fact, as described below, it's possible to achieve resolutions not just below that of an ADU, but even below that of a single photon, if sufficient stacking is performed.

One obvious reason for stacking is to identify cosmic rays. They're not difficult to identify because a subframe pixel affected by a cosmic ray will be a statistical outlier compared to the corresponding pixels in the other subframes.

And, as you mentioned, you can increase the signal to noise ratio above that of any single subframe by stacking multiple subframes. One can use the central limit theorem to show that (given a few assumptions about the noise being uncorrelated) the signal to noise ratio increases by a factor of [itex] \sqrt{N} [/itex] over a single subframe, where [itex] N [/itex] is the number of subframes stacked.

Stacking also increases the bit-depth in another way due to the probabilistic nature of photon arrival. Even if some subtle detail in a target is less than 1 ADU, it still affects the pixels in a probabilistic manner. For example, if a star's diffraction spike over a particular pixel is only 1/5 of an ADU, you would expect 2 out of 10 subframes to have an additional ADU above the background for that pixel. And if you stack 10 subframes (averaging them), you can get that extra 0.2 ADU detail in the result. If you stack enough subframes you can gain resolutions better than even a single photon.
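
Here's a small toy simulation of that (again my own made-up numbers: unity gain, Poisson shot noise only): every individual frame is an integer number of ADU, yet the 0.2 ADU of "detail" comes back out of the average, and the noise of the average shrinks by the expected [itex] \frac{1}{\sqrt{N}} [/itex] factor.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Toy model with my own numbers (unity gain, no read noise): a sky background
# of 20 e-/frame plus a diffraction-spike pixel carrying an extra 0.2 e-/frame.
background = 20.0
spike_extra = 0.2
n_frames = 100_000

sky_pixel   = rng.poisson(background, n_frames)               # integer ADU per frame
spike_pixel = rng.poisson(background + spike_extra, n_frames) # integer ADU per frame

recovered   = spike_pixel.mean() - sky_pixel.mean()   # sub-ADU detail from the average
noise_1     = spike_pixel.std()                       # single-frame noise
noise_stack = noise_1 / np.sqrt(n_frames)             # noise after averaging N frames

print(f"recovered sub-ADU detail: {recovered:.3f} ADU (true value 0.2)")
print(f"single-frame noise: {noise_1:.2f} ADU, stacked noise: {noise_stack:.4f} ADU")
[/code]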

I don't know how much stacking is typically done in JWST images, but there's definitely some stacking done. There are gaps between the sensors within the sensor array, so at least some overlapping/stacking is needed to cover those. (See https://jwst-docs.stsci.edu/jwst-near-infrared-camera)

[image: NIRCam modules field of view]


For what it's worth, here's an image from the recent Pillars Of Creation redo, specifically showing the stacking overlap of the individual sensor cores. I downloaded this particular image from MAST, then I used PixInsight to apply a quick-and-dirty stretch to it (otherwise it would look nearly all black), resized it for PF, and saved it as a JPEG. This image was acquired using the F090W filter.
[attached image: jw02739_o001_t001_nircam_clear_f090w_i2d (F090W), stretched for display]
 
  • #41
andrew s 1905 said:
Yes, true, but there comes a point where, even with photon statistics etc., the probability of detecting a photon is FAPP zero within the length of the exposure. No amount of signal processing can pull the signal from the noise.

Of course there are limitations with the sensor. I've never argued against that. What I object to are incorrect claims such as
  • diffraction spikes are caused by nonlinearities (false)
  • diffraction patterns themselves are inherently nonlinear (false)
  • dim stars never produce diffraction spikes; diffraction spikes are only caused by bright stars (false)
  • diffraction patterns are a nonlinear function of the star's brightness (false)
  • for a given exposure it's impossible to ever gain more detail below 1 ADU (false. You can gain better resolution than 1 ADU by stacking multiple subframes and exploiting the probabilistic nature of photon arrival).
I've never claimed that for a given exposure you can gain more detail than a single ADU by "signal processing." Of course not. But you can take that single exposure and stack it together with many other similar, single exposures, and get that detail back. Or, if saturation isn't an issue, just take longer exposures.

If what you said were true ("the probability of detecting a photon is FAPP zero within the length of the exposure"), then the act of stacking multiple sub-exposures of the same length would also have "FAPP zero" probability of detecting a photon. But it doesn't. You can get that detail* back by stacking. The point being that the information of that subtle diffraction spike of that dim star is still there, albeit in a probabilistic manner (i.e., it takes more than one exposure, but it can be gotten).

*[Edit: here "detail" refers to small variations in intensity, not detail in terms of angular resolution.]

Applying all that to this discussion: Diffraction patterns/interference patterns are not the result of the exposure time or the result of sensor limitations. Diffraction patterns/interference patterns are ultimately a function of the telescope's optics.

andrew s 1905 said:
If this were not the case there would be no lower limit to the faintness of stars we could detect and no need for larger telescopes.

Regards Andrew

There are several reasons for larger telescopes. Two in particular:
  1. For the same angular resolution (or for a given focal length), a bigger telescope gathers more light and allows the image to be acquired in less time, all else being roughly equal.
  2. And more importantly, the image produced by a given telescope is essentially the ideal image convolved with the diffraction pattern/interference pattern (the point spread function) which we are discussing here. It's not possible to achieve more angular detail in the image than the angular detail in the diffraction pattern/interference pattern. Bigger telescopes have smaller/more detailed diffraction/interference patterns. So if you want more angular detail in the resulting image, you need a bigger scope. (A rough sketch of this convolution idea is below.)
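
Here's that rough sketch (all the numbers are toy values I made up; it uses a plain circular aperture, so it ignores JWST's hexagonal segments and struts, which are what create the actual spike shapes): the recorded image is the scene convolved with an Airy-type point spread function, and a larger aperture gives a proportionally smaller blur.

[code]
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

# Toy sketch with made-up numbers: the recorded image is the "true" scene
# convolved with the telescope's diffraction pattern (its point spread
# function). A circular aperture gives an Airy pattern whose angular scale
# goes like lambda/D, so a bigger aperture D blurs the scene less.
def airy_psf(size, wavelength, aperture, pixel_scale):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.hypot(x, y) * pixel_scale          # angle from optical axis (rad)
    u = np.pi * aperture * np.sin(theta) / wavelength
    u[half, half] = 1e-12                         # avoid 0/0 at the exact center
    psf = (2.0 * j1(u) / u) ** 2
    return psf / psf.sum()

scene = np.zeros((257, 257))
scene[128, 128] = 1.0                             # a single point source

wavelength  = 2e-6                                # 2 microns (near-IR)
pixel_scale = 3e-8                                # radians per pixel (toy value)

for aperture in (0.65, 6.5):                      # meters; 6.5 m is roughly JWST-sized
    psf = airy_psf(257, wavelength, aperture, pixel_scale)
    image = fftconvolve(scene, psf, mode="same")  # image = scene (*) PSF
    blur_px = np.sum(image > 0.5 * image.max())   # pixels above half the peak
    print(f"D = {aperture:4.2f} m -> central blur spans ~{blur_px} pixels")
[/code]

Swapping in a hexagonal or obstructed aperture for the PSF would produce the familiar spikes, but the convolution step itself is exactly the same.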
 
  • #42
collinsmark said:
I'm pretty sure the bit depth of the sensor itself is 16 bit,
So we agree on that. The only way to increase the effective number of bits is by using multiple images and you say there's no stacking. So the linearity range cannot go below one bit, i.e. [itex] 1 : 1/(2^{16}) [/itex]. That's not a big relative magnitude and can account for the 'smaller' / shorter spikes for the dimmer stars. I'm not sure why you took exception to this.
 
  • #43
collinsmark said:
  • diffraction spikes are caused by nonlinearities (false)
  • diffraction patterns themselves are inherently nonlinear (false)
  • dim stars never produce diffraction spikes; diffraction spikes are only caused by bright stars (false)
  • diffraction patterns are a nonlinear function of the star's brightness (false)
  • for a given exposure it's impossible to ever gain more detail below 1 ADU (false. You can gain better resolution than 1 ADU by stacking multiple subframes and exploiting the probabilistic nature of photon arrival).
Where did you get this list from? You must have misread a lot of what I wrote or, at least, have confused the concept of "recorded image of a pattern" with the pattern itself. The linearity failure at low levels can destroy recorded spikes and 16 bits is where linearity fails. Some spikes are never there in 16 bit images.
 
  • #44
sophiecentaur said:
So we agree on that. The only way to increase the effective number of bits is by using multiple images and you say there's no stacking. So the linearity range cannot go below one bit, i.e. [itex] 1 : 1/(2^{16}) [/itex]. That's not a big relative magnitude and can account for the 'smaller' / shorter spikes for the dimmer stars. I'm not sure why you took exception to this.
No, there is stacking. There's always at least some stacking, even with JWST's pristine sensors. I'm just not sure how much is typically done with JWST.

I don't take exception to the acknowledgment that there are practical limitations. Of course there are limitations such as finite amount of integration time for practical reasons. Of course.

What I take objection to are claims implying that it is impossible to detect small details such as the diffraction spikes of dimmer stars, even in principle. It's not impossible; the physics that causes the diffraction spikes of brighter stars is equally present for dimmer stars. It just may take more integration time (either longer exposures or stacking of shorter exposures) to bring those spikes above the floor.
 
  • #45
sophiecentaur said:
Where did you get this list from? You must have misread a lot of what I wrote or, [...]
I wasn't replying to you in particular on that one. :smile:
 
  • #46
collinsmark said:
I wasn't replying to you in particular on that one. :smile:
Well I never made any of those claims so who were you replying to?
Regards Andrew
 
  • #47
collinsmark said:
If what you said were true ("the probability of detecting a photon is FAPP zero within the length of the exposure"), then the act of stacking multiple sub-exposures of the same length would also have "FAPP zero" probability of detecting a photon. But it doesn't. You can get that detail back by stacking. The point being that the information of that subtle diffraction spike of that dim star is still there, albeit in a probabilistic manner [...]
This is not true. Whatever the exposure time (single or multiple images), there will be an intensity where FAPP there will be zero photons detected. Yes, by increasing the exposure you can record fainter details, but even here there is a limit due to non-zero sky brightness and other noise sources.

Regards Andrew
 
  • #48
andrew s 1905 said:
This is not true. Whatever the exposure time (single or multiple images), there will be an intensity where FAPP there will be zero photons detected. Yes, by increasing the exposure you can record fainter details, but even here there is a limit due to non-zero sky brightness and other noise sources.

Regards Andrew
If there are no photons at all, then of course there will be no photons detected.

But if there's even a dim source, the Central Limit Theorem disagrees with you.

Just like rolling a die (as in "dice") that has an ever so slightly greater chance of landing on a particular number compared to any other number, the discrepancy can be determined with enough rolls. Even if the imperfection is smaller, it can be determined with a greater number of rolls.
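
A quick toy version of that die example (the 0.002 bias is an arbitrary number I picked): with enough rolls, even a tiny bias stands many standard errors clear of a fair die.

[code]
import numpy as np

rng = np.random.default_rng(2)

# Toy version of the die analogy: face "six" comes up with probability
# 1/6 + 0.002 instead of exactly 1/6 (the 0.002 is an arbitrary, tiny bias).
bias = 0.002
p_six = 1.0 / 6.0 + bias

for n_rolls in (10_000, 1_000_000, 100_000_000):
    sixes = rng.binomial(n_rolls, p_six)          # how many sixes we rolled
    observed = sixes / n_rolls
    stderr = np.sqrt(observed * (1.0 - observed) / n_rolls)
    sigma = abs(observed - 1.0 / 6.0) / stderr    # deviation from a fair die
    print(f"{n_rolls:>11,} rolls: P(six) = {observed:.5f}  ({sigma:.1f} sigma from fair)")
[/code]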

In the case of a dim object viewed from a telescope, if the source's photon flux is greater than its surrounding background, and the photon's wavelengths are within the bandwidth of the receiver (within the filter's/sensor's bandwidth), and if the statistics of the system are stationary (i.e., we're not talking about a dynamical system such as a one-off flash, or something changing its behavior in an aperiodic fashion), then the photons can be detected with sufficient integration.

For a given exposure time of subframes, the pixel value of interest can be treated as a random variable with a mean (i.e, "average" value) and standard deviation. The standard deviation of the pixel value is the result of all the noise sources combined.

We can estimate the true mean by summing together the pixel values of multiple subframes, and then dividing by [itex] N [/itex], the number of subframes in the ensemble (in other words, take the average value of the pixel).

What does that do to the standard deviation, you might ask? That is, the standard deviation caused by the combination of all noise sources after summing multiple subframes together?

The Central Limit Theorem shows that standard deviation of the averaged ensemble tends toward zero as [itex] N [/itex] increases. Specifically by a factor of [itex] \frac{1}{\sqrt{N}} [/itex].

Similarly, if instead of stacking, you wanted to take a longer exposure (and are not at risk of saturation), with exposure time [itex] T [/itex], the time averaged noise (per unit signal) decreases by a factor of [itex] \frac{1}{\sqrt{T}} [/itex] for all noise sources except the read noise, and then [itex] \frac{\mathrm{read \ noise}}{T} [/itex] is added on as a final step. [Edit: I'm admittedly being kind of sloppy here. The units of time here are not seconds, but rather the fraction of some fixed time interval such as that used for the individual subframes described above.]

The implication here is that by increasing total integration time, the estimated mean approaches the true mean with arbitrarily close precision, as total integration time increases.

Of course there may be practical limitations in any real world system. Of course. But saying that it's not possible, even in principle, is incorrect.
 
  • #49
The pertinent equations suggest that a point source of a different color will have a diffraction pattern of a different shape/size, not a different brightness.
 
  • #50
Devin-M said:
The pertinent equations suggest that a point source of a different color will have a diffraction pattern of a different shape/size, not a different brightness.
Yes, the diffraction pattern is wavelength dependent. That's true.
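
For a rough sense of the scale of that wavelength dependence (the aperture here is an arbitrary example value, and 1.22 λ/D is the standard circular-aperture result, not JWST's actual PSF):

[code]
import numpy as np

# Back-of-the-envelope check (the aperture is an arbitrary example value):
# for a circular aperture the first Airy minimum sits at theta ~ 1.22*lambda/D,
# so the whole pattern of a redder source is scaled up by lambda_red/lambda_blue.
D = 0.15                                     # aperture diameter in meters (assumed)
for name, wavelength in (("blue", 450e-9), ("red", 650e-9)):
    theta = 1.22 * wavelength / D            # radians
    arcsec = np.degrees(theta) * 3600.0
    print(f"{name:>4}: first Airy minimum at ~{arcsec:.2f} arcsec")
print(f"size ratio red/blue = {650/450:.2f}")
[/code]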
 
  • #52
Suppose we have 2 equal intensity (photon per second) point sources with different wavelengths. The detectable diffraction spikes of the redder point source will be larger in pixel radius. Or with long enough integration time and low enough noise, both will extend outside the image frame.
[attached image: the Polaris exposure from post #37]
 
  • #53
[attached images]
 
  • #54
collinsmark said:
If there are no photons at all, then of course there will be no photons detected.

But if there's even a dim source, the Central Limit Theorem disagrees with you.

[...]

Of course there may be practical limitations in any real world system. Of course. But saying that it's not possible, even in principle, is incorrect.
I am saying it is For All Practical Purposes not possible. I don't recall saying it was impossible in principle but given the finite life of stars I am inclined to think it is.

In your calculation there will come a point along the diffraction spike where the diffraction pattern will fall below the general sky background, so your assumption is invalid.

We shall just have to disagree on this.

Regards Andrew
 
  • #55
Forget stars, suppose we have 2 satellites within the field of view, each with a monochrome laser point source. The red laser has a photon count per second of 10k and the blue laser 10k+1. Which has the larger diffraction spike: the brighter blue satellite or the dimmer red?

[attached image]
 
  • #56
andrew s 1905 said:
In your calculation there will come a point along the diffraction spike where the diffraction pattern will fall below the general sky background
This is the nub of the problem. Even the JWST is working in marginal conditions, even if the margins have been changed a lot. If they were operating the scope well within those margins then they would be wasting many billions of dollars.
@collinsmark seems to be insisting that the limited model of his maths is all that needs to be considered, but in the limited situation of a 16 bit sensor and the presence of many other interfering low-level sources, there is a very real limit to how far the maths will follow reality. This is not a problem, but it's what limits what we can see.
 
  • #57
andrew s 1905 said:
I am saying it is For All Practical Purposes not possible. I don't recall saying it was impossible in principle

Oh. "For All Practical Purposes" is what you meant by "FAPP" when you said:

andrew s 1905 said:
the probability of detecting a photon is FAPP zero

I see now that "FAPP" is an acronym. That's the first time I've ever heard that word being used as an acronym. I thought you were shouting an expletive to emphasize your point. (To pound home your point, so to speak.)

I'm sorry. My bad. I think we are in agreement then. For practical reasons, yes, there are limitations. Of course.

By the way, if you didn't know, "fapp" has a much more commonly used colloquial meaning in the contemporary English language (i.e., slang) that I won't repeat here. 'Figure I should point that out so you know.

andrew s 1905 said:
In your calculation there will come a point along the diffraction spike where the diffraction pattern will fall below the general sky background

Not necessarily, because the sky background alone can be treated as a form of noise. The variation (and standard deviation) of a patch of sky background in an otherwise boring patch of sky tends toward zero with increased total integration time. That means that even an arbitrarily small blip above the background is detectable. Sure, it might not have been detected in any one given image. But such blips are detectable with sufficient total integration time.
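
Here's a toy illustration with my own made-up numbers (Poisson sky of 500 counts per frame, plus a source adding only 1 count per frame): the blip is hopeless in a single frame, but it stands out once the averaged background's noise has shrunk by [itex] \frac{1}{\sqrt{N}} [/itex].

[code]
import numpy as np

rng = np.random.default_rng(3)

# Toy numbers of my own: a sky background of 500 counts/frame and a source
# that adds only 1 count/frame on top of it. In a single frame the blip is
# buried (the background alone fluctuates by ~22 counts), but the noise of
# the *averaged* background shrinks like 1/sqrt(N).
sky, blip = 500.0, 1.0
for n_frames in (1, 10_000, 1_000_000):
    sky_mean = rng.poisson(sky, n_frames).mean()
    src_mean = rng.poisson(sky + blip, n_frames).mean()
    sigma = np.sqrt(sky / n_frames)              # noise of the averaged sky level
    print(f"N = {n_frames:>9,}: source - sky = {src_mean - sky_mean:6.2f}, "
          f"background noise ~ {sigma:6.2f}")
[/code]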
 
  • #58
collinsmark said:
I see now that "FAPP" is an acronym. That's the first time I've ever heard that word being used as an acronym. I thought you were shouting an expletive to emphasize your point.
I certainly prefer, when a non-obvious acronym is used, that it is clearly spelled out at its first usage. If it is only used once, lose the acronym and do the work to make your communication clear.
 
  • #59
collinsmark said:
The variation (and standard deviation) of a patch of sky background in an otherwise boring patch of sky tends toward zero with increased total integration time.
But the stars are stationary and will have their own images after a long exposure - not a grey background.
 
  • #60
sophiecentaur said:
This is the nub of the problem. Even the JWST is working in marginal conditions, even if the margins have been changed a lot. If they were operating the scope well within those margins then they would be wasting many billions of dollars.
@collinsmark seems to be insisting that the limited model of his maths is all that needs to be considered, but in the limited situation of a 16 bit sensor and the presence of many other interfering low-level sources, there is a very real limit to how far the maths will follow reality. This is not a problem, but it's what limits what we can see.

I've said this before, but I'll try to say it again in different words. The bit depth of the sensor hardware is not the end-all-be-all of the overall bit depth that the camera is capable of achieving. You can increase the effective bit depth to some degree (sometimes to a very significant degree) through the process of stacking multiple subframes.

I think it's best now that I explain with examples. I'll start with some real-world examples and finish with a hypothetical, yet extreme example.

Here's an image of Mars that I took with my backyard telescope a couple of years ago:
[attached image: Mars]


The color resolution (bit depth) seems very smooth, does it not? However, the image was taken with an 8 bit sensor! The sensor only had 256 levels. Yet look at the final image. The final image has way, way higher level resolution than 8 bits. How did I do that? Integration. The final image was composed by integrating thousands of individual subframes.

Check out this Hubble Ultra Deep Field image (I didn't take this one; Hubble Space Telescope [HST] did):
[attached image: Hubble Ultra Deep Field (heic0611b)]


HST pointed to a very, very unpopulated (unpopulated by nearby stars, nebulae, etc.) patch of sky, taking many individual 20 minute subframes and stacking them to achieve this final image.

One might say, "that's impossible, all those galaxies would just blend into the background sky glow." Well, no, it's not impossible. As HST shows us here, it is possible. As I've said in a previous post, the standard deviation of the background sky glow tends toward zero with increased total integration time, and thus the background sky glow can be subtracted out.

One might also say, "that's impossible. The detail in objects so dim would be less than 1 ADU of the bit-depth of HST's sensor." Sure, some of the detail was less than 1 ADU of the sensor for a single, 20 minute exposure, but HST gained more bit depth by integrating many individual sub-exposures.

The image was taken with 4 different filters. All subexposures were approximately 20 minutes each. For the two shorter wavelength filters, 112 individual subframes were stacked for each filter. For the two longer wavelength filters, 288 individual subframes were stacked for each filter.

That makes for a total integration time of over 11 days.

And no, integrating a whole 11 days worth of subframes to produce the "Hubble Ultra Deep Field" image, instead of settling on a single, 20 minute exposure, is not a waste of many billions of dollars.

I'm not just pulling math out of my butt. This is how real-world science is done. Right here.

------------

Now for a hypothetical, extreme example. Consider a 1 bit camera. Each pixel can register either on or off.

For the purposes of this example, assume that the camera has a high quantum efficiency and the camera is operating near unity gain, ~1 [itex]\mathrm{e^-}[/itex]/ADU. Also, for this hypothetical example, assume the sensor's read noise is small.

Now, put the camera on a tripod and point it at your favorite sleeping kitten, where there are both bright and dark regions (maybe the cat is sleeping in a ray of sunlight from the window). Adjust the exposure time such that some pixels are consistently black over many different subframes, some pixels are consistently white over many different subframes, and the rest of the pixels randomly alternate between black and white from one exposure to the next, to some varying extent from pixel location to pixel location.

Now take and record 255 separate subexposures. You'll find that when analyzing the data, in the really dark regions, some pixels are black in all 255 subframes. But a few pixels are white in 1 of the 255 frames. Moving to a slightly brighter region, there are pixel locations that are white in 2 of the 255 frames. In the really bright regions, some pixels are white in all frames. But some pixels nearby are white in only 254 of the 255 frames. Others nearby are white in only 253 of the frames. In regions of neutral brightness, the number of white pixels seems to be consistently around 46 out of the 255 subframes.

Now sum (or average, if you store your data in floating point format) each pixel location over all 255 subframes. Blam! You've got yourself an image with a bit depth of 8 bits. You started with a camera with only 1 bit, and now you have an 8 bit image. There are 256 levels total (0 to 255), from black to white.

Sure, this particular image suffers quite a bit from shot noise, but you can reduce the shot noise by integrating further, and producing an image with a bit depth greater than 8 bits as a byproduct. You'll even find that by doing so, you can eke out more detail in the shadows that were previously, consistently all black.

Isn't math neat?
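
For anyone who wants to play with it, here's a short sketch of that hypothetical 1 bit camera (the "kitten" is just a smooth made-up brightness ramp; unity gain and no read noise assumed, as above):

[code]
import numpy as np

rng = np.random.default_rng(4)

# Sketch of the hypothetical 1-bit camera above (unity gain, negligible read
# noise): each pixel can only report "saw at least one photon" (1) or "saw
# none" (0), yet summing 255 such frames gives values from 0 to 255.
height, width, n_frames = 64, 64, 255

# A made-up scene: mean photons per exposure ramps smoothly from ~0.01
# (deep shadow) up to ~5 (bright, sunlit fur).
scene = 0.01 + 5.0 * np.outer(np.linspace(0, 1, height), np.linspace(0, 1, width))

stack = np.zeros((height, width), dtype=np.int32)
for _ in range(n_frames):
    photons = rng.poisson(scene)        # photon arrivals for this exposure
    one_bit_frame = photons >= 1        # all the camera can record: on or off
    stack += one_bit_frame              # summing 1-bit frames builds bit depth

print("levels in one frame    :", len(np.unique(one_bit_frame)))   # 2
print("levels in the 255-stack:", len(np.unique(stack)))           # up to 256
print("stack range            :", stack.min(), "to", stack.max())
[/code]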
 
  • #61
sophiecentaur said:
But the stars are stationary and will have their own images after a long exposure - not a grey background.
When I said "stationary," before, I meant that in a statistical sense. I mean a stochastic system that has random properties, but such that the statistics governing the randomness are not changing. In other words, things in the process can change randomly, but the statistics do not change.
https://en.wikipedia.org/wiki/Stationary_process

So long as the star's true brightness is statistically stationary, the standard deviation of its brightness (and that of any of its diffraction artifacts) in the integrated image tends to zero as total integration time increases.

Yes, this could present issues for Cepheid variable stars, depending on the star's period and the integration strategy.
 
  • #62
I tried another diffraction test with 6 spikes, but only so-so results. My exposure times were limited by thin clouds and Bortle 6 light pollution. This is a composite of around 120 exposures of Polaris, about 15 seconds per exposure at ISO 6400, with wire over the lens to create a 6-spike diffraction pattern.
[attached images]
 
  • #63
collinsmark said:
see now that "FAPP" is an acronym. That's the first time I've ever heard that word being used as an acronym. I thought you were shouting an expletive to emphasize your point. (To pound home your point, so to speak.)
My apologies, it was not my intention to shout. It is used a lot on this forum, which is where I came across it, so I did not spell it out.
Again sorry for the confusion.

Regards Andrew
 
  • #64
collinsmark said:
So long as the star's true brightness is statistically stationary, the standard deviation of its brightness (and that of any of its diffraction artifacts) in the integrated image tends to zero as total integration time increases.
This is true but hardly relevant in a situation where a single exposure is made. There is a limit to how much you can give it if you want to avoid burning out the brightest stars.
Then, you are stuck with the 16 bit limit unless you use stacking, which is a nonlinear process, so all simple bets are off.

This thread has gone in circles so we really don't know what we agree about and what we disagree about but everyone seems reasonably well informed. As far as I can see, we started off with the fact that bright star images seem to have 'longer' spikes than dim ones. Somewhere in the discussion we have explained why and now we're half in and out of theory and practice. Aren't we done? :smile:
 
  • #65
Well, since the first maximum of an Airy disc of red light has a larger radius than the first maximum for blue light of equal intensity, if we're imaging laser point sources I wonder if the spikes would be longer for a red laser compared to an equal intensity blue laser. With the question posed this way we don't have to worry about sky background or sensor sensitivity.

[attached image]


When I look at this test photo I did last night, to my eye the tips of the spikes are red, then a bit inward we have green, and further inward still we have blue.
 
  • #66
Devin-M said:
I wonder if the spikes would be longer for a red laser compared to an equal intensity blue laser.
They will have a 'length' roughly proportional to the wavelength (as will the spacing of any 'dots'), that is, if everything else is controlled.
But images of sharp objects in space are mostly not monochromatic, so we're really looking at the colour fringing at either end of the spikes, and the differences are hard to discern.
 
  • #68
sophiecentaur said:
This is true but hardly relevant in a situation where a single exposure is made. There is a limit to how much you can give it if you want to avoid burning out the brightest stars.
Then, you are stuck with the 16 bit limit unless you use stacking, which is a nonlinear process, so all simple bets are off.

This thread has gone in circles so we really don't know what we agree about and what we disagree about but everyone seems reasonably well informed. As far as I can see, we started off with the fact that bright star images seem to have 'longer' spikes than dim ones. Somewhere in the discussion we have explained why and now we're half in and out of theory and practice. Aren't we done? :smile:
I may have been contributing to this thread trending in different directions because I like this thread and don't wish it to become a source of misinformation. So when there is a false or misleading claim I can't in good conscience just sit by and let that false claim go unchallenged. And it seems there are many in this thread, which is why, I assume, it's taking twists and turns.

Since you bring up the history of the particular circles, I'll try to consolidate them.

It may have started somewhere around the time of:
sophiecentaur said:
Of course the spacing will be the same but a fainter image will not produce such long spikes because lower amplitude dots are not recordable.

This is subtly incorrect. Yes, maybe in that particular image the dots were not recorded. In that image. Sure. But the dots are not a byproduct of the camera or any part of the imaging hardware. The dots are the result of diffraction/interference in the optics, including the Bahtinov mask in this case. And those diffraction dots are, in fact, recordable. So to say that the dots are not recordable isn't true. Therefore I objected.

---
Then, immediately after that, there was some misinformation about the properties of Full Width Half Maximum (FWHM) by another poster that has since been recanted, so I won't repeat it here. All of that got sorted out. But that did take a few posts to get through.
---

Then there is another:
sophiecentaur said:
Your data near the peak can be considered to be linear but it eventually ends up being non-linear

This is untrue because the spikes themselves are linear all the way down to the quantum level (and the wavefunction of quantum mechanics [QM] is completely linear as far as anybody can tell for certain).

And if you're not talking about the linearity of the spikes themselves, but limitations of the camera sensor, the camera sensor's response can be made linear with arbitrarily high precision either by increasing the exposure time if saturation is not a concern, or by stacking. (And thus at this point the process of stacking entered into the discussion.)

Also in the same post, we had this separate idea that's not correct:
sophiecentaur said:
The bit depth of the cameras appears to be 16, which corresponds to a maximum relative magnitude of about 12, at which point the artefacts will disappear. That implies a star that is exposed at the limit will only display artefacts down to a magnitude of 12, relative to the correctly exposed star.

That claim is false. The 16 bit depth of the sensor is not the limiting factor. In later posts I explained why, complete with examples and some practical mathematics to achieve increased effective bit depth. But this 16 bit sensor seems to be repeatedly brought up as an insurmountable limitation, even though in and of itself, it is not. Of course there are real-world, practical limitations regarding the dimmest things it is possible for JWST to resolve. But this least significant step of the 16 bit ADC, by itself, isn't one of them.

---
I'll mention that at some point somewhere around there I had a miscommunication with another poster about FAPPs and fapping, but that ended up getting all sorted out, so I won't rehash the details here.
---

With this next post I briefly thought we were in agreement about all of this stuff I just discussed above:
sophiecentaur said:
OK, I already accepted that the sensor itself is linear over a wide range and that stacking will allow a substantial increase in effective bit depth by averaging out random noise. I also know that the maths of diffraction goes on and on, as far down as you like.

So for a moment I thought that we had resolved the disagreements.

But I guess not, because soon after we have:
sophiecentaur said:
It's very misleading because, in a real image there is not a point source and also there are a number of other sources in the vicinity of the low level parts of a diffraction spike. This constitutes a 'floor' which can be significantly above the least significant step in the ADC.
Which again repeats the false claim that the least significant step in the ADC is some sort of insurmountable fundamental limit. It's not.

Furthermore, something new is mentioned about requiring a point source and overlapping patterns. Diffraction applies to all sources, not just point sources. Even if the orientation of JWST is such that the image of a dimmer star lies within the diffraction spike of a brighter star, it is still theoretically possible to resolve the brightness of the dimmer star; it just might take more total integration time and knowledge about JWST's diffraction characteristics (which scientists are aware of). Will the brightness of the dimmer star necessarily be recorded in any given image? No. But it is recordable. I personally don't take serious issue with any of this until it's brought back to the ADC. Yes, overlapping diffraction patterns complicate matters for sure, but let's keep the ADC out of it.

Then this came out of the blue:
sophiecentaur said:
So we agree on that. The only way to increase the effective number of bits is by using multiple images and you say there's no stacking.
Boldface mine.

I never said there was no stacking! Of course there's stacking! There's always at least some stacking. In the JWST image that I posted, which shows the stacking/overlap, I can see regions of at least 6, maybe 7 overlaps (could be more), and the majority are around 4 or more overlaps.

And here again with the 16 bit thing:
sophiecentaur said:
The linearity failure at low levels can destroy recorded spikes and 16 bits is where linearity fails. Some spikes are never there in 16 bit images.

I'm repeating myself. But 16 bits is not where linearity fails. It may be the case for a single subframe, but there's always at least a little stacking, which increases the bit depth in the resulting image. And if science dictates, and more detail is necessary, JWST can always be told to increase its total integration time on that particular target in question and stack more subframes.

And regarding the overlap of diffraction patterns (such as diffraction spikes of brighter stars overlapping the central spot of the diffraction pattern of a dimmer star), yes that makes things more complicated, but not insurmountable. And none of it fundamentally limited by the "16 bit" aspect of the sensor.

And then this post broke my heart:
sophiecentaur said:
@collinsmark seems to be insisting that the limited model of his maths is all that needs to be considered, but in the limited situation of a 16 bit sensor and the presence of many other interfering low-level sources, there is a very real limit to how far the maths will follow reality. This is not a problem, but it's what limits what we can see.

Good grief. Where to start. Not only does the tone of that post start off as snide and condescending, it's wrong. It's wrong in several ways:
  1. Of course there's more to it than just the basic tenets, theorems, and implementation of information theory, statistics, and mathematics, and I never said there wasn't anything more to it. But just because there is more to it, those simplistic, basic, theoretical ideas are essential considerations for this topic, and cannot be ignored if further understanding is to be achieved.
  2. There we go again with the 16 bit sensor limitation. Implying that this as a fundamental, insurmountable limitation is wrong.
  3. We can follow the maths all the way down to the quantum level with incredible precision. And this is true, even as applied to JWST images. As a matter of fact it's crucial, since in some cases we are talking about individual photons and their paths which are subject to diffraction and self interference patterns (not dissimilar to the double slit experiment). Implying that the maths and physics don't track reality at that level or above is wrong.

And then, since my last post, there's a brand new incorrect claim:
sophiecentaur said:
This is true but hardly relevant in a situation where a single exposure is made. There is a limit to how much you can give it if you want to avoid burning out the brightest stars.
Then, you are stuck with the 16 bit limit unless you use stacking, which is a nonlinear process, so all simple bets are off.

Gah! The stacking process is not nonlinear. With the exception of cosmic ray/hot pixel rejection and removal (which might be considered part of the stacking process), it's linear in the
  • astrophotography sense: being performed well before the "stretch" or "curves" are applied (stacking is done in the process flow immediately after calibration frames are applied to the raw data from the sensor)
  • the mathematical sense: for the stacking operation [itex] \mathcal{S} [/itex], [itex] \mathcal{S}(x + y) = \mathcal{S}(x) + \mathcal{S}(y) [/itex] and [itex] \mathcal{S}(a x) = a \, \mathcal{S}(x) [/itex],
  • and any other sense that I can think of.
Cosmic ray/hot pixel identification, rejection and removal fit in nicely as part of the stacking processes. But this isn't a necessary part of stacking. You could still do stacking without it.

Doing just the stacking alone, the mathematical operation is linear. As a matter of fact, it's mathematically equivalent (statistically speaking) to taking a longer exposure with unlimited bit depth (zero risk of causing saturation beyond that of the subframes), at the expense of additional read noise.

And this addition of read noise does not make the process nonlinear. (Any more than the addition operator is nonlinear, which it isn't.) Nor does this read noise present an insurmountable limitation if the read noise is uncorrelated/(statistically) stationary. The overall effect of the read noise can be reduced via the Central Limit Theorem, just like any other uncorrelated, (statistically) stationary noise source.
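
Here's a toy numerical comparison of the two approaches (all the numbers are my own, arbitrary choices): N short frames summed versus one frame N times longer. The stack's only statistical penalty is paying the read noise N times, and the combining operation itself is nothing but addition.

[code]
import numpy as np

rng = np.random.default_rng(5)

# Toy comparison (numbers are my own): summing N short frames vs. one frame
# N times longer, same total signal. Statistically the only difference is
# that the stack pays the read noise N times (sqrt(N) more read noise in the
# sum) -- the combining operation itself is just addition, i.e. linear.
signal = 50.0     # mean e- per short frame
read   = 3.0      # read noise, e- RMS per readout
N      = 100      # frames in the stack
trials = 20_000   # repeat the "observation" many times to measure its noise

stack = rng.poisson(signal, (trials, N)).sum(axis=1) + \
        rng.normal(0.0, read, (trials, N)).sum(axis=1)
long_exposure = rng.poisson(signal * N, trials) + rng.normal(0.0, read, trials)

print(f"stacked       : mean {stack.mean():7.1f}, noise {stack.std():5.1f} e-")
print(f"long exposure : mean {long_exposure.mean():7.1f}, noise {long_exposure.std():5.1f} e-")
print(f"predicted     : sqrt({signal*N:.0f} + {N}*{read}^2) = "
      f"{np.sqrt(signal*N + N*read**2):5.1f} vs sqrt({signal*N:.0f} + {read}^2) = "
      f"{np.sqrt(signal*N + read**2):5.1f}")
[/code]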

You're right to be concerned about saturation during a single exposure. Saturation is a primary consideration for determining the exposure length of subframes. After that, enough frames are stacked to bring about the necessary detail and signal to noise ratio (SNR) determined by the scientific needs of the object being imaged. If the science requires more effective bit depth or better SNR, more frames are stacked.

The read noise penalty for stacking is small, although it's not negligible, so it's something that is considered and dealt with. But it's in no way an insurmountable obstacle. And to the point of the last claim, there's nothing inherently nonlinear about it.

------------------------------------------------

Summary:

There are real, true, insurmountable limitations to JWST's capabilities, both theoretical and practical. I've never said otherwise. But things like the finite bit depth of the hardware sensor and the residual, thermal glow of background space are not among them.

Things such as the overlap between the diffraction pattern of one source and that of another: that makes things complicated (but can be dealt with). Scientists analyzing data must take diffraction into account all the time, with every image. But they do that. That's part of the process. But nothing about that is insurmountable.

There are many, many things that make gathering scientific data from JWST or any other telescope complicated. But not all of them are insurmountable. Many obstacles brought up recently in this thread, while they may have their own complications, and require careful considerations, are not truly limitations. JWST, science and math have ways of getting around them.

So what are some of the true limitations? There are many. Here are a few biggies:
  • Time. Time spent gathering data on any one given target means less time for gathering data on other targets. JWST has a finite lifespan.
  • Aperture. While JWST's primary mirror may be large, it does limit the angular resolution that it can resolve. It also limits its light gathering rate (although this one points back to the previous bullet).
  • Stochastic, yet (statistically) nonstationary, dim sources. If a source is changing in a non-predictable way, and those changes are occurring fast enough given how bright it is, JWST may not be able to reliably detect those changes in detail.
And while there could be more, nowhere on that list is the 16 bit thing. And stacking is not inherently nonlinear.
 
  • #69
@collinsmark I'm sorry but your post is far too long - unnecessarily so for anyone to bother to read thoroughly. With respect, it says little more than "I (i.e. you) was right all the time."

You took exception to a comment that diffraction spikes are all different lengths. Can you show me an image full of stars with equal (displayed) spikes all over it? Perhaps the actual wording of the statement implied something to you that was not intended. No one has suggested that diffraction theory is wrong; diffraction assumes linearity.

Your list of bullet points demonstrates that you weren't actually reading what people had written and you are inconsistent. You started off saying there isn't stacking and now you say there is stacking. You talk about the central limit theorem but that involves unlimited exposure time, which will, of course, saturate a sensor. You hop from 16 to 32 bit quantisation and you ignore any effect of nearby stars. 16 bits (any sampling and quantising, aamof) introduces non-linearity.
You state that stacking is inherently linear but a linear sum of many images is a very crude form of stacking. Each pixel of a stacked image can use and process (e.g. median) selected values from all available pixels. I can't see how that is linear.
We are stuck with FAPP and the simple maths fails at some stage.
 
  • #70
sophiecentaur said:
Your list of bullet points demonstrates that you weren't actually reading what people had written
sophiecentaur said:
@collinsmark I'm sorry but your post is far too long - unnecessarily so for anyone to bother to read thoroughly.

@sophiecentaur, Who's not reading what other people have written? Could you clarify that? Do you even read what you write?!

sophiecentaur said:
You took exception to a comment that diffraction spikes are all different lengths. Can you show me an image full of stars with equal (displayed) spikes all over it?

Sure. Right here. @Devin-M was kind enough to generate one with direct experiment. Those two stars are of different brightness (as acquired directly from the camera). But when adjusting for the intensity, it's quite clear that the diffraction patterns are of the same size.

[attached image: the adjusted comparison of the two stars from post #37]


------
Edit:
sophiecentaur said:
Perhaps the actual wording of the statement implied something to you that was not intended. No one has suggested that diffraction theory is wrong; diffraction assumes linearity.
My objection was your statement where you said the spikes were not "detectable" (italics mine). I don't have an objection to saying that they were not detected in a particular image. But they are in fact detectable.
---------

sophiecentaur said:
You started off saying there isn't stacking and now you say there is stacking.

I never said JWST does not use stacking. I kindly request that you do not put words in my mouth. I've never said that in my life. Not on PF, not anywhere.

That said, I'd be tickled to know where you think I said that. Which post and where. Show me a quote, if you would be so kind.

sophiecentaur said:
You talk about the central limit theorem but that involves unlimited exposure time, which will, of course, saturate a sensor.

No it doesn't. It essentially says, among other things, that the standard deviation of a set of averaged trials decreases by a factor approaching [itex] \frac{1}{\sqrt{N}} [/itex] compared to the standard deviation of the original set of trials. It says absolutely nothing about saturating sensors.

sophiecentaur said:
You hop from 16 to 32 bit quantisation and you ignore any effect of nearby stars. 16 bits (any sampling and quantising, aamof) introduces non-linearity.

The input is already quantized. From the very beginning. It starts with individual, quantized photons. From there, we merely count them. We sum them. Sometimes we divide the results by a number. Nothing about any of that is nonlinear.

We can put those numbers in a 16 bit register or a 32 bit register. We can store those numbers in a file using 16 bit format or 32 bit floating point format. It doesn't matter.

Saturation is nonlinear, but that doesn't happen when stacking. That is one of the primary motivations to use stacking in the first place: it avoids saturation and nonlinearity. Stacking is about as linear as linear can get.

sophiecentaur said:
You state that stacking is inherently linear but a linear sum of many images is a very crude form of stacking. Each pixel of a stacked image can use and process (e.g. median) selected values from all available pixels. I can't see how that is linear.

What do you think stacking is?

Averaging (as in "mean" -- not median) pixels from different subframes is what stacking is. The addition operator is a very linear operator. Division (by the number of elements summed) is a linear operator. What's not linear about that?!
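
If it helps, here's a tiny sanity check with arbitrary fake frames: a mean-combine distributes over a sum of signal components (it's additive, i.e. linear), while a median-combine, the kind of thing used for outlier rejection, generally does not.

[code]
import numpy as np

rng = np.random.default_rng(6)

# Quick sanity check of the linearity claim with arbitrary fake data:
# a mean-combine of frames is additive (linear), a median-combine is not.
a = rng.normal(10.0, 2.0, size=(7, 16, 16))   # 7 "frames" of one signal component
b = rng.normal(3.0, 1.0, size=(7, 16, 16))    # 7 "frames" of another component

mean_additive = np.allclose(np.mean(a + b, axis=0),
                            np.mean(a, axis=0) + np.mean(b, axis=0))
median_additive = np.allclose(np.median(a + b, axis=0),
                              np.median(a, axis=0) + np.median(b, axis=0))

print("mean-combine additive?  ", mean_additive)     # True
print("median-combine additive?", median_additive)   # essentially always False
[/code]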

sophiecentaur said:
We are stuck with FAPP and the simple maths fails at some stage.

Where does the math fail? Show me, please.

How might you suggest informing astronomers who use JWST, Hubble (HST), and pretty much any telescope around the world that their stacking algorithms -- algorithms that they've been using for decades -- are all failures? Do you propose invalidating the countless academic papers that relied on astronomical data that invariably was produced, in part, using the same general mathematical principles and theorems discussed here?
 
