

Cameras and detectors


As "camera" we define here the electronic detector, the case it is mounted in, and the electronics and software that go with it. This excludes the lens, which we deal with under "telescopes and lenses". For a dSLR we would consider only the body to be the camera. A compact digital camera is a camera with a lens thrown in, which cannot be removed. A webcam usually comes with a lens that can be removed. A CCD camera specialised for astrophotography comes without lens.

Type of camera

Compact digital camera

Compacts usually have a zoom lens to allow wide-angle and mild telephoto photography. Travel zoom compacts have significant zoom factors and can perhaps replace a dSLR with two modest zoom lenses. Even so, the zoom factors are not sufficient to photograph typical astronomical objects like the Moon, a star cluster, an interstellar nebula or a galaxy. If you are going to buy a compact, make sure it can take long exposures (30 s or more). Also useful is the "bulb" exposure setting, which allows you to take exposures as long as you like.

In general, you want the option to do things manually. Automatic exposure metering does not work at night; auto-focus does not work on faint or tiny objects. If you may have to stack multiple exposures or have to significantly boost the brightness of pictures, make sure the camera can take raw images with more than 8 bit and without JPEG compression.

If you are going to buy one of these cameras, perhaps you should consider a dSLR instead. If you have a compact already, use it for astronomical objects that it is suited to. Go for large objects: halos, aurorae, constellations, the Milky Way, etc. Small bright objects are accessible, too: Mount the camera afocally behind a telescope for detailed views of the Moon, or to capture telescopic views of the bright planets.

Digital single lens reflex camera (dSLR)

dSLR and travel zoom compact
A digital SLR and travel zoom compact camera side by side. The compact has 12x zoom, similar to the combination of the two lenses shown for the dSLR.

The dSLR is the big sister of the compact camera. The standard lens makes this camera roughly equivalent to the compact. The dSLR is bigger, because its roots go back to the film SLR of the mid to late 20th century. The old lenses can be used and the detector size is similar to the frames on 35 mm film. While these cameras are not so easy to carry around, the larger lenses and larger pixels make for better images. A larger lens has better diffraction-limited resolution; larger pixels collect more photons with less noise. The dSLR is also more likely to let you override the automatic features when they cannot cope with the strange images you take as an astronomer.

The principal difference from the compact is that the lens can be removed. You can buy a range of photo lenses from fish-eye to super-tele, or you can fix the camera to the back of a telescope for a seriously large image scale.

CCD camera for astronomy

Amateur astronomers can also buy CCD cameras similar to those used by professionals. These are expensive and useless for your holiday snaps. They come without a lens and need a computer for power, for control, and to store images. Proper CCD cameras are monochrome. They should be actively cooled to reduce noise. Their digitiser can probably generate 16-bit numbers, compared to the typical 14 bit in raw dSLR images.

These cameras are used by serious amateurs, often in the context of scientific or quasi-scientific work, such as photometry and astrometry. The preferred mode of working in that context is to have a monochrome camera and use filters to select the colour of light that is of interest.

Without a filter these cameras have good sensitivity in the extreme red and near infrared. Deep-sky photographers may prefer CCD cameras over compacts or dSLRs, which have built-in infrared-blocking and colour-correcting filters that also block a lot of extreme red, including the Hα line of hydrogen that gives HII regions their characteristic colour.

Webcam

webcam
The Philips ToUcam Pro webcam as of 2002.

The webcam is the complete opposite of a CCD camera. It is designed for real-time video and generates quite poor 8-bit images. It is limited to very short exposure times. It is also very cheap, with a pair of webcams available for perhaps £20 from your local supermarket. It may appear cheap, but you also need a computer to do anything with it.

A field of astrophotography has developed where the webcam is the weapon of choice. The webcam can take vast numbers of short exposures in very little time, and they pile up immediately on a computer where they can be stacked quite efficiently. This makes the webcam ideal for imaging detail on the bright planets.

Recent compact and dSLR cameras can also be used as a webcam, as they tend to support recording video. The detector will be of a higher quality than in a webcam, and these cameras have their own power source and data storage; no need for a computer during the exposure. The recording quality of the video may also be better than in traditional webcams. However, the user's control over the exposure is more limited for video than for still photography, and this may render the video mode useless for certain objects.

Which camera for the job?

Stefan Seip (2008) gives a table of how suitable each of the four camera types is for particular uses:

Camera types and their use: each of the compact, dSLR, webcam and CCD camera is rated from −− to ++ for scenic shots during twilight; star trails; wide field (constellations or the Milky Way); the Sun, the Moon and their eclipses; planets; special events (e.g. meteors, aurora, halo); large and bright deep-sky objects; and small and faint deep-sky objects.

If you use the video mode of a compact or dSLR, consult the webcam ratings.

Although the compact is not very good at anything, it can be used in a variety of instances. A dSLR will always be better, can be used for almost anything and is very good overall. The only advantages a compact has over the dSLR are that it is easier to take with you and that it is ready to use without having to change lenses. The webcam has its niche use where nothing can compete with it. The CCD camera is better than the dSLR only for deep sky imaging.

Attaching the camera to the telescope

Having defined the camera as being without lens, we need a way of mechanically bringing together the two. To put a webcam on the telescope the general approach is to make up something cheap and simple. Remove the eyepiece from the telescope and the lens from the webcam. Cut an old film container into a tube and tape this in front of the webcam. The tube will probably fit into the telescope as if it were an eyepiece.

For compacts, the approach is sometimes even more ad hoc. Afocal projection will be used, so the telescope has an eyepiece in it. The camera replaces the human eye and can be held freehand behind the eyepiece. To fix the camera more permanently to the telescope you probably need some Heath Robinson contraption involving brackets around the eyepiece and the camera lens combined with several levers and joints. Digiscoping seems to be the buzzword for this, derived from mounting a digital compact behind a spotting scope.

T2 system
The T2 system, here for eyepiece projection.

A CCD camera may have a T2 thread and you can buy a T2 adapter for your telescope as well. T2 is an M42x0.75 thread, i.e. a thread of 42 mm diameter with a pitch of 0.75 mm per revolution. Astrophotography suppliers offer all sorts of gadgets for T2 assemblies, including extension tubes for eyepiece projection and T-junctions to divert light into an eyepiece for guiding. You can also get a T2 adapter for your dSLR, making it in this respect equivalent to a CCD camera. Conversely, you can possibly get a T2 adapter for your dSLR lenses, so that the CCD camera can be put behind them for a wider field of view than through the telescope.

The picture illustrates the T2 system. At the bottom is the back of a small refractor with the focuser and a 2 in. tube for eyepieces. Partly inserted is an adapter to the T2 (M42x0.75) thread. The first loose piece is a T2 piece that can act as (i) an adapter for a 1.25 in. eyepiece like the one inserted, (ii) a T2 extender piece with female and male threads at either end, or (iii) both of the above, to attach further extenders and a camera for eyepiece projection. The eyepiece part is followed by a combination of a 10 mm and a 20 mm T2 extender, then an adapter from T2 to the camera body's bayonet. Finally comes the camera body itself (which has a clip-in filter inserted). (This is a complex example; usually one would just use the 2 in. to T2 converter and the T2 to camera body converter.)

adjustable piggyback mount
A camera with lens mounted piggyback on a small refractor. In this case a guide scope mount is used, meaning that the knob and wheel on the mediating unit serve to adjust the pointing direction of the camera relative to the telescope by a degree or two.

The section title can be understood in a different way – how to mount camera and optics in parallel to another telescope, i.e. mounting an imaging instrument piggyback on a tracking instrument. The motive can be just to use the mount that belongs to the telescope for tracking; the telescope might then just be dead weight. But one can also use the telescope as a guide scope to accurately follow a guide star while the piggyback optics take an image of a different object nearby.

Piggyback mounts tend to be very specific to the telescope that carries the piggyback imager. In many cases the imager is mounted with the common photo tripod thread to the piggyback mount. For larger imaging optics this will have a dovetail similar to the main telescope, making the piggyback mount also heavier and more complex.

An elegant alternative to piggybacking a small imager onto a larger telescope is to mount two similar optics in parallel. The mount head is turned by 90° and takes a lateral dovetail. This has two dovetail bases attached at 90°, making for an H-shaped dovetail triplet. The two parallel dovetail bases mount the guide telescope and the imager on equal terms.

Detector

The light-sensitive detector in a digital camera is a rectangular area of semiconductor, subdivided into small – in most cases square – pixels. Quantum physics is at play once again: when a photon of light passes its energy to an electron of the crystal, this liberates the electron from its atom and enables it to move freely through the crystal. However, during the image exposure the electron is confined to its own pixel and cannot move further than that. Not every arriving photon will create a free electron; the chances are somewhere between one half and one, depending on the colour of the photon and the quality of the detector. In "good old" film the odds are a lot worse: only about 10% of photons or fewer have any chemical impact on the emulsion.

After exposure, the electrons collected in each pixel can be "read out" of the pixel array. One way or another they become an electric signal in the detector electronics, the current is amplified, and then converted from analogue current or voltage to digital numbers ready to be transferred and processed like computer data. This is what your camera delivers as raw format, if it supports raw format.

Pixels, resolution, field of view

The size of the pixels limits the resolution of the images. We need to know what angle on the sky corresponds to one pixel in the image plane. We also need to know what angle on the sky the whole detector covers. The following table shows the pixel sizes and detector sizes of a few cameras.

Pixel sizes and detector sizes.
  dP (μm)   dx (mm)   dy (mm)   resolution   camera
  8.6       22.30     14.90     2592×1728    Canon EOS 600D, Bayer matrix binned
  4.3       22.30     14.90     5184×3456    Canon EOS 600D, Bayer matrix interpolated
  2.9        5.84      3.88     2048×1360    Panasonic Lumix DMC-TZ8, 2.5 Mpx
  5.6        3.58      2.69      640×480     Philips ToUcam Pro, Bayer matrix interpolated

Comparing the compact Lumix to the dSLR, the resolution (number of pixels in the output image) is similar, while the linear sizes are four times smaller. Comparing the Lumix to the ToUcam webcam, the linear sizes are similar, but the Lumix has more pixels after binning and should have far better image quality (before compression to video format).

How these numbers translate to angles on the sky depends on the focal length f. The pixel size translates into an angular resolution and the detector size translates into the field of view on the sky. The table shows some typical combinations of optics and camera.

Pixel-limited resolution and detector field of view (Bayer matrix binned).
  f (mm)   dP (μm)   dx (mm)   dy (mm)   ΔαP     fov           optics and camera
  4        2.9        5.8       3.9      2.4'    71° × 51°     Panasonic Lumix DMC-TZ8, 1×, 2.5 Mpx
  49       2.9        5.8       3.9      12.0"   6.8° × 4.5°   Panasonic Lumix DMC-TZ8, 12×, 2.5 Mpx
  55       8.6       22.3      14.9      0.5'    23° × 15°     Sigma 55-200, Canon EOS 600D
  200      8.6       22.3      14.9      8.9"    6.4° × 4.3°   Sigma 55-200, Canon EOS 600D
  560      8.6       22.3      14.9      3.2"    2.3° × 1.5°   ED80 telescope, Canon EOS 600D
  840      8.6       22.3      14.9      2.1"    1.5° × 1.0°   Telementor II, Canon EOS 600D
  2000     8.6       22.3      14.9      0.9"    38' × 26'     200 mm Schmidt-Cassegrain, Canon EOS 600D
  4000     5.6        3.6       2.7      0.3"    3.1' × 2.2'   200 mm Schmidt-Cassegrain, 2×, ToUcam Pro VGA

Be careful about the resolution listed in this table. The actual resolution of the image is also limited by diffraction and by "seeing" (turbulence in the Earth's atmosphere). Diffraction is less of a consideration for larger lenses. In conventional photography this is why a dSLR will take better pictures than a compact.

Again, the compact camera appears similar to the dSLR, just with all linear dimensions scaled down fourfold. As a conventional camera, the Lumix travel zoom is lighter and smaller, and with a single 12× zoom lens covers the same range in field of view as the dSLR does with two zoom lenses, 18-55 and 55-200 mm.
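
The angular numbers in the table follow from the focal length f by simple trigonometry: one pixel covers roughly 206265" · dP/f, and each side of the detector covers a field of 2·arctan(d/(2f)). A minimal Python sketch of the same calculation, using the Telementor row as the example:

  import math

  def pixel_scale_arcsec(pixel_um, focal_mm):
      # angle on the sky covered by one pixel, in arcseconds
      return 206265.0 * (pixel_um * 1e-3) / focal_mm

  def field_of_view_deg(detector_mm, focal_mm):
      # angle on the sky covered by one side of the detector, in degrees
      return math.degrees(2.0 * math.atan(detector_mm / (2.0 * focal_mm)))

  # Canon EOS 600D (binned: 8.6 um pixels, 22.3 mm x 14.9 mm) behind the 840 mm Telementor
  print(pixel_scale_arcsec(8.6, 840))                                # about 2.1"
  print(field_of_view_deg(22.3, 840), field_of_view_deg(14.9, 840))  # about 1.5 by 1.0 degrees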

Bayer matrix and colour detection

How does a camera take colour pictures? The incoming photons have a colour, but after conversion to electrons this information is lost; all electrons are identical particles and only their electric charge is evaluated by the detector electronics. One way or another, monochrome images with three different colour filters have to be taken and later combined in software. An elaborate way to do this would be to have three detectors, one with a red filter, one with a green filter and one with a blue filter. The simple way – and you have to do this when using a monochrome CCD camera – is to take a succession of three monochrome images, changing the filter between images from red to green to blue.

For commercial cameras like compacts, dSLRs or webcams to be successful they must do something more clever, devious and acceptable to the consumer. Each pixel has its own little filter that makes it a red, green or blue pixel. The filters are arranged in what is called the Bayer matrix: sets of two by two pixels form the larger unit. Two diagonally opposite pixels in this four-pixel matrix have a green filter, one corner is red and one is blue. All three colours are then spread quite well over the whole detector area, but the naive concept of square pixels situated next to each other is not quite true. Also, the full resolution is somewhat of a fallacy; more honest is to bin the Bayer matrix of four pixels into a single pixel. This can be done very easily with most cameras by reducing the pixel resolution by a factor of two from the native, maximum resolution, say from 1600×1200 to 800×600.

coloured image from the sky / coloured pixel filters / coloured image after filters / electrons in pixels
The Bayer matrix illustrated.
1. the coloured stream of photons from the sky,
2. the colour filters in front of the pixels,
3. the stream of photons after passage through the filters,
4. the electron counts without colour information.

Although the monochrome raw data (shown in step 4 of the illustration) carry no colour information within any pixel, such information is now contained in knowing which pixel one is looking at. The raw data need to be processed to rearrange the colour information such that all final image pixels have red, green and blue information. Suppose we want Bayer binning. In that case each output pixel corresponds to one Bayer matrix of pixels: copy the R and B pixel values and average the two input G values into the output G value. If we want full resolution then we need to interpolate for each colour. For G we have a chequerboard pattern of pixels and need to fill in each gap from the four G neighbours. For R and B the process is more guesswork, because there is only one input value for every four output pixels.
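
As an illustration of the binning just described, a minimal numpy sketch; it assumes the mosaic starts with a red pixel in the top left corner (an RGGB layout), whereas a real camera may start the pattern on a different corner:

  import numpy as np

  def bin_bayer_rggb(raw):
      # raw: 2-D array of raw counts with an even number of rows and columns;
      # the RGGB layout (red in the top left corner) is an assumption.
      r  = raw[0::2, 0::2].astype(float)   # one red pixel per 2x2 Bayer matrix
      g1 = raw[0::2, 1::2].astype(float)   # the two green pixels ...
      g2 = raw[1::2, 0::2].astype(float)
      b  = raw[1::2, 1::2].astype(float)   # ... and one blue pixel
      g  = 0.5 * (g1 + g2)                 # average the two greens, as described above
      return np.dstack([r, g, b])          # half-resolution RGB image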

Bias, dark current and noise

It is time to talk about the Good, the Bad and the Ugly.

  1. Good is what is also called signal. This is for example the light from the star that we try to observe.
  2. Bad is what can often be captured by the term systematic errors. Next to the star that we want to observe might be a nebula we would like not to contribute to our measurement, i.e. source blending. Light pollution will brighten the image background, which is something we would prefer not to suffer, or which we may want to remove during data reduction.
  3. Ugly is what can be called statistical errors. If on average we receive 100 photons per minute from a star then chances are that in some minutes we get only 90 photons and in other minutes we get 110 photons.

The difference between good and the other two is that we want signal. What we want depends on what we intend to observe on the particular night. One astronomer wants to study the star and is annoyed by the nebula, another astronomer has the reverse taste.

The difference between bad and ugly is that we can measure the bad in isolation and use the result to remove it in data reduction. The ugly is different from one data set to the next, so that the "observe separately and subtract" algorithm does not work. The noise imprint on a sky frame is not the same as the noise imprint on a dark/bias frame taken just after the sky frame.

The ugly is not quite so unmanageable as it may seem. We can use statistical methods to reduce the noise, though never completely eliminate it. One trick is to expose for longer. In one minute the 100 photons fluctuate by about 10, i.e. 10%; in 10 minutes we would get 1000 photons on average, and that number fluctuates by only about 30, i.e. 3%. Similarly, we could take ten 1-minute exposures and average (or "stack") them.

Good and bad things add up normally: add up four observations and the signal and the systematic errors increase fourfold. Ugly things, however, grow only with the square root of the number of observations:

S = S₁ + S₂ + S₃ + ... + Sₙ
S ≈ Sᵢ · n

σ = √(σ₁² + σ₂² + σ₃² + ... + σₙ²)
σ ≈ σᵢ · √n
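
These relations are easy to check numerically. A small sketch using numpy's Poisson random numbers for the star delivering 100 photons per minute:

  import numpy as np

  rng = np.random.default_rng()

  # A star delivering on average 100 photons per one-minute exposure.
  single  = rng.poisson(100, size=100000)                    # many single 1-minute exposures
  stacked = rng.poisson(100, size=(100000, 10)).sum(axis=1)  # stacks of ten 1-minute exposures

  print(single.mean(), single.std())    # about 100 and 10, i.e. 10 per cent
  print(stacked.mean(), stacked.std())  # about 1000 and 30, i.e. 3 per cent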

The examples above are all real things out there: starlight, sky background, photon noise. But there is also badness and ugliness in the detector itself. We can study this with dark/bias frames, where the lens cap is on and no light enters the detector. Dark/bias frames are very powerful exactly because they contain nothing good, no signal from the sky. What we see in them is all bad or ugly; the bad we see will also be in our sky frames and we can remove the bad by subtracting a dark/bias frame from the sky frame.

two dark frames
Two dark frames, plotted as-is.
sum and difference of two dark frames
Two dark frames added and subtracted, resp.
sum and difference of stacks of four dark frames
Two stacks of four frames added and subtracted, resp.

In the plots shown we look at a small extract of 200 pixels from some "double dark" frames taken with a dSLR as raw data. By "double dark" I mean a pair of dark/bias frames that are then subtracted from each other; this amounts to dark-correcting the dark frame and should result in data values around zero.

  1. The first plot shows in red and blue two dark/bias frames without any alteration. Both look very similar and featureless. Both show pixel values averaging 4100 and both show a scatter around this mean with standard deviation 35. The bias and dark current combined hence amount to about 4100. The scatter has two reasons and two components to it: First there is genuine noise that differs from one pixel to the next and also from one frame to the next. Second there is a pseudo-random variation of bias and dark current from pixel to pixel that happened when the detector was made in the silicon chip factory; this does not change from frame to frame. The two frames differ due to the noise only.
  2. The second plot shows what happens when we add (blue) or subtract (red) the two frames. The level in the addition has now doubled to 8200 while the level in the subtraction has reduced to zero. This illustrates the dark/bias correction. More interesting is that the scatter in the blue curve is larger than in the red curve. The red curve has a standard deviation of only 40.
    The red curve is free of the bias and dark current due to the subtraction of one frame from the other. The red curve is pure noise. The noise from two frames is thereby measured to be 40. If we could observe noise from only a single frame it would be 40/√2 ≈ 30.
    Further, the noise in the sum of frames is also 40, but the measured standard deviation is 55. The time-constant pixel-to-pixel variation of bias and dark current then is √(55²−40²) ≈ 40, coincidentally the same as the noise.
  3. In the third plot we add four frames to make one master dark/bias frame and four further frames to make a second master. Then we add (blue) or subtract (red) the master dark/bias frames. The scatter has again increased because we are combining more frames (eight instead of two) and the level in the sum is now also eight times that of the original single frame. The standard deviations are 165 in the sum and 80 in the difference.
    Using four times as many frames as before we expect the noise to be 40 · √4 = 80, which matches the observed value.
    For the permanent dark/bias variation we expect 40 · 4 = 160, while the noise contribution should be 80, as in the red curve; this should combine to a scatter in the blue curve of √(160²+80²) ≈ 180, only slightly more than observed.

These are the results to take away from the experiment: for this camera, ISO setting and length of exposure, and using raw data of this particular variety, the bias and dark current together amount to a level of about 4100, the noise in a single frame is about 30, and the fixed pixel-to-pixel variation of bias and dark current is of similar size to the noise.
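
For reference, the analysis above boils down to a few lines of numpy; d1 and d2 are the two dark/bias frames loaded as 2-D arrays of raw counts (how to load them depends on the camera's raw format):

  import numpy as np

  def double_dark_stats(d1, d2):
      total      = d1.astype(float) + d2          # bias, dark current and fixed pattern add up
      difference = d1.astype(float) - d2          # everything constant cancels, only noise remains
      level        = total.mean() / 2             # bias plus dark current of a single frame
      noise_pair   = difference.std()             # noise of the pair of frames
      noise_single = noise_pair / np.sqrt(2)      # noise of a single frame
      # pixel-to-pixel variation of bias and dark current, as measured in the frame sum
      fixed_pattern = np.sqrt(max(total.std()**2 - noise_pair**2, 0.0))
      return level, noise_single, fixed_pattern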

What is the difference between bias and dark current? Bias is a basic level of raw data count that exists even when the exposure time is zero. The bias is independent of how long the exposure is, but it may depend on the ISO setting. Dark current is the gradual accumulation of free electrons in the pixels that happens without there being any light to liberate electrons; it is a sort of leakage. If pixels are buckets that collect electrons, then through this leak extra electrons enter the pixel, whereas a leaky water bucket would lose water.

At short exposures bias should dominate over dark current, while at very long exposures dark current should dominate over bias. You should experiment with different exposure times for dark/bias frames and see how the level of bias plus dark current increases with increasing exposure time, as sketched below. You may find that your dSLR never collects any measurable amount of dark current even at exposures of 10 min. For a CCD camera this may turn out very differently.
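
One way to do the experiment, sketched here under the assumption that the level grows linearly with exposure time: measure the mean level of dark/bias frames at several exposure times and fit a straight line, whose intercept is the bias and whose slope is the dark current. The numbers below are placeholders to be replaced by your own measurements:

  import numpy as np

  exposure = np.array([1.0, 30.0, 120.0, 300.0, 600.0])           # exposure times in seconds
  level    = np.array([4100.0, 4101.0, 4104.0, 4112.0, 4124.0])   # mean raw level of the dark/bias frames

  dark_rate, bias = np.polyfit(exposure, level, 1)   # straight-line fit: level = bias + dark_rate * time
  print(bias)        # bias: the level at zero exposure time
  print(dark_rate)   # dark current in raw counts per second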

The simple way to get around the dark/bias distinction is to match up any sky frames with dark/bias frames taken with the same ISO setting and the same exposure time. Then both dark current and bias will match between the sky and dark/bias frame so that simply subtracting the latter is the appropriate correction.

Sensitivity, saturation and dynamic range

Cameras usually have a way for the user to change the sensitivity of the detector. dSLRs and compacts offer the ISO setting, which should be consistent across camera models and brands: for the same subject and illumination, the same ISO setting should give the same degree of exposure and saturation, in strict proportion to the ISO number, regardless of brand or model.

In photography, one tends to keep the ISO setting small and to increase it in order to obtain brighter images at shorter exposure times, e.g. to shoot in low light or when objects move fast. Doubling the ISO will double the signal. This can be good if the signal level is low, but it can be bad if regions of high signal then saturate. A pixel saturates when it detects so much light that either the electron count reaches the maximum possible given the properties of the semiconductor, or the amplified current exceeds the maximum that the analogue-to-digital converter can encode in digital numbers.

Photographers avoid extremely high ISO settings in order that noise will not become too apparent in the picture. Increasing the ISO may increase the noise. dSLRs typically have two regimes: at low ISO values the noise is low and independent of ISO, but at large ISO values the noise grows in proportion to the ISO value itself. It is a good idea to explore this relationship and to find the optimum ISO setting that maximises signal while avoiding the high-ISO regime.

noise versus ISO
How noise increases with higher ISO setting. Both axes are logarithmic. The blue lines show a constant noise level and a proportionality between noise and ISO.

To do this, take "double darks" as we did above: take two dark/bias frames at each ISO setting, subtract one from the other, and determine the noise in the frame difference. Then plot this noise level as a function of the ISO setting, as sketched below. From the plot shown, for my dSLR ISO 1600 turns out to be the highest setting worth using.
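
A sketch of that plot; apart from the noise of 40 at ISO 1600 found above (and the 15 at ISO 100 mentioned further down), the values are placeholders to be replaced by your own measurements:

  import matplotlib.pyplot as plt

  # Noise (standard deviation of the frame difference) measured at each ISO setting.
  # Only the values at ISO 100 and ISO 1600 come from the text; the rest are placeholders.
  iso   = [100, 200, 400, 800, 1600, 3200, 6400]
  noise = [15, 15, 16, 22, 40, 80, 160]

  plt.loglog(iso, noise, "o-")       # both axes logarithmic, as in the plot above
  plt.xlabel("ISO setting")
  plt.ylabel("noise in raw counts")
  plt.show()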

Avoiding excessive noise is one reason to stay away from the very high ISO settings; loss of dynamic range is another. Dynamic range is the raw data value of pixel saturation divided by the noise level. To determine the dynamic range of our detector we need to know the noise level as determined above, and the saturation level. Take an overexposed image (a sky frame rather than a dark/bias frame) and the corresponding dark/bias frame. Carry out the dark correction (subtract the dark/bias frame from the overexposed sky frame). Then find out what the raw image values are in the overexposed region; this is the saturation level. Divide this by the noise determined above, and the ratio is the dynamic range.
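
A sketch of that recipe, assuming the saturated region can be picked out by taking a high percentile of the dark-corrected, overexposed frame:

  import numpy as np

  def dynamic_range(overexposed, dark, noise):
      corrected  = overexposed.astype(float) - dark   # dark/bias correction
      saturation = np.percentile(corrected, 99.9)     # raw value in the burnt-out region
      return saturation / noise                       # saturation divided by noise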

Continuing the example of my camera from the previous sections, the saturation value turns out to be 23000; without dark/bias subtraction it would be 27000. The camera generates raw data with 14 bit, i.e. values in the range 0 to 16383. In the Bayer binning the software evidently adds the two G values of each Bayer matrix (which has two G pixels, one R and one B) and multiplies the R and B values by 2. The outcome is 15-bit numbers that range from 0 to 32767. But the camera does not actually use the full range. The bias leaves the bottom end below 4000 unused, and saturation in the electronics is made to occur at 27000, before the camera runs out of bits.

At any rate, we now calculate the dynamic range as saturation divided by noise, i.e. 23000/40 ≈ 600.

Why do we call it dynamic range? Suppose we image a piece of sky with objects of vastly different brightness. The faintest object detections are three or five times higher than the noise. The brightest well-recorded objects are just fainter than the saturation level. The dynamic range tells us how differently bright the faintest and brightest objects can be.

It is not so clear from the plot, but by dropping the ISO from 1600 to 100 we can reduce the noise from 40 to 15. This would increase the dynamic range to 1500. This can be important if we want to make quantitative use of our data, like measure stellar brightness. If all we want is a pretty picture, this requires only 8 bit in the final data (data values ranging from 0 to 255) and we need a dynamic range of perhaps only 400.

Finally, there is also a smallest desirable ISO setting: the ISO is too small when the noise is no longer recorded in the raw data. We require the noise σ to amount to at least 2.5 counts in the initial raw data; after Bayer matrix binning that number scales up to 5. Noise levels of 40 at ISO 1600 and even 15 at ISO 100 pose no problem, and the plot of noise versus ISO indicates that this problem will never occur. But this is perhaps something to check for your own camera.

Flat field

A flat field is in some respects the opposite of a dark/bias frame. Where the dark/bias contains no light and should be black, the flat field is evenly illuminated and should be white or very light grey. A flat is taken with the lid off the optics and the optics pointing at a uniformly bright and ideally white subject. A wall, blank sky or a white tee shirt stretched in front of the aperture of the optics are possibilities. The flat field has to be taken with the same ISO setting and the same optics as the sky frames it corrects. But the exposure time does not have to be the same; it is adjusted to record the subject well.

A dark/bias should be black, but is not. Hence it is subtracted from any sky frames. A flat field (after itself being dark-corrected) should be uniformly white, but is not. Hence the dark-corrected sky frames are divided by the flat field. This compensates for variations in how much light makes it through the optics and how many electrons each pixel makes for that amount of light. The former variations are across the field of view, namely the vignette that causes the corners to be less illuminated than the centre. The latter pixel to pixel variations again stem from the manufacture of the silicon chip. Some pixels create slightly more free electrons for the same number of incident photons. Division by the flat field compensates for these variations.
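
Putting dark/bias and flat field together, a minimal calibration sketch; all inputs are 2-D arrays of raw counts, and the dark/bias frames are assumed to match the ISO setting and exposure time of the frame they correct:

  import numpy as np

  def calibrate(sky, dark_sky, flat, dark_flat):
      flat_corrected = flat.astype(float) - dark_flat       # dark-correct the flat field itself
      flat_norm = flat_corrected / flat_corrected.mean()    # normalise, so division keeps the overall level
      return (sky.astype(float) - dark_sky) / flat_norm     # dark-correct the sky frame, then divide by the flat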

JPEG versus raw

We have quietly assumed raw data above: photons arrive in the detector pixel, electrons are liberated from the crystal, then read out, the resulting current amplified and this signal digitised. Consumer cameras are normally used to obtain JPEG data instead, and not all cameras even offer raw data as an alternative. The following table summarises the various differences.

JPEG versus raw data.

How reliable is this table?
  JPEG: Not very; the in-camera software is proprietary and aims to give pleasing image output.
  raw: Quite reliable.

Which camera type offers the format?
  JPEG: webcam, compact, dSLR.
  raw: dSLR, CCD.

How many bits and brightness levels per colour?
  JPEG: 8 bit, 256 values.
  raw: Depends. In consumer cameras 12 bit (4096 values) or 14 bit (16384 values). In good CCD cameras 16 bit (65536 values).

Has dark correction been applied?
  JPEG: Depends. A dSLR might offer this (often misnamed noise reduction). In a compact this may be compulsory and can be observed as a wait period of the same length as the intended exposure. In a webcam there is no time between frames to do this correction.
  raw: No; the data is raw.

Has flat correction been applied?
  JPEG: Not as such. Some consumer cameras correct for vignette if the lens is known to the in-camera software. It is possible that consumer cameras apply corrections for the irregularities in the detector that underlie that component of the lack of flatness.
  raw: No; the data is raw.

Has a gamma correction been applied?
  JPEG: Yes, output tends to be corrected with γ = 2.2, i.e. J/Jmax = (R/Rmax)^(1/2.2).
  raw: No; the data is raw.

Has a white balance correction been applied?
  JPEG: Yes.
  raw: No; the data is raw.

Has data compression been applied?
  JPEG: Yes, lossy compression has been applied. The JPEG format mandates transformation from pixel-oriented data values to a series of Fourier components. This is best suited for gradually changing brightness, as one finds in a landscape or portrait, and not so suitable for the sharp edges and points one finds in the starry sky.
  raw: Yes, but the compression is loss-free. This, and the higher number of bits, requires more storage space than JPEG, but the compression can be reverted so that every raw value of every pixel can be recovered, even if not all such values are stored explicitly.

How easily viewable is the data?
  JPEG: Very. Gamma correction and white balance correction aim to give a good impression of the subject to the human eye.
  raw: Hardly at all. Originally, a proprietary format is used. Viewing software for more than 8 bit per colour is uncommon. The lack of gamma correction makes the images appear very dark; the lack of white balance correction makes colours bland or slightly off.

What is the relationship between data value and incident light?
  JPEG: Logarithmic, thanks to the gamma correction; equivalent to the human eye's response to light.
  raw: Linear; the data is raw.

What is the potential for further processing?
  JPEG: Quite limited. Some increase of contrast, brightness or colour balance is possible, but the 8-bit digitisation has little spare dynamic range for extreme processing.
  raw: As high as can be. The data is "pure" in that no potentially wrong processing has been applied; the correct processing is in the hands of the user. For quantitative analysis, the user will convert from limited-range integer format to floating point format before proper processing and analysis. To create an image for human consumption, a conversion to 8-bit JPEG or PNG format is eventually made, often with logarithmic scaling similar to a gamma correction. However, the JPEG created by in-camera software uses proprietary knowledge, and it is difficult for the user to re-create such processing to the same level of quality, unless they use proprietary off-camera software from the manufacturer for the purpose.

Can we use the data for astrometry?
  JPEG: Not with high accuracy.
  raw: Yes.

Can we use the data for photometry?
  JPEG: No.
  raw: Yes.

If a single JPEG gives almost the ideal image, then obtaining raw data and having to carry out the quite sophisticated in-camera processing with separate utilities wastes time and probably degrades the result. In all other cases, raw data offer more options: dark and flat correction make sense only with raw data, boosting the brightness is much more feasible, stacking same-exposure frames to suppress noise is more feasible, and combining different-length exposures into a high dynamic range (HDR) final image (or even applying HDR processing to a single raw exposure) is much more feasible.

CCD versus CMOS

There are two different kinds of detectors in use, CCD and CMOS. CCD stands for charge-coupled device and refers to the way the image is read out from the pixel array. CMOS stands for complementary metal-oxide-semiconductor and refers to how the semiconductor is manufactured. CCDs need a specialised production line, and so CMOS manufacture is much more cost-effective. The major differences between CCD and CMOS are (DALSA Inc. 2009, Litwiller 2001, HowStuffWorks 2000):

  1. A CCD shifts the charge across the chip and digitises it in one external circuit, which makes for very uniform pixels and low noise.
  2. A CMOS detector amplifies and reads out each pixel with circuitry on the chip itself, which costs some light-collecting area and uniformity.
  3. CMOS detectors are cheaper to manufacture, consume less power and can be read out faster.

There is still a marginal advantage in a CCD. Nevertheless, this should probably not drive you from one dSLR to another, but make you consider an actual CCD camera for astronomy.

Filters

We have seen the Bayer matrix of colour filters in front of each pixel in consumer cameras. These are now also common in CCD cameras for one-shot colour imaging. For more traditional monochrome CCD cameras, the user will often employ a filter wheel in front of the camera to quickly change between filters. The wheel can be fitted with RGB filters similar to consumer cameras, or with UBVR filters as used by professional astronomers, or with a collection of narrow-band filters that pass only particular interstellar emission lines.

There are further filters. Consumer cameras will have filters to block UV and IR light; when using a CCD camera you may have to deploy your own IR-blocking filter; and when you remove a webcam's lens, the IR blocker may come off with it.

dSLR and compact cameras likely also have a colour correction filter to improve the colour impression of the images on humans. For Canon dSLRs amateur astronomers have investigated this and created an improvement. An unfortunate effect of the built-in filter is that hardly any light in the Hα spectral line of hydrogen makes it through to the detector (Buil 2012a). This spectral line contains much of the overall light emitted by interstellar HII regions of ionised hydrogen, and this spectral line is also used in specialised solar telescopes. Baader manufacture a replacement filter to fit many Canon dSLR cameras, and you can send your camera to Baader or Astronomiser to have the Canon filter replaced with the Baader filter (Baader 2017, Ellis 2017).

Canon dSLR camera bodies can be fitted with clip-in filters. Although the Bayer matrix filters remain in place, one can here deploy narrow-band filters for interstellar emission lines, or a light pollution filter. A light pollution filter cannot do magic, such as remove photons from streetlights and allow photons from the universe to pass to the detector. The promise tends to be that they cut out the strongest sodium (Na) and mercury (Hg) emission lines, and hence much of the light from pre-LED streetlights. These filters do nothing about pollution from halogen security lights and probably nothing about LED streetlights. Worse, they are not precisely tuned to the offending emission lines, but cut out two wide ranges of wavelengths, which has a negative impact on the white balance (Buil 2012b). That said, these filters do overall favour light from astronomical objects and suppress light pollution.

Specialist Hα telescopes – to observe the Sun and prominences above the solar limb – have built-in narrow-band filters to allow through all light at the Hα hydrogen emission wavelength, while cutting out all other colours and thereby reducing the brightness of the Sun for safe observation.

To observe the Sun in white light, the user will often deploy a filter in front of the telescope. Baader foil filter material can be purchased cheaply in A4 or 1 m2 sheets and turned into homemade filters. Glass filters for particular apertures or tube diameters can also be purchased, but these are expensive and often optically inferior to the foil (Zamboni 2009).

Image acquisition

We have to interact with the camera in order to adjust detector settings like gain and exposure length, to choose what processing to perform in the camera, and to trigger the exposure and the storage of the data. A CCD camera will be controlled from a separate computer, which has a more or less camera-specific application installed to exercise this kind of control; data will be transferred immediately (if slowly) to the computer. As for immediate processing, the preference is very often to do as little as possible and to store raw data in FITS format (FITS Working Group 2008).

A dSLR or compact camera may also be controllable from a computer in a similar fashion, using the camera manufacturer's application or a more generic one from a third party. An example is gPhoto2 (Waugh et al. 2015), which supports many consumer cameras, though often only to take and download images with settings made beforehand manually on-camera. It is perhaps more likely that you use the camera's own interface (including regular wheels, knobs and buttons) to choose ISO, exposure, white balance, data format etc. Storage will then be on a memory card and there is no separate computer anywhere near the telescope. Later the data are transferred to a computer and dealt with further. Depending on the camera and operating system, data transfer may be over a USB cable or by inserting the memory card into the computer. Again, the preference will probably be to avoid immediate processing and to avoid JPEG in favour of raw data. The raw data will be in a proprietary format, and an application from the camera manufacturer or a utility like dcraw (Coffin 2016) is needed to convert to something more suitable (TIFF, PPM etc.). PPM is a somewhat wasteful graphics format, but it has a very simple structure, and the Netpbm software has conversion utilities from and to very many other formats, including JPEG and FITS (Poskanzer 1988).

VLC media player
VLC media player displaying a live stream from a Philips ToUcam Pro webcam, with the dialogue for detailed camera control on the right.

At first glance, a webcam seems similar to a CCD camera in that it requires a computer to control the camera settings and to record the data. However, a webcam is a cheap product for the mass market, and it aims to provide live video, not high-quality data for archival or even later processing. Our goal is to take a large number of short-exposure frames that are each well exposed but otherwise not of high quality. The quality will stem from stacking many frames, removing drift and wind buffeting, and from processing to improve contrast and to emphasise small-scale features. Most likely, we will capture the frames as a short video clip, probably in AVI format. The webcam may include an application to do this, or we might use the VLC media player (VideoLAN 2017). VLC can display the live video stream from the webcam and record it to AVI as you watch. If the stacking software does not support AVI input, the video clip can be converted to PPM frames with MPlayer by "displaying" the clip into a file output driver (MPlayer Team 2016):

  mplayer -nosound -vo pnm file.avi

The result is a large number of PPM files in the working directory, each containing one movie frame. These can be converted to FITS if that is helpful.
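
For example, a short Python sketch that converts every PPM frame in the working directory into a monochrome FITS file; it assumes the Pillow and astropy packages are available:

  from pathlib import Path
  import numpy as np
  from PIL import Image            # Pillow reads PPM files
  from astropy.io import fits      # astropy writes FITS files

  for ppm in sorted(Path(".").glob("*.ppm")):
      rgb  = np.asarray(Image.open(ppm))            # height x width x 3 array of 8-bit values
      mono = rgb.mean(axis=2).astype(np.float32)    # average the three colour channels
      fits.PrimaryHDU(mono).writeto(ppm.with_suffix(".fits"), overwrite=True)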