Lightstalkers

open invitation to a nerdish sensor discussion

Ladies and gentlemen,

I would like to invite anyone who finds such things interesting to share their knowledge about the nature of digital photography.

For example, what is digital noise, and why is a smaller sensor noisier than a larger one? What does the word "pixel" really mean? On a sensor, is a ‘picture element’ a single group of light receptors (two green, one blue, one red), or is it an array of such groups?

I imagine the sensors are made sensitive to a single primary color by having micro-filters on them which block the other colors, but that means they are only getting part of the light (i.e., the ‘filter factor,’ as when you put a magenta filter on your lens and have to open up to compensate).

So what, anyway, is the interpolation or extrapolation or whatever that happens, and what in god’s name is an ‘anti-alias’ or low-pass filter, and how does it help or hurt in making pictures?

In other words, what are we talking about when we talk about sensors, pixels, noise, etc.? If we put our collective noggins together, we might actually figure this stuff out. I sure haven’t.

Stephen

by [a former member] at 2006-02-02 19:11:58 UTC (ed. Mar 12 2008) | Bogotá, Colombia

OK, as I understand it…

Light falling on the sensor is translated to voltages based on frequency and intensity (chrominance and luminance).  Higher sensitivity (ISO) is simulated by multiplying these voltages before they are sampled and digitized… or after, I’m not sure.  Digital noise is the result of sampling errors.  This is unavoidable, but somewhat correctable by averaging with adjacent data.  The fewer sensor elements there are, or the smaller they are, the less ability the mechanism has to average the data to reduce the errors.
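
Here is a toy numerical sketch of that amplify-and-average idea, in Python with NumPy. The numbers are made up and this is not how any particular camera's electronics actually work; it only shows that boosting the signal boosts the noise too, and that averaging neighbouring samples knocks the noise back down.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = np.full(10_000, 100.0)                 # "light" hitting 10,000 photosites
raw = true_signal + rng.normal(0.0, 5.0, 10_000)     # plus random sensor noise

iso_gain = 4.0                                       # pretend higher-ISO amplification
amplified = raw * iso_gain                           # the noise gets multiplied too

# Crude 3-sample smoothing, i.e. "averaging with adjacent data"
smoothed = np.convolve(amplified, np.ones(3) / 3, mode="valid")

print("noise before gain:    ", (raw - 100.0).std())        # ~5
print("noise after gain:     ", (amplified - 400.0).std())   # ~20
print("noise after averaging:", (smoothed - 400.0).std())    # ~12, visibly reduced
```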

The best, most useful definition of a pixel is: the smallest complete sample of an image. If you want to call each microlens on a sensor a pixel, or each dot on the screen a pixel, feel free. They are all pixels in different contexts.

The red-green-blue-green array you’re imagining is known as a Bayer filter. (Two greens to each single red and blue because the human eye is more sensitive to green.) Not all cameras use Bayer filters. Because it is a computer and not a brain, a bunch of programmers have to figure out how to tell the processor to reconstruct the original image based on this thing, this Bayer pattern. That reconstruction happens via interpolation (new color values that didn’t exist before are generated from the existing pixels and the values that surround them), filtration, and anti-aliasing. Actually, the filtration takes place before the sampling process, via a low-pass filter placed in front of the sensor.
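
For anyone who likes to see these things as code, here is a minimal bilinear-interpolation sketch of Bayer reconstruction in Python with NumPy/SciPy. It is only a toy; real cameras and raw converters use far more sophisticated, proprietary demosaicing algorithms.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Toy bilinear demosaic of a single-channel RGGB Bayer mosaic."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True   # red photosites
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True   # blue photosites
    g_mask = ~(r_mask | b_mask)                                   # the two greens

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0      # green interpolation kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0      # red/blue interpolation kernel

    out = np.zeros((h, w, 3))
    for ch, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        sparse = np.where(mask, mosaic, 0.0)                      # keep only that colour's samples
        out[..., ch] = convolve2d(sparse, k, mode="same", boundary="symm")
    return out

# Usage: turn a small synthetic RGB image into a Bayer mosaic, then rebuild full colour.
rgb = np.random.rand(8, 8, 3)
bayer = np.zeros((8, 8))
bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
print(demosaic_bilinear(bayer).shape)    # (8, 8, 3): a full colour value at every pixel
```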

Aliasing in computer graphics happens when you try to reproduce a high-resolution image in a low-resolution medium. Not just high resolution, but high frequency, as in high-frequency patterns. Beyond a certain threshold, the high-frequency portions of the image cannot be accurately rendered within the 72 dots per inch allocated for their representation on screen. The result is a jagged, blocky mess. Moiré patterns are a form of aliasing. Anti-aliasing employs an algorithm to reconstruct those portions of the image and render them in the target medium so that they more closely resemble the original.

The low-pass filter placed in front of digital camera sensors allows lower frequencies of light and attenuates (reduces) higher ones.  This helps to minimize aliasing at the signal frequency level.  I think.
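
A small Python/NumPy/SciPy experiment shows both ideas at once: subsampling a pattern that is too fine for the sampling grid produces a false, lower-frequency pattern (aliasing, which is what moiré is), while blurring (low-pass filtering) first removes the detail that could not have been represented anyway. All the numbers are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

n = 2000
x = np.arange(n) / n
fine = np.sin(2 * np.pi * 430 * x)        # a 430-cycle pattern, finely sampled

step = 20
aliased = fine[::step]                    # only 100 samples left: 430 cycles can't be represented

spectrum = np.abs(np.fft.rfft(aliased))
print("false frequency after naive subsampling:", spectrum[1:].argmax() + 1, "cycles")  # ~30

blurred = gaussian_filter(fine, sigma=15)  # the low-pass step: remove detail too fine to keep
safe = blurred[::step]
print("amplitude surviving after low-pass:", safe.std())  # effectively zero: no false pattern
```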

Does that help?


by Jerome Pennington | 02 Feb 2006 19:02 (ed. Feb 2 2006)
Thanks, Stephen, for venturing to start this thread. I have read definitions of these things on various sites, but it never really sinks in. Jerome, for the first time I am beginning to understand the technical matters behind digital imaging; thanks for your clear explanation. I am just beginning, though, so I will have to read it over a couple more times.

So now, could anyone explain why creating a full-frame sensor (that is, no 1.5 crop factor) is posing such a problem for the manufacturers, so much so that Canon’s 5D is about the only camera that fits the bill but costs over 3000 bucks? And why haven’t manufacturers been able to provide a smaller, lighter, rangefinder-like camera at a decent cost, also with full-frame viewing and decent-sized files?

Also, there is another thing that confuses me a bit: as cameras move up in size (6MP, 8MP, 10MP, 12MP), each offering a slightly larger file, just what is being added here, and what are the real limitations in terms of making prints (not merely printing an image in a magazine, but an exhibition print)? I have seen sites, for example, that talk about how a large file from a 10MP camera will yield a decent 11 by 14 print; but I have also heard people talking about making larger prints that look "great" (in quotes because, as I haven’t seen them, I cannot vouch for the judgment). This concerns me because I spend a lot of time printing and exhibiting, and from a traditional 35mm TriX negative I am able to make great sharp prints, anything from 11 by 14 to 20 by 24 or even mural prints (granted, they don’t look the same as an equivalent-sized print from a 4 by 5 neg, but they still look damn good). Of course, you see the grain more when you step in close, but that is part of the look anyway. So when it comes to making prints from digital files, is it that "noise" in a print is less acceptable, so people hesitate to push the envelope a bit and print bigger? Can you in fact make really large prints? Is it also true that, working from a RAW file, the able printer can make a better and bigger print?

Enlighten us, Jerome.


by Jon Anderson | 03 Feb 2006 06:02 | Santo Domingo, Dominican Republic
I’ve had a 14×20 made from a 6-megapixel JPEG file from a Nikon D100 and Tokina lens. It was terrific. I was completely stunned by how good it was. I had no idea a mere 6-megapixel machine could do that. However, the Tokina lenses are extremely good and that is crucial. When you stretch a digital camera file beyond its 300 dpi maximum it gets softer rather than pixelating. For example, a 6-megapixel file comes to 10×7 at 300 dpi, or thereabouts. If you want to make a 14×20 print you set the dpi at 150 and it looks surprisingly good, but it must have been shot with a top-quality lens. I dare say that a good lab can make a beautiful exhibition print at 14×20 from a 6-megapixel file, as was the case for me. I can only imagine what a 12-megapixel file, JPEG or raw, would look like at the same size. At 12 megapixels it has got to be better than film. After all, a 35mm neg or slide equates to about 11 megapixels.
With my 14×20 6-megapixel print, one step back and it looked grand.
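
Just to check that arithmetic in code (the D100’s pixel dimensions, roughly 3008 by 2000, are quoted from memory and approximate; the results ignore any cropping):

```python
def print_size_inches(px_wide, px_high, dpi):
    return round(px_wide / dpi, 1), round(px_high / dpi, 1)

# ~6 MP Nikon D100 file, roughly 3008 x 2000 pixels
print(print_size_inches(3008, 2000, 300))   # (10.0, 6.7): the "10x7 at 300 dpi, or thereabouts"
print(print_size_inches(3008, 2000, 150))   # (20.1, 13.3): crop a little and you have a 14x20
```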

by Paul Treacy | 03 Feb 2006 07:02 | New York City, United States
By the way, the print above was an ISO 400 exposure.

by Paul Treacy | 03 Feb 2006 07:02 | New York City, United States
Thanks Paul, so let me try to wrap my head around this thing. My 7.1 MP gives me at its outer limit a file sized 3072 by 2304 at 72 dpi. If I resize that at 300 dpi I end up with something approaching a basic 8 by 10 print (slightly larger than your 6MP example), and I have been told that for printing purposes 300 dpi is the desired goal. Of course, as you state, one can cut back on the dpi and extend the parameters a bit, thus moving the print size up by almost two steps to the standard 16 by 20 (actual image size, 14 by 20). But what are you losing by cutting back on the dpi, and just how noticeable is it? Is that loss necessarily a flaw, or does it contribute a certain look to the final product? A 35mm TriX neg enlarged beyond its "normal" parameters still has visual characteristics that are interesting and esthetically acceptable, even desirable (though admittedly this was not always thought to be the case: when people switched from 4 by 5 to medium format and small format cameras, there was a lot of bickering about the loss of image quality). If, as you say, the result is "softer," just how soft does it get? I mean, I suppose you could argue that a mural print from a 35mm neg is "softer" in a sense, but if printed well, that hardly makes a difference except up close.

I guess one could argue that TriX has qualities that compensate for the "loss" of image quality at these greater enlargements; so can the same be said of digital?  I am nitpicking I suppose, but as a printer I am interested in these things, and I notice that while there is lots of discussion about crop factors and aliasing and such things, fewer people talk about the digital image in terms of the final print, so I am curious to hear more.




by Jon Anderson | 03 Feb 2006 09:02 | Santo Domingo, Dominican Republic
First of all you should know that this is merely a regurgitation of information that is readily available to you from various sources on the internet.  I decided to educate myself when I started working as an assistant and digital camera tech in Los Angeles.  It often proved useful in taking the client’s attention off of the fact that I eschew Macs in favor of Intel and Microsoft.

Cost is the primary factor governing chip sizes these days.  Another matter for concern is that a full frame sensor extends beyond the sweet spot of the imaging circle produced by lenses designed for film. Out there, light does not fall quite perpendicular to the (film) plane, limiting the sensor’s ability to gather it and further reducing the already poor image quality there.

What is being added with each megapixel increase is, well, more pixels.  More information, and along with that other improvements.  Better signal processing algorithms to more accurately render the subtleties.  More efficient circuitry generating less heat and consuming less power.  Larger buffers.  Faster processing and higher transfer rates to removable media.

I haven’t looked much into printing.  Whether on screen or in print, when data are scarce, the image processing and display algorithms in both analog and digital media are taxed to and often beyond their limits.  It’s not really a question of resolution, unless you spread the pixels too thin, necessitating interpolation—filling in the chasms based on the characteristics of the surrounding landscape, so to speak.  It’s about the subtleties.  Near-white highlights, bokeh, fine detail along the diagonal (which tends to exaggerate the square-shaped-pixel nature of a digitized image), and more obvious things like the quality of ink and paper.

The one thing that I feel still needs to be addressed is the analog nature of film in terms of the subtle imperfections, the inconsistencies in density and chemistry across each single frame that are translated to the final print.  The eye does not see these but the brain does, in the same way that the warping of a vinyl record adds an imperceptible, low frequency analog wobble to the sound, imbuing it with a subliminal warmth that you can’t quite put your finger on, but you know is missing from the compact disc version.


by Jerome Pennington | 03 Feb 2006 10:02
Well, well. Thanks, guys, for jumping in, and particularly to Jerome for the technical information. Let me suggest now that, to continue well, we be careful to keep all the different issues from swimming together.

To summarize so far (bust me if I’ve missed something important), we’ve got some open questions from Jerome’s post. Jerome, could you try to give us a description of what happens, step by step, for a given sensor? You know: the shutter opens, light comes in and hits these sensors, then gets converted, and all that. Where the interpolation happens, when the anti-aliasing comes in and what for…? That would be very helpful.

Then Jon and Paul got into an important set of questions concerning sensor size and print size, and an important point about what softness or sharpness in a print really is. Seems there is a clear difference to keep in mind between resolution and noise, which is why a good lens makes all the difference. Also, in the case of film grain at least, the shape of the grain can give a feeling of sharpness.

So, let me pose two  basic questions: 

1) In the context of a Bayer sensor, for example (which cameras have them, by the way?), i.e., the two-green/one-blue/one-red array, what is a pixel?

2) What exactly is digital noise?

Thanks! Stephen






by [former member] | 03 Feb 2006 12:02 | NYC, United States
Well, first in line is the low-pass filter.  I believe one thing that it does is help to reduce infra-red radiation, to which the chips are especially sensitive, as well as certain properties of the visible spectrum that contribute to unwanted aliasing when sampled.  I’m not really sure how that works; that’s my best guess, even after reading up a bit around the web just now.  It also helps to minimize aliasing, which in this context is a pretty abstract concept and kind of difficult for me to clarify with words.  Suffice it to say the low-pass filter cleans the incoming light frequencies a bit, removing unwanted components and making the image processing job a bit easier.

Some camera models use a Bayer filter, named after its inventor, to restrict the recorded frequencies to red, green and blue, simply so the computer can deal with them on its very simple terms. These three additive primaries are enough to represent most of the information present in the original. The resulting Bayer pattern is a fair approximation of reality. Refer to the Wikipedia article on the Bayer filter for a visual example. The Bayer pattern is sampled by the sensor.

Here is a basic analogy of how a sensor array in a digital camera works:

Imagine a 10×10 array of people holding empty buckets.  Water falls onto the group, each catching a certain amount in his bucket.  There is an eleventh column comprised of auditors, one for each row, whose job it is to measure and record the amount of water captured in each bucket.  After the water has stopped, each bucket holder passes the contents of his bucket to his neighbor in the direction of the auditor.  The auditor in each row receives his neighbor’s bucketload, records the amount, dumps out the water, and the bucket brigade continues until all buckets are emptied and logged.  This happens really, really quickly.  :)
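
Here is that bucket-brigade analogy as a few lines of Python/NumPy, purely as a toy (real readout happens in charge-transfer electronics, of course, not in software):

```python
import numpy as np

rng = np.random.default_rng(1)
buckets = rng.integers(0, 100, size=(10, 10))    # charge ("water") caught by a 10x10 array
original = buckets.copy()

log = np.zeros_like(buckets)
for shift in range(buckets.shape[1]):
    log[:, buckets.shape[1] - 1 - shift] = buckets[:, -1]  # auditors measure the nearest column
    buckets = np.roll(buckets, 1, axis=1)                  # everyone passes one place along
    buckets[:, 0] = 0                                      # the far column is now empty

print(np.array_equal(log, original))   # True: the captured values were read out intact
```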

After the entire image is sampled, the demosaicing algorithms are applied to reconstruct the original image before storing it on media.

1) In the context of a Bayer sensor, for example (which cameras have them, by the way?), i.e., the two-green/one-blue/one-red array, what is a pixel?

It is the Bayer filter’s job to break up the incoming light into discrete values of specific chromaticity and luminance. Its products are complete and distinct samples of the original image. Therefore, I would say that each recorded RG&B value constitutes a pixel, in the context of the Bayer pattern. I’m not sure which models do and do not use the Bayer mosaic. The Foveon sensor by Sigma is an alternative to the Bayer technique. Instead of separate RG&B cells, it stacks three photodiode layers at different depths in the silicon so that all three colors are captured at the same location, which effectively realizes a savings in real estate on the chip and, theoretically, allows for higher resolutions. Google it for more info.

Regarding noise… the job of each bucket (capacitor) in the sensor is to convert light energy into a specific electrical charge.  Thus it is very sensitive to electromagnetic energy, including the electrical charge which enables its function.  Like the background hiss heard through a poor audio connection, imaging sensors also produce their own noise background.  The level of this background noise is called the noise floor.  Light energy falling on the sensor must exceed the noise floor to become significant, i.e. to be recorded as an element of the original image.  That’s why noise is more prevalent in the shadows.  The capacitors do their best to record the actual amount of energy received, but things like heat, electromagnetic interference, and simple statistical variance conspire to introduce errors that cause a few or more of the capacitors to register a spike that we see as noise.
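
To put toy numbers on the noise-floor idea (these figures are invented and ignore other noise sources such as photon shot noise):

```python
import numpy as np

rng = np.random.default_rng(2)
noise_floor = 10.0                                      # std-dev of the electronic "hiss"

shadow = 15 + rng.normal(0, noise_floor, 100_000)       # dim light, barely above the floor
highlight = 4000 + rng.normal(0, noise_floor, 100_000)  # bright light, far above it

print("shadow signal-to-noise:   ", round(15 / shadow.std(), 1))      # ~1.5, visibly noisy
print("highlight signal-to-noise:", round(4000 / highlight.std(), 1)) # ~400, noise invisible
```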

Some cameras use an automatic noise reduction mask to eliminate noise generated during very long exposures. This is simply a recording of its own inherent noise pattern, applied inversely to cancel out the noise generated during the initial exposure.
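
A sketch of that long-exposure trick, usually called dark-frame subtraction, with idealised made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
hot_pixels = rng.exponential(2.0, size=(100, 100))   # fixed-pattern noise built up over a long exposure
scene = rng.uniform(0, 255, size=(100, 100))

exposure = scene + hot_pixels        # the actual shot, polluted by the fixed pattern
dark_frame = hot_pixels.copy()       # an equally long exposure taken with the shutter closed

cleaned = exposure - dark_frame      # subtracting the dark frame cancels the pattern
print(np.allclose(cleaned, scene))   # True in this idealised case; real frames only match approximately
```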


by Jerome Pennington | 03 Feb 2006 14:02
Stephen,
Here is an explanation of sensor sizes and noise from Tim Grey (www.timgrey.com). Tim writes on technical issues with a lot of clarity. He has a free daily newsletter answering questions about digital technology and workflow. Here is an example.

Could you comment on the relative resolution to be expected from an 8mp point & shoot, such as the Sony T9, and a DSLR, such as the 8mp Canon 20D? If the area of the sensor is bigger, but both have 8mp, then the larger must have bigger pixels, right? So why aren’t bigger pixels coarser?

==


This is actually a very good question that doesn’t get enough attention in my opinion, simply because there are issues involved that it seems aren’t well understood.


First, let’s consider the issue of resolution from the perspective of pixel count. This is determined by the number of megapixels your digital camera captures. This value determines how much data you have, and therefore how large a print you are able to produce from the image. Figure on average you’re printing an image at 300 dpi. So a 3000×2000 pixel (6 megapixels) image could be printed at 10"x6.6" without any interpolation. And, of course, with interpolation it could be printed larger (potentially much larger). The fact that the pixels are always being printed at the same size on the printer helps explain why the image doesn’t appear more coarse. However, it could contain less detail if the pixels are too large, as I’ll explain in a moment.


So, when considering the number of megapixels, you can generally think of it as determining how big you can print an image at good quality.


Of course, there are other factors involved. These include level of detail, dynamic range, noise levels, and other considerations. So, while you may have enough pixels to make a big print, you might not necessarily have the level of detail or quality you’re hoping for in the image. This is largely (though not exclusively) a matter of the size of the individual pixels in the sensor. Of course, this is where it gets a bit complicated. If the individual pixels (photodiodes on the sensor) are small, there is more detail because each pixel is covering a smaller area of the image, and therefore the image detail will appear less coarse. However, if the pixels are large (and therefore producing a relatively coarse image) then they’ll also be able to capture more light. This reduces noise (because of a more favorable signal-to-noise ratio) and also improves dynamic range (because a photodiode that is bigger can record a higher light value, so the difference between the minimum and maximum values is larger, and thus you have more dynamic range). These quality issues are generally more important than the benefit you might get from smaller pixels.


Therefore, as a general rule, you’ll want a sensor with a large number of pixels (so you can make bigger prints) and also with relatively large photodiodes (so you get less noise and higher dynamic range). This is part of the reason why you can get better images from a digital SLR with a relatively large sensor than you could with a small point-and-shoot with a smaller sensor, even if both have the same number of megapixels. Obviously lens quality and other factors also play a significant role, but the point is that the size of the individual photodiodes on the imaging sensor does have an impact on quality.


So, generally speaking, bigger is better when it comes to both the number of pixels on the imaging sensor and the size of those individual pixels.


----
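
To hang some rough, entirely invented numbers on the trade-off Tim describes, here is a toy Python model (not the specs of any real sensor): a larger photodiode collects more photons and holds more charge, so both signal-to-noise ratio and dynamic range improve.

```python
import math

def photosite(area_um2, photons_per_um2=200, read_noise_e=8, full_well_per_um2=1500):
    signal = area_um2 * photons_per_um2                    # photons collected during the exposure
    snr = signal / math.sqrt(signal + read_noise_e**2)     # photon (shot) noise plus electronic noise
    full_well = area_um2 * full_well_per_um2               # how much charge the site can hold
    dynamic_range_stops = math.log2(full_well / read_noise_e)
    return round(snr, 1), round(dynamic_range_stops, 1)

print(photosite(area_um2=4))    # small point-and-shoot pixel: (27.2, 9.6)  -> noisier, less range
print(photosite(area_um2=40))   # large DSLR pixel:            (89.1, 12.9) -> cleaner, more range
```
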
Lots of stuff to chew on here. Hope this helps.
Hari


by [former member] | 03 Feb 2006 15:02 | Brooklyn, New York, United States
OK, we are cooking now. Thanks, Hari, for your contribution.

I think I’m getting the noise piece, and that a bigger chip allows for bigger photodiodes (also called pixels, receptors, etc.) which capture more light (signal) relative to the basically constant static (noise) level, noise being like a background hum produced by the electrical charge in the system and other vaguenesses. Thus a bigger chip with 10 megapixels has less noise than a smaller chip with 10 megapixels.

These virtues of the bigger diode are worth more than the loss of sharpness that comes from having bigger (coarser) diodes, which capture a coarser area of the scene than do smaller diodes, which produce a finer rendering. Is this the same reason why a fine-grain film, like Velvia or Panatomic-X, is very sharp?

But I am again confused! Is the sharpness that we get with Velvia the same as "resolution"? If so, a camera with a bigger chip will have less resolution than a smaller one with the same number of pixels, just better dynamic range and lower noise, right? Is that why the pictures from my little 7-megapixel point-and-shoot look very sharp, though noisy? Must be more complicated. Damn.

As to the definition of a pixel: Jerome, please clarify one thing. You say that, in the context of a chip, a pixel is "the smallest complete sample of an image." Does that mean the smallest full-color (RGB) reading of the part of the scene that corresponds to that pixel?

Is the "d" of dpi, as in "print at 300 dpi," the same as a pixel?

The reference to the Wikipedia article was helpful, but it gets us into the terrible waters of the algorithms and all that used to build up an image from the sampled info. See, please, http://en.wikipedia.org/wiki/Bayer_filter

Can someone please explain to me what the fuck?

-
I see that in a Bayer array each diode tosses out some light to get its R, G, or B reading (how much light is tossed out, I guess, depends on the color of the light hitting each diode, right?)

So, the algorithms come in to replace the missing light that was tossed out. Otherwise they could not call the sensor a 10-megapixel one, as it is recording less light than 10 megapixels’ worth if they were all full-color diodes, i.e., a black-and-white sensor?

Does that also mean that a 10-million-pixel black-and-white sensor would be awesome (no algorithm needed, lots of info)? Why isn’t that made?

Seems that the algorithms used are key to how good the image is, and so obviously we have to choose our raw converters carefully. Also, it shows how much better RAW is, because for a given shot you could use a variety of converters if you wanted to, rather than be locked into the algorithm of the camera, which does the image processing in camera and that’s it. Or am I wrong?

Is this algorithm activity different from the demosaicing, or is it the same action?

Help.   Stephen


by [former member] | 04 Feb 2006 01:02 (ed. Feb 4 2006) | NYC, United States
I have a related question about chip size and pixels. When I shoot RAW files (D2X, a 12 MP camera) and open them in Adobe Camera Raw, there is an option to change the file size: from the 12.2 MP (4288 × 2848) at which it was shot to 25.1 MP (6144 × 4081). The resulting files go from 34 MB to 71 MB when opened in Photoshop.

 What is happening when I do this? Am I getting more information or simply stretching the info I have?

Stuart Isett
Digital Dunce


by [former member] | 04 Feb 2006 04:02 | Paris, France
dpi and ppi are essentially the same thing. One might say 300 dpi as a print setting (where d is for dots) and 72 ppi for screen applications (where p is for pixel). That’s all.

by Paul Treacy | 04 Feb 2006 09:02 | New York City, United States
