#1
Interlacing and channel bandwidth
I've come across a curious statement in Albert Abramson's Zworykin: Pioneer of Television. Discussing the invention of odd-line interlacing to reduce flicker, he says "In addition [to reducing image flicker] doubling the field rate, which cut the number of lines in each field in half, afforded a considerable saving in channel bandwidth." (p. 118) But why would it? The number of lines scanned per second doesn't change, only the order does, and presumably this has no effect on the number of picture elements per line. Donald Fink in Television Engineering doesn't mention this as an advantage. On the contrary, he says "It must be understood that increasing the downward...velocities to twice the values they would have in progressive scanning does not mean that any more lines are scanned in the complete pattern." (p. 47, 1st edition, 1940) What am I missing here?
__________________
One Ruthie At A Time
#2
That is wrong; the bandwidth is unaffected by the scanning pattern.
I just hate it when someone is allowed to publish without intelligent editing.
#3
Two hypothetical progressive-scan cases where the horizontal pixel resolution and the number of horizontal lines in a complete image on the CRT are the same as in the interlaced system:

Case 1: If the 525 lines are scanned progressively 60 times a second, the term "field rate" would have no meaning except as a vertical deflection rate. The frame rate would double from 30 Hz to 60 Hz, the horizontal frequency would double from about 15.75 kHz to about 31.5 kHz, and the required pixel rate or pixel clock, a form of bandwidth representation, would also double if the horizontal resolution were to remain the same. One solution would be to use more RF bandwidth. This would be similar to the non-interlaced 640x480 VGA 'standard' running at about 60 Hz vertical and 31.5 kHz horizontal.

Case 2: If the 525 lines are scanned progressively 30 times a second, "field rate" again would have no meaning except as a vertical deflection rate. The frame rate would remain 30 Hz, the horizontal frequency would remain about 15.75 kHz, and the required pixel clock, for the same horizontal resolution, would remain the same. There would likely be an annoying flicker due to the 30 Hz vertical rate. It would be like a 24-frame film theater, except that a CRT display is a bit brighter, so the effect would be more pronounced, partly offset by the slightly higher 30 Hz vertical rate. One solution would be to use longer-persistence phosphors.

The interlacing scheme therefore represents a compromise between two factors: 1) the pixel clock frequency and 2) the refresh rate for a given volume of pixels. In most analog video systems, MHz corresponds to pixels per interval:

resolution = pixels / time
bandwidth = information / time

The NTSC scheme interlaces half the image every 1/60 second; it takes 1/30 second to present the full frame's information. Case 1 above presents all of the information in 1/60 second; case 2 presents the information in 1/30 second but in a progressive manner. The author is correct in what he seems to have meant, but not in what was said. He may not have explained it completely or properly.
It is possible that the clarity of his statement relies upon information presented elsewhere in the volume. If what I have said is wrong, I am willing to consider rebuttals or corrections.
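The arithmetic behind the two cases can be checked with a quick sketch in Python. The 640-pixel horizontal resolution is an assumed illustrative value, and blanking intervals are ignored, so the numbers are back-of-envelope rather than exact broadcast figures:

```python
# Back-of-envelope pixel-rate comparison of the scanning schemes discussed
# above. Illustrative NTSC-like values, not exact broadcast specs.

def pixel_clock(lines_per_frame, frames_per_sec, pixels_per_line):
    """Approximate pixel rate in Hz (blanking intervals ignored)."""
    return lines_per_frame * frames_per_sec * pixels_per_line

LINES = 525
PIXELS = 640  # assumed horizontal resolution

interlaced_30  = pixel_clock(LINES, 30, PIXELS)  # 2 fields of 262.5 lines, each in 1/60 s
progressive_60 = pixel_clock(LINES, 60, PIXELS)  # case 1: whole frame every 1/60 s
progressive_30 = pixel_clock(LINES, 30, PIXELS)  # case 2: whole frame every 1/30 s

print(f"interlaced, 30 frames/s : {interlaced_30 / 1e6:.2f} Mpixel/s")
print(f"progressive, 60 frames/s: {progressive_60 / 1e6:.2f} Mpixel/s")
print(f"progressive, 30 frames/s: {progressive_30 / 1e6:.2f} Mpixel/s")
```

Interlace and 30 Hz progressive need the same pixel rate; 60 Hz progressive needs twice as much. The bandwidth "saving" only appears when the comparison is against a 60-field progressive system, which is presumably the comparison the author had in mind.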
__________________
Timeless Information for Retro-Tech Hobbyists and Hardware Hackers No Kowtow
Last edited by Opcom; 07-30-2012 at 08:51 PM.
#4
Isn't that the very reason TPTB chose interlace: to halve the channel size?
#5
Interlace improves the TRADEOFF between flicker and bandwidth and spatial resolution; so the author's statement is correct - it just mentions one side of this three-legged stool.
By the way, movie projectors do not operate at 24 Hz because the flicker would be intolerable even at lower brightness. They always (at least) double-shutter to get a 48 Hz flicker rate.
Audiokarma
#6
For stationary pictures, as Opcom has explained, interlace does halve the bandwidth needed for a given resolution and refresh rate*. For moving pictures it's more complex. Yes, there is better temporal resolution for moving objects, though vertical resolution in those objects is reduced. Also when you try to de-interlace the picture, as required for LCD panels etc, you soon find out that it's not easy to do well.
It is now simple to do the TV equivalent of the multiblade shutters used in movie projectors; framestores were a long way in the future in the 1930s.

*It's not exactly half. There are additional artefacts caused by interlace that give a lower perceived vertical resolution than you might expect. The Kell factor is used to quantify this.
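As a rough illustration of that footnote: the Kell factor scales the line count down to an effective vertical resolution. A minimal sketch, where the 0.7 value and the 480/576 active-line counts are commonly quoted illustrative figures, not exact specifications:

```python
# Effective vertical resolution after applying a Kell factor.
# The 0.7 default is a commonly quoted value for interlaced CRT display;
# it is an empirical, subjectively determined figure, not a hard constant.

def effective_vertical_resolution(active_lines, kell_factor=0.7):
    """Perceived vertical resolution in TV lines."""
    return active_lines * kell_factor

print(effective_vertical_resolution(480))  # ~480 active lines (525-line system)
print(effective_vertical_resolution(576))  # ~576 active lines (625-line system)
```

So a nominal 480-line picture delivers only on the order of 340 lines of perceived vertical resolution, which is why interlace does not deliver "exactly half" the bandwidth for a given apparent sharpness.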
#7
Quote:
When non-interlaced sampling was considered (the horizontal pixel sampling in digital versions of 525- and 625-line systems, or the vertical resolution of a progressively scanned system), a higher factor could be applied. For the horizontal sampling, SMPTE and ITU standardized on filters that are 3 dB down at 0.85 of the Nyquist rate. With these specs, the system was judged to be transparent to the analog signal. The limiting resolution is probably about 0.9 of Nyquist.
#8
AFAIK there's nothing very scientific about the Kell factor. As oldtvnut says, it's determined subjectively without a great deal of theoretical backup. My feeling, FWIW, is that it depends significantly on the vertical scanning aperture. In tube cameras this is Gaussian, which gives a falling vertical spatial-frequency response, usually corrected, at least partially, by vertical aperture correction. Likewise in CRT receivers, but without such correction. LCD displays and CCD cameras have a very different vertical aperture, much squarer.
The Kell factor has been used to justify decisions about choosing horizontal bandwidth with respect to the number of lines. Has any good experimental work been done with modern cameras and displays? In any case, all the HD systems we use have square pixels, so if there is still a Kell effect there is a shortfall of vertical resolution.
#9
Quote:
If you mean that the display elements have square shapes, then, yes, this implies a certain vertical and horizontal spatial-frequency response, different from that of a Gaussian CRT spot.
#10
Quote:
As oldtvnut correctly points out, in all sampling theory the idealised sample is infinitesimal in length (a Dirac delta function, if anyone is that interested). In TV this is generalised to two dimensions rather than one. Practical pixels have a finite size and shape. For LCD displays and CCD cameras this ideally approaches a square having the same dimensions as the pixel spacing. This gives a zero-order hold function and hence a loss of HF response on both axes which follows a sin(x)/x curve.

The point I am trying to make is that the assumptions which underpin the Kell factor stem from the days when horizontal scanning was a continuous function while vertical scan was sampled. These assumptions may well not apply when the picture is inherently sampled at the sensor on both axes.

As a thought experiment, consider a sensor and/or display where each pixel can be individually addressed. They can then be read or written in an arbitrary sequence*. I can conceive that this might affect motion portrayal (motion above a very slow rate is aliased in TV systems), but I cannot see how it might affect our perception of horizontal and vertical resolution. Hence the Kell factor of a progressively scanned system using modern techniques should be unity. I may have overlooked something here. For example, unless there is some kind of optical filter before the sensor there can be horizontal and vertical aliasing. Or there may be performance problems of the sensor that affect the axes differently.

*In doing this thought experiment I was influenced by BBC Research Report 1991/4, "Image Scanning using a Fractal Curve" by John Drewery. http://www.bbc.co.uk/rd/publications..._1991_04.shtml John Drewery had a superb understanding of scanning, sampling and spectra. Back in about 1975 I remember him demonstrating the 3-dimensional spectrum of TV signals (PAL in this case) using some wonderful models that he had the BBC Research Dept workshop make from pieces of coloured PTFE. Nowadays this would be done with computer graphics.
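The sin(x)/x loss mentioned above can be put in numbers. A minimal sketch, assuming a square pixel aperture exactly equal to the pixel pitch (the idealised zero-order-hold case):

```python
import math

def zoh_response(f_over_fs):
    """Magnitude response of a full-width square pixel (zero-order hold):
    |sin(pi * f/fs) / (pi * f/fs)|, where fs is the pixel sampling frequency.
    Assumes the pixel aperture equals the pixel pitch."""
    x = math.pi * f_over_fs
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Response at the Nyquist limit (f = fs/2):
nyq = zoh_response(0.5)
print(f"ZOH response at Nyquist: {nyq:.4f} ({20 * math.log10(nyq):.1f} dB)")
```

At the Nyquist limit the response is 2/pi, roughly 0.64 or about -3.9 dB, on each axis. This is why some aperture correction can still be worthwhile even with square pixels on both camera and display.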
Last edited by ppppenguin; 08-03-2012 at 01:11 AM.
#11
There was some talk a while back that Europe, by delaying HD adoption, intended to avoid any interlaced format in their new (1080?) standard. What became of this?
#12
Quote:
For a full explanation of 3-dimensional spectra resulting from scanning, I also recommend an out-of-print book by Pearson: http://www.amazon.com/Transmission-D...n+transmission
#13
Another note: Dr. Schreiber at MIT proposed random scan (a random pixel sequence) as part of a high-definition TV system in the late 80s/early 90s. With proper frequency pre-emphasis and de-emphasis, channel degradations would appear as an increased noise level near edges, where it would be masked by the human visual system. Still images looked promising, but I don't recall if full high-def motion was ever achieved. Such partially analog systems (Zenith also proposed one) were overtaken by the development of all-digital systems using MPEG compression.
#14
Quote:
#15
Quote:
Always wondered why HDTV cameras are 2K? Naive question?