#1
Yet another way in which the powers that be want to screw folks who haven't bought the latest Chinese POC, and/or who like old shows.
This is probably meant to keep folks from making a decent analog-loophole bootleg of OTA-broadcast content, while also forcing them to buy a copy of the program to see it in any reasonable quality.
__________________
Tom C. Zenith: The quality stays in EVEN after the name falls off! What I want. --> http://www.videokarma.org/showpost.p...62&postcount=4
#2
I've been recording analog TV programs since the transition, and have had no problems with picture quality. My cable carries all local TV stations in SD and HD, and, again, I see no loss of image quality (except that the picture does not fill the screen on my 19" flat panel TV) when viewing the SD feed. The tapes are perfectly watchable when I play them back on my Panasonic VCR; in fact, I just recently watched a tape I had made of an episode of "Criminal Minds" on ION TV (channel 23 in my area). Again, no loss of picture quality that I could see when I watched the program.
__________________
Jeff, WB8NHV Collecting, restoring and enjoying vintage Zenith radios since 2002 Zenith. Gone, but not forgotten.
#3
Could you provide a reference for this SMPTE recommendation, please?
I work primarily with uncompressed SDI interfaces. In the SMPTE documents that specify these (125M, 292M, 424M etc.), the standard colour depth is 10 bit, meaning 10 bits per component per pixel: 10 bits each for Y, Pb and Pr.

When handling the signals as parallel data (before serialisation or after deserialisation), the interface can be specified as either 10 bit (with Y and C multiplexed on a single 10-bit bus) or 20 bit (with Y and C on separate 10-bit buses).

When compressing signals for transmission, the MPEG-2 or MPEG-4/H.264 encoder uses at least 8 of the 10 bits per pixel per component, depending on the profile.
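To make the multiplexing concrete, here is a minimal Python sketch of the two parallel forms. The Cb/Y/Cr/Y word order follows the usual SMPTE/ITU parallel convention for 4:2:2; the function names and sample values are just for illustration.

```python
def multiplex_422(y, cb, cr):
    """Interleave 10-bit 4:2:2 components onto a single 10-bit bus.

    Word order: Cb0, Y0, Cr0, Y1, Cb1, Y2, Cr1, Y3, ...
    (one Cb/Cr pair for every two luma samples).
    """
    assert len(y) == 2 * len(cb) == 2 * len(cr)
    words = []
    for i in range(len(cb)):
        words.extend([cb[i], y[2 * i], cr[i], y[2 * i + 1]])
    return words

def demultiplex_20bit(words):
    """Split the muxed stream into the 20-bit form: two 10-bit buses."""
    chroma_bus = words[0::2]  # Cb0, Cr0, Cb1, Cr1, ...
    luma_bus = words[1::2]    # Y0, Y1, Y2, Y3, ...
    return chroma_bus, luma_bus

# Four luma samples and two Cb/Cr pairs -> eight 10-bit words on the mux bus
y = [0x3AC, 0x040, 0x200, 0x1FF]
cb, cr = [0x200, 0x1C0], [0x200, 0x240]
muxed = multiplex_422(y, cb, cr)
assert muxed == [0x200, 0x3AC, 0x200, 0x040, 0x1C0, 0x200, 0x240, 0x1FF]
```

Either way the payload is the same 20 bits per luma sample; the 10-bit and 20-bit forms differ only in how the words are clocked out.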
#4
I had read about that SMPTE recommendation on a reputable page on the Internet, but I didn't save a copy at the time, so I don't know the specific details.
I think they use 10 bits per channel when they produce HDTV programs for ATSC, but when they broadcast the programs they use 8 bits per channel (24 bits total). I know they also use 8 bits per channel when they broadcast programs in DVB-T and DVB-T2 where you live in England. Actually, they do the same thing there on Freeview: they broadcast the standard-definition PAL programs at a much lower bit depth than the HD programs.
#5
AFAIK all the MPEG2 encoders used for SD programs in the UK use 8 bits per component. The HD MPEG4/H264 encoders may use 10 bits rather than 8 but I have no evidence. The improvement when moving from 8 to 10 bits is very marginal on real pictures. It can of course be seen on special test signals such as shallow ramps.
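The shallow-ramp case is easy to quantify. A rough sketch (my own illustration, not from any actual encoder): count how many distinct codes a very gentle brightness ramp, rising about 1.5% of full scale across one 1920-pixel line, produces at each bit depth.

```python
WIDTH = 1920  # pixels across one line

def levels_on_ramp(bits):
    """Distinct codes produced by a ramp rising ~1.5% of full scale."""
    full_scale = (1 << bits) - 1
    codes = {round((0.50 + 0.015 * x / WIDTH) * full_scale)
             for x in range(WIDTH)}
    return len(codes)

# At 8 bits the ramp collapses into a handful of wide bands;
# the two extra bits at 10 bit give four times as many steps.
print(levels_on_ramp(8), levels_on_ramp(10))  # -> 4 16
```

On real pictures, noise and detail mask those steps, which is why the 8-to-10-bit improvement is hard to see outside test signals.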
PAL is no longer used in the UK, and quite likely not anywhere else in Europe. Any material originated in PAL would be decoded to components before being used. Even an inexpensive single-chip decoder can work remarkably well, leaving very few artifacts. The BBC and others have developed sophisticated 3D comb decoders that can convert PAL to component with essentially no artifacts.

Quite a few people say "PAL" when they really mean 625-line, 50 Hz, just as some say NTSC when they mean 525-line, 59.94 Hz. Likewise, a lot of people say YUV when they really mean YCbCr or YPbPr. U and V are, strictly speaking, the weighted colour difference components immediately before being modulated onto a colour subcarrier; U and V have no place in a component system.

I cannot believe that anyone anywhere is using 3 bits per component at the input to the encoder. The pictures would be unbelievably, impossibly bad. Many years ago I experimentally digitised composite PAL video at 1 bit per pixel. As you would expect, the pictures (in monochrome) looked truly horrible. The surprise was that in highly coloured areas the PAL subcarrier acted as a dither signal and made the picture vastly better. This was a demonstration of how you can trade quantising errors (limited bit depth) for noise. You can make a 3-bits-per-component picture without the posterisation errors you would expect simply by adding enough random noise. It will then look like a snowstorm.
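That quantising-error-for-noise trade can be sketched in a few lines of Python (the 3-bit depth and the sample value are just illustrative):

```python
import random

BITS = 3
LEVELS = (1 << BITS) - 1   # 7: a 3-bit component has only 8 codes
STEP = 255 / LEVELS        # one quantisation step, in 8-bit units

def quantize(v):
    """Quantise an 8-bit value to 3 bits and scale back to 8-bit range."""
    v = min(255, max(0, v))
    return round(round(v / 255 * LEVELS) / LEVELS * 255)

value = 100  # sits between the 3-bit codes 73 and 109

# Without dither: posterisation -- every pixel of this shade lands on 109.
assert quantize(value) == 109

# With random dither one quantisation step wide, individual pixels flicker
# between 73 and 109, but their average converges on the true value:
# the quantisation error has been traded for noise.
random.seed(1)
samples = [quantize(value + random.uniform(-STEP / 2, STEP / 2))
           for _ in range(20_000)]
avg = sum(samples) / len(samples)
assert abs(avg - value) < 2  # the "snowstorm" averages out to ~100
```

The eye (or a neighbourhood average) does the integration, which is exactly what the PAL subcarrier was doing by accident in the 1-bit experiment.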
#6
I would note also, however, that 8 bits per component can show visible artifacts on some picture material when viewed on a high-contrast-capable display in a dimly lit room. If you want to produce shows that use the kind of dynamic range long used in the movies to contrast indoor/outdoor and day/night, the extra bits help. Viewers have gradually been trained (over a period of decades) to expect higher quality in terms of noise level and contrast range. The TASO study that set the signal-to-noise requirements for analog TV coverage in the '50s would probably get a one-grade-lower result (on a 5-step scale) today.
#7
I'm getting a kick out of watching REAL NTSC. (Don't have to travel to SA, the Caribbean, Mexico, Canada or Burma to see the real thing!)