#1
I hope we can take it as true that with reasonably modern broadcast kit and reasonable engineering standards both PAL and NTSC will give good results.
There are artefacts with both systems. Some relate to the underlying scan rates; others, such as lurid patterns on fine detail, are a side effect of the NTSC and PAL coding systems. These cross-colour and cross-luminance effects can be minimised by comb filter decoders, which were much simpler for NTSC than PAL and hence much more common in NTSC TVs. NTSC has lower chroma bandwidth, so transitions between highly saturated colours are a little worse in NTSC; this is readily seen on the green/magenta transition on colour bars. PAL and NTSC have different dot crawl effects, primarily visible on monochrome sets. Since the PAL subcarrier is a higher frequency they are probably less visible in PAL. Phase errors should be minimal with decent kit and reasonable engineering.

Now wind the clock back to the early 1960s, or even to the 1950s. It is obvious from the work at Hazeltine labs, Telefunken and others that colour phase problems were of great concern. NTSC broadcast kit needed a lot of engineering attention to give consistent colour, and the TVs weren't much better: you needed a hue control, which can readily be misadjusted by viewers. The idea of colour phase alternation (CPA) as a solution to this was first raised at Hazeltine labs (c1955?) but was judged impractical then. CPA could be done on a dot, line or field basis; the latter 2 were totally out of reach back then. Bruch picked up the CPA idea, did it on a line-by-line basis, and invented PAL.

At the time (late 1950s to mid 1960s) BBC engineers wanted to use NTSC and tried both 405 and 625 line NTSC systems. They reckoned they could work to high enough standards to keep phase errors acceptable, aided of course by much more modern kit than was available in the US in 1954. At the same time the French were pushing SECAM as a solution: totally hideous in the studio and not really capable of being improved by better comb filters and suchlike. PAL was seen as the best answer AT THE TIME.
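The comb-filter point can be sketched numerically. This is a toy model of my own (not any particular decoder design): the NTSC subcarrier sits at 227.5 cycles per line, an odd half-multiple, so its phase inverts from one scan line to the next. Averaging two adjacent lines therefore cancels the subcarrier on vertically uniform picture content, while differencing extracts it:

```python
import numpy as np

# Nominal NTSC numbers: subcarrier = 227.5 x line frequency, and 910 samples
# per line corresponds to the common 4x-subcarrier sampling rate.
CYCLES_PER_LINE = 227.5
SAMPLES_PER_LINE = 910

t = np.arange(SAMPLES_PER_LINE) / SAMPLES_PER_LINE

def scan_line(line_number, luma=0.5, chroma_amp=0.3):
    """One scan line: flat luma plus a modulated subcarrier burst."""
    phase = 2 * np.pi * CYCLES_PER_LINE * (t + line_number)
    return luma + chroma_amp * np.cos(phase)

line0, line1 = scan_line(0), scan_line(1)
luma_est = (line0 + line1) / 2    # comb: subcarrier phases cancel
chroma_est = (line0 - line1) / 2  # comb: the flat luma cancels instead

print(np.max(np.abs(luma_est - 0.5)))  # effectively zero: chroma combed out
```

A PAL comb needs more than one line of delay because of the V-axis switching, which is one reason NTSC comb decoders appeared first.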
Looking back, 625-line NTSC would likely have worked perfectly well. Hindsight is gloriously 20:20 vision. In the US the coming of NTSC colour brought the decision to offset the line and frame rates by a harmless-looking fraction of a percent, to avoid moving the 4.5MHz sound carrier spacing by a similar amount. Who was to know back then the sheer amount of grief that would cause for broadcasters once timecode was invented. Grief that continues to this day, as all the 1080 and 720 systems have widely used options for 59.94Hz and other field rates with a 1000/1001 offset.

The whole PAL/NTSC debate is now well behind us. For some years nobody (I'm sure somebody will find me an example of a small station in Africa that still uses PAL) has been producing new material in a composite format. High quality decoders are available to decode PAL and NTSC to their components with excellent results. Almost nobody is even radiating PAL or NTSC now.
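The timecode grief is easy to quantify. A quick back-of-envelope check (my own arithmetic, not quoted from any standard): "30 fps" NTSC actually runs at 30000/1001 fps, so a timecode that naively counts 30 frames per second drifts against wall-clock time:

```python
from fractions import Fraction

actual_rate = Fraction(30000, 1001)      # ~29.97 frames per second
frames_in_hour = actual_rate * 3600      # frames actually displayed in 1 hour
nominal_count = 30 * 3600                # what a 30 fps timecode would count
drift = nominal_count - frames_in_hour   # timecode runs ahead of the clock

print(float(drift))  # ~107.89 frames per hour
```

Hence drop-frame timecode's rule of skipping 108 frame numbers per hour (two per minute, except in minutes divisible by ten) to stay within a frame or so of real time.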
#2
Actually NTSC is wider: 1.3 MHz vs 1 MHz.
With regard to monitoring in PAL-S... Come to think of it, some monitors had a PAL-S / PAL-D switch for convenient signal evaluation (clever).
#3
Quote:
It is interesting that the new 625-line UK standard had wider bandwidth to accommodate full double-sideband R-Y and B-Y. And in the 625 standard the video-audio carrier spacing was set, as for NTSC, so that the aural carrier would be an integer multiple of the horizontal scan frequency, to facilitate proper chroma-luma interleaving.
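The arithmetic behind that integer-multiple relationship checks out with the published figures. A sketch using the standard constants (the 4.5 MHz NTSC sound spacing was kept from the monochrome standard; the line rate was redefined as 4.5 MHz / 286, and the colour subcarrier placed at an odd half-multiple, 455/2, of the line rate for interleaving):

```python
from fractions import Fraction

sound_spacing = Fraction(4_500_000)          # NTSC vision-sound spacing, Hz
line_rate = sound_spacing / 286              # ~15734.266 Hz (was 15750 mono)
subcarrier = line_rate * Fraction(455, 2)    # ~3.579545 MHz colour subcarrier

print(float(line_rate))   # 15734.265734...
print(float(subcarrier))  # 3579545.45...

# The 625/System B case works with the original line rate untouched:
# 5.5 MHz sound spacing / 15625 Hz line rate = 352 exactly.
print(Fraction(5_500_000, 15625))  # 352
```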
#4
Quote:
SMPTE 170M (1993) gives more or less the same figures as for PAL, but the USB (upper sideband) isn't transmittable in a standard System M channel. 170M notes the earlier NTSC standard, where Q is 2dB down at 0.4MHz. A lot of NTSC coding has been done with narrowband 600kHz U/V axes rather than the complication of I/Q. This is discussed in SMPTE EG27. I would attach a copy but it's SMPTE copyright. Here's a quote from EG27: Quote:
#5
Quote:
http://www.snellgroup.com/documents/...des/edecod.pdf
__________________________________________________
"...there are no modern [NTSC] receivers that utilize the theoretically possible wide-band I demodulation.." What about premium TVs like the RCA 'Dimensia' (touted full chroma bandwidth in ads), Pro-Scan, Sony 'Wega' and the incredible progressive-scan Panasonic Xr-series?
Audiokarma |
#6
Strictly, that's John Watkinson's paper, published by Snell & Wilcox. JW is a very well respected engineer here in the UK. His books include "The Art of Digital Audio" and "The Art of Digital Video"; both are always to hand by my desk.
He covers historic practice as stated in the SMPTE docs and then correctly states that modern NTSC coders often use 1.3MHz chroma. This too is correct: my own designs do, as do many others, and I don't bother to switch filters when changing between PAL and NTSC. This is fine in the studio. However, the upper sideband of a 1.3MHz chroma signal will be heavily mauled by a System M transmitter. Strictly the coders maintain 1.3MHz for U and V, not I and Q, though if U and V are both 1.3MHz, I and Q will be too.

Poynton, in "A Technical Introduction to Digital Video" pp187-190, takes a similar view to Watkinson. He notes that SMPTE 170M encourages the use of wideband (1.3MHz) chroma in the studio, but also says that the practical broadcast chroma bandwidth is only about 600kHz.

The subtleties of I/Q coding have been largely ignored in practice. Most broadcast coders simply encode on the U/V axes and bandwidth-limit before the TX. Hence even a receiver with full chroma bandwidth and I/Q demodulation will not find any benefit on virtually all material. Any claims like this are marketing puff.

Last edited by ppppenguin; 09-08-2014 at 12:30 PM.
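The "if U and V are both 1.3MHz, I and Q will be too" point follows because I/Q are, in a common formulation, just U/V rotated by 33 degrees in the colour plane. A fixed rotation is a frequency-independent linear mix of the two signals, so it cannot create frequencies that weren't in either input. A small sketch (the rotation convention is the usual one, but treat the exact signs as illustrative):

```python
import numpy as np

theta = np.deg2rad(33.0)

def uv_to_iq(u, v):
    """Rotate the (U, V) chroma pair by 33 degrees to get (I, Q)."""
    i = -u * np.sin(theta) + v * np.cos(theta)
    q = u * np.cos(theta) + v * np.sin(theta)
    return i, q

# Any test pair will do: a rotation preserves the chroma vector length
# sample by sample, and likewise preserves the occupied bandwidth.
rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
v = rng.standard_normal(1000)
i, q = uv_to_iq(u, v)
assert np.allclose(i**2 + q**2, u**2 + v**2)
```

So a coder that band-limits U and V equally to 1.3MHz automatically delivers equal-bandwidth I and Q; the I-wide/Q-narrow asymmetry only arises if you deliberately filter on the rotated axes.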
#7
Quote:
What would ATSC chroma resolution (6MHz channel) be vs COFDM chroma resolution (in an 8MHz channel)? Last edited by NewVista; 09-09-2014 at 08:41 AM.
#8
Quote:
ATSC chroma vs COFDM chroma is almost an irrelevant question. Assuming we're talking about standard definition, the input to the coder is in each case a standard "601" 4:2:2 signal as defined in SMPTE 125M or its European equivalent, and the output of the decoder is in the same format. The maximum possible chroma bandwidth is 3.375MHz with a brick-wall filter. This is followed by data compression using MPEG, which usually involves subsampling the signal to 4:2:0.

Finally we get to the significant difference between ATSC and DVB: the channel coding, 8VSB for ATSC and COFDM for DVB. Without going into the differences between them, or the arguments this has caused, it's just a method of carrying a certain bit rate reliably from TX to RX. It has no influence whatsoever on Y or C bandwidth. Apart from the likely decimation of chroma on the vertical axis to make 4:2:0, on still pictures what comes out will be very close to what goes in. Any artefacts will depend on how heavily you compress the data; such artefacts will not normally include any loss of bandwidth. For moving pictures there are additional artefacts which may become visible if too much compression is used. Again, loss of bandwidth just doesn't happen.

Failure of the channel coding produces different effects. In COFDM this is typically the picture freezing and/or breaking up into blocks. I don't know what happens when 8VSB runs out of error correction.

Channel width of 6MHz vs 8MHz is simply a consequence of band planning in the respective countries. It just sets a limit to the bit rate that can be carried using a given channel coding system. I'm not familiar with ATSC, but in DVB several programmes will be carried in each 8MHz channel; these sets of programmes are called multiplexes. The total number depends on how heavily each is compressed and exactly which COFDM modulation scheme is chosen. In COFDM, parameters such as the guard interval can be chosen to give higher bit rate or better ruggedness.
Proponents of 8VSB and COFDM modulation have argued the respective merits of their systems, but provided you can send the bits from TX to RX without pushing the error correction over the edge they will have no effect on the pictures. I don't know all the arguments, but COFDM is inherently rugged in the presence of multipath, while 8VSB needs sophisticated equalisers at the RX which weren't available when it was launched. I think COFDM makes greater demands on TX linearity. The COFDM decoder is more complex as it involves large FFTs; Moore's Law soon dealt with that problem, and when you factor in the equalisers needed by 8VSB that probably evens up the complexity. 8VSB is more resistant to doppler effects if the TX or RX is moving, though that's not usually a problem for domestic TVs.

What is almost certain, but possibly not too important for terrestrial TV, is that COFDM is the most efficient modulation scheme for getting the highest bit rate over a given imperfect channel. It's also very flexible, since parameters such as guard interval, amount of error correction, bits per symbol and number of carriers can be easily varied without changing the TX or RX. This makes it ideal for ADSL.

Last edited by ppppenguin; 09-09-2014 at 01:18 AM.
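The bit-rate-vs-ruggedness trade can be put in numbers. A back-of-envelope sketch using the published DVB-T constants for an 8 MHz channel in 8K mode (treat this as an illustration of the parameter trade-off, not a planning tool):

```python
from fractions import Fraction

DATA_CARRIERS = 6048                # payload carriers in DVB-T 8K mode
TU = Fraction(896, 1_000_000)       # useful symbol duration, seconds (8 MHz)

def dvbt_bitrate(bits_per_symbol, code_rate, guard_fraction):
    """Useful bit rate (bit/s) of one 8 MHz DVB-T multiplex, 8K mode."""
    symbol_time = TU * (1 + guard_fraction)
    # 188/204 is the Reed-Solomon outer-code overhead: 188-byte MPEG
    # transport packets carried in 204-byte RS codewords.
    return (DATA_CARRIERS * bits_per_symbol * code_rate
            * Fraction(188, 204) / symbol_time)

# 64-QAM, rate 2/3, guard interval 1/32 - a common operating point:
rate = dvbt_bitrate(6, Fraction(2, 3), Fraction(1, 32))
print(float(rate) / 1e6)  # ~24.13 Mbit/s

# Trade bit rate for ruggedness: QPSK, rate 1/2, guard 1/4:
rugged = dvbt_bitrate(2, Fraction(1, 2), Fraction(1, 4))
print(float(rugged) / 1e6)  # ~4.98 Mbit/s
```

Same 8 MHz channel, same carriers; only the modulation parameters move, which is exactly the flexibility argument above.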
#9
Quote:
A single-sideband part of the signal produces a quadrature signal component (all frequency components shifted by 90 degrees). A synchronous demodulator will ignore this component if a single SSB signal is transmitted. However, the chroma signal has two signals transmitted at 90 degrees phase difference. Each synchronous chroma demodulator then ignores any quadrature component of its desired chroma component (e.g., R-Y), but sees the quadrature component of the other chroma component (e.g., B-Y).

This is why the original NTSC specs extended only one component (I) into the vestigial sideband region. The I demodulator sees the lower I sideband, and no Q quadrature high frequencies are present because they aren't transmitted; the Q demodulator is narrowband and therefore does not see the quadrature components due to the wideband I signal.

Transmitting wideband on both chroma axes [edit: and then cutting off part of the upper sideband] introduces quadrature distortion of higher frequency chroma, but since receivers are commonly narrowband, they do not see this distorted color detail (or any color detail!). In PAL, some quadrature distortion is tolerable because of the cancellation of phase errors, so more detail can hypothetically be squeezed out of the lower sideband of non-symmetrical chroma sidebands.

Last edited by old_tv_nut; 09-09-2014 at 03:49 PM.
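The crosstalk mechanism can be demonstrated numerically. This is a toy model of my own (not from any spec): modulate one tone onto the cosine (I) axis only, knock out the upper sideband, then demodulate both axes synchronously. With both sidebands present the quadrature (Q) demodulator sees nothing; single-sideband, the I signal leaks into the Q output:

```python
import numpy as np

N = 4096
fc = 512   # "subcarrier", in FFT bins
fm = 64    # modulating tone
t = np.arange(N) / N

i_baseband = np.cos(2 * np.pi * fm * t)
s = i_baseband * np.cos(2 * np.pi * fc * t)  # DSB signal, I axis only

def kill_upper_sideband(x):
    """Zero all spectral content above the carrier (and its mirror)."""
    X = np.fft.fft(x)
    X[fc + 1 : N // 2] = 0
    X[N // 2 + 1 : N - fc] = 0
    return np.real(np.fft.ifft(X))

def demod(x, carrier):
    """Synchronous demodulation: multiply, then crude brick-wall low-pass."""
    P = np.fft.fft(x * carrier)
    P[fc // 2 : N - fc // 2] = 0
    return np.real(np.fft.ifft(P))

q_carrier = -np.sin(2 * np.pi * fc * t)
rms_dsb = np.sqrt(np.mean(demod(s, q_carrier) ** 2))
rms_ssb = np.sqrt(np.mean(demod(kill_upper_sideband(s), q_carrier) ** 2))
print(rms_dsb, rms_ssb)  # DSB: ~0 (no crosstalk); SSB: clearly nonzero
```

The same leakage is what the narrowband Q channel in the original NTSC spec avoids, and what PAL's line-by-line phase alternation helps to cancel.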
#10
Quote:
The NTSC originally pursued CPA because of the limited bandwidth available for the interleaved chroma channel: CPA would facilitate full vestigial-sideband R-Y/B-Y operation with quadrature crosstalk cancellation. Unfortunately the electronic technology still had a long way to go to use CPA effectively, and ultimately vestigial-sideband I and double-sideband Q was adopted. That choice gave reduced chroma bandwidth but produced superior pictures at the time. The NTSC made the right decision: forty years later the electronic technology could use the standard more effectively, and hue errors had become a thing of the past. It is interesting to consider that 50's-designed NTSC sets today display consistently much better pictures than they did when they were new, simply because the signal source is now consistently much better.
Audiokarma |
#11
Quote:
So if I ran a TV station back in the 60's and 70's, I would have low-passed the luma to remove anything above 3MHz, then mixed in the chroma subcarrier and transmitted that, producing far fewer artifacts on viewers' TV sets. People would say that my station looked cleaner... B&W sets made after NTSC color was introduced low-pass filtered the luma as well, so those viewers would not see a lack of fine detail either.
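A minimal sketch of that hypothetical station filter (my own toy design, not any real broadcast chain): a windowed-sinc FIR low-pass with a ~3 MHz cutoff, run at the later 601 studio rate of 13.5 MHz just for convenient numbers. Luma detail above the cutoff, which would otherwise beat against the 3.58 MHz subcarrier, is strongly attenuated:

```python
import numpy as np

FS = 13.5e6      # sample rate, Hz (Rec. 601 luma rate, for convenience)
CUTOFF = 3.0e6   # keep luma below ~3 MHz, clear of the 3.58 MHz subcarrier
TAPS = 101

# Windowed-sinc FIR low-pass design.
n = np.arange(TAPS) - (TAPS - 1) / 2
h = 2 * CUTOFF / FS * np.sinc(2 * CUTOFF / FS * n)  # ideal low-pass kernel
h *= np.hamming(TAPS)                               # taper to tame ripple
h /= h.sum()                                        # unity gain at DC

# Test luma: coarse detail at 1 MHz plus fine detail at 4 MHz.
t = np.arange(2048) / FS
luma = np.cos(2 * np.pi * 1.0e6 * t) + np.cos(2 * np.pi * 4.0e6 * t)
filtered = np.convolve(luma, h, mode="same")
# The 1 MHz detail passes almost untouched; the 4 MHz detail is removed,
# so nothing is left to cross-talk into the chroma band.
```

Only after this filtering would the chroma subcarrier be added, so no luma energy lands in the chroma interleave region in the first place.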