  #1  
Old 09-20-2022, 06:47 PM
DVtyro
VideoKarma Member
 
Join Date: Sep 2022
Posts: 137
How to read VCR signal layout

Hello! I am new here! I am interested in all things video, in particular old analog video recorders and cameras, as well as digital camcorders. I have a YouTube channel, but I don't want to advertise it now, as it may seem way too amateurish to the members.

Several of my recent videos have been about analog VTRs, and the deeper I get into why and how some of them are better or worse than others, and what the differences are, the deeper I get into their specs and into how they work. I've never studied electronics, so some things are hard to understand, especially when several things happen at once, like one signal modulating another, with all of it unfolding in time.

Reading a bit about analog broadcast TV, I think I understand where the bandwidth requirement comes from: it is basically the number of elements ("pixels") per line, times the number of lines, times the number of frames per second, so about 6, 7 or 8 MHz depending on the system. Do I understand correctly that on the picture below each particular frequency describes a pixel, and the amplitude describes its brightness? So, basically, the picture below can define a whole second of broadcast video, or in other terms, a whole frame? (Or field, whatever; not important at this point.)

[image: broadcast TV spectrum diagram]
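(A rough back-of-the-envelope sketch of that element-count reasoning, in Python; the numbers are nominal NTSC-style figures, treated as assumptions, and blanking is ignored, so real systems differ somewhat.)

Code:
# Rough video bandwidth estimate: elements per line x lines x frames per second,
# with one sine-wave cycle covering one black/white pair of elements.
lines_per_frame = 525        # nominal NTSC line count (assumed for this sketch)
frames_per_second = 30
aspect_ratio = 4 / 3         # picture width / height

elements_per_line = aspect_ratio * lines_per_frame            # "square pixels" across the width
elements_per_second = elements_per_line * lines_per_frame * frames_per_second
bandwidth_hz = elements_per_second / 2                        # two elements per cycle

print(f"approx. {bandwidth_hz / 1e6:.1f} MHz")                # ~5.5 MHz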
Moving on to VTRs, I have a question about how to read the signal graph of a typical "color-under" VTR:

[image: "color-under" VTR spectrum diagram]
Does it describe a whole frame/field, or a single pixel? From what I've read, the brightness of a pixel is described not by amplitude but by frequency, so the range from the luminance tip (black) frequency to the peak-white frequency describes the brightness range. So a range between, say, 3.5 MHz and 4.5 MHz, which is 1 MHz, describes the whole range of brightness... for a single pixel? For a pair of pixels? And what is the luminance sideband for in this case?

On the other hand, I read about, say, VHS that it has a deviation of 1 MHz but a luma range of 3 MHz. OK, what does that tell me? Elsewhere I read that for analog television the resolution can be calculated as roughly 80 LWPH per 1 MHz, so 3 MHz means 240 lines; that adds up. But how exactly do I read this information off the graph? If I divide 1,000,000/480/30 I get about 70, which is close to 80. Is that where this number comes from? But in that case the bandwidth describes one scanline. So can the graph be considered a "photo" of one scanline sweep, where sweeping from the left edge of the screen to the right corresponds to frequencies from the left edge of the luminance band to the right? Is this correct? But then how is brightness described for all of these pixels, if brightness is encoded as a frequency in the 3.5 to 4.5 MHz region, so I can have only one frequency there at any moment in time? I am lost here. If someone can explain it in layman's terms, I would be very grateful!
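(Here is a small Python sketch of where the "~80 lines per MHz" rule of thumb comes from; the ~10.9 µs horizontal blanking figure used here is a nominal assumption.)

Code:
# TV lines of horizontal resolution per MHz of luma bandwidth (NTSC-style numbers).
line_period = 1 / 15734.26            # one full scan line, ~63.6 us
active_line = line_period - 10.9e-6   # subtract nominal horizontal blanking (assumption)

cycles_per_MHz = 1e6 * active_line          # cycles of a 1 MHz signal that fit in the visible line
lines_per_width = 2 * cycles_per_MHz        # each cycle = one dark + one light "line"
lines_per_height = lines_per_width / (4/3)  # resolution is usually quoted per picture height

print(f"~{lines_per_height:.0f} TV lines per picture height per MHz")   # ~79

On that basis a 3 MHz luma band gives roughly 3 x 79, which is about 240 lines, matching the figure above.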

Another thing related to these graphs is the increase in resolution in the "hi-band" formats; this is why I started looking into these graphs in the first place. For formats like SVHS, Hi8, SuperBeta and Umatic SP, some of the following things were done:
  • moving the Y carrier up, which in all cases improved resolution. Was the resolution improved because the overall width of the luminance band, from the left edge that touches the chroma to the very right, was extended?
  • another thing: in some formats the luminance carrier deviation was increased. Do I understand correctly that this only affects the contrast? Or, in other words, it affects dynamic range, considering that more "steps" are available? If this is true, why is the increase in carrier deviation always mentioned together with the increase in luminance resolution? The resolution does not depend on the deviation per se; it depends on the overall width of the luminance band, which is expanded if the Y carrier is moved up and the deviation is increased... I see, so it is kind of a secondary effect. But the deviation per se does not improve the resolution?
  • And another thing: in the Sencore document I read that SuperBeta moved the Y carrier up but did not expand the deviation compared to the original format, and because of that older Beta machines could play SuperBeta tapes. Is that true? Was SuperBeta specifically designed for backwards compatibility? Why did moving the Y carrier up and widening the whole luminance band keep SuperBeta compatible, while moving the Y carrier up and widening the deviation did not keep SVHS compatible with VHS? Was the extra 0.6 MHz of deviation more, um, destructive for backward compatibility than widening the whole luminance band?
  • This one is more opinion-based than technical, but still: would SuperBeta have been a better option than SVHS or Hi8, considering that SuperBeta was close to broadcast TV in resolution yet compatible with older machines? If JVC had chosen the same approach, could we have had something in between VHS and SVHS, but backwards compatible with VHS? Could Hollywood studios have made prerecorded tapes in such a format, playable on older VHS machines?

Thanks!
  #2  
Old 09-20-2022, 08:02 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
You need to learn the difference between an electrical signal vs time and its spectrum of frequencies.

If you look at a video signal with an oscilloscope, you can pick out the time sequence of frames, fields, lines, and the particular spot on the screen left to right.

The diagrams you posted are frequency spectra, and represent a long term average of the different frequencies in the signals. In general, a lower frequency (towards the left) represents a coarse repeating pattern of lines or dots in the picture, and higher frequencies represent finer patterns. When such a signal is modulated on a carrier, it generates new frequencies spaced up and down from the carrier frequency.
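(To see those "new frequencies spaced up and down from the carrier" concretely, here is a small Python/NumPy sketch of a generic FM signal; the 4 MHz carrier, 0.5 MHz modulating tone and 1 MHz deviation are illustrative numbers only, not any particular tape format.)

Code:
import numpy as np

fs = 50e6                        # sample rate
t = np.arange(0, 2e-4, 1 / fs)   # 200 microseconds of signal
fc, fm, dev = 4.0e6, 0.5e6, 1.0e6

# Single-tone FM: phase = 2*pi*fc*t + (dev/fm) * sin(2*pi*fm*t)
x = np.cos(2 * np.pi * fc * t + (dev / fm) * np.sin(2 * np.pi * fm * t))

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Energy appears not just at the carrier but at fc +/- fm, fc +/- 2*fm, ...
for offset in (0, fm, 2 * fm, 3 * fm):
    upper = spectrum[np.argmin(np.abs(freqs - (fc + offset)))]
    lower = spectrum[np.argmin(np.abs(freqs - (fc - offset)))]
    print(f"fc +/- {offset/1e6:.1f} MHz: {upper/spectrum.max():.2f} / {lower/spectrum.max():.2f}")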
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany

  #3  
Old 09-20-2022, 08:06 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
Quote:
Originally Posted by DVtyro View Post
Was the resolution improved because the overall width of the luminance band, from the left edge that touches the chroma to the very right, was extended?

Basically, yes. But the luminance in tape machines is FM modulated, so it also reduced the noise level.
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #4  
Old 09-20-2022, 08:10 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
Quote:
Originally Posted by DVtyro View Post
another thing: in some formats the luminance carrier deviation was increased. Do I understand correctly that this only affects the contrast? Or, in other words, it affects dynamic range, considering that more "steps" are available? If this is true, why is the increase in carrier deviation always mentioned together with the increase in luminance resolution? The resolution does not depend on the deviation per se; it depends on the overall width of the luminance band, which is expanded if the Y carrier is moved up and the deviation is increased... I see, so it is kind of a secondary effect. But the deviation per se does not improve the resolution?

The FM modulation of the luminance in video tape is called "narrow band" because the FM deviation is of the same order as the bandwidth of the video baseband. Compare this to FM radio, where the deviation is larger than the baseband audio frequencies.

The overall bandwidth of the video tape FM luminance is a combination of the baseband signal bandwidth and the deviation. If the carrier is too low in frequency, the combination can suffer distortion.
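(A common rule of thumb for that combined bandwidth is Carson's rule, B ≈ 2 × (peak deviation + highest baseband frequency). A small sketch with illustrative numbers; the tape figures below are rough assumptions, not an exact spec.)

Code:
def carson_bandwidth(peak_deviation_hz, max_baseband_hz):
    """Carson's rule estimate of FM bandwidth."""
    return 2 * (peak_deviation_hz + max_baseband_hz)

# Broadcast FM radio: ~75 kHz deviation, ~15 kHz audio -> deviation dominates ("wideband" FM)
print(carson_bandwidth(75e3, 15e3) / 1e3, "kHz")    # 180 kHz

# Tape-style luminance FM (illustrative): ~0.5 MHz peak deviation, ~3 MHz video baseband
# -> the baseband dominates, which is why it is called narrow-band FM.
print(carson_bandwidth(0.5e6, 3e6) / 1e6, "MHz")    # 7 MHz (the upper sideband is largely cut off in practice)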
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #5  
Old 09-20-2022, 08:49 PM
DVtyro
VideoKarma Member
 
Join Date: Sep 2022
Posts: 137
Quote:
Originally Posted by old_tv_nut View Post
You need to learn the difference between an electrical signal vs time and its spectrum of frequencies.
I know. This is what I am asking for help with. If you can point to the graph and say "this is this" and "this is that", I will be grateful.

Quote:
Originally Posted by old_tv_nut View Post
The diagrams you posted are frequency spectra, and represent a long term average of the different frequencies in the signals. In general, a lower frequency (towards the left) represents a coarse repeating pattern of lines or dots in the picture, and higher frequencies represent finer patterns. When such a signal is modulated on a carrier, it generates new frequencies spaced up and down from the carrier frequency.
The diagrams I posted are different from one another.
  • The TV broadcast spectrum is U(f), where U is the strength at a particular f, so I suppose the whole band can describe the level for, say, 6 million samples, which is a whole frame lasting, say, 1/30 of a second. Then for the next 1/30 of a second I would have a different snapshot of the 6 MHz bandwidth; do I get it right?
  • On a VTR, brightness is coded as frequency, not amplitude, I get it. But I thought that brightness is confined between the tip of the carrier and the end of its deviation. So, say, brightness is between 3.5 and 4.5 MHz. If this is correct, what is to the left of the tip?
Quote:
Originally Posted by old_tv_nut View Post
The FM modulation of the luminance in video tape is called "narrow band" because the FM deviation is of the same order as the bandwidth of the video baseband. Compare this to FM radio, where the deviation is larger than the baseband audio frequencies.
So what is the deviation for, and what is the baseband for? Is the baseband to the left of the tip point, or is it all of the luminance including the deviation?

Quote:
Originally Posted by old_tv_nut View Post
The overall bandwidth of the video tape FM luminance is a combination of the baseband signal bandwidth and the deviation. If the carrier is too low in frequency, the combination can suffer distortion.
How is that? Although I think I've read somewhere about why the carrier was chosen to be as high as possible; I need to re-read it.

  #6  
Old 09-21-2022, 10:55 AM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
It's very difficult to give you a complete course in signal and modulation theory of video recording (or in general) in this forum. Give me some time and I'll try to find some books that are still available that you could buy.

What references do you have now?
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #7  
Old 09-21-2022, 11:07 AM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
The diagrams you posted show the range of frequencies which MAY be present. What energy is actually in those bands depends on the picture content.

First diagram:
In the narrow band FM of the luminance signal, the slowly varying parts of the baseband signal produce the range of frequencies from 3.5 to 4.5 MHz, as you stated. But modulation by higher frequencies (fine detail) produces the luminance sidebands shown. You really need college-level math (Bessel functions) to determine what these are. If the system does not pass these sidebands, the fine detail will be filtered out.

https://www.johndcook.com/blog/2016/...-an-fm-signal/
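(For what it's worth, the Bessel-function result is easy to evaluate numerically even if the math behind it is heavy. A small SciPy sketch for generic single-tone FM; the modulation index below is just an illustrative narrow-band value, not a format spec.)

Code:
from scipy.special import jv   # Bessel function of the first kind, J_n

# For single-tone FM, the sideband n*fm away from the carrier has relative
# amplitude J_n(beta), where beta = deviation / modulating frequency.
beta = 1.0 / 3.0   # e.g. ~1 MHz deviation against a 3 MHz detail frequency (narrow-band, assumption)
for n in range(4):
    print(f"sideband order {n}: relative amplitude {abs(jv(n, beta)):.3f}")

# With a small beta nearly all the energy is in the carrier and the first-order
# sidebands, so only the first sideband pair really shows up in these diagrams.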
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #8  
Old 09-21-2022, 01:58 PM
DVtyro
VideoKarma Member
 
Join Date: Sep 2022
Posts: 137
@old_tv_nut, frankly, my math is too rusty to get into it again. I do not aim to understand these graphs at the level of a video engineer, but I would like to be able to compare two machines by looking at their graphs. After all, these graphs were published in all the pop-sci magazines, and a regular reader was supposed to understand them. All I want is a conceptual understanding.

Please, see the attached picture.
  • Deviation from sync tip to peak white is used for brightness - yes/no?
  • Deviation has no relation to resolution - yes/no?
  • Wider deviation means more contrast, means more S/N - yes/no?
  • Moving the sync tip to higher frequencies, even if deviation is not changed, extends luma bandwidth, which increases luma resolution - yes/no?
  • Does the luminance sideband to the LEFT of the sync tip describe the level of detail? Or does the whole luminance band, INCLUDING the deviation, describe the level of detail? Does higher frequency mean more detail?
  • Is the detail described by the luminance band for a "pixel", for a line, for a whole screen? The graphs I posted show a wiggly line between sync tip and peak white; I presume this is the third dimension, time. Sync tip is a pulse for the start of each line, right? So I can draw this graph in three dimensions: (f, U, t). The simplified 2D graph that I posted before in (f, U) coordinates is a snapshot of one "pixel", how much detail it has, how much frequency information. The higher the luminance frequency, the more high-frequency detail, that is, more alternating white and black pixels? Um, so I guess I cannot get it for one pixel, maybe for like two adjacent pixels? Not sure...
  • Were Beta VCRs more compatible from the start compared to VHS, or is moving the carrier to higher frequencies without widening the deviation inherently more compatible? If JVC did the same, would regular VHS machines be able to play such an improved format? Because from what I've read, old Beta machines could play SuperBeta recordings.
  • Umatic, including Umatic SP, uses the same "color under" technique, and looking at the graphs, it is no better than SVHS and Hi8. I suppose the only reason Umatic was preferred by pros was that they had all the needed interconnects, could connect to a TBC, had timecode, etc. But the sheer image quality of an original recording was no better than on SVHS and Hi8, yes/no?
  • I suppose regular VHS was never used by pros. Well, maybe by some, at some small regional stations, but never at a large scale, and there were no standard VHS machines that had all the pro features needed to interconnect with other pro equipment?
Attached Images: super-formats.png
  #9  
Old 09-21-2022, 03:43 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
Answers inline below each question.
Quote:
Originally Posted by DVtyro View Post
@old_tv_nut, frankly, my math is too rusty to get into it again. I do not aim to understand these graphs at the level of a video engineer, but I would like to be able to compare two machines by looking at their graphs. After all, these graphs were published in all the pop-sci magazines, and a regular reader was supposed to understand them. All I want is a conceptual understanding.

Please, see the attached picture.
  • Deviation from sync tip to peak white is used for brightness - yes/no?
    Yes.
  • Deviation has no relation to resolution - yes/no?
    Yes.
  • Wider deviation means more contrast, means more S/N - yes/no?
    Yes; it means a larger signal from the FM detector.
  • Moving the sync tip to higher frequencies, even if deviation is not changed, extends luma bandwidth, which increases luma resolution - yes/no?
    Yes.
  • Does the luminance sideband to the LEFT of the sync tip describe the level of detail?
    Yes; the level of the sideband is determined by the level of detail.

    Or does the whole luminance band, INCLUDING the deviation, describe the level of detail? Does higher frequency mean more detail?
    Frequencies further to the left represent finer detail. Note that FM normally produces symmetrical sidebands above and below the range of deviation. In video recorders, the upper sidebands are suppressed due to frequency limitations of the recording process (head gap, etc.). This asymmetrical-sideband result is not ideal, but the distortions can be small enough not to be of concern.
  • Is the detail described by the luminance band for a "pixel", for a line, for a whole screen? The graphs I posted show a wiggly line between sync tip and peak white; I presume this is the third dimension, time. Sync tip is a pulse for the start of each line, right? So I can draw this graph in three dimensions: (f, U, t). The simplified 2D graph that I posted before in (f, U) coordinates is a snapshot of one "pixel", how much detail it has, how much frequency information. The higher the luminance frequency, the more high-frequency detail, that is, more alternating white and black pixels? Um, so I guess I cannot get it for one pixel, maybe for like two adjacent pixels? Not sure...
    Here you have run into mathematical Fourier analysis theory. A single frequency in the frequency domain represents a sine wave that is continuous for all time, past and future. So the spectrum conceptually is the result of measuring the electrical signal over infinite time, and therefore includes all frames, lines, and pixels. Practically speaking, of course, we measure over shorter, convenient time periods. A typical use of a spectrum analyzer might do the measurement over a few lines or a frame. This means that we can have the short-term spectrum update every few lines or every frame. The mathematical result is that the spectrum resolution gets a bit blurred, in inverse proportion to the measurement time (see the sketch after this list).
  • Were Beta VCRs more compatible from the start compared to VHS, or is moving the carrier to higher frequencies without widening the deviation inherently more compatible? If JVC did the same, would regular VHS machines be able to play such an improved format? Because from what I've read, old Beta machines could play SuperBeta recordings.
    I don't know. You will have to search further, maybe in IEEE literature.
  • Umatic, including Umatic SP, uses the same "color under" technique, and looking at the graphs, it is no better than SVHS and Hi8. I suppose the only reason Umatic was preferred by pros was that they had all the needed interconnects, could connect to a TBC, had timecode, etc. But the sheer image quality of an original recording was no better than on SVHS and Hi8, yes/no?
    I don't know. It depends on the details of the comparison. If U-matic had a higher tape speed and/or wider tracks, the signal-to-noise ratio would be better.
  • I suppose regular VHS was never used by pros. Well, maybe by some, at some small regional stations, but never at a large scale, and there were no standard VHS machines that had all the pro features needed to interconnect with other pro equipment?
    I don't know, but I suspect even small cable head-end public video facilities mostly used U-matic. There can be other considerations besides video bandwidth, such as time-base correction to meet FCC broadcast standards.
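(Following up on the Fourier/measurement-time point above, a quick sketch of how the spectrum "blur" scales with the measurement window; the line and frame times are nominal NTSC figures.)

Code:
# Frequency resolution of a spectrum measurement is roughly 1 / (measurement time).
measurements = [
    ("one scan line", 63.5e-6),   # seconds (nominal)
    ("one frame", 1 / 29.97),
    ("one second", 1.0),
]
for label, T in measurements:
    print(f"measured over {label:>13}: resolution ~ {1 / T:,.0f} Hz")
# one scan line -> ~15,748 Hz; one frame -> ~30 Hz; one second -> 1 Hz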
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany

  #10  
Old 09-21-2022, 05:21 PM
DVtyro
VideoKarma Member
 
Join Date: Sep 2022
Posts: 137
Quote:
Originally Posted by old_tv_nut View Post
Frequencies further to the left represent finer detail. Note that FM normally produces symmetrical sidebands above and below the range of deviation. In video recorders, the upper sidebands are suppressed due to frequency limitations of the recording process (head gap, etc.). This asymmetrical-sideband result is not ideal, but the distortions can be small enough not to be of concern.
Ah, interesting! So the upper sideband is suppressed, and the deviation range sits there instead? The lower sideband then describes the detail? I see. Thanks!
  #11  
Old 09-21-2022, 08:20 PM
Electronic M
M is for Memory
 
Join Date: Jan 2011
Location: Pewaukee/Delafield Wi
Posts: 14,808
You're asking way too many questions at once. It's sensory overload reading it all.

There are some major fundamental things you're missing if you are describing analog TV and video information in terms of pixels. There are no pixels in analog TV, and they aren't a great way to quantify analog video system performance.
Television is a raster, which is a stack of scan lines (525 of them). A scan line is a continuous horizontal stripe of phosphor and analog information; there are no defined pixel-like boxes. The brightness is infinitely variable and can be any value at any physical location on the line. This is super apparent on a monochrome-only TV. Color TVs have the illusion of pixels, but no actual pixels... Basically, to achieve color, since you can't have one single phosphor make all 3 primary colors with independent control, most color TVs have 3 different colors of phosphor arranged in a dot or stripe grid. The grid is NOT pixels, but merely a necessary side effect of needing 3 different color phosphors. They basically try to size the grid small enough that it won't noticeably limit detail (the 15GP22 and the CRT in the GE Portacolor were actually not small enough to prevent a reduction in picture quality). Projection sets would use 3 monochrome CRTs, as would field-sequential sets, which would rapidly show 3 complete video frames on a monochrome CRT with a spinning wheel with 3 different color filters in front of it, such that on the green color frame the green color filter would be in front of the tube and tint its light green, the blue filter would be in front of the screen when the blue video field was on screen, etc.

Because a video dark-to-bright transition can be anywhere on the line, digitizing that video can actually reduce resolution: the pixels in the digitizer may not line up with the transitions in a way that can capture them, or there may not be as many pixels as transitions, so there isn't a box to put some of the transitions into.


The horizontal line frequency is 15,750 Hz, so a horizontal line takes (time in seconds) = 1/(frequency in Hz)... minus the duration of a complete horizontal sync pulse. If you convert the non-sync portion of the horizontal line time back to a frequency, F = 1/T, you can divide the video frequency by that visible-line frequency and get the number of black/white transitions (the number of vertical lines that can be displayed) at that video frequency. (I think I have that math right; correct me if I'm wrong.)
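(Working that arithmetic through with nominal numbers, as a Python sketch; the ~10.9 µs figure for the non-visible portion of the line is an assumption.)

Code:
line_freq = 15750                      # horizontal line frequency, Hz
line_time = 1 / line_freq              # ~63.5 us per complete line
blanked = 10.9e-6                      # non-visible (sync + blanking) portion, assumption
active = line_time - blanked           # ~52.6 us of visible line

video_bw = 3e6                         # e.g. a ~3 MHz luma bandwidth
cycles = video_bw * active             # full cycles across the visible line
transitions = 2 * cycles               # each cycle = one light + one dark "line"

print(f"~{transitions:.0f} alternating light/dark lines across the picture width")       # ~316
print(f"~{transitions / (4/3):.0f} per picture height (how resolution is usually quoted)")  # ~237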

There are probably other things I could explain, but I've already forgotten 75% of what you wrote.
__________________
Tom C.

Zenith: The quality stays in EVEN after the name falls off!
What I want. --> http://www.videokarma.org/showpost.p...62&postcount=4
  #12  
Old 09-21-2022, 09:57 PM
DVtyro
VideoKarma Member
 
Join Date: Sep 2022
Posts: 137
Quote:
Originally Posted by Electronic M View Post
You're asking way too many questions at once. It's sensory overload reading it all.
Then read slower and take pauses.
Quote:
Originally Posted by Electronic M View Post
There are some major fundamental things you're missing if you are describing analog TV and video information in terms of pixels. There are no pixels in analog TV, and they aren't a great way to quantify analog video system performance.
Television is a raster, which is a stack of scan lines (525 of them). A scan line is a continuous horizontal stripe of phosphor and analog information; there are no defined pixel-like boxes.
Of course, there are. Here, two pixels are defined by one period.



How is TV bandwidth calculated? Suppose the picture elements are square and the frame is H high and L wide, with N lines. Each line then has LN/H elements, a.k.a. pixels. Because there are N lines, the total number of elements is LN^2/H. One period is good for two elements ("pixels"), so we need LN^2/(2H) periods per frame. The frequency should be LN^2*n/(2H), where n is the frame rate. Correcting for the 4/3 display aspect ratio we get 4*N^2*n/(3*2) Hz. Say, for NTSC we get 4*525^2*30/6 = 5.5 MHz.
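(That arithmetic checks out; here is a one-liner sketch for anyone following along. As noted later in the thread, broadcast NTSC actually allocates only about 4.2 MHz to luminance, so the practical figure is lower than this ideal one.)

Code:
# Ideal "square element" bandwidth: (L/H) * N^2 * n / 2
N, n, aspect = 525, 30, 4 / 3
print(f"{aspect * N**2 * n / 2 / 1e6:.2f} MHz")   # ~5.51 MHz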

Quote:
Originally Posted by Electronic M View Post
The grid is NOT pixels, but merely a necessary side effect of needing 3 different color phosphors.
I am not interested in phosphors and CRT tubes.

Quote:
Originally Posted by Electronic M View Post
Because a video dark-to-bright transition can be anywhere on the line, digitizing that video can actually reduce resolution: the pixels in the digitizer may not line up with the transitions in a way that can capture them, or there may not be as many pixels as transitions, so there isn't a box to put some of the transitions into.
Not interested in digitizing either.

Quote:
Originally Posted by Electronic M View Post
The horizontal line frequency is 15,750 Hz, so a horizontal line takes (time in seconds) = 1/(frequency in Hz)... minus the duration of a complete horizontal sync pulse. If you convert the non-sync portion of the horizontal line time back to a frequency, F = 1/T, you can divide the video frequency by that visible-line frequency and get the number of black/white transitions (the number of vertical lines that can be displayed) at that video frequency. (I think I have that math right; correct me if I'm wrong.)
Right, elements. Or element pairs on EACH LINE, so they are effectively pixels (from "PIcture ELement").
  #13  
Old 09-21-2022, 10:27 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
"Pixels" implies a fixed location along the scanning line of the light or dark spot. In analog TV, the location is not fixed and can vary location left or right infinitesimally. Calling them pixels is misleading.

Analog TV resolution is specified in lines (two per black/white pair) per picture height or per picture width. Due to the bandwidth limit of NTSC broadcast, the system can produce about 440 lines per picture width, that is, 220 black spots and 220 white spots. Because the resulting spots can be infinitesimally placed leftward or rightward, the sampling rate of a visually equivalent digital signal needs to be higher than the Nyquist rate (twice the limiting analog frequency) to prevent visible aliasing patterns. Standard definition digital video recording uses a sampling rate of 13.5 M samples per second.
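(The numbers in that paragraph tie together roughly like this; the ~52.6 µs active line time is a nominal figure assumed for the sketch.)

Code:
active_line = 52.6e-6                  # visible portion of one scan line, seconds (nominal)
luma_bw = 4.2e6                        # approximate NTSC broadcast luminance limit

print(f"~{2 * luma_bw * active_line:.0f} lines per picture width")   # ~442, i.e. about 440

# Rec. 601 standard-definition sampling at 13.5 MHz vs. the bare Nyquist minimum of 2 x 4.2 MHz:
print(f"13.5 MHz is ~{13.5e6 / (2 * luma_bw):.1f}x the Nyquist rate")  # ~1.6x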
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #14  
Old 09-21-2022, 10:42 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
By the way, the scanning line structure of the analog picture samples the vertical resolution frequencies. 480 active lines (in NTSC) gives a practical limit of 330 lines per picture height of resolution without aliasing (and interlace flicker) getting too objectionable. This is a balanced spatial resolution for a 4x3 aspect ratio picture:
440 lines per width is the same physical spacing as 330 lines per height.
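(The arithmetic behind those figures, assuming the commonly quoted Kell factor of about 0.7 discussed in the thread linked in the next post.)

Code:
active_lines = 480
kell_factor = 0.7                      # assumption: typical quoted value

per_height = active_lines * kell_factor
print(f"~{per_height:.0f} lines per picture height")                         # ~336, i.e. about 330
print(f"~{per_height * 4 / 3:.0f} lines per picture width at the same spacing")  # ~448, i.e. about 440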
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany
  #15  
Old 09-21-2022, 10:46 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,215
Extensive discussion of the above factors:
http://videokarma.org/showthread.php...ht=kell+factor
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany