Interlace (Re: [ProgSoc] physical to electronic)
> > A lot of cards, including most inbuilt Mac ones (not
> > from third parties) will only capture every second field. This does not
> > matter if you're capturing a show like Recovery or All Saints, actually.
> > But it means the difference between 30/25fps and 60/50fps.
> Since when did field == frame? You may have 30 frames per second in PAL,
> which is generally 60 fields per second. But fields are interlaced meaning
> you need two fields to make up a single frame.
This is a great oversimplification. Suppose I'm viewing a plain white background
with a round black spot moving over it. On interlace video (i.e. TV) a given
screen refresh will show only half the lines on the background and half the
lines on the moving spot. The next screen refresh will NOT fill in the other
half of the lines because the spot has moved a little bit and as the other
half of the interlace lines are drawn, the spot is drawn in its new position.
The upshot is that combining two interlace fields by merely sticking the
lines between one another does not produce a correct picture unless nothing
in the picture is moving. In fact, an interlace video output never sends the
complete information for a moving picture and depends on your eye to fill in
the gaps.
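The combing effect described above is easy to demonstrate. Here is a minimal sketch (hypothetical frame sizes and positions, not any real capture API) that renders the moving black spot at two moments in time, takes the even lines from the first moment and the odd lines from the second, and weaves them together -- the woven "frame" ends up showing the spot in two places at once:

```python
# Hypothetical sketch: weave two interlaced fields of a moving spot
# and show that the woven frame mixes two moments in time ("combing").

HEIGHT, WIDTH = 8, 16

def draw_spot(x):
    """Render a full frame: white background (0), 2x2 black spot (1) at column x."""
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for row in (3, 4):
        for col in (x, x + 1):
            frame[row][col] = 1
    return frame

# The even field is scanned at time t0; the odd field at time t1,
# by which point the spot has moved two columns to the right.
t0, t1 = draw_spot(4), draw_spot(6)
even_field = {r: t0[r] for r in range(0, HEIGHT, 2)}
odd_field  = {r: t1[r] for r in range(1, HEIGHT, 2)}

# Naive "weave": interleave the two fields line by line.
woven = [even_field[r] if r % 2 == 0 else odd_field[r] for r in range(HEIGHT)]

# Adjacent rows now show the spot in two different positions --
# the incorrect picture described above.
print(woven[3])  # odd line from t1: spot pixels at columns 6-7
print(woven[4])  # even line from t0: spot pixels at columns 4-5
```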
> So where does 60/50 fps come from? Are you suggesting that interlaced
> frames are being captured fully? So each frame is captured after each
> field is displayed? Seems totally pointless and silly.
Even if you are going to knit two interlace fields together (which as I
say above is not the correct thing to do) and call the result a frame you
still need to capture at 60/50 fps because that is the speed that the input
is coming in at. I think that the suggestion was that some digitisers throw
half of the interlace signal away because that is easier than trying to knit
the fields together.
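The two approaches being contrasted here can be sketched in a few lines (illustrative field names only, not a real digitiser interface): dropping every second field gives 25/30 half-height pictures per second with no combing, while weaving pairs of fields gives full-height frames that comb on motion as described above.

```python
# Sketch of the two capture strategies mentioned: drop every second
# field (cheap, halves vertical detail) vs. weave pairs into frames.

fields = [f"field{i}" for i in range(10)]  # arriving at 50 or 60 per second

# Strategy A: keep only one field parity -- 25/30 pictures per second,
# no combing, but each picture has only half the scan lines.
dropped = fields[::2]

# Strategy B: weave consecutive field pairs -- full-height frames at
# 25/30 fps, but moving objects comb because the two fields were
# scanned at different moments.
woven = list(zip(fields[::2], fields[1::2]))

print(len(dropped), len(woven))  # half as many pictures as fields either way
```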
> > MPEG and MPEG2 are designed for output identical to input, whereas MPEG4
> > is for RealVideo types of stuff.
> So MPEG and MPEG2 are lossless?
Absolutely not. JPEG, MPEG and MPEG2 are all lossy compression, but the loss has
been tailored to the way that your eye views the image. The result is that the loss
is difficult to see with a human eye. In theory, an MPEG encoder that is given
interlace video can use the interlace information to its advantage because it
makes a similar compromise -- the still parts of the picture are drawn to higher
detail than the moving parts of the picture -- and the human viewer will still
find it quite acceptable. In practice, this type of encoder would probably only
exist in boards that do the digitisation and encoding all on the one board.
Other systems probably either throw away every second refresh on an interlace
system or knit two refreshes into a single frame, then feed the result to the
MPEG encoding system.
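The compromise described above -- spending detail on the still parts of the picture and less on the moving parts -- can be illustrated with a toy conditional-replenishment encoder. This is a hypothetical sketch of the general idea, not real MPEG: it sends only the blocks that changed since the previous frame, so still regions cost nothing and keep their full detail.

```python
# Toy conditional-replenishment encoder: send only changed blocks.
# Still blocks are skipped entirely; the decoder reuses its old copy.
# (Illustrates the still-vs-moving trade-off, not actual MPEG coding.)

def encode(prev, curr, threshold=4):
    """Return (index, value) pairs for blocks that changed noticeably."""
    out = []
    for i, (p, c) in enumerate(zip(prev, curr)):
        if abs(p - c) > threshold:
            out.append((i, c))  # moving block: transmit the new value
        # else: skipped -- decoder keeps the previous block unchanged
    return out

prev_frame = [10, 10, 10, 10]
curr_frame = [10, 10, 50, 10]  # only block 2 has moved/changed
print(encode(prev_frame, curr_frame))  # [(2, 50)]
```

A real interlace-aware encoder makes the same kind of decision per region, which is why it can get away with the incomplete motion information an interlaced signal carries.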
You are subscribed to the progsoc mailing list. To unsubscribe, send a
message containing "unsubscribe" to email@example.com.
If you are having trouble, ask firstname.lastname@example.org for help.
This list is archived at <http://www.progsoc.uts.edu.au/lists/progsoc/>