Discuss Project Report
Very helpful notes on video codecs. I'm not looking for details on the
whole set of notes, but they explain things you may not understand
about MPEG.
DVI - Digital Visual Interface
CCD imager
Video tips:
- Video overlay - analog & digital on the same display
- Overscan & underscan
- Progressive
- Interlaced
- RGB & NTSC colors
- Single-pixel lines & TVs: bad idea
Computer monitors typically (unless low quality) do not interlace
an image. A TV displays the odd and even lines of a frame separately.
This was due to the fact that when early TVs drew the full frame at
once, the image would begin to fade from the phosphors on the screen.
With alternating even & odd scan lines, the fading wasn't as
noticeable. These 2 sets of lines are called fields: odd & even fields.
Interlacing blends the two fields together.
Since computer monitors are non-interlaced, if you want your videos
rendered on the computer for display on TV you need to do field
rendering to digitally create the interlacing of fields. A frame rate
of 30 fps (actually 29.97 fps) with 2 interlaced fields per frame gives
a field rate of 60 (59.94) fields per second.
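The field arithmetic can be sketched in a few lines of Python (a "frame" here is just a toy list of scan lines, and the function name is made up for illustration):

```python
# Split a progressive frame into its two interlaced fields.
# A "frame" here is just a list of scan lines (toy data, not real video).

def split_fields(frame):
    """Return (odd_field, even_field), counting scan lines from 1
    the way TV line numbering does."""
    odd = frame[0::2]   # lines 1, 3, 5, ... (Python indices 0, 2, 4, ...)
    even = frame[1::2]  # lines 2, 4, 6, ...
    return odd, even

# 2 fields per frame at 30 fps gives the 60 fields/second field rate.
frames_per_second = 30
fields_per_frame = 2
print(frames_per_second * fields_per_frame)  # 60
```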
Most computers can't handle displaying full-screen uncompressed
video at 30 frames per second. A single 640 x 480 frame at 24-bit
color depth takes up nearly 1 MB. At 30 fps, that's nearly 30 MB for
1 second of video. A 32x CD-ROM can only read 4.8 MB per second.
To see a movie you must see rapidly changing images on screen; this
gives the illusion of moving objects. 30 fps (frames per second) is
typical for motion video (29.97 fps on the PC). However, 15 fps is also
adequate.
For uncompressed video, the formula for the file size of 1 second of
animation is:
frames/second x image size x color depth (in bytes) = file size
So running 30 fps at 640 x 480 and 256 colors (8 bits = 1 byte):
30 x 307200 x 1 = 9,216,000 bytes, or 9.216 MB for 1 second of
animation
As you can see, video compression is greatly needed. Ways of
reducing the file size are reducing the frame size, color depth,
and frame rate. 15 fps at 320 x 240 and 256 colors is adequate.
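The file-size formula drops straight into code; this sketch (the function name is just for illustration) reproduces both numbers:

```python
def uncompressed_size(fps, width, height, bytes_per_pixel):
    """frames/second x image size x color depth (in bytes) = file size."""
    return fps * width * height * bytes_per_pixel

# 30 fps at 640 x 480 and 256 colors (1 byte per pixel):
print(uncompressed_size(30, 640, 480, 1))  # 9216000 bytes, ~9.2 MB
# 15 fps at 320 x 240 and 256 colors:
print(uncompressed_size(15, 320, 240, 1))  # 1152000 bytes, an 8x saving
```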
Compression:
There are 2 types of compression.
- Lossless - preserves the exact image throughout compression and
decompression.
- Lossy - eliminates some of the data in the image, providing greater
compression ratios than lossless compression.
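To make the lossless idea concrete, here is run-length encoding, one simple lossless scheme (the notes don't name a specific one); the round trip gives back exactly the pixels that went in:

```python
def rle_encode(pixels):
    """Collapse runs of identical pixel values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1     # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back into the original pixel list."""
    return [v for v, n in runs for _ in range(n)]

line = [255, 255, 255, 255, 0, 0, 128]
assert rle_decode(rle_encode(line)) == line  # lossless: exact image preserved
```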
CODECs are used to enCODe / DECode a file. Without the same CODEC
installed, users can't play the files you compress.
Wavelet compression - representing an image as a set of mathematical
expressions.
Lossy is good for video, since small dropouts in moving images are not
noticeable.
H.264
Video formats: QuickTime (MOV) for the Mac, Audio Video Interleaved
(AVI) for Windows.
Interleaving means blending audio data with video data and other data
so that the sound remains synchronized with the video.
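A toy sketch of the idea: tag each audio and video chunk with a timestamp and merge the two streams in time order. This is only the ordering concept, not a real AVI or QuickTime container; the chunk names are made up.

```python
# Hypothetical chunks: (timestamp in seconds, chunk name).
video = [(0 / 30, "V0"), (1 / 30, "V1"), (2 / 30, "V2")]  # 30 fps frames
audio = [(0 / 60, "A0"), (1 / 60, "A1"), (2 / 60, "A2")]  # audio blocks

# Interleave by timestamp so sound stays next to the video it belongs with.
interleaved = sorted(video + audio, key=lambda chunk: chunk[0])
print([name for _, name in interleaved])
# ['V0', 'A0', 'A1', 'V1', 'A2', 'V2']
```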
The QuickTime format can have various tracks of data, such as multiple
language tracks. The Sorenson codec is very good.
You can get a QuickTime production package to create videos in the
QuickTime format (or convert from QuickTime to another format).
AVIs & MOVs don't require any special hardware to play back the
files.
Compression ratios
Integrated software applications or drivers, called codecs, take care
of this dilemma for you. These drivers compress the video file into a
smaller file with little compromise to the quality of the image. There
are 2 types of compression:
- Spatial - compresses the space required to store each individual
frame. Much like JPEG, it uses lossy compression: you trade file size
for image quality.
- Temporal - (also known as frame differencing) uses similarities
between individual frames over time to remove repetitive data. Only
the changes between frames are stored. A reference frame is a
keyframe; frames between keyframes are called extrapolated frames.
Typically keyframes are stored once every half or whole second of
video. The more keyframes, the larger the file size, but the better
the quality of the extrapolated frames.
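Frame differencing can be sketched in a few lines: store a full keyframe every N frames and, in between, only the pixels that changed. Frames here are toy lists of pixel values, and the function names are made up for illustration.

```python
def encode(frames, keyframe_interval):
    """Store a full keyframe every `keyframe_interval` frames; between
    keyframes, store only (index, new_value) pixel changes."""
    encoded, prev = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            encoded.append(("key", list(frame)))  # full reference frame
        else:
            delta = [(j, v) for j, (u, v) in enumerate(zip(prev, frame)) if u != v]
            encoded.append(("delta", delta))      # changed pixels only
        prev = frame
    return encoded

def decode(encoded):
    """Rebuild every frame by applying deltas to the last keyframe."""
    frames, current = [], None
    for kind, data in encoded:
        if kind == "key":
            current = list(data)
        else:
            current = list(current)
            for j, v in data:
                current[j] = v
        frames.append(current)
    return frames
```

A slowly changing scene produces tiny deltas, which is where the compression comes from; more keyframes mean a bigger file but less accumulated error.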
Different codecs handle the compression in two kinds of ways:
- Symmetric compression - compresses the data at the same rate that it
takes to decompress it.
- Asymmetric compression - compresses the data at a slower rate than
it takes to decompress it. This usually means greater compression
ratios.
Deltas in a video
MPEGs: Keyframes & Macroblocks
DVDs
(Digital Versatile Discs) are a hardware format that defines the
physical process by which data is written to the disc. The software,
or digital information, on the disc contains multiplexed audio, video,
text, and other data for display on the screen. MPEG-2 uses a JPEG
method for storing each keyframe. Frames in between the keyframes
store only the differences from frame to frame. This process of
storing the difference between frames is called delta-frame encoding.
"MPEG-2 can represent interlaced or progressive video sequences,
whereas MPEG-1 is strictly meant for progressive sequences since the
target application was Compact Disc video coded at 1.2 Mbit/sec." MPEG2-FAQ
MPEG-4 - "Wavelet-based MPEG-4 files are smaller than JPEG or
QuickTime files, so they are designed to transmit video and images over
a narrower bandwidth and can mix video with text, graphics and 2-D and
3-D animation layers."
Movie editors - Adobe Premiere & After Effects. Rearrange
clips, do special effects, titles, etc.
- Digitizing boards
- Digital Video (DV)
Morphing of images
Digital TV - How Digital TV
Works. A camera rasterizes the scene by turning the image into a
series of pixels. Horizontal & vertical sync signals define the ends
of lines or frames. Digital TV will be pure digital all the way to the
display, unlike digital satellite TV, where the signal is converted to
analog for display.
HDTV - High definition television. This will replace NTSC
as the video standard for the US. "An NTSC TV image has 525 horizontal
lines per frame (complete screen image). These lines are scanned from
left to right, and from top to bottom. Every other line is skipped.
Thus it takes two screen scans to complete a frame: one scan for the
odd-numbered horizontal lines, and another scan for the even-numbered
lines. Each half-frame screen scan takes approximately 1/60 of a
second; a complete frame is scanned every 1/30 second. This
alternate-line scanning system is known as interlacing. Like SECAM, PAL
scans the cathode ray tube (CRT) horizontally 625 times to form the
video image. NTSC scans 525 lines. Color definitions between the
systems vary slightly." PAL & SECAM are non-US standards.
HDTV will use:
- MPEG video
- CD-quality audio with Dolby Digital sound
- Progressive scanning (each scan includes every line for a complete
picture) for computer interoperability, up to 60 fps @ 1920 x 1080
pixels
- Packet-based transmission that permits any combination of video,
audio, and data
Standard TV uses a screen ratio of 4:3. HDTV can use a 16:9 ratio for
widescreen shows.
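For a fixed number of scan lines, the two ratios work out like this (integer math, so the 16:9 width comes out one pixel short of the 854 often used in practice):

```python
height = 480
standard_width = height * 4 // 3  # 4:3 gives 640
wide_width = height * 16 // 9     # 16:9 gives 853 (854 is common in practice)
print(standard_width, wide_width)  # 640 853
```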
SDTV - Standard Definition Television.
"Standard definition television (SDTV) is a
digital television (DTV) format that provides a picture quality similar
to digital versatile disk (DVD). SDTV and high definition television
(HDTV) are the two categories of display formats for digital television
(DTV) transmissions, which are becoming the standard. HDTV provides a
higher quality display, with a vertical resolution display from 720p to
1080i (p is progressive, i is interlaced) and higher and an aspect
ratio (the width to height ratio of the screen) of 16:9, for a viewing
experience similar to watching a movie. In comparison, SDTV has a range
of lower resolutions and no defined aspect ratio. New television sets
will be either HDTV-capable or SDTV-capable, with receivers that can
convert the signal to their native display format. SDTV, in common with
HDTV, uses the MPEG-2 file compression method.
Because a compressed SDTV digital signal is smaller than a
compressed HDTV signal, broadcasters can transmit up to five SDTV
programs simultaneously instead of just one HDTV program. This is
multicasting. Multicasting is an attractive feature because television
stations can receive additional revenue from the additional advertising
these extra programs provide. With today's analog television system,
only one program at a time can be transmitted."
Vertical-Blank Interval - This is the gap in between the frames of a
TV picture. Closed captioning makes use of it. However, you can send
more than just text; you can send data. Intel invented Intercast to
transmit web pages and other information behind the scenes of a TV
show. WebTV allows you to have interactive TV shows where you can play
games or get facts about what you are watching.
Get Windows Media Encoder to do screen captures with voiceovers.
Print Screen in Windows:
To capture the contents of a window, click on it so it's active and
then press ALT+PRINT SCREEN.
To capture the contents of the screen, press PRINT SCREEN.
Now go to Paint or any other image program and paste the image to
print it out. Paint is found in Start/Programs/Accessories/Paint.