Sound is recorded by sampling the amplitude (volume) of a signal over time.

Sound formats:

Flash uses MP3 to store its audio, but it can import WAV, MP3, AIFF, and other formats.

MP3 uses perceptual encoding to reduce file size: it discards information that you cannot hear.


Event vs. Streaming of sounds:

Event sounds load completely into memory when their frame is accessed, before playing. This is good for short clips, such as popup sounds, and best for sounds that repeat later on, since they are already in memory. However, synchronizing audio with animation may not work well if your file plays back on a faster or slower machine.

Streaming sounds begin playing before they have fully finished loading into memory. This is not good for sounds you'll reuse in your program, as they must be loaded each time. Streaming locks the audio track to the timeline, which helps when lip-syncing audio.


Pick music to match the mood of your project.


A SoundChannel lets you play multiple sounds at the same time.

Pages 242-244 show loading of an external sound file.


Audio calculations for an uncompressed audio file:

sample rate x length in seconds x (bit resolution / 8) x number of channels

10 seconds of 22 kHz audio in mono sound is:

22,000 x 10 x 8/8 x 1 = 220,000 bytes
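The calculation above can be sketched as a small helper (the function name is illustrative, not from the text):

```python
def uncompressed_audio_bytes(sample_rate_hz, seconds, bit_depth, channels):
    # sample rate x seconds x (bit resolution / 8) x channels
    return sample_rate_hz * seconds * (bit_depth // 8) * channels

# 10 seconds of 22 kHz, 8-bit, mono audio:
print(uncompressed_audio_bytes(22_000, 10, 8, 1))  # 220000 bytes
```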

Audio compression is generally lossy (you lose data).


Computer monitors typically (unless low quality) do not interlace an image. A TV displays the odd and even lines of a frame separately. This is due to the fact that when early TVs drew the full frame at once, it would begin to fade from the phosphors on the screen; alternating odd and even scan lines made the fading less noticeable. These two sets of lines are called fields: the odd field and the even field. Interlacing blends the two fields together.

Since computer monitors are non-interlaced, if you render video on the computer for display on a TV, you need to do field rendering to digitally create the interlaced fields. A frame rate of 30 fps (actually 29.97 fps) with 2 interlaced fields per frame gives a field rate of 60 (59.94) fields per second.


Most computers can't handle displaying full-screen uncompressed video at 30 frames per second. A single 640 x 480 frame at 24-bit color depth takes up nearly 1 MB. At 30 fps, that's nearly 30 MB for 1 second of video, while a 32x CD-ROM can only read about 4.8 MB per second.

To see a movie, you must see rapidly changing images on screen; this gives the illusion of motion. 30 fps (frames per second) is typical for motion video (29.97 fps for NTSC). However, 15 fps is also adequate.

For uncompressed video, the file size of 1 second of animation is:

frames/second x image size (in pixels) x color depth (in bytes) = file size

So running 30 fps at 640 x 480 with 256 colors (8 bits = 1 byte):

30 x 307200 x 1 = 9,216,000 bytes or 9.216 MB for 1 second of animation
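The same arithmetic, as a quick sketch (the function name is illustrative, not from the text):

```python
def uncompressed_video_bytes(fps, width, height, color_depth_bytes):
    # frames/second x image size x color depth (in bytes)
    return fps * width * height * color_depth_bytes

# 30 fps at 640 x 480 with 256 colors (1 byte per pixel):
print(uncompressed_video_bytes(30, 640, 480, 1))  # 9216000 bytes
```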

As you can see, video compression is badly needed. You can reduce the file size by reducing the frame dimensions, color depth, and frame rate. 15 fps at 320 x 240 with 256 colors is adequate.
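To see how much those reductions help, here is a quick comparison using the same size formula (function name is illustrative):

```python
def uncompressed_video_bytes(fps, width, height, color_depth_bytes):
    # frames/second x image size x color depth (in bytes)
    return fps * width * height * color_depth_bytes

full    = uncompressed_video_bytes(30, 640, 480, 1)  # 9,216,000 bytes/sec
reduced = uncompressed_video_bytes(15, 320, 240, 1)  # 1,152,000 bytes/sec
print(full // reduced)  # 8 -- the smaller settings cut the data rate 8x
```

The reduced data rate (about 1.15 MB per second) also fits comfortably within the 4.8 MB/s read speed of a 32x CD-ROM.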

Compression:

There are 2 types of compression: lossless (no data is discarded) and lossy (some data is discarded).

CODECs are used to enCODe and DECode a file. Without the same CODEC installed, users can't play the files you compress.


Video for Flash:

FLV and F4V are the Flash video formats; they play in Flash Player, Adobe Bridge, or Adobe Media Player. The video is streamed and buffered for the visitor of a site watching a Flash video.


Using the Adobe Media Encoder:

You can import video from: AVI, DV, MPG, MOV, and WMV.

You can output your video to FLV, F4V, or H.264 (MP4).

H.264 can play on many devices and cell phones.