Analog video is long gone, and I don’t think we need to argue that point. The production industry began switching to digital video in the late 1980s, and everyone, except a niche of creative artists, uses digital video every day. In this article, we are going to examine the building blocks of video: what actually makes up those moving images on your screen?
Let’s start with the big picture. Digital video was spawned out of analog video and they share the same base properties. Analog video delivers these properties as continuous electrical signals, while digital video delivers them as binary code, a string of 1s and 0s.
Take a look at what data is recorded every time you pull out your smartphone to take a video of your cat. It is broken down into three parts: the video as a whole, each individual frame, and each individual pixel.
A video needs a way to reference individual moments, so every video has a time code. The standard format in America is hours:minutes:seconds:frames. E.g., 01:20:30:04 is frame 4 of the thirtieth second of the twentieth minute of the first hour of the video.
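As a small sketch of how that format works, here is a Python function that converts a zero-based frame index into an hours:minutes:seconds:frames timecode. It assumes a whole-number frame rate; real drop-frame timecode for 29.97/59.94 fps broadcast is more involved and not handled here.

```python
def frame_to_timecode(frame_index: int, fps: int) -> str:
    """Return the non-drop-frame timecode for a zero-based frame index."""
    frames = frame_index % fps          # leftover frames within the current second
    total_seconds = frame_index // fps  # whole seconds elapsed
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# At 30 fps, frame 144,904 lands at the example timecode from above:
print(frame_to_timecode(144904, 30))  # 01:20:30:04
```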
Videos are pictures shown at high speeds to give the illusion of movement. Frame rate states how many pictures are displayed each second. The greater the number, the smoother the motion looks. The standard for broadcasters in America is currently 59.94 fps (frames per second).
Video isn’t complete without sound. Digital audio properties, such as channel count, bit depth, sample rate, and format type, are also stored within digital video.
Width & Height
Notes the number of pixels in each row and column of the video. For example, 1080p’s resolution is actually 1080 rows of 1920 pixels. Do the math and you have over two million pixels per frame! We also get aspect ratio info from this, 1080p being the standard 16:9.
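The arithmetic above is easy to verify. This short Python sketch computes the pixel count and reduces the width and height to their simplest aspect ratio:

```python
from math import gcd

def frame_stats(width: int, height: int):
    """Total pixels per frame and the reduced aspect ratio as a string."""
    pixels = width * height
    d = gcd(width, height)  # greatest common divisor reduces the ratio
    return pixels, f"{width // d}:{height // d}"

# 1080p: 1920 columns x 1080 rows
print(frame_stats(1920, 1080))  # (2073600, '16:9')
```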
Interlaced vs Progressive
Interlacing is a method used to reduce the amount of data required to transfer the video. It separates the odd rows and the even rows of a frame into two fields and displays them alternately. This halves the pixels transmitted at any instant without destroying quality completely. The progressive method transmits all rows of every frame, resulting in more pixels and more data.
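The row-splitting idea can be illustrated in a few lines of Python. Treating a frame as a simple list of rows (a stand-in for real image data), slicing with a step of 2 separates it into the two fields:

```python
def split_fields(frame):
    """Split a frame (a list of rows) into its two interlaced fields."""
    top_field = frame[0::2]     # rows 0, 2, 4, ...
    bottom_field = frame[1::2]  # rows 1, 3, 5, ...
    return top_field, bottom_field

frame = ["row1", "row2", "row3", "row4"]
top, bottom = split_fields(frame)
print(top)     # ['row1', 'row3']
print(bottom)  # ['row2', 'row4']
```

Each field carries half the rows, which is exactly why interlacing halves the data sent per refresh.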
Bit Depth
The total amount of data recorded for each pixel. Imagine each bit as a placeholder for data input; the more bits you have per pixel, the more values you can represent and thus the greater the quality.
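The relationship between bits and values is a power of two: n bits can represent 2^n distinct levels. A quick Python sketch:

```python
def values_per_channel(bit_depth: int) -> int:
    """Number of distinct levels a channel can store at a given bit depth."""
    return 2 ** bit_depth

# 8 bits per channel gives 256 levels; three 8-bit channels
# (24-bit RGB) give 256 ** 3 = 16,777,216 possible colors.
print(values_per_channel(8))       # 256
print(values_per_channel(8) ** 3)  # 16777216
print(values_per_channel(10))      # 1024 (common in HDR video)
```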
Color
Color model is the method for creating the color. For example, RGB uses a mixture of red, green, and blue to create colors, which is closest to how humans perceive color. RGB is the standard for recording and displaying video.
Color space defines the exact range of colors available and how each physical color maps to numbers within the context of the color model. There are many color spaces, often tied to a manufacturer or standards body; sRGB is the common standard for computer displays, while HD video typically uses Rec. 709.
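One small, concrete piece of what a color space standard defines is how to weight the R, G, and B components when computing brightness. The Rec. 709 luma coefficients can be sketched in Python:

```python
def rec709_luma(r: float, g: float, b: float) -> float:
    """RGB components in [0, 1] -> relative luminance, per Rec. 709 weights.

    Green dominates because human vision is most sensitive to it.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(rec709_luma(0.0, 0.0, 0.0))  # pure black -> 0.0
print(rec709_luma(1.0, 1.0, 1.0))  # pure white -> approximately 1.0
```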
Pixel Size
The physical size of the pixels. The smaller the pixels, the closer you can sit to the screen before individual pixels become visible; the larger the pixels, the further away you have to be.
Metadata (Digital Only)
Metadata is all the auxiliary data, which can include the camera used to record, the date and time of recording, the location, and any other relevant values.
Compression method (Digital Only)
Compression is an algorithm that reduces the amount of data to store or transfer while trying to maintain perceptible video quality. The better the compression, the longer you can record on the same storage.
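Real video codecs such as H.264 are far too complex to show here, but the basic idea of trading computation for data can be illustrated with run-length encoding, a toy lossless scheme that replaces repeated values with (value, count) pairs:

```python
def rle_encode(data: str):
    """Collapse runs of repeated characters into (char, run_length) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1  # extend the run while the character repeats
        runs.append((data[i], j - i))
        i = j
    return runs

# Ten characters collapse into four pairs:
print(rle_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```

Video frames are full of large uniform regions (sky, walls, static backgrounds), which is why this kind of redundancy removal pays off so well.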
We are a premiere audiovisual integration firm serving corporate, government, healthcare, house of worship, and education markets with easy-to-use solutions that drive success. Family-owned and operated from Appleton, WI for over 35 years.