The Basics Of Video Editing Part IV: Preparing And Encoding Your Video For Delivery

Today we have our fourth lesson on the basics of video editing and we’ll be taking a look at how you can export your edits to various formats using both Adobe Premiere Pro CS5 and Final Cut Pro Studio 3. We’ll also take a look at designing video encoding specs so you can make your own. Come on in and let’s get started!

The lesson is in the video; the notes for this lesson are below. They won’t replace the lesson, but you can use them as a sort of cheat sheet to refer back to as you’re trying things out in Final Cut or Premiere.

Exporting with Final Cut and Compressor

We’ve already looked at how to export your edit in Final Cut Pro in a couple of ways, and we go over those again in the video associated with this lesson, but there’s one more way we haven’t looked at: sending your video to Compressor, an encoding application that comes as part of the Final Cut Studio bundle. You do this by opening the sequence you want to send so it’s visible in your timeline, going to the File menu, then Send To, then choosing Compressor. The Compressor application will launch and you’ll see an entry containing your sequence. Below, you’ll find a list of presets you can drag onto your sequence’s entry. That same panel has a tab with preset destinations for the encoded video, and you can drag those onto your sequence’s entry too. When you’ve added all the presets you want, including any you created yourself, click the Submit button to submit the batch. Compressor will encode all your video and alert you when it’s done. When that happens, you’ll find all the video in the destinations you selected.

Exporting with Premiere Pro CS5 and Adobe Media Encoder

Adobe Premiere Pro CS5 works similarly to Compressor in that it has a partner application, Adobe Media Encoder, that handles its encoding. The difference is that you choose all your settings directly in Premiere and then add the job to the queue. You do this by selecting your various settings, whether they’re from a preset or your own creation, and clicking the Queue button at the bottom of the window. If you want to add more tasks to the queue, just repeat the export process and queue them up. You can either start the batch of encoding tasks directly from Adobe Media Encoder or just wait until it starts all by itself.

Understanding Encoding and Designing Your Own Encoding Specs

Encoding is very, very complex, but we’re going to talk about a few things so you can get to know how it works just a little bit and maybe create some of your own specifications when the presets just aren’t cutting it.

What Is Compression?
When you encode a video, you’re compressing it so it takes up less disk space. There are tons of different codecs that let you do this and many different file formats. For example, H.264 is a codec (and the main one we’ll be talking about in this section) and MOV is a file format. H.264 can encode your video, but you’ve probably seen it delivered as an MOV, AVI, or MP4 file. That’s because all these formats can serve as a content container for H.264 video. There’s no real significant difference between H.264 files with these various file types, so don’t worry too much about how to deliver. My preference is MP4, because pretty much everything can play it, but most video software can handle the other formats too.

Bit Rates
When you’re encoding video, you’re going to be dealing with bit rates. A bit rate is how much data is used for each second of video. Let’s say you have a video that was encoded at 1000kbps. Despite what it looks like, that doesn’t mean each second of video takes up 1000KB, but rather 125KB. In this case, kbps stands for kilobits per second, not kilobytes. There are eight kilobits in a kilobyte, so you can get kilobytes per second by dividing the number of kilobits by eight. If your video was exactly 94 seconds long and encoded at a bit rate of 1000kbps, it would be 11.75MB in size. This is all assuming that every second of video equals exactly 1000 kilobits, which is only the case if you encode at a constant bit rate (CBR). CBR encoding is generally used for streaming media to keep the flow of data as consistent as possible, but for progressively downloaded video (what you find on YouTube, Vimeo and most other video sharing sites) you’re better off encoding at a variable bit rate (VBR). VBR encoding can work in a couple of ways, but most encoders just have you specify a single number (in kilobits per second) as the average bit rate. This means that if you specified 1000kbps, each second of the video would be encoded at around 1000kbps. Some seconds of your video will be less complex than others, so the simpler ones won’t need all 1000kbps. The ones that require more will take more, and on average this results in higher-quality video without affecting the file size too much. There’s more to it than that, and we get into it more in the video associated with this lesson, but that’s the basic idea.
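If you want to sanity-check that arithmetic yourself, here’s a quick Python sketch (the function name is my own, just for illustration):

```python
def estimated_size_mb(bitrate_kbps, duration_seconds):
    """Rough file size for a video encoded at a given average bit rate."""
    kilobytes_per_second = bitrate_kbps / 8   # 8 kilobits per kilobyte
    total_kilobytes = kilobytes_per_second * duration_seconds
    return total_kilobytes / 1000             # treating 1000KB as 1MB

# The example from above: 94 seconds at 1000kbps.
print(estimated_size_mb(1000, 94))  # → 11.75
```

Remember this is only exact for CBR; with VBR the number you specify is an average, so the real file size will land in the same ballpark rather than exactly on it.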

Key Frames
One more thing you should know about encoding is key frames. Pretty much every codec you’ll use to export video for the web, DVD, Blu-ray, etc., will have key frames. Codecs that don’t use key frames are generally designed for editing purposes because they’re less processor-intensive and therefore make editing a bit faster. The DV codec is one such codec; compare it to H.264 and see how much faster it is when editing. So what are key frames? Key frames are the full picture. Let’s say you have 30 frames of video and the first and 30th frames are key frames. You can think of those frames like photos: all the detail of the video exists in that frame. Frames 2 through 29, however, do not have the full picture. Instead, frame 2 just contains the changes that have happened since frame 1, because frame 2 is not a key frame. How often do you need key frames? Less often than you’d think. One every six seconds is customary nowadays, but the more key frames you have, the easier it is to scrub through the video. The downside is that more key frames tend to result in lower-quality video. Why? Because it takes a lot of data to store a key frame, and if you only have 1000 kilobits (or whatever) per second, the more full frames you pack into that second, the less room you have to store the changes between them. If you use key frames too frequently, the encoder has to degrade the quality of both those changes and the key frames themselves to stick to the average bit rate. It may seem like more key frames would be better, but you generally just end up with a lot of lower-quality key frames and, therefore, lower-quality video.

How to Design an Encoding Specification
When designing a specification or a preset of your own, the first thing you need to figure out is how you’re delivering your video. If you’re delivering it on the web, you’re not going to want to target anything slower than the slowest broadband connection, because anything below that level isn’t really fast enough to handle video worth watching in the first place. (Well, unless it’s a mobile phone, but we’ve learned to be patient with those.) The slowest broadband connection you’re going to find is probably a 768kbps DSL connection. That’s the peak data rate, though, so if you’re thinking you should encode your video at 768kbps you’re setting yourself up for trouble. If you’re reading this, you’re paying for an internet connection. Does it always perform at the peak rate? Probably not. If you want people to be able to progressively download your video in real time, take the lowest target connection speed and reduce it by one third. For 768kbps, that’s 512kbps, so you want to encode your video at 512kbps. This is really only a useful bit rate for standard definition video, so you shouldn’t use it with anything larger than 640×360 or 640×480. Around 1000-1200kbps is a good target video bit rate for 720p files, and 1080p should be twice that, if not more. If you’re simply creating a source file to upload to YouTube, Vimeo, or some other video service, those bit rates can be much higher, because those sites will re-compress the file using their own standards. In that case, allocate bit rates closer to 3500-4000kbps for 720p and 8000-9000kbps for 1080p. Since your video is getting compressed a second time, this extra quality will make a difference in the final product people see when you upload it to a video sharing site. As far as audio goes, I like to use 192kbps for MP3 or AAC audio, although you’ll need to keep it to 160kbps if you’re encoding for an Apple device. Why? Good question.
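The rule of thumb above boils down to a one-liner, plus a small table of the suggested targets. Here it is in Python (the function name and table are my own summary of the figures in this lesson, not settings from any particular encoder):

```python
def streaming_bitrate_kbps(connection_kbps):
    """Target bit rate for real-time progressive download:
    the slowest connection you want to support, reduced by one third."""
    return connection_kbps * 2 // 3

# Suggested video bit-rate targets from this lesson, in kbps.
SUGGESTED_TARGETS_KBPS = {
    "SD (640x360 or 640x480)": 512,
    "720p for direct delivery": (1000, 1200),
    "720p source for YouTube/Vimeo": (3500, 4000),
    "1080p source for YouTube/Vimeo": (8000, 9000),
}

print(streaming_bitrate_kbps(768))  # → 512
```

If you’re targeting a different minimum connection, just feed it in: the same two-thirds rule applies.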

All of the suggestions mentioned in the last paragraph are based on the H.264 codec, but they should work well with pretty much any modern codec. The better the codec, the lower the bit rate can be. That means that 512kbps will look a lot better in H.264 than it will if you use a codec that’s not as good. Try encoding a file at 512kbps using both H.264 and the standard MPEG4 codec. You’ll see a difference, although it may be a little subtle.

The Most Important Thing You Should Know

Don’t screw anything up! Yeah, in a perfect world, right? The thing is, if you shoot crappy video your encode will look even crappier. Sometimes you’ll shoot crappy video and not even realise how crappy it is until you encode it and it looks like someone blurred out all the detail. If you ever feel like your video should look better and you suspect the encoding quality is at fault, try encoding a well-shot episode of television using the same settings; you’ll probably be surprised. A lot of us think our poorly-lit video looks pretty good at 1080p, but that’s because we’re getting quite a bit of detail at that resolution, and the source is at a really high bit rate, so detail is retained. The more you compress your video, the more detail is thrown out, so if your video isn’t well shot and lit you’re going to lose a lot more detail when encoding than you would if you’d just shot it properly in the first place. While it’s easier said than done, do the best you can to get the highest quality video you can before you bring it into post production. We can do some amazing things in post, but nothing you’ve learned this week is going to miraculously turn your crappy footage into a work of art. In fact, there’s almost nothing you can do to save bad footage, regardless of how good you are. You just can’t find detail that isn’t there. So before you sit down to create your amazing film, how-to video, or whatever, take the time to shoot it well with good light, or you’ll be kicking yourself when everything is over. It sucks to put a ton of work into something and then find out it looks terrible when you put it online or on a DVD, so do everything you can to make it look good so you don’t have to try to salvage it in post.

That’s all for our video editing lessons. Thanks for watching/reading! We’ll be following this up tomorrow with a look at some alternative editing software (primarily) for Windows and also provide some additional resources to help you learn more (if you want to). Then, on Monday, we’ll provide you with a complete guide and a PDF of all the notes so you have everything handy.

