
2015.10.16 More Custom DVR Stuff

posted Oct 16, 2015, 7:21 AM by Troy Cheek   [ updated Oct 16, 2015, 7:25 AM ]
I've occasionally talked about my homemade DVR solution.  It's basically a computer that records television for me, which is all a TiVo or any other box you get from your cable or satellite TV provider really is.  My software of choice is SageTV (http://forums.sagetv.com/forums/).  For a long time, I could not recommend this software because the company was bought out by Google and it was no longer available for sale.  If you wanted to see it in action, you had to have Google Fiber TV.  However, Google has recently given permission for the former owner of SageTV to open source the program.  The SageTV community has already started work on a package that will have all the functionality of the old program, with improvements on the way, of course.  If all you want is to record television on your computer, this is the program for you, especially since Microsoft has dropped Windows Media Center from the last couple of versions of Windows.

I, however, want to do more than just record television on my computer.  First of all, the recording is generally done in the now-ancient MPEG-2 file format.  This is what's used on DVD media.  It's also what digital OTA (over the air) and cable TV use.  It works very well for DVD quality 4:3 standard definition video meant to display on old CRT televisions.  It's not so good for 16:9 video, high definition, or LCD screens.  Problem one is the file size.  SD video runs about 3 GB per hour.  HD video can be as high as 12 GB an hour.  That takes up a lot of hard disk drive space (my first 500 GB dedicated drive could only hold about 40 hours of HD video).  And since I might want to watch the videos on my laptop or tablet computers, that limits how many hours I can take with me on the road.  Problem two is that even with a kick-ass computer, a large HD file has so much data that has to be sent through a network cable, decompressed, and displayed that playback sometimes gets choppy.
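The disk-space math works out roughly like this (a quick sketch; the per-hour figures are the approximate ones quoted above, not exact bitrates):

```python
# Rough recording-capacity math using the approximate figures above.
SD_GB_PER_HOUR = 3    # MPEG-2 standard definition, ~3 GB/hour
HD_GB_PER_HOUR = 12   # MPEG-2 high definition, up to ~12 GB/hour
DRIVE_GB = 500        # size of the first dedicated recording drive

def hours_of_recording(drive_gb: float, gb_per_hour: float) -> float:
    """How many hours of video fit on a drive at a given data rate."""
    return drive_gb / gb_per_hour

print(round(hours_of_recording(DRIVE_GB, HD_GB_PER_HOUR)))  # ~40 hours of HD
print(round(hours_of_recording(DRIVE_GB, SD_GB_PER_HOUR)))  # ~167 hours of SD
```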

H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC) in the MP4 file container to the rescue.  This is a much more recent file format.  Even at compression settings approaching "no noticeable difference," it creates files that are usually much less than 1 GB per hour for SD video.  For HD video, well, I do most of my viewing on a small screen, so I don't really need HD video.  As long as I'm converting the file anyway, I can take the opportunity to scale it down to SD resolution.  And some video has black bars where 16:9 content is shown on 4:3 screens and vice versa.  Might as well crop those out while we're at it.
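The crop-and-scale part can be expressed as a single FFmpeg filter chain.  Here's a sketch that turns a source frame size plus detected black-bar sizes into a `crop`/`scale` filter string (those two filter names are real FFmpeg filters; the 480-line target and the helper itself are my own illustration, not a specific tool's API):

```python
def crop_scale_filter(src_w: int, src_h: int,
                      bar_left: int, bar_right: int,
                      bar_top: int, bar_bottom: int,
                      target_h: int = 480) -> str:
    """Build an FFmpeg -vf string that crops away detected black bars,
    then scales the result to a target height (SD by default), keeping
    the cropped aspect ratio and rounding the width to an even number
    (H.264 encoders want even dimensions)."""
    w = src_w - bar_left - bar_right
    h = src_h - bar_top - bar_bottom
    out_w = round(w * target_h / h / 2) * 2
    return f"crop={w}:{h}:{bar_left}:{bar_top},scale={out_w}:{target_h}"

# e.g. a 1920x1080 recording with 140-pixel bars top and bottom,
# scaled down to 480 lines:
print(crop_scale_filter(1920, 1080, 0, 0, 140, 140))
# crop=1920:800:0:140,scale=1152:480
```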

There are a lot of programs that can do that stuff, but why sit in front of a computer and tell it how to crop, scale, and convert each and every file?  Isn't the whole point of computers to make my life easier?  So I've done my best to automate the whole process:
  1. A program called Directory Monitor 2 (DirMon2) watches my video directories for new files.  When it finds one, it fires off a series of other programs that do specific things to the file.
  2. A program extracts the closed caption data (CCExtractor) from the MPEG-2 file and sticks it in a text file.  This data would otherwise get lost when the file is converted from MPEG-2 to H.264 as the MP4 file container does not support it.  I could use MKV or some other file container which does support captions or subtitles, but I prefer MP4 because it plays on all my devices.
  3. A program scans the video and marks commercials for me (comskip).  This creates another text file that most of my players can read.  It also creates a log file that I can scan to find information about aspect ratios, black bars, and the like.
  4. I scan the video with a special tool called ffprobe which tells me things like resolution, frame rate, total length in seconds, etc.  I use this information to make a "sanity check" at the end of the process.
  5. Using data from Step 3, I use a video conversion program (HandBrake) to do a test conversion of 10 minutes or so of the video file, making sure to choose 10 minutes of actual program content, not commercials.  The results are saved to a log file.  With this log file, I can double check the black bars and also check to see if the video is interlaced.  MPEG-2 handles both interlaced and progressive video, but there's nothing in the file that says which is which.  On an old CRT television or monitor, interlaced looks fine.  On an LCD, you can see the lines.  You can deinterlace when you convert, but if the video isn't interlaced to begin with, that makes it look worse.
  6. Using another video conversion program (FFmpeg) I strip out the audio and place it in another file, downmixing it to stereo if necessary.  Some audio is in 5.1 surround sound, which some of my devices can't play.  And it's all too quiet, which leads to...
  7. Using an automated dynamic range compressor program (Wave Booster CLI) I massage the audio to boost the quiet parts and damp down the loud parts.  They apparently use trained monkeys to do sound mixing nowadays.
  8. Using FFmpeg again, I take the video from the original MPEG-2 file and the audio from the massaged file and convert the whole thing to an H.264 MP4 file, doing all the scaling down of HD video, the cropping of black bars, and deinterlacing (if necessary) at the same time.
  9. After using ffprobe again to make sure the converted file is a working video file of the same length as the original, I delete said original, freeing up GBs of disk space.
  10. Directory Monitor checks for new video files and starts the whole process all over again.
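The steps above can be sketched as one orchestration script.  The tool names and the flags shown (CCExtractor's `-o`, comskip's bare invocation, ffprobe's `-show_entries format=duration`, FFmpeg's `-map`/`-vn`/`-ac`) are real, but the file paths, the filter chain, and the overall wiring are illustrative guesses at the setup, not the author's actual configuration:

```python
"""Sketch of the MPEG-2 to H.264 conversion pipeline described above."""
import subprocess
from pathlib import Path

def extract_captions_cmd(mpg: Path) -> list[str]:
    # Step 2: pull closed captions into a subtitle file before they are
    # lost in the MPEG-2 to MP4 conversion.
    return ["ccextractor", str(mpg), "-o", str(mpg.with_suffix(".srt"))]

def mark_commercials_cmd(mpg: Path) -> list[str]:
    # Step 3: comskip writes its commercial marks next to the recording.
    return ["comskip", str(mpg)]

def probe_duration_cmd(mpg: Path) -> list[str]:
    # Steps 4 and 9: ask ffprobe for the length in seconds, bare value only.
    return ["ffprobe", "-v", "error", "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1", str(mpg)]

def extract_audio_cmd(mpg: Path) -> list[str]:
    # Step 6: strip the audio (-vn = no video), downmixing to stereo (-ac 2).
    return ["ffmpeg", "-i", str(mpg), "-vn", "-ac", "2",
            str(mpg.with_suffix(".wav"))]

def convert_cmd(mpg: Path, wav: Path, vf: str) -> list[str]:
    # Step 8: video from the MPEG-2 file, audio from the massaged WAV,
    # deinterlace/crop/scale in one -vf chain, out to an H.264 MP4.
    return ["ffmpeg", "-i", str(mpg), "-i", str(wav),
            "-map", "0:v", "-map", "1:a",
            "-c:v", "libx264", "-vf", vf, "-c:a", "aac",
            str(mpg.with_suffix(".mp4"))]

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    src = Path("recording.mpg")  # illustrative path
    for cmd in (extract_captions_cmd(src), mark_commercials_cmd(src),
                probe_duration_cmd(src), extract_audio_cmd(src),
                convert_cmd(src, src.with_suffix(".wav"),
                            "yadif,scale=-2:480")):
        print(" ".join(cmd))  # dry run: print each step instead of running it
```

In the real setup, DirMon2 plays the role of the loop here, firing the chain whenever a new file appears in the watched directory.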

There might be some question as to why I use both HandBrake and FFmpeg, since they both basically do the same thing.  Actually, they can each do things the other can't.  HandBrake has a decomb filter.  This scans each frame and looks for the effects of interlacing, applying the proper filters to only the parts of the picture that actually need it.  This gives better results than just straight converting an interlaced file or applying a blanket deinterlacing filter.  Unfortunately, HandBrake has no way to boost, level, normalize, compress, or otherwise manipulate the audio portion of the video.  HandBrake can't even take video from one file and audio from another.  FFmpeg can combine multiple sources.  (In fact, FFmpeg recently added code from a Dynamic Audio Normalizer project very similar to the old and no longer supported Wave Booster CLI, so I may be able to cut out a step or two.)  I use HandBrake's decomb filter to scan the video and see how interlaced it is, then decide whether or not to run FFmpeg's standard deinterlace filter.  Future plans include using HandBrake to decomb the video, then using FFmpeg to combine that video with the normalized audio.  I'm hoping to convince the FFmpeg people to include the decomb video filter, or the HandBrake people to include the normalization audio filter, allowing me to cut out a few steps.
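That "scan first, then decide" step could look something like the sketch below.  The log-line format in the regex is an assumption (HandBrake's exact log wording varies by version), and the 5% threshold is an arbitrary example, but the decision logic matches the idea: only run a blanket deinterlace when a meaningful fraction of the test-encode frames were actually combed.

```python
import re

# ASSUMED log format; adjust the pattern to whatever your HandBrake
# build actually writes about the decomb filter in its activity log.
DECOMB_RE = re.compile(r"decomb:.*?(\d+)\s+frames")

def combed_frame_count(log_text: str) -> int:
    """Pull the decomb filter's touched-frame count out of a test-encode log."""
    m = DECOMB_RE.search(log_text)
    return int(m.group(1)) if m else 0

def should_deinterlace(combed: int, total: int, threshold: float = 0.05) -> bool:
    """Deinterlace only if enough sampled frames were actually interlaced;
    deinterlacing progressive video just makes it look worse."""
    return total > 0 and combed / total >= threshold

sample_log = "decomb: filtered 1800 frames out of 18000"
print(should_deinterlace(combed_frame_count(sample_log), 18000))  # True
```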
