Self-Adaptive Video Encoder: Comparison of Multiple Adaptation Strategies Made Simple

Martina Maggio, Alessandro Vittorio Papadopoulos, Antonio Filieri, Henry Hoffmann

Abstract

  • This paper introduces the case study of an adaptive video encoder that can be used to compare the behavior of different adaptation strategies, where multiple actuators steer the software towards a global goal composed of multiple conflicting objectives. A video camera produces frames that the encoder manipulates with the objective of matching a space requirement, so that the output properly fits a given communication channel. A second objective of the encoder is to maintain a given similarity index between the manipulated frames and the original ones. To achieve the given goal, the software can change three parameters: the quality of the encoding, the noise-reduction filter radius, and the sharpening filter radius. The two objectives are likely to conflict, since a larger frame has a higher similarity index to its original counterpart. This makes the problem difficult from the control perspective and makes the case study appealing for comparing different adaptation strategies.

Additional Material

Getting Started Guide

On an Ubuntu LTS distribution, you need to install some packages. You can do so using the following commands:

  • sudo apt-get install mplayer
  • sudo apt-get install imagemagick
  • sudo apt-get install python-imaging
  • sudo apt-get install python-numpy
  • sudo apt-get install python-scipy
  • sudo apt-get install python-matplotlib
  • sudo apt-get install python-cvxopt
  • sudo apt-get install texlive-base
  • sudo apt-get install texlive-pictures
  • sudo apt-get install texlive-latex-extra
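The individual commands above can equivalently be combined into a single apt-get invocation, sketched below. The package names are exactly those listed above; note that on newer Ubuntu releases some of the python- packages may have been renamed to python3- variants.

```shell
# Install all dependencies in one command (equivalent to the list above).
sudo apt-get install mplayer imagemagick \
    python-imaging python-numpy python-scipy python-matplotlib python-cvxopt \
    texlive-base texlive-pictures texlive-latex-extra
```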

Step-by-step Instructions

  • Copy any mp4 video into the folder mp4.
    For example, download a video from YouTube using a service like this one. The video used for the experiments is available here. The repository already contains a shortened version of this video and works out-of-the-box. However, the plots shown in Figure 3 will not be reproduced with the shortened version of the video; to get them, you need to download the full video.
  • Execute a generic test by running
    • ./run.sh controller desired_ssim desired_framesize
    • Example 1: ./run.sh random 0.2 45200
      Runs the code to generate the output of Figure 1. The figure shown in the paper is a zoomed-in version of the first 500 frames.
    • Example 2: ./run.sh bangbang 0.7 10000
      Runs the code to generate the output of Figure 2. The figure shown in the paper is a zoomed-in version of the first 500 frames.
    • Example 3: ./run.sh mpc 0.7 8000
      Runs the code to generate the output of Figure 3a. The figure shows the full 40000 frames (not only the first part). It is very likely that LaTeX will run out of memory when generating the full-video figures; in that case, you can add an option to print only every nth point.
  • The test results will be available in the directory results, under a folder named after the video and a subfolder named after the method and the two given setpoints. Both the csv data and a pdf figure representing it will be available. The figure is not zoomed in (it is constructed to be general); the tikz code that generates it (found in the file code/latex/figure.tex) can be modified to zoom in on certain areas.
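Based on the layout described above, the location of a run's outputs can be sketched as follows. The exact folder-name format (here assumed to join the method and the two setpoints with underscores) and the output file names are assumptions for illustration and may differ in the actual repository.

```shell
# Parameters of an example run (Example 3 above).
controller=mpc
ssim=0.7
framesize=8000
video=myvideo   # hypothetical name of the video copied into the mp4 folder

# Hypothetical results layout: results/<video>/<method>_<ssim>_<framesize>
outdir="results/${video}/${controller}_${ssim}_${framesize}"
echo "${outdir}"
```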

Using the Virtual Machine

  • In the virtual machine, everything is already pre-installed. Unpack the VM from the tar.gz archive: on Linux, use tar; on Mac OS, the default compression software works. The VM has been tested using VirtualBox. Open a terminal and move to the folder save (a subfolder of the Desktop folder) using “cd Desktop/save”.
  • You can then invoke the commands to execute tests as explained above.
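Putting the VM steps together, a typical session inside the guest looks like the following sketch (the invocation reuses Example 2 from the step-by-step instructions; it only runs inside the provided VM):

```shell
# Inside the VM guest: move to the artifact folder on the Desktop...
cd Desktop/save
# ...then run any of the example experiments, e.g.:
./run.sh bangbang 0.7 10000
```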