PART 1: Theory Questions (20 points)
Q.1 Suppose a camera has 450 lines per frame, 520 pixels per line, and a 25 Hz frame rate. The color subsampling scheme is 4:2:0 and the pixel aspect ratio is 16:9. The camera uses interlaced scanning, and each sample of Y, Cr, and Cb is quantized with 8 bits.
What is the bit-rate produced by the camera? (2 points)
Suppose we want to store the video signal on a hard disk and, in order to save space, we re-quantize each chrominance (Cr, Cb) signal with only 6 bits per sample. What is the minimum size of the hard disk required to store 10 minutes of video? (3 points)
Q.2 The following sequence of real numbers has been obtained by sampling an audio signal: 1.8, 2.2, 2.2, 3.2, 3.3, 3.3, 2.5, 2.8, 2.8, 2.8, 1.5, 1.0, 1.2, 1.2, 1.8, 2.2, 2.2, 2.2, 1.9, 2.3, 1.2, 0.2, -1.2, -1.2, -1.7, -1.1, -2.2, -1.5, -1.5, -0.7, 0.1, 0.9. Quantize this sequence by dividing the interval [-4, 4] into 32 uniformly distributed levels (place level 0 at -3.75, level 1 at -3.5, and so on; this should simplify your calculations).
Write down the quantized sequence. (4 points)
How many bits do you need to transmit it? (1 point)
Q.3 Temporal aliasing can be observed when you attempt to record a rotating wheel with a video camera. In this problem, you will analyze such effects. Assume a car is moving at 36 km/hr and you record it on film, which traditionally records at 24 frames per second. The tires have a diameter of 0.4244 meters, and each tire has a white mark to gauge the speed of rotation.
If you are watching this projected movie in a theatre, what do you perceive the rate of tire rotation to be in rotations/sec? (5 points)
If you use your camcorder to record the movie in the theater and your camcorder records at one third of the film rate (i.e., 8 fps), at what rate (rotations/sec) does the tire rotate in your video recording? (5 points)
If you use an NTSC camera at 30 fps, what is the maximum speed at which the car can travel so that you see no aliasing in the recording?
What should you submit for part 1?
No written pages please, only digital submissions. Please submit scanned documents or photographs of hand-written notes; Word/PDF documents of your working are also fine.
PART 2: Programming Questions (140 points)
This part will help you gain a practical understanding of Resampling and Filtering and how it affects visual media types like images and video.
Firstly, you need to be able to display images in the RGB format that we will give you for testing. We have provided Microsoft Visual Studio C++ and Java projects that read a given image and display it correctly. This source has been provided as a reference for students who may not know how to read and display images; you are free to use it as a starting point or write your own. For this assignment you are required to use a non-scriptable language (such as C/C++ or Java; no Python, no MATLAB) in which you implement the operations yourself rather than rely on libraries.
Your program will take four parameters, where:
The first parameter is the name of the image, which will be provided in an 8 bits per channel RGB format (24 bits per pixel in total). You may assume that all images will be of the same size for this assignment; more information on the image format will be placed on the DEN class website.
The second parameter will be a mode, with an input of 1 or 2. Depending on its value, you will process the image differently. In mode 1 you will rescale/resample your image; mode 2 represents an interactive application. Both modes are described in more detail below.
The third parameter is a floating-point value indicating by how much the image has to be scaled, such as 0.5 or 1.2. This single number scales both width and height, resulting in a re-sampling of your image.
The fourth parameter will be a boolean value (0 or 1) indicating whether you want to deal with aliasing. A 0 signifies do nothing (aliasing will remain in your output) and a 1 signifies that anti-aliasing should be performed.
To invoke your program, we will compile it and run it at the command line as
YourProgram.exe C:/myDir/myImage.rgb M S A
where M S A are the parameters as described above. Example inputs are shown below and this should give you a fair idea about what your input parameters do and how your program will be tested.
1. YourProgram.exe image1.rgb 1 1.0 0
Here you are in mode 1 (scaling) and the input image will be scaled by 1.0 with no anti-aliasing performed. This effectively means that your output will be the same as your input.
2. YourProgram.exe image1.rgb 1 0.5 1
Here you are in mode 1 (scaling) and the input image will be scaled by 0.5 with anti-aliasing performed. This should reduce your image to half its size in width and height, removing any aliasing effects. Results are shown below.
Details for Mode 1:
In this mode, you will be scaling (up or down) the image depending on the scale value. Your output image dimensions need to change accordingly. You will need to fill in the pixel values of the output image by appropriately mapping them to the input image.
If no anti-aliasing is performed (value = 0), then your program should simply copy the value of the mapped pixel from input to the output.
If anti-aliasing is performed (value = 1), then your program should copy a weighted filtered value of the neighborhood around the input sample to the output location. The filtered lookup will depend on the filter you are trying to implement, for instance an averaging filter.
Details for Mode 2:
In this mode you are going to implement an interactive scaled or magnified lookup around your mouse location. As you move your mouse pointer over the image, you should display an appropriately scaled circular area with a radius of 100 pixels, showing the magnified region at the same contrast level. The rest of the image should be shown at a lower contrast. The scaling and anti-aliasing parameters should be respected accordingly for the magnified region. Below are two outputs generated for two different mouse positions when invoked as
YourProgram.exe image1.rgb 2 2.0 1
You can observe that the circular region of interest below the mouse location has been magnified by 2.0 within a radius of 100 pixels, while the remaining image has decreased contrast.
What should you submit for part 2?
Your source code, and your project file or makefile, if any, using the submit program. Please do not submit any binaries or images. We will compile your program and execute our tests accordingly.
If you need to, please include a readme.txt file with any special instructions for compilation.
PART 3: Analysis Question (40 points)
In this analysis part, you will study how a lack of samples affects reconstruction errors and image quality. Given an image as shown below, you can randomly remove x% of the samples. These missing samples may then be estimated from the neighborhood of good samples to create a corrected image. An example is shown below where 10% of the samples have been removed at random locations and given a black color (r = g = b = 0).
These missing samples can be filled in by using the valid neighborhood samples around each missing sample – by weighted averaging, interpolation, or other means. An example is shown below. While the difference is not obviously visible, zooming in on a missing region or quantitatively taking differences will show the error.
Write a process that can randomly remove (or set to 0) x percent of the samples and recompute new values for each using valid neighborhood samples. Once a reconstructed image is obtained, find the error in the reconstruction. This might be computed as the sum of absolute differences or the sum of squared differences between all the pixels of the original and reconstructed images. Plot a graph that shows this reconstruction error, with the X axis showing the percentage of missing samples x and the Y axis showing the error in reconstruction. Plot the values for suitable values of x as x varies from 0 to 50% of the samples. Your graph should show the reconstruction error from nothing missing to 50% of the samples missing.
Answer the following questions:
For each of the given images, plot a graph of the reconstruction error. Which image has higher errors, and which image has lower errors?
From your quantitative analysis, can you qualitatively describe which image will have higher error and which will have lower error? In other words, what image properties make such reconstruction errors go higher or lower?
Knowing that the samples lost or removed are always random, can you give a formula or a methodology to predict, given a specific image and a number x as input, what the reconstruction error might be?
How well does your formula or method work? Show your analysis of your predicted output with the actual computed output for different images.
What should you submit for part 3?
Do NOT submit any code for this section, even though you may have modified your program or written additional functions to compute what is needed.
Only submit output image results, graphical plots showing reconstruction error and any other quantitative analysis or formulas. As with part 1, please submit only electronic documents.