Computational Vision Coursera Quiz Answers Certificate Free

In this article we discuss all of the quizzes and assessments from the free Coursera Computational Vision course, and we have also included two sample papers. For aspiring students, this course serves as a credential.

If you cannot find this course offered for free, you can apply for financial aid to take it at no cost.

Coursera, the learning portal, offers free courses to millions of students every day. These courses come from many reputable colleges, where professors and industry specialists teach in a clear, high-quality way.

Computational Vision Course Information

We will examine models that address diverse vision challenges in this course and develop our understanding of vision as a cognitive problem space. The limits of these challenges will then be investigated, along with how these explorations lead to a more complex analysis of the mind and brain and more complicated computational models of knowledge.

Join Coursera 

Sample Set #1

Computational Vision Quiz Answers

Week 1 Quiz Answers

Quiz 1: Vision Overview

Q1. The human eye, to a first approximation, works like which of the following?

  • A notebook
  • A pinhole camera
  • A recording device
  • A sketch artist

Q2. The “piano-in-the-mirror” example illustrates which of the following ideas?

  • It is impossible in principle to recover a unique three-dimensional structure from a two-
    dimensional projection.
  • The eye’s retina is unreliable in its operation.
  • Mirrors pose a particular challenge to the human vision system.
  • Optical illusions reveal the strengths of the human vision system.

Q3. Which of these does not represent a potential complicating factor for our “pixel-array”
portrait of the retina?

  • There are many wavelengths that the eye is not responsive to.
  • Color vision can provide useful information for interpreting an image.
  • The fovea has higher resolution than the periphery of the retina, so our “evenly
    distributed array” portrait is inaccurate.
  • Binocular vision can provide useful information for interpreting an image.

Q4. As a very first step in treating vision as a computational problem, we can think of a retinal
image as:

  • A photograph.
  • A small copy of the object being attended to.
  • An array of pixels, where each pixel denotes a light intensity value.
  • A line sketch of the object being attended to.

Q5. Despite the simplicity of our first model of vision – interpreting black and white photos – it
is not entirely unfair because:

  • Binocular vision is generally of little use.
  • It is, after all, a task that we as human beings are capable of.
  • Most scenes in real life do not involve information such as motion.
  • Many animals have limited color vision.

Q6. Optical illusions are useful tools for studying the computational view of vision because:

  • They are entertaining illustrations of how odd the visual world is.
  • They highlight “gaps” in our vision algorithms – situations where the algorithms give the wrong answers.
  • They show that we are not as good at “seeing” as we think.
  • They show that our color vision is faulty.


Week 2

Quiz 1: Edges

Q1. When considering an image as a 2D array of pixels, what denotes an edge?

  • A line of pixels with similar intensity.
  • A high average intensity level among surrounding pixels.
  • A very high intensity pixel.
  • A line of pixels with adjacent pixels with very different intensity levels (high and low
    values).

Q2. If our convolution function is centered on a low-intensity (dark) pixel along a dark-to-light
transition, what kind of value will the function output?

  • A positive number.
  • A negative number.
  • We don’t have enough information to determine the answer.
  • 0

Q3. Which statement best represents the relationship between human vision and the
convolution function approach to edge detection?

  • Photoreceptors are sensitive to transitions between high and low intensity of light, like a
    convolution function, rather than just intensity, like a pixel.
  • The retinal ganglion cells perform a similar function to the convolution function, looking
    for differences in signal from an area of photoreceptor cells.
  • Retinal ganglion cells do not use a convolution approach, instead, they take an average
    of inputs from local photoreceptors then poll nearby ganglion cells to look for
    differences.
  • The convolution approach is not a good representation of human vision.
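Q2's answer can be sanity-checked numerically: center a center-surround kernel (like the one used in the Week 4 convolution problem) on the dark side of a dark-to-light step, and the output is negative, because the brighter neighbors are weighted by -1 while the dark center contributes nothing. A small NumPy sketch with illustrative values, not taken from the course:

```python
import numpy as np

# A dark-to-light step edge: columns of 0s (dark) then 1s (light).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])

# Center-surround kernel: 8 in the middle, -1 all around.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
])

# Center the kernel on the dark pixel at row 1, column 1
# (the last dark column before the transition).
patch = image[0:3, 0:3]
response = int(np.sum(patch * kernel))
print(response)  # -3: a negative value, as the quiz answer says
```

The sign flips if the kernel is instead centered on the bright side of the same transition, which is exactly the asymmetry an edge detector exploits.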

Quiz 2: Geons

Q1. Suppose, as a rough estimate, we say that there are 20 distinct geons used for object
recognition; and each geon can come in 5 classifiable qualitative sizes (tiny, small, moderate,
large, huge); and a pair of geons can be placed in 10 distinct qualitative relations (geon A on
top of geon B; geon A to upper left of geon B; geon A to the left of geon B; and so forth).

How many distinct two-geon objects do we have in the space described above?


Enter answer here

Q2. Now, suppose we add a third geon, geon C. Again, each geon comes in 20 varieties and
5 sizes. We’ll start by creating a two-geon pair of A and B just like in Question 1 above; then,
we decide which of A or B the third geon (C) will be adjacent to, and then we place geon C
beside either A or B in one of the 10 allowed relations. How many distinct three-geon objects
do we have in this space?

Enter answer here
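Both geon counts follow from the multiplication principle. The figures below are my own arithmetic from the assumptions stated in the questions, not official course answers:

```python
geon_types = 20
sizes = 5
relations = 10

# One geon can appear in 20 * 5 = 100 type-size combinations.
variants = geon_types * sizes

# Q1: choose geon A, choose geon B, choose the relation between them.
two_geon = variants * variants * relations
print(two_geon)    # 100000

# Q2: build a two-geon pair as in Q1, pick which of A or B the third
# geon C attaches to (2 choices), choose C's variant, choose the relation.
three_geon = two_geon * 2 * variants * relations
print(three_geon)  # 200000000
```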

Q3. Are these numbers (as an estimate of the “dictionary size” of potential two-geon and
three-geon objects) significantly bigger, significantly smaller, or comparable to the actual size
of our object vocabulary in English as discussed in lecture?

  • much bigger
  • much smaller
  • about the same

Week 3

Quiz 1: Mental Imagery

Q1. Explain the significance of the following experiment (described in lecture and/or the
reading from Finke) to the debates surrounding mental imagery. Does the experiment better
support the “symbolist” camp or “visualist / pictoralist” camp as discussed in lecture?

Shepard-Metzler “mental rotation” experiment.

  • Symbolist
  • Visualist

Q2. Ambiguous figure (duck/rabbit) mental imagery experiment

  • Symbolist
  • Visualist

Q3. Finke experiment (mental imagery and resolution of grids of lines)

  • Symbolist
  • Visualist


Q4. Mental imagery and the McCollough effect


  • Symbolist
  • Visualist

Week 4

Quiz 1: Convolution Problem

Q1. Consider the following matrix representation of a 4 pixel by 4 pixel black and white image, which we will call A:

  • 1 1 0 1
  • 0 1 0 1
  • 1 1 1 0
  • 1 1 0 0

And the edge detection matrix B:

  • -1 -1 -1
  • -1 8 -1
  • -1 -1 -1

If we convolve matrix A with matrix B, what are the values in the resulting matrix?

For this answer, write your answer in the form:

A, B, C, D

Where each letter is replaced with the numeric value that would be found in this matrix representation:

  • A B
  • C D
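The answer can be checked with a short NumPy sketch. Because B is symmetric, correlation and true convolution (which flips the kernel) give the same result, and sliding a 3×3 kernel over a 4×4 image yields a 2×2 "valid" output:

```python
import numpy as np

# The 4x4 image from the question.
A = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
])

# The edge detection kernel from the question.
B = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
])

# Slide B over every position where it fits entirely inside A.
out = np.zeros((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        out[i, j] = np.sum(A[i:i+3, j:j+3] * B)

print(out)  # [[ 3 -6]
            #  [ 3  4]]
```

So in the A, B, C, D answer format, the values are 3, -6, 3, 4.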

Sample Set 2

Week 1: Computer Vision Basics Course Certification Answers: Coursera

Question 1: Computer vision includes which of the following?

  • Automatic extraction of features from images
  • All are correct
  • None are correct
  • Understanding useful information
  • Analysis of images

Question 2: The image acquisition devices of computer vision systems capture visual information as digital signals?

  • True
  • False

Question 3: Correct syntax to read an image (in the current folder) in MATLAB

  • var_image = imread('my_image.jpg')
  • var_image = imread('my_image')

Question 4: Select the correct option to crop the top-left 50×50 section from the image read below.

var_image = imread('my_image.jpg')

  • cropped_section = var_image(0:50,0:50)
  • cropped_section = var_image(1:50,1:50)
  • cropped_section = var_image[0:50,0:50]
  • cropped_section = var_image[1:50,1:50]

Question 5: What is the initial data type of an image read through MATLAB's imread function?

  • int8
  • double
  • uint8

Question 6: What is the effect of the following code?

I1 = imread('my_image.jpg')

I2 = im2double(I1)

  • Scales the image intensity values from 0 to 1
  • Converts the image from uint8 to double format
  • The array dimensions remain same

Question 7: Select the options which correctly assign the height and width of an image in MATLAB.

var_image = imread('my_image.jpg')

  • [height,width] = size(var_image);
  • image_dimension = size(var_image);
    • height = image_dimension(1)
    • width = image_dimension(2)
  • [width,height] = size(var_image);
  • image_dimension = size(var_image);
    • width = image_dimension(1)
    • height = image_dimension(2)

Question 8: Accessing Image Sub-Regions

img = imread('cameraman.tif');

% Top-left and bottom-right 50x50 sub-regions
subimg1 = img(1:50, 1:50);
subimg2 = img(end-49:end, end-49:end);

% Sum of squared differences; cast to double to avoid uint8 saturation
SSD = sum(sum((double(subimg1) - double(subimg2)).^2));

% Equivalent one-liner: mean squared error times the number of pixels
% SSD = immse(subimg1, subimg2) * numel(subimg1);

disp(SSD);
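For readers more comfortable with Python, the same SSD computation can be sketched with NumPy. A synthetic random array stands in for cameraman.tif, and the final line checks the immse identity used above (SSD = MSE × number of pixels):

```python
import numpy as np

# Synthetic stand-in for a grayscale image (cameraman.tif is 256x256 uint8).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# Top-left and bottom-right 50x50 sub-regions, cast to float
# so the subtraction cannot wrap around in uint8 arithmetic.
subimg1 = img[:50, :50].astype(np.float64)
subimg2 = img[-50:, -50:].astype(np.float64)

# Sum of squared differences between the two regions.
ssd = np.sum((subimg1 - subimg2) ** 2)

# Same quantity via mean squared error times pixel count.
ssd_via_mse = np.mean((subimg1 - subimg2) ** 2) * subimg1.size
print(ssd, ssd_via_mse)
```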

Week 2: Computer Vision Basics Course Certification Answers: Coursera

Question9: Which of the following are area sources?

  • Bulb
  • All of these
  • Sun at infinity
  • Diffuser boxes
  • White walls

Question 10: Does the distance of the light source affect the color of a pixel?

  • No
  • Yes

Question 10: We lose depth information in perspective projection.

  • True
  • False

Question 11: Match column A with the correct options in column B

Column A:
1) Shutter speed
2) Exposure
3) Aperture
4) Aliasing

Column B:
a) Amount of light per unit area reaching the image sensor
b) An effect that causes different signals to become indistinguishable when sampled
c) The length of time the sensor is exposed to light when taking a photograph
d) A hole or an opening through which light travels

Answer: 1-c, 2-a, 3-d, 4-b

Question 12: Color Imaging – RGB Channels

% Read the image
img = imread('image.jpg');

% Get the size (rows and columns) of the image
[r, c] = size(img);
rr = r/3;

% Write code to split the image into three equal parts and store them in the B, G, R channels
B = imcrop(img, [1, 1, c, rr-1]);
G = imcrop(img, [1, 1*rr+1, c, rr-1]);
R = imcrop(img, [1, 2*rr+1, c, rr]);

% Concatenate the R, G, B channels and assign the RGB image to the ColorImg variable
ColorImg(:,:,1) = R;
ColorImg(:,:,2) = G;
ColorImg(:,:,3) = B;

imshow(ColorImg)
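The same split-and-stack logic can be sketched in Python with NumPy. A synthetic array stands in for image.jpg; note that MATLAB's imcrop takes a rectangle [xmin ymin width height], whereas NumPy slices rows directly:

```python
import numpy as np

# Synthetic stand-in for a vertically stacked B/G/R grayscale plate:
# three 100x80 channels stacked top to bottom into a 300x80 image.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(300, 80), dtype=np.uint8)

# Split the image into three equal horizontal strips: B, G, R.
rows = img.shape[0] // 3
B = img[:rows, :]
G = img[rows:2*rows, :]
R = img[2*rows:3*rows, :]

# Stack along a third axis in R, G, B order to form a color image.
color_img = np.dstack([R, G, B])
print(color_img.shape)  # (100, 80, 3)
```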

Week 3: Computer Vision Basics Course Certification Answers: Coursera

Three-Level Paradigm

Question 13: Match column A with the correct options in column B

Column A:
1) Computational theory
2) Representation and algorithm
3) Implementation

Column B:
a) Steps for computation
b) Physical realization of algorithms, programs, and hardware
c) What the device is supposed to do

Answer: 1-c, 2-a, 3-b

Question 14: Low-level vision consists of:

1) feature detection and matching

2) early segmentation

  • 1
  • 1 and 2
  • 2
  • None

Question 15: Image Gradient Magnitude

img = imread('cameraman.tif');

[Gx, Gy] = imgradientxy(img);
[Gmag, Gdir] = imgradient(Gx, Gy);

% Uncomment the line below to visualize Gx and Gy
% imshowpair(Gx, Gy, 'montage')

% Uncomment the line below to visualize Gmag and Gdir
% imshowpair(Gmag, Gdir, 'montage')
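A rough Python equivalent of the gradient computation is below. NumPy's np.gradient uses central differences rather than MATLAB's default Sobel kernels, so the exact numbers differ, but the magnitude-and-direction recipe is the same:

```python
import numpy as np

# Synthetic stand-in for a small grayscale image.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8)).astype(np.float64)

# Finite-difference gradients: axis 0 is rows (y), axis 1 is columns (x).
Gy, Gx = np.gradient(img)

# Gradient magnitude and direction (degrees) at every pixel.
Gmag = np.sqrt(Gx**2 + Gy**2)
Gdir = np.degrees(np.arctan2(Gy, Gx))
print(Gmag.shape, Gdir.shape)  # (8, 8) (8, 8)
```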

Week 4: Computer Vision Basics Course Certification Answers: Coursera

Question 16: Match the algorithms in column A with the correct techniques in column B

Column A:
1) Dynamic programming
2) Graph algorithms
3) Dynamic programming
4) Graph algorithms

Column B:
a) Binary image restoration
b) Stereo matching
c) Seam carving
d) Image segmentation

Answers: 1-b, 2-d, 3-c, 4-a

Question 17: Aligning RGB Channels (using SSD)

img = imread('course1image.jpg');
[height, width] = size(img);
oneThird = floor(height/3);

% The plate stacks the three channels vertically: blue, green, red
B = img(1:oneThird, :);
G = img((oneThird+1):(2*oneThird), :);
R = img((2*oneThird+1):(3*oneThird), :);

% Top-left corner of a central 51x51 reference region in the green channel
% (floor keeps the indices integers)
c_x = floor(341/2) - 25;
c_y = floor(400/2) - 25;
ref_img_region = double(G(c_x:c_x+50, c_y:c_y+50));

% Find each channel's best offset against the reference, then shift it
red_offset = offset(double(R(c_x:c_x+50, c_y:c_y+50)), ref_img_region);
shifted_red = circshift(R, red_offset);

blue_offset = offset(double(B(c_x:c_x+50, c_y:c_y+50)), ref_img_region);
shifted_blue = circshift(B, blue_offset);

ColorImg_aligned = cat(3, shifted_red, G, shifted_blue);
% Alternative channel orderings, depending on the plate layout:
% ColorImg_aligned = cat(3, G, shifted_red, shifted_blue);
% ColorImg_aligned = cat(3, G, shifted_blue, shifted_red);
imshow(ColorImg_aligned);

% Find the minimum-SSD offset over a +/-10 pixel search window
function [output] = offset(img1, img2)
    MIN = inf;
    for x = -10:10
        for y = -10:10
            temp = circshift(img1, [x, y]);
            ssd = sum((img2 - temp).^2, 'all');
            if ssd < MIN
                MIN = ssd;
                output = [x, y];
            end
        end
    end
end
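The brute-force SSD search in the offset function translates directly to Python with NumPy. This sketch uses synthetic data to verify that the search recovers a known shift; the ±10 search window and (row, col) shift convention mirror the MATLAB code above:

```python
import numpy as np

def offset(img1, img2, search=10):
    """Brute-force search for the (row, col) shift of img1 that
    minimizes the sum of squared differences against img2."""
    best, best_shift = np.inf, (0, 0)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            # np.roll wraps around, like MATLAB's circshift.
            shifted = np.roll(img1, (dx, dy), axis=(0, 1))
            ssd = np.sum((img2 - shifted) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dx, dy)
    return best_shift

# Sanity check: a channel rolled by (-3, 2) should need a (3, -2)
# shift to line back up with the reference.
rng = np.random.default_rng(3)
ref = rng.normal(size=(51, 51))
misaligned = np.roll(ref, (-3, 2), axis=(0, 1))
print(offset(misaligned, ref))  # (3, -2)
```

At the best shift the SSD drops to zero here because the synthetic data wraps perfectly; on a real plate the minimum is merely small, which is why the search compares against inf and keeps the running best.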


Review:

We hope this post helps you locate all of the Week, Final, and Peer-Graded Assessment answers for the Coursera Computational Vision quiz so you can quickly and easily learn some in-depth material. If this post has been helpful to you, please share this training with your friends and family on social media, and check out our other course answers as well.
