Spatial Operations in Image Processing
- Shin Yoonah, Yoonah
- July 20, 2022
- 5 min read
Last modified: August 8, 2022
Convolution: Linear filtering
Edge detection
Median filters
Introduction to spatial operations
Spatial operations apply a function to a neighborhood of pixels
Ex) a neighborhood of 2 by 2 pixels

When the function has been applied to this pixel's neighborhood, the neighborhood shifts and the process repeats for each pixel in the image
Final Result:
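The shifting process above can be sketched in a few lines of NumPy; `apply_neighborhood` is a hypothetical helper written for illustration, not something from the lecture:

```python
import numpy as np

def apply_neighborhood(image, func, size=2):
    # Slide a size-by-size window over the image and apply func to each window
    rows, cols = image.shape
    out = np.zeros((rows - size + 1, cols - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = func(image[i:i + size, j:j + size])
    return out

image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
result = apply_neighborhood(image, np.mean)  # 2x2 mean over a 3x3 image
print(result)  # [[3. 4.]
               #  [6. 7.]]
```

Note the output is smaller than the input, which is exactly the size problem that padding (below) addresses.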

CONVOLUTION: Linear filtering
= the standard way to filter an image; the filter is called the kernel
The different kernels perform different tasks
Convolution is analogous to the following linear equation:
Z = W * X
Z = the output image
W = the kernel or filter
* = the star denotes the convolution operation (an analytical expression)
X = the input image X

*The output array becomes much smaller than the input array*
The kernel is made up of predetermined values chosen to perform a specific operation
Let's see how the convolution operation works!!
1. Start off in the top-left corner of the image, and overlay the kernel on that region of the image

2. Multiply every element of the image by the corresponding element of the kernel

3. For the first row, multiply the intensity values and sum the results

---> This process is repeated for every row
4. Finally, for the final row, multiply the intensity values and sum the results
---> The result is the first element of the output image Z
We shift the kernel to the right, represented by the different colors shown in red; we multiply all the elements of the kernel with the image, which gives us the second element of the output image "Z"
Shift one more column and repeat the process; the resulting value gives us the next element
Then shift the kernel down and repeat the process until we reach the final column (until we get a new image)

*One problem is that the images are different sizes; we can fix this by changing the size of the input image "X"*
- We change the size of the image by padding
- In zero padding, we add two additional rows of zeros and two additional columns of zeros
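The whole procedure can be sketched in NumPy; this is a minimal illustration, assuming zero padding and the unflipped (cross-correlation) convention that image libraries typically use, and `convolve2d_same` is an illustrative name, not a library function:

```python
import numpy as np

def convolve2d_same(X, W):
    # Zero-pad X so the output Z has the same shape as X, then slide W
    k = W.shape[0]
    pad = k // 2
    Xp = np.pad(X, pad, mode="constant", constant_values=0)
    Z = np.zeros_like(X, dtype=float)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            # Multiply the overlaid region by the kernel and sum the results
            Z[i, j] = np.sum(Xp[i:i + k, j:j + k] * W)
    return Z

X = np.array([[0, 0, 0],
              [0, 255, 0],
              [0, 0, 0]], dtype=float)
W = np.ones((3, 3)) / 9.0   # 3x3 averaging kernel
Z = convolve2d_same(X, W)
print(Z.shape)  # (3, 3): same size as the input, thanks to the padding
```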
LOW PASS FILTER
= used to smooth an image, getting rid of noise

We can reduce this noise by applying a smoothing filter or kernel
Smoothing filters average out the pixels within a neighborhood; they are sometimes called low pass filters

Filtering with this kernel simply averages out the pixels in a neighborhood
Q. What happens to pixel intensities?
A. We can plot the output image and its relationship to specific regions of the input to explore the kernel's effect

- Around the edge of the box, the values change as the values of 255 are averaged out with the zeros
- Finally, when we get to the region with the noisy pixel, we see the noise value is smaller
Comparing the original image and the output image, we see the intensity of the noise has been reduced but the edges appear less sharp

*There is a trade-off between sharpness and smoothness*
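The averaging effect on a single noisy pixel can be demonstrated with a toy example (pure NumPy, written for illustration):

```python
import numpy as np

kernel = np.ones((3, 3)) / 9.0          # 3x3 averaging (low pass) kernel

image = np.zeros((5, 5))                # flat dark region...
image[2, 2] = 255.0                     # ...with one bright noise pixel

padded = np.pad(image, 1)               # zero padding keeps the output size
smoothed = np.zeros_like(image)
for i in range(5):
    for j in range(5):
        smoothed[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)

# The noise spike is averaged with its 8 neighbors: 255/9 ~ 28.3
print(smoothed[2, 2])
```

The spike shrinks by a factor of 9, but the same averaging is what blurs real edges: that is the sharpness/smoothness trade-off.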
EDGE DETECTION
= an important first step in many computer vision algorithms
Edge detection algorithms
Edges in a digital image are where the image brightness changes sharply
Edge detection uses methods that approximate derivatives and gradients to identify these areas
Let's plot the first row of the image: the horizontal axis is the column index, and the vertical axis is the intensity value
If we move in the right direction from pixel 1 to pixel 2, the intensity value increases

If we move from pixel 2 to pixel 3, the intensity value decreases

*We can represent this as a vector pointing in the opposite direction*
The direction of the vector represents whether the intensity of the adjacent pixel is increasing or decreasing

We can represent this change by applying the following difference equation: x[i, j+1] - x[i, j]

- This works by subtracting the intensity value of column j from that of the adjacent column j + 1 in row i
- This computes an approximation of the gradient in the X-direction

*Each row applies the equation*
to the intensity values for columns 1 and 2; the final column is the result of applying the equation. We can overlay the results over the image as vectors
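The difference equation can be tried on a single toy row of pixels (an illustrative example, not from the notes):

```python
import numpy as np

# One image row: dark, then bright, then dark again
row = np.array([0, 0, 255, 255, 0, 0], dtype=float)

# Difference equation: d[j] = row[j + 1] - row[j]
# Positive where intensity increases, negative where it decreases, zero elsewhere
d = row[1:] - row[:-1]
print(d)  # [   0.  255.    0. -255.    0.]
```

The two nonzero entries mark the rising and falling edges of the bright band.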
It turns out we can perform similar horizontal derivative approximations using convolution; the horizontal changes, or gradient, are computed by convolving the image with a kernel
These kernels are called Sobel operators
= We can represent the output as an array of intensities Gx; all the values are zero except the elements corresponding to the horizontal edges

In this image, the gray values have different ranges where black is negative, gray is zero, and white is positive
We can use the same process to find vertical changes

This is computed by convolving the image with a kernel to get the image "Gy"
---> These vectors point in the vertical direction

*We can combine the output of each filter into a vector*

Taking the magnitude of the vectors, we get the intensity values
- We can plot it as an image; this represents the edges
- We can also calculate the angle, representing the direction of the gradient
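Putting the pieces together, here is a minimal NumPy sketch of the Sobel gradients, their magnitude, and the angle; `correlate` is an illustrative stand-in for the library routine, and the kernels are the standard 3x3 Sobel forms:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal (x) and vertical (y) changes
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Ky = Kx.T

def correlate(X, K):
    # "Valid" cross-correlation: slide the 3x3 kernel over the interior
    out = np.zeros((X.shape[0] - 2, X.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + 3, j:j + 3] * K)
    return out

# Toy image with a vertical edge: left half dark, right half bright
img = np.zeros((5, 6))
img[:, 3:] = 255.0

Gx = correlate(img, Kx)                 # horizontal changes
Gy = correlate(img, Ky)                 # vertical changes
magnitude = np.sqrt(Gx**2 + Gy**2)      # edge strength
angle = np.arctan2(Gy, Gx)              # edge direction
```

Along the edge, Gx is large and Gy is zero, so the magnitude lights up exactly where the brightness changes.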
MEDIAN FILTER
= another popular filter; median filters are better at removing some types of noise but may distort the image

- The median filter outputs the median value of the neighborhood; consider the yellow pixel and the region in its three by three neighborhood (the red box)
- The resulting value in the output image is the median value of the 9 pixels
Depending on the padding, we see the median is identical to the image in most regions
=> Overlaying the image values, unlike the mean filter, we see the noise is no longer there and the edges are straight
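A minimal median-filter sketch in NumPy; the function name and the edge-replication padding are assumptions made for illustration:

```python
import numpy as np

def median_filter(image, size=3):
    # Replicate the border pixels so the output keeps the input size
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

img = np.full((5, 5), 10.0)   # flat region...
img[2, 2] = 255.0             # ...with one salt-noise pixel
clean = median_filter(img)
print(clean[2, 2])  # 10.0 -- the outlier is removed entirely, not just blurred
```

Unlike the averaging filter, which would leave a faint smear of the spike, the median discards the outlier completely while leaving flat regions untouched.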

Apply some Spatial Operation in open cv (PIL is relatively simple!!)
1. Noise Cancellation
new_image = image + Noise ---> Create an image "new_image" that is a noisy version of the original

kernel = np.ones((6,6))/36 ----> Create a kernel for mean (averaging) filtering
image_filtered = cv2.filter2D(src=new_image, ddepth=-1, kernel=kernel)
---> The function filter2D performs 2D convolution between the image and the kernel on each color channel independently

The new image has less noise, but it is blurry
2. Image sharpening involves smoothing the image and enhancing the edge
kernel = np.array([[-1,-1,-1], [-1, 9,-1], [-1,-1,-1]]) -----> A common sharpening kernel that enhances the edges
image_filtered = cv2.filter2D(src=image, ddepth=-1, kernel=kernel) ----> Apply filter2D

3. Edge Detection
*Assume barbara.png as an example image*
img_gray = cv2.imread('barbara.png', cv2.IMREAD_GRAYSCALE) ---> Load the image in grayscale
img_gray = cv2.GaussianBlur(img_gray, (3,3), sigmaX=0.1, sigmaY=0.1)
---> this decreases changes that may be caused by noise that would affect the gradient
*The parameter "ddepth" is the output image depth; dx and dy specify the derivative order in each direction
*Approximate the derivative in the X or Y direction using the Sobel function
ddepth = cv2.CV_16S
grad_x = cv2.Sobel(src=img_gray, ddepth=ddepth, dx=1, dy=0, ksize=3)
grad_y = cv2.Sobel(src=img_gray, ddepth=ddepth, dx=0, dy=1, ksize=3)

*Approximate the magnitude of the gradient
*Calculate absolute values and convert the result to 8-bit using convertScaleAbs
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
grad = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)

Plot grad as an image; the areas with high intensity values represent edges!!
Copyright Coursera All rights reserved