Features in an Image [Part 1]

Making computer vision easy with Monk, a low-code deep learning tool and unified wrapper for computer vision.

Akula Hemanth Kumar
Towards AI


Table of contents

  1. Image Features
  2. Corners as features
  3. Feature extraction and visualization using OpenCV and PIL
  4. Feature matching
  5. Image stitching

Image Features

What are image features?

Features

  • Describe an image.
  • Point out relevant information in an image.
  • Distinguish a set of images from others.

Types

  • Global: computed for the image as a whole.
  • Local: computed for a region within the image.

Formal definition

In computer vision and image processing, a feature is a piece of information that is relevant for solving the computational task related to a certain application.

  • All machine learning and deep learning algorithms feed on features.

What can be a feature?

  • Pixel intensity alone cannot be a feature; it has to be aggregated or combined with other information to be useful.

Features

  • Global intensity measures: average, histogram, color palette, etc. (a short sketch follows this list).
  • Edges and ridges: gradients and contours.
  • Interest points: corners, unique curvatures.
  • Blobs and textures.
  • Features from filters.
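
As a quick illustration, the snippet below computes two global features (the average intensity and a grayscale histogram) and one edge-based feature map with OpenCV. It is a minimal sketch; the image path simply reuses the chess image from the corner examples later in this post, and the Canny thresholds are arbitrary.

'''
Minimal sketch: simple global and edge features
'''
import numpy as np
import cv2
img = cv2.imread("imgs/chapter9/chess_slant.jpg", 1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Global feature: average intensity over the whole image
mean_intensity = gray.mean()
# Global feature: 256-bin grayscale histogram
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
# Edge-based feature map: Canny edges (thresholds are illustrative)
edges = cv2.Canny(gray, 100, 200)
print("mean intensity = {}".format(mean_intensity))
print("histogram bins = {}".format(hist.shape[0]))
print("edge pixels = {}".format(int((edges > 0).sum())))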

Examples

  1. Intensity (figure: intensity grouped as a feature)
  2. Edges (figure: edges as features)
  3. Keypoints (figure: keypoints as features)

What are good features?

  • Features are considered good if they are largely unaffected by external factors, i.e., they remain stable under the transformations listed below.

Invariance in features

  • Scale
  • Rotation
  • Translation
  • Perspective
  • Affine
  • Color
  • Illumination

Corners as features

  • Corner: the intersection of two edges.
  • Keypoint: a point in an image that has a well-defined position and can be robustly detected.

Used in

  • Motion detection and video tracking.
  • Image registration.
  • Image mosaicing and panorama stitching.
  • 3D modeling and object recognition.

Examples

  1. Keypoints (figure: keypoint detection)
  2. Corners (figure: corner detection)

Harris Corner detection

The Harris corner detector algorithm can be divided into five steps; a minimal sketch of them follows the list.

  • Color to grayscale.
  • Spatial derivative calculation.
  • Structure tensor setup.
  • Harris response calculation.
  • Non-maximum suppression.
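
The listing below is a minimal NumPy/OpenCV sketch of these five steps, written for illustration rather than speed; the window size, the Harris constant k, and the final threshold are arbitrary choices, and the image path reuses the chess image from the next example.

'''
Sketch of the five Harris steps (illustrative parameters)
'''
import numpy as np
import cv2
img = cv2.imread("imgs/chapter9/chess_slant.jpg", 1)
# 1. Color to grayscale
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
# 2. Spatial derivatives (Sobel)
Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
# 3. Structure tensor: windowed sums of derivative products
Ixx = cv2.boxFilter(Ix * Ix, -1, (3, 3))
Iyy = cv2.boxFilter(Iy * Iy, -1, (3, 3))
Ixy = cv2.boxFilter(Ix * Iy, -1, (3, 3))
# 4. Harris response R = det(M) - k * trace(M)^2
k = 0.04
R = (Ixx * Iyy - Ixy * Ixy) - k * (Ixx + Iyy) ** 2
# 5. Non-maximum suppression: keep local maxima above a threshold
local_max = cv2.dilate(R, None)
corners = (R == local_max) & (R > 0.01 * R.max())
print("corners found = {}".format(int(corners.sum())))

OpenCV packages steps 2-5 into a single call, cv2.cornerHarris, used below.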

Harris Corner Detection using OpenCV

'''
Harris Corners using OpenCV
'''
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread("imgs/chapter9/chess_slant.jpg", 1)
#img = cv2.resize(img, (96, 96))
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
################################FOCUS###############################
dst = cv2.cornerHarris(gray,2,3,0.04)
#####################################################################
# Self-study: Parameters
plt.figure(figsize=(8, 8))
plt.imshow(dst, cmap="gray")
plt.show()
'''
result is dilated for marking the corners
'''
dst = cv2.dilate(dst,None)
plt.figure(figsize=(8, 8))
plt.imshow(dst, cmap="gray")
plt.show()
'''
1. Threshold for an optimal value; it may vary depending on the image.
2. We first calculate the maximum and minimum pixel values in this image.
'''
max_val = np.uint8(dst).max()
min_val = np.uint8(dst).min()
print("max_val = {}".format(max_val))
print("min_val = {}".format(min_val))

Output

max_val = 255
min_val = 0
img = cv2.imread("imgs/chapter9/chess_slant.jpg", 1)
# Mark pixels whose Harris response exceeds 10% of the maximum as corners (in red)
img[dst>0.1*dst.max()]=[0,0,255]
plt.figure(figsize=(8, 8))
plt.imshow(img[:,:,::-1])
plt.show()

Finding the coordinates of corners using OpenCV Harris corner detection

%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread("imgs/chapter9/chess_slant.jpg", 1);
#img = cv2.resize(img, (96, 96))
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# find Harris corners
gray = np.float32(gray)
dst = cv2.cornerHarris(gray,2,3,0.04)
dst = cv2.dilate(dst,None)
ret, dst = cv2.threshold(dst,0.01*dst.max(),255,0)
dst = np.uint8(dst)
# find centroids
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(dst)
# define the criteria to stop and refine the corners
# Criteria: stop after 100 iterations or when the corner shifts by less than 0.001 px
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
corners = cv2.cornerSubPix(gray,np.float32(centroids),(5,5),(-1,-1),criteria)
# Now draw them
res = np.hstack((centroids,corners))
res = np.int0(res)
for x1, y1, x2, y2 in res:
    # cv2.circle(img,(x1, y1), 5, (0,255,0), -1)   # Point centroids
    cv2.circle(img,(x2, y2), 10, (0,0,255), -1)    # Point corners
# img[res[:,1],res[:,0]] = [0,0,255]
# img[res[:,3],res[:,2]] = [0,255,0]
plt.figure(figsize=(8, 8))
plt.imshow(img[:,:,::-1])
plt.show()

Shi-Tomasi Corners

  • The Harris corner detector uses a specific corner selection criterion: a score is computed for each pixel from the structure tensor, and pixels whose score exceeds a threshold are marked as corners.
  • Shi and Tomasi suggested doing away with the Harris scoring function and using only the smaller eigenvalue of the structure tensor to check whether a pixel is a corner.
  • The resulting corners are essentially Harris corners with this minor upgrade (a short sketch comparing the two scores follows this list).
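
As a hedged sketch of the difference, the snippet below computes both response maps on the same image: cv2.cornerHarris gives R = det(M) - k * trace(M)^2, while cv2.cornerMinEigenVal gives the Shi-Tomasi score, the smaller of the two eigenvalues of M. The block size, Sobel aperture, and k are illustrative values.

'''
Sketch: Harris response vs. Shi-Tomasi response
'''
import numpy as np
import cv2
img = cv2.imread("imgs/chapter9/shape.jpg", 1)
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
# Harris: R = det(M) - k * trace(M)^2
harris = cv2.cornerHarris(gray, 2, 3, 0.04)
# Shi-Tomasi: R = min(lambda1, lambda2) of the same structure tensor M
shi_tomasi = cv2.cornerMinEigenVal(gray, 2, ksize=3)
print("Harris response range: {} to {}".format(harris.min(), harris.max()))
print("Shi-Tomasi response range: {} to {}".format(shi_tomasi.min(), shi_tomasi.max()))

In practice cv2.goodFeaturesToTrack wraps the Shi-Tomasi score together with thresholding and minimum-distance filtering, as in the example below.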
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread("imgs/chapter9/shape.jpg", 1)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
################################FOCUS###############################
corners = cv2.goodFeaturesToTrack(gray,25,0.01,10)   # Self-study: Parameters
corners = np.int0(corners)
for i in corners:
    x,y = i.ravel()
    cv2.circle(img,(x,y), 5,(0, 125, 125),-1)
#####################################################################
plt.figure(figsize=(8, 8))
plt.imshow(img[:,:,::-1])
plt.show()

OpenCV FAST Corners

  • Select a pixel p and let its intensity be i.
  • Select a threshold value t.
  • Consider a circle of 16 pixels around p (the radius-3 circle used by FAST).
  • The pixel p is a corner if there exists a set of n contiguous pixels in the circle (of 16 pixels) that are all brighter than i + t or all darker than i - t; a minimal sketch of this test follows the list.
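
Below is a minimal sketch of this segment test at a single pixel. The 16 (dx, dy) offsets trace the radius-3 circle used by FAST; the threshold t, the contiguity requirement n = 12, and the probe pixel (50, 50) are illustrative values, not the library defaults.

'''
Sketch: FAST segment test at one pixel (FAST-12 style)
'''
import cv2
# Radius-3 circle of 16 offsets, in order around the candidate pixel
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]
def is_fast_corner(gray, x, y, t=10, n=12):
    i = int(gray[y, x])
    ring = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE]
    # Duplicate the ring so contiguous runs can wrap around the circle
    brighter = [p > i + t for p in ring * 2]
    darker = [p < i - t for p in ring * 2]
    def longest_run(flags):
        best = run = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, run)
        return best
    return longest_run(brighter) >= n or longest_run(darker) >= n
gray = cv2.cvtColor(cv2.imread("imgs/chapter9/tessellate.png", 1), cv2.COLOR_BGR2GRAY)
print(is_fast_corner(gray, 50, 50))

cv2.FastFeatureDetector applies this test at every pixel (with several speed-ups and optional non-maximum suppression), as the next listing shows.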
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('imgs/chapter9/tessellate.png',1)
img2 = img.copy()
img3 = img.copy()
###############################FOCUS################################
# Initiate FAST object with default values
fast = cv2.FastFeatureDetector_create()
# find and draw the keypoints
kp = fast.detect(img,None)
####################################################################
for i in kp:
    x,y = int(i.pt[0]), int(i.pt[1])
    cv2.circle(img2,(x,y), 5,(0, 255, 0),-1)
# Print all default params
print ("Threshold: ", fast.getThreshold())
print ("nonmaxSuppression: ", fast.getNonmaxSuppression())
print ("neighborhood: ", fast.getType())
print ("Total Keypoints with nonmaxSuppression: ", len(kp))
# Disable nonmaxSuppression
fast.setNonmaxSuppression(0)
kp = fast.detect(img,None)
print ("Total Keypoints without nonmaxSuppression: ", len(kp))
for i in kp:
    x,y = int(i.pt[0]), int(i.pt[1])
    cv2.circle(img3,(x,y), 5,(0, 255, 0),-1)
f = plt.figure(figsize=(15,15))
f.add_subplot(2, 1, 1).set_title('Corners with non-maximal-suppression')
plt.imshow(img2[:, :,::-1])
f.add_subplot(2, 1, 2).set_title('Corners without non-maximal-suppression')
plt.imshow(img3[:, :,::-1])
plt.show()

Output

Threshold:  10 
nonmaxSuppression: True
neighborhood: 2
Total Keypoints with nonmaxSuppression: 225
Total Keypoints without nonmaxSuppression: 1633

The remaining sections are covered in Part 2.

You can find the complete Jupyter notebook on GitHub.

If you have any questions, feel free to reach out to Abhishek and Akash.
