Commit 0b48e78

Author: Avinash Kumar
Commit message: Updated code to work with latest OpenCV and code enhancements.
1 parent 9c645b0 · commit 0b48e78

40 files changed: +135 −119 lines

README.md

Lines changed: 63 additions & 32 deletions

````diff
@@ -1,48 +1,79 @@
-# Panoramic-Image-Stitching-using-invariant-features
-I have implemented the Panoramic image stitching using invariant features from scratch. Implemented the David Lowe paper on "Image stitching using Invariant features".
+# Panoramic Image Stitching
 
-NOTE: You can experiment with any images (your own choice). I have experimented with many images. You can check the results below. You can find many images in the "Image_Data" folder.
+Create a panorama image from a given set of overlapping images.
 
-CREATE DATA:
-- You can create multiple images like tajm1.jpg, tajm2.jpg, tajm3.jpg and tajm4.jpg (shown below) from your desired image (taj.jpg). Make sure there is some overlap between consecutive images in the sequence; only then can the algorithm find and match features and build a panorama from all the images you provide.
-- OR you can directly feed multiple images from a camera in a sequence with some overlapping parts between consecutive images.
-
-Please install Libraries:
-1. Numpy
-2. OpenCV (version 3.3.0)
-3. imutils
+## Requirements
+* numpy >= 1.24.3
+* opencv-python >= 4.9.0 (latest as of 2024)
+* opencv-contrib-python >= 4.9.0 (latest as of 2024)
+* imutils >= 0.5.4
 
-TO RUN CODE:
-1. Put images in the current folder where your code is present.
-2. Run the stitch.py code.
-3. Provide the number of images you want to concatenate as input, e.g. 2, 5, 6, 10.
-4. Enter the image names in left-to-right order of concatenation, e.g.:
-   Enter the 1 image: tajm1.jpg
-   Enter the 2 image: tajm2.jpg
-   Enter the 3 image: tajm3.jpg
-   Enter the 4 image: tajm4.jpg (see the example below).
-5. Then you will get your panorama image as Panorama_image.jpg in your current folder.
-
-- Used SIFT to detect features, then RANSAC to compute the homography from matched points, and warp perspective to get the final panoramic image.
+## Description
+We have implemented the **panoramic image stitching algorithm** using invariant features from scratch,
+following the David Lowe research paper "Panoramic Image Stitching using Invariant Features".
+We use SIFT to detect features, then RANSAC, homography estimation, and warp perspective to build the panorama.
 
-RESULTS:
+## About Data
+**NOTE:** You can experiment with any images (of your own choice). We have experimented with many, which you can find in the
+`data/` folder. Please check the results below.
+#### Sample Images
+* The repo already provides sample images in the `data/` folder. Copy images from the `data/` folder
+  and put them into the `inputs/` folder.
+* **Default**: you will find the `data/tajm` folder images in the `inputs/` folder.
+#### Custom Images
+You can create your own images as well and put them into the `inputs/` folder.
+* Make sure your images are in sequence and have overlapping parts between consecutive images.
+* The minimum width and height for all images should be 400 pixels.
 
-Result of tajm1.jpg, tajm2.jpg, tajm3.jpg, tajm4.jpg
+## How To Run
+1. Put the images from which you want to create a panorama into the `inputs/` folder.
+2. Run:
+   ```shell
+   python3 stitch.py
+   ```
+3. Enter the number of images you want to concatenate (i.e. the number of images present in the `inputs/` folder):
+   ```shell
+   Enter the number of images you want to concatenate: 4
+   ```
+4. Keep entering the image names along with path and extension, e.g.:
+   ```shell
+   Enter the image names with extension in order of left to right in the way you want to concatenate:
+   Enter the 1 image name along with path and extension: inputs/tajm1.jpg
+   Enter the 2 image name along with path and extension: inputs/tajm2.jpg
+   Enter the 3 image name along with path and extension: inputs/tajm3.jpg
+   Enter the 4 image name along with path and extension: inputs/tajm4.jpg
+   ```
+5. `panorama_image.jpg` and `matched_points.jpg` will be created in the `output/` folder.
 
-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/tajm_report.JPG)
+## RESULTS
 
-Result of nature1.jpg, nature2.jpg, nature3.jpg, nature4.jpg, nature5.jpg, nature6.jpg
+#### Result of Images from data/tajm folder
+tajm1.jpg, tajm2.jpg, tajm3.jpg, tajm4.jpg
 
-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/nature_report.JPG)
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/tajm_result.jpg)
 
-Result of my1.jpg and my2.jpg
+#### Result of Images from data/nature folder
+nature1.jpg, nature2.jpg, nature3.jpg, nature4.jpg, nature5.jpg, nature6.jpg
 
-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/my_report.JPG)
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/nature_result.jpg)
 
-Result of taj1.jpg and taj2.jpg
+#### Result of Images from data/my folder
+my1.jpg and my2.jpg
 
-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/taj_report.JPG)
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/my_result.jpg)
 
-Result of room1.jpg and room2.jpg
+#### Result of Images from data/taj folder
+taj1.jpg and taj2.jpg
 
-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/room_report.JPG)
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/taj_result.jpg)
+
+#### Result of Images from data/room folder
+room1.jpg and room2.jpg
+
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/room_result.jpg)
````
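The matching stage the README describes hinges on Lowe's ratio test: a match is kept only when the nearest descriptor is clearly closer than the second-nearest. A minimal numpy-only sketch of that test (the descriptor arrays below are made-up toy data, not real SIFT output):

```python
import numpy as np

def lowe_ratio_matches(featA, featB, ratio=0.75):
    """Brute-force 2-NN matching with Lowe's ratio test.

    featA, featB: (n, d) descriptor arrays. Returns (indexA, indexB)
    pairs whose best match is clearly better than the second best.
    """
    matches = []
    for i, a in enumerate(featA):
        d = np.linalg.norm(featB - a, axis=1)  # distances to every B descriptor
        j1, j2 = np.argsort(d)[:2]             # two nearest neighbours
        if d[j1] < ratio * d[j2]:              # keep only distinctive matches
            matches.append((i, int(j1)))
    return matches

# Toy descriptors: rows 0 and 1 of A have obvious partners in B;
# row 2 is ambiguous (two near-equal neighbours) and gets rejected.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
B = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.4], [0.5, 0.6]])
print(lowe_ratio_matches(A, B))  # -> [(0, 0), (1, 1)]
```

The rejected third row shows why the test matters: repetitive texture produces several near-equal neighbours, and such ambiguous matches would otherwise corrupt the homography estimate.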

Result/Report

Lines changed: 0 additions & 6 deletions
This file was deleted.

28 files renamed without changes.

output/matched_points.jpg (144 KB, binary file added)

output/panorama_image.jpg (87.6 KB, binary file added)

panorama.py

Lines changed: 43 additions & 59 deletions

```diff
@@ -1,117 +1,101 @@
 import numpy as np
-import imutils
 import cv2
 
-class Panaroma:
-
-    def image_stitch(self, images, lowe_ratio=0.75, max_Threshold=4.0,match_status=False):
 
-        #detect the features and keypoints from SIFT
+class Panaroma:
+    def image_stitch(self, images, lowe_ratio=0.75, max_Threshold=4.0, match_status=False):
+        # detect the features and keypoints from SIFT
         (imageB, imageA) = images
-        (KeypointsA, features_of_A) = self.Detect_Feature_And_KeyPoints(imageA)
-        (KeypointsB, features_of_B) = self.Detect_Feature_And_KeyPoints(imageB)
-
-        #got the valid matched points
-        Values = self.matchKeypoints(KeypointsA, KeypointsB,features_of_A, features_of_B, lowe_ratio, max_Threshold)
+        (key_points_A, features_of_A) = self.detect_feature_and_keypoints(imageA)
+        (key_points_B, features_of_B) = self.detect_feature_and_keypoints(imageB)
 
+        # get the valid matched points
+        Values = self.match_keypoints(key_points_A, key_points_B, features_of_A, features_of_B, lowe_ratio, max_Threshold)
         if Values is None:
             return None
 
-        #to get perspective of image using computed homography
+        # get warp perspective of image using computed homography
         (matches, Homography, status) = Values
-        result_image = self.getwarp_perspective(imageA,imageB,Homography)
+        result_image = self.get_warp_perspective(imageA, imageB, Homography)
         result_image[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
 
         # check to see if the keypoint matches should be visualized
         if match_status:
-            vis = self.draw_Matches(imageA, imageB, KeypointsA, KeypointsB, matches,status)
-
-            return (result_image, vis)
+            vis = self.draw_matches(imageA, imageB, key_points_A, key_points_B, matches, status)
+            return result_image, vis
 
         return result_image
 
-    def getwarp_perspective(self,imageA,imageB,Homography):
-        val = imageA.shape[1] + imageB.shape[1]
-        result_image = cv2.warpPerspective(imageA, Homography, (val , imageA.shape[0]))
 
+    def get_warp_perspective(self, imageA, imageB, Homography):
+        val = imageA.shape[1] + imageB.shape[1]
+        result_image = cv2.warpPerspective(imageA, Homography, (val, imageA.shape[0]))
         return result_image
 
-    def Detect_Feature_And_KeyPoints(self, image):
-        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
+    def detect_feature_and_keypoints(self, image):
         # detect and extract features from the image
-        descriptors = cv2.xfeatures2d.SIFT_create()
-        (Keypoints, features) = descriptors.detectAndCompute(image, None)
-
-        Keypoints = np.float32([i.pt for i in Keypoints])
-        return (Keypoints, features)
+        descriptors = cv2.SIFT_create()
+        (keypoints, features) = descriptors.detectAndCompute(image, None)
+        keypoints = np.float32([i.pt for i in keypoints])
+        return keypoints, features
 
-    def get_Allpossible_Match(self,featuresA,featuresB):
 
-        # compute the all matches using euclidean distance and opencv provide
-        #DescriptorMatcher_create() function for that
+    def get_all_possible_matches(self, featuresA, featuresB):
+        # compute all matches using Euclidean distance; OpenCV provides the DescriptorMatcher_create() function for that
         match_instance = cv2.DescriptorMatcher_create("BruteForce")
         All_Matches = match_instance.knnMatch(featuresA, featuresB, 2)
-
         return All_Matches
 
-    def All_validmatches(self,AllMatches,lowe_ratio):
-        #to get all valid matches according to lowe concept..
-        valid_matches = []
 
+    def get_all_valid_matches(self, AllMatches, lowe_ratio):
+        # get all valid matches according to Lowe's ratio test
+        valid_matches = []
         for val in AllMatches:
             if len(val) == 2 and val[0].distance < val[1].distance * lowe_ratio:
                 valid_matches.append((val[0].trainIdx, val[0].queryIdx))
-
         return valid_matches
 
-    def Compute_Homography(self,pointsA,pointsB,max_Threshold):
-        #to compute homography using points in both images
 
-        (H, status) = cv2.findHomography(pointsA, pointsB, cv2.RANSAC, max_Threshold)
-        return (H,status)
+    def compute_homography(self, pointsA, pointsB, max_Threshold):
+        return cv2.findHomography(pointsA, pointsB, cv2.RANSAC, max_Threshold)
 
-    def matchKeypoints(self, KeypointsA, KeypointsB, featuresA, featuresB,lowe_ratio, max_Threshold):
 
-        AllMatches = self.get_Allpossible_Match(featuresA,featuresB);
-        valid_matches = self.All_validmatches(AllMatches,lowe_ratio)
+    def match_keypoints(self, KeypointsA, KeypointsB, featuresA, featuresB, lowe_ratio, max_Threshold):
+        all_matches = self.get_all_possible_matches(featuresA, featuresB)
+        valid_matches = self.get_all_valid_matches(all_matches, lowe_ratio)
 
-        if len(valid_matches) > 4:
-            # construct the two sets of points
-            pointsA = np.float32([KeypointsA[i] for (_,i) in valid_matches])
-            pointsB = np.float32([KeypointsB[i] for (i,_) in valid_matches])
+        if len(valid_matches) <= 4:
+            return None
 
-            (Homograpgy, status) = self.Compute_Homography(pointsA, pointsB, max_Threshold)
+        # construct the two sets of points
+        points_A = np.float32([KeypointsA[i] for (_, i) in valid_matches])
+        points_B = np.float32([KeypointsB[i] for (i, _) in valid_matches])
+        (homography, status) = self.compute_homography(points_A, points_B, max_Threshold)
+        return valid_matches, homography, status
 
-            return (valid_matches, Homograpgy, status)
-        else:
-            return None
 
-    def get_image_dimension(self,image):
-        (h,w) = image.shape[:2]
-        return (h,w)
+    def get_image_dimension(self, image):
+        return image.shape[:2]
 
-    def get_points(self,imageA,imageB):
 
+    def get_points(self, imageA, imageB):
         (hA, wA) = self.get_image_dimension(imageA)
         (hB, wB) = self.get_image_dimension(imageB)
         vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
         vis[0:hA, 0:wA] = imageA
         vis[0:hB, wA:] = imageB
-
         return vis
 
 
-    def draw_Matches(self, imageA, imageB, KeypointsA, KeypointsB, matches, status):
-
-        (hA,wA) = self.get_image_dimension(imageA)
-        vis = self.get_points(imageA,imageB)
+    def draw_matches(self, imageA, imageB, KeypointsA, KeypointsB, matches, status):
+        (hA, wA) = self.get_image_dimension(imageA)
+        vis = self.get_points(imageA, imageB)
 
         # loop over the matches
         for ((trainIdx, queryIdx), s) in zip(matches, status):
             if s == 1:
                 ptA = (int(KeypointsA[queryIdx][0]), int(KeypointsA[queryIdx][1]))
                 ptB = (int(KeypointsB[trainIdx][0]) + wA, int(KeypointsB[trainIdx][1]))
                 cv2.line(vis, ptA, ptB, (0, 255, 0), 1)
-
-        return vis
+        return vis
```
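The refactored `match_keypoints` feeds the filtered correspondences to `cv2.findHomography`, and `get_warp_perspective` applies the resulting 3×3 matrix to every pixel of the right-hand image. How a homography maps a point can be checked by hand with plain numpy (the matrix below is a toy pure-translation homography, not one estimated from real matches):

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to an (n, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                              # project through H
    return mapped[:, :2] / mapped[:, 2:3]             # divide by w to get back to Cartesian

# A pure-translation homography: shift every point 100 px right and 20 px down.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [50.0, 50.0]])
print(apply_homography(H, pts))  # maps (0, 0) -> (100, 20) and (50, 50) -> (150, 70)
```

The final division by the homogeneous coordinate `w` is what distinguishes a homography from an affine transform; for a translation `w` stays 1, but for a real stitching homography it varies with position, producing the perspective distortion visible in the warped half of the panorama.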

result/description

Lines changed: 11 additions & 0 deletions

```diff
@@ -0,0 +1,11 @@
+Result Description:
+
+tajm_result.jpg: result of images from data/tajm folder
+
+nature_result.jpg: result of images from data/nature folder
+
+room_result.jpg: result of images from data/room folder
+
+taj_result.jpg: result of images from data/taj folder
+
+my_result.jpg: result of images from data/my folder
```

5 files renamed without changes.

stitch.py

Lines changed: 18 additions & 22 deletions

```diff
@@ -2,50 +2,46 @@
 import imutils
 import cv2
 
-#Take picture from folder like: Hill1 & Hill2, scene1 & scene2, my1 & my2, taj1 & taj2, lotus1 & lotus2, beach1 & beach2, room1 & room2
 
-print("Enter the number of images you want to concantenate:")
-no_of_images = int(input())
-print("Enter the image name in order of left to right in way of concantenation:")
-#like taj1.jpg, taj2.jpg, taj3.jpg .... tajn.jpg
-filename = []
+no_of_images = int(input("Enter the number of images you want to concatenate: "))
+print("Enter the image names with extension in order of left to right in the way you want to concatenate: ")
+# like tajm1.jpg, tajm2.jpg, tajm3.jpg .... tajmn.jpg
 
+filename = []
 for i in range(no_of_images):
-    print("Enter the %d image:" %(i+1))
-    filename.append(input())
+    filename.append(input("Enter the %d image name along with path and extension: " % (i + 1)))
 
 images = []
-
 for i in range(no_of_images):
     images.append(cv2.imread(filename[i]))
 
-#We need to modify the image resolution and keep our aspect ratio use the function imutils
-
+# We need to modify the images' width and height to keep the aspect ratio the same across images
 for i in range(no_of_images):
     images[i] = imutils.resize(images[i], width=400)
 
 for i in range(no_of_images):
     images[i] = imutils.resize(images[i], height=400)
 
 
-panaroma = Panaroma()
-if no_of_images==2:
-    (result, matched_points) = panaroma.image_stitch([images[0], images[1]], match_status=True)
+panorama = Panaroma()
+if no_of_images == 2:
+    (result, matched_points) = panorama.image_stitch([images[0], images[1]], match_status=True)
 else:
-    (result, matched_points) = panaroma.image_stitch([images[no_of_images-2], images[no_of_images-1]], match_status=True)
+    (result, matched_points) = panorama.image_stitch([images[no_of_images - 2], images[no_of_images - 1]], match_status=True)
     for i in range(no_of_images - 2):
-        (result, matched_points) = panaroma.image_stitch([images[no_of_images-i-3],result], match_status=True)
+        (result, matched_points) = panorama.image_stitch([images[no_of_images - i - 3], result], match_status=True)
 
-#to show the got panaroma image and valid matched points
-for i in range(no_of_images):
-    cv2.imshow("Image {k}".format(k=i+1), images[i])
+# show input images
+# for i in range(no_of_images):
+#     cv2.imshow("Image {k}".format(k=i + 1), images[i])
 
+# show the panorama image and valid matched points
 cv2.imshow("Keypoint Matches", matched_points)
 cv2.imshow("Panorama", result)
 
-#to write the images
-cv2.imwrite("Matched_points.jpg",matched_points)
-cv2.imwrite("Panorama_image.jpg",result)
+# save panorama and matched_points images in output folder
+cv2.imwrite("output/matched_points.jpg", matched_points)
+cv2.imwrite("output/panorama_image.jpg", result)
 
 cv2.waitKey(0)
 cv2.destroyAllWindows()
```
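stitch.py folds the images in from right to left: it first stitches the two rightmost images, then repeatedly stitches the next image to the left onto the growing result. That accumulation order can be sanity-checked with a toy stand-in for `image_stitch` (string concatenation instead of real stitching, so the loop indices are the only thing under test):

```python
def stitch_pair(images):
    # Toy stand-in for Panaroma.image_stitch: just joins the two "images".
    left, right = images
    return left + right

names = ["img1", "img2", "img3", "img4"]
n = len(names)

# Same control flow as stitch.py: start from the two rightmost images...
result = stitch_pair([names[n - 2], names[n - 1]])
# ...then fold in each remaining image from right to left.
for i in range(n - 2):
    result = stitch_pair([names[n - i - 3], result])

print(result)  # -> img1img2img3img4
```

Stitching right-to-left keeps the leftmost image as the final reference frame, which matches the script's prompt to enter image names in left-to-right order.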

0 commit comments