
Image-Particle-Filter


  • Summary - Image-based Particle Filter for Drone Localization
  • Live Demo

Usage & Installation

  • [Step 1] - Get code via git clone https://github.com/briancsavage/Image-Particle-Filter.git
  • [Step 2] - Navigate to repository locally via cd /path/to/Image-Particle-Filter
  • [Step 3] - Install dependencies via pip install -r requirements.txt
  • [Step 4] - Run Streamlit application locally via streamlit run GUI.py

Implementation & Method

Code Explanation
Heuristic Estimator for Position

    First, we calculate the color histograms for the reference and expected perspectives. Each histogram is a dictionary whose keys are intensity values between 0 and 255 and whose values are the frequency counts of the BGR channel values.
    Using these frequency counts, we compute the mean squared error between the reference and expected color histograms, then return the reciprocal of that error as the similarity score, so more similar perspectives receive higher scores.
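The heuristic above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: the function name is hypothetical, the histograms here are plain per-channel frequency counts, and a small epsilon is added before taking the reciprocal to avoid dividing by zero for identical images.

```python
import numpy as np

def histogram_similarity(reference: np.ndarray, expected: np.ndarray) -> float:
    """Score two BGR images by the reciprocal of the MSE between
    their per-channel color histograms (256 bins per channel)."""
    def bgr_histogram(image: np.ndarray) -> np.ndarray:
        # Frequency counts of intensity values 0-255 for each of the 3 channels.
        return np.stack([np.bincount(image[..., c].ravel(), minlength=256)
                         for c in range(3)]).astype(float)

    mse = np.mean((bgr_histogram(reference) - bgr_histogram(expected)) ** 2)
    # Reciprocal of the MSE: identical histograms give an MSE of zero,
    # so guard the division with a small epsilon.
    return 1.0 / (mse + 1e-9)
```

A perspective whose histogram closely matches the reference yields a small MSE and therefore a large similarity score.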
HOG Transformer

    Instead of using the color histogram, we use a HOG transformer to compute the histogram of oriented gradients in 2x2 patches across the image. This returns a flattened feature vector that expresses the directionality of color change across the patches of the image.
    The final pre-processing step applies scikit-learn's StandardScaler() to scale the feature vectors to zero mean and unit variance.
    We write a separate lambda function for the HOG transformation so that this pre-processing step can be easily parallelized, since the bulk of the function's computation is in the HOG feature extraction step.
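The HOG-plus-scaling pipeline described above might look roughly like the sketch below. The `HogTransformer` class here is a simplified NumPy stand-in (patch-wise gradient-orientation histograms) rather than a full HOG implementation such as scikit-image's `hog`, and the class and parameter names are illustrative, not taken from the repository.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

class HogTransformer(BaseEstimator, TransformerMixin):
    """Simplified HOG-style features: gradient-orientation histograms
    computed over small patches, flattened into one vector per image."""

    def __init__(self, patch_size=2, bins=9):
        self.patch_size = patch_size
        self.bins = bins

    def fit(self, X, y=None):
        return self  # stateless transformer

    def transform(self, X):
        return np.array([self._features(img) for img in X])

    def _features(self, image):
        gray = image.mean(axis=-1) if image.ndim == 3 else image
        gy, gx = np.gradient(gray)
        magnitude = np.hypot(gx, gy)
        orientation = np.arctan2(gy, gx)  # radians in [-pi, pi]
        p, (h, w) = self.patch_size, gray.shape
        features = []
        for i in range(0, h - h % p, p):
            for j in range(0, w - w % p, p):
                # Magnitude-weighted histogram of orientations in this patch.
                hist, _ = np.histogram(orientation[i:i + p, j:j + p],
                                       bins=self.bins, range=(-np.pi, np.pi),
                                       weights=magnitude[i:i + p, j:j + p])
                features.append(hist)
        return np.concatenate(features)

# Feature vectors are then scaled to zero mean and unit variance, as described.
pipeline = Pipeline([("hog", HogTransformer()),
                     ("scale", StandardScaler())])
```

Wrapping both steps in a `Pipeline` keeps the feature extraction and scaling fitted together, which mirrors the pre-processing order in the text.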
Learning Based Estimator for Position

    In the constructor for the PerspectiveSimularity class, we first initialize the HOG feature extractor from above. In the training step, we fit an SGDRegressor as the model. The justification for using an SGD algorithm over LBFGS or ADAM is that we have a minimal amount of data relative to the performance benefits of ADAM, and SGD is more likely to converge to a global minimum than LBFGS, at the expense of training time.
    At inference time, we first apply the HOG transformer and standard scaler to the images. Then, we use the trained regressor to predict the p(z|x) values for the provided images. Using these estimates, we apply a softmax function to the array to obtain the corresponding confidence levels for each of the tested positions.
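The train-then-softmax flow can be sketched as follows. The feature vectors and targets here are synthetic placeholders standing in for the scaled HOG features and similarity scores described above, so the shapes and values are illustrative only.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

def softmax(scores: np.ndarray) -> np.ndarray:
    # Subtract the max for numerical stability before exponentiating.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
X_train = rng.random((50, 144))   # placeholder scaled HOG feature vectors
y_train = rng.random(50)          # placeholder p(z|x) training targets

# Train the SGD-based regressor on (features, similarity) pairs.
model = SGDRegressor(max_iter=1000, tol=1e-3, random_state=0)
model.fit(X_train, y_train)

# At inference, predict p(z|x) for each candidate position, then apply
# a softmax to turn the raw predictions into per-position confidences.
candidates = rng.random((5, 144))
confidences = softmax(model.predict(candidates))
```

The softmax guarantees the per-position confidences are positive and sum to one, so they can be used directly as particle weights.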



Simulation Interface

(Screenshots: web captures of the Streamlit simulation interface running on localhost, March 4, 2022.)
