Direct Hybrid Factor Specification #1805

Merged: 36 commits, Sep 20, 2024 (changes shown are from 33 commits).

Commits
75d4724
remove extra imports in DiscreteBayesNet.cpp
varunagrawal Aug 22, 2024
dce5641
minor edits
varunagrawal Aug 22, 2024
9e77eba
rename X1 to X0 and X2 to X1
varunagrawal Aug 22, 2024
3fc1019
provide logNormalizers directly to the augment method
varunagrawal Aug 22, 2024
cfef6d3
update GaussianMixture::likelihood to compute the logNormalizers
varunagrawal Aug 22, 2024
30bf261
Tests which verify direct factor specification works well
varunagrawal Aug 22, 2024
bfaff50
remove extra prints
varunagrawal Aug 22, 2024
665d755
add docstring and GTSAM_EXPORT for ComputeLogNormalizer
varunagrawal Aug 22, 2024
03e61f4
Merge branch 'working-hybrid' into direct-hybrid-fg
varunagrawal Aug 23, 2024
07a0088
compute logNormalizers and pass to GaussianMixtureFactor
varunagrawal Aug 23, 2024
62b32fa
Merge branch 'working-hybrid' into direct-hybrid-fg
varunagrawal Aug 25, 2024
fbffd79
Merge branch 'develop' into direct-hybrid-fg
varunagrawal Sep 5, 2024
13193a1
better comments
varunagrawal Sep 5, 2024
0bab8ec
Merge branch 'develop' into direct-hybrid-fg
varunagrawal Sep 5, 2024
51a2fd5
improved comments
varunagrawal Sep 6, 2024
615c04a
some more refactor and remove redundant test
varunagrawal Sep 6, 2024
9dc29e0
fix test
varunagrawal Sep 6, 2024
05a4b7a
Merge branch 'develop' into direct-hybrid-fg
varunagrawal Sep 6, 2024
24ec30e
replace emplace_back with emplace_shared
varunagrawal Sep 6, 2024
506cda8
Merge branch 'hybrid-error-scalars' into direct-hybrid-fg
varunagrawal Sep 15, 2024
336b494
fixes
varunagrawal Sep 15, 2024
de68aec
fix tests
varunagrawal Sep 15, 2024
b895e64
Merge branch 'develop' into direct-hybrid-fg
varunagrawal Sep 18, 2024
987ecd4
undo accidental rename
varunagrawal Sep 18, 2024
80d9a5a
remove duplicate test and focus only on direct specification
varunagrawal Sep 19, 2024
717eb7e
relinearization test
varunagrawal Sep 19, 2024
f875b86
print nonlinear part of HybridValues
varunagrawal Sep 19, 2024
2937533
Merge branch 'develop' into direct-hybrid-fg
varunagrawal Sep 19, 2024
9b6facd
add documentation for additive scalar in the error and remove the 0.5…
varunagrawal Sep 19, 2024
244661a
rename ComputeLogNormalizer to ComputeLogNormalizerConstant
varunagrawal Sep 19, 2024
4f88829
fix docstring for HybridGaussianFactor
varunagrawal Sep 19, 2024
d60a253
logNormalizationConstant is now a method for Gaussian noise model
varunagrawal Sep 19, 2024
cea0dd5
update tests
varunagrawal Sep 19, 2024
1ab82f3
hide sqrt(2*value) so the user doesn't have to premultiply by 2
varunagrawal Sep 20, 2024
364b4b4
logDetR method which leverages noise model for efficiency. Build logD…
varunagrawal Sep 20, 2024
67a8b8f
comprehensive unit testing
varunagrawal Sep 20, 2024
2 changes: 0 additions & 2 deletions gtsam/discrete/DiscreteBayesNet.cpp
@@ -18,8 +18,6 @@

#include <gtsam/discrete/DiscreteBayesNet.h>
#include <gtsam/discrete/DiscreteConditional.h>
- #include <gtsam/discrete/DiscreteFactorGraph.h>
- #include <gtsam/discrete/DiscreteLookupDAG.h>
#include <gtsam/inference/FactorGraph-inst.h>

namespace gtsam {
3 changes: 1 addition & 2 deletions gtsam/hybrid/HybridGaussianConditional.cpp
@@ -232,8 +232,7 @@ std::shared_ptr<HybridGaussianFactor> HybridGaussianConditional::likelihood(

/* ************************************************************************* */
std::set<DiscreteKey> DiscreteKeysAsSet(const DiscreteKeys &discreteKeys) {
- std::set<DiscreteKey> s;
- s.insert(discreteKeys.begin(), discreteKeys.end());
+ std::set<DiscreteKey> s(discreteKeys.begin(), discreteKeys.end());
return s;
}

6 changes: 3 additions & 3 deletions gtsam/hybrid/HybridGaussianFactor.cpp
@@ -30,8 +30,8 @@ namespace gtsam {

/**
* @brief Helper function to augment the [A|b] matrices in the factor components
- * with the normalizer values.
- * This is done by storing the normalizer value in
+ * with the additional scalar values.
+ * This is done by storing the value in
* the `b` vector as an additional row.
*
* @param factors DecisionTree of GaussianFactors and arbitrary scalars.
@@ -56,7 +56,7 @@ HybridGaussianFactor::Factors augment(
const HybridGaussianFactor::sharedFactor &gf) {
auto jf = std::dynamic_pointer_cast<JacobianFactor>(gf);
if (!jf) return gf;
- // If the log_normalizer is 0, do nothing
+ // If the value is 0, do nothing
if (values(assignment) == 0.0) return gf;
Member:
You are calling the potentially expensive `values(assignment)` twice, so please assign it to a variable. But why unzip the tree and then call `values()` at all? Can't you just apply on the original tree and get the value for free?

A more problematic issue is that you take the square root. But is this wrong now? Should we not do `sqrt(2*value)`? Because the "hidden" value c will be squared and halved: value == 0.5*c^2.

Collaborator (author):

  1. I need to unzip to normalize the values (so that we don't have negatives, else sqrt(value) gives a NaN). This brings up an interesting point, though: should we normalize for the user, or should the user pre-normalize? I felt like us doing it makes sense, since the sqrt used to hide the value is something we need to do and the user shouldn't have to worry about those semantics. Thoughts?

  2. We can do 2*value, but that will also have to change in various other places, like HybridGaussianConditional::likelihood. I think what you suggest is the better approach, in keeping with my previous point.


GaussianFactorGraph gfg;
9 changes: 9 additions & 0 deletions gtsam/hybrid/HybridGaussianFactor.h
@@ -45,6 +45,15 @@ using GaussianFactorValuePair = std::pair<GaussianFactor::shared_ptr, double>;
* where the set of discrete variables indexes into
* the continuous Gaussian distributions.
*
* In factor graphs the error function typically returns 0.5*|A*x - b|^2, i.e.,
* the negative log-likelihood for a Gaussian noise model.
* In hybrid factor graphs we allow *adding* an arbitrary scalar dependent on
* the discrete assignment.
* For example, adding a 70/30 mode probability is supported by providing the
* scalars $-log(.7)$ and $-log(.3)$.
* Note that adding a common constant will not make any difference in the
* optimization, so $-log(70)$ and $-log(30)$ work just as well.
*
* @ingroup hybrid
*/
class GTSAM_EXPORT HybridGaussianFactor : public HybridFactor {
15 changes: 13 additions & 2 deletions gtsam/hybrid/HybridNonlinearFactor.h
@@ -45,6 +45,17 @@ using NonlinearFactorValuePair = std::pair<NonlinearFactor::shared_ptr, double>;
* This class stores all factors as HybridFactors which can then be typecast to
* one of (NonlinearFactor, GaussianFactor) which can then be checked to perform
* the correct operation.
*
* In factor graphs the error function typically returns 0.5*|h(x)-z|^2, i.e.,
* the negative log-likelihood for a Gaussian noise model.
* In hybrid factor graphs we allow *adding* an arbitrary scalar dependent on
* the discrete assignment.
* For example, adding a 70/30 mode probability is supported by providing the
* scalars $-log(.7)$ and $-log(.3)$.
* Note that adding a common constant will not make any difference in the
* optimization, so $-log(70)$ and $-log(30)$ work just as well.
*
* @ingroup hybrid
*/
class HybridNonlinearFactor : public HybridFactor {
public:
@@ -134,7 +145,7 @@ class HybridNonlinearFactor : public HybridFactor {
auto errorFunc =
[continuousValues](const std::pair<sharedFactor, double>& f) {
auto [factor, val] = f;
- return factor->error(continuousValues) + (0.5 * val);
+ return factor->error(continuousValues) + val;
};
DecisionTree<Key, double> result(factors_, errorFunc);
return result;
@@ -153,7 +164,7 @@ class HybridNonlinearFactor : public HybridFactor {
auto [factor, val] = factors_(discreteValues);
// Compute the error for the selected factor
const double factorError = factor->error(continuousValues);
- return factorError + (0.5 * val);
+ return factorError + val;
}

/**
9 changes: 6 additions & 3 deletions gtsam/hybrid/HybridValues.cpp
@@ -36,9 +36,12 @@ HybridValues::HybridValues(const VectorValues& cv, const DiscreteValues& dv,
void HybridValues::print(const std::string& s,
const KeyFormatter& keyFormatter) const {
std::cout << s << ": \n";
- continuous_.print(" Continuous",
-                   keyFormatter);  // print continuous components
- discrete_.print(" Discrete", keyFormatter);  // print discrete components
+ // print continuous components
+ continuous_.print(" Continuous", keyFormatter);
+ // print discrete components
+ discrete_.print(" Discrete", keyFormatter);
+ // print nonlinear components
+ nonlinear_.print(" Nonlinear", keyFormatter);
}

/* ************************************************************************* */
174 changes: 169 additions & 5 deletions gtsam/hybrid/tests/testHybridGaussianFactor.cpp
@@ -396,7 +396,7 @@ namespace test_two_state_estimation {

DiscreteKey m1(M(1), 2);

- void addMeasurement(HybridBayesNet& hbn, Key z_key, Key x_key, double sigma) {
+ void addMeasurement(HybridBayesNet &hbn, Key z_key, Key x_key, double sigma) {
auto measurement_model = noiseModel::Isotropic::Sigma(1, sigma);
hbn.emplace_shared<GaussianConditional>(z_key, Vector1(0.0), I_1x1, x_key,
-I_1x1, measurement_model);
@@ -420,7 +420,7 @@ static HybridGaussianConditional::shared_ptr CreateHybridMotionModel(

/// Create two state Bayes network with 1 or two measurement models
HybridBayesNet CreateBayesNet(
-    const HybridGaussianConditional::shared_ptr& hybridMotionModel,
+    const HybridGaussianConditional::shared_ptr &hybridMotionModel,
bool add_second_measurement = false) {
HybridBayesNet hbn;

@@ -443,9 +443,9 @@ HybridBayesNet CreateBayesNet(

/// Approximate the discrete marginal P(m1) using importance sampling
std::pair<double, double> approximateDiscreteMarginal(
-    const HybridBayesNet& hbn,
-    const HybridGaussianConditional::shared_ptr& hybridMotionModel,
-    const VectorValues& given, size_t N = 100000) {
+    const HybridBayesNet &hbn,
+    const HybridGaussianConditional::shared_ptr &hybridMotionModel,
+    const VectorValues &given, size_t N = 100000) {
/// Create importance sampling network q(x0,x1,m) = p(x1|x0,m1) q(x0) P(m1),
/// using q(x0) = N(z0, sigmaQ) to sample x0.
HybridBayesNet q;
@@ -741,6 +741,170 @@ TEST(HybridGaussianFactor, TwoStateModel4) {
EXPECT(assert_equal(expected, *(bn->at(2)->asDiscrete()), 0.002));
}

namespace test_direct_factor_graph {
/**
* @brief Create a Factor Graph by directly specifying all
* the factors instead of creating conditionals first.
* This way we can directly provide the likelihoods and
* then perform linearization.
*
* @param values Initial values to linearize around.
* @param means The means of the HybridGaussianFactor components.
* @param sigmas The standard deviations of the HybridGaussianFactor components.
* @param m1 The discrete key.
* @return HybridGaussianFactorGraph
*/
static HybridGaussianFactorGraph CreateFactorGraph(
const gtsam::Values &values, const std::vector<double> &means,
const std::vector<double> &sigmas, DiscreteKey &m1,
double measurement_noise = 1e-3) {
auto model0 = noiseModel::Isotropic::Sigma(1, sigmas[0]);
auto model1 = noiseModel::Isotropic::Sigma(1, sigmas[1]);
auto prior_noise = noiseModel::Isotropic::Sigma(1, measurement_noise);

auto f0 =
std::make_shared<BetweenFactor<double>>(X(0), X(1), means[0], model0)
->linearize(values);
auto f1 =
std::make_shared<BetweenFactor<double>>(X(0), X(1), means[1], model1)
->linearize(values);

// Create HybridGaussianFactor
// We multiply by -2 since we want the underlying scalar to be log(|2πΣ|)
std::vector<GaussianFactorValuePair> factors{
{f0, -2 * model0->logNormalizationConstant()},
{f1, -2 * model1->logNormalizationConstant()}};
HybridGaussianFactor motionFactor({X(0), X(1)}, m1, factors);

HybridGaussianFactorGraph hfg;
hfg.push_back(motionFactor);

hfg.push_back(PriorFactor<double>(X(0), values.at<double>(X(0)), prior_noise)
.linearize(values));

return hfg;
}
} // namespace test_direct_factor_graph

/* ************************************************************************* */
/**
* @brief Test components with differing means but the same covariances.
* The factor graph is
*-X0-*-X1
* |
* M1
*/
TEST(HybridGaussianFactor, DifferentMeansFG) {
using namespace test_direct_factor_graph;

DiscreteKey m1(M(1), 2);

Values values;
double x1 = 0.0, x2 = 1.75;
values.insert(X(0), x1);
values.insert(X(1), x2);

std::vector<double> means = {0.0, 2.0}, sigmas = {1e-0, 1e-0};

HybridGaussianFactorGraph hfg = CreateFactorGraph(values, means, sigmas, m1);

{
auto bn = hfg.eliminateSequential();
HybridValues actual = bn->optimize();

HybridValues expected(
VectorValues{{X(0), Vector1(0.0)}, {X(1), Vector1(-1.75)}},
DiscreteValues{{M(1), 0}});

EXPECT(assert_equal(expected, actual));

DiscreteValues dv0{{M(1), 0}};
VectorValues cont0 = bn->optimize(dv0);
double error0 = bn->error(HybridValues(cont0, dv0));
// regression
EXPECT_DOUBLES_EQUAL(0.69314718056, error0, 1e-9);

DiscreteValues dv1{{M(1), 1}};
VectorValues cont1 = bn->optimize(dv1);
double error1 = bn->error(HybridValues(cont1, dv1));
EXPECT_DOUBLES_EQUAL(error0, error1, 1e-9);
}

{
auto prior_noise = noiseModel::Isotropic::Sigma(1, 1e-3);
hfg.push_back(
PriorFactor<double>(X(1), means[1], prior_noise).linearize(values));

auto bn = hfg.eliminateSequential();
HybridValues actual = bn->optimize();

HybridValues expected(
VectorValues{{X(0), Vector1(0.0)}, {X(1), Vector1(0.25)}},
DiscreteValues{{M(1), 1}});

EXPECT(assert_equal(expected, actual));

{
DiscreteValues dv{{M(1), 0}};
VectorValues cont = bn->optimize(dv);
double error = bn->error(HybridValues(cont, dv));
// regression
EXPECT_DOUBLES_EQUAL(2.12692448787, error, 1e-9);
}
{
DiscreteValues dv{{M(1), 1}};
VectorValues cont = bn->optimize(dv);
double error = bn->error(HybridValues(cont, dv));
// regression
EXPECT_DOUBLES_EQUAL(0.126928487854, error, 1e-9);
}
}
}

/* ************************************************************************* */
/**
* @brief Test components with differing covariances but the same means.
* The factor graph is
*-X0-*-X1
* |
* M1
*/
TEST(HybridGaussianFactor, DifferentCovariancesFG) {
using namespace test_direct_factor_graph;

DiscreteKey m1(M(1), 2);

Values values;
double x1 = 1.0, x2 = 1.0;
values.insert(X(0), x1);
values.insert(X(1), x2);

std::vector<double> means = {0.0, 0.0}, sigmas = {1e2, 1e-2};

// Create FG with HybridGaussianFactor and prior on X1
HybridGaussianFactorGraph fg = CreateFactorGraph(values, means, sigmas, m1);
auto hbn = fg.eliminateSequential();

VectorValues cv;
cv.insert(X(0), Vector1(0.0));
cv.insert(X(1), Vector1(0.0));

// Check the error values at the MLE point μ.
AlgebraicDecisionTree<Key> errorTree = hbn->errorTree(cv);

DiscreteValues dv0{{M(1), 0}};
DiscreteValues dv1{{M(1), 1}};

// regression
EXPECT_DOUBLES_EQUAL(9.90348755254, errorTree(dv0), 1e-9);
EXPECT_DOUBLES_EQUAL(0.69314718056, errorTree(dv1), 1e-9);

DiscreteConditional expected_m1(m1, "0.5/0.5");
DiscreteConditional actual_m1 = *(hbn->at(2)->asDiscrete());

EXPECT(assert_equal(expected_m1, actual_m1));
}

/* ************************************************************************* */
int main() {
TestResult tr;