Remove cvxpy for weight minimization, fix test values, and refactor test code #120
Conversation
@@ -1,5 +1,4 @@
 numpy
 scipy
-cvxpy
As intended
tests/test_optimizers.py
Outdated
-tm = [
-    ([[[1, 0], [0, 1]], [1, 1], [0, 0], [1, 1]], [0, 0]),
+test_matrix = [
+    # ([stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound], expected)
Added an inline comment to show the order of the input data in the test matrix.
tests/test_optimizers.py
Outdated
-    ([[[1, 0], [0, 1]], [1, 1], [0, 0], [1, 1]], [0, 0]),
+test_matrix = [
+    # ([stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound], expected)
+    ([[[1, 0], [0, 1]], [1, 1], 0, 0], [0, 0]),
Fixing a syntax bug: changed the bounds from the single list [0, 0] to the two scalars 0, 0, which refer to lower_bound and upper_bound, respectively.
-    expected = tm[1]
-    actual = get_weights(tm[0][0], tm[0][1], tm[0][2], tm[0][3])
+    actual = get_weights(stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound)
Pass named variables instead of tm[0][0], etc., which is much harder to read for future generations.
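As a small illustration of that readability difference, here is a sketch using one hypothetical entry in the format documented by the inline comment above (tm_entry is a made-up name for this example):

# One entry in the documented order:
# ([stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound], expected)
tm_entry = ([[[1, 0], [0, 1]], [1, 1], 0, 0], [0, 0])

# Hard to read: every value hides behind an anonymous index like tm_entry[0][2].
inputs, expected = tm_entry[0], tm_entry[1]

# Easier to read: give each element the name the comment promises.
(stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound), expected = tm_entry
print(stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound, expected)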
Problem: the previous code at line 212 in this file had an absolute tolerance of 0.5!

np.testing.assert_allclose(actual, expected, rtol=1e-03, atol=0.5)

When I tightened atol to 1e-05,

np.testing.assert_allclose(actual, expected, rtol=1e-03, atol=1e-05)

the original code started to fail (3-4 out of 7-8 test cases failed). Since our new scipy minimization function has been implemented correctly per our tests above, I've updated the expected values for the failing cases, and CI is passing with the new implementation as well.
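For context, here is a minimal sketch of how a cvxpy-free, scipy-based weight minimization of this kind could look. The function name get_weights and the argument names come from the diff above, but the exact objective, solver choice, and starting point are assumptions for illustration, not necessarily what this repository implements:

import numpy as np
from scipy.optimize import minimize

def get_weights(stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound):
    # Sketch: minimize the quadratic 0.5 * w @ G @ w + c @ w subject to simple box bounds.
    gram = np.asarray(stretched_component_gram_matrix, dtype=float)
    coeff = np.asarray(linear_coefficient, dtype=float)
    n = coeff.shape[0]

    def objective(w):
        return 0.5 * w @ gram @ w + coeff @ w

    # One (lower, upper) pair per weight; scalar bounds apply to every component.
    bounds = [(lower_bound, upper_bound)] * n
    result = minimize(objective, x0=np.zeros(n), method="L-BFGS-B", bounds=bounds)
    return result.x

# With the fixed test-matrix entry above, bounds of 0 force every weight to 0.
print(get_weights([[1, 0], [0, 1]], [1, 1], 0, 0))  # [0. 0.]

L-BFGS-B handles simple box bounds natively, which is one reason no extra constraint machinery from cvxpy is needed for a problem of this shape.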
expected = tuwm[1]
np.testing.assert_allclose(actual, expected, rtol=1e-03, atol=0.5)
atol=0.5 is way too high considering we are comparing numbers between 0 and roughly 1.2.
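To make the concern concrete, here is a small illustration with made-up numbers. assert_allclose passes when |actual - desired| <= atol + rtol * |desired|, so atol=0.5 lets an answer that is off by 0.4 slip through:

import numpy as np

# Passes: a 0.4 difference is within atol=0.5 + rtol * 0.7.
np.testing.assert_allclose(1.1, 0.7, rtol=1e-03, atol=0.5)

# Fails, as it should for values in this range.
try:
    np.testing.assert_allclose(1.1, 0.7, rtol=1e-03, atol=1e-05)
except AssertionError:
    print("mismatch caught with the tighter tolerance")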
Ready for review @sbillinge. Please find my comments below.
The changes look like a step in the right direction.
Before I merge, let me think a bit about this. I haven't really looked at the tests in this code before, so I want to try to understand what behavior they are testing. It would be good to collaborate on this if you are interested. People tend to be bad at testing until they are good. They don't realize that the tests are supposed to define the behavior we want, and then we code to the tests; most people's workflow tends to be the other way around: write the code, then come up with some tests that pass when the code runs.
That's how you might end up with a test with ridiculous tolerances.
So the next step is to think about what each function is supposed to be doing and what behavior we want it to have. This takes time, so we will just do it for this bit of code that we have been touching here.
I took a quick look at those tests and they are a bit hard to understand. There are also inconsistencies in that function. For example, the docstring says that the inputs need to be 1D array-like (for example), but then there is a line of code that takes the array and does … . For the tests, we may want to start by putting a comment on each line of … . This may be more trouble than it is worth, but let's give it a bit of thought at least, since we are working on it.
Yes, the tests are hard to interpret and understand, with unknown keywords such as … . I will take some time to refactor the code and keep you posted. Would you want me to make a separate PR for the refactoring?
It may make sense to do it on the same PR. First work on the tests; then the code refactor will simply include the removal of cvxpy. Let's update and then review the tests before doing the refactor. We may want to do it systematically for any functions that call this one as well, unless this gets to be too much. Those functions will control whether the inputs here are array-like or not, for example. Is there a reason we don't make these arrays at the beginning and just pass them around as such?
@sbillinge ready for review. Mostly, I tried to leverage the power of pytest.mark.parametrize. I think it's best to continue further work in a separate PR.
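For readers following along, here is a minimal sketch of the pytest.mark.parametrize pattern being described. The get_weights stub and the expected values are placeholders for illustration, not the repository's actual implementation or test data:

import numpy as np
import pytest

def get_weights(gram, coeff, lower, upper):
    # Placeholder standing in for the real optimizer under test; it simply clips
    # -coeff into the bounds, which matches the hand-picked cases below.
    return np.clip(-np.asarray(coeff, dtype=float), lower, upper)

test_matrix = [
    # ([stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound], expected)
    ([[[1, 0], [0, 1]], [1, 1], 0, 0], [0, 0]),
    ([[[1, 0], [0, 1]], [1, 1], -1, 1], [-1, -1]),
]

@pytest.mark.parametrize("inputs, expected", test_matrix)
def test_get_weights(inputs, expected):
    # Unpack into named variables so each test case reads in the documented order.
    stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound = inputs
    actual = get_weights(stretched_component_gram_matrix, linear_coefficient, lower_bound, upper_bound)
    np.testing.assert_allclose(actual, expected, rtol=1e-03, atol=1e-05)

Each tuple in test_matrix then becomes one test case, and the inline comment documents what each position means.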
Thanks for this @bobleesj. I agree this is a nice way to do the mark.parametrize. Of greater utility would be some comments that explain the intent of each test within the parametrize. It may be hard to reverse engineer this for other people's tests, but it is good practice moving forward.
Got it. Now that we have slightly more readable test functions, I will try to reverse engineer them.
Closes #112