[DO NOT MERGE] Demo js divergence compared to kl divergence#11744
Conversation
```python
import numpy as np
import pandas as pd
from scipy import stats
from scipy.spatial.distance import jensenshannon
```
Previously, `scipy.stats` was used to compute KL divergence; now we use `jensenshannon` instead.
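For context, here is a minimal sketch of how the two scipy calls differ (the distributions `pk` and `qk` are made up for illustration):

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import jensenshannon

# Hypothetical probability vectors for illustration
pk = np.array([0.1, 0.4, 0.5])
qk = np.array([0.2, 0.3, 0.5])

kl = stats.entropy(pk, qk)        # KL divergence: asymmetric, unbounded
js = jensenshannon(pk, qk) ** 2   # JS divergence: symmetric, bounded by ln(2)
```

Unlike KL, JS is symmetric in its arguments, so `jensenshannon(pk, qk)` equals `jensenshannon(qk, pk)`.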
```python
    "The maximum KL divergence for which to return success=True. If KL divergence is larger"
    "than the provided threshold, the test will return success=False."
)
INTERNAL_WEIGHT_HOLDOUT_DESCRIPTION = (
```
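As a hedged sketch of the success semantics that docstring describes (the function name here is hypothetical, not from the PR):

```python
def is_success(divergence: float, threshold: float) -> bool:
    # success=True only while the observed divergence stays at or below the threshold
    return divergence <= threshold
```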
The changes in the top part of this file (up to the functional change) are about removing the internal weight and tail weight holdouts. For KL divergence, we add these fake weights to prevent dividing by 0. We don't need to worry about this with JS divergence.
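A small sketch of why the holdout mattered for KL but not JS: when `qk` has zero mass where `pk` doesn't, KL diverges to infinity, while JS stays finite because each distribution is compared against their mixture (the distributions below are made up for illustration):

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import jensenshannon

pk = np.array([0.5, 0.5, 0.0])
qk = np.array([0.5, 0.0, 0.5])  # zero mass where pk has mass

kl = stats.entropy(pk, qk)       # inf: log(pk/qk) blows up at the zero bin
js = jensenshannon(pk, qk) ** 2  # finite: both sides compare against the mixture
```

This is the "dividing by 0" problem the holdout weights were working around.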
We would want to keep KL divergence and add JS divergence as a new file; I'm showing this as a replacement so we can see the diff between these files.
This was all AI generated as a demo of the work. There is one line (see comment below) where we compute JS instead of KL divergence. The other changes come from removing the weight-holdout parameters, which we don't need with JS divergence, or from rendering helper changes.
```python
    observed_value = None
else:
    observed_value = kl_divergence
js_divergence = jensenshannon(pk, qk) ** 2
```
This is the functional change where we compute JS divergence instead of KL divergence. Everything else either removes the code that added the "weight holdouts" to prevent dividing by 0, or touches the rendering helpers below.
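Note that `scipy.spatial.distance.jensenshannon` returns the JS *distance* (the square root of the divergence), which is why that line squares the result. With the default natural-log base, the divergence of completely disjoint distributions maxes out at ln(2) (toy distributions for illustration):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

pk = np.array([1.0, 0.0])
qk = np.array([0.0, 1.0])  # disjoint supports: maximally divergent

# squaring the distance recovers the divergence, here exactly ln(2)
js_div = jensenshannon(pk, qk) ** 2
```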
```python
return return_obj
```

```python
# ---- Rendering helpers ----
```
I didn't look at the changes in the rendering section.
invoke lint (uses ruff format + ruff check). For more information about contributing, visit our community resources.
After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!