The current approach in heavylight is that projections run when the instance is created: when `proj1 = MyModel(do_run=True, proj_len=10)` is run, the model is run and the results are stored in the cache.
Once the projection is run, users can access values via `proj1.<user_method>(t)` for individual values, `proj1.<user_method>.values` for an array, and `proj1.ToDataFrame()` as an optional way to pull all single-parameter values into a DataFrame (handy for debugging/viewing, as it is easy to copy into Excel). There is also a `sum` method on the cache which returns the total, e.g. `proj1.<user_method>.sum()`.
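The access pattern described above can be sketched roughly as follows — this is illustrative only, not heavylight's actual internals (the class name `MethodCache` and its wiring are assumptions):

```python
class MethodCache:
    """Illustrative stand-in for the per-method cache described above.

    Wraps a user method `func(t)`, pre-computes t = 0..proj_len-1,
    and exposes call / .values / .sum() access.
    """

    def __init__(self, func, proj_len):
        self._func = func
        # pre-compute and store the requested projection range
        self._store = {t: func(t) for t in range(proj_len)}

    def __call__(self, t):
        # individual value, computed on demand if outside the pre-run range
        if t not in self._store:
            self._store[t] = self._func(t)
        return self._store[t]

    @property
    def values(self):
        # all cached values, in time order
        return [self._store[t] for t in sorted(self._store)]

    def sum(self):
        # total across all cached values
        return sum(self.values)


# hypothetical usage, mirroring proj1.<user_method>(t) / .values / .sum()
premium = MethodCache(lambda t: 100.0, proj_len=3)
```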
The `proj_len` variable controls how much of the projection is pre-computed (from t=0 to t=proj_len-1). If the user requests a method result beyond this, the model will run through all the intermediate calculations and cache them.
E.g. `proj1.<user_method>(20)` would calculate a further 11 values and cache them (10, 11, ..., 20).
The rationale for pre-computing is that heavy recursion can overflow the Python call stack: a time-t value typically depends on t-1, so a first call at a large t would recurse all the way back to t=0.
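The pre-compute-then-extend behaviour can be sketched as a toy (this is not heavylight's code; the function names `run_model` and `extend` are hypothetical):

```python
def run_model(step, proj_len, cache=None):
    """Fill the cache forward from t=0, so no deep recursion occurs.

    `step(t, cache)` computes the value at time t, reading earlier
    values from the cache instead of calling itself recursively.
    """
    cache = {} if cache is None else cache
    for t in range(proj_len):
        if t not in cache:
            cache[t] = step(t, cache)
    return cache


def extend(step, cache, t):
    """On-demand extension: compute all intermediate values up to t."""
    run_model(step, t + 1, cache)
    return cache[t]


# toy projection: a fund value accumulating 5% per time step
def fund(t, cache):
    return 100.0 if t == 0 else cache[t - 1] * 1.05


cache = run_model(fund, proj_len=10)  # pre-computes t = 0..9
v20 = extend(fund, cache, 20)         # fills t = 10..20 on demand
```

Because the loop runs forward in t, each step only ever looks one entry back in the cache, so the stack depth stays constant regardless of how large t gets.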
I initially allowed the cache to be cleared; however, I found this risky (I use some proprietary software which doesn't always clear the cache correctly 😯), and instead decided that if a new projection is needed, you should just create a new instance.
If you are running a lot of scenarios, you will have to create many instances, but this will make memory consumption huge. So you will end up deleting them?
And the code is basically the same if you allow clearing the cache: instead of deleting an instance, you clear the cache?
LightModel
LightModel needs to clear the cache because it does a warmup run and then a real run. So it supports clearing the cache. If you want to have a uniform API between the two classes (maybe they should both inherit from some Abstract base class idk), then you will want to support clearing the cache.
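The warmup-then-real-run flow could be supported with a clearing hook along these lines — a minimal sketch, assuming a `clear_cache` method name and a dict-backed cache; it is not the actual LightModel implementation:

```python
class CachedModel:
    """Minimal model with a clearable per-instance cache."""

    def __init__(self):
        self._cache = {}

    def value(self, t):
        # memoised recursive definition: 5% growth per step from 100
        if t not in self._cache:
            self._cache[t] = 100.0 if t == 0 else self.value(t - 1) * 1.05
        return self._cache[t]

    def clear_cache(self):
        # reset, so warmup results don't pollute the real run
        self._cache.clear()


model = CachedModel()
model.value(5)            # warmup run
model.clear_cache()       # discard warmup results
result = model.value(5)   # real run starts from an empty cache
```

With a shared `clear_cache` on both classes (e.g. defined on a common abstract base), callers could treat the two model types uniformly.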
So I'd cast my vote on supporting the clearing of the cache, but also say that it isn't a huge deal.