Discrete Choice Models and Utility Estimation #543
Is this related to RL? We have https://www.pymc.io/projects/examples/en/latest/case_studies/reinforcement_learning.html If so, would it be important that any new example discuss the similarities/differences?
No, I don't think it is related to reinforcement learning. It's not about optimising choice, but about estimating the latent utility function that determines choices.
Hmm, I mean RL is a procedure to figure out "the utility function" of a problem. I guess the difference when modelling people's choices is that in an RL setting you have snapshots of people's learning/choice trajectories, instead of the "final choices" once learning is concluded (or the utility function is fixed)?
I mean, I think at the very least they are very different traditions. Discrete choice models are typically used to motivate policy choices, e.g. famously the introduction of the BART transport system. Yes, this approach is definitely after the fact... but I'll be able to say more once I've read up on the techniques. Will circle back with a clearer motivation.
I believe the two topics may be very distinct (certainly in background). I was just fishing for connections in case there were obvious ones :) Mostly being curious.
@ricardoV94 spent a bit more time looking into this and experimenting. Added a draft pull request above (still experimenting) and found Stan implementations, referenced in the draft PR. I think there is something interesting here. Have a very simple model working on fake data, with the structure sketched below. Need to write up the motivation for these models more, but I think they are quite distinct from anything in the examples gallery at the moment.
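The actual snippet from the draft PR isn't preserved in this thread; as a hypothetical sketch, a minimal multinomial-logit discrete choice model on fake data might look like the following in PyMC (all names, shapes, and priors here are illustrative assumptions, not the draft-PR code):

```python
import numpy as np
import pymc as pm

# Fake data: N agents each pick one of J alternatives, each alternative
# described by K observed attributes (e.g. price, a display indicator).
# These sizes and the data-generating weights are made up for illustration.
rng = np.random.default_rng(42)
N, J, K = 500, 3, 2
X = rng.normal(size=(N, J, K))      # alternative-specific covariates
beta_true = np.array([-1.0, 0.5])   # latent "taste" weights to recover
utility = X @ beta_true             # deterministic utility, shape (N, J)
p_true = np.exp(utility) / np.exp(utility).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=p) for p in p_true])  # observed choices

with pm.Model() as mnl:
    beta = pm.Normal("beta", 0, 1, shape=K)   # priors on utility weights
    u = pm.math.dot(X, beta)                  # latent utility, shape (N, J)
    # Softmax link: i.i.d. Gumbel errors imply multinomial-logit choice probs
    p = pm.math.exp(u - pm.math.logsumexp(u, axis=1, keepdims=True))
    pm.Categorical("choice", p=p, observed=y)
    idata = pm.sample()
```

The posterior over `beta` is the estimated utility function; a counterfactual (e.g. a price change) then amounts to recomputing the choice probabilities under a modified `X`.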
Interesting to see a pytorch implementation here too: https://twitter.com/GSBsiLab/status/1671534654019469312?t=3N8mg2yK4CK7Vh3slodJDw&s=19
Merged PR commit history:

* [Discrete Choice #543] early commit exploring model specs
* [Discrete Choice #543] added hierarchical covariance structure
* [Discrete Choice #543] slow model run crackers data
* [Discrete Choice #543] trying a simpler approach
* [Discrete Choice #543] trying a simpler approach again
* [Discrete Choice #543] working example with heating df
* [Discrete Choice #543] tidying model iterations
* [Discrete Choice #543] adding some explanatory text
* [Discrete Choice #543] updated plots and headings
* [Discrete Choice #543] added cracker choice and individual random effects
* [Discrete Choice #543] added prior constraint on price
* [Discrete Choice #543] added prior constraint on display and feat
* [Discrete Choice #543] adding reference and counterfactual policy change
* [Discrete Choice #543] fixed ref
* [Discrete Choice #543] tidied up final plots
* [Discrete Choice #543] tidied some writing
* [Discrete Choice #543] fixed some writing, added exploratory plots
* [Discrete Choice #543] more tidying
* [Discrete Choice #543] adjusted with Ricardo's comments. Used indexing for final model instead of for loop.
* [Discrete Choice #543] fixed label ordering in cracker choice plot. Tidying
* [Discrete Choice #543] fixed data imports try except and added dependencies
* [Discrete Choice #543] added include extra installs markdown
* [Discrete Choice #543] tidied xarray dims usage as per comments
* [Discrete Choice #543] updated metadata with myst ignore
* [Discrete Choice #543] updated jupytext toml
* [Discrete Choice #543] updated final plot and adjusted with feedback
* [Discrete Choice #543] swapped dataframe for xarray

Signed-off-by: Nathaniel <[email protected]>
Discrete Choice Models and Utility Estimation:
Why should this notebook be added to pymc-examples?
In economics there is a tradition of using discrete choice models over many agents and products to infer properties of the agents' utility functions and, abstracting over all the agents, to make predictions about product share within the market.
These have traditionally been modelled, where closed-form solutions allowed, with logistic-type and ordinal choice models. I cover ordinal regression in #533, but there is a separate stream of research dealing with nested logit and GEV-style discrete choice models that I think could be more easily modelled using a Bayesian framework.
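For context, a textbook formulation (not spelled out in this issue): these models start from a random utility specification in which agent $i$'s utility for alternative $j$ is

$$U_{ij} = x_{ij}^\top \beta + \epsilon_{ij},$$

and assuming i.i.d. Gumbel (type-I extreme value) errors $\epsilon_{ij}$ yields the multinomial logit choice probabilities

$$P(y_i = j) = \frac{\exp(x_{ij}^\top \beta)}{\sum_{k=1}^{J} \exp(x_{ik}^\top \beta)}.$$

Nested logit and GEV models relax the independence assumption on the $\epsilon_{ij}$, allowing correlated errors within nests of similar alternatives.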
Not sure of the details at the minute, but I plan on looking into Kenneth Train's Discrete Choice Methods with Simulation: https://www.amazon.co.uk/Discrete-Choice-Methods-Simulation-Kenneth/dp/0521816963/ref=sr_1_1?crid=3DBCCE669W43Z&keywords=kenneth+train&qid=1681378209&sprefix=kenneth+train%2Caps%2C70&sr=8-1&asin=0521816963&revisionId=&format=4&depth=1
I think it could be a useful example, and in particular one where the Bayesian approach is far easier to understand than the traditional frequentist account (I'm hoping). For comparison: https://pyblp.readthedocs.io/en/stable/background.html
Suggested categories: