ML Lab Narrative
Msg: I just got a complaint from a past applicant named {name} asking why they were rejected. Can you look into it?
Interaction:
- when you click on the response, the inspector panel shows up
- what you see is the dataset inspector as we have it now, with the grid of accepted and rejected candidates, and the CV in the middle of the candidate to be inspected
- the candidate is visibly qualified for the job, all CV features are pretty high, but they still get rejected (to make a clear hint that the machine is making unfair decisions)
- in the bottom right corner, there is a "respond to message" box (see image). When you click on the box, you see the previous message (I just got a complaint from a past applicant...), and you also get a chance to respond to the message (e.g. "I don't know how this decision was made").
- after you respond to the message, you're taken back to the ML Lab view, and the narrative continues
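The respond-and-continue flow above could be sketched like this. This is a minimal sketch: the store shape mirrors the state.js module shown at the end of this document, but the `respondToMessage` helper and the stage-increment convention are assumptions, not existing code.

```javascript
// Store shape mirrors state.js below; 0 = first investor message.
const store = { 'ml-narrative-stage': 0 };
const set = (prop, value) => { store[prop] = value; };
const get = (prop) => store[prop];

// Hypothetical handler: record the player's reply, then advance the
// narrative one stage so the ML Lab view resumes with the next message.
function respondToMessage(responseText) {
  const next = get('ml-narrative-stage') + 1;
  set('ml-narrative-stage', next);
  return { response: responseText, nextStage: next };
}
```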
Msg: Hey, some reporters are talking about hiring bias, but you’re off the hook since it’s all automated now, right?
Interaction:
- stays as we have it now, we show the info tooltip about how machine decisions don't happen in a vacuum, and that they replicate historical trends
Msg: I’m hearing that you may be involved with this bias story. Reporters are asking for transparency!
Interaction:
- here, since the investors are getting nervous, the copy should say something like "let's run the stats to see if the algorithm is really making skewed decisions". This message sets the stage for the next inspector panel
- just like with the first inspector panel, it opens up automatically when you acknowledge the investor's message. This time, we see a grid of candidates like before, but in the middle column we show some stats:
- ratios of accepted/rejected applicants evaluated by you
- ratios of accepted/rejected applicants evaluated by the machine
These stats are meant to reinforce the idea that the bias is present (the ratio statistic should show that yellow people are being favored).
- again, you click on the message box to respond to the investor (e.g. "Indeed, the algorithm is favoring yellow people")
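The ratio stats could be computed along these lines. This is a hedged sketch: the `evaluatedBy` and `accepted` field names are assumptions for illustration, not the actual candidate record shape.

```javascript
// Tally accepted/rejected counts per evaluator ('you' vs 'machine'),
// so the inspector's middle column can render the two ratio stats.
function acceptanceRatios(candidates) {
  const ratios = {};
  for (const c of candidates) {
    const bucket = ratios[c.evaluatedBy] ||
      (ratios[c.evaluatedBy] = { accepted: 0, rejected: 0 });
    if (c.accepted) bucket.accepted++;
    else bucket.rejected++;
  }
  return ratios;
}
```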
=== ADDITION CHECK HERE =================
- in the analysis step we want to point out the difference between the data that the machine was trained on vs what we are evaluating now (to show the initial dataset was biased due to historical reasons)
- as a result, showing the analysis cards in the current data inspector doesn't make too much sense, since we can only see the currently evaluated candidates and I don't think we can refer back to the people you hired out of nowhere.
- I thought putting the analysis into a more "transition stage" type of conversation makes more sense.
- you'd arrive in the HR laptop view, which now also has a "machine-evaluated-candidates.csv" file in the background.
- in the conversation you're telling the SE:
YOU: "Hey, we need to figure out what's wrong with your algorithm, can you analyze the hires it made?"
SE: "Can you send me the decisions it has made and all the candidate information?"
[upload machine-evaluated-candidates.csv]
[returns an analytics card showing a small difference between yellow and blue candidates overall, but a huge discrepancy in the color of hired people]
YOU: "WHAT? How is this possible?"
| "Why did the algorithm learn to discriminate blue people?"
SE: "Remember how the algorithm works?"
YOU: "Uhm, I sent you my decisions and the machine just mimics me?"
| "I don't care, fix it!"
SE: "Yeah, if we want to fix it, we need to look at the data that you sent me, because that's what it was learning from. Can you send it again?"
[upload previously uploaded file]
[returns an analytics card showing blue people underperforming in the earlier data]
YOU: "When we trained the algorithm, we didn't have many good blue candidates..."
| "The data is wrong."
SE: "You should have understood your data! I just did what you asked me!"
YOU: "But you know how these things work!"
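The analytics card in this conversation could be backed by something like the sketch below: acceptance rate per candidate color, which surfaces the yellow/blue hiring discrepancy. The `color` and `accepted` field names are assumptions for illustration.

```javascript
// Compute acceptance rate per candidate color from the uploaded records,
// e.g. the rows of machine-evaluated-candidates.csv after parsing.
function acceptanceRateByColor(candidates) {
  const counts = {};
  for (const { color, accepted } of candidates) {
    const c = counts[color] || (counts[color] = { total: 0, accepted: 0 });
    c.total++;
    if (accepted) c.accepted++;
  }
  const rates = {};
  for (const [color, c] of Object.entries(counts)) {
    rates[color] = c.accepted / c.total;
  }
  return rates;
}
```

A large gap between `rates.yellow` and `rates.blue` is exactly what the SE's card is meant to visualize.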
==========
The bias story is heating up; the investor is getting panicky and is looking for answers and fixes.
Msg[example]: We need to fix this now! Can you change the way the machine works?
Interaction:
- info tooltip about the black box problem: it's hard to trace back the algorithm's decisions, so it's tough to quickly change how it works
Msg: Hey, you just got sued for hiring discrimination. All the investors are pulling out! What on earth went wrong?
Interaction:
- end game
When showing the dataset inspector, we need to track which stage in the narrative we're currently in, because we have to display the message box with response options. For that, you can use the simple state interface I wrote. It has a store, as well as `get` and `set` functions, so you can quickly set and retrieve state values without having to pass config variables to the dataset inspector constructor.
- state.js (in ~/public/game/controllers/common/state)
const store = {
    'ml-narrative-stage': 0,
};

// Quick setters/getters, so callers don't need config variables.
export const set = (prop, value) => store[prop] = value;
export const get = (prop) => store[prop];
- ml narrator
import * as state from '~/public/game/controllers/common/state';
...
state.set('ml-narrative-stage', 2);
...
new DatasetInspector();
- dataset inspector
import * as state from '~/public/game/controllers/common/state';
...
copy = txt.ml[state.get('ml-narrative-stage')]
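For reference, `txt.ml` could be shaped as a stage-indexed copy table. This is entirely illustrative: the real strings live in the game's text assets, and the message/response field names are assumptions.

```javascript
// Hypothetical copy table keyed by narrative stage; the dataset inspector
// looks up its message box copy via the current 'ml-narrative-stage'.
const txt = {
  ml: {
    0: {
      message: 'I just got a complaint from a past applicant...',
      responses: ["I don't know how this decision was made"],
    },
    2: {
      message: 'Reporters are asking for transparency!',
      responses: ['Indeed, the algorithm is favoring yellow people'],
    },
  },
};

const stage = 2; // would come from state.get('ml-narrative-stage')
const copy = txt.ml[stage];
```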