
ML model to debias biased content #20

@demilolu

Description

Background on the problem the feature will solve/improved user experience

People who use TakeTwo might want not only to detect racially biased content but also to receive help debiasing it.

Describe the solution you'd like

Machine learning models that can debias content found to be racially biased.
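
One possible shape for this, as a minimal sketch: treat debiasing as a text-to-text rewriting task, where a sentence flagged by the TakeTwo detector is fed to a generative model that suggests an unbiased rewrite. The model name `t5-small` below is only a placeholder; an actual solution would need a checkpoint fine-tuned on pairs of biased/unbiased sentences.

```python
from transformers import pipeline

# Sketch: debiasing as text-to-text rewriting.
# "t5-small" is a placeholder; a real solution would use a model
# fine-tuned on (biased sentence, unbiased rewrite) pairs.
rewriter = pipeline("text2text-generation", model="t5-small")

flagged_sentence = "Example sentence that the TakeTwo detector flagged as biased."
result = rewriter("debias: " + flagged_sentence, max_length=64)

print(result[0]["generated_text"])  # suggested rewrite shown to the user
```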

Tasks

  • Confirm this is a real pain point for others
  • Gather datasets
  • Run experiments
  • TBD

Acceptance Criteria

Standards this issue must meet to be considered complete and ready for a pull request, e.g. exactly what the user should be able to do with this update, plus performance and security requirements as appropriate.
