ML model to detect malicious use of TakeTwo crowdsourced labeling #21

@demilolu

Description

Background on the problem the feature will solve/improved user experience

Because TakeTwo is an open-source solution built on crowdsourced data, there is potential for malicious use and malicious contributions.

Describe the solution you'd like

Develop an ML model (or models) that detects malicious use, such as (a rough sketch of one possible approach follows this list):

  • spamming the service with junk contributions
  • manipulating what is considered racist, either by making offensive racist terms appear less racist or by labeling benign, non-racist terms as racist, with the intent of making the API useless (i.e. classifying everything as racist)
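
A minimal sketch of one possible approach: treat each contributor's labeling behaviour as a feature vector and flag outliers for manual review before their labels affect the API. The feature names, example values, and use of scikit-learn's `IsolationForest` below are illustrative assumptions, not part of the existing TakeTwo codebase.

```python
# Minimal sketch, assuming contributor-level features can be derived from
# TakeTwo submissions. The features below are hypothetical:
#   - submissions per day
#   - fraction of labels that disagree with the current consensus
#   - fraction of benign terms the contributor marked as racist
import numpy as np
from sklearn.ensemble import IsolationForest

# Example training data: rows are contributors, columns are the features above.
X_train = np.array([
    [12,  0.05, 0.02],   # typical contributor
    [8,   0.10, 0.04],
    [15,  0.08, 0.03],
    [300, 0.90, 0.95],   # spam-like / adversarial contributor
])

# contamination is the assumed share of malicious contributors; tune on real data.
detector = IsolationForest(contamination=0.25, random_state=42)
detector.fit(X_train)

# predict() returns 1 for normal behaviour and -1 for anomalous behaviour;
# contributions from -1 contributors could be quarantined for manual review.
print(detector.predict([[10, 0.07, 0.03], [250, 0.85, 0.90]]))
```

This is only a starting point; a production model would likely also need text-level checks (e.g. a classifier over the submitted terms themselves) to catch the second kind of manipulation listed above.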

Tasks

Description of the development tasks needed to complete this issue, including tests.

Acceptance Criteria

Standards we believe this issue must reach to be considered complete and ready for a pull request, e.g. precisely what the user should be able to do with this update, performance requirements, security requirements, etc., as appropriate.
