Commit

docs: fixed missing footnote and expand risk definition
hbooth authored Feb 10, 2025
1 parent 2eca30a commit 8e1688a
Showing 2 changed files with 4 additions and 2 deletions.
README.md (2 changes: 1 addition & 1 deletion)
@@ -2,7 +2,7 @@

Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI).
Trustworthy AI is: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair - with harmful bias managed[^1].
-Dioptra supports the Measure function of the [NIST AI Risk Management Framework](https://nist.gov/itl/ai-risk-management-framework/) by providing functionality to assess, analyze, and track identified AI risks.
+Dioptra supports the Measure function of the [NIST AI Risk Management Framework](https://nist.gov/itl/ai-risk-management-framework/) by providing functionality to assess, analyze, and track identified AI potential benefits and negative consequences.

Dioptra provides a REST API, which can be controlled via an intuitive web interface, a Python client, or any REST client library of the user's choice for designing, managing, executing, and tracking experiments.
Details are available in the project documentation available at <https://pages.nist.gov/dioptra/>.
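The README context above mentions that Dioptra's REST API can be driven from the web interface, a Python client, or any REST client library of the user's choice. As a minimal sketch of that last option (assuming a local deployment, with endpoint paths and payload fields that are illustrative guesses rather than Dioptra's documented API), a generic HTTP client such as Python's `requests` might be used like this:

```python
# Illustrative sketch only: the base URL, routes, and payload fields below are
# assumptions, not Dioptra's documented API. Consult https://pages.nist.gov/dioptra/
# for the real interface.
import requests

BASE_URL = "http://localhost:5000/api"  # assumed address of a local deployment

# List experiments currently tracked by the platform (assumed route).
resp = requests.get(f"{BASE_URL}/experiments", timeout=30)
resp.raise_for_status()
for experiment in resp.json():
    print(experiment)

# Register a new experiment to track (assumed route and fields).
resp = requests.post(
    f"{BASE_URL}/experiments",
    json={"name": "image-classifier-robustness"},
    timeout=30,
)
resp.raise_for_status()
print("Created:", resp.json())
```

The actual routes, ports, and request bodies should be taken from the project documentation at <https://pages.nist.gov/dioptra/>.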
docs/source/overview/executive-summary.rst (4 changes: 3 additions & 1 deletion)
@@ -16,7 +16,9 @@
.. https://creativecommons.org/licenses/by/4.0/legalcode
-Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). Trustworthy AI is: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair - with harmful bias managed1. Dioptra supports the Measure function of the NIST AI Risk Management Framework by providing functionality to assess, analyze, and track identified AI risks.
+Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). Trustworthy AI is: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair - with harmful bias managed\ [#f1]_\ . Dioptra supports the Measure function of the NIST AI Risk Management Framework by providing functionality to assess, analyze, and track identified AI potential benefits and negative consequences.
+
+.. [#f1] https://doi.org/10.6028/NIST.AI.100-1
Use Cases
---------
