diff --git a/LLM_LEGAL.md b/LLM_LEGAL.md
index d5f8950..bff2d15 100644
--- a/LLM_LEGAL.md
+++ b/LLM_LEGAL.md
@@ -53,6 +53,10 @@ If a company put sufficient efforts into filtering, cleaning, and synthesizing t
 * [Microsoft Responsible AI Standard](https://www.microsoft.com/en-us/ai/principles-and-approach/) and see "Safety-first model design" in [Introducing Phi-3: Redefining what’s possible with SLMs](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/)
 * [Our responsible approach to Meta AI and Meta Llama 3](https://ai.meta.com/blog/meta-llama-3-meta-ai-responsibility/)
 
+And here is a leaderboard that ranks how "uncensored" (in other words, "unsafe") models are:
+
+* [Uncensored General Intelligence Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
+
 #### Consider using a safeguard
 
 There are many ways to safeguard LLM-genrated contents, such as constraining the format and setting up text filter. Meta even published a [Llama Guard 2](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-2/) model to identify issues (e.g., privacy, intellectual property) in user input and generated output, and yes it has a [GGUF version](https://huggingface.co/neopolita/meta-llama-guard-2-8b-gguf/tree/main) that can be used in inference.
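
The safeguard idea mentioned in the paragraph above could look roughly like the following sketch, which runs the Llama Guard 2 GGUF as an inference-time filter via llama-cpp-python. This is not part of the patch itself: the model file name and the prompt wording are assumptions, and the authoritative prompt template and category list are in Meta's Llama Guard 2 model card.

```python
# A minimal sketch (assumptions noted below) of using the Llama Guard 2 GGUF
# as an inference-time safeguard via llama-cpp-python. The model file name and
# the prompt wording are assumptions; consult Meta's Llama Guard 2 model card
# for the official prompt template and safety categories.
from llama_cpp import Llama

# Load a locally downloaded GGUF file (the path and quantization name are hypothetical).
guard = Llama(model_path="./meta-llama-guard-2-8b.Q4_K_M.gguf", n_ctx=4096, verbose=False)

def moderate(user_message: str, assistant_reply: str) -> str:
    """Ask the guard model whether a conversation turn looks safe or unsafe."""
    prompt = (
        "Task: Check whether the conversation below contains unsafe content "
        "(e.g., privacy violations, intellectual property issues).\n\n"
        f"User: {user_message}\n"
        f"Agent: {assistant_reply}\n\n"
        "Answer 'safe' or 'unsafe', followed by the violated category if any.\n"
        "Answer:"
    )
    result = guard(prompt, max_tokens=32, temperature=0.0)
    return result["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(moderate("Summarize this article for me.", "Sure, here is a summary: ..."))
```

A wrapper like this can sit between the main model and the user: if the guard's verdict starts with "unsafe", the application can withhold or rewrite the reply instead of returning it.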