enable privacy evaluation for models trained externally #23

Open
wants to merge 2 commits into main

Conversation

mohsinehajar (Member)

This PR enables the framework to support evaluation of models trained outside Guardian AI. The changes affect only the privacy estimation component and do not impact other parts of the system.

With this update, users can evaluate externally trained models with Guardian AI without retraining them inside the framework. This adds flexibility while remaining compatible with the existing evaluation pipeline.
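As an illustration of the intended workflow, here is a minimal sketch of how an externally trained scikit-learn classifier might be wrapped for Guardian AI's privacy estimation. The `PretrainedTargetModel` class, its constructor argument, and the no-op `train_model` override are assumptions made for illustration; the exact interface introduced by this PR may differ.

```python
# Minimal sketch (hypothetical names noted below): wrap an externally
# trained scikit-learn model so Guardian AI's privacy estimation can
# evaluate it without retraining. PretrainedTargetModel and the exact
# TargetModel hooks overridden here are assumptions, not the PR's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

from guardian_ai.privacy_estimation.model import TargetModel


class PretrainedTargetModel(TargetModel):
    """Adapts an already-fitted estimator to the TargetModel interface."""

    def __init__(self, fitted_model):
        # Store the fitted estimator before the base class asks for it.
        self.fitted_model = fitted_model
        super().__init__()

    def get_model(self):
        # Hand back the pre-fitted estimator instead of building a new one.
        return self.fitted_model

    def get_model_name(self):
        return "pretrained_external_model"

    def train_model(self, x_values, y_values):
        # No-op: the model was already trained outside Guardian AI.
        return self.model


# Train anywhere, then pass the fitted model to the evaluation pipeline.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
external_clf = RandomForestClassifier(random_state=0).fit(X, y)
target_model = PretrainedTargetModel(external_clf)
```

From here, `target_model` would be handed to the privacy estimation attacks exactly like a model trained inside the framework, which is the compatibility with the existing pipeline that the description above promises.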

oracle-contributor-agreement bot added the OCA Verified label (all contributors have signed the Oracle Contributor Agreement) on Mar 19, 2025
mohsinehajar changed the title from "feat: enable privacy evaluation for models trained externally" to "enable privacy evaluation for models trained externally" on Mar 19, 2025
mohsinehajar marked this pull request as draft on March 19, 2025 at 12:09
mohsinehajar marked this pull request as ready for review on March 19, 2025 at 12:10