Upload results for 1DP project #40725
Conversation
Pull Request Overview
This PR adds support for uploading evaluation results for the OneDP project by introducing a new logging function and updating related client calls.
- Introduces _log_metrics_and_instance_results_onedp to handle OneDP-specific logging.
- Branches _evaluate based on the type of azure_ai_project to select the appropriate logging approach.
- Adds the TokenScope enum and updates EvaluationServiceOneDPClient to accept metrics.
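The type-based branch described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `dispatch_logging` and the two stub loggers are hypothetical stand-ins for `_evaluate`'s internal dispatch and the real functions in `_utils.py`.

```python
from typing import Any, Dict, Optional, Union

# Hypothetical stand-ins for the two loggers referenced in the PR; the real
# implementations live in azure/ai/evaluation/_evaluate/_utils.py.
def _log_metrics_and_instance_results_onedp(metrics, results, endpoint):
    return f"onedp:{endpoint}"

def _log_metrics_and_instance_results(metrics, results, project):
    return f"classic:{project['project_name']}"

def dispatch_logging(
    azure_ai_project: Union[str, Dict[str, Any], None],
    metrics: Dict[str, float],
    results: Any,
) -> Optional[str]:
    # A plain string is treated as a OneDP project endpoint; a dict is the
    # classic subscription/resource-group/project scope.
    if isinstance(azure_ai_project, str):
        return _log_metrics_and_instance_results_onedp(metrics, results, azure_ai_project)
    if azure_ai_project is not None:
        return _log_metrics_and_instance_results(metrics, results, azure_ai_project)
    return None  # no project configured: skip remote logging
```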
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
File | Description |
---|---|
sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_utils.py | Adds the new OneDP logging function (_log_metrics_and_instance_results_onedp). |
sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluate/_evaluate.py | Updates _evaluate to call the OneDP logging function when azure_ai_project is a string. |
sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_constants.py | Introduces the TokenScope enum for token management. |
sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_common/evaluation_onedp_client.py | Modifies create_evaluation_result to accept a new metrics parameter and uses create_or_update_version. |
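For context on the `TokenScope` addition in `_constants.py`: a string-valued enum is a common pattern for token scopes, since members compare equal to their string value and can be passed straight to `credential.get_token(scope)`. The member names and scope URLs below are illustrative assumptions, not taken from the PR.

```python
from enum import Enum

class TokenScope(str, Enum):
    # Scope strings are illustrative; check _constants.py for the actual values.
    DEFAULT_AZURE_MANAGEMENT = "https://management.azure.com/.default"
    COGNITIVE_SERVICES = "https://cognitiveservices.azure.com/.default"
```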
@@ -22,7 +22,7 @@ def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCr
             **kwargs,
         )

-    def create_evaluation_result(self, *, name: str, path: str, version=1, **kwargs) -> None:
+    def create_evaluation_result(self, *, name: str, path: str, version=1, metrics: Dict[str, int]=None, **kwargs) -> EvaluationResult:
The 'metrics' parameter has a default value of None but is annotated with Dict[str, int]. Consider updating the type hint to Optional[Dict[str, int]] or using an empty dict as the default to avoid potential type issues.
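The fix the comment suggests can be sketched as below. The return type is simplified to a plain dict for the sketch (the real method returns `EvaluationResult`); normalizing `None` inside the body is preferred over a `{}` default, which would be a shared mutable default.

```python
from typing import Dict, Optional

def create_evaluation_result(*, name: str, path: str, version: int = 1,
                             metrics: Optional[Dict[str, int]] = None,
                             **kwargs) -> dict:
    # Optional[...] matches the None default; normalize it here rather than
    # using a mutable {} default argument shared across calls.
    metrics = metrics if metrics is not None else {}
    return {"name": name, "path": path, "version": version, "metrics": metrics}
```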
API change check: API changes are not detected in this pull request.
…nts.py Co-authored-by: Copilot <[email protected]>
Force-pushed from 0b696cb to dc5a3a5.
Description
PR to upload evaluation results with a 1DP project. Not tested yet; needs evaluators working with 1RP to validate.