Add AI FAQ page for Code Eval Tool #6071

Merged 1 commit on Jan 21, 2025
8 changes: 8 additions & 0 deletions docs/code-eval-tool.md
@@ -65,6 +65,14 @@ You can also have an **Ask AI** question as a criteria item in the checklist. Yo

![Ask AI criteria](/static/teachertool/ask-ai-criteria.png)

### ~hint

#### AI usage

The use of AI in criteria items is further explained in the [AI FAQ](/teachertool/ai-faq).

### ~

#### 6. Remove Criteria

A criteria item is removed using the **trash** button.
31 changes: 31 additions & 0 deletions docs/teachertool/ai-faq.md
@@ -0,0 +1,31 @@
# Microsoft MakeCode Code Evaluation Tool

## Responsible AI FAQ

### 1. What is the MakeCode Code Evaluation Tool?

The MakeCode Code Evaluation tool is an online tool that helps teachers understand and evaluate student block-based coding programs. In addition to its static analysis functionality, it offers an optional AI component that teachers can use to provide additional feedback and recommendations to students. The teacher can ask specific questions about one student project at a time (e.g. "Do the variables in this program have meaningful names?"), and the AI will respond with an answer and its reasoning.

### 2. What can the MakeCode Code Evaluation Tool do?

The MakeCode Code Evaluation tool sends the current student code and the teacher's question, along with some contextual prompt information, to DeepPrompt (a Microsoft Azure LLM service) and returns the resulting AI response to the user.
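
For readers curious about the shape of that exchange, here is a minimal TypeScript sketch of the request/response flow described above. The endpoint URL, interface names, and field names are illustrative assumptions, not the actual MakeCode or DeepPrompt API.

```typescript
// Hypothetical sketch of the send-code-and-question flow; all names
// and the endpoint are assumptions, not the real DeepPrompt API.
interface EvalRequest {
    studentCode: string;   // the current student program, serialized as text
    question: string;      // the teacher's question about the program
    context: string;       // contextual prompt information added by the tool
}

interface EvalResponse {
    answer: string;        // the AI's answer to the question
    reasoning: string;     // the AI's supporting reasoning
}

async function askAi(req: EvalRequest): Promise<EvalResponse> {
    const resp = await fetch("https://example.com/deepprompt/evaluate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req)
    });
    if (!resp.ok) {
        throw new Error(`AI request failed: ${resp.status}`);
    }
    return await resp.json() as EvalResponse;
}
```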

### 3. What is MakeCode Code Evaluation Tool’s intended use(s)?

The MakeCode Code Evaluation tool is intended to help teachers expedite the process of giving feedback on student programs.

### 4. How was the MakeCode Code Evaluation Tool evaluated? What metrics are used to measure performance?

The system was evaluated with 1000+ prompts from multiple sources to ensure the responses are grounded and relevant to the educator’s task of assessing student code. We evaluated accuracy with red teaming and expert review of responses.

### 5. What are the limitations of the MakeCode Code Evaluation Tool? How can users minimize the impact of the Code Evaluation Tool’s limitations when using the system?

The system only supports educational scenarios related to student code; it will not perform well for other scenarios or unrelated questions. When using this tool, educators should ask short, concise questions relating to the assessment of student code. Questions are limited to 5 per program and 1000 characters per question. The MakeCode Code Evaluation tool cannot provide direct scores or grades for student work.
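
As a concrete illustration of those limits, here is a minimal client-side check. The constant names and the function are hypothetical, included only to make the stated limits precise; they are not part of the actual tool.

```typescript
// Hypothetical encoding of the limits stated above; names are illustrative.
const MAX_QUESTIONS_PER_PROGRAM = 5;
const MAX_QUESTION_LENGTH = 1000; // characters

function canAskQuestion(askedSoFar: number, question: string): boolean {
    // Allow a question only if the per-program quota is not exhausted
    // and the question is non-empty and within the character limit.
    return askedSoFar < MAX_QUESTIONS_PER_PROGRAM
        && question.length > 0
        && question.length <= MAX_QUESTION_LENGTH;
}
```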

### ~reminder

#### Tool Beta

This tool is currently in Beta, and we value your feedback. Please click on the **Feedback** button to share your experiences and thoughts about the MakeCode Code Evaluation Tool.

### ~