Commit 2eaa29c

add AI FAQ page (#6071)
1 parent 8ddfcea commit 2eaa29c

2 files changed: 39 additions & 0 deletions

docs/code-eval-tool.md

Lines changed: 8 additions & 0 deletions
@@ -65,6 +65,14 @@ You can also have an **Ask AI** question as a criteria item in the checklist. Yo
 
 ![Ask AI criteria](/static/teachertool/ask-ai-criteria.png)
 
+### ~hint
+
+#### AI usage
+
+The use of AI in criteria items is further explained in the [AI FAQ](/teachertool/ai-faq).
+
+### ~
+
 #### 6. Remove Criteria
 
 A criteria item is removed using the **trash** button.

docs/teachertool/ai-faq.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
# Microsoft MakeCode Code Evaluation Tool

## Responsible AI FAQ

### 1. What is the MakeCode Code Evaluation Tool?

The MakeCode Code Evaluation tool is an online tool that helps teachers understand and evaluate student block-based coding programs. In addition to its static analysis functionality, it offers an optional AI component that teachers can use to give students additional feedback and recommendations. The teacher can ask specific questions about one student project at a time (e.g. "Do the variables in this program have meaningful names?"), and the AI will respond with an answer and its reasoning.

### 2. What can the MakeCode Code Evaluation Tool do?

The MakeCode Code Evaluation tool sends the current student code and the teacher's question, along with some contextual prompt information, to DeepPrompt (a Microsoft Azure LLM service), and returns the resulting AI response to the user.

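To make the data flow above concrete, here is a minimal TypeScript sketch of such a request/response exchange. The `AiEvaluationRequest`/`AiEvaluationResponse` shapes, the `evaluateWithAi` function, and the `/api/ai-evaluation` endpoint are illustrative assumptions for this FAQ, not the actual DeepPrompt API.

```typescript
// Illustrative sketch only: the payload shape, endpoint, and names are
// assumptions, not the actual DeepPrompt API.
interface AiEvaluationRequest {
    projectCode: string;      // the student's current program
    teacherQuestion: string;  // e.g. "Do the variables in this program have meaningful names?"
    context: string;          // contextual prompt information added by the tool
}

interface AiEvaluationResponse {
    answer: string;           // the AI's answer to the teacher's question
    reasoning: string;        // the reasoning behind that answer
}

async function evaluateWithAi(req: AiEvaluationRequest): Promise<AiEvaluationResponse> {
    // Hypothetical endpoint, shown only to make the round trip concrete.
    const resp = await fetch("/api/ai-evaluation", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req)
    });
    if (!resp.ok) {
        throw new Error(`AI evaluation request failed: ${resp.status}`);
    }
    return (await resp.json()) as AiEvaluationResponse;
}
```
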
### 3. What is MakeCode Code Evaluation Tool’s intended use(s)?

The MakeCode Code Evaluation tool is intended to help teachers expedite the process of giving feedback on student programs.

### 4. How was the MakeCode Code Evaluation Tool evaluated? What metrics are used to measure performance?

The system was evaluated with 1000+ prompts from multiple sources to ensure the responses are grounded and relevant to the educator’s task of assessing student code. We evaluated accuracy with red teaming and expert review of responses.

### 5. What are the limitations of the MakeCode Code Evaluation Tool? How can users minimize the impact of the Code Evaluation Tool’s limitations when using the system?

The system only supports educational scenarios related to student code and will not perform well for other scenarios or unrelated questions. When using this tool, educators should ask short, concise questions relating to the assessment of student code. Questions are limited to 5 per program and 1000 characters per question. The MakeCode Code Evaluation tool cannot provide direct scores or grades for student work.

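The question limits called out above (5 questions per program, 1000 characters per question) can be pictured with a small TypeScript sketch; the constant and function names here are illustrative, not taken from the tool's source.

```typescript
// Sketch of the documented limits; names are illustrative assumptions.
const MAX_QUESTIONS_PER_PROGRAM = 5;
const MAX_QUESTION_LENGTH = 1000; // characters

/** Returns an error message if the question cannot be asked, otherwise undefined. */
function checkQuestionLimits(question: string, questionsAlreadyAsked: number): string | undefined {
    if (questionsAlreadyAsked >= MAX_QUESTIONS_PER_PROGRAM) {
        return `Only ${MAX_QUESTIONS_PER_PROGRAM} questions are allowed per program.`;
    }
    if (question.length > MAX_QUESTION_LENGTH) {
        return `Questions are limited to ${MAX_QUESTION_LENGTH} characters.`;
    }
    return undefined;
}
```
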
### ~reminder

#### Tool Beta

This tool is currently in Beta, and we value your feedback. Please click on the **Feedback** button to share your experiences and thoughts about the MakeCode Code Evaluation Tool.

### ~
