Short-Form Writing Creator + Analyzer
AI features integrated into Macmillan's Achieve courseware that help instructors refine their prompts when creating assessments and grade student responses in a streamlined way.
Work Type: Openfield Work Summer + Fall 2025 | UI/UX
Applications: Figma
Project Summary
Problem
Instructors often spend too much time drafting prompts for their short-form writing assessments. Once they finalize a prompt, they find grading each student response tedious and slow.
Process
As one of two designers on this project, I followed a process that included:
Sketching out multiple design concepts during ideation
Meeting weekly with the Macmillan product team to gather feedback on early-stage designs, informing iterations later on
Multiple rounds of design iterations with feedback from internal and client teams
Prototyping final wireframes for validation testing
Impact
Both the prompt creator and the student response analyzer are now publicly available in the Achieve courseware after passing validation testing. Instructors have found that the tools streamline grading in large classes and help them create meaningful assessments quickly.
Intro
Problem
Instructors Spend Too Much Time Drafting Their Assessment Prompt and Grading Hundreds of Student Responses
Achieve instructors teach multiple classes with 100+ students. They don't have much time to create assessment prompts and grade their students' short-form writing responses, and they spend a long time assessing whether each student meets the assessment rubric criteria.
Solution
Implement Tools That Streamline the Assessment Prompt Creation and Grading Process
Macmillan specifically wanted to implement AI tools that help instructors write and refine their assessment prompts and analyze student responses to speed up the grading process. These tools are optional, but when enabled they help instructors quickly create prompts with confidence and grade efficiently.
The Macmillan Learning Team
Product Manager
Developer and Team Lead
Developer
My Responsibilities
Sketched lo-fi designs and evolved them through multiple rounds of design iterations
Aligned with the Macmillan team weekly to gather feedback and set soft and hard deadlines to stay on track
Upgraded medium-fidelity wireframes to high-fidelity designs, using Macmillan’s new brand identity
Finalized wireframes and prototyped designs in Figma for handoff to the client team for validation testing
Planning
Assessing the Current Achieve Screens
We looked at the current Achieve assessment creation and student responses screens to see where we could implement the AI tools.
For the prompt creator, the tool should be visually connected to the Question Prompt field, because that's the section where instructors create their prompts. For the analyzer, the tool should be implemented where the student responses appear, or could even be its own column within the student response list.
Prompt Creator Ideation + Refinement
Early Exploration
We began this project with the prompt creator, since that's what instructors interact with first when creating a short-form writing assessment. Inspired by Gemini's and Gmail's AI writing tools, we experimented with a box or section where instructors could type in a prompt idea to generate suggestions. At first, we considered having criteria to fill out, such as complexity or prompt length, but decided that would only add time and effort for instructors, who need to move fairly quickly.
Continued Ideation
We did not like the large AI tool box we drafted earlier, so we tried a more subtle approach. Like many AI tools, a simple button below the user's prompt could work. We also added a "Help me Write" button to assist instructors starting from scratch, but that option could easily be confused with the "Tips for writing a good prompt" link in the top right, which simply offers guidelines. We did like the direction of the "Prompt Focus" button, which gives instructors options for enhancing their existing prompt.
We experimented with how the generated options should be displayed. A "multiple choice" style made the tool too large and would fatigue users reading through every generation. We settled on a single generation that could be rerolled or enhanced further.
Refinement
Eventually we found that the chips made it seem like the user could select only one refinement feature at a time, when we wanted users to freely enhance their prompt with all options available at once. We went back to the dropdown menu, with cleaner UI and colors.
Analyzer Ideation + Refinement
Early Exploration
While iterating on the prompt creator, we also sketched what the AI analyzer tool would look like within the student responses view. This tool would suggest whether a student did or did not meet the rubric criteria, giving the instructor a quick starting point for grading. We experimented with adding an extra "Criteria Items" column, and while we found the idea exciting, horizontal scrolling is not very user-friendly. It was also a feature Macmillan wanted to avoid due to the difficulty of developing it.
Continued Ideation and Refinement
We still wanted to experiment with adding the Rubric Performance as a table column, so we tried shortening it. Even so, the Rubric Performance criteria items section was too long.
Hallway testing confirmed that the long, expandable Rubric Panel column was too cumbersome. We decided to place the criteria chips under each student response; color-coding which items were met helps with quick scanning. We also designed the filtering behavior: instructors can quickly filter by a specific criteria chip, met or not, and apply batch scoring to the filtered students.
Individual Student Grading Iterations and Refinement
After ideating on the full list of student responses, we moved on to the individual student response view. Here, we had much more room to show the rubric criteria items.
End Concepts
Final Designs: Prompt Creator + Student Response Analyzer Tools
After rounds of ideation and iteration, we settled on final designs for both the prompt creator/enhancer and the student response analyzer for short-form writing assignments. Working with the Macmillan research team, we created prototypes for validation testing with instructors in higher education. The testing was a success: the AI features received largely positive responses, and both tools were shortly pushed to a beta phase in the Macmillan courseware.
Impact
Prompt Creator / Enhancer
Of the 12 recruited instructors who tested this tool, 11 found it helpful for saving time.
The one instructor who did not approve of the AI tool acknowledged that, while it was not relevant to them, it could help others.
Instructors found the tool's design easy to understand, including the ability to return to previous generations.
This tool is currently in a public beta phase.
Student Response Analyzer
The 8 recruited instructors liked seeing the color-coded criteria tags on each student response; the tags made it very easy to see at a glance how many students did or didn't meet the rubric criteria.
The instructors liked the AI reasoning and the ability to override its decision on whether a student met a criterion. Ultimately, this keeps instructors in control of grading.
Like the prompt enhancer, the student response analyzer is also in public beta.
Takeaways
Achieve's prompt creation and student response views had not been updated in nearly a decade; it was imperative to improve the instructor experience, incorporating AI tools where relevant. Based on validation testing, the update was well received. It's important to take current trends into consideration when updating user experiences.
While the AI tools were considered helpful, a few instructors did not want to use AI at all; it was equally imperative not to make the tools a required part of the experience. Some users will want them, and some won't.

