How Fruition redesigned assessments with Vocina

Learn how we created, tested and implemented an oral AI agent with Fruition


Written by

Superpower: Making interesting meals for dinner every. single. night.

Fixation: Finding the “so what” in complex information


Zoe’s work at Scarlatti spans AI development. She is involved in developing our AI agent 'Vocina', focusing on understanding how evidence can be translated into practical, real-world value.

A core part of her role is making sense of complex information, from qualitative research through to market analysis, to identify what matters most and why. She enjoys working at the intersection of research and decision-making, helping ensure insights are clear, relevant, and actionable.

Previously, Zoe managed the insights and analytics function at Waihanga Ara Rau (the Workforce Development Council). She has also worked as a university lecturer in Kuwait and Oman, teaching business and marketing subjects at undergraduate and postgraduate levels.

Zoe holds a PhD in Marketing and has a longstanding interest in research, sustainability, and understanding what drives meaningful outcomes.

What we were tasked with

In 2025, the Food and Fibre Centre for Vocational Excellence (FFCoVE) commissioned Scarlatti to research the use of AI in education and test the viability of structured oral AI-driven assessment in partnership with Fruition. This case study shares what happened next, and how Scarlatti’s Vocina platform emerged from that work.

(You can find all of the AI in Education articles on our 'Case studies' page.)

Rethinking what assessments should capture

Fruition was interested in exploring how AI could help them develop assessments that genuinely captured learners’ understanding, not their ability to produce polished written work.

Written assessments are familiar, but they don’t always reveal how learners think. Capable learners sometimes struggle to express themselves in writing. Others refine and edit text in ways that mask gaps in understanding. And grade moderation always requires careful management to achieve consistency across tutors.

Fruition was not looking for shortcuts. They were looking to design accessible assessments that surface learners’ authentic understanding while retaining strong moderation and tutor control.

Designed for real-time reasoning

Following initial research and testing, Scarlatti designed and built the Vocina platform to deliver structured oral assessment agents.

Instead of submitting written answers, learners click a link in their learning management system, Moodle, to speak with an AI agent. The agent guides learners through a structured sequence of questions and can ask follow-up prompts when responses lack clarity or depth. Assessments typically take five to ten minutes.

The tutor designs the questions and prompts in advance, aligning them to the learning outcomes and rubric criteria used for the original written assessments.

The format for capturing knowledge changes. The content being assessed does not.

From pilot to embedded tool

The initial trial was well received by both learners and tutors.

For more details about the pilot, watch the recorded webinar, AI agent for oral assessment, and read the key findings from piloting AI agents for oral assessments in our article series.

Fruition has continued using the Vocina platform and expanded oral AI assessment across multiple programmes and levels.

Vocina is now embedded in:

  • The Level 6 Diploma in Horticulture (Horticultural Innovations micro-credential), where it supports reflective journaling and documenting of applied innovation practice.
  • The Level 4 Emerging Leadership micro-credential, where learners complete structured oral reflections.
  • A Level 2 Primary Industry Skills unit standard, where it enables learners to demonstrate understanding in a way that better reflects their capability.

Across these, the purpose remains consistent: to capture understanding and reasoning through speech rather than relying on writing.

What changed for learners and tutors

  • Learners find the oral format more natural. Many feel more confident explaining their knowledge, including learners whose first language is not English.
  • Reasoning becomes more visible for tutors. With responses delivered in real time, learners must explain, justify and connect ideas without editing. Follow-up prompts often push them deeper than a written task might, creating stronger evidence of their capability.
  • Structured questions keep things consistent. Learners’ answers are aligned to the rubric criteria and reflected in the agent’s summary.
  • Transcripts and audio recordings make moderation easier as tutors can see how answers were reached.
  • AI-supported oral assessments are more efficient. They reduce time spent producing written material.

Importantly, human judgement remains central. Fruition has a fully moderated model: tutors review transcripts and/or audio and still make the final decision on outcomes. The AI technology structures and captures evidence. It does not replace professional judgement.

As Tiffany Andrews, Academic Manager at Fruition Horticulture, explains:

“For us, this isn’t about replacing written assessment with technology. It’s about designing assessment that genuinely captures learner thinking. The oral format challenges learners to explain, justify, and connect their ideas in real time, which gives us clearer evidence of authentic understanding and applied capability. Importantly, tutors remain central to the process: professional judgement and robust moderation are non-negotiable.”

Assessments redesigned

Fruition did not adopt AI to automate assessment. They adopted it to improve it.

Structured oral assessments do not reduce robustness. They are more inclusive, reveal learners’ real thinking more quickly, and strengthen moderation.

Now trained, tutors build and run assessment agents themselves on the Vocina platform. The approach is scalable and adaptable across courses.

At Fruition, oral AI assessment has moved from pilot to everyday practice because it delivers clearer evidence of understanding while keeping tutors’ oversight in place.

Learn more about Vocina