Overview
This project was developed as the final assignment for the Human–Computer Interaction for AI Systems Design course at the University of Cambridge. The course focused on how to design AI systems that are transparent, verifiable, and user-centered, combining technical frameworks with ethics, risk management, and usability.
The goal of the project was to design a conceptual human–AI decision support tool to help manufacturing workers identify and resolve quality control issues. The system needed to work across different user roles and deployment contexts, while supporting varying levels of automation and providing clear, explainable recommendations.
Over eight weeks, I followed a structured process to define the problem space, model the system’s function, evaluate risk, and propose verification and validation strategies. I received a final project grade of 100%.
The Challenge
Manufacturing processes often rely on human experience to detect and resolve quality control issues. These tasks are time-sensitive and complex, involving a combination of data interpretation, decision-making, and collaboration between operators, engineers, and managers.
The challenge was to design a human–AI system that could support these users without removing their agency. The system needed to adapt to different roles, assist with diagnosis, and provide traceable recommendations, all while fitting into real-world manufacturing environments that vary in automation and risk tolerance.
Approach & Process
The design process followed the HCI for AI Systems framework taught in the course, with each phase focusing on a specific aspect of human–AI interaction. I began by defining a solution-neutral problem statement that captured the core need without assuming an AI-based solution.
From there, I created a system function model that outlined the key tasks and user interactions across roles such as line operators, engineers, and supervisors. Each task was mapped to different levels of automation to evaluate where human decision-making should be retained or supported.
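To give a sense of what that mapping looked like, the sketch below is a minimal illustration in Python, not the actual deliverable: the task names, roles, and four-level automation scale are simplified stand-ins for the entries in the real function model.

```python
from dataclasses import dataclass
from enum import Enum

class AutomationLevel(Enum):
    """Simplified scale: who does the work, who decides."""
    MANUAL = 1            # human performs the task and decides
    AI_SUGGESTS = 2       # AI recommends, human decides
    AI_DECIDES_VETO = 3   # AI decides, human can override
    AUTONOMOUS = 4        # AI acts without routine human input

@dataclass
class FunctionModelEntry:
    task: str
    role: str
    automation: AutomationLevel
    rationale: str  # why decision authority stays (or does not stay) with the human

# Hypothetical entries illustrating the mapping exercise
function_model = [
    FunctionModelEntry("Detect anomalous sensor reading", "Line operator",
                       AutomationLevel.AI_DECIDES_VETO,
                       "High-volume monitoring; operator keeps an override."),
    FunctionModelEntry("Confirm root cause of a defect", "Engineer",
                       AutomationLevel.AI_SUGGESTS,
                       "Diagnosis is safety-relevant; the engineer decides."),
    FunctionModelEntry("Approve line stoppage", "Supervisor",
                       AutomationLevel.MANUAL,
                       "High-cost, accountable decision stays fully human."),
]

for entry in function_model:
    print(f"{entry.task} [{entry.role}] -> {entry.automation.name}")
```

Working through entries like these made it easier to argue, task by task, where human decision-making had to be retained rather than merely accommodated.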
I performed a risk analysis to identify potential system failures or misuses, along with mitigation strategies for each. A Verification Cross-Reference Matrix (VCRM) was developed to ensure that each requirement could be verified across relevant deployment contexts. Finally, I proposed a validation strategy to ensure the system would achieve its intended purpose in the real world.
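The VCRM itself was a matrix-style design artifact rather than software. The sketch below (Python, with invented requirement IDs, deployment contexts, and verification methods) only illustrates the cross-referencing idea: every requirement should map to at least one verification method in every deployment context, and gaps should be easy to spot.

```python
# Hypothetical Verification Cross-Reference Matrix (VCRM):
# rows are requirements, columns are deployment contexts,
# cells name the verification method used in that context.
vcrm = {
    "REQ-01 Explain each recommendation": {
        "low-automation plant":  "Inspection of UI copy + user walkthrough",
        "high-automation plant": "Test: trace every alert back to source data",
    },
    "REQ-02 Operator can override AI action": {
        "low-automation plant":  "Demonstration during operator training",
        "high-automation plant": "Test: override latency under load",
    },
}

# Simple completeness check: flag requirements that lack a verification
# method in any deployment context.
contexts = {"low-automation plant", "high-automation plant"}
for req, methods in vcrm.items():
    missing = contexts - methods.keys()
    if missing:
        print(f"{req}: no verification planned for {sorted(missing)}")
    else:
        print(f"{req}: fully cross-referenced")
```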
Solution
The final concept is an AI-powered decision support tool designed to assist manufacturing workers during quality control checks. The system integrates multiple data sources (including sensor readings, historical reports, and user annotations) to help identify root causes and suggest next actions.
Recommendations are presented with confidence levels and supporting context to support user trust and interpretability. Different automation levels are applied depending on user role: operators receive clear, actionable guidance, while engineers can interact with more detailed diagnostic tools.
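As an illustration only (hypothetical field names, in Python), a recommendation record might bundle the suggested action, a confidence level, and the evidence it draws on, with the amount of diagnostic detail shown depending on the viewer's role:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0-1.0, shown to users as a qualitative level
    evidence: list[str] = field(default_factory=list)  # sensor data, reports, annotations

def present(rec: Recommendation, role: str) -> str:
    """Render a recommendation with role-appropriate detail."""
    level = "high" if rec.confidence >= 0.8 else "medium" if rec.confidence >= 0.5 else "low"
    if role == "operator":
        # Operators get clear, actionable guidance plus the confidence level.
        return f"Suggested action: {rec.action} (confidence: {level})"
    # Engineers also see the supporting evidence behind the suggestion.
    details = "; ".join(rec.evidence)
    return f"{rec.action} (confidence: {level}, {rec.confidence:.2f})\nEvidence: {details}"

rec = Recommendation(
    action="Recalibrate filler head 3 before the next batch",
    confidence=0.82,
    evidence=["Pressure sensor drift over the last 4 shifts",
              "Similar defect pattern in an earlier QC report",
              "Operator annotation: intermittent seal misalignment"],
)
print(present(rec, "operator"))
print(present(rec, "engineer"))
```

Keeping the evidence attached to each recommendation is also what makes the traceability described below possible.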
The interface is designed to support traceability, enabling users to review the AI’s reasoning, explore alternative solutions, and provide feedback that improves future recommendations.
Reflections
This project helped me structure my thinking around human–AI collaboration in a much more rigorous way. I gained a deeper understanding of how automation, interpretability, and verification all shape the user experience in systems that rely on AI.
The methodical approach taught in the course (from defining a solution-neutral problem to building a function model and planning risk mitigation) is something I’ve started applying to real-world design challenges. While the project was theoretical, the process gave me tools I now use in my professional work when designing systems that require user trust, explainability, or risk awareness.
Looking back, I’d refine the validation strategy further by integrating more user-specific testing plans and post-deployment feedback loops.
Summary
This project demonstrates how structured, human-centered thinking can be applied to the design of AI-powered systems. Through the Cambridge framework, I explored not just how to build smarter tools, but how to make them accountable, explainable, and usable across different roles.
While the context was manufacturing, the methods and principles (like automation level mapping, traceability, and verification planning) are now part of how I approach complex design problems in my professional UX work.