Privacy Policy

Last updated: June 27, 2025

1. User Identity & Profile Data

This category includes essential information such as names, email addresses, university IDs, roles, and biometric data. These details are used to authenticate users, assign permissions, and personalise interactions. This sensitive information is encrypted both at rest and in transit, and role-based access controls ensure that only authorised individuals can view it. Inactive accounts are deleted after one year, and biometric data is stored as hashed templates rather than raw captures to further protect privacy.

2. Academic Content

Academic content includes student submissions, module content, and tutor feedback, all of which are fundamental for assessing research quality and supporting AI-driven question generation. Students retain full intellectual property rights; the platform functions only as a secure repository. Stored content is protected with AES-256 encryption to safeguard its confidentiality.

3. Interaction Data

Interaction data encompasses voice and video recordings from viva sessions, chat logs, screen-sharing content, and real-time transcripts. This data supports grading, AI model training, and dispute resolution. Recordings are typically initiated only with explicit consent; however, when the application is integrated at a university level, the institution may determine the consent protocols. In either scenario, metadata is stripped from recordings to preserve anonymity, and all recordings are automatically deleted after a set retention period unless retained for appeals.

4. AI-Generated Data

AI-generated data includes outputs such as question logs, performance metrics, and sentiment analysis of student responses. This information is used to refine AI models and generate performance feedback reports. Because marking is a critical part of the evaluation process, students do not see their AI-generated feedback or grades until these have been reviewed and approved by the teacher. To maintain fairness, regular audits are conducted to identify and mitigate bias in the AI outputs. Training data is sourced from synthetic or anonymised historical viva records, preserving privacy while improving model accuracy.

5. Technical Logs

Technical logs record device information, IP addresses, session timestamps, error logs, and API call details. This data is essential for debugging, monitoring security, and optimising performance. To protect privacy, IP addresses are masked and logs are anonymised. Data retention is managed carefully, with logs being purged after 30 days unless they are required for security investigations.

6. Performance & Analytics Data

This category comprises viva grading rubrics, understanding levels, and aggregated performance trends, which are critical for institutional reporting, curriculum improvement, and student support. Analytics data is stripped of personally identifiable information (PII) before being shared with administrative personnel, and access to the analytics dashboards is restricted to authorised administrators. Moreover, since marking is sensitive, students see their feedback and grades only after the teacher has reviewed and distributed them.

7. Cross-Cutting Security Measures

Across all data categories, several robust security measures are in place. The platform adheres to data minimisation principles, ensuring that only essential information is collected. Encryption, including end-to-end protection for live sessions and zero-knowledge storage for sensitive data, prevents unauthorised access. While granular consent workflows may be available in some contexts, in a university-integrated environment consent protocols are centrally managed by the institution. Regional compliance is ensured by storing data on servers that meet local legal requirements (e.g., GDPR in the EU), and audit trails log all data access and modification attempts for accountability.