Talview Podcast

The Exam Security Imperative | Exam Security Summit 2026 | Opening Keynote [Sanjoe Tom Jose]

Talview Season 1

Listen now: why protecting exam credibility is becoming non-negotiable in an AI-enabled world

  • Speaker: Sanjoe Tom Jose (CEO at Talview)
SPEAKER_00

Exam security. A new era of fairness, trust, and integrity. Across digital exam rooms, unfamiliar risks are rising. AI-assisted test takers, hidden collaborators, and silent co-pilots slip past legacy controls. The danger is real: impersonation, unauthorized help, and invisible coordination. When assessments fail, it's not just scores that suffer; entire credentials, institutions, and assumptions lose credibility. Verify identity. Ensure credibility at scale. Exam security is not an add-on. It is the foundation of high-stakes assessment. This is about people, about opportunity, about confidence in every score, every license, every credential. Fairness and integrity, at scale. Lead the next wave of exam security in the Gen AI era. Protect the value behind every exam.

SPEAKER_01

Hey everyone, I'm super excited to welcome each and every one of you to Exam Security Summit, the first ever edition hosted by Talview in 2026. This is going to be a power-packed event with a lot of great speakers and multiple panel discussions where we are going to go deep into specific topics that are worrying everyone in today's age of AI: how remote proctoring is changing drastically to counter or combat the changes which AI is bringing into today's exam delivery. Like everybody else, a lot of my conversations with many of you and other customers, prospects, and partners in the last few months have been around how AI is changing exam delivery and how the threat of AI is impacting discussions from boardrooms to test centers and candidate support lines. And that's a significant part of the conversation today here at Exam Security Summit as well.
For us at Talview, we believe that the trust layer, the layer of trust which we as a human civilization have established over centuries of working together in organizations, in groups, and as community members, is today under attack with AI. But when AI attacks trust, we shouldn't abscond. We should respond. And to respond to the threat, we need an infrastructure, a trust infrastructure. And that's the focus of my keynote today.
For most of the world, especially if you are operating in the software space, in financial services, or in healthcare, and for most jobs, AI is considered a feature upgrade. But fortunately or unfortunately for the testing industry, AI is not just a feature upgrade, it's also an adversary upgrade. We have seen a wide variety of cheating assistants enabled by AI in the last few months. A lot of LLM-assisted answering, where test takers are being teleprompted or shown answers on the screen or on another device on the side, or even being given answers in their ears with a smart earphone, which has significantly impacted test credibility. A lot of deepfake-driven identity spoofing, everything from candidates generating a deepfake ID, which is then used to apply for and take exams, to streaming a deepfake video which mimics a human candidate in front of the camera while the real candidate is probably reading out of a textbook or coding with five other people in the room to take the exam. These tools are also amplifying proxy test-taker networks: many of these cheating-as-a-service providers have significantly scaled their services with the help of AI. And lastly, a lot of this collusion is so real-time, whether it's teleprompting, smart devices, or LLM assistants, that candidates can take advantage of these techniques within a very short span of time, and it becomes really difficult to detect some of this cheating coordination which is happening on the ground.
Why is this different? I'll tell you: we've been working with some of the largest test publishers across the globe, and one thing we have noticed is that this is not just individual candidates cheating. This is automation of cheating at scale. There are a lot of tools available in the market today; many of you are probably familiar with Cluely. There are services shipping packaged devices to test takers with AI pre-installed. There are methods like smart devices which are largely invisible.
The specs might just look like any other specs, but they're actually smart glasses which come enabled with cameras that can look at the question on the screen and then give the response to the candidate in their ear. And that is what makes the form of cheating which has evolved in the last few months indistinguishable from legitimate work. The challenge for us is that when assistance becomes instant, invisible, and indistinguishable, trust collapses quietly. And that, I would say, is the challenge we face as an industry today.
The reality check is that traditionally we have just relied on detection. You have a proctor looking at the screen all the time, you have some AI tools running in the background detecting some form of action, and if we detect something, then we believe that we can fix it. That's not true anymore. Detection is dead. The harsh reality is that a lot of things which traditionally we were not used to are happening right now with the help of AI, like teleprompting, candidates running browser-use or computer-use AI, or some of this deepfake technology. We can't rely on just human proctors to reliably detect what's going on. You need specialized capabilities to detect what's going on. The traditional AI detectors, many of those flagging tools which everybody is familiar with, create a lot of false positives. You could have candidates just leaning back and it creates a flag, or just thinking and rolling their eyes and it creates a flag. All of that noise really clouds out the real intelligence and real information in your proctoring logs, which makes it very difficult, especially when you have a lot of new, different things going on in the exam environment. There are also a lot of false negatives: many of the AI tools and untrained proctors are not able to detect some of the newer methods which are happening on the ground. And even when you detect something, how do you enforce anything without being 100% sure? Because you have to balance the experience, and being just and fair, with protecting the security of your exam. So a lot of the traditional detection which we have been relying on produces a lot of noise, and what you need are outcomes, not just noise.
And that's the reality of high-stakes exams. When an exam is conducted in a high-stakes scenario, you are granting a license. Somebody is earning a certification which they are going to use for their job. In many cases, you're also getting a job offer as a part of that particular exam process. So you're not just looking for alerts in these scenarios; you need defensible decisions. And that's what we at Talview believe you should focus on. We need trust as an infrastructure. It cannot be a feature; it needs to be a comprehensive approach towards exam security, backed by true science, true AI, which you can rely on.
So, what does infrastructure mean? Infrastructure assumes failure. It has multiple layers built in so that if something fails, something else can still catch it. There are layers of defenses, multiple security measures for protection. It produces proof. You have audit trails, you have verifiable proof which you can use for dispute resolution and discussions. And it survives scrutiny, whether it's scrutiny from regulators, scrutiny from customers, or scrutiny from your peers. It survives that scrutiny. And this is not unique to testing as a space.
It's already how the payments industry, the identity verification industry, and the security industry work. And we believe assessment integrity is next. So, what is a trust stack? What does it look like? Talview has been at the forefront of it for many years, and we've recently patented a seven-layer security framework, which we believe is going to be very effective as a trust stack for assessments.
It starts with AI identity verification, where you have the ability to not just look for proxies, but also look for deepfakes, another form of cheating which is new in today's day and age. You have AI behavior monitoring, which is primarily the ability to detect any form of suspicious activity which the candidate is showing, especially by monitoring their body language. You might just have a front-facing camera; what can we infer from the candidate's body language that might require a closer inspection? AI environment monitoring is our ability to use a second camera, which many of you are familiar with, but it's not just a camera feeding the video to you; it can look for very specific patterns, hidden devices, hidden people, shadows, and flag to you or your proctor that something suspicious is going on in the room or in the exam environment. The fourth layer is device security, the ability to block all forms of malicious applications which might compromise your exam or leak your exam content. And it doesn't always have to be separately installed software; it can also use AI with computer vision to analyze what's going on in your device, inspecting the task manager, inspecting the taskbar, to look for specific things which might indicate that this particular exam is compromised. The fifth layer is assessment feed monitoring. In many cases, you might not have all the layers in place, or you might have a candidate who has figured out a very novel way to cheat in spite of all the four layers we already spoke about. That's where assessment feed monitoring comes in. Today a lot of assessment tools and proctoring systems are disparate; they don't talk to each other. But if you have an infrastructure which can take all the signals from the assessment system, for example, if a candidate is going too fast, you might want to prompt them to connect a second camera to see what's going on, or take other proactive measures to take back control of your exam. That's where assessment feed monitoring comes in. The sixth layer, which I believe has become even more significant in the last few years with all this scaled cheating automation, is the intelligence layer, which looks for specific patterns, uncovers collusion and identity fraud, builds a repository of known IPs, known devices, and known software used for collusion and for enabling cheating as a service, and tracks and blocks them. That is where the intelligence layer comes in. And lastly, the seventh layer, which is agentic web monitoring and research, which can detect leaked exam content and answer sharing by scanning the internet, and also identifies new devices and new software being used by candidates and brings that back to your AI, so that the AI is now familiar with that particular application or device and can detect it the next time it appears. So essentially, it's not just one model or one tool. It's not a flag. It's seven reinforcing layers.
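To make the layering concrete, here is a minimal sketch in Python of how a stack like this could be wired together so that no single layer is a point of failure. The layer names follow the keynote; every class, function, and field name is hypothetical and is not Talview's actual API.

    # Purely illustrative sketch of a layered "trust stack" for an exam session.
    # The seven layer names follow the keynote; every class and method here is
    # hypothetical and is NOT Talview's actual API.
    from dataclasses import dataclass, field

    @dataclass
    class Signal:
        layer: str       # which layer raised the signal
        kind: str        # e.g. "deepfake_suspected", "hidden_device"
        evidence: str    # pointer to the recorded proof (clip, screenshot, log)

    @dataclass
    class Session:
        candidate_id: str
        signals: list = field(default_factory=list)

    LAYERS = [
        "ai_identity_verification",   # 1. proxies, deepfaked IDs and video
        "ai_behavior_monitoring",     # 2. body language from the front camera
        "ai_environment_monitoring",  # 3. second camera: hidden devices, people
        "device_security",            # 4. block or spot malicious applications
        "assessment_feed_monitoring", # 5. signals from the assessment itself
        "intelligence",               # 6. collusion patterns, known bad IPs/devices
        "agentic_web_monitoring",     # 7. leaked content and new cheating tools
    ]

    def run_trust_stack(session: Session, checks: dict) -> Session:
        """Run every deployed layer; a failure in one layer never disables the others."""
        for layer in LAYERS:
            check = checks.get(layer)
            if check is None:
                continue                      # layer not deployed for this exam
            try:
                session.signals.extend(check(session))
            except Exception:
                # "Infrastructure assumes failure": record it and let other layers catch it.
                session.signals.append(Signal(layer, "layer_error", "internal log"))
        return session
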
And remember, we spoke about what an infrastructure is; this is the trust infrastructure, the trust stack, we are talking about. I'm very excited to also talk about Talview's latest version, version 8, which is a true agentic AI trust infrastructure. This is our response for a world where AI is the attacker. We believe we should combat AI with AI to be effective.
How is agentic AI different from traditional AI solutions? First and foremost, it's an AI platform, or an AI tool, which can interact with your candidates. So it can not just monitor the candidate but also assist them, help answer their questions, and have conversations with them when there is a need for that. It can also reason. Most traditional AI proctoring tools have the ability to flag when something goes wrong in the exam, like the candidate having a phone in their hand, or somebody talking to somebody else in the room. But they cannot reason to determine what could be a false positive, where the candidate was probably just putting the phone on silent or asking the kid who ran into the room to leave, versus what is a real cheating scenario. And it doesn't operate by itself; it can also escalate scenarios to a human proctor. So you always have a human in the loop in some form or shape, to ensure that human judgment is involved in decision making and a human can override at any point in time. And lastly, it can also learn; it can adapt at a much faster rate than anything else you have seen in the industry today. It continuously improves on your exam rules, getting better and better at understanding new patterns it is seeing in your exams and ensuring that every exam is delivered with the utmost security.
Speaking a little more about how it goes beyond flags in monitoring and enforcement: most traditional proctoring tools generate a lot of events, alerts, and suspicions. But you can't just rely on that. There are a lot of false positives, which erode credibility. There are a lot of false negatives, which destroy trust. In version 8, what we have done is launch an approach of structured incidents, where when we flag a particular session, we are not just looking at a standalone event or alert. We are looking at structured incidents with verifiable evidence, which generate defensible outcomes. And I'll request my team to give you a very quick demo of how that works in the real world.
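As a rough illustration of the structured-incidents idea described above (a sketch only, not the V8 implementation), isolated alerts from different signal sources can be fused into one incident that carries its evidence and a recommended action, escalating to a human proctor when the machine is not sure. All field names and thresholds here are hypothetical.

    # Illustrative only: grouping isolated alerts into one structured incident.
    # Field names and thresholds are hypothetical, not Talview V8's actual model.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Alert:
        t: float          # seconds into the exam
        source: str       # "camera", "audio", "screen", "second_camera"
        kind: str         # e.g. "gaze_off_screen", "second_voice", "overlay_app"
        confidence: float

    @dataclass
    class Incident:
        kinds: list
        evidence: list    # the underlying alerts, kept as verifiable proof
        score: float
        action: str       # "dismiss", "escalate_to_proctor", "report"

    def build_incident(alerts: list, window: float = 30.0) -> Optional[Incident]:
        """Fuse alerts from different sources that occur close together in time."""
        if not alerts:
            return None
        alerts = sorted(alerts, key=lambda a: a.t)
        cluster = [a for a in alerts if a.t - alerts[0].t <= window]
        sources = {a.source for a in cluster}
        score = sum(a.confidence for a in cluster) / len(cluster)
        if len(sources) >= 2 and score >= 0.8:
            action = "report"                  # corroborated across signal sources
        elif len(sources) >= 2 or score >= 0.6:
            action = "escalate_to_proctor"     # human judgment stays in the loop
        else:
            action = "dismiss"                 # likely a false positive
        return Incident([a.kind for a in cluster], cluster, round(score, 2), action)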

SPEAKER_00

Introducing version 8, a smarter proctoring experience powered by agentic workflows and intelligent multi-signal monitoring, designed to support every role at every stage of the assessment journey. Version 8 connects test takers, proctors, and administrators through intelligent automation, transforming isolated tasks into a seamless, guided workflow. For test takers, version 8 introduces built-in intelligent support through Alvy. Need guidance? Contextual help is available right inside the assessment window. Facing a technical issue? Diagnostics run instantly without leaving the test. Alvy reduces confusion, minimizes interruptions, and keeps candidates focused on what matters most. Monitoring becomes smarter, not louder. Version 8 analyzes signals from the candidate's primary device, camera, audio, and screen activity. Instead of flooding dashboards with isolated alerts, the system correlates multiple signals to create meaningful incidents. The result: fewer false alarms, clearer priorities, and faster, more confident decisions. Integrity begins with strong identity verification, photo ID checks, and liveness tests, confirming the right candidate in the right conditions before and during the session. After the assessment, administrators receive a complete session summary in a single, unified view. Events, incidents, and verification results are organized clearly, eliminating the need for time-consuming manual review. With everything in one place, decisions become faster, simpler, and more reliable. Version 8 delivers measurable impact across the board: less manual effort, smarter insights, stronger exam integrity, and most importantly, quicker, more confident decision making.

SPEAKER_01

Beyond the automation in monitoring and enforcement, V8 also brings a lot of other efficiencies. It removes many of the scheduling bottlenecks and some of the utilization inefficiencies which we have been discussing, and which are plaguing a lot of the tools out there in the market. And it also removes many of the scaling constraints that come from human dependencies for specific actions. So what we are able to achieve is much higher proctor efficiency and, more importantly, a 10x better user experience, while having autonomous enforcement in a natural environment.
So to conclude, when AI becomes the attacker, trust must become the infrastructure. That's the spirit of this event. That's the spirit of all the conversations today. So I'm very excited, and I welcome each and every one of you once again to the Exam Security Summit. A quick sneak peek of the sessions today: the first panel, which is moderated by Radhika, has Dan, Michael, and Jarrett talking about the Gen AI cheating shift, where we'll take a closer look at how artificial intelligence is rewriting exams. In the second panel, which is moderated by Harry, we have Joko, Paul, and Miguel talking about what exam leaders and institutions are seeing when it comes to the shift in the ecosystem; it's kind of a peek inside the breakdown. And the last panel discussion, which is by our AI DOM initiative, moderated by Raji with participation from Liberty, Paul, and Suntha, is going to look at how exam leaders can adopt agentic AI without breaking credibility. How do you scale trust? Or how do you build trust before you scale, when it comes to some of these modern techniques? Thank you and have a great event.