
  Automatic Detection of Deception from Verbal Cues Using Techniques from Computational Intelligence


   Applied Computational Science

Supervisor: Dr J O'Shea | No more applications being accepted | Funded PhD Project (Students Worldwide)

About the Project

Lie detection is a crucial process in law enforcement and national security (e.g. in offender management). Verbal Features extracted from speech offer subtle cues to deceptive intent. This project will be the first to develop a systematic, fine-grained analysis of Verbal Features for a real-time, automatic deception classifier using Artificial Intelligence.

Aims and objectives

The Silent Talker (ST) lie detector was developed by the MMU Intelligent Systems Group (ISG) using mainly facial microgestures and has had a remarkable public awareness impact. The objective of this project is to produce a new version of ST using Verbal Features (VFs) as cues for deception. The verbal version will not only be a breakthrough in its own right, but could be combined with the current ST to provide a more accurate detector.

ST is MMU’s contribution to the Horizon 2020 iCROSS border security project (€4M). Our additional objective is to extend the research base of the Intelligent Systems Group and develop you as a new researcher who could participate in future projects on this scale.

Early attempts to detect lies from speech used voice stress analysis, which relies on the pitch of the speaker’s voice. Although this technique has been heavily publicized by the insurance industry as a deterrent to fraudulent telephone claims, it has been criticized in scientific reviews.

Our project will move beyond signal processing to analyzing the syntax, semantics and pragmatics of spoken utterances for features that can serve as inputs to an automatic lie detection system; these features will be used to train an AI component to classify an interviewee’s behaviour as deceptive or truthful.

Research in linguistics and psychology has revealed a set of candidate cues from which VFs could be created. For example, a large-scale meta-analysis of 44 studies identified 79 potential cues to deception, finding evidence that liars experienced greater cognitive load, expressed more negative emotions, distanced themselves more from events, used fewer sensory-perceptual words and referred less often to cognitive processes. These conclusions were reached in terms of summary statistics and could not be exploited directly to design an automatic real-time detector.

However, our project will follow the ST approach of breaking candidate features down into their simplest components and combining these over a time interval to generate complex features for AI classification. This approach has worked well with non-verbal behaviour (NVB), where it has been shown that single indicative features (such as eye contact) are not meaningful in isolation, but ST’s complex feature vectors are effective.
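The component-then-combine approach described above can be sketched in code. This is a minimal illustration only, assuming hypothetical cue word lists and feature names; the project's actual VFs, extraction pipeline and window statistics would be determined by the research itself.

```python
# Illustrative sketch (not the project's actual feature set): counting the
# simplest components of candidate verbal cues per utterance, then combining
# them over a time window into one complex feature vector for a classifier.
# The cue word lists below are hypothetical placeholders.

from collections import Counter

NEGATIVE_EMOTION = {"angry", "afraid", "hate", "worried"}
SENSORY_PERCEPTUAL = {"saw", "heard", "felt", "touched"}
COGNITIVE_PROCESS = {"think", "believe", "realise", "remember"}

FEATURE_KEYS = ["neg_emotion_rate", "sensory_rate", "cognitive_rate", "first_person_rate"]

def simple_cues(utterance: str) -> dict:
    """Count the simplest components of candidate verbal cues in one utterance."""
    words = utterance.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "neg_emotion_rate": sum(counts[w] for w in NEGATIVE_EMOTION) / total,
        "sensory_rate": sum(counts[w] for w in SENSORY_PERCEPTUAL) / total,
        "cognitive_rate": sum(counts[w] for w in COGNITIVE_PROCESS) / total,
        "first_person_rate": (counts["i"] + counts["me"] + counts["my"]) / total,
    }

def window_vector(utterances: list[str]) -> list[float]:
    """Combine per-utterance cues over a time window into one complex feature vector."""
    per_utt = [simple_cues(u) for u in utterances]
    # Mean of each simple cue across the window; a real system might also use
    # variances, trends, or counts of threshold crossings.
    return [sum(d[k] for d in per_utt) / len(per_utt) for k in FEATURE_KEYS]

vec = window_vector(["I think I saw him leave", "I was worried he would remember me"])
```

The point of the sketch is the structure: no single count is treated as diagnostic on its own; only the combined vector over a window is passed to the classifier.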

This new project aims to answer the primary research question “Can human VFs be used in the automated psychological profiling application of lie detection?” It also aims to answer the secondary research questions: “How do VFs compare with facial microgestures?”, “Are some VFs better than others for classification?” and “Can an AI algorithm using VFs explain how it makes its decisions to human practitioners in fields such as law enforcement and border control?”

Your PhD research plan will be based on the following objectives:
1. Produce a catalogue of VFs, based on a comprehensive evaluation of the state-of-the-art literature in psycho-linguistic deception research.
2. Select software and hardware environments for implementing a VF lie detector, based on an investigation of techniques for extracting VFs from audio streams.
3. Establish a baseline for evaluating the work by re-implementing an existing microgestural (facial) lie detector on the platform identified in (2).
4. Design and conduct experiments to collect a corpus of audio/video streams combining verbal and non-verbal behaviour in a new deceptive scenario.
5. Identify potential VF cues to deception with which to train the new system.
6. Train a sophisticated AI system to analyse audio streams and classify the human behaviour (truthful / deceptive) based on the VFs.
7. Analyse the experimental findings to test the hypotheses and answer the research questions.
8. Investigate combining VFs with facial microgestures in a hybrid system.
9. Replace the final classifier with decision trees, to investigate the explicability of the classification process to a human practitioner.
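The explicability goal in the final objective can be illustrated with a toy, hand-written decision tree: unlike an opaque classifier, each classification can be read back as a chain of threshold tests. The feature names and thresholds below are purely hypothetical, not project results.

```python
# Hypothetical sketch of the decision-tree explicability idea: the classifier
# returns both a label and the chain of tests that produced it, so a
# practitioner can see why a window of verbal features was flagged.
# Feature names and thresholds are illustrative placeholders.

def classify_with_explanation(features: dict) -> tuple[str, list[str]]:
    """Return (label, explanation trail) for one window of verbal features."""
    trail = []
    if features["cognitive_rate"] < 0.05:
        trail.append("cognitive_rate < 0.05 (few references to cognitive processes)")
        if features["neg_emotion_rate"] > 0.10:
            trail.append("neg_emotion_rate > 0.10 (many negative-emotion words)")
            return "deceptive", trail
        trail.append("neg_emotion_rate <= 0.10")
        return "truthful", trail
    trail.append("cognitive_rate >= 0.05")
    return "truthful", trail

label, why = classify_with_explanation(
    {"cognitive_rate": 0.02, "neg_emotion_rate": 0.15}
)
```

In a learned tree the splits and thresholds would be induced from training data rather than hand-written, but the explanation mechanism is the same: the path from root to leaf is the justification.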

The supervisory team for this project will be Dr Jim O’Shea, Dr Samuel Larner and Dr Keeley Crockett.

The closing date for applications is 31st January 2017.
To apply, please use the form on our web page: http://www2.mmu.ac.uk/study/postgraduate/apply/postgraduate-research-course/ - please note, CVs alone will not be accepted.

For informal enquiries, please contact: [Email Address Removed]
Please quote the Project Reference in all correspondence.


Funding Notes

This scholarship is open to UK, EU and International students
For information on Project Applicant Requirements please visit: http://www2.mmu.ac.uk/research/research-study/scholarships/detail/vc-scieng-jos-2017-2-automatic-detection-of-deception.php