CDE Research Students and Projects

Current research engineers and live projects

We have research projects across a wide range of areas in computer vision, computer graphics, human-computer interaction (HCI) and machine learning, from procedural generation of content for international games companies and assistive technologies for stroke rehabilitation to the future of interactive technologies for major broadcasters and virtual reality for naval training.

Get to know some of our current research engineers below and see our video filmed during our CDE Winter Networking Event at the British Film Institute, London. Hear more from our students and alumni.

2021
Manuel Rey Area

Deep View Synthesis for VR Video

Ph.D (University of Bath):

Academic Supervisor: Dr Christian Richardt

With the rise of VR, it is key for users to be fully immersed in the virtual world. Users must be able to move their heads freely around the virtual scene, unveiling occluded surfaces, perceiving depth cues, and observing the scene down to its last detail. Furthermore, if scenes are captured with casual devices such as smartphones, anyone could convert their 2D pictures into a fully immersive 3D experience, bringing a new digital representation of the world closer to ordinary users. The aim of this project is to synthesise novel views of a scene from a set of input views captured by the user. Ultimately, the scene's 3D geometry must be reconstructed and depth cues must be preserved to allow 6-DoF (degrees of freedom) head motion and avoid the well-known VR sickness. The main challenge lies in generating synthetic views with a high level of detail, including light reflections, shadows and occlusions, resembling reality as closely as possible.
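
As a rough illustration of the geometry behind view synthesis, the sketch below shows a single-pixel, depth-based reprojection in Python. This is a simplified baseline of my own, not the project's method; the camera matrix, transform and depth value are illustrative assumptions, and a full system must additionally handle occlusions, disocclusions and view-dependent effects.

```python
import numpy as np

def reproject_pixel(u, v, depth, K, T_src_to_tgt):
    """Warp one pixel (u, v) with known depth from a source camera into a
    target camera. K is the shared 3x3 intrinsic matrix and T_src_to_tgt
    a 4x4 rigid transform from the source to the target view."""
    # Back-project the pixel into a 3D point in the source camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point_src = ray * depth                      # 3D point (X, Y, Z)

    # Move the point into the target camera frame.
    point_h = np.append(point_src, 1.0)          # homogeneous coordinates
    point_tgt = (T_src_to_tgt @ point_h)[:3]

    # Project back onto the target image plane.
    proj = K @ point_tgt
    return proj[:2] / proj[2]                    # (u', v') in the target view

# Example: a simple camera and a target view translated 5 cm sideways.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
T[0, 3] = -0.05
print(reproject_pixel(320, 240, depth=2.0, K=K, T_src_to_tgt=T))
```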

2016
Lewis Ball
Industrial Supervisor: Mark Leadbeater

Material based vehicle deformation and fracturing

Industrial Partner: Ubisoft Reflections

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisors: Prof Jian Jun Zhang, Prof Lihua You

Damage and deformation of vehicles in video games is essential for delivering an exciting and immersive experience to the player; however, tough constraints are placed on the deformation methods used in games. They must produce deformations that appear plausible, so as not to break the player's immersion, yet they must also be robust enough to remain stable in any situation the player may encounter. Lastly, any deformation method must be fast enough to calculate deformations in real time while leaving enough time for other critical game-state updates such as rendering, AI and animation.

My research focuses on augmenting real-time physics simulations with data-driven methods. Data from offline high-quality, physically-based simulations are used to augment real-time simulations in order to allow them to adhere to physically correct material properties while also remaining fast and stable enough to use in production-quality video games.
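
Purely as a toy sketch of the general idea of augmenting a cheap real-time response with a data-driven correction learned from offline simulations (all names, shapes and numbers below are hypothetical; the project's actual solvers and learned models are far more sophisticated):

```python
import numpy as np

def deform_vertices(rest_positions, impulse, learned_residual):
    """Toy illustration: a crude real-time estimate of vertex displacement
    plus a correction term derived from offline, high-quality simulations.
    `learned_residual` stands in for whatever data-driven model is trained
    from the offline data (here just a precomputed array)."""
    # Crude real-time response: displace vertices along the impulse,
    # attenuated by distance from the impact point.
    dist = np.linalg.norm(rest_positions - impulse["point"], axis=1)
    falloff = np.exp(-dist / impulse["radius"])[:, None]
    realtime_disp = falloff * impulse["direction"] * impulse["strength"]

    # The data-driven correction nudges the result towards the offline,
    # physically based behaviour for this material.
    return rest_positions + realtime_disp + learned_residual

verts = np.random.rand(100, 3)
impulse = {"point": np.array([0.5, 0.5, 0.5]),
           "direction": np.array([0.0, 0.0, -1.0]),
           "radius": 0.2, "strength": 0.05}
residual = np.zeros((100, 3))   # would come from the trained model
print(deform_vertices(verts, impulse, residual).shape)
```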

2021
Daniel Castle

Efficient and Natural Proofs and Algorithms

Ph.D (University of Bath):

I’m Daniel, a new CDE-aligned PhD student at the University of Bath. I recently completed a four-year masters in computer science here at Bath, during which I particularly enjoyed the theoretical aspects of the subject. I was fortunate to meet and later work under the supervision of members of the Mathematical Foundations of Computation research group, including Dr Willem Heijltjes and Professor Nicolai Vorobjov, who were extremely helpful throughout and later encouraged me to apply for a PhD. I chose to stay on at Bath because of the great people I met, the exciting work being done in the group I am now a part of, and, of course, the wonderful city. My interests broadly lie at the intersection of mathematics and computer science, in particular the field of proof theory, which studies mathematical proofs as formal objects. This is important from a computer science perspective because of a direct correspondence between computer programs and proofs, leading to applications in areas such as automatic software verification and the design of new programming languages.

2017
Kenneth Cynric Dasalla
Effects of Natural Locomotion in VR
CDE EngD in Digital Entertainment (University of Bath):
Academic Supervisor: Dr Christof Lutteroth

The project aims to investigate the use of depth-sensing cameras and positional tracking technologies to dynamically composite different visual content in real time for mixed-reality broadcasting applications. This could involve replacing green-screen backgrounds with dynamic virtual environments, or augmenting 3D models into a real-world video scene. A key goal of the project is to keep production costs as low as possible. The technical research will therefore be undertaken predominantly with off-the-shelf consumer hardware to ensure accessibility. At the same time, the developed techniques also need to be integrated with existing media production techniques, equipment and approaches, including user interfaces, studio environments and content creation.

2018
Sydney Day

Humanoid Character Creation Through Retargeting

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisor: Professor Lihua You

Industrial Partner: Axis Animation

Industrial Supervisor: Matt Hooker

This project explores the automatic creation of rigs for humanoid characters with associated animation cycles and poses. Through retargeting, a number of techniques can be covered:

– automatic generation of facial blend shapes from a central reference library

– retargeting of bipedal humanoid skeletons

– transfer of weights between characters of differing topologies.

The key goal is to dramatically reduce the amount of time needed to rig certain types of character, freeing up riggers to work on more complex rigs that cannot be automated.

2019
Alexz Farrall
Academic Supervisor: Prof Jason Alexander
Industrial Supervisor: Dr. Sabarigirivasan Muthukrishnan

The guide to mHealth implementation

Project partner: Avon and Wiltshire Mental Health Partnership NHS Trust (AWP)

CDE EngD in Digital Entertainment (University of Bath):

The project will not only be a collaboration between the University of Bath and AWP, but will also work alongside Bristol's Medical School to directly incorporate stakeholders into the design and evaluation of a new digital intervention. Smartphone apps are an increasingly popular means of delivering psychological interventions to patients suffering from reduced well-being and mental disorders. One population that suffers from reduced well-being is medical students, with recent studies identifying 27.2% as having depressive symptoms, 11.1% as having suicidal ideation, and 45-56% as having symptoms suggestive of burnout. Moreover, through the use of advanced human-computer interaction (HCI) and behaviour therapy techniques, this project aims to contribute innovative research to increase the effectiveness of existing digital mental health technologies. The research team hopes to implement the smartphone app within the NHS and create new opportunities to support the entire medical workforce.

2022
Eshani Fernando

Efficient Generation of Conversational Databases – Conversational AI Technologies for Digital Virtual Humans

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Prof Jian Chang

Intelligent virtual assistants and chatbots have grown rapidly over the last decade, and the development of smart voice-recognition software such as Siri, Alexa and Google Assistant has made chatbots widespread among the general community. Conversational AI has been widely researched for many years in the understanding and generation of meaningful conversation. Generating context-aware conversation is challenging because it requires understanding the dynamic context that keeps a conversation flowing. The development of transformer-based models such as BERT and the GPTs has accelerated this area of research; however, these models can generate incorrect and inconsistent conversation threads that deviate from meaningful, human-like conversation. To this end, my research addresses interaction among virtual agents, or between an agent and a human user. The aim is to develop novel AI-based technologies for maintaining and evolving a dynamic conversation database that provides virtual agents with the capacity to build up understanding, make corrections and update the context as dialogues continue. The proposed database, powered by AI and machine learning techniques, will help to automatically (or semi-automatically) train and drive the conversational AI, leading to human-like conversations.

2019
Isabel Fitton

Improving skills learning through VR

Industrial Partner: PwC UK
Industrial Supervisor: Jeremy Dalton
CDE EngD in Digital Entertainment (University of Bath):
Academic Supervisors: Dr Christof Lutteroth, Dr Michael Proulx, Dr Chris Clarke

More affordable, consumer-friendly head-mounted displays (HMDs) have led to excitement around the potential for virtual reality (VR) to revolutionise training and education. VR promises to support people in learning new skills by immersing learners in virtual environments where they can practise those skills and receive feedback on their progress, but important questions remain regarding the transferability and effectiveness of skills acquired in a virtual world. Current state-of-the-art VR training fails to use learning theories to underpin its design and is not necessarily optimised for learning. In this project we will investigate how VR can help learners acquire manual skills, comparing different approaches to job-skills training in VR with the goal of informing the development of effective and engaging training tools that are underpinned by theory. We aim to produce results on enhancing 'hard' skills training in virtual environments that are applicable to industry, as required in engineering and manufacturing for example, and to support the wider adoption of VR training tools. We will design VR learning simulations and new approaches to support learners, and test these simulations to determine whether skills learned in VR transfer to the real world. We will also compare virtual training simulations to more traditional learning aids such as instructional videos.

2019
Michal Gnacek

Improved Affect Recognition in Virtual Reality Environments

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisors: Dr Emili Balaguer-Ballester, Dr Ellen Seiss, Dr Theodoros Kostoulas

Industrial Partner: emteq

Industrial Supervisor: Charles Nduka

I am working with Emteq on improving affect recognition using various bio-signals, with the hope of creating better experiences and completely new ones that have the potential to tackle physical and mental health problems in previously unexplored ways.

The ever-increasing use of virtual reality (VR) in research as well as mainstream consumer markets has created a need for understanding users’ affective state. This would not only guide the development of the technology but also allow for the creation of brand-new experiences in entertainment, healthcare, and training applications.

This research project will build on the existing research conducted by Emteq with their patented device for affect detection in VR. In addition to the already implemented sensors (electromyography, photoplethysmography and an inertial measurement unit), which need to be evaluated, other modalities need to be explored for their potential inclusion and their ability to determine emotions.

View Michal’s Research Outputs

2021
Jiajun Huang

Editing and Animating Implicit Neural Representations

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Professor Hongchuan Yu

Recently, implicit neural representation methods have gained significant traction. By using neural networks to represent objects, they can photo-realistically represent and reconstruct 3D objects or scenes without expensive capture equipment or tedious human labour. This makes them an immensely useful tool for the next generation of virtual reality and augmented reality applications. However, unlike their traditional counterparts, these representations cannot be easily edited, which reduces their usability in industry, as artists cannot easily modify the represented object to their liking. They can also only represent static scenes, and animating them remains an open challenge. The goal of my research is to address these problems by devising intuitive yet efficient methods to edit or animate implicit neural scene representations without losing their representation or reconstruction capabilities.
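
For context, here is a minimal sketch of the kind of representation involved: a NeRF-style coordinate MLP that maps a 3D point to colour and density. This is a generic illustration, not the project's model; editing or animating such a field is exactly the open problem described above.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Minimal implicit representation: an MLP mapping a 3D coordinate to an
    RGB colour and a density value, with a simple sinusoidal positional
    encoding so the network can capture fine detail."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # RGB + density
        )

    def encode(self, x):
        feats = [x]
        for i in range(self.num_freqs):
            feats += [torch.sin(2 ** i * x), torch.cos(2 ** i * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, x):
        out = self.mlp(self.encode(x))
        rgb = torch.sigmoid(out[..., :3])
        density = torch.relu(out[..., 3:])
        return rgb, density

field = ImplicitField()
rgb, density = field(torch.rand(1024, 3))    # query 1024 random 3D points
print(rgb.shape, density.shape)
```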

2021
Kavisha Jayathunge

Emotionally Expressive Speech Synthesis

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Professor Xiaosong Yang

Emotionally expressive speech synthesis for a multimodal virtual avatar. Virtual conversation partners powered by Artificial Intelligence are ubiquitous in today’s world, from personal assistants in smartphones, to customer-facing chatbots in retail and utility service helplines. Currently, these are typically limited to conversing through text, and where there is speech output, this tends to be monotone and (funnily enough) robotic. The long-term aim of this research project is to design a virtual avatar that picks up information about a human speaker from multiple different sources (i.e. audio, video and text) and uses this information to simulate a realistic conversation partner. For example, it could determine the emotional state of the person speaking to it by examining their face and vocal cadence. We expect that taking such information into account when generating a response would make for a more pleasant conversation experience, particularly when a human needs to speak to a robot about a sensitive matter. The virtual avatar will also be able to speak out loud and project an image of itself onto a screen. Using context cues from the human speaker, the avatar will modulate its voice and facial expressions in ways that are appropriate to the conversation at hand.

The project is a group effort and I’m working with several other CDE researchers to realise this goal. I’m specifically interested in the speech synthesis aspect of the project, and how existing methods could be improved to generate speech that is more emotionally textured.

2020
Will Kerr

Autonomous Filming Systems: Towards Empathetic Imitation

CDE Ph.D. (University of Bath):

Academic Supervisors: Dr Tom Fincham Haines, Dr Wenbin Li

Film making is an artistic but resource-intensive process. The visual appearance of a finished film is the product of many departments, but directors and cinematographers play a significant role by applying professional expertise and style to the planning (pre-production) and production stages. Once each shot of the film is planned, a trade-off is made between the cost of multiple or highly experienced camera operators and the improved quantity and quality of footage captured. There is therefore scope to automate some aspects of film (pre-)production, so that increased coverage or professionalism can be achieved by film makers limited by finance or expertise.

Existing work in autonomous virtual film-making has focussed on actor and camera positioning, but there remains a gap in how the composition of the frame is designed, particularly how the background elements (shape, colour, focus etc) play a part in the aesthetics of the footage, in a style which is empathetic to the story.

This project takes the above scope forward by asking two principal questions:

1) How can the intent of a professional cinematographer be learnt from finished film content?

2) How can these learnings be applied back to new filming tasks in virtual or real environments?

Early work has focussed on 1) with a suite of visual analysis tools and film datasets, providing some evidence of cinematographic styles that were applied for particular films. The second step will develop a virtual filming environment, apply style to virtual shot composition, and offer comparisons to existing film footage (imitation).

2018
Robert Kosk

Biomechanical Parametric Faces Modelling and Animation

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisor: Professor Xiaosong Yang

Industrial Partner: Humain

Industrial Supervisor: Willemn Kokke

Modelling and animation of high-quality, digital faces remains a tedious and challenging process. Although sophisticated data-capture and manual processing allow realistic results in offline production, there is demand in the rapidly developing virtual reality industry for fully automated and flexible methods.

My project aims to develop a parametric template for physically-based facial modelling and animation, which will:

– automatically generate any face, either existing or synthetic,

– intuitively edit the structure of a face without affecting the quality of animation,

– reflect the non-linear nature of facial movement,

– retarget facial performance, accounting for the anatomy of particular faces.

The ability to generate faces with governing, meaningful parameters such as age, gender or ethnicity is a crucial objective for wider adoption of the system among artists. Furthermore, the template can be extended with numerous novel applications, such as animation retargeting driven by muscle activations, fantasy character synthesis or digital forensic reconstruction.

Download Robert’s Research Profile

2019
Kris Kosunen
VR Empathy Training for Clinical Staff
Industrial Partner: Royal United Hospital, Bath
Industrial Supervisor: Dr Chris Dyer
CDE EngD in Digital Entertainment (University of Bath):
Academic Supervisors: Dr Christof Lutteroth, Professor Eamonn O’Neill

Empathy for patients is important for good clinical outcomes. It can sometimes be challenging to develop empathy and understanding for cognitive or mental disorders because it is hard to imagine what they feel like. For example, people affected by dementia or psychosis may not show physical symptoms but may behave unusually. Without an emotional understanding of such conditions, it can be difficult for clinical staff to treat people effectively.

Virtual reality (VR) is being used increasingly for learning and training. VR makes it possible to immerse users in complex interactive scenarios, allowing them to safely experience and practice situations that would be difficult to arrange in reality. This creates new opportunities for VR in clinical training. In this project, we will develop a VR simulator that helps clinical staff to develop empathy and understanding for people affected by cognitive or mental disorders.

2019
Nick Lindfield

Deep Neural Networks for Computer-Generated Holography

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisors: Professor Wen Tang, Professor Feng Tian

Industrial Partner:  Vivid Q

Industrial Supervisor: Andrzej Kaczorowski

Computer-generated holography is a display technology that uses diffraction and interference of light to reconstruct fully three-dimensional objects. These objects appear three-dimensional because holograms produce depth cues that are processed by our brains in a way that is consistent with their experience of the natural world, unlike stereoscopic displays, which produce conflicting depth cues and often cause nausea.

Holographic displays fall within the category of computational displays, for which the software element is the major factor dictating the properties of the final output image (such as image quality and depth perception).

Yet, the calculation of those holographic patterns is complex in both production and analysis. Neural networks are a potential method to simplify and speed up these processes while retaining a high level of quality.
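
To make that cost concrete, here is a minimal sketch of a classical iterative approach (Gerchberg-Saxton phase retrieval) for a two-dimensional, far-field toy case; it is this kind of repeated FFT-based iteration that learned, network-based hologram generation aims to accelerate or replace. The setup and parameters are illustrative assumptions, not the partner's pipeline.

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50):
    """Classical iterative phase retrieval for a phase-only hologram whose
    far-field diffraction pattern (modelled as a single FFT) reproduces the
    target intensity."""
    target_amp = np.sqrt(target_intensity)
    phase = np.random.uniform(0, 2 * np.pi, target_intensity.shape)
    for _ in range(iterations):
        image_field = np.fft.fft2(np.exp(1j * phase))          # hologram -> image plane
        image_field = target_amp * np.exp(1j * np.angle(image_field))  # impose target amplitude
        hologram_field = np.fft.ifft2(image_field)             # image -> hologram plane
        phase = np.angle(hologram_field)                       # keep phase, drop amplitude
    return phase

# Example: compute a phase pattern for a simple bright-square target.
target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0
holo_phase = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * holo_phase))) ** 2
```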

The main goal of this project is to develop an algorithm to determine the visual quality of computer-generated holograms. A secondary research direction is using neural networks to produce a hologram.

Determining the quality of a hologram is a difficult task with few publicised solutions. This is because holograms viewed directly are essentially three-dimensional structures, continuous in all three spatial coordinates. Hence, existing quality evaluation methods need to be rethought to incorporate a much wider scope of the problem. Neural networks can be used to analyse highly complex collections of information in a way that is highly generic; instead of focusing on predetermined features they can learn what to focus on based on context.

Neural networks have been demonstrated to replicate more complex operations, producing an output of comparable quality to the original in a much shorter time scale. Recently, the combination of holography and neural networks has received significant academic attention, from MIT and Stanford. Therefore the secondary direction of this project is to explore the use of neural networks to compute holograms and correct for imperfections in realistic holographic projections, in real time.

2022
Xiaoxiao Liu

Continuous Learning in Natural Conversation Generation

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Prof Jian Chang

As a crucial component of a medical chatbot, the natural language generation (NLG) module converts a dialogue act represented in semantic form into a response in natural language. Continuous, meaningful conversation in a dialogue system requires not only understanding the dynamic content of the ongoing conversation, but also generating up-to-date responses according to its context. In particular, the conversation generation module should convert responses represented in semantic form into natural language. Giving appropriate responses increases affective interaction and encourages users to give more detailed information about their symptoms, which in turn allows the conversation generation to better assist the diagnosis system.

In this research, I will develop a medical conversation generation system and focus on enhancing the naturalness of the generated responses, so that the user experience of the medical chatbot is improved.
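
As a minimal illustration of the dialogue-act-to-text step described above, the sketch below uses hand-written templates; the dialogue acts, slots and wording are hypothetical examples, and a learned generation model would replace the templates while keeping the same semantic-form input.

```python
# Hypothetical dialogue acts and templates for illustration only.
TEMPLATES = {
    "request(symptom_duration)": "How long have you been experiencing {symptom}?",
    "inform(advice)": "Based on what you've told me, I recommend {advice}.",
    "confirm(symptom)": "Just to check, you've been having {symptom}, is that right?",
}

def realise(dialogue_act: str, **slots) -> str:
    """Convert a dialogue act in semantic form into a natural-language response."""
    template = TEMPLATES.get(dialogue_act)
    if template is None:
        return "Could you tell me a little more about that?"   # safe fallback
    return template.format(**slots)

print(realise("confirm(symptom)", symptom="a persistent cough"))
print(realise("request(symptom_duration)", symptom="the headaches"))
```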

2019
Philip Lorimer
Supervisors: Dr Wenbin Li, Dr Alan Hunter, Andy Nancollis

Autonomous Robots for Professional Filming

CDE EngD in Digital Entertainment (University of Bath):

The typical production pipeline involves considerable effort by industry professionals to plan, capture and post-produce an outstanding commercial film. Workflows are often heavily reliant on human input along with a finely tuned robotics platform.

The research project explores the use of autonomous robots for professional filming, particularly investigating the use of reinforcement learning for learning and executing typical filming techniques.

The primary aim is to design a fully autonomous pipeline for a robot to plan moving trajectories and perform the capture.

2018
Neerav Nagda
Industrial Supervisor: James Coore

Asset Retrieval Using Knowledge Graphs and Semantic Tags

Industrial Partner: Absolute Post

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisors: Prof Xiaosong Yang, Prof Jian Chang

My project aims to enable searching, viewing and retrieving digital assets within a database of the entire company's works, from a single application.

There are three major challenges that this project aims to solve:

  1. Searching and retrieving specific data.

The current method is not specific: data can be found, but the result usually contains both the required data and a larger set of irrelevant data. The goal is to avoid retrieving irrelevant data, which will significantly reduce data transfer times.

  2. Understanding the contents of a file without needing to open it in specialised software.

This can be achieved by generating visual previews to see the contents of a file. The generation of semantic tags will allow for quicker and more efficient searching.

  3. Finding connections in data, such as references and dependencies.

Some files may import or reference data from other files. This linked data can be addressed by creating a semantic web or knowledge graph. Often there are entities that are not necessarily represented by a file, such as a project, but that have many connections to other entities. Such entities become handles in the knowledge graph, which can be used to locate a collection of connected entities.
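
A minimal sketch of this idea using the rdflib library, with a hypothetical studio namespace, asset names and file path: a project entity with no file of its own acts as a handle from which connected assets can be located with a single query.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/studio/")   # hypothetical namespace
g = Graph()

# A project entity with no file of its own acts as a handle that
# connects shots, assets and their storage locations.
g.add((EX.ProjectX, EX.hasShot, EX.Shot010))
g.add((EX.Shot010, EX.usesAsset, EX.SpaceshipModel))
g.add((EX.SpaceshipModel, EX.storedAt, Literal("/archive/assets/spaceship.abc")))
g.add((EX.SpaceshipModel, EX.hasTag, Literal("spaceship")))

# Find every asset (and its location) reachable from any project handle.
results = g.query("""
    SELECT ?asset ?path WHERE {
        ?project <http://example.org/studio/hasShot> ?shot .
        ?shot <http://example.org/studio/usesAsset> ?asset .
        ?asset <http://example.org/studio/storedAt> ?path .
    }""")
for asset, path in results:
    print(asset, path)
```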

The disciplines that this project covers are:

  • Web science
  • Big Data
  • Data Mining
  • Computer Vision
  • Natural Language Processing

The integration of such a system in industry would significantly reduce searching and retrieval times for data. This can be used in many scenarios, for example:

  • Retrieving data from backups for further work

A common task is to retrieve a project from archives. Most of the time the entire project is not required to be unarchived, so finding the specific data significantly reduces unarchiving times.

  • Reducing duplication of data

If a digital asset can be reused, it can be found from this system and imported into another project. This saves the time of remaking previous work.

  • Reviewing work

Searches can be filtered, for example finding all work produced in the previous day or sorting works by date and time. Creating a live feed would allow for quicker access to data to review works.

2021
Keji Neri

Ph.D (University of Bath):

I am part of the Mathematical Foundations of Computation research group. I shall be working on applying techniques from proof theory to proofs in order to extract more information from them. My supervisor is Thomas Powell, and I was attracted to Bath particularly by the project that was on offer!

2019
Kari Noriy

Incremental Machine Speech Chain for Realtime Narrative Stories

CDE Ph.D. (Bournemouth University):

Academic Supervisor: Professor Xiaosong Yang

Speech and text remain the main forms of communication for human-to-human interaction, allowing us to communicate and coordinate ideas. My research focuses on human-computer interaction (HCI), namely the synthesis of natural-sounding speech for use in interactive, story-driven experiences, allowing natural, flowing conversation between a human and a computer in low-latency environments.

Current mechanisms require the entire input sequence, so there is a significant delay between input and output, breaking the immersion. In contrast, humans can listen and speak in real time; if there is a significant delay, they are unable to converse naturally.

Another area of interest is the addition of the imperfections in synthesised speech that drive its believability, including prosody, suprasegmentals, interjections, discourse markers, intonation, tone, stress and rhythm.

2018
Karolina Pakenaite

An Investigation into Tactile Images for the Visually-Impaired

EngD in Digital Entertainment (University of Bath):

Academic Supervisor: Prof Peter Hall

My aim is to provide the visually impaired community with access to photographs using sensory substitution. I am investigating the functionality of photographs translated into simple pictures, which will then be printed in tactile form. Some potential contributions could be the introduction of denotation and projection with regard to style. Beneficiaries could also extend beyond computing into other academic disciplines such as electronic engineering and education. Accessible design is also essentially inclusive design for all. Sighted individuals often feel tempted to touch art pieces in museums or galleries, and while many artworks were originally created to be touched, we usually observe a cardinal no-touch rule to preserve them. Accessibility features may be designed for a particular group of the community, but they can and usually do end up being used by a wider range of people. Towards the end of my research, I hope to adapt my work for use by blind primary school children.

To get simplified pictures, I recently tried translating photographs into two different styles: ‘Icons Representation’ and ‘Shape Representation’.

For the Icons Representation of a photograph, I used a combination of object detection and salient-object detection algorithms to identify only the salient objects. I used Mask R-CNN object detection and combined its output with a saliency map from the PiCANet detection algorithm, which gives the probability that a given pixel belongs to a salient object within the image. Each detected salient object is replaced with a corresponding simplified icon on a blank canvas of the same size as the input image. Geometric transformations are applied to the icons to avoid any overlaps, and a background edge map is added to give further context about the image.
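
A simplified sketch of the selection step, assuming the instance masks and the saliency map have already been produced by the detection and saliency networks; the inputs below are toy arrays and the threshold is an illustrative choice, not the values used in the project.

```python
import numpy as np

def select_salient_objects(masks, labels, saliency, threshold=0.5):
    """Keep only detected objects whose region is, on average, salient.
    `masks` are per-object boolean maps (e.g. from an instance-segmentation
    model such as Mask R-CNN), `saliency` is a per-pixel saliency map in
    [0, 1], both at the same image size."""
    kept = []
    for mask, label in zip(masks, labels):
        if mask.sum() == 0:
            continue
        if saliency[mask].mean() >= threshold:
            # Record the label and the mask centroid, where the matching
            # icon would later be placed on the blank canvas.
            ys, xs = np.nonzero(mask)
            kept.append((label, (int(xs.mean()), int(ys.mean()))))
    return kept

# Toy inputs standing in for real detector and saliency-network outputs.
h, w = 240, 320
masks = [np.zeros((h, w), dtype=bool)]
masks[0][60:120, 100:180] = True
saliency = np.zeros((h, w))
saliency[50:130, 90:190] = 0.9
print(select_salient_objects(masks, ["dog"], saliency))
```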

For the Shape Representation of an object, I experimented with different image segmentation methods and replaced each segment with the most appropriate canonical shape, using a method introduced by Voss and Suße. Segments are normalised into a canonical frame using a whitening transform. These normalised shapes are then compared with the canonical shapes in the library to decide which correlates most strongly, and an inverse transform is applied to the chosen library shape. In effect, the library shapes are moulded so that they closely match their corresponding segments, giving simplified images of objects built from a combination of shapes. We plan to have these Shape Representations printed in 3D.
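
A rough sketch of the normalisation-and-matching idea, assuming segments and library shapes are given as 2D point sets; the radial-histogram similarity used here is a crude stand-in for the correlation step in the actual method.

```python
import numpy as np

def whiten(points):
    """Normalise a 2D point set (pixels of one segment) into a canonical
    frame: zero mean and identity covariance, as in Voss and Suesse's
    normalisation approach."""
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)
    vals, vecs = np.linalg.eigh(cov)                   # inverse square root of covariance
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-8)) @ vecs.T
    return (points - mean) @ W.T, W, mean

def best_library_match(segment_points, library):
    """Compare the whitened segment against whitened library shapes and
    return the best match plus the transform that maps the library shape
    back onto the original segment."""
    seg_norm, W, mean = whiten(segment_points)
    best_name, best_score = None, -np.inf
    for name, shape_points in library.items():
        shape_norm, _, _ = whiten(shape_points)
        # Crude similarity: compare radial histograms of the normalised sets.
        h1, _ = np.histogram(np.linalg.norm(seg_norm, axis=1), bins=16, range=(0, 3), density=True)
        h2, _ = np.histogram(np.linalg.norm(shape_norm, axis=1), bins=16, range=(0, 3), density=True)
        score = -np.sum((h1 - h2) ** 2)
        if score > best_score:
            best_name, best_score = name, score
    inverse = np.linalg.inv(W)        # maps the canonical frame back onto the segment
    return best_name, inverse, mean

square = np.array([[x, y] for x in range(10) for y in range(10)], dtype=float)
segment = square * [3.0, 1.0] + [40.0, 60.0]          # stretched, shifted square
library = {"square": square,
           "circle": np.array([[np.cos(t), np.sin(t)]
                               for t in np.linspace(0, 2 * np.pi, 60)])}
print(best_library_match(segment, library)[0])        # -> "square"
```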

Due to Covid-19, we were unable to test these tactile images with participants using touch, but a few obvious limitations were found. We will continue to investigate and improve our simplified images. Computer vision will allow us to create autonomous functionality for translating photographs into tactile images, which we hope will reduce the cost of tactile image production. We will also draw on the psychology of object recognition and experiment with human participants to make our implementation as effective as possible for real users. This combination of computer science and psychology will prepare us to adapt our work for use in primary education. This could mean teaching congenitally blind children to understand the different sizes of objects that are rarely touched (e.g. an elephant or a mouse), or teaching them to indicate the distance of an object on paper by drawing far-away objects smaller.

2021
Abdul Rehman

Machine learning for discerning paralingual aspects of speech using prosodic cues

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Professor Xiaosong Yang

Computers are currently limited in their ability to understand human speech as humans do because they lack an understanding of aspects of speech other than the words themselves. The scope of my research is to identify the shortcomings in speech processing systems' ability to process such speech cues and to look for solutions that enhance computers' ability to understand not just what is being said but also how it is being said.
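
As an illustration of the kind of prosodic cues involved, the sketch below extracts a pitch contour and energy with librosa from a toy signal; in practice these features (and many others) would feed a classifier for the paralinguistic aspects of interest, and the parameter choices here are illustrative.

```python
import numpy as np
import librosa

def prosodic_features(y, sr):
    """Extract two simple prosodic cues, pitch (F0) and energy, which carry
    much of the 'how it is said' information alongside the words."""
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C6"), sr=sr)
    energy = librosa.feature.rms(y=y)[0]
    return {
        "mean_f0": float(np.nanmean(f0)),             # average pitch of voiced frames
        "f0_range": float(np.nanmax(f0) - np.nanmin(f0)),
        "mean_energy": float(energy.mean()),
        "voiced_ratio": float(np.mean(voiced_flag)),  # rough pausing/phonation proxy
    }

# Toy input: a 220 Hz tone standing in for a short utterance.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
print(prosodic_features(0.1 * np.sin(2 * np.pi * 220 * t), sr))
```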

2018
Olivia Ruston

Designing Interactive Wearable Technology

CDE EngD in Digital Entertainment (University of Bath):

Academic Supervisors: Professor Mike Fraser, Professor Jason Alexander

This research focuses on wearables and e-textiles, considering fashion design and construction processes and their socio-cultural impact. My most recent work has involved creating and experimenting with bodice garments to understand how information about their motion might help people learn about the way they move, so that they can learn to move better.

2017
Marcia Saul
Industrial Supervisor: Stuart Black

A Two-Person Neuroscience Approach for Social Anxiety

Industrial Partner: BrainTrainUK

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisors: Prof Fred Charles, Dr Xun He

Can we use games technology and EEG to help us understand the role of interbrain synchrony on people experiencing the symptoms of social anxiety?

A Two-Person Neuroscience Approach for Social Anxiety: Prospects into Bridging Intra- & Inter-brain Synchrony with Neurofeedback.

My main field of interest is computational neuroscience, brain-computer interfaces and machine learning with the use of games in applications for rehabilitation and improving the quality of life for patients/persons in care.

Social anxiety has become one of the most prominent anxiety disorders, with many of its symptoms overlapping with other mental disorders such as depression, autism spectrum disorder, schizophrenia and ADHD. Neurofeedback (NF) is well known to modulate these symptoms using a metacognitive approach in which a participant's brain activity is relayed back to them for self-regulation of the target brainwave patterns. In this project, we integrate intra- and inter-brain synchrony to explore the potential of a more effective NF procedure. By using realistic multimodal feedback in the delivery of NF, we can amplify collaboration or co-operation during tasks, utilising the 'power of two' in two-person neuroscience to synchronise brainwaves between two participants, with the aim of alleviating symptoms of social anxiety.
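
One simple candidate measure of inter-brain synchrony is the phase-locking value between one EEG channel from each participant. The sketch below computes it on toy signals; band-pass filtering, artefact rejection and channel selection are omitted, and this is an illustration rather than the project's pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(eeg_a, eeg_b):
    """Phase-locking value between one EEG channel from each participant:
    1 means the two signals keep a fixed phase relationship, 0 means no
    consistent phase relationship."""
    phase_a = np.angle(hilbert(eeg_a))
    phase_b = np.angle(hilbert(eeg_b))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Toy example: two noisy 10 Hz (alpha-band) signals with a fixed phase lag.
t = np.arange(0, 4, 1 / 256)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * np.random.randn(t.size)
print(phase_locking_value(a, b))   # close to 1 for these phase-locked signals
```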

View Marcia’s Research Outputs 

2021
Tom Smith

Model-based hierarchical reinforcement learning for real-world control tasks

Ph.D (University of Bath):

Academic Supervisor: Dr Özgür Şimşek

I’m returning to academia after 7 years in industry (mostly in the automotive industry), completing my MSc in Robotics and Autonomous Systems at Bath last year before starting my PhD journey.  I’m excited and optimistic about the impact autonomous systems and AI will have on our lives and think that learning systems are fundamental to this.  I’m particularly interested in how agents can learn and apply hierarchical models of an environment to improve their performance.

2019
Ben Snow
Industrial Supervisor: Greg Dawson

Griffon Hoverwork Simulator for Pilot Training

Industrial Partner: Griffon Hoverwork

CDE EngD in Digital Entertainment (Bournemouth University):

Academic Supervisor: Prof Jian Chang

Griffon Hoverwork (GHL) are both pioneers and innovators in the hovercraft space. With over 50 years of experience making, driving and collecting data about hovercraft, GHL has the resources to build a realistic and informative training simulator. We will design a virtual environment in which prospective hovercraft pilots can train, receive feedback and have fun driving a physically realistic hovercraft. The simulator will incorporate the experience of GHL's highly trained pilots and a wealth of craft data collected from real vehicles to provide a simulation tailored to the Griffon 2000TD craft. GHL's training protocols will be used to provide specific learning objectives and to give feedback to novice and professional pilots on all aspects of craft operation. Creating a realistic hovercraft model will also allow the simulation environment to be used as a research testbed for future projects.

2019
Luke Worgan

Enhancing Perceptions of Immersion

CDE EngD in Digital Entertainment (University of Bath):

Academic Supervisors: Professor Mike Fraser, Professor Jason Alexander

Enhancing Perceptions of Immersion within Multi-Sensory Environments through the Introduction of Scent Stimuli Using Ultrasonic Particle Manipulation.

Present virtual reality environments focus on providing a rich, immersive audio-visual experience; however, the technology required to enhance a user's perception of smell, touch or taste has yet to reach the same level of sophistication and remains largely absent from virtual and augmented reality systems. Existing technologies rely on fan-based systems, which may lack temporal and spatial resolution. This research project explores ultrasonic particle manipulation, the process of isolating and manipulating the behaviour of individual particles within an acoustic field, as a method for enhancing olfactory resolution. The research will focus on the development of a discreet ultrasonic system designed to introduce scent stimuli into multi-sensory environments and increase user perceptions of immersion.

2017
Michelle Wu

Motion Representation Learning with Graph Neural Networks

CDE Ph.D. (Bournemouth University):

Academic Supervisors: Dr Zhidong Xiao, Dr Hammadi Nait Charif

The animation of digital characters can be a long and demanding process: the human eye is very sensitive to unnatural motion, which means animators need to pay extra attention to create realistic and believable animations. Motion capture can be a helpful tool here, as it allows the movements performed by actors to be captured directly and converted into mathematical data. However, dealing with dense motion data presents its own challenges, and this usually translates into studios having difficulty reusing the large collections of motion data available, often resorting, in the end, to capturing new data instead.

To promote the recycling of motion data, time-consuming tasks (e.g. manual data cleaning and labelling) should be automated by developing efficient methods for classifying and indexing data, allowing motions to be searched for and retrieved from databases. At the core of these approaches is the learning of a discriminative motion representation. A skeleton can naturally be represented as a graph, where nodes correspond to joints and edges to bones. However, many human actions require far-apart joints to move collaboratively, and to capture these internal dependencies between joints (even those without bone connections) we can leverage the potential of graph neural networks to adaptively learn a model that extracts both spatial and temporal features. This will allow us to learn potentially richer motion representations that will form the basis for the tasks of motion classification, retrieval and synthesis.
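
As a minimal illustration of treating the skeleton as a graph, the sketch below implements one spatial graph-convolution layer in PyTorch over a toy five-joint skeleton; a full model would stack such layers with temporal convolutions (ST-GCN style) and train them on labelled motion data. The joint count and edge list are illustrative, not the representation used in this project.

```python
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    """One spatial graph-convolution layer over a skeleton: each joint's
    features are mixed with its neighbours' via a normalised adjacency
    matrix and passed through a shared linear map."""
    def __init__(self, in_features, out_features, edges, num_joints):
        super().__init__()
        A = torch.eye(num_joints)                    # self-connections
        for i, j in edges:                           # bones connect joints
            A[i, j] = A[j, i] = 1.0
        deg = A.sum(dim=1, keepdim=True)
        self.register_buffer("A_norm", A / deg)      # row-normalised adjacency
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):                            # x: (batch, joints, features)
        return torch.relu(self.linear(self.A_norm @ x))

# Toy 5-joint chain (hip-spine-neck-head plus one arm), 3D positions as input.
edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
layer = SkeletonGraphConv(3, 16, edges, num_joints=5)
motion_frame = torch.rand(8, 5, 3)                   # batch of 8 poses
print(layer(motion_frame).shape)                     # -> (8, 5, 16)
```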

View Michelle’s Research Outputs

2021
Huan Xu

Common Sense Reasoning for Conversational AI

Intel-PA Ph.D (Bournemouth University):

Academic Supervisor: Professor Wen Tang

Conversational AI allows computer chatbots to interact with people in a human-like way by bridging the gap between human language and computer language. However, existing conversational chatbots are mainly based on predefined command patterns, and making conversational AI behave in a human-like way remains challenging. Applying common sense knowledge to conversational AI is a viable solution: with common sense reasoning, chatbots can better understand human conversation, drawing not just on context information but also on common knowledge. This can make communication between humans and computers straightforward and natural, and improve the customer experience by better understanding a human's intentions. The goal of my research is to find a domain-specific knowledge hunting approach and apply common sense knowledge to task-driven conversational AI, making the agent aware of common sense knowledge and, in turn, more human-like, providing a better user experience.
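
A very rough sketch of how a common sense lookup could ground a chatbot reply; the triples, relations and wording below are hand-written, hypothetical examples, whereas a real system would hunt such knowledge from large domain-specific sources.

```python
# A tiny, hand-written common-sense store; in practice this would be hunted
# from a large knowledge source for the target domain.
COMMON_SENSE = {
    ("umbrella", "used_for"): "staying dry in the rain",
    ("kettle", "used_for"): "boiling water",
    ("rain", "causes"): "wet ground",
}

def answer_with_common_sense(user_utterance: str) -> str:
    """Look for an entity the agent has common-sense knowledge about and use
    it to ground the reply, instead of relying only on the literal words."""
    for (entity, relation), fact in COMMON_SENSE.items():
        if entity in user_utterance.lower():
            if relation == "used_for":
                return f"A {entity} is typically used for {fact}."
            if relation == "causes":
                return f"Keep in mind that {entity} usually causes {fact}."
    return "I'm not sure, could you tell me more?"

print(answer_with_common_sense("Should I take an umbrella today?"))
```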

© 2024 The Centre for Digital Entertainment (CDE). All rights reserved.