PhDs

Bournemouth University

Centre for Digital Entertainment – Intelligent Virtual Personal Assistant (Intel-PA) Project

During 2020, Bournemouth University (BU) held a competition to initiate a research-theme-based Doctoral Training Centre (DTC). The competition was won by Professor Jian Zhang, Co-Director of the CDE, with the research theme of the Intelligent Virtual Personal Assistant (Intel-PA), an AI-powered conversational animated avatar.

The Intel-PA research theme builds on the research strengths of BU's National Centre for Computer Animation (NCCA), namely animation, simulation and visualisation, by investigating a new topic known as Conversational Artificial Intelligence (Conversational AI). The Intel-PA project aims to significantly elevate the believability and usefulness of existing virtual humans.

The Intel-PA DTC is founded on the academic excellence and operational experience of the Centre for Digital Entertainment (CDE), a multi-million-pound EPSRC-funded doctoral training centre operated by FMC for over 10 years in equal partnership with the University of Bath. Six match-funded PhDs form two BU CDE cohorts (20/21 and 21/22), tackling the fundamental research issues and applications of the Intelligent Virtual Personal Assistant. The six match-funded PhDs, each a 4-year doctoral programme, represent a BU financial investment of £544k over the period 2021 to 2025. Having two cohorts of three PhD students working on the same research theme creates a supportive environment among the students, the supervising academics and the BU CDE team. The second cohort builds on the research findings of the first.

Taking advantage of the CDE's experience, the Intel-PA PhD students have the opportunity to gain industrial experience through placements of up to six months with industrial partners, arranged by the BU CDE project manager. Students also participate in CDE cohort building and skills training, following established CDE practice, to develop into well-rounded independent researchers.

The Intelligent Virtual Personal Assistant consists of three landmark components that enable the Intel-PA to perform human-like communication with real humans: holding sensible conversations; producing appropriate body gestures and facial expressions; and detecting the motion and emotions of the human user. By performing such human-like functions, Intel-PA will be able to benefit people in countless applications, such as healthcare, training, education, marketing and manufacturing decision making, to name but a few. Find out more about the current Intel-PA PhD cohort below:

PGR Research Project Outline

Abdul Rehman

Machine learning for discerning paralingual aspects of speech using prosodic cues

Supervisor: Dr Xiaosong Yang

Computers are currently limited in their ability to understand human speech as humans do because they lack an understanding of aspects of speech beyond the words themselves. The scope of my research is to identify the shortcomings in speech processing systems' handling of such speech cues and to look for solutions that enhance computers' ability to understand not just what is being said but also how it is being said.
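To make this concrete, the sketch below (an illustration only, not the project's pipeline) extracts two classic prosodic cues, the pitch contour and the energy contour, using the librosa library; the file name "utterance.wav" is a hypothetical example.

```python
# A minimal sketch of extracting prosodic cues that carry
# paralinguistic information, assuming librosa is installed.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical file

# Fundamental frequency contour: how the pitch rises and falls.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)

# Short-time energy contour: loudness over time.
energy = librosa.feature.rms(y=y)[0]

# Simple utterance-level prosodic statistics that a classifier
# could use alongside the words themselves.
features = {
    "pitch_mean": float(np.nanmean(f0)),
    "pitch_range": float(np.nanmax(f0) - np.nanmin(f0)),
    "energy_mean": float(energy.mean()),
    "voiced_ratio": float(np.mean(voiced)),
}
```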

Background:

MSc Control Science and Technology from China University of Geosciences, Wuhan

BSc Electrical Engineering from Pakistan Institute of Engineering and Applied Sciences

Kavisha Jayathunge

Emotionally Expressive Speech Synthesis

Supervisor: Dr Richard Southern

Emotionally expressive speech synthesis for a multimodal virtual avatar.

Virtual conversation partners powered by Artificial Intelligence are ubiquitous in today's world, from personal assistants in smartphones to customer-facing chatbots in retail and utility service helplines. Currently, these are typically limited to conversing through text, and where there is speech output, this tends to be monotone and (funnily enough) robotic. The long-term aim of this research project is to design a virtual avatar that picks up information about a human speaker from multiple sources (audio, video and text) and uses this information to simulate a realistic conversation partner. For example, it could determine the emotional state of the person speaking to it by examining their face and vocal cadence. We expect that taking such information into account when generating a response would make for a more pleasant conversation experience, particularly when a human needs to speak to a robot about a sensitive matter. The virtual avatar will also be able to speak out loud and project an image of itself onto a screen. Using context cues from the human speaker, the avatar will modulate its voice and facial expressions in ways that are appropriate to the conversation at hand.

The project is a group effort and I'm working with several other CDE researchers to realise this goal. I'm specifically interested in the speech synthesis aspect of the project, and how existing methods could be improved to generate speech that is more emotionally textured.
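As a rough sketch of one common approach (my illustration, not necessarily the project's design), a synthesiser's text encoder can be conditioned on a learned emotion embedding so that the downstream acoustic decoder can shape prosody to match the target emotion; all names and sizes below are assumptions.

```python
# A minimal sketch of emotion-conditioned TTS encoding: a learned
# emotion embedding is broadcast and concatenated onto every phoneme
# encoding before the recurrent encoder.
import torch
import torch.nn as nn

class EmotionConditionedEncoder(nn.Module):
    def __init__(self, n_phonemes=70, n_emotions=5, d_text=256, d_emo=64):
        super().__init__()
        self.text_emb = nn.Embedding(n_phonemes, d_text)
        self.emotion_emb = nn.Embedding(n_emotions, d_emo)
        self.rnn = nn.GRU(d_text + d_emo, d_text, batch_first=True,
                          bidirectional=True)

    def forward(self, phoneme_ids, emotion_id):
        # phoneme_ids: (batch, seq); emotion_id: (batch,)
        text = self.text_emb(phoneme_ids)                    # (B, T, d_text)
        emo = self.emotion_emb(emotion_id)                   # (B, d_emo)
        emo = emo.unsqueeze(1).expand(-1, text.size(1), -1)  # (B, T, d_emo)
        encoded, _ = self.rnn(torch.cat([text, emo], dim=-1))
        return encoded  # fed to an attention-based acoustic decoder

enc = EmotionConditionedEncoder()
out = enc(torch.randint(0, 70, (2, 30)), torch.tensor([1, 3]))
```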

Background: MEng in Electronics and Software Engineering from the University of Glasgow

Ruibin Wang

Intelligent Dialogue System for Automatic Diagnosis

Supervisor: Dr Xiaosong Yang

The automatic diagnosis of diseases has drawn increasing attention from both research communities and the health industry in recent years. Because the conversation between a patient and a doctor provides many valuable clues for diagnosis, dialogue systems are a natural fit for simulating the consultation process between doctors and patients. Existing dialogue-based diagnosis systems are mainly data-driven and rely heavily on statistical features drawn from large amounts of data, which are often unavailable. Previous work has shown that incorporating a medical knowledge graph into a diagnosis prediction system effectively improves prediction performance and robustness against data insufficiency and noise.

The aim of my project is to propose a new dialogue-based diagnosis system which can not only communicate efficiently with patients to obtain symptom information, but can also be guided by medical knowledge to make accurate diagnoses more efficiently. The proposed approach is a knowledge-based GCN (graph convolutional network) dialogue system for automatic diagnosis.
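As a minimal sketch of the graph-convolution step such a system builds on (an illustration under my own assumptions, not the thesis implementation), each node of a toy medical knowledge graph aggregates features from its neighbours:

```python
# One Kipf-and-Welling-style graph-convolution layer: nodes (e.g.
# symptoms and diseases) refine their embeddings by mixing in their
# neighbours' features over the knowledge-graph structure.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim) node features; adj: (nodes, nodes) adjacency.
        a = adj + torch.eye(adj.size(0))          # add self-loops
        deg_inv_sqrt = a.sum(dim=1).pow(-0.5)     # symmetric normalisation
        a_norm = deg_inv_sqrt.unsqueeze(1) * a * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ x))

# Toy graph: 4 nodes, e.g. two symptoms linked to two diseases.
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
x = torch.randn(4, 16)           # initial node embeddings
h = GCNLayer(16, 32)(x, adj)     # refined embeddings guide the diagnosis
```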

My research interests focus on natural language processing, medical AI, chatbots, dialogue systems, deep learning algorithms and knowledge graphs.


Background

BSc, Applied Mathematics, Southwest Jiaotong University

MSc, Vehicle Operation Engineering, Southwest Jiaotong University

Jiajun Huang


Editing and Animating Implicit Neural Representations

Supervisor: Dr Hongchuan Yu


Recently, implicit neural representation methods have gained significant traction. By using neural networks to represent objects, they can photo-realistically represent and reconstruct 3D objects or scenes without expensive capture equipment or tedious human labour. This makes them an immensely useful tool for the next generation of virtual reality / augmented reality applications.

However, unlike their traditional counterparts, these representations cannot be easily edited, which limits their usability in industry, as artists cannot easily modify the represented object to their liking. They can also represent only static scenes, and animating them remains an open challenge.

The goal of my research is to address these problems by devising intuitive yet efficient methods to edit or animate implicit neural scene representations without losing their representation or reconstruction capabilities.
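For readers unfamiliar with the idea, here is a minimal sketch (a NeRF-style assumption of mine, not the project's code) of an implicit neural representation: an MLP that maps a 3D coordinate to colour and density, so the scene lives entirely in the network's weights.

```python
# An implicit scene representation: query any 3D point and the
# network returns its colour and volume density.
import torch
import torch.nn as nn

class ImplicitScene(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),    # RGB + density
        )

    def forward(self, xyz):
        out = self.net(xyz)
        rgb = torch.sigmoid(out[..., :3])   # colour in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

# The scene is just this function; editing or animating it means
# changing what the function computes, which is the open problem
# the project addresses.
pts = torch.rand(1024, 3)
rgb, sigma = ImplicitScene()(pts)
```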

Background

BSc in Network Engineering from South China Normal University

Xiaoxiao Liu

Continuous Learning in Natural Conversation Generation

Supervisor: Professor Jian Chang

As a crucial component of a medical chatbot, the natural language generation (NLG) module converts a dialogue act represented in semantic form into a response in natural language. Sustaining continuous, meaningful conversation in a dialogue system requires not only understanding the dynamic content of the ongoing conversation, but also generating up-to-date responses that fit its context. Giving appropriate responses increases affective interaction and encourages users to volunteer more detailed information about their symptoms, which in turn better supports the diagnosis system.

In this research, I will develop a medical conversation generation system, focusing on enhancing the naturalness of the generated responses so as to improve the user experience of the medical chatbot.
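As a deliberately simple sketch of the NLG step (illustrative only; the intents, slots and templates below are hypothetical), a semantic dialogue act can be mapped to a response string. A real system would replace the templates with a trained neural generator.

```python
# Template-based realisation of semantic dialogue acts: the baseline
# that learned natural language generation aims to improve on.
from typing import Dict

TEMPLATES = {
    "request_symptom": "Could you tell me whether you have {symptom}?",
    "inform_diagnosis": "Based on what you've described, this may be {disease}.",
    "clarify": "I'm sorry, could you describe that in a bit more detail?",
}

def generate(act: Dict[str, str]) -> str:
    """Map a dialogue act in semantic form to a natural-language response."""
    template = TEMPLATES.get(act["intent"], TEMPLATES["clarify"])
    slots = {k: v for k, v in act.items() if k != "intent"}
    return template.format(**slots)

print(generate({"intent": "request_symptom", "symptom": "a persistent cough"}))
# -> "Could you tell me whether you have a persistent cough?"
```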

Background

MSc, Computer Science, Swansea University


University of Bath

Manuel Rey Area

Deep View Synthesis For VR Video

Supervisor: Dr Christian Richardt

With the rise of VR, it is key for users to be fully immersed in the virtual world. Users must be able to move their head freely around the virtual scene, unveiling occluded surfaces, perceiving depth cues and observing the scene down to the last detail. Furthermore, if scenes are captured with casual devices (e.g. a smartphone), anyone could convert their 2D pictures into a fully immersive 3D experience, bringing a new digital representation of the world closer to ordinary users. The aim of this project is to synthesise novel views of a scene from a set of input views captured by the user. Ultimately, the scene's full 3D geometry must be reconstructed and depth cues preserved to allow 6-DoF (degrees of freedom) head motion and avoid the well-known VR sickness. The main challenge lies in generating synthetic views with a high level of detail, including reflections, shadows and occlusions, resembling reality as closely as possible.
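The geometric core of view synthesis can be sketched in a few lines (my simplification, not the project's method): unproject each pixel with its depth, transform it into the novel camera pose and reproject; learned components then refine the result and fill the disocclusions such a warp leaves behind. The intrinsics and pose below are toy values.

```python
# Depth-based forward warping from a source view to a novel view.
import numpy as np

def warp_to_novel_view(depth, K, R, t):
    """depth: (H, W) per-pixel depth; K: 3x3 intrinsics;
    R, t: rotation and translation of the novel camera."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each pixel to 3D in the source camera's frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Rigid transform into the novel camera, then project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    uv = (proj[:2] / proj[2:]).T.reshape(H, W, 2)
    return uv  # where each source pixel lands in the novel view

uv = warp_to_novel_view(np.ones((48, 64)), np.diag([60., 60., 1.]),
                        np.eye(3), np.array([0.1, 0.0, 0.0]))
```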

Background:

MSc Computer Vision, Autonomous University of Barcelona

BSc Telecommunications Engineering, University of Vigo

Tom Smith


Model-based hierarchical reinforcement learning for real-world control tasks

Supervisor: Dr Özgür Şimşek

I'm returning to academia after 7 years in industry, mostly automotive, having completed my MSc in Robotics and Autonomous Systems at Bath last year before starting my PhD. I'm excited and optimistic about the impact autonomous systems and AI will have on our lives, and I think that learning systems are fundamental to this. I'm particularly interested in how agents can learn and apply hierarchical models of an environment to improve their performance.


Keji Neri

Supervisor: Dr Thomas Powell

I am part of the Mathematical Foundations of Computation research group. I shall be working on applying proof-theoretic techniques to mathematical proofs in order to extract more information from them. My supervisor is Thomas Powell, and I was attracted to Bath particularly by the project on offer!

Daniel Castle

Research Project: Efficient and Natural Proofs and Algorithms

Supervisors: Dr Willem Heijltjes and Dr Alessio Guglielmi

I'm Daniel, a new CDE-aligned PhD student at the University of Bath. I recently completed a 4-year master's in computer science here at Bath, during which I particularly enjoyed the theoretical aspects of the subject. I was fortunate to meet and later work under the supervision of members of the Mathematical Foundations of Computation research group, including Dr Willem Heijltjes and Professor Nicolai Vorobjov, who were extremely helpful throughout and later encouraged me to apply for a PhD. I chose to stay on at Bath because of the great people I met, the exciting work being done in the group I am now a part of, and, of course, the wonderful city.

My interests broadly lie at the intersection of mathematics and computer science, in particular the field of proof theory, which studies mathematical proofs as formal objects. This is important from a computer science perspective because of a direct correspondence between computer programs and proofs (the Curry-Howard correspondence), leading to applications in areas such as automatic software verification and the design of new programming languages.
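As a small taste of that correspondence (my illustration, written in Lean), the very same lambda term can be read both as a proof of a logical proposition and as an ordinary program:

```lean
-- Read as logic: a proof that from A and A → B we may conclude B.
theorem modus_ponens (A B : Prop) : A → (A → B) → B :=
  fun a f => f a

-- Read as programming: the identical term applied to data types,
-- i.e. a function that feeds its first argument to its second.
def pipe (α β : Type) : α → (α → β) → β :=
  fun a f => f a
```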


MSCA FIRE Fellows

As part of the European Commission's Marie Skłodowska-Curie Research Fellowship Programme for Doctoral degrees, the Centre for Digital Entertainment at the University of Bath supports 10 Fellows with Industrial Research Enhancement (FIRE).

The FIRE programme is an integrated 4-year doctoral training programme, bringing together two national Centres for Doctoral Training: the Centre for Sustainable Chemical Technologies (CSCT) and the Centre for Digital Entertainment (CDE). The Marie Skłodowska-Curie Actions (MSCA) FIRE programme is delivering autonomous, creative, highly skilled scientists and engineers who are fully ready for careers in international industries, and provides a model for intersectoral, interdisciplinary doctoral training in an international environment. Industrial partners ensure that the research carried out is relevant and enhances the employability of graduates, both in Europe and globally.

Our Fellows receive training in scientific, societal and business aspects of digital entertainment and conduct challenging PhD research. All projects are interdisciplinary and supported by industrial or international partners.

The positions are based at the University of Bath and require some continuing involvement with an appropriate company.

This project receives funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992.

Our current MSCA FIRE Fellows are:

FIRE Fellow Research Project Outline

Tobias Bertel


Light field synthesis from existing imagery

Image-based rendering of real environments for virtual reality with Dr Christian Richardt, Dr Neill Campbell and Professor Darren Cosker; Industrial Partner: Technical University of Brunswick.

Thu Nguyen Phuoc 


Interactive Fabrication-aware Architectural Modelling

Neural rendering and inverse rendering using physical inductive biases with Dr Yongliang Yang and Professor Eamonn O'Neill; Industrial Partner: Lambda Labs.


Yassir Saquil

Machine learning for semantic-level data generation and exploration with Dr Yongliang Yang and Professor Peter Hall.

Soumya C Barathi

Interactive Feedforward in High Intensity VR Exergaming

Adaptive Exergaming with Dr Christof Lutteroth and Professor Eamonn O'Neill; Industrial Partner: eLearning Studios.

Project completed and fellow graduated.


Youssef Alami Mejjati


Multitask Learning for Heterogeneous Data

Creative editing and synthesis of objects in photographs using generative adversarial networks with Professor Darren Cosker and Dr Wenbin Li.

Jan Malte Lichtenberg


Bounded rationality in machine learning with Professor Özgür Şimşek; Industrial Partner: Max Planck Institute.

Maryam Naghizadeh 

A New Method for Human/Animal Retargeting using Machine Learning

Multi-Character Motion Retargeting for Large Scale Changes with Professor Darren Cosker and Dr Neill Campbell.

Andrew Lawrence 

Learning 3D Models of Deformable/Non-Rigid Bodies

Using Bayesian Non-Parametrics to Learn Multivariate Dependency Structures with Dr Neill Campbell and Professor Darren Cosker.

Tayfun Esenkaya 

Spatially Enhancing Sensory Substitution Devices and Virtual Reality Experiences

One Is All, All Is One: Cross-Modal Displays for Inclusive Design and Technology with Dr Michael Proulx and Professor Eamonn O’Neill; Industrial Partner: Atkins.

Project completed and fellow graduated.
