Incorporated and Aligned Students

Bournemouth University

Centre for Applied Creative Technologies (CfACTs)

CfACTs is a Marie Skłodowska-Curie Actions (MSCA) COFUND research centre, co-led by Prof Jian Chang and Prof Jian J Zhang of Bournemouth University's National Centre for Computer Animation (BU NCCA); CfACTs runs from October 2020 to September 2025.

CfACTs will recruit six funded postdoctoral researchers (CfACTs fellows) and embed them in UK creative technology companies for up to two years, to deliver multi-disciplinary research focused on industrial applications related to three BU Strategic Investment Areas: Animation, Simulation and Visualisation; Medical Science; and Assistive Technology. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 900025.

Three of the CfACTs research projects are listed below. To find out more, please visit the CfACTs webpage.

CfACTs PLUS – Transforming Healthcare and its Training with Digital Technologies

Prof Jian Chang has recently secured funding to support two PhD students and two postdoctoral fellows as part of Bournemouth University's (BU) Research Capacity Transformation Scheme. The research cluster (CfACTs+) will see research collaboration across three BU faculties:

Faculty of Media and Communication

Faculty of Health and Social Science

Faculty of Science and Technology

The cluster will carry out multi-disciplinary research to transform the UK and global healthcare sector and allied training, using digital technologies derived from the creative industries sector. The cluster will advance research themes related to computer animation, virtual reality (VR), augmented reality (AR) including holographic displays, artificial intelligence (AI), medical visualisation, and healthcare training, fusing emerging digital technologies to tackle existing acute challenges in healthcare training.

The recruitment call for postdoctoral fellows is now open.

Centre for Digital Entertainment – Intelligent Virtual Personal Assistant (Intel-PA) Project

During 2020, Bournemouth University (BU) held a competition to initiate a research theme-based Doctoral Training Centre (DTC). The competition was won by Professor Jian Zhang, Co-Director of the CDE, with the research theme of the Intelligent Virtual Personal Assistant (Intel-PA), an AI-powered conversational animated avatar.

Find out more about Intel-PA

Meet the current Intel-PA PhD cohort below:

PGR Research Project Outlines

Eshani Fernando
Efficient Generation of Conversational Databases – Conversational AI Technologies for Digital Virtual Humans

Supervisor: Prof Jian Chang

Intelligent virtual assistants and chatbots have grown rapidly over the last decade, and smart voice-recognition software such as Siri, Alexa and Google Assistant has made chatbots widespread among the general public. Conversational AI, the understanding and generation of meaningful conversation, has been researched for many years. Generating context-aware conversation is challenging because it requires understanding the dynamic context that keeps a conversation flowing. Transformer-based models such as BERT and the GPT family have accelerated this area of research; however, these models can generate incorrect and inconsistent conversation threads that deviate from meaningful, human-like conversation. To this end, my research addresses interaction among virtual agents, and between an agent and a human user. The aim is to develop novel AI-based technologies for maintaining and evolving a dynamic conversation database that gives virtual agents the capacity to build up understanding, make corrections and update context as dialogues continue. Powered by AI and machine-learning techniques, the proposed database will help train and drive the conversational AI automatically (or semi-automatically), leading to more human-like conversations.
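
To make the idea concrete, here is a minimal Python sketch of the kind of dynamic conversation store described above, in which facts extracted from dialogue are retained, corrected and re-used as context. The class, slot names and example dialogue are hypothetical; the project's actual design is not described in this outline.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a conversation memory in which later turns can
# correct facts learned in earlier turns, so the agent's context evolves.
@dataclass
class ConversationMemory:
    facts: dict = field(default_factory=dict)    # slot -> (value, turn learned)
    history: list = field(default_factory=list)  # raw (turn, utterance) log

    def observe(self, turn: int, utterance: str, extracted: dict):
        """Record an utterance and merge newly extracted facts;
        newer turns overwrite older, contradictory entries."""
        self.history.append((turn, utterance))
        for slot, value in extracted.items():
            self.facts[slot] = (value, turn)

    def context(self) -> dict:
        """Current beliefs, used to condition the response generator."""
        return {slot: value for slot, (value, _) in self.facts.items()}

memory = ConversationMemory()
memory.observe(1, "I'm travelling to Paris.", {"destination": "Paris"})
memory.observe(2, "Actually, make that Lyon.", {"destination": "Lyon"})
print(memory.context())  # {'destination': 'Lyon'}
```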

Background: MSc Data Science and Artificial Intelligence, Bournemouth University
BSc Mathematics, University of Sri Jayewardenepura, Sri Lanka

Huan Xu
Common Sense Reasoning for Conversational AI

Supervisor: Prof Wen Tang

Conversational AI allows computer chatbots to interact with people in a human-like way by bridging the gap between human language and computer language. However, existing conversational chatbots are mainly based on predefined command patterns, and making conversational AI behave in a human-like way remains challenging. Applying common-sense knowledge to conversational AI is one viable solution. With common-sense reasoning, chatbots can better understand human conversation, drawing not only on context but also on common knowledge. This makes communication between humans and computers more straightforward and natural, and improves the customer experience through a better understanding of human intentions.
 
The goal of my research is to find a domain-specific knowledge-hunting approach and to apply common-sense knowledge to task-driven conversational AI, making the agent aware of common-sense knowledge, and therefore more human-like, and providing a better user experience.
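
As one illustration of common-sense knowledge hunting, the sketch below queries the public ConceptNet API for relations about a term; whether this project uses ConceptNet is an assumption on my part, and the snippet is illustrative only.

```python
import requests

def common_sense_facts(term: str, limit: int = 5):
    """Fetch common-sense triples about a term from the public ConceptNet API."""
    url = f"https://api.conceptnet.io/c/en/{term}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    # Each edge carries a (start, relation, end) triple plus a confidence weight.
    return [(e["start"]["label"], e["rel"]["label"], e["end"]["label"], e["weight"])
            for e in edges]

# e.g. a task-driven chatbot could ground its replies in triples such as
# ("umbrella", "UsedFor", "keeping dry", ...)
for fact in common_sense_facts("umbrella"):
    print(fact)
```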
 
Background: MSc in Advanced Computer Science, Newcastle University, UK
Abdul Rehman
Machine Learning for Discerning Paralinguistic Aspects of Speech Using Prosodic Cues

Supervisor: Dr Xiaosong Yang

Computers are currently limited in their ability to understand human speech as humans do, because they lack understanding of aspects of speech beyond the words themselves. The scope of my research is to identify the shortcomings of speech processing systems in handling such speech cues, and to look for solutions that enhance computers' ability to understand not just what is being said but also how it is being said.
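
As an illustration of the prosodic cues involved, the sketch below extracts a pitch contour and an energy contour from a recording with librosa and summarises them as utterance-level features. The file name and feature set are placeholders, not the project's actual pipeline.

```python
import librosa
import numpy as np

# Illustrative prosodic feature extraction; "utterance.wav" is a placeholder.
y, sr = librosa.load("utterance.wav")

# Fundamental frequency (pitch) contour via probabilistic YIN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))

# Root-mean-square energy contour as a loudness proxy
rms = librosa.feature.rms(y=y)[0]

# Simple utterance-level statistics a paralinguistic classifier could consume
features = {
    "pitch_mean": np.nanmean(f0),                    # NaNs mark unvoiced frames
    "pitch_range": np.nanmax(f0) - np.nanmin(f0),
    "energy_mean": rms.mean(),
    "voiced_fraction": voiced_flag.mean(),           # crude speech-rate proxy
}
print(features)
```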



Background: MSc Control Science and Technology, China University of Geosciences, Wuhan
BSc Electrical Engineering, Pakistan Institute of Engineering and Applied Sciences
Kavisha Jayathunge

Emotionally Expressive Speech Synthesis

Supervisor: Dr Richard Southern

Virtual conversation partners powered by artificial intelligence are ubiquitous in today's world, from personal assistants in smartphones to customer-facing chatbots in retail and utility service helplines. Currently, these are typically limited to conversing through text, and where there is speech output, it tends to be monotone and (funnily enough) robotic.

The long-term aim of this research project is to design a virtual avatar that picks up information about a human speaker from multiple different sources (i.e. audio, video and text) and uses this information to simulate a realistic conversation partner. For example, it could determine the emotional state of the person speaking to it by examining their face and vocal cadence. We expect that taking such information into account when generating a response would make for a more pleasant conversation experience, particularly when a human needs to speak to a robot about a sensitive matter. The virtual avatar will also be able to speak out loud and project an image of itself onto a screen. Using context cues from the human speaker, the avatar will modulate its voice and facial expressions in ways that are appropriate to the conversation at hand.

The project is a group effort and I’m working with several other CDE researchers to realise this goal. I’m specifically interested in the speech synthesis aspect of the project, and how existing methods could be improved to generate speech that is more emotionally textured.
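
One common way to make synthesised speech emotionally textured is to condition the acoustic decoder on an emotion label. The PyTorch sketch below shows this pattern; the layer sizes, emotion classes and tensor shapes are made-up values for illustration, not the project's actual model.

```python
import torch
import torch.nn as nn

# Illustrative sketch: an emotion label is embedded and concatenated with
# text-encoder states before decoding to a mel-spectrogram.
class EmotionConditionedDecoder(nn.Module):
    def __init__(self, text_dim=256, emo_classes=5, emo_dim=32, mel_bins=80):
        super().__init__()
        self.emotion_embedding = nn.Embedding(emo_classes, emo_dim)
        self.rnn = nn.GRU(text_dim + emo_dim, 512, batch_first=True)
        self.to_mel = nn.Linear(512, mel_bins)

    def forward(self, text_states, emotion_id):
        # text_states: (batch, time, text_dim); emotion_id: (batch,)
        emo = self.emotion_embedding(emotion_id)               # (batch, emo_dim)
        emo = emo.unsqueeze(1).expand(-1, text_states.size(1), -1)
        h, _ = self.rnn(torch.cat([text_states, emo], dim=-1))
        return self.to_mel(h)                                  # (batch, time, mel_bins)

decoder = EmotionConditionedDecoder()
mel = decoder(torch.randn(2, 50, 256), torch.tensor([0, 3]))   # e.g. neutral, happy
```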

Background: MEng in Electronics and Software Engineering from the University of Glasgow
Ruibin Wang
Intelligent Dialogue System for Automatic Diagnosis

Supervisor: Dr Xiaosong Yang

The automatic diagnosis of diseases has drawn increasing attention from both research communities and the health industry in recent years. Because the conversation between a patient and a doctor provides many valuable clues for diagnosis, dialogue systems are a natural fit for simulating the consultation process between doctors and patients. Existing dialogue-based diagnosis systems are mainly data-driven and rely heavily on statistical features drawn from large amounts of data, which are often unavailable. Previous work has shown that incorporating a medical knowledge graph into a diagnosis prediction system effectively improves prediction performance and robustness against data insufficiency and noise.

The aim of my project is to propose a new dialogue-based diagnosis system that can not only communicate efficiently with patients to obtain symptom information, but can also be guided by medical knowledge to make accurate diagnoses more efficiently, via a knowledge-based graph convolutional network (GCN) dialogue system. My research interests focus on natural language processing, medical AI, chatbots, dialogue systems, deep learning algorithms and knowledge graphs.
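
For readers unfamiliar with GCNs, the sketch below implements a single graph convolutional layer of the kind used to propagate information over a knowledge graph (after Kipf and Welling, 2017). The toy graph and dimensions are illustrative, not the project's architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolutional layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # h: (num_nodes, in_dim); adj: (num_nodes, num_nodes), 0/1 entries
        a_hat = adj + torch.eye(adj.size(0))             # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # symmetric normalisation
        return torch.relu(a_norm @ self.linear(h))       # propagate and transform

# Toy knowledge graph: 4 nodes (say, two symptoms and two diseases)
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
node_features = torch.randn(4, 16)
embeddings = GCNLayer(16, 8)(node_features, adj)         # (4, 8) node embeddings
```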

Background: BSc Applied Mathematics, Southwest Jiaotong University
MSc Vehicle Operation Engineering, Southwest Jiaotong University
Jiajun Huang
Editing and Animating Implicit Neural Representations

Supervisor: Dr Hongchuan Yu

 
Recently, implicit neural representation methods have gained significant traction. By using neural networks to represent objects, they can photo-realistically represent and reconstruct 3D objects or scenes without expensive capture equipment or tedious human labour. This makes them an immensely useful tool for the next generation of virtual reality and augmented reality applications.

However, unlike their traditional counterparts, these representations cannot be easily edited, which reduces their usability in industry, as artists cannot easily modify the represented object to their liking. They can also only represent static scenes, and animating them remains an open challenge.

The goal of my research is to address these problems by devising intuitive yet efficient methods to edit and animate implicit neural scene representations without losing their representation or reconstruction capabilities.
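
As a concrete picture of an implicit neural representation, the sketch below defines a small coordinate MLP that maps a positionally encoded 3D point to a colour and a density, in the spirit of NeRF-style models. It is illustrative only and not the project's method.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Sinusoidal encoding: (batch, 3) -> (batch, 3 * 2 * n_freqs)."""
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    angles = x.unsqueeze(-1) * freqs                     # (batch, 3, n_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

class ImplicitField(nn.Module):
    """An MLP representing a scene as a function of 3D position."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))                        # RGB + density

    def forward(self, xyz):
        out = self.net(positional_encoding(xyz))
        return out[:, :3].sigmoid(), out[:, 3:].relu()   # colour, density

rgb, density = ImplicitField()(torch.rand(1024, 3))      # query 1024 points
```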

Background: BSc in Network Engineering from South China Normal University
Xiaoxiao Liu
Continuous Learning in Natural Conversation Generation

Supervisor: Prof Jian Chang

As a crucial component of a medical chatbot, the natural language generation (NLG) module converts a dialogue act represented in a semantic form into a response in natural language. Continuous, meaningful conversation in a dialogue system requires not only understanding the dynamic content of the ongoing conversation, but also generating up-to-date responses according to its context. In particular, the conversation generation module should convert responses represented in semantic forms into natural language. Giving appropriate responses increases affective interaction and makes users more willing to give detailed information about their symptoms, which in turn helps the conversation generation assist the diagnosis system.

In this research, I will develop a medical conversation generation system, focusing on enhancing the naturalness of the generated responses so that the user experience of the medical chatbot is improved.
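
To make the dialogue-act-to-text step concrete, here is a deliberately simple template-based sketch of an NLG module. Real systems typically use learned models; the acts and templates shown are invented purely to illustrate the mapping from semantic form to surface response.

```python
# Illustrative only: map a dialogue act in semantic form to natural language.
TEMPLATES = {
    "request(symptom)": "Could you tell me more about your {symptom}?",
    "inform(diagnosis)": "Based on what you've described, this may be {diagnosis}.",
    "confirm(duration)": "Have you had these symptoms for {duration}?",
}

def realise(act: str, slots: dict) -> str:
    """Render a dialogue act plus slot values as a surface response."""
    return TEMPLATES[act].format(**slots)

print(realise("request(symptom)", {"symptom": "headache"}))
print(realise("confirm(duration)", {"duration": "more than a week"}))
```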

Background: MSc, Computer Science, Swansea University

 

University of Bath

Manuel Rey Area
Deep View Synthesis for VR Video

Supervisor: Dr Christian Richardt

With the rise of VR, it is key for users to be fully immersed in the virtual world. Users must be able to move their head freely around the virtual scene, unveiling occluded surfaces, perceiving depth cues and observing the scene down to the last detail. Furthermore, if scenes can be captured with casual devices (such as a smartphone), anyone can convert their 2D pictures into a fully immersive 3D experience, bringing a new digital representation of the world closer to ordinary users. The aim of this project is to synthesise novel views of a scene from a set of input views captured by the user. Eventually, the whole 3D geometry of the scene must be reconstructed, and depth cues must be preserved, to allow 6-DoF (degrees of freedom) head motion and avoid the well-known problem of VR sickness. The main challenge lies in generating synthetic views with a high level of detail, including light reflections, shadows and occlusions, resembling reality as closely as possible.

Background: MSc Computer Vision, Autonomous University of Barcelona
BSc Telecommunications Engineering, University of Vigo
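
The core geometry behind depth-based novel view synthesis can be sketched in a few lines: unproject a pixel with known depth into 3D, then reproject it into a new camera pose. The intrinsics and pose below are made-up values; the project's actual method is not described here.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length 500 px, principal point 320x240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def reproject(u, v, depth, R, t):
    """Map pixel (u, v) with known depth in the source view to the target view."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project to a ray
    point = depth * ray                              # 3D point in the source frame
    projected = K @ (R @ point + t)                  # transform into target camera
    return projected[:2] / projected[2]              # perspective divide

R = np.eye(3)                  # no rotation between the two views
t = np.array([0.1, 0.0, 0.0])  # 10 cm sideways head motion
print(reproject(320, 240, depth=2.0, R=R, t=t))
```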
Tom Smith
Model-Based Hierarchical Reinforcement Learning for Real-World Control Tasks

Supervisor: Dr Özgür Şimşek

I'm returning to academia after seven years in industry (mostly the automotive industry), having completed my MSc in Robotics and Autonomous Systems at Bath last year before starting my PhD. I'm excited and optimistic about the impact autonomous systems and AI will have on our lives, and I think learning systems are fundamental to this. I'm particularly interested in how agents can learn and apply hierarchical models of an environment to improve their performance.
Keji Neri

Supervisor: Dr Thomas Powell

I am part of the Mathematical Foundations of Computation research group. I shall be working on applying techniques from proof theory to proofs in order to extract more information from them. I was attracted to Bath particularly by the project on offer!
Daniel Castle
Research Project: Efficient and Natural Proofs and Algorithms

Supervisors: Dr Willem Heijltjes and Dr Alessio Guglielmi

I'm Daniel, a new CDE-aligned PhD student at the University of Bath. I recently completed a four-year masters in computer science here at Bath, during which I particularly enjoyed the theoretical aspects of the subject. I was fortunate to meet, and later work under the supervision of, members of the Mathematical Foundations of Computation research group, including Dr Willem Heijltjes and Professor Nicolai Vorobjov, who were extremely helpful throughout and later encouraged me to apply for a PhD. I chose to stay on at Bath because of the great people I met, the exciting work being done in the group I am now part of, and, of course, the wonderful city.

My interests broadly lie at the intersection of mathematics and computer science, in particular the field of proof theory, which studies mathematical proofs as formal objects. This is important from a computer science perspective because of a direct correspondence between computer programs and proofs, leading to applications in areas such as automatic software verification and the design of new programming languages.
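
The programs-as-proofs correspondence mentioned above can be made concrete in a proof assistant. The following Lean snippet (illustrative only, not part of the project) is simultaneously a proof of a proposition and a program:

```lean
-- Curry–Howard in miniature: this term is at once a proof of
-- P → (P → Q) → Q and a program that applies a function to an argument.
def modusPonens (P Q : Prop) (hp : P) (h : P → Q) : Q :=
  h hp
```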

 

MSCA FIRE Fellows

As part of the European Commission’s Marie Skłodowska-Curie Research Fellowship Programme for Doctoral degrees, the Centre for Digital Entertainment at the University of Bath supports 10 Fellows with Industrial Research Enhancement (FIRE).

The FIRE programme is an integrated four-year doctoral training programme, bringing together two national Centres for Doctoral Training: the Centre for Sustainable Chemical Technologies (CSCT) and the Centre for Digital Entertainment (CDE). The Marie Skłodowska-Curie Actions (MSCA) FIRE programme delivers autonomous, creative, highly skilled scientists and engineers who are fully ready for careers in international industries, and provides a model for intersectoral, interdisciplinary doctoral training in an international environment. Industrial partners ensure that the research carried out is relevant and enhances the employability of graduates, both in Europe and globally.

Our Fellows receive training in scientific, societal and business aspects of digital entertainment and conduct challenging PhD research. All projects are interdisciplinary and supported by industrial or international partners.

The positions are based at the University of Bath and require some continuing involvement with an appropriate company.

This project receives funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992.

Our current MSCA FIRE Fellows are:

Tobias Bertel
Light field synthesis from existing imagery
Image-based rendering of real environments for virtual reality, with Dr Christian Richardt, Dr Neill Campbell and Professor Darren Cosker. Industrial partner: Technical University of Brunswick.

Thu Nguyen Phuoc
Interactive Fabrication-aware Architectural Modelling
Neural rendering and inverse rendering using physical inductive biases, with Dr Yongliang Yang and Professor Eamonn O'Neill. Industrial partner: Lambda Labs.

Yassir Saquil
Machine learning for semantic-level data generation and exploration
With Dr Yongliang Yang and Professor Peter Hall.

Soumya C Barathi
Interactive Feedforward in High Intensity VR Exergaming
Adaptive exergaming, with Dr Christof Lutteroth and Professor Eamonn O'Neill. Industrial partner: eLearning Studios. Project completed and fellow graduated.

Youssef Alami Mejjati
Multitask Learning for Heterogeneous Data
Creative editing and synthesis of objects in photographs using generative adversarial networks, with Professor Darren Cosker and Dr Wenbin Li.

Jan Malte Lichtenberg
Bounded rationality in machine learning
With Professor Özgür Şimşek. Industrial partner: Max Planck Institute.

Maryam Naghizadeh
A New Method for Human/Animal Retargeting Using Machine Learning
Multi-character motion retargeting for large-scale changes, with Professor Darren Cosker and Dr Neill Campbell.

Andrew Lawrence
Learning 3D Models of Deformable/Non-Rigid Bodies
Using Bayesian non-parametrics to learn multivariate dependency structures, with Dr Neill Campbell and Professor Darren Cosker.

Tayfun Esenkaya
Spatially Enhancing Sensory Substitution Devices and Virtual Reality Experiences
One Is All, All Is One: cross-modal displays for inclusive design and technology, with Dr Michael Proulx and Professor Eamonn O'Neill. Industrial partner: Atkins. Project completed and fellow graduated.

© 2022 The Centre for Digital Entertainment (CDE). All rights reserved.