Current research engineers and live projects

We currently have 18 active EngD projects with 12 companies.

We have research projects across a wide range of areas in computer vision, computer graphics, human-computer interaction (HCI) and machine learning, ranging from procedural generation of content for international games companies and the future of interactive technologies for major broadcasters, to assistive technologies for stroke rehabilitation and virtual reality for naval training.

Get to know some of our current research engineers below, and see our video, filmed during our CDE Winter Networking Event at the British Film Institute, London, to hear more from our students and alumni.

Download CDE REs and Projects

2020
Mesar Hameed
Supervisor:
Professor Peter Hall

Immersive Technology

Research Project

Immersive Technology to Enable Travel and Transport for the Visually Impaired


2020
Will Kerr
Supervisor:
Dr Wenbin Li, Dr Tom Haines

Autonomous Filming Systems: Towards Empathetic Imitation

Film making is an artistic but resource-intensive process. The visual appearance of a finished film is the product of many departments, but Directors and Cinematographers play a significant role by applying professional expertise and style in the planning (pre-production) and production stages. Once each shot of the film is planned, a trade-off is made between the cost of multiple or highly experienced camera operators and the improved quantity and quality of footage captured. There is therefore scope to automate some aspects of film (pre-)production, so that increased coverage or professionalism can be achieved by film makers limited by finance or expertise.

Existing work in autonomous virtual film-making has focussed on actor and camera positioning, but there remains a gap in how the composition of the frame is designed, particularly how the background elements (shape, colour, focus etc) play a part in the aesthetics of the footage, in a style which is empathetic to the story.

This project takes the above scope forward by asking two principal questions:

1) How can the intent of a professional cinematographer be learnt from finished film content?

2) How can these learnings be applied back to new filming tasks in virtual or real environments?

Early work has focussed on question 1), building a suite of visual analysis tools and film datasets that provide evidence of the cinematographic styles applied in particular films. The second step will develop a virtual filming environment, apply style to virtual shot composition, and offer comparisons to existing film footage (imitation).
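
As an illustration of the kind of frame analysis such tools perform, the sketch below scores a frame against the rule of thirds using a crude gradient-based saliency proxy. This is a minimal sketch for illustration only: the function name and the choice of saliency proxy are assumptions, not the project's actual tooling.

```python
import numpy as np
import cv2  # OpenCV, for image loading and gradients

def rule_of_thirds_score(frame_bgr):
    """Crude composition feature: distance from the saliency centroid
    to the nearest rule-of-thirds intersection (0 = centroid exactly
    on an intersection), normalised by the frame diagonal."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Gradient magnitude stands in for a proper saliency model.
    gx = cv2.Sobel(grey, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(grey, cv2.CV_32F, 0, 1)
    sal = np.hypot(gx, gy)
    sal /= sal.sum() + 1e-8
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (sal * ys).sum(), (sal * xs).sum()  # saliency centroid
    thirds = [(tx * w / 3, ty * h / 3) for tx in (1, 2) for ty in (1, 2)]
    d = min(np.hypot(cx - px, cy - py) for px, py in thirds)
    return d / np.hypot(w, h)

print(rule_of_thirds_score(cv2.imread("shot.png")))  # e.g. 0.04
```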

 


2019
Kris Kosunen
Supervisor:
Dr Christof Lutteroth; Prof Eamonn O'Neill
Industrial Supervisor:
Dr Chris Dyer

VR Empathy Training for Clinical Staff

Research Project: VR Empathy Training for Clinical Staff

Industrial Partner: Royal United Hospital, Bath

Empathy for patients is important for good clinical outcomes. It can be challenging to develop empathy and understanding for cognitive or mental disorders because it is hard to imagine what these conditions feel like. For example, people affected by dementia or psychosis may not show physical symptoms but may behave unusually. Without an emotional understanding of such conditions, it can be difficult for clinical staff to treat people effectively.

Virtual reality (VR) is being used increasingly for learning and training. VR makes it possible to immerse users in complex interactive scenarios, allowing them to safely experience and practice situations that would be difficult to arrange in reality. This creates new opportunities for VR in clinical training. In this project, we will develop a VR simulator that helps clinical staff to develop empathy and understanding for people affected by cognitive or mental disorders.

Background

Informatics: Serious Games, University of Skövde, Sweden

I studied Serious Games at the University of Skövde in Sweden, which gave me a firm grounding in how games technology and ideas can be used in a serious way. I then worked as a social media community manager in the Nordic region for Lionbridge on a contract with HTC Vive; during this time I learned how Nordic companies are using VR technology across all manner of sectors, and was inspired to follow this research trend.


2019
Nick Lindfield
Supervisor:
Prof Wen Tang, Prof Feng Tian
Industrial Supervisor:
Dr Andrzej Kaczorowski

Deep Neural Networks for Computer Generated Holography

Industrial Partner:

VividQ

Computer generated holography is a display technology that uses diffraction and interference of light to reconstruct fully three-dimensional objects. These objects appear three-dimensional because holograms produce depth cues that our brains process in a way that is consistent with their experience of the natural world, unlike stereoscopic displays, which produce conflicting depth cues and often cause nausea.

Holographic displays fall within the category of computational displays, for which the software element is the major factor dictating the properties of the final output image (such as image quality and depth perception).

Yet the calculation of these holographic patterns is complex in both production and analysis. Neural networks are a potential method to simplify and speed up these processes while retaining a high level of quality.

The main goal of this project is to develop an algorithm to determine the visual quality of computer-generated holograms. A secondary research direction is using neural networks to produce a hologram.

Determining the quality of a hologram is a difficult task with few publicised solutions. This is because holograms viewed directly are essentially three-dimensional structures, continuous in all three spatial coordinates. Hence, existing quality evaluation methods need to be re-thought to incorporate a much wider scope of the problem.  Neural networks can be used to analyse highly complex collections of information in a way that is highly generic; instead of focusing on predetermined features they can learn what to focus on based on context.

Neural networks have been demonstrated to replicate complex operations, producing output of comparable quality to the original on a much shorter time scale. Recently, the combination of holography and neural networks has received significant academic attention, including work at MIT and Stanford. The secondary direction of this project is therefore to explore the use of neural networks to compute holograms and correct for imperfections in realistic holographic projections, in real time.
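
For background, the classic Gerchberg–Saxton algorithm below is a textbook way to compute a phase-only hologram by iterating Fourier transforms between the hologram and image planes. It is a minimal sketch of the kind of computation neural networks are being explored to accelerate, not a description of any partner's pipeline.

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50):
    """Compute a phase-only hologram whose far-field reconstruction
    approximates the target intensity (classic Gerchberg-Saxton)."""
    target_amp = np.sqrt(target_intensity)
    # Start from the target amplitude with random phase.
    field = target_amp * np.exp(2j * np.pi * np.random.rand(*target_amp.shape))
    for _ in range(iterations):
        hologram = np.fft.ifft2(field)              # back to hologram plane
        hologram = np.exp(1j * np.angle(hologram))  # phase-only constraint
        field = np.fft.fft2(hologram)               # forward to image plane
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target
    return np.angle(hologram)
```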

 

Background:

  • Software Engineer
  • MSc Computer Science
  • MChem Materials Chemistry


2019
Michal Gnacek
Supervisor:
Dr Ellen Seiss, Dr Theodoros Kostou, Dr Emili Balaguer-Ballester
Industrial Supervisor:
Dr Charles Nduka MA, MD, FRCS

Improved Affect Recognition in Virtual Reality Environments

Industrial Partner

emteq

 

Research Project

I am working with Emteq on improving affect recognition using various bio-signals, with the hope of creating better experiences, and completely new ones, that have the potential to tackle physical and mental health problems in previously unexplored ways.

The ever-increasing use of virtual reality (VR) in research as well as in mainstream consumer markets has created a need to understand users’ affective state. This would not only guide the development of the technology but also allow for the creation of brand-new experiences in entertainment, healthcare and training applications.

This research project will build on existing research conducted by Emteq with their patented device for affect detection in VR. In addition to evaluating the already implemented sensors (electromyography, photoplethysmography and an inertial measurement unit), other modalities need to be explored for potential inclusion and for their ability to determine emotions.
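
As a small illustration of this kind of bio-signal processing, the sketch below estimates heart rate from a photoplethysmography trace by peak counting. It assumes a clean 1-D signal and invented thresholds; the actual sensor processing is considerably more sophisticated.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate (beats per minute) from a PPG trace.
    ppg: 1-D signal; fs: sampling rate in Hz."""
    ppg = (ppg - np.mean(ppg)) / (np.std(ppg) + 1e-8)  # normalise
    # Require peaks at least 0.4 s apart (i.e. below 150 bpm).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
    return 60.0 * len(peaks) / (len(ppg) / fs)
```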

Background

I have 4 years of experience as a Games & Software Engineer.

BEng (Hons) Computer Games Development at Ulster University


2019
Luke Worgan
Supervisor:
Prof Mike Fraser; Prof Jason Alexander

Enhancing Perceptions of Immersion

Enhancing Perceptions of Immersion within Multi-Sensory Environments through the Introduction of Scent Stimuli Using Ultrasonic Particle Manipulation. 

Present virtual reality environments focus on providing a rich, immersive audio-visual experience; however, the technology required to enhance a user's perception of smell, touch or taste is yet to reach the same level of sophistication and remains largely absent from virtual and augmented reality systems. Existing technologies rely on fan-based systems, which may lack temporal and spatial resolution. This research project explores ultrasonic particle manipulation – the process of isolating and manipulating the behaviour of individual particles within an acoustic field – as a method for enhancing olfactory resolution. Research will focus on the development of a discreet ultrasonic system designed to introduce scent stimuli into multi-sensory environments and increase user perceptions of immersion.

Background: 

I have a multi-disciplinary background as a musician, artist and computer scientist, with a BSc in Sound Design (LSBU) and an MSc in Creative Technology (UWE), as well as completing an MSc in Human-Computer Interaction as part of my doctoral programme at the University of Bath.


lukeworgan.com


2019
Kari Noriy
Supervisor:
Dr Xiaosong Yang

Incremental Machine Speech Chain for Realtime Narrative Stories

 

Research: Incremental Machine Speech Chain for Realtime Narrative Driven Stories & High Fidelity Speech Synthesis for Interactive Stories

Speech and text remain the main forms of communication for human-to-human interaction, allowing us to communicate and coordinate ideas. My research focuses on human-computer interaction (HCI), namely the synthesis of natural-sounding speech for use in interactive story-driven experiences, allowing natural, flowing conversation between a human and a computer in low-latency environments.

Current mechanisms require the entire input sequence, so there is a significant delay between input and output, breaking the immersion. In contrast, humans can listen and speak in real time; with a comparable delay, natural conversation would be impossible.

Another area of interest is the addition of the imperfections in synthesised speech that drive its believability, including prosody, suprasegmentals, interjections, discourse markers, intonation, tone, stress and rhythm.

 

Background:

BA (Hons) Computer Visualisation and Animation, Bournemouth University

Download Kari's Research Profile


2019
Ben Snow
Supervisor:
Prof Jian Chang
Industrial Supervisor:
Greg Dawson

Griffon Hoverwork Simulator for Pilot Training

Industrial Partner

Griffon Hoverwork

Griffon Hoverwork (GHL) are both pioneers and innovators in the hovercraft space. With over 50 years of experience making and driving hovercraft, and collecting data about them, GHL has the resources to build a realistic and informative training simulator. We will design a virtual environment in which prospective hovercraft pilots can train, receive feedback and have fun driving a physically realistic hovercraft. The simulator will incorporate the experience of GHL's highly trained pilots and a wealth of craft data collected from real vehicles to provide a simulation tailored to the Griffon 2000TD craft. GHL's training protocols will be used to provide specific learning objectives and to give feedback to novice and professional pilots on all aspects of craft operation. Creating a realistic hovercraft model will also allow the simulation environment to be used as a research testbed for future projects.

Background

I gained an MPhys from the University of Manchester in 2019. My research project focused on nanoscale thermoelectric transport in ferromagnets for graphene spintronics applications. I spent my third year abroad at the University of Maryland, College Park, where I worked on the USA's largest student-run cyclotron.


2019
Alexz Farrall
Supervisor:
Professor Jason Alexander; Dr Ben Ainsworth
Industrial Supervisor:
Dr Sabarigirivasan Muthukrishnan

The guide to mHealth implementation

Research Project: The guide to mHealth implementation – designing, developing, and evaluating a new evidence-based mobile intervention to support medical students suffering from reduced well-being.

Project partner: Avon and Wiltshire Mental Health Partnership NHS Trust (AWP)

The project will not only be a collaboration between the University of Bath and AWP, but will also work alongside Bristol’s Medical School to directly incorporate stakeholders into the design and evaluation of a new digital intervention. Smartphone apps are an increasingly popular means of delivering psychological interventions to patients suffering from reduced well-being and mental disorders. One population that suffers from reduced well-being is medical students, with recent studies identifying 27.2% to have depressive symptoms, 11.1% to have suicidal ideation, and 45-56% to have symptoms suggestive of burnout. Through the utilisation of advanced human-computer interaction (HCI) and behaviour therapy techniques, this project aims to contribute innovative research to increase the effectiveness of existing digital mental health technologies. The research team hopes to implement the smartphone app in the NHS and create new opportunities to support the entire medical workforce.

MSc Thesis: The development of a mindfulness-based smartphone intervention to support doctoral students suffering from reduced well-being.

Background: 

BEng Electronics and Communication Engineering (IET accredited), University of Kent

University of Bath’s vertically integrated project (VIP) well-being team leader


2019
Philip Lorimer
Supervisor:
Dr Wenbin Li

Autonomous Robots for Professional Filming

Research Project: Autonomous Robots for Professional Filming

The typical production pipeline involves considerable effort by industry professionals to plan, capture and post-produce an outstanding commercial film. Workflows are often heavily reliant on human input along with a finely tuned robotics platform.

The research project explores the use of autonomous robots for professional filming, particularly investigating reinforcement learning for learning and executing typical filming techniques.

The primary aim is to design a fully autonomous pipeline in which a robot plans moving trajectories and performs the capture.
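
A minimal sketch of the reinforcement-learning loop such a pipeline rests on is given below. The FilmingEnv environment, its actions and its reward are invented stand-ins for the real robot and a learned controller; a random policy takes the place of the learning algorithm.

```python
import random

class FilmingEnv:
    """Hypothetical stand-in environment: the state is a 1-D camera
    offset from the framed subject, actions nudge the camera, and the
    reward penalises drifting away from the subject."""
    ACTIONS = ["pan_left", "pan_right", "hold"]

    def reset(self):
        self.offset = 0
        return self.offset

    def step(self, action):
        self.offset += {"pan_left": -1, "pan_right": 1, "hold": 0}[action]
        reward = -abs(self.offset)        # good framing = small offset
        done = abs(self.offset) > 5       # subject lost: end the episode
        return self.offset, reward, done

env = FilmingEnv()
state, done, total = env.reset(), False, 0.0
while not done:                           # one episode, random policy
    action = random.choice(FilmingEnv.ACTIONS)
    state, reward, done = env.step(action)
    total += reward
print(total)
```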

MSc Project: Perception module for an autonomous Formula Student vehicle.

Background: MSc Computer Science, University of Bath


2019
Isabel Fitton
Supervisor:
Dr Christof Lutteroth; Dr Chris Clarke
Industrial Supervisor:
Jeremy Dalton

Improving skills learning through VR

Research Project: Improving skills learning through VR

Industrial Partner: PwC UK

More affordable, consumer-friendly head-mounted displays (HMDs) have led to excitement around the potential for virtual reality (VR) to revolutionise training and education. VR promises to support people in learning new skills by immersing learners in virtual environments where they can practise those skills and receive feedback on their progress, but important questions remain regarding the transferability and effectiveness of skills acquired in a virtual world. Current state-of-the-art VR training often fails to use learning theories to underpin its design and is not necessarily optimised for learning.

In this project we will investigate how VR can help learners acquire manual skills, comparing different approaches to job-skills training in VR with the goal of informing the development of effective, engaging and theory-driven training tools. We aim to produce results on enhancing ‘hard’ skills training in virtual environments that are applicable to industry, for example in engineering and manufacturing, and to support the wider adoption of VR training tools. We will design VR learning simulations and new approaches to support learners, test whether skills learned in VR transfer to the real world, and compare virtual training simulations to more traditional learning aids such as instructional videos.

Background

Multi-disciplinary background in Psychology and Human Computer Interaction (HCI).

BSc Psychology with Placement, University of Bath 

BSc Project: Immersive virtual environments and embodied agents for e-learning applications 


2018
Jack Brett
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Corey Harrower

Augmented Music Interaction and Gamification

Industrial Partner:

ROLI

Research Project: Learning Through Play: Defining the Balance Between Music Learning and Video Games

“Without music, life would be a mistake.” – Friedrich Nietzsche. Learning music theory or a musical instrument in the traditional sense is seen as tedious, requiring rote learning and a lot of commitment. The current market offers copious musical applications, from creation and mixing tools to mobile instruments and a small handful of actual games. Most of these applications offer little to no learning and do not cater for beginners. Music is fun, but those beginning the learning journey may find the initial phase daunting because there is so much to contemplate. Learning applications such as Yousician have helped to bridge the gap between traditional learning and modern technology using ‘gamification’ – adding game elements such as leaderboards and instant gratification. While these learning applications do offer a more engaging experience, users still abandon the learning process after a short period of time.

 

The trouble with learning any instrument, and generally improving as a musician, is the large amount of rote learning required: playing the same rhythms repeatedly to internalise a sense of rhythm, or playing scales over and over to memorise them. Current applications and methods of gamification focus on adding game elements to a learning environment or lesson, whereas we are looking to develop games in which the mechanics are the learning components.

We aim to develop learning games using new and existing musical technology created by ROLI, as well as leveraging new technology such as virtual/augmented reality. These technologies can open new doors for innovative, engaging and fun experiences that aid the learning process. The end goal is to develop games in which users/students can learn and practise whilst avoiding boredom, with fun as the core driver – learning without… learning.

Background: Games Technology

BSc (Hons) Games Technology. My previous research work was conducted mostly with the Psychology Department, where programs were created for mobile/PC use and later branched into virtual reality. Most recently, I have been focusing on a VR program used in clinical trials to gauge the severity of conditions such as dementia.

 

Download Jack's Research Profile

View Jack's Research Outputs


http://jackbrett.co/


2018
Olivia Ruston
Supervisor:
Professor Mike Fraser; Professor Jason Alexander

Designing Interactive Wearable Technology

Research Project: Designing Interactive Wearable Technology for Embodied Movement Applications 

This research focuses on wearables and e-textiles, considering fashion design/construction processes and their socio-cultural impact. My most recent work has involved creating and experimenting with bodice garments to understand how information about their motion might help people to learn about the way they move, so that they can learn to move better. 

Background: Computer Science

BSc Computer Science with Placement, University of Bath  

BSc Project: An Investigation of User Interactions with Wearable Ambient Awareness Technologies 


2018
Katarzyna Wojna
Supervisor:
Dr Christof Lutteroth, Dr Michael Wright
Industrial Supervisor:
Dr David Beattie

Natural user interaction and multimodality

Research Project: Multimodal interactions for digital out-of-home systems

Industrial Partner: Ultraleap

This project explores how multi-modal interaction can enable more meaningful collaboration with systems that use Natural User Interaction (e.g. voice, gesture, eye gaze, touch etc.) as the primary method of interaction.  

In the first instance this project will explore Digital Out Of Home systems.  However, the research and insights generated from this application domain could be transferred to virtual and augmented reality as well as more broadly to other systems where Natural User Interaction is used as the primary method of interaction.

One of the challenges of such systems is to design adaptive affordances so that a user knows how to interact with an information system whilst on the move with little or no training.  One possible solution is to provide multiple modes of interaction, and associated outputs, which can work together to enable meaningful or "natural" user interaction. For example, a combination of eye gaze and gesture to "understand" that the user wishes to "zoom in" to a particular location on a map.

Addressing this challenge, this project explores how multi-modal interaction (both input and output) can enable meaningful interactions between users and these systems. That is, how can a combination of one or more inputs and their associated outputs allow the user to convey intent, perform tasks and react to feedback, while the system provides meaningful feedback to the user about what it "understands" to be the user’s intended actions?
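
A toy sketch of the gaze-plus-gesture fusion described above follows. The event representation and the 300 ms fusion window are illustrative assumptions, not Ultraleap's design.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str    # "gaze" (fixation) or "pinch" (gesture)
    x: float     # screen coordinates of the fixation point
    y: float
    t: float     # timestamp in seconds

FUSION_WINDOW_S = 0.3  # assumed: inputs this close in time are combined

def fuse(events):
    """Emit a zoom command when a pinch co-occurs with a gaze fixation,
    targeting the fixated point rather than the screen centre."""
    commands = []
    gazes = [e for e in events if e.kind == "gaze"]
    for pinch in (e for e in events if e.kind == "pinch"):
        near = [g for g in gazes if abs(g.t - pinch.t) < FUSION_WINDOW_S]
        if near:
            g = min(near, key=lambda g: abs(g.t - pinch.t))
            commands.append(("zoom_in", g.x, g.y))
    return commands

print(fuse([Event("gaze", 0.7, 0.4, 1.00), Event("pinch", 0, 0, 1.10)]))
```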

Demo paper 'Particle-ulary Haptics' accepted to the Eurohaptics 2020 Conference

"Mid-air haptic feedback technology produces tactile sensations that are felt without the need for physical interactions, and bridges the gap with digital interactions, by making the virtual feel real. However, existing mid-air haptic experiences often do not reflect user expectations in terms of congruence between visual and haptic stimuli. To overcome this, we investigate how to better present the visual properties of objects, so that what one feels is a more accurate prediction of what one sees. In the following demonstration, we present an approach that allows users to fine-tune the visual appearance of different textured surfaces, and then match the set corresponding mid-air haptic stimuli in order to improve visual-haptic congruence"

Eurohaptics 2020 Demonstration video

View Katarzyna's Research Outputs


2018
Sydney Day
Supervisor:
Lihua You
Industrial Supervisor:
Matt Hooker

Humanoid Character Creation Through Retargeting

Industrial Partner:

Axis Animation

Research Project: Humanoid Character Creation Through Retargeting

This project explores the automatic creation of rigs for humanoid characters with associated animation cycles and poses. Through retargeting, a number of techniques can be covered:

- automatic generation of facial blendshapes from a central reference library

- retargeting of bipedal humanoid skeletons

- transfer of weights between characters of differing topologies (see the sketch below).

The key goals are to dramatically reduce the amount of time needed to rig certain types of character, freeing riggers to work on more complex rigs that cannot be automated.
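
A minimal sketch of the weight-transfer step, using inverse-distance blending of nearest-neighbour vertices, is shown below. This is a common baseline approach, assumed here for illustration rather than taken from the project.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_weights(src_verts, src_weights, dst_verts, k=3):
    """Transfer skinning weights between meshes of differing topology.
    src_verts: (N,3); src_weights: (N,J) per-joint weights; dst_verts: (M,3).
    Each target vertex blends the weights of its k nearest source vertices."""
    tree = cKDTree(src_verts)
    dists, idx = tree.query(dst_verts, k=k)
    blend = 1.0 / (dists + 1e-8)                   # inverse-distance weights
    blend /= blend.sum(axis=1, keepdims=True)
    dst = np.einsum("mk,mkj->mj", blend, src_weights[idx])
    return dst / dst.sum(axis=1, keepdims=True)    # re-normalise per vertex
```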

Background: Computer Science

BA (Hons) Computer Animation and Visualisation

Download Sydney's Research Profile


2018
Robert Kosk
Supervisor:
Dr Richard Southern
Industrial Supervisor:
Willem Kokke

Biomechanical Parametric Face Modelling and Animation

Industrial Partner:

Humain

Research Project: Generation and Intuitive Editing of High Fidelity, Dynamic, Digital Faces

Project Overview

Modelling and animation of high-quality digital faces remains a tedious and challenging process. Although sophisticated data capture and manual processing allow realistic results in offline production, the rapidly developing virtual reality industry demands fully automated and flexible methods.

My project aims to develop a parametric template for physically-based facial modelling and animation, which will:

- automatically generate any face, either existing or synthetic,

- intuitively edit the structure of a face without affecting the quality of animation,

- reflect the non-linear nature of facial movement,

- retarget facial performance, accounting for the anatomy of particular faces.

The ability to generate faces with governing, meaningful parameters such as age, gender or ethnicity is crucial for wider adoption of the system among artists. Furthermore, the template can be extended to numerous novel applications, such as animation retargeting driven by muscle activations, fantasy character synthesis or digital forensic reconstruction.
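
For context, the dominant baseline that such parametric templates extend is the linear morphable model, sketched below; the project's physically-based, non-linear template goes beyond this. The array shapes are illustrative.

```python
import numpy as np

def morphable_face(mean_verts, components, params):
    """Linear morphable face model: the mean shape plus a weighted sum
    of learned shape components (directions that may correlate with
    meaningful attributes such as age, gender or ethnicity).
    mean_verts: (V,3); components: (K,V,3); params: (K,)."""
    return mean_verts + np.tensordot(params, components, axes=1)
```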

Background: Computer Science

BA (Hons) Computer Visualisation and Animation

Download Robert's Research Profile


www.robertkosk.com


2018
Aaron Demolder
Supervisor:
Dr Hammadi Nait-Charif, Dr Valery Adzhiev
Industrial Supervisor:
Dr Andrzej Kaczorowski

Image Rendering for Laser-Based Holographic Displays

Data capture and 3D integration for VFX and Emerging Technology

Research Project: Media Production for Laser-Based Holographic Display

Industrial Partner: VividQ

 

VividQ has developed world-leading software technology that provides holographic computation and real-time holographic 3D display. VividQ now requires research and development work to generate a range of assets that best showcase the technology, including high-quality projected objects with realistic textures and materials (e.g. glossy metallic surfaces, semi-transparencies) and visual effects such as smoke and fire. This R&D will also facilitate the delivery of multi-view and AR experiences that overlay holographic objects onto the real world.

Existing computer graphics, visual effects and video games technologies provide the basis for rendering digital content. Rendering images for a laser-based holographic display presents unique challenges compared to traditional 2D or stereoscopic display panels, as the ability of the observer to focus at varying depths plays a large role in the perception of content. This project will use computer graphics, visual effects and video games technologies to develop new techniques to improve image rendering for laser-based holographic displays.

This project aims to:

1) Improve the quality and range of holographic objects that can be created and displayed. This R&D will enable VividQ to remain the market leader by increasing the level of object realism.

2) Assist the development of VividQ’s software framework to improve the visual representation of 3D objects and AR experiences, thereby improving the experience of users.

Background: Art, Design, Animation, VFX, Computer Science

BA (Hons) Computer Animation and Visualisation

Download Aaron's Research Profile


https://aarondemolder.com


2018
Neerav Nagda
Supervisor:
Dr Xiaosong Yang, Dr Jian Chang, Dr Richard Southern
Industrial Supervisor:
Rebekah King-Britton

Asset Retrieval Using Knowledge Graphs and Semantic Tags

Industrial Partner:

Absolute Post

Research Project: Multimodal Search And Retrieval Using Knowledge Graphs

My project aims to enable searching, viewing and retrieving digital assets across a database of the entire company’s works, from a single application.

There are three major challenges which this project aims to solve:

 

  1. Searching and retrieving specific data.

The current method is not specific: data can be found, but the result usually contains both the required data and a larger set of irrelevant data. The goal is to avoid retrieving irrelevant data, which will significantly reduce data transfer times.

  2. Understanding the contents of a file without needing to open it in specialised software.

This can be achieved by generating visual previews of a file’s contents. Generating semantic tags will allow for quicker and more efficient searching.

  3. Finding connections in data, such as references and dependencies.

Some files may import or reference data from other files. This linked data can be addressed by creating a semantic web or knowledge graph. Often there are entities which are not necessarily represented by a file, such as a project, but which have many connections to other entities. Such entities become handles in the semantic web that can be used to locate a collection of connected entities.
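
A toy illustration of such a graph follows, using networkx; the node names, edge types and tags are invented for illustration.

```python
import networkx as nx

# Toy asset knowledge graph: nodes are assets or entities, edges are typed links.
g = nx.DiGraph()
g.add_edge("projectX", "shot_010", type="contains")
g.add_edge("shot_010", "car_rig.ma", type="references")
g.add_edge("car_rig.ma", "car_model.obj", type="imports")
g.add_node("car_model.obj", tags=["vehicle", "hero-asset"])

# Dependency query: everything shot_010 pulls in, directly or transitively.
print(nx.descendants(g, "shot_010"))   # {'car_rig.ma', 'car_model.obj'}

# Semantic-tag search: assets whose tags match a query term.
print([n for n, d in g.nodes(data=True) if "vehicle" in d.get("tags", [])])
```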

The disciplines that this project covers are:

  • Web science

  • Big Data

  • Data Mining

  • Computer Vision

  • Natural Language Processing

The integration of such a system in industry would significantly reduce searching and retrieval times for data. This can be used in many scenarios, for example:

  • Retrieving data from backups for further work

A common task is to retrieve a project from archives. Most of the time the entire project is not required to be unarchived, so finding the specific data significantly reduces unarchiving times.

  • Reducing duplication of data

If a digital asset can be reused, it can be found from this system and imported into another project. This saves the time of remaking previous work.

  • Reviewing work

Searches can be filtered, for example finding all work produced in the previous day or sorting works by date and time. Creating a live feed would allow for quicker access to data to review works.

 

Background: Computer Science

BA Computer Visualisation and Animation. I specialised in programming and scripting, developing tools and plugins for content creation applications. My major project in my final year sparked research in machine learning and neural networks for motion synthesis.

Download Neerav's Research Profile


2018
Karolina Pakenaite
Supervisor:
Prof Peter Hall, Dr Michael Proulx

An Investigation into Tactile Images for the Visually-Impaired

Research Project: An Investigation into Tactile Images for the Visually-Impaired Community  

My aim is to provide the visually impaired community with access to photographs using sensory substitution. I am investigating the translation of photographs into simple pictures, which can then be printed in tactile form. Potential contributions include the introduction of denotation and projection with regard to style, and beneficiaries could extend beyond computing into other academic disciplines such as Electronic Engineering and Education. Accessible design is essentially inclusive design for all: accessibility features may be designed for a particular group of the community, but they can, and usually do, end up being used by a wider range of people. Sighted individuals are often tempted to touch art pieces in museums and galleries, and while many artworks were originally created to be touched, a cardinal no-touch rule is usually observed to preserve them. Towards the end of my research, I hope to adapt my work for use by primary-school-age blind children.

To get simplified pictures, I recently tried translating photographs into two different styles: ‘Icons Representation’ and ‘Shape Representation’.  

For the Icons Representation of a photograph, I used a combination of object detection and saliency detection algorithms to identify salient objects only. I used Mask R-CNN object detection and combined its output with a saliency map from the PiCANet detection algorithm, giving the probability that a given pixel belongs to a salient object within the image. All detected salient objects are replaced with corresponding simplified icons on a blank canvas of the same size as the input image. Geometric transformations are applied to the icons to avoid any overlaps, and a background edge map is added to give further context about the image.

For the Shape Representation of an object, I experimented with different image segmentation methods and replaced each segment with the most appropriate canonical shape, using a method introduced by Voss and Süße. Segments are normalised into a canonical frame using a whitening transform; these normalised shapes are then compared with the canonical shapes in the library to decide which correlates most strongly, and an inverse transform is applied to the chosen library shape. The effect is that library shapes are moulded to match their corresponding segments closely, giving simplified images of objects built from a combination of shapes. We plan to have these Shape Representations printed in 3D.
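
A minimal sketch of the whitening step is given below. It assumes shapes are given as matched-size 2-D point sets; the real pipeline works on image segments, so this is illustrative only.

```python
import numpy as np

def whiten_shape(points):
    """Normalise a 2-D point set into a canonical frame: translate the
    centroid to the origin, then whiten so the covariance becomes the
    identity (removing scale and shear). points: (N,2)."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred.T)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-12)) @ vecs.T
    return centred @ W.T

def shape_similarity(a, b):
    """Correlate two whitened shapes (assumes equal point counts and
    consistent point ordering)."""
    a, b = whiten_shape(a).ravel(), whiten_shape(b).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```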

Due to Covid-19 we were unable to test these tactile images with participants using touch, but a few obvious limitations were found, and we will continue to investigate and improve our simplified images. Computer vision will allow us to translate photographs into tactile images autonomously, which we hope will reduce the cost of tactile image production. We will also draw on the psychology of object recognition and experiment with human participants to make our implementation as effective as possible for real users. This combination of computer science and psychology will prepare us to adapt our work for primary-school education: for example, teaching congenitally blind children the relative sizes of objects that are rarely touched (e.g. an elephant or a mouse), or teaching them to indicate the distance of an object on paper by drawing distant objects smaller.

Background: Maths/ Computer Science

MSci Mathematics with a Year in Computer Science, University of Birmingham

Download Karolina's Research Profile


2017
Rory Clark
Supervisor:
Dr Feng Tian

3D UIs within VR and AR with Ultrahaptics Technology

Industrial Partner:

Ultrahaptics

Research Project: 3D User Interfaces for Virtual and Augmented Reality

Research into how a 3D user interface (UI) can be presented, perceived and realised within virtual and augmented reality (VR and AR), while integrating Ultrahaptics mid-air haptics technology. Mid-air haptics offers users the opportunity to feel feedback and information directly on their hands, without having to hold a specific controller. This means the hands can be targeted for both tracking and haptics, while still allowing full freedom of control.

Background: Games Programming

BSc Games Programming, Bournemouth University, focusing on the use and development of games and game engines, graphical rendering, 3D modelling, and a number of programming languages. Final-year dissertation on a virtual reality event-planning simulation utilising the HTC Vive. Previous projects on systems ranging from web and mobile to smart-wear devices and VR headsets.

Download Rory's Research Profile


https://rory.games


2017
Kenneth Cynric Dasalla
Supervisor:
Dr Christof Lutteroth

Effects of Natural Locomotion in VR

Research Project: Effects of Natural Locomotion in VR

MSc Digital Entertainment - Masters Project:

Multi-View High-Dynamic-Range Video, working with Dr Christian Richardt

Background: Computer Science

BSc in Computer Science, Cardiff University, specialising in Visual Computing. Research project on boosting saliency research through the development of a new dataset comprising multiple categorised stimuli and distortions; fixations of multiple observers on the stimuli were recorded using an eye tracker.

Download Kenneth's Research Profile


https://zubr.co/author/kenneth/


2017
Alexandros Rotsidis
Supervisor:
Dr Christof Lutteroth; Dr Christian Richardt
Industrial Supervisor:
Mark Lawson

Creating an intelligent animated avatar system

Industrial Partner:

Design Central (Bath) Ltd t/a DC Activ / LEGO

Research Project:

Creating an intelligent avatar: using Augmented Reality to bring 3D models to life. The project involves the creation of a 3D, intelligent, multi-lingual avatar system that can realistically imitate, and interact with, Shoppers (Adults), Consumers (Children), Staff (Retail) and Customers (Commercial) as users or avatars, using different dialogue, appearance and actions based on initial data and on feedback about the environment and context in which it is placed, creating ‘live’ interactivity with other avatars and users.

While store-assistant avatars and virtual assistants are commonplace today, they often act in a scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. usually humanoid).

This project is an exciting opportunity to apply technology and visual design to many different 3D objects, bringing them to life to guide and help people, both individually and in groups, learn from their mistakes in a safe virtual space and make better-quality decisions, increasing commercial impact.

Masters Project: AR in Human Robotics

Augmented Reality used in Human Robotics Interaction, working with Ken Cameron

Background: Computer Science

BSc (Hons) Computer Science from Southampton University; worked in industry for 5 years as a web developer. A strong interest in Computer Graphics and Machine Learning led me to the EngD programme.

Download Alex's Research Profile


http://www.alexandrosrotsidis.com/


2017
Michelle Wu
Supervisor:
Dr Zhidong Xiao, Dr Hammadi Nait-Charif

Motion Representation Learning with Graph Neural Networks

Research Interest: Motion Representation Learning with Graph Neural Networks and its Applications  

The animation of digital characters can be a long and demanding process: the human eye is very sensitive to unnatural motion, which means animators must pay extra attention to create realistic and believable animations. Motion capture can be a helpful tool here, as it directly captures movements performed by actors and converts them into mathematical data. However, dealing with dense motion data presents its own challenges, and this usually translates into studios having difficulty reusing the large collections of motion data available, often resorting to capturing new data instead.

 

To promote the recycling of motion data, time-consuming tasks (e.g. manual data cleaning and labelling) should be automated by developing efficient methods for classifying and indexing data, allowing motions to be searched and retrieved from databases. At the core of these approaches is the learning of a discriminative motion representation. A skeleton can naturally be represented as a graph, where nodes correspond to joints and edges to bones. However, many human actions need far-apart joints to move collaboratively, and to capture these internal dependencies between joints (even those without bone connections) we can leverage Graph Neural Networks to adaptively learn a model that extracts both spatial and temporal features. This will allow us to learn potentially richer motion representations that form the basis for motion classification, retrieval and synthesis.
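
The sketch below shows a single graph-convolution step of the kind such networks stack (following Kipf and Welling's normalisation), applied to a toy three-joint skeleton; the feature sizes are invented.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: joints exchange features with their
    neighbours through a normalised adjacency matrix.
    A: (J,J) skeleton adjacency; X: (J,F) joint features; W: (F,F_out)."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy 3-joint chain (hip - knee - ankle): adjacency from bone connections.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.rand(3, 4)                       # e.g. per-joint features
H = gcn_layer(A, X, np.random.rand(4, 8))      # richer per-joint features
```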

Background:  Computer Animation, Computer Science

BSc Software Development for Animation, Games and Effects, Bournemouth University.

Research Assistant in Human-Computer Interaction/Computer Graphics in collaboration with the Modelling, Animation, Games, Effects (MAGE) group within the National Centre for Computer Animation (NCCA), focusing on the development and dissemination of the SHIVA Project, software that provides virtual sculpting tools for people with a wide range of disabilities.

View Michelle's Research Outputs

Download Michelle's Research Profile


2017
Marcia Saul
Supervisor:
Dr Fred Charles, Dr Xun He
Industrial Supervisor:
Stuart Black

A Two-Person Neuroscience Approach for Social Anxiety

Can we use games technology and EEG to help us understand the role of interbrain synchrony on people experiencing the symptoms of social anxiety?

Industrial Partner:

BrainTrainUK

Research Project: A Two-Person Neuroscience Approach for Social Anxiety: Prospects into Bridging Intra- & Inter-brain Synchrony with Neurofeedback

My main fields of interest are computational neuroscience, brain-computer interfaces and machine learning, with the use of games in applications for rehabilitation and for improving the quality of life of patients and persons in care.

Social anxiety has become one of the most prominent anxiety disorders, with many of its symptoms overlapping with other mental disorders such as depression, autism spectrum disorder, schizophrenia and ADHD. Neurofeedback (NF) is well known to modulate these symptoms using a metacognitive approach: a participant’s brain activity is relayed back to them for self-regulation of the target brainwave patterns. In this project, we explore the integration of intra- and inter-brain synchrony towards a more effective NF procedure. By using realistic multimodal feedback in the delivery of NF, we can amplify collaboration and co-operation during tasks – utilising the ‘power of two’ in two-person neuroscience – with the goal of synchronising brainwaves between two participants and alleviating symptoms of social anxiety.
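
A standard way to quantify the synchrony between two participants' signals is the phase-locking value; a minimal sketch follows, assuming the two EEG channels have already been band-passed to the frequency band of interest.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two band-passed signals:
    1 = phases perfectly locked across time, 0 = no phase relationship."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```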

MRes - Masters project:

Using computational proprioception models and artificial neural networks to predict two-dimensional wrist position.

Background: Psychology and Computational Neuroscience

BSc in Biology with Psychology, Royal Holloway University of London

MSc in Computational Neuroscience & Cognitive Robotics, University of Birmingham

Download Marcia's Research Profile

View Marcia's Research Outputs


2017
Valentin Miu
Supervisor:
Dr Oleg Fryazinov
Industrial Supervisor:
Mark Gerhard

Realtime Scene Understanding With Machine Learning

Industrial Partner:

Beauty Labs

Research Project: Augmented Reality with Machine Learning on Smartphones for Beauty Applications

Given the speed requirements of real-time applications, server-side deep learning inference is often unsuitable due to high latency, potentially even in a 5G world. With the increased computing power of smartphone processors, the leveraging of device GPUs, and the development of mobile-optimised neural networks such as MobileNet, real-time on-device inference has become possible.

Within this scope, machine learning techniques for scene understanding, such as generic object detection, are leveraged and implemented as multi-platform augmented reality apps, offering a unified experience by using Unity and C++ plugins, with the machine learning functionality provided through the TensorFlow Lite C API. In the current project, machine learning and other methods are combined to track the position and pose of a hair curler, with the aim of developing an app that educates consumers in the use of professional hairdressing equipment.
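
The project wires TensorFlow Lite into Unity through the C API; the Python equivalent below sketches the same on-device inference flow. The model path is a placeholder and the input is a stand-in frame.

```python
import numpy as np
import tensorflow as tf

# Load a mobile-optimised model (e.g. a MobileNet-based detector).
interpreter = tf.lite.Interpreter(model_path="detector.tflite")  # placeholder
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                # runs entirely on-device
detections = interpreter.get_tensor(out["index"])
```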

Background: Physics

MSci Physics, University of Glasgow, graduating with a first-class degree. During this time I familiarised myself with compositing and 2D/3D animation in a non-professional setting. In my first year at the CDE, I successfully completed master's-level courses in Maya, OpenGL and Houdini, and have been learning CUDA GPU programming and machine learning.

Download Valentin's Research Profile

View Valentin's Research Outputs


2017
Sameh Hussain
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Andrew Vidler

Learning to render in style

Industrial Partner:

Ninja Theory

Research Project:

Investigations into the development of high-fidelity style transfer from artist-drawn examples.

Style transfer techniques have provided the means of re-envisioning images in the style of various works of art. However, these techniques can only produce credible results for a limited range of images. We are improving on existing style transfer techniques by observing and understanding how artists place their brush strokes on a canvas.
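
For reference, much neural style transfer builds on the Gram-matrix style statistics of Gatys et al., sketched below; the group's stroke-based approach departs from this baseline rather than implementing it.

```python
import numpy as np

def gram_matrix(features):
    """Style statistics of Gatys et al.: correlations between feature
    channels, discarding spatial layout. features: (C, H, W)."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def style_loss(features_a, features_b):
    """Mean squared difference of Gram matrices for one network layer."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))
```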

So far we have built models that can learn styles pertaining to line drawings from a few example strokes. We have then applied the model to a variety of inputs to create stylised drawings.

Over the coming year, we will extend this model so that we can do more than just line drawings. We will also work with our industrial partner to develop interactive tools so their artists can leverage the research we have produced.

MSc Digital Entertainment - Masters project: A parametric model for linear flames, working with Prof Peter Hall

Background: Mechanical Engineering

MEng in Mechanical Engineering, University of Bath; one year placement with Airbus Space and Defence developing software to monitor and assess manufacturing performance.


2017
Thomas Williams
Supervisor:
Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth
Industrial Supervisor:
Prof Nigel Harris, Dr Hazel Boyd

AR as a cognitive prosthesis for people living with dementia

Industrial Partner:

Designability

Research Project: Exploring the Use of Augmented Reality to Support People Living with Dementia to Complete Tasks in the Home

There have been considerable advances in the technology and range of applications of virtual and augmented reality environments. However, to date there has been limited work examining the design principles that would support successful adoption. Assistive technologies have been identified as a potential solution for the provision of elderly care; in general, such technologies have the capacity to enhance quality of life and increase the level of independence of their users.

The aim of this research project is to explore how augmented reality (AR) could be used to support those with dementia with daily living tasks and activities in the home. This will specifically focus on those living with mild to moderate dementia and their carers. Designability have been working on task sequencing for different types of daily living tasks and have amassed considerable expertise in how to prompt people with cognitive difficulties, through a range of everyday multi-step tasks. This project would allow us to explore how AR technology could build on that expertise.

The research will involve testing the design of augmented reality prompts in domestic settings. Augmented reality technologies are still at an early stage of maturity; however, they are at the ideal stage of development to explore their application in a field as distinctive as assistive technology.

MSc Digital Entertainment - Masters project:

A novel gaze tracking system to improve user experience at Cultural Heritage sites, with Dr Christof Lutteroth

Background: Maths/Physics

BSc (Hons) Mathematics and Physics, University of Bath (four years, with placement)

Download Thomas' Research Profile


http://blogs.bath.ac.uk/ar-for-dementia/


2016
Lewis Ball
Supervisor:
Prof Lihua You, Prof Jian Jun Zhang
Industrial Supervisor:
Dr Mark Leadbeater, Dr Chris Jenner

Material based vehicle deformation and fracturing

Industrial Partner:

Ubisoft Reflections

Research Project: Material based vehicle deformation and fracturing

Damage and deformation of vehicles in video games is essential for delivering an exciting and immersive experience to the player; however, tough constraints are placed on the deformation methods used in games. They must produce deformations which appear plausible, so as not to break the player's immersion, yet they must also be robust enough to remain stable in any situation the player may encounter. Lastly, any deformation method must be fast enough to calculate deformations in real time while leaving enough time for other critical game-state updates such as rendering, AI and animation.

My research focuses on augmenting real-time physics simulations with data-driven methods. Data from offline, high-quality, physically-based simulations is used to augment real-time simulations, allowing them to adhere to physically correct material properties while remaining fast and stable enough for production-quality video games.

Background:

BSc Physics and MSc Scientific Computing, University of Warwick.

Download Lewis' Research Profile


2016
Kyle Reed
Supervisor:
Prof Darren Cosker
Industrial Supervisor:
Dr Steve Caulkin

Improving Facial Performance Animation using Non-Linear Motion

Industrial Partner:

Cubic Motion

Research Project: Improving Facial Performance Animation using Non-Linear Motion

Cubic Motion is a facial tracking and animation studio, most famous for their real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation.

We are investigating the utilisation of non-linear facial motion observed from 4D facial capture to improve the realism and robustness of facial performance capture and animation. As the traditional pipeline relies on linear approximations of facial dynamics, we hypothesise that using observed non-linear dynamics will automatically factor in subtle nuances such as fine wrinkles and micro-expressions, reducing the need for animator handcrafting to refine animations.

Starting with a pipeline for 4D capture of a performer's range of motion (their Dynamic Shape Space), we apply this information to various components of the animation pipeline, from rigging and blendshape solving to performance capture and keyframe animation. We also investigate how, by acquiring Dynamic Shape Spaces for multiple individuals, we can develop a motion manifold for the personalisation of individual expression, which can be used as a prior for subject-agnostic animation. Finally, we validate the need for non-linear animation through comparison with linear methods and through audience perception studies.
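
The "linear approximations" referred to above are blendshape rigs: a face is a weighted sum of sculpted deltas, a basis that cannot express non-linear effects such as wrinkle buckling. A minimal sketch, with illustrative array shapes:

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Standard linear facial rig: the neutral mesh (V,3) plus a
    weighted sum of blendshape deltas (K,V,3). Non-linear dynamics
    (fine wrinkles, micro-expressions) fall outside this basis."""
    return neutral + np.tensordot(weights, deltas, axes=1)
```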

MSc Digital Entertainment - masters project:

Using convolutional neural networks (CNNs) to predict occluded facial expressions when wearing head-mounted displays (HMDs) for VR.

Background: Computer Science

BSc (Hons) Computer Science with Industrial Placement Year, University of Bath.

Download Kyle's Research Profile

View Kyle's Research Outputs


2016
Padraig Boulton (Paddy)
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Oliver Schilke

Recognition of Specific Objects Regardless of Depiction

Industrial Partner:

Disney Research

Research Project: Recognition of Specific Objects Regardless of Depiction

Recognition is among the most important open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance on real-world images (photographs). However, with one exception, the performance of every recognition mechanism falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form. This project addresses that important gap in the literature by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on the state of the art, which is alone in generalising uniformly across many depictions.

In this case, the objects of interest are specific objects rather than visual object classes, and more particularly the objects represent visual IP as defined by the Disney corporation. Thus an object could be “Mickey Mouse”, and the task would be to detect “Mickey Mouse” photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt and so on.

Currently we are investigating how different art styles map the salient information of object classes or characters, and using this to develop a recognition framework that can use examples from artistic styles to learn a domain-agnostic classifier capable of generalising to unseen depictive styles.

MSc Digital Entertainment - Masters project:  

Undoing Instagram Filters: creating a generative adversarial network (GAN) which takes a filtered Instagram photo and synthesises an approximation of the original photo.

Background: Automotive Engineering

MEng Automotive Engineering, Loughborough University.

View Paddy's Research Outputs


2016
Azeem Khan
Supervisor:
Dr Tom Fincham Haines
Industrial Supervisor:
Michele Condò

Procedural gameplay flow using constraints

Industrial Partner:

Ubisoft Reflections

Research Project: Procedural gameplay flow using constraints

This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level.  This will be used to procedurally generate an unlimited number of levels, tailored to a user's playing style.

Tom Clancy's The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC within a very tight development window. The key to this success was a procedural level-design tool which took a high-level script outlining key aspects of a mission template and generated multiple different underground dungeons satisfying that gameplay template. The key difference from typical procedural environment generation technologies is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world.

The system used for TCTD had many constraints, and our goal is to develop technology that builds on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner, for use in any number of games. We would like to investigate using Markov constraints, inspired by the 'Flow Machines' research undertaken by Sony to generate music, text and more in a style dictated by the training material: http://www.flow-machines.com/ (other techniques may be considered).
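
A toy sketch of the Markov-constraints idea follows: mission "beats" are sampled from a transition table (as if learned from hand-authored missions) while rejecting sequences that violate a constraint. The beat names and probabilities are invented.

```python
import random

# Transition probabilities over gameplay beats (invented for illustration).
TRANSITIONS = {
    "start":   [("explore", 0.6), ("combat", 0.4)],
    "explore": [("combat", 0.5), ("puzzle", 0.3), ("extract", 0.2)],
    "combat":  [("explore", 0.4), ("boss", 0.3), ("extract", 0.3)],
    "puzzle":  [("explore", 0.7), ("extract", 0.3)],
    "boss":    [("extract", 1.0)],
    "extract": [],
}

def sample_mission(max_beats=6):
    """Rejection-sample a beat sequence satisfying the constraint that
    the mission must end with a boss fight followed by extraction."""
    while True:
        seq, state = ["start"], "start"
        while TRANSITIONS[state] and len(seq) < max_beats:
            beats, probs = zip(*TRANSITIONS[state])
            state = random.choices(beats, probs)[0]
            seq.append(state)
        if seq[-2:] == ["boss", "extract"]:    # the constraint
            return seq

print(sample_mission())  # e.g. ['start', 'combat', 'boss', 'extract']
```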

Masters Project:

An Experimental Approach to the Complexity of Solving Bimatrix Games

Background: Physics

MSci Physics with Theoretical Physics, Imperial College

Download Azeem's Research Profile


2015
Thomas Joseph Matthews
Supervisor:
Dr Feng Tian / Prof Wen Tang
Industrial Supervisor:
Tom Dolby

Automated Proficiency Analysis and Feedback for VR Training

Research Project: Human-Centred Design for Virtual Reality Emergency Medicine Training

Industrial Partner:

AiSolve

Research Project: Semi-Automated Proficiency Analysis and Feedback for VR Training

Virtual Reality (VR) is a growing and powerful medium that is finding traction in a variety of fields. My research aims to encourage immersive learning and knowledge retention through short-form VR training scenarios.

Our project streamlines proficiency analysis in virtual reality training by using performance recording and data analytics to directly compare subject-matter experts and trainees. Currently, virtual reality training curricula require post-performance review and/or direct supervised interpretation to provide feedback, whereas our system will use expert performance models to direct feedback towards trainees’ strengths and weaknesses, both in a specific scenario and across the subject curriculum.

Using an existing virtual reality training scenario developed by AiSolve and Children’s Hospital Los Angeles, subject-matter experts will complete multiple performance variations in a single scenario. This provides a scenario action graph which is then used as a baseline against which trainee performances are measured, noting significant variations in attributes like decision-making, stimuli perception and stress management. We will validate the system using objective and subjective accuracy metrics, implementation feasibility and usability measures.
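
A minimal sketch of comparing a trainee's action sequence against expert baselines using edit-distance similarity is shown below; the action names are invented, and the real system's scenario action graphs are richer than flat sequences.

```python
from difflib import SequenceMatcher

# Expert performance variations for one scenario (invented actions).
EXPERT_RUNS = [
    ["assess_airway", "check_pulse", "call_team", "administer_o2"],
    ["check_pulse", "assess_airway", "call_team", "administer_o2"],
]

def proficiency_score(trainee_actions):
    """Similarity (0-1) of a trainee performance to the closest expert
    variation; 1.0 means an exact match with one expert sequence."""
    return max(SequenceMatcher(None, trainee_actions, run).ratio()
               for run in EXPERT_RUNS)

print(proficiency_score(["check_pulse", "call_team", "administer_o2"]))
```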

More information on the VRSims framework to which this project is attached can be found on the AiSolve website: http://www.aisolve.com/enterprise/

Download Thomas' Research Profile

View Thomas' Research Outputs


http://www.aisolve.com/


2013
Tristan Smith
Supervisor:
Dr Julian Padget
Industrial Supervisor:
Andrew Vidler

Procedural content generation for computer games

Industrial Partner

Ninja Theory

Research Project

Procedural content generation for computer games

Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web, and investigate the possibility of using them with industry-standard games development tools.
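
A toy sketch of this data-driven idea follows: the generator's behaviour lives in a rule table (which could equally be loaded from JSON or RDF) rather than in code, so swapping the table retargets the generator without touching the logic. The rules are invented.

```python
import random

# Generator behaviour as data: a tiny production-rule table.
RULES = {
    "level": [["entrance", "rooms", "exit"]],
    "rooms": [["room"], ["room", "rooms"]],
    "room":  [["corridor"], ["arena"], ["treasure"]],
}

def expand(symbol):
    """Recursively expand a symbol using the rule table; unknown
    symbols are terminals that appear in the output."""
    if symbol not in RULES:
        return [symbol]
    production = random.choice(RULES[symbol])
    return [part for s in production for part in expand(s)]

print(expand("level"))  # e.g. ['entrance', 'arena', 'corridor', 'exit']
```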

Background: Computer Science

Master of Engineering (MEng), Computer Science with Artificial Intelligence, University of Southampton

 

View Tristan's Research Outputs



