The Engineering and Physical Sciences Research Council (EPSRC), the Universities of Bath and Bournemouth and our industrial partners have funded over 100 Engineering Doctorate (EngD) and PhD studentships on the CDE Digital Entertainment programme.
The CDE provides outstanding candidates with an intensive training and research programme. Our Research Engineers (REs) are supervised by academic experts in Computer Vision, Computer Graphics, HCI, and ML/AI, and supported by industry experts in companies and organisations that work in the digital entertainment sector or apply its tools and technologies.
Graduates from this programme have the technical, business and personal development competencies, knowledge and experience needed to work in both academic and industrial research, and they understand the benefits and mechanisms of collaboration between the two. Over two-thirds (67.6%) now have careers in industry, while 28% work in academia or span both sectors.
This is a credit to their hard work and determination during their doctorates.
2018 | Daniela De Angeli
Museums in the Digital Age: Understanding Visitors’ Behaviour with Real and Virtual Content CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Eamonn O’Neill Digital technologies are part of our everyday lives, affecting how people communicate and perceive the world, and pressuring museums into rethinking their exhibitions in order to stay relevant and drive visits. Visitors increasingly expect experiences that are not only educational and authentic, but also entertaining and relevant for them. However, museums are struggling to balance their traditional rigour with the requirements of a changing society, and are increasingly considering participatory activities as a solution to better understand visitors and design experiences that are more relevant and engaging for the public. Among participatory practices, games are well established and have been successfully used both as a co-design technique and as a method to collect data from and about players. Moreover, games are both engaging and relevant; they have a key role in contemporary society as they are played by an increasing number of people all over the world. Thus, games are gaining reach in entertainment, popular culture, and as an academic field of study. Yet despite their growing popularity and their potential as a participatory method, games are still used in museums for educational purposes rather than as a design and research method. The core of this research is the use of game-based activities – or gamefulness – as a tool to promote authentic and entertaining experiences in museums. In order to address this main research topic, I used a combination of methods, building upon theoretical work and a series of empirical studies. First, I developed an understanding of authenticity and entertainment, outlining their relevance in contemporary museums. Then, I planned a series of activities that involved playing and making games with both museum professionals and the general public. Through those game-based studies I investigated how to collect data to support the design of new interactive experiences. Thesis: The Gameful Museum: Authenticity and Entertainment in the Digital Age. View Daniela’s Research Outputs
2021 | Manuel Rey Area
Deep View Synthesis For VR Video Ph.D (University of Bath): Academic Supervisor: Dr Christian Richardt With the rise of VR, it is key for users to be fully immersed in the virtual world. Users must be able to move their heads freely around the virtual scene, unveiling occluded surfaces, perceiving depth cues, and observing the scene to its last detail. Furthermore, if scenes are captured with casual devices (such as a smartphone), anyone could convert their 2D pictures into a fully immersive 3D experience, bringing a new digital world representation closer to ordinary users. The aim of this project is to synthesise novel views of a scene from a set of input views captured by the user. Eventually, the whole 3D geometry of the scene must be reconstructed and depth cues must be preserved to allow 6-DoF (degrees of freedom) head motion, avoiding the well-known problem of VR sickness. The main challenge lies in generating synthetic views with a high level of detail – light reflections, shadows, and occlusions – resembling reality as closely as possible.
2016 | Lewis Ball
Material-based vehicle deformation and fracturing Industrial Partner: Ubisoft Reflections Industrial Supervisor: Mark Leadbeater CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Prof Jian Jun Zhang, Prof Lihua You Damage and deformation of vehicles in video games is essential for delivering an exciting and immersive experience to the player; however, there are tough constraints placed on deformation methods used in video games. They must produce deformations which appear plausible so as not to break the player’s immersion, yet they must also be robust enough to remain stable in any situation the player may experience. Lastly, any deformation method must be fast enough to calculate the deformations in real time while also leaving enough time for other critical game state updates such as rendering, AI and animation. My research focuses on augmenting real-time physics simulations with data-driven methods. Data from offline, high-quality, physically-based simulations is used to augment real-time simulations, allowing them to adhere to physically correct material properties while remaining fast and stable enough to use in production-quality video games.
2023 | Soumya C Barathi
Interactive Feedforward in High-Intensity VR Exergaming MSCA Ph.D (University of Bath): Academic Supervisors: Professor Eamonn O’Neill, Dr Christof Lutteroth VR exergaming is a promising motivational tool to incentivise exercise. It has been widely applied to low to moderate-intensity exercise protocols; however, its effectiveness in implementing high-intensity protocols that require less of a time commitment remains unknown. This thesis presents a novel method called interactive feedforward, an interactive adaptation of the psychophysical feedforward training method in which rapid improvements in performance are achieved by creating self-models showing previously unachieved performance levels. Interactive feedforward was evaluated in a cycling-based VR exergame where, in contrast to how feedforward has typically been used, individuals were not merely passive recipients of a self-model but interacted with it in real time in a VR experience. Interactive feedforward led to improved performance while maintaining intrinsic motivation. This thesis further explores interactive feedforward in a social context: players competed with enhanced models of themselves, their friend, and a stranger moving at the same enhanced pace as their friend. Thesis – Interactive Feedforward in High-Intensity VR Exergaming
2018 | Alistair Barber
Modern Approaches to Camera Tracking Within the Visual Effects Pipeline CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Prof Darren Cosker, Matthew Brown Industrial Partner: DNeg Visual Effects (VFX) are a crucial component in a large proportion of feature films being produced today. The work of producing VFX usually takes place after filming has happened, and at a specialised VFX facility. The process of producing visually realistic and compelling effects is complex and labour-intensive, requiring many skilled workers to complete different stages of the VFX ‘pipeline’. One of these tasks is Camera Tracking, the goal of which is to accurately calculate the movement of the camera used to film the original footage. Without this solution for camera movement, it would not be possible to convincingly render Computer-Generated (CG) assets onto the original footage. The VFX pipeline is so called because it can be thought of as a process through which the original footage, output from digital artists, and other data produced ‘flows’ towards a final output. Camera Tracking is one of the processes performed first in the pipeline. Therefore, as well as accuracy, timely completion of this stage is essential in making sure that the VFX facility operates efficiently and cost-effectively. Deadlines are strictly enforced, and the cost of producing VFX is agreed and fixed at the start of the project – so delays at any point in the pipeline can have dire consequences. Camera Tracking is closely related to the field of research known as Structure from Motion (SfM). Double Negative Ltd, a UK-based VFX studio with facilities worldwide, partnered with the University of Bath to establish a research project investigating how the latest work in the SfM domain could be applied to the process of Camera Tracking, which in VFX is still a process involving a large amount of human interaction and hence cost. Presented in this project is a detailed investigation into the process of Camera Tracking at a VFX facility, utilising a large dataset of real shots from major Hollywood feature films. One of the main conclusions from this investigation is that Camera Tracking for VFX work, due to the nature of the work encountered in film production, is better regarded as a problem-solving exercise than as a pure SfM problem. The quantitative results obtained in this work strongly suggest that having more data available about the scene being filmed and the camera used is one of the most effective ways to reduce the time spent on the Camera Tracking process. This research project then investigates the use of additional on-set hardware to make obtaining this information easier. It also develops new methods for determining information about changes in the parameters of the camera being used to film the scene using visual footage alone, under conditions in which other traditional Computer Vision methods would likely fail. The impact of this work has been a valuable contribution to the methods and tools available to artists performing these tasks, allowing them to operate more efficiently in this competitive and global industry, where the standards expected for the quality of VFX rise with each new film released. Thesis: Modern Approaches to Camera Tracking Within the Visual Effects Pipeline. View Alistair’s Research Outputs
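As a purely illustrative aside (not drawn from the thesis), camera tracking and SfM solvers typically minimise a reprojection error: the pixel distance between tracked 2D features and the projections of their estimated 3D positions. A minimal sketch, assuming a pinhole camera model and hypothetical variable names:

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """Mean pixel error between tracked 2D features and projected 3D points.

    K: 3x3 intrinsics, R: 3x3 rotation, t: translation (3,).
    A camera solver refines R and t (and possibly K) to minimise this.
    """
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame, 3xN
    proj = K @ cam                            # homogeneous image coords, 3xN
    proj = proj[:2] / proj[2]                 # perspective divide, 2xN
    return float(np.linalg.norm(proj.T - points_2d, axis=1).mean())
```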
2021 | Simone Barbieri
Unified Animation Pipeline for 2D and 3D Content CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Xiaosong Yang, Dr Zhidong Xiao Industrial Partner: Thud Media Industrial Supervisor: Ben Cawthorne Despite the remarkable growth of 3D animation in the last twenty years, 2D is still popular today and often employed for both films and video games. In fact, 2D offers important economic and artistic advantages to production. This thesis introduces an innovative system to generate 3D characters from 2D cartoons while maintaining important 2D features in 3D. However, handling 2D characters and animation in a 3D environment is not a trivial task, as they do not possess any depth information. Three different solutions are proposed in this thesis: a 2.5D modelling method, which exploits billboarding, parallax scrolling and 2D shape interpolation to simulate the depth between the different body parts of the characters, and two full 3D solutions – one based on inflation and supported by a surface registration method, and one that produces more accurate approximations by using information from the side views to solve an optimisation problem. These methods have been introduced into a new unified pipeline that involves a game engine and could be used for animation and video game production. A unified pipeline introduces several benefits to animation production for both 2D and 3D content. On one hand, assets can be shared across different productions and media. On the other hand, real-time rendering for animated films allows immediate previews of the scenes and offers artists a way to experiment more during the making of a scene. Thesis – Generation of 3D characters from existing cartoons and a unified pipeline for animation and video games. View Simone’s Research Outputs
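To illustrate the kind of trick the 2.5D method builds on (an illustrative sketch only, with hypothetical parameter names, not code from the thesis): in parallax scrolling, layers assigned a greater notional depth shift less as the camera pans, which is enough to suggest depth between flat body parts.

```python
def parallax_offset(layer_depth: float, camera_x: float, strength: float = 1.0) -> float:
    """Horizontal offset for a flat 2D layer, given a notional depth.

    Far layers (large layer_depth) move less than near ones as the
    camera pans, creating an impression of depth between 2D layers.
    """
    return -camera_x * strength / (1.0 + layer_depth)

# Example: as the camera pans 10 units, a near layer (depth 0) shifts
# by -10.0 while a far layer (depth 4) shifts by only -2.0.
print(parallax_offset(0.0, 10.0), parallax_offset(4.0, 10.0))
```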
2022 | Tobias Bertel
Image-based Rendering of Real Environments for Virtual Reality MSCA Ph.D (University of Bath): Academic Supervisors: Dr Christian Richardt, Prof Darren Cosker, Prof Neill Campbell The main focus of this thesis lies on image-based rendering (IBR) techniques designed to operate in real-world environments, and special attention is paid to the state-of-the-art end-to-end pipelines used to create and display virtual reality (VR) of 360° real-world environments. Head-mounted displays (HMDs) enable users to experience virtual environments freely, but the creation of real-world VR experiences remains a challenging interdisciplinary research problem. VR experiences can differ greatly depending on the underlying scene representation, and the meaning of real-world VR heavily depends on the context, i.e. the system or format at hand. Terminology and fundamental concepts are introduced which are needed to understand related IBR and learned (neural) IBR approaches, which are categorically surveyed in the context of end-to-end pipelines for creating real-world IBR experiences. The applicability of the discussed approaches to creating real-world VR applications is categorised into practical aspects covering capture, reconstruction, representation, and rendering, which yields a good overview of the research landscape to which this thesis contributes. The life cycle of immersive media production depends on computer vision and computer graphics problems and describes, as a whole, end-to-end pipelines for creating 3D photography used to render high-quality real-world VR experiences. Vision is needed to obtain viewpoint and scene information to create scene representations, i.e. 3D photographs, and computer graphics is needed for creating high-quality novel viewpoints, for instance by applying IBR techniques to the reconstructed scene representation. The lack of widely available immersive real-world VR content which suits current generations of HMDs motivates research in casual 3D photography. Furthermore, augmenting widely available real-world VR formats, e.g. omnidirectional stereo (ODS), is intriguing as a way to increase the immersion of currently available real-world VR experiences. This thesis contributes three end-to-end IBR pipelines for the creation and display of immersive 360° VR experiences, all outperforming the current de facto standard (ODS) while relying only on moderate computational resources commonly available to casual consumers, and one learned IBR approach based on conditional adversarial networks that takes a casually captured video sweep as input to perform high-quality video extrapolation. The ability to casually capture 3D photography might have a profound impact on the way consumers capture, edit, share, and re-live personal experiences in the foreseeable future. Thesis – Image-Based Rendering Of Real Environments For Virtual Reality
2015 | Naval Bhandari
Influence of Perspective in Virtual Reality CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Eamonn O’Neill Industrial Partner: BMT Defence Services Industrial Supervisor: Simon Luck Virtual reality (VR) is becoming increasingly popular, both domestically and commercially, and the potential application domains are ever-growing. In the domestic sector, VR is predominantly used for video games, whereas commercially it is used for training. VR has been used to occlude physical space from a user’s eyes and replace it with a virtual environment (VE). Most, if not all, VR content and applications are viewed using a first-person perspective (1PP); that is, they are viewed as if a camera in the VE were placed at the position of the user’s physical eyes. Most headset-based VR devices allow users to manipulate the orientation of this camera by rotating their head, and some devices even let users manipulate the position of the camera. Perspective manipulation is common in digital media other than VR, most notably in video games on 2D screens. The most common form of perspective manipulation in video games is to allow a third-person perspective (3PP), which typically lets a user view a virtual character’s body and the space around them. The techniques which enable perspective manipulation do not naturally carry over into VR, and few studies have conclusively determined how perspective influences users in VR. This thesis is focused on determining how perspective impacts users within multiple areas: task performance, spatial perception, and user presence, measured over several application domains. This thesis also looks at how perspective may interact with other variables, such as display screens and invoked emotion. It progressively investigates several aspects of perspective manipulation with a series of user studies, and presents original research on key components for the design and implementation of 3PP in VR. Thesis: Influence of Perspective in Virtual Reality. View Naval’s Research Outputs
2019 | Andreea Bizdideanu
Strategies for reconstructing and reusing light transport paths in dynamic environments CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Ian Stephenson, Dr Oleg Fryasinov Industrial Partner: Optis Industrial Supervisor: Nicolas Dalmasso The current work introduces path manipulation as a tool that extends bidirectional path tracing to reuse paths in the temporal domain. Defined as an apparatus of sampling and reuse strategies, path manipulation reconstructs the subpaths that compose the light transport paths and addresses the restriction to static geometry commonly associated with Monte Carlo light transport simulations. By reconstructing and reusing subpaths, the path manipulation algorithm obviates the regeneration of the entire path collection, reduces the computational load of the original algorithm and supports scene dynamism. Bidirectional path tracing relies on local path sampling techniques to generate the paths of light in a synthetic environment. By using the information localised at path vertices, such as the probability distribution, the sampling techniques construct paths progressively with distinct probability densities. Each probability density corresponds to a particular sampling technique, which accounts for specific illumination effects. Bidirectional path tracing uses multiple importance sampling to combine paths sampled with different techniques into low-variance estimators. The path sampling techniques and multiple importance sampling are the keys to the efficacy of bidirectional path tracing. However, the sampling techniques have gained little attention beyond the generation and evaluation of paths. Bidirectional path tracing was designed for static scenes and thus discards the generated paths immediately after the evaluation of their contributions. Limiting the lifespan of paths to a generation-evaluation cycle imposes a static use of paths and of sampling techniques. The path manipulation algorithm harnesses the potential of the sampling techniques to supplant the static manipulation of paths with a generation-evaluation-reuse cycle. An intra-subpath connectivity strategy was devised to reconnect the segregated chains of the subpaths invalidated by scene alterations. Successful intra-subpath connections generate subpaths in multiple pieces by reusing subpath chains from prior frames. Subpaths are reconstructed generically, regardless of the subpath or scene dynamism type and without the need for predefined animation paths. The result is the extension of bidirectional path tracing to the temporal domain. Thesis: Path manipulation strategies for rendering dynamic environments
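For readers unfamiliar with multiple importance sampling, the standard balance heuristic gives a flavour of how contributions from different sampling techniques are combined (an illustrative sketch of the textbook heuristic, not code from the thesis):

```python
def balance_heuristic(i, pdfs):
    """MIS weight for sampling technique i under the balance heuristic.

    pdfs[k] is the density with which technique k would have produced
    the same light transport path. The weights sum to one over all
    techniques, so combining weighted contributions stays unbiased
    while suppressing high-variance samples.
    """
    return pdfs[i] / sum(pdfs)
```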
2021 | Adam Boulton
Generating Engagement With Video Games Through Frustration And Hindrance CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Rachid Hourizi, Professor Eamonn O’Neill Industrial Partner: PaperSeven Industrial Supervisor: Alice Guy The problems that can arise from excessive player frustration with video games are well reported. As the emerging literature surrounding video game frustration makes clear, the causes of that frustration can be complex and difficult to identify. In many cases, however, that literature shows the causes of frustration to include a player’s inability to achieve in-game goals, and the results to include disengagement from the wider game rather than simply the specific obstacle to be overcome. In that context, it is perhaps unsurprising that the recognition and removal of frustration have been major focuses for the research community interested in the phenomenon. Importantly, however, a game that creates no sense of frustration in its players risks becoming boring. For example, a puzzle game in which every puzzle is instantly solved or an obstacle-driven game in which every obstacle is immediately surmounted is unlikely to attract or engage the players needed to make it profitable. Comprehensive recognition and removal of all frustrating events or properties from a game may, therefore, come at the cost of removing players’ engagement with that game – a substantial problem for developers trying to sell not only the game itself but, increasingly, add-ons and upgrades to its player base, e.g. the avatar costumes or ‘skins’ that underpin the commercial strategy [80, 133] of the popular game Fortnite: Battle Royale [67]. In that context, games developers (including, but not limited to, those working at PaperSeven, my host company for the Engineering Doctorate which underpins this thesis) need a wider understanding of the complex relationship between frustration and engagement with the games that they produce. They cannot rely exclusively on an approach to frustration based upon its removal at all costs. They also need to be able to understand when to include potentially frustrating elements in their games and how to design those elements such that they increase rather than harm player engagement. Thesis – Generating Engagement With Video Games Through Frustration And Hindrance. View Adam’s Research Outputs
2016 | Padraig Boulton
Recognition of Specific Objects Regardless of Depiction CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Peter Hall Industrial Partner: Disney Research Industrial Supervisor: Oliver Schilke
Recognition is among the most important of all open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance when given real-world images (photographs). However, with one exception, the performance of every mechanism for recognition falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form. This project addresses that important literature gap by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on state-of-the-art work which is alone in generalising uniformly across many depictions. In this case, the objects of interest are specific objects rather than visual object classes; more particularly, the objects represent visual IP as defined by the Disney corporation. Thus an object could be ‘Mickey Mouse’, and the task would be to detect ‘Mickey Mouse’ photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt, and so on. Currently we are investigating how different art styles map salient information of object classes or characters, and using this to develop a recognition framework that can use examples from artistic styles to learn a domain-agnostic classifier capable of generalising to unseen depictive styles. Thesis – Recognition of Specific Objects Regardless of Depiction (Redacted Version)
2018 | Jack Brett
Augmented Music Interaction and Gamification Industrial Partner: ROLI Industrial Supervisor: Corey Harrower CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Christos Gatzidis ‘Without music, life would be a mistake’ – Friedrich Nietzsche. Learning music theory or a musical instrument in the traditional sense is seen as tedious, requiring rote learning and a great deal of commitment. In the current market there are copious musical applications, from creation and mixing to mobile instruments and a small handful of actual games. Most of these applications offer little to no learning and do not cater for beginners. The problem is that music is fun, but those beginning the learning journey may find the initial phase daunting because there is so much to contemplate. Learning applications such as Yousician have helped to bridge this gap between traditional learning and modern technology using ‘gamification’ – adding elements of games such as leaderboards and instant gratification. While these learning applications do offer a more engaging experience, users still abandon the learning process after a short period of time. The trouble with learning any instrument, and generally improving as a musician, is that a large amount of rote learning is required: simply playing the same rhythms repeatedly to help internalise your own sense of rhythm, or playing scales over and over again to help memorise them. Current applications and methods of gamification focus on adding game elements to a learning environment or lesson, whereas we are looking to develop games in which the mechanics are the learning components. We aim to develop learning games with the use of new and existing musical technology created by ROLI, as well as leveraging new technology such as virtual/augmented reality. These technologies can open new doors for innovative, engaging and fun experiences to help aid the learning process. The end goal is to develop games in which users/students can learn and practise whilst avoiding boredom, putting fun at the core – it is learning without ‘learning’. View Jack’s Research Outputs: http://jackbrett.co/
2021 | Daniel Castle
Efficient and Natural Proofs and Algorithms Ph.D (University of Bath): I’m Daniel, a new CDE-aligned PhD student at the University of Bath. I recently completed a 4-year masters in computer science here at Bath, during which I particularly enjoyed the theoretical aspects of the subject. I was fortunate to meet and later work under the supervision of members of the mathematical foundations of computation research group, including Dr Willem Heijltjes and Professor Nicolai Vorobjov, who were extremely helpful throughout and later encouraged me to apply for a PhD. I chose to stay on at Bath because of the great people I met, the exciting work being done in the group I am now a part of, and, of course, the wonderful city. My interests broadly lie at the intersection of mathematics and computer science – in particular, the field of proof theory, which studies mathematical proofs as formal objects. This is important from a computer science perspective because of a direct correspondence between computer programs and proofs, leading to applications in areas such as automatic software verification and the design of new programming languages.
2021 | Rory Clark
Understanding Hand Interactions and Mid-Air Haptic Responses within Virtual Reality and Beyond CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Feng Tian Industrial Partner: Ultraleap Industrial Supervisor: Adam Harwood Hand tracking has long been seen as a futuristic interaction, firmly situated in the realms of sci-fi. Recent developments and technological advancements have brought that dream into reality, allowing for real-time interactions by naturally moving and positioning your hand. While these developments have enabled numerous research projects, it is only recently that businesses and devices have truly started to implement and integrate the technology into their different sectors. Numerous devices are shifting towards a fully self-contained ecosystem, where the removal of controllers could significantly help in reducing barriers to entry. Prior studies have focused on the effects or possible areas for implementation of hand tracking, but rarely focus on direct comparisons of technologies, nor do they attempt to reproduce lost capabilities. With this prevailing background, the work presented in this thesis aims to understand the benefits and drawbacks of hand tracking when treated as the primary interaction method within virtual reality (VR) environments. Coupled with this, the implementation and usage of novel mid-air ultrasound-based haptics attempt to reintroduce feedback that would otherwise have been achieved through conventional controller interactions. Two unique user studies were undertaken, testing core underlying interactions within VR that represent common instances found throughout simulations. The first study focuses on the interactions presented within 3D VR user interfaces, with a core topic of buttons, while the second directly compares input and haptic modalities within two different fine motor skill tasks. These studies are coupled with the development and implementation of a real-time user study recording toolkit, allowing for significantly heightened user analysis and visual evaluation of interactions. Results from these studies and developments make valuable contributions to the research and business knowledge of hand-tracking interactions, as well as providing a uniquely valuable open-source toolkit for other researchers to use. This thesis covers work undertaken at Ultraleap over varying projects between 2018 and 2021. Thesis – Understanding Hand Interactions and Mid-Air Haptic Responses within Virtual Reality and Beyond
2017 | Kenneth Cynric Dasalla
Effects of Natural Locomotion in VR CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Christof Lutteroth The project aims to investigate the use of depth-sensing cameras and positional tracking technologies to dynamically composite different visual content in real time for mixed-reality broadcasting applications. This could involve replacing green-screen backgrounds with dynamic virtual environments, or augmenting 3D models into a real-world video scene. A key goal of the project is to keep production costs as low as possible. The technical research will therefore be undertaken predominantly with off-the-shelf consumer hardware to ensure accessibility. At the same time, the developed techniques also need to be integrated with existing media production techniques, equipment, and approaches, including user interfaces, studio environments, and content creation.
2018 | Sydney Day
Humanoid Character Creation Through Retargeting CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Lihua You Industrial Partner: Axis Animation Industrial Supervisor: Matt Hooker
This project explores the automatic creation of rigs for humanoid characters with associated animation cycles and poses. Through retargeting, a number of techniques can be covered: automatic generation of facial blend shapes from a central reference library; retargeting of bipedal humanoid skeletons; and transfer of weights between characters of differing topologies. The key goals are to dramatically reduce the amount of time needed to rig certain types of character, thus freeing up the riggers to work on fancier, more complex rigs that cannot be automated.
2019 | Anamaria Weston
E-StopMotion: Reconstructing and Enhancing 3D Animation of Stop Motion Characters by Reverse Engineering Plasticine Deformation CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Darren Cosker Industrial Partner: Fat Pebble Stop motion animation is a popular creative medium with applications across the film, games, education and health industries. Traditionally this type of animation has been known as a two-dimensional (2D) art form, where the artist iteratively deforms a character and then takes photographs of it. The artist’s task can be overwhelming, as they have to reshape a character into hundreds of poses to obtain just a few seconds of animation. Moreover, features that took a lot of effort to create remain unseen due to the preferred 2D method of visualisation. The current project was a collaboration between Fat Pebble Games Studio and the Centre for Digital Entertainment (CDE) at the University of Bath. The aim was to create a novel pipeline for reconstructing and enhancing stop motion animation from three-dimensional (3D) character scans obtained from multi-view images. Deformation, non-rigid registration and interpolation techniques were used to fulfil this aim. These procedures were aided by information about the material a character is made from and the character’s structure. The underlying inquiry of the project was to see whether reverse engineering the artist’s plasticine modelling process can result in physically plausible deformations between scans and more accurate non-rigid registrations. Message: a specialised pipeline for animation reconstruction and enhancement of handmade, plasticine characters can be created by combining deformation, non-rigid registration and interpolation. These techniques can be adapted to imitate the physical properties of plasticine by including information about the material and structure of the original models. Thesis: E-StopMotion: Reconstructing and Enhancing 3D Animation of Stop Motion Characters by Reverse Engineering Plasticine Deformation
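As a simple illustration of the interpolation step (generic linear blending of registered scans, not the thesis’ exact scheme): once two scans share vertex correspondence, intermediate poses can be generated by blending positions.

```python
import numpy as np

def blend_scans(verts_a, verts_b, t):
    """Linear blend of two registered (N, 3) vertex arrays, 0 <= t <= 1.

    A plasticine-aware scheme would additionally constrain the blend,
    e.g. to preserve volume; this is only the baseline idea.
    """
    return (1.0 - t) * verts_a + t * verts_b
```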
2021 | Javier DeHesa
A Novel Neural Network Architecture with Applications to 3D Animation and Interaction in Virtual Reality CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Dr Julian Padget, Dr Christof Lutteroth Industrial Partner: Ninja Theory Industrial Supervisor: Andrew Vidler
Realistic real-time character animation in 3D virtual environments is a difficult task, especially as graphics fidelity and production standards continue to rise in the industry. Motion capture is an essential tool to produce lifelike animation, but by itself, it does not solve the complexities inherent to real-time environments. In the case of interactive characters in virtual reality, this difficulty is taken to another level, as the animation must dynamically adapt to the free actions of the user. On top of that, these actions, unlike the touch of a button, are not easily interpretable, introducing new challenges to interaction design. We propose a novel data-driven approach to these problems that takes most of this complexity out of the hands of the developers and into automated machine-learning methods. We propose a new neural network architecture, “grid-functioned neural networks” (GFNN), that is particularly well suited to model animation problems. Unlike previous proposals, GFNN features a grid of expert parameterisations associated with specific regions of a multidimensional domain, making it more capable of learning local patterns than conventional models. We give a full mathematical characterisation of this architecture as well as practical applications and results, evaluating its benefits as compared with other state-of-the-art models. We then propose a complete framework for human-character interaction in virtual reality built upon this model, along with gesture recognition and behaviour planning models. The framework establishes a novel general data-driven approach to the problem applicable to a variety of scenarios, as opposed to existing ad hoc solutions to specific cases. This is described at an abstract, general level and in the context of a particular case study, namely virtual sword fighting, demonstrating the practical implementation of these ideas. Our results show that grid-functioned neural networks surpass other comparable models in aspects like control accuracy, predictability and computational performance, while the evaluation of our interaction framework case study situates it as a strong alternative to traditional animation and interaction development techniques. This contributes to the growing trend of incorporating data-driven systems into video games and other interactive media, which we foresee as a convergent future for industry and academia. View Javier’s Research Outputs |
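As a loose illustration of the idea behind a grid of expert parameterisations (a toy 1D sketch under my own assumptions, not the GFNN architecture itself): each grid node owns its own parameters, and a query is answered by interpolating the experts nearest to it in the domain.

```python
import numpy as np

def grid_blend(x, grid_xs, expert_outputs):
    """Blend per-node expert outputs by the query's position in a 1D grid.

    grid_xs: sorted node positions; expert_outputs: one value (or vector)
    per node. A query between two nodes gets a linear blend of their
    experts, so each expert specialises in a local region of the domain.
    """
    i = int(np.clip(np.searchsorted(grid_xs, x) - 1, 0, len(grid_xs) - 2))
    w = (x - grid_xs[i]) / (grid_xs[i + 1] - grid_xs[i])
    return (1 - w) * expert_outputs[i] + w * expert_outputs[i + 1]
```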
2018 | Aaron Demolder
Image Rendering for Laser-Based Holographic Displays CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Valery Adzhiev, Dr Hammadi Nait Charif Industrial Partner: VividQ Industrial Supervisor: Andrzej Kaczorowski VividQ has developed world-leading software technology that provides holographic computation and real-time holographic 3D display. VividQ now requires research and development work to generate a range of assets that best showcase the technology, including high-quality projected objects with realistic textures/materials (e.g. glossy metallic surfaces, semi-transparencies) and visual effects such as smoke and fire. This R&D will also facilitate the delivery of Multi-View and AR experiences that overlay holographic objects onto the real world. Existing computer graphics, visual effects and video game technologies provide the basis for rendering digital content. Rendering images for a laser-based holographic display presents unique challenges compared to traditional 2D or stereoscopic display panels, as the ability of the observer to focus at varying depths plays a large role in the perception of content. This project will use computer graphics, visual effects and video game technologies to develop new techniques to improve image rendering for laser-based holographic displays. This project aims to: 1) improve the quality and range of holographic objects that can be created and displayed – this R&D will enable VividQ to remain the market leader by increasing the level of object realism; and 2) assist the development of VividQ’s software framework to improve the visual representation of 3D objects and AR experiences, thereby improving the experience of users. View Aaron’s Research Outputs
2021 | Rahul (Ray) Dey
Procedural Generation of Features for Volumetric Terrains using a Rule-Based Approach CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Christos Gatzidis, Professor Xiaosong Yang Industrial Partner: Sony Interactive Entertainment Industrial Supervisor: Jason G Doig Terrain generation is a fundamental requirement of many computer graphics simulations, including computer games, flight simulators and environments in feature films. Volumetric representations of 3D terrains can create rich features that are either impossible or very difficult to construct with other terrain generation techniques, such as overhangs, arches and caves. While a considerable amount of literature has focused on the procedural generation of terrains using heightmap-based implementations, there is little research on procedural terrains utilising a voxel-based approach. This thesis contributes two methods to procedurally generate features for terrains that utilise a volumetric representation. The first method is a novel grammar-based approach to generate overhangs and caves from a set of rules. This voxel grammar provides a flexible and intuitive method of manipulating voxels from a set of symbol/transform pairs that can produce a variety of different feature shapes and sizes. The second method implements three parametric functions for overhangs, caves and arches. This generates a set of voxels procedurally based on the parameters of a function selected by the user. A small set of parameters for each generator function yields a widely varied set of features and provides the user with a high degree of expressivity. In order to analyse this expressivity, the thesis’ third contribution is an original method of quantitatively valuing the result of a generator function. This research is a collaboration with Sony Interactive Entertainment and their proprietary game engine PhyreEngine™. The methods presented have been integrated into the engine’s terrain system. Thus, there is a focus on real-time performance so as to be feasible for game developers to use while adhering to the strict sub-second frame times of modern computer games. Thesis – Procedural generation of features for volumetric terrains using a rule-based approach. View Ray’s Research Outputs
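To give a feel for what a symbol/transform rule system over voxels might look like (a minimal hypothetical sketch; the thesis’ actual grammar and rule format will differ):

```python
def apply_voxel_grammar(filled, rules, symbol, pos, depth):
    """Expand grammar symbols into carve operations on a voxel set.

    filled: set of (x, y, z) occupied voxels. rules maps a symbol to
    (carve_offsets, successors), where successors are (symbol, offset)
    pairs to expand next. Recursion depth bounds the feature size.
    """
    if depth == 0 or symbol not in rules:
        return
    carve_offsets, successors = rules[symbol]
    x, y, z = pos
    for dx, dy, dz in carve_offsets:
        filled.discard((x + dx, y + dy, z + dz))  # carve out empty space
    for next_symbol, (dx, dy, dz) in successors:
        apply_voxel_grammar(filled, rules, next_symbol,
                            (x + dx, y + dy, z + dz), depth - 1)

# A toy rule: 'cave' carves its cell, then continues sideways and downwards.
rules = {"cave": ([(0, 0, 0)], [("cave", (1, 0, 0)), ("cave", (0, -1, 0))])}
```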
2020 | Era Dorta
Learning models for intelligent photo editing CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Neill Campbell Industrial Partner: Anthropics Technology Ltd Industrial Supervisor: Ivor Simpson This thesis addresses the task of photo-realistic semantic image editing, where the goal is to provide intuitive controls to modify the content of an image such that the result is indistinguishable from a real image. In particular, the focus is on editing applied to human faces, although the proposed models can readily be applied to other types of images. We build on recently proposed deep generative models, which allow learning image editing operations from data. However, there are a number of limitations in these models, two of which are explored in this thesis: the difficulty of modelling high-frequency image details, and the inability to edit images at arbitrarily high resolutions. The difficulty of modelling high-frequency image details is typical of methods with explicit likelihoods. This work presents a novel approach to overcome this problem, achieved by moving beyond the common assumption that the pixels in the image noise distribution are independent. In most scenarios, breaking away from this independence assumption leads to a significant increase in computational cost. Additionally, it introduces issues in the estimability of the distribution due to the considerable increase in the number of parameters to be estimated. To overcome these obstacles, we present a tractable approach for a correlated multivariate Gaussian data likelihood, based on sparse inverse covariance matrices. This approach is demonstrated on variational autoencoder (VAE) networks. An approach to perform image edits using generative adversarial networks (GANs) at arbitrarily high resolutions is also proposed. The method relies on restricting the types of edits to smooth warps, i.e. geometric deformations of the input image. These warps can be efficiently learned and predicted at a lower resolution, and easily upsampled to be applied at arbitrary resolutions with minimal loss of fidelity. Moreover, paired data is not needed for training the method, i.e. example images of the same subject with different semantic attributes. The model offers several advantages with respect to previous approaches that directly predict the pixel values: the edits are more interpretable, the image content is better preserved, and partial edits can be easily applied. Thesis – Learning models for intelligent photo editing. View Era’s Research Outputs
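For context (a standard identity, stated in my own notation rather than the thesis’): modelling pixels x with a correlated Gaussian whose precision matrix Λ (the inverse covariance) is sparse keeps the log-likelihood tractable, since it needs Λ directly rather than its inverse:

```latex
\log p(\mathbf{x}) \;=\; \tfrac{1}{2}\log\det\boldsymbol{\Lambda}
\;-\; \tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Lambda}\,(\mathbf{x}-\boldsymbol{\mu})
\;-\; \tfrac{d}{2}\log 2\pi
```

With a sparse Λ, the quadratic form costs time proportional to the number of non-zero entries, which is what makes correlated pixel noise affordable.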
2016 | Tara Douglas
Tales of the Tribes: Animation as a Tool for Indigenous Representation CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Bronwen Thomas, Dr Chindu Sreedharan Industrial Partner: West Highland Animation and Adivasi Arts Trust Industrial Supervisor: Leslie MacKenzie In India, animation practice is almost completely dominated by commercial production. Much of this is outsourced to India by foreign companies, but even the animation that is produced for national broadcast shows characteristics of animation design of Western origination with regard to content, presentation and art style. Consequently, modes of commercially driven animation are dictating the expectations of the medium in India, and have become widely regarded as the normative standard. The forces of global expansion have accelerated the arrival of commercial media entertainment into the various peripheral regions of India. The indigenous communities there have been represented by outsiders since colonial times and have no representation of their own in the medium of animation. As a consequence, young indigenous people are growing up with media entertainment that has no cultural relevance to them. It is challenging their identities, and through this process they are losing touch with their own cultural heritage. In this research I set out to investigate whether animation is a medium that can be used to retell indigenous folktales and reconnect young indigenous audiences to their traditional narratives. The development and production of a sample collection of short animation films, Tales of the Tribes, through participatory film-making practice presents case studies of the process of collaborating with indigenous artists and cultural practitioners from selected communities, examining these issues of representation and investigating how adaptation can be negotiated from oral to audio-visual forms of cultural expression. Thesis: Tales of the tribes: animation as a tool for indigenous representation
|
2018 | Kwamina Edum-Fotwe
Procedural Reconstruction of Architectural Parametric Models from Airborne and Ground Laser Scans CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Paul Shepperd Industrial Partner: Cityscape Industrial Supervisor: Dan Harper This research addresses the problem of efficiently and robustly reconstructing semantically rich 3D architectural models from laser-scanned point-clouds. It first covers the pre-existing literature and industrial developments in active sensing, 3D reconstruction of the built environment and procedural modelling. It then documents a number of novel contributions to the classical problems of change detection between temporally varying multi-modal geometric representations and automatic 3D asset creation from airborne and ground point-clouds of buildings. Finally, the thesis outlines ongoing research and avenues for continued investigation – most notably fully automatic temporal update and revision management for city-scale CAD models via data-driven procedural modelling from point-clouds. In short, this thesis documents the outcomes of a research project whose primary aim was to engineer fast, accurate, and sparse building reconstruction algorithms: the efficient recovery of sparse architectural models from aerial and ground-based LiDAR (Light Detection and Ranging) scans. The key objective is to be able to turn unstructured point-clouds into clean, lightweight 3D models for use in interactive visualisation and simulation. The research relies heavily on Computer Graphics, Image Processing and Data-Driven Procedural Modelling. Thesis: Procedural Reconstruction of Architectural Parametric Models from Airborne and Ground Laser Scans. View Kwamina’s Research Outputs
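Purely as illustration of a common first step in building reconstruction from point-clouds (a generic RANSAC plane fit, not the thesis’ algorithm):

```python
import numpy as np

def ransac_plane(points, iters=500, tol=0.05, seed=0):
    """Find the inliers of one dominant plane in an (N, 3) point cloud.

    Repeatedly fits a plane through three random points and keeps the
    candidate explaining the most points within distance tol. Facade
    and roof extraction pipelines often iterate fits like this.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                      # skip degenerate (collinear) samples
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```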
|
2023 | Tayfun Esenkaya
One Is All, All Is One: Cross-Modal Displays for Inclusive Design and Technology MSCA Ph.D (University of Bath): Academic Supervisors: Dr Michael Proulx, Professor Eamonn O’Neill Sensory substitution phenomena transform the representation of one sensory form into an equivalent from a different sensory origin. For example, a visual feed from a camera can be turned into something that can be touched, or sounds that can be heard. The immediate applications of this can be seen in developing assistive technologies that aid vestibular problems and visual and hearing impairments. This raises the question of whether perception with sensory substitution is processed like an image, a surface, or a sound. Sensory substitution techniques offer a great opportunity to dissociate the stimulus, the task and the sensory modality, and thus provide a novel way to explore the level of representation that is most crucial for cognition. Accordingly, state-of-the-art sensory substitution techniques contribute significantly to the understanding of how the brain processes sensory information and represents it with distinct qualia. This progressively advances cognitive theories with respect to multisensory perception and cross-modal cognition. Due to its versatility, sensory substitution also carries the applications of cognitive theories into other interdisciplinary research areas such as human-computer interaction (HCI). In HCI, cross-modal displays utilise sensory substitution techniques to augment users by enabling them to acquire sensory information via a sensory channel of different origin. The modular and flexible nature of cross-modal displays provides a supplementary framework that can appeal to a wider range of people whose physical and cognitive capabilities vary on a continuum. The present thesis focuses on the inclusive applications of sensory substitution techniques and cross-modal displays. Chapter I outlines the inclusive design mindset and proposes a case for applications of sensory substitution techniques for all of us. Chapters II and IV evaluate cross-modal displays in digital emotion communication and navigation applications respectively. Chapter III offers a methodology to study sensory substitution in a multisensory context. The present thesis evidences that perception with cross-modal displays utilises the capabilities of various senses. It further investigates the implications of this and suggests that cross-modal displays can benefit from multisensory combinations. With multisensory combination, cross-modal displays with unisensory and multisensory modes can deliver complementary feedback. In this way, it is argued, users can gain access to the same inclusive information technology with customised sensory channels. Overall, the present thesis approaches sensory substitution phenomena from an HCI perspective, with theoretical implications grounded in the cognitive sciences. Thesis – One Is All, All Is One: Cross-Modal Displays for Inclusive Design and Technology
|
2019 | Alexz Farrall
The guide to mHealth implementation Project Partner: Avon and Wiltshire Mental Health Partnership NHS Trust (AWP) CDE EngD in Digital Entertainment (University of Bath): The project will not only be a collaboration between the University of Bath and AWP, but will also work alongside Bristol’s Medical School to directly incorporate stakeholders into the design and evaluation of a new digital intervention. Smartphone apps are an increasingly popular means of delivering psychological interventions to patients suffering from reduced well-being and mental disorders. One such population that suffers from reduced well-being is the medical student populace, with recent studies identifying 27.2% to have depressive symptoms, 11.1% to have suicidal ideation, and 45-56% to have symptoms suggestive of burnout. Moreover, through the utilisation of advanced human-computer interaction (HCI) and behaviour therapy techniques, this project aims to contribute innovative research to increase the effectiveness of existing digital mental health technologies. Thus, it is the hope of the research team to actualise and implement the smartphone app within the NHS and create new opportunities to support the entire medical workforce.
|
2022 | Eshani Fernando
Efficient Generation of Conversational Databases – Conversational AI Technologies for Digital Virtual Humans Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Prof Jian Chang Intelligent virtual assistants/chatbots have been a rapidly growing technology over the last decade, and the development of smart voice recognition software such as Siri, Alexa and Google Assistant has made chatbots widespread among the general community. Conversational AI has been widely researched for many years for the understanding and generation of meaningful conversation. Generating context-aware conversation is challenged by understanding the dynamic context which keeps a conversation flowing. The development of transformer-based models such as BERT and the GPTs has accelerated this area of research. However, these models have been seen to generate incorrect and inconsistent conversation threads, deviating from meaningful human-like conversation. To this end, this research aims to support interaction among virtual agents, or between an agent and a human user. The aim of my research is to develop novel AI-based technologies for maintaining and evolving a dynamic conversation database that provides virtual agents with the capacity to build up understanding, make corrections and update the context while dialogues continue. The proposed database, powered by AI and machine learning techniques, will help automatically (or semi-automatically) train and drive the conversational AI, leading to human-like conversations.
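Purely as a toy illustration of the retrieval side of a conversation database (generic embedding lookup; not the proposed system, and the embeddings are assumed to come from some placeholder encoder):

```python
import numpy as np

def retrieve(query_vec, memory_vecs, memory_texts, k=3):
    """Return the k stored utterances whose embeddings best match a query.

    memory_vecs: (N, d) array of embeddings for stored conversation turns.
    Cosine similarity ranks how relevant each stored turn is as context
    for the agent's next response.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    top = np.argsort(m @ q)[::-1][:k]
    return [memory_texts[i] for i in top]
```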
2017 | Daniel Finnegan
Compensating for Distance Compression in Virtual Audiovisual Environments CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Eamonn O’Neill, Dr Michael Proulx Industrial Partner: Somethin’ Else Industrial Supervisor: Rob McHardy Virtual environments are increasingly being used for various applications. In recent times, with the advent of consumer-grade systems, virtual reality has reached a critical mass and has exploded in terms of application domains. Extending from games and entertainment, VR is also applied in military training, remote surgery, flight simulation, co-operative work, and education. While all of these applications require careful design with respect to the interaction and aesthetics of the environment, they differ in their requirement for veridical realism: the impression of suspending disbelief to the point where perception in the environment is equal to the real world. At the same time, research in human-centred disciplines has shown predictable biases and ‘errors’ in perception with respect to the environment intended by the designer. This can be a challenge when certain perceptual phenomena prohibit the applicability of VR due to a discrepancy between what is rendered and what is actually perceived by the observer. This thesis is focused on a specific perceptual phenomenon in VR, namely that of distance compression, a term describing the widespread underestimation of distances in VR relative to the real world. This perceptual anomaly occurs not only in visual virtual environments: compression has been observed and studied in auditory-only and audiovisual spaces too. The contribution of this thesis is a novel technique for reducing compression, and its effectiveness is demonstrated in a series of empirical evaluations. First, research questions are synthesised from existing literature, and the problem is introduced and explained through a rigorous review of previous literature in the context of spatial audio, virtual reality technology, psychophysics, and multi-sensory integration. Second, the technique for reducing distance compression is proposed from an extensive literature review. Third, the technique is empirically tested through a series of studies involving human participants, virtual reality hardware, and bespoke software engineered for each study. Finally, the results from the studies are discussed and concluded with respect to the research questions proposed.
Thesis: Compensating for Distance Compression in Virtual Audiovisual Environments. View Daniel’s Research Outputs
2019 | Isabel Fitton
Improving skills learning through VR Industrial Partner: PwC UK Industrial Supervisor: Jeremy Dalton CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Dr Christof Lutteroth, Dr Michael Proulx, Dr Chris Clarke
More affordable, consumer-friendly head-mounted displays (HMDs) have generated excitement around the potential for virtual reality (VR) to revolutionise training and education. VR promises to support people in learning new skills by immersing learners in virtual environments where they can practise those skills and receive feedback on their progress, but important questions remain regarding the transferability and effectiveness of acquiring skills in a virtual world. Current state-of-the-art VR training often fails to use learning theories to underpin its design and is not necessarily optimised for learning. In this project we will investigate how VR can help learners acquire manual skills, comparing different approaches to job skills training in VR with the goal of informing the development of effective and engaging training tools which are underpinned by theory. We aim to produce results relating to enhancing ‘hard’ skills training in virtual environments which are applicable to industry, as required in engineering and manufacturing for example, and to support the wider adoption of VR training tools. We will design VR learning simulations and new approaches to support learners, and test these simulations to determine whether skills learned in VR transfer to the real world. We will also compare virtual training simulations to more traditional learning aids such as instructional videos.
|
2018Dhana Frerichs |
Computer Graphics Simulation of Organic and Inorganic Optical and Morphological Appearance Changes CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Christos Gatzidis Industrial Partner: Ninja Theory Industrial Supervisor: Andrew Vidler Organic bodies are subject to internal biological, chemical and physical processes as well as environmental interactions after death, which cause significant structural and optical changes. Simulating corpse decomposition and the environmental effects on its surface can help improve the realism of computer-generated scenes and provide the impression of a living, dynamic environment. This doctoral thesis aims to simulate post-mortem processes of the human body and their visual effects on its appearance. The proposed method is divided into three processes: surface weathering due to environmental activities, livor mortis and natural mummification by desiccation. The decomposing body is modelled by a layered model consisting of a tetrahedral mesh representing the volume and a high-resolution triangle surface mesh representing the skin. A particle-based surface weathering approach is employed to add environmental effects. The particles transport substances that are deposited on the object’s surface. A novel, biologically-inspired blood pooling simulation is used to recreate the physical processes of livor mortis and its visual effects on the corpse’s appearance. For mummification, a physically-based approach is used to simulate the moisture diffusion process inside the object and the resulting deformations of the volume and skin. To simulate the colouration changes associated with livor mortis and mummification, a chemically-based layered skin shader that considers time and spatially varying haemoglobin, oxygen and moisture contents is proposed. The suggested approach is able to model changes in the internal structure and the surface appearance of the body that resemble the post-mortem processes of livor mortis, natural mummification by desiccation and surface weathering. The surface weathering approach is able to add blemishes, such as rust and moss, to an object’s surface while avoiding inconsistencies in deposit sizes and discontinuities on texture seams. The livor mortis approach is able to model the pink colouration changes caused by blood pooling, pressure-induced blanching effects, fixation of hypostasis and the purple discolouration due to oxygen loss in blood. The mummification method is able to reproduce volume shrinkage effects caused by moisture loss, skin wrinkling and skin darkening that are comparable to real mummies. Thesis: Computer graphics simulation of organic and inorganic optical and morphological appearance changes View Dhana’s Research Outputs |
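The mummification stage above is described as a physically based moisture-diffusion process. As a minimal, grid-based sketch of one explicit diffusion step (the thesis works on a tetrahedral volume mesh; the constants here are placeholders):

```python
import numpy as np

def diffuse_moisture(m, alpha=0.1, dt=1.0, evaporation=0.01):
    """One explicit Euler step of moisture diffusion with drying.

    m           : 2D array of moisture content in [0, 1]
    alpha       : diffusion coefficient (placeholder value)
    evaporation : per-cell loss rate driving desiccation
    """
    # 5-point Laplacian with replicated (zero-flux) boundaries
    up    = np.roll(m, -1, axis=0); up[-1, :]   = m[-1, :]
    down  = np.roll(m,  1, axis=0); down[0, :]  = m[0, :]
    left  = np.roll(m, -1, axis=1); left[:, -1] = m[:, -1]
    right = np.roll(m,  1, axis=1); right[:, 0] = m[:, 0]
    lap = up + down + left + right - 4.0 * m
    return np.clip(m + dt * (alpha * lap - evaporation * m), 0.0, 1.0)

m = np.ones((64, 64))      # fully hydrated tissue
for _ in range(100):       # falling moisture would drive shrinkage and darkening
    m = diffuse_moisture(m)
print("mean moisture after 100 steps:", round(float(m.mean()), 3))
```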
2018David Gillespie |
User-appropriate viewer for high-resolution interactive engagement with 3D digital cultural artefacts CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Hongchuan Yu Industrial Partner: National Museums Liverpool Industrial Supervisor: Danny Boardman The core mission of museums and cultural institutions is the preservation, study and presentation of cultural heritage content. In this technological age, the creation of digital datasets and archives has been widely adopted as one way of seeking to achieve some or all of these goals. However, there are many challenges with the use of these data, and in particular, the large numbers of 3D digital artefacts that have been produced using methods such as noncontact laser scanning. As public expectation for more open access to information and innovative digital media increases, there are many issues that need to be rapidly addressed. The novel nature of 3D datasets and their visualisation presents unique issues that impede use and dissemination. Key questions include the legal issues associated with 3D datasets created from cultural artefacts; the complex needs of users who are interacting with them; a lack of knowledge about how to texture and assess the visual quality of the datasets; and how the visual quality of the presented dataset relates to the perceptual experience of the user. This engineering doctorate, based on an industrial partnership with the National Museums of Liverpool and Conservation Technologies, investigates these questions and offers new ways of working with 3D cultural heritage datasets. The research outcomes in the thesis provide an improved understanding of the complexity of intellectual property law in relation to 3D cultural heritage datasets and how this impacts the dissemination of these types of data. It also provides tools and techniques that can be used to understand the needs of a user when interacting with 3D cultural content. Additionally, the results demonstrate the importance of the relationship between texture and polygonal resolution and how this can affect the perceived visual experience of a visitor. It finds that there is an acceptable cost to texture and polygonal resolution to offer the best perceptual experience with 3D digital cultural heritage. The results also demonstrate that a non-textured mesh may be as highly received as a high-resolution textured mesh. View David’s Research Outputs |
2017Oliver Gingrich |
Evoking Presence on Pepper’s Ghost Displays CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Alain Renaud, Dr Richard Southern, Dr Zhidong Xiao Industrial Partner: Musion 3D This thesis proposes a theoretic framework for the analysis of presence research in the context of Pepper’s ghost. Pepper’s ghost as a media platform offers new possibilities for performances, real-time communication and media art. The thesis gives an overview of the 150-year history of, as well as contemporary art creation on, Pepper’s ghost, with a specific focus on telepresence. Telepresence, a concept that has infused academic debate since 1980, concerns remote communication: perceived presence transmitted through networked environments. This discourse of telepresence revealed shortcomings in current analytical frameworks. This thesis presents a new model for presence in the context of my research. The standard telepresence model (STM) assumes a direct link between three fundamental components of presence and a measurable impact on the audience. Its three pillars are conceptualised as the presence co-factors immersion, interactivity and realism, presented individually in the framework of my practice. My research is firmly rooted in the field of media art and considers the effect of presence in the context of Pepper’s ghost. This Victorian parlour trick serves as an interface, an intermediary for the discussion of live-streaming experiences. Three case studies present pillars of the standard model, seeking answers to elemental questions of presence research. The hypothesis assumes a positive relationship between presence and its three co-factors. All case studies were developed as media art pieces in the context of Pepper’s ghost. As examples, they illustrate the concept of presence in respect of my own creative practice. KIMA, a real-time sound representation experience, proposes a form of telepresence that relies exclusively on immersive sound as a medium. Immersion as a co-factor of presence is analysed and explored creatively on the Pepper’s ghost canvas. Transmission, the second case study, investigates the effect of physical interaction on presence experiences. An experiment helps to draw inferences in a mixed-method approach. The third case study, Aura, discusses variations of realism as a presence co-factor in the specific context of Pepper’s ghost. The practical example is accompanied by an in-depth meta-analysis of realism factors, specifically focusing on the intricacies of Pepper’s ghost creative production processes. Together, these three case studies help to shed light on new strategies to improve production methods with possible impact on presence in Pepper’s ghost-related virtual environments – and beyond. Thesis: Evoking presence through creative practice on Pepper’s ghost displays View Oliver’s Research Outputs |
2019Michal Gnacek |
Improved Affect Recognition in Virtual Reality Environments CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Emili Balaguer-Ballester, Dr Ellen Seiss, Dr Theodoros Kostoulas Industrial Partner: emteq Industrial Supervisor: Charles Nduka I am working with Emteq on improving affect recognition using various bio-signals, with the hope of creating better experiences and creating completely new ones that have the potential to tackle physical and mental health problems in previously unexplored ways. The ever-increasing use of virtual reality (VR) in research as well as mainstream consumer markets has created a need for understanding users’ affective state. This would not only guide the development of the technology but also allow for the creation of brand-new experiences in entertainment, healthcare, and training applications. This research project will build on the existing research conducted by Emteq with their patented device for affect detection in VR. In addition to the already implemented sensors (electromyography, photoplethysmography and inertial measurement unit), which need to be evaluated, other modalities need to be explored for potential inclusion and their ability to determine emotions. |
2018Alex Gouvatsos |
3D Storyboarding for Modern Animation CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Jian Jun Zhang, Dr Zhidong Xiao Industrial Partner: Hibbert Ralph Animation Industrial Supervisor: Jerry Hibbert Animation is now a classic medium that has been practised for over a century. While Disney arguably made it mainstream with some hand-drawn classics, today’s industry is focused on Three-Dimensional (3D) animation. In modern 3D animation productions, there have been significant leaps in terms of optimising, automating and removing manual tasks. This has allowed the artistic vision to be realised within time and budget and empowered artists to do things that in the past would have been technically more difficult. However, most existing research is focused on specific tasks or processes rather than the pipeline itself. Moreover, it is mostly focused on elements of the animation production phase, such as modelling, animating and rendering. As a result, pre-production stages like storyboarding are still done in the traditional way, often drawn by hand. Because of this disparity between the old and the new, the transition from storyboarding to 3D is prone to errors. 3D storyboarding is an attempt to adapt the pre-production phase of modern animation productions. By allowing storyboard artists access to simple but scale-accurate 3D models early on, drawing times as well as transition times between pre-production and production can be reduced. However, 3D storyboarding comes with its own shortcomings. By analysing existing pipelines, points of potential improvement are identified. Motivated by these points, alternative workflows, automated methods and novel ideas that can be combined to make 3D animation pipelines more efficient are presented. The research detailed in this thesis focuses on the area between pre-production and production. A pipeline is presented that consists of a portfolio of projects that aim to: • Generate place-holder character assets from a drawn character line-up • Create project files with scene and shot breakdowns using screenplays (a toy sketch of this step follows below) • Empower non-experts to pose 3D characters using Microsoft Kinect • Pose 3D assets automatically by using 2D drawings as input.
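As a toy illustration of the screenplay-driven breakdown step, the sketch below parses standard scene headings from a plain-text screenplay and emits one scene record per heading. The record fields and example script are hypothetical; the project's actual file formats are not described here.

```python
import re

# Standard screenplay scene headings look like "INT. WORKSHOP - DAY".
SCENE_HEADING = re.compile(r"^(INT\.|EXT\.)\s+(.+?)\s*-\s*(DAY|NIGHT)\s*$")

def breakdown(screenplay_text):
    """Split a plain-text screenplay into (location, time, body) scene records."""
    scenes, current = [], None
    for line in screenplay_text.splitlines():
        m = SCENE_HEADING.match(line.strip())
        if m:
            current = {"interior": m.group(1) == "INT.",
                       "location": m.group(2), "time": m.group(3), "body": []}
            scenes.append(current)
        elif current is not None:
            current["body"].append(line)
    return scenes

script = """INT. WORKSHOP - DAY
The animator blocks out the first shot.
EXT. GARDEN - NIGHT
A placeholder character walks through frame."""
for s in breakdown(script):
    print(s["location"], s["time"], len(s["body"]), "line(s)")
```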
|
2016David Greer |
Physics-based Character Locomotion Control with Large Simulation Time Steps CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Jian Jun Zhang, Dr Zhidong Xiao Industrial Partner: Natural Motion Industrial Supervisors: Joss Knight and Alberto Aguado Physically simulated locomotion allows rich and varied interactions with environments and other characters. However, control is difficult due to factors such as a typical character’s numerous degrees of freedom and small stability region, discontinuous ground contacts, and indirect control over the centre of mass. Previous academic work has made significant progress in addressing these problems but typically uses simulation time steps much smaller than those suitable for games. This project deals with developing control strategies using larger time steps. After describing some introductory work showing the difficulties of implementing a handcrafted controller with large physics time steps, three major areas of work are discussed. The first area uses trajectory optimization to minimally alter reference motions to ensure physical validity, in order to improve simulated tracking. The approach builds on previous work which allows ground contacts to be modified as part of the optimization process, extending it to 3D problems. Incorporating contacts introduces difficult complementarity constraints, and an exact penalty method is shown here to improve solver robustness and performance compared to previous relaxation methods. Trajectory optimization is also used to modify reference motions to alter characteristics such as timing, stride length and heading direction, whilst maintaining physical validity, and to generate short transitions between existing motions. The second area uses a sampling-based approach, previously demonstrated with small time steps, to formulate open-loop control policies which reproduce reference motions. As a prerequisite, the reproducibility of simulation output from a common game physics engine, PhysX, is examined and conditions leading to highly reproducible behaviour are determined. For large time steps, sampling is shown to be susceptible to physical invalidities in the reference motion but, using physically optimized motions, is successfully applied at 60 time steps per second. Finally, adaptations to an existing method using evolutionary algorithms to learn feedback policies are described. With large time steps, it is found to be necessary to use a dense feedback formulation and to introduce phase-dependence in order to obtain a successful controller, which is able to recover from impulses of several hundred newtons applied for 0.1 s. Additionally, it is shown that a recent machine learning approach based on support vector machines can identify whether disturbed character states will lead to failure, with high accuracy (99%) and with prediction times on the order of microseconds. Together, the trajectory optimization, open-loop control, and feedback developments allow successful control for a walking motion at 60 time steps per second, with a control and simulation time of 0.62 ms per time step. This means that it could plausibly be used within the demanding performance constraints of games. Furthermore, the availability of rapid failure prediction for the controller will allow more high-level control strategies to be explored in future. Thesis: Physics-based character locomotion control with large simulation time steps
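The ‘difficult complementarity constraints’ mentioned above have the form λ ≥ 0, φ(q) ≥ 0, λ·φ(q) = 0: a contact can only push, and only while the gap is closed. A deliberately tiny sketch of the exact-penalty idea on a single 1D contact follows; the toy objective and every coefficient are invented for illustration and are far simpler than the thesis formulation.

```python
import numpy as np
from scipy.optimize import minimize

MU = 10.0  # exact-penalty weight on the complementarity product

def objective(x):
    """x = (q, lam): gap q and contact force lam for one 1D contact."""
    q, lam = x
    tracking = (q + 0.1) ** 2               # reference motion penetrates (gap -0.1)
    dynamics = (0.1 + q - 0.05 * lam) ** 2  # toy force balance: lam lifts the body
    # With q >= 0 and lam >= 0 the product lam*q is non-negative, so
    # MU * lam * q is an exact penalty for the constraint lam * q = 0.
    return tracking + dynamics + MU * lam * q

res = minimize(objective, x0=[0.1, 1.0],
               bounds=[(0.0, None), (0.0, None)])   # q >= 0, lam >= 0
q, lam = res.x
# Expected optimum: q = 0 (contact closed), lam = 2 (force active), product = 0.
print(f"gap q = {q:.4f}, force lam = {lam:.4f}, product = {q * lam:.6f}")
```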
|
2017Lisa Haskel |
Participatory Design and Free and Open Source Software in the Not for Profit Sector ‘the Hublink Project’ CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Neal White Industrial Partner: Arts Catalyst Industrial Supervisor: Nicola Triscott This industry-based thesis undertakes a multifaceted and longitudinal exploration of the design and implementation of a Free/Libre and Open Source Software (FLOSS) based information system in a consortium of small-scale community organisations. The research is centred on the design, production and implementation of a case management system with and for a group of nine not-for-profit organisations in the London Borough of Tower Hamlets who work as a consortium. The system, called Hublink, is based on the FLOSS framework Drupal. The system was designed during 2013 and has been in everyday use by those organisations since January 2014, acting as the consortium’s primary information infrastructure. This research therefore encompasses both design and use. The design process was based on Participatory Design (PD) principles and methods. Because of the project’s long-term nature, Hublink has been an exceptional opportunity to focus on the legacy of a PD process into the later stages of the software development life-cycle. This research has therefore been able to draw on themes that have emerged through real-world use and extended collaboration and engagement. In this thesis I place the Hublink project description within literature covering Participatory Design, Community Informatics and FLOSS, extending into infrastructure, appropriation and end-user development. Through a literature review and presentation of evidence collected during this research project, a clear argument emerges that relates the mutual learning outcomes of Participatory Design to sustainability through infrastructuring activities, while also showing how the communities of practice of FLOSS projects create an infrastructure for not-for-profit organisations, enabling them to build sustainable systems that can meet their needs and accord with their values. The thesis argues that while Participatory Design strengthens the human element of infrastructure, FLOSS provides a complementary element of technical support, via the characteristics of generativity and extensibility, and their communities of practice. This research provides a deeply descriptive study that bridges design and use, centred on the core values of Participatory Design, contributing to the understanding and development of practices around sustainability and Participatory Design in the not-for-profit sector. The research offers a conceptual pathway to link FLOSS and Participatory Design, suggesting directions for future research and practice that enhance the connections between these two important areas of participatory production. Thesis: Participatory design and free and open source software in the not for profit sector – the Hublink Project
|
2018Charlotte Hoare |
Exploring synchronous second screen experiences to television CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Danae Stanton-Fraser, Professor Eamonn O’Neill Industrial Partner: BBC R&D Industrial Supervisor: Phil Stenton The way we interact with media has changed. Devices such as laptops, phones and tablets now supplement television watching for many. This behaviour allows viewers to engage more deeply with, or engage with content unrelated to, the television content they are simultaneously watching. This leads to the possibility of leveraging devices in a living room to deliver a synchronous, holistic experience over two screens: a companion screen experience. Although some examples of commercial companion screen experiences have been attempted, few have offered a genuinely enhanced experience to audiences. This thesis examines how it is possible to design experiences that truly add value to a television experience, asking the central research question: how should companion screen experiences be designed? A number of companion screen experiences are developed and evaluated. A comparison chapter discerns how using the space around a TV to deliver a companion experience impacts a user’s experience when compared to a companion experience delivered more traditionally on a tablet. This leads to a more thorough investigation of the orchestration of companion experiences, addressed by using the novel approach of involving television professionals and audience members in the very initial stages of developing a companion screen experience, as a way of generating design guidelines for a companion experience. A potential guideline is uncovered for further investigation in the form of a hypothesis for testing. This hypothesis is then put to the test in order to rigorously validate this design guideline for producers and designers of companion screen experiences. This rigorously validated design guideline then leads to an important implication for broadcasters when it comes to providing and producing companion screen experiences. A final contribution of this research is the many potential directions for future research that the thesis yields. Thesis: The Companion Experience: A Thesis from the Study of the Evolving Home Television Experience
|
2015Jake Hobbs |
Audience Engagement & Monetisation of Creative Content in Digital Environments: A creative SME Perspective CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Mike Molesworth Industrial Partner: Wonky Industrial Supervisor: Anna Adi Creative SMEs face a number of limitations that can hamper their ability to develop and establish original content in digital environments. These limitations include a lack of resources, struggles for visibility, limits of engagement, audience pressures and free culture. The constant pressures from growing competition and fragmented audiences across digital environments amplify these limitations, which means SMEs can struggle in these highly competitive, information-rich platforms. The research sought to explore how creative SMEs may circumvent these limitations to strengthen their positioning in digital environments. Two areas of focus are proposed to address these issues: firstly, a study and development of audience engagement; and secondly, an analysis of the monetisation options available for digital content and their links to engagement. With a focus on audience engagement, the theoretical grounding of this work is based within the engagement literature. Through this work a new Dynamic Shaping of Engagement framework is developed and used as a foundation of analysis, which informs the development of practical work in this study. Findings present insight into the methods and practices that can help creative SMEs circumvent their limitations and strengthen their positioning within digital environments. However, the findings continue to emphasise the difficulties faced by creative SMEs. These companies are hampered by paradoxes arising from resource constraints that limit their ability to secure finance, develop audiences and produce content. It is shown that those with the ‘key’ to audience attention are the ones best positioned to succeed in these environments, often at the expense of the original content creators themselves. Therefore, visions of a democratic environment which levels the playing field for SMEs to compete are diminished, and it is argued digital environments may act to amplify the positioning of established media. Therefore, greater support is required to aid these companies, which must look beyond short-term solutions that focus on one-off projects, towards broader, more long-term support. This support can then enhance creative SMEs’ ability to not only deliver but also establish and potentially monetise content in digital environments, which in turn can make continued production more sustainable. Thesis: Audience engagement and monetisation of creative content in digital environments: a creative SME perspective View Jake’s Research Outputs https://www.bathspa.ac.uk/our-people/jake-hobbs/ |
2021Jiajun Huang |
Editing and Animating Implicit Neural Representations Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Professor Hongchuan Yu Recently, implicit neural representation methods have gained significant traction. By using neural networks to represent objects, they can photo-realistically represent and reconstruct 3D objects or scenes without expensive capture equipment or tedious human labour. This makes them an immensely useful tool for the next generation of virtual reality / augmented reality applications. However, unlike their traditional counterparts, these representations cannot be easily edited, which reduces their usability in industry, as artists cannot easily modify the represented object to their liking. They can also only represent static scenes, and animating them remains an open challenge. The goal of my research is to address these problems by devising intuitive yet efficient methods to edit or animate implicit neural scene representations without losing their representation or reconstruction capabilities. |
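As a minimal picture of what ‘using a neural network to represent an object’ means (a toy occupancy field fitted to a sphere, not the project’s architecture), a small coordinate MLP can be trained so that querying any 3D point returns whether it lies inside the object. The editing difficulty the project addresses follows directly: the object exists only as network weights.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Tiny coordinate MLP: 3D point -> occupancy in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, xyz):
        return self.net(xyz)

field = ImplicitField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(200):
    pts = torch.rand(512, 3) * 2 - 1                        # sample the unit cube
    target = (pts.norm(dim=1, keepdim=True) < 0.5).float()  # inside the sphere?
    loss = nn.functional.binary_cross_entropy(field(pts), target)
    opt.zero_grad(); loss.backward(); opt.step()

# The 'scene' is now stored entirely in the network weights: there is no mesh
# to select and move, which is why direct artist edits are hard.
print("final loss:", float(loss))
```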
2017Sameh Hussain |
Stylisation through Strokes CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Peter Hall Industrial Partner: Ninja Theory Industrial Supervisor: Andrew Vidler Investigations into the development of high-fidelity style transfer from artist-drawn examples. Style transfer techniques have provided the means of re-envisioning images in the style of various works of art. However, these techniques can only produce credible results for a limited range of images. We are improving on existing style transfer techniques by observing and understanding how artists place their brush strokes on a canvas. So far we have been able to build models that learn styles pertaining to line drawings from a few example strokes. We have then been able to apply the model to a variety of inputs to create stylised drawings. Over the upcoming year, we will be working on extending this model so that we can do more than just line drawings. We will also be working with our industrial partner to develop interactive tools so that their artists can leverage the research we have produced.
|
2021Kavisha Jayathunge |
Emotionally Expressive Speech Synthesis Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Professor Xiaosong Yang Emotionally expressive speech synthesis for a multimodal virtual avatar. Virtual conversation partners powered by Artificial Intelligence are ubiquitous in today’s world, from personal assistants in smartphones to customer-facing chatbots in retail and utility service helplines. Currently, these are typically limited to conversing through text, and where there is speech output, this tends to be monotone and (funnily enough) robotic. The long-term aim of this research project is to design a virtual avatar that picks up information about a human speaker from multiple different sources (i.e. audio, video and text) and uses this information to simulate a realistic conversation partner. For example, it could determine the emotional state of the person speaking to it by examining their face and vocal cadence. We expect that taking such information into account when generating a response would make for a more pleasant conversation experience, particularly when a human needs to speak to a robot about a sensitive matter. The virtual avatar will also be able to speak out loud and project an image of itself onto a screen. Using context cues from the human speaker, the avatar will modulate its voice and facial expressions in ways that are appropriate to the conversation at hand. The project is a group effort and I’m working with several other CDE researchers to realise this goal. I’m specifically interested in the speech synthesis aspect of the project, and how existing methods could be improved to generate speech that is more emotionally textured. |
2019Richard Jones |
Droplets, splashes and sprays: highly detailed liquids in visual effects production CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Richard Southern Industrial Partner: DNeg Industrial Supervisor: James Bird An often misunderstood or under-appreciated feature of the visual effects pipeline is the sheer quantity of components and layers that go into a single shot, or even a single effect. Liquids, often combining waves, splashes, droplets and sprays, are a particular example of this. Whilst there has been a huge amount of research on liquid simulation in the last decade or so, little has been successful in reducing the number of layers or elements required to create a plausible final liquid effect. Furthermore, the finer-scale phenomena of droplets and sprays, often introduced in this layered approach and crucial for plausibility, are some of the least well-catered-for in the existing toolkit. In lieu of adequate tooling, creation of these elements relies heavily on non-physical methods, bespoke setups and artistic ingenuity. This project explores physically-based methods for creating these phenomena, demonstrating improved levels of detail and plausibility over existing non-physical approaches. These provide an alternative to existing workflows that are heavily reliant on artistic input, allowing artists to focus efforts on creative direction rather than trying to recreate physical plausibility. Richard worked alongside VFX studio DNeg to develop improvements to the liquid simulation toolset for creating turbulent liquid and whitewater effects for feature film visual effects. The current toolset for liquid simulation is built around the creation of simple single-phase liquid motion, such as ocean waves and simple splashes, but struggles to capture the often more exciting mixed air-liquid phenomena of very turbulent fluid splashes and sprays. Therefore the creation of turbulent effects relies very heavily on artistic input and having the experience and intuition to use existing tools in unorthodox ways. By incorporating more physical models for turbulent fluid phenomena into the existing liquid simulation toolset, his project aims to develop techniques to better capture realistic turbulent fluid effects and allow faster turnover of the highly detailed liquid effects required for feature film. Thesis: Droplets, splashes and sprays: highly detailed liquids in visual effects production View Richard’s Research Outputs |
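A common physically motivated heuristic in whitewater tooling (sketched generically below; this is not DNeg’s pipeline) is to emit diffuse spray and foam particles from liquid particles that are simultaneously fast and strongly curved at the surface, i.e. where air entrainment is likely. All thresholds and rates here are invented.

```python
import numpy as np

def seed_whitewater(pos, vel, curvature, v_min=2.0, c_min=5.0, rate=10):
    """Emit diffuse particles from liquid particles exceeding speed and
    surface-curvature thresholds (all threshold values illustrative).

    pos, vel  : (N, 3) positions and velocities of the source liquid particles
    curvature : (N,) surface-curvature estimates
    Returns positions and velocities of the emitted diffuse particles.
    """
    speed = np.linalg.norm(vel, axis=1)
    emitters = (speed > v_min) & (curvature > c_min)
    src_p, src_v = pos[emitters], vel[emitters]
    n = len(src_p) * rate
    jitter = np.random.normal(scale=0.02, size=(n, 3))   # break up clumping
    new_p = np.repeat(src_p, rate, axis=0) + jitter
    new_v = np.repeat(src_v, rate, axis=0) * 1.1         # fling slightly faster
    return new_p, new_v

pos = np.random.rand(1000, 3)
vel = np.random.normal(scale=2.0, size=(1000, 3))
curv = np.abs(np.random.normal(scale=4.0, size=1000))
p, v = seed_whitewater(pos, vel, curv)
print(f"emitted {len(p)} diffuse particles from {len(pos)} liquid particles")
```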
2020Will Kerr |
Autonomous Filming Systems: Towards Empathetic Imitation CDE Ph.D. (University of Bath): Academic Supervisors: Dr Tom Fincham Haines, Dr Wenbin Li Film making is an artistic but resource-intensive process. The visual appearance of a finished film is the product of many departments, but Directors and Cinematographers play a significant role by applying professional expertise and style to the planning (pre-production) and production stages. Once each shot of the film is planned, a trade-off is made between the cost of multiple or highly experienced camera operators and the improved quantity and quality of footage captured. There is therefore scope to autonomise some aspects of film (pre-)production, such that increased coverage or professionalism can be achieved by film makers limited by finance or expertise. Existing work in autonomous virtual film-making has focussed on actor and camera positioning, but there remains a gap in how the composition of the frame is designed, particularly how the background elements (shape, colour, focus etc.) play a part in the aesthetics of the footage, in a style which is empathetic to the story. This project takes the above scope forward by asking two principal questions: 1) How can the intent of a professional cinematographer be learnt from finished film content? 2) How can these learnings be applied back to new filming tasks in virtual or real environments? Early work has focussed on 1), with a suite of visual analysis tools and film datasets providing some evidence of cinematographic styles that were applied in particular films. The second step will develop a virtual filming environment, apply style to virtual shot composition, and offer comparisons to existing film footage (imitation). |
2016Azeem Khan |
Procedural gameplay flow using constraints CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Tom Fincham Haines Industrial Partner: Ubisoft Reflections Industrial Supervisor: Michele Condò This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level. This will be used to procedurally generate an unlimited number of levels, tailored to a user’s playing style. Tom Clancy’s The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC within a very tight development window. The key to this success was the creation of a procedural level design tool, which took a high-level script that outlined key aspects of a mission template, and generated multiple different underground dungeons that satisfied this gameplay template. The key difference from typical procedural environment generation technologies is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world. The system used for TCTD had many constraints, and our goal is to develop technology that will build on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner, to be used for any number of games. We would like to investigate using Markov constraints, inspired by the ‘flow machines’ research currently being undertaken by Sony to generate music, text and more automatically in a style dictated by the training material: http://www.flow-machines.com/ (other techniques may be considered). |
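A toy version of constraint-guided Markov generation (purely illustrative; the studio’s actual tool, templates and constraints are not public) samples room types from a transition table while rejecting choices that would violate a mission constraint, retrying until a valid level is produced:

```python
import random

# Transition probabilities between room types (numbers invented).
TRANSITIONS = {
    "corridor": {"corridor": 0.3, "combat": 0.4, "loot": 0.3},
    "combat":   {"corridor": 0.6, "combat": 0.2, "boss": 0.2},
    "loot":     {"corridor": 0.8, "combat": 0.2},
    "boss":     {},                                   # terminal room
}

def generate_level(length=8, max_combat=3, seed=None):
    """Sample a room sequence that ends at a boss room and caps combat rooms."""
    rng = random.Random(seed)
    for _ in range(1000):                             # retry until constraints hold
        rooms, combat = ["corridor"], 0
        while len(rooms) < length and TRANSITIONS[rooms[-1]]:
            options = TRANSITIONS[rooms[-1]]
            nxt = rng.choices(list(options), weights=list(options.values()))[0]
            if nxt == "combat" and combat == max_combat:
                continue                              # reject a constrained choice
            combat += nxt == "combat"
            rooms.append(nxt)
        if rooms[-1] == "boss":
            return rooms
    raise RuntimeError("no level satisfied the constraints")

print(generate_level(seed=1))
```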
2020Ieva Kazlauskaite |
Compositional Uncertainty in Models of Alignment CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Neill Campbell, Professor Darren Cosker Industrial Partner: Electronic Arts Games Industrial Supervisor: Tom Waterson This thesis studies the problem of temporal alignment of sequences and the uncertainty propagation in models of alignment. Temporal alignment of sequences is the task of removing the differences between observed time series arising from differences in their relative timing. It is a common preprocessing step in time series modelling, usually performed in isolation from the data analysis and modelling. The methods proposed in this thesis cast alignment learning in a framework where both the alignment and the data are modelled simultaneously. Specifically, we use tools from Bayesian nonparametrics to model each sequence as a composition of a monotonic warping function, which accounts for the differences in timing, and a latent function, which is an aligned version of the observed sequence. Combined with a probabilistic alignment objective, such an approach allows us to align sequences into multiple, a priori unknown groups in an unsupervised manner. Furthermore, the use of Bayesian nonparametrics offers the benefits of principled modelling of the noisy observed sequences, explicit priors that encode our beliefs about the constituent parts of the model and the generative process of the data, and an ability to adapt to the complexity of the data. Another feature of the probabilistic formulation that is lacking in traditional temporal alignment models is an explicit quantification of the different kinds of uncertainties arising in the alignment problem. These include the uncertainties related to the noisy observations and the fact that the observed data may be explained in multiple different ways, all of which are plausible under our prior assumptions. While formulating various parts of the model, we encounter and discuss some of the challenges of Bayesian modelling, most notably the need for approximate inference. We argue that variational distributions which include correlations between the hierarchical components of the model are necessary to take advantage of the potential of the model to discover the compositional structure in the data and to capture the uncertainty arising from it.
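The central modelling ingredient is the monotonic warping function. One simple way to parameterise such a function (a generic sketch of the idea, not the thesis’s Gaussian-process construction) is as the normalised cumulative sum of strictly positive increments, which is monotone by construction:

```python
import numpy as np

def monotonic_warp(raw, t):
    """Map unconstrained parameters to a monotone warp w: [0, 1] -> [0, 1].

    raw : (K,) unconstrained real parameters
    t   : (N,) query times in [0, 1]
    """
    inc = np.logaddexp(0.0, raw)            # softplus -> strictly positive steps
    knots = np.concatenate([[0.0], np.cumsum(inc)])
    knots /= knots[-1]                      # w(0) = 0, w(1) = 1, non-decreasing
    grid = np.linspace(0.0, 1.0, len(knots))
    return np.interp(t, grid, knots)        # piecewise-linear warp

t = np.linspace(0.0, 1.0, 6)
w = monotonic_warp(np.array([0.5, -1.0, 2.0, 0.0]), t)
print(np.round(w, 3))   # an increasing sequence from 0 to 1
# An observed sequence y(t) is then modelled as f(w(t)): the latent aligned
# function f composed with the per-sequence warp w.
```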
Thesis: Compositional Uncertainty in Models of Alignment
|
2015Charalampos (Babis) Koniaris |
Real-time Rendering of Complex, Heterogeneous Mesostructure on Deformable Surfaces CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Darren Cosker Industrial Partner: Disney Research Industrial Supervisor: Kenny Mitchell In this thesis, we present a new approach to rendering deforming textured surfaces that takes into account variations in elasticity of the materials represented in the texture. Our approach is based on dynamically warping the parameterisation so that parameterisation distortion in a deformed pose is locally similar to the rest pose; this similarity results in apparent rigidity of the mapped texture material. The warps are also weighted, so that users have control over what appears rigid and what does not. Our algorithms achieve real-time generation of warps, including their application in rendering the textured surfaces. A key factor in the achieved performance is the exploitation of the parallel nature of local optimisations by implementing the algorithms on the GPU. We demonstrate our approach with several example applications. We show warps on models using standard texture mapping as well as Ptex. We also show warps using static or dynamic/procedural texture detail, while the surface that it is mapped on deforms. A variety of use-cases is also provided: generating warps for looping animations, generating out-of-core warps of film-quality assets, approximating high-resolution warps with lower-resolution texture-space Linear Blend Skinning and dynamically preserving texture features of a model being interactively edited by an artist. Texture mapping is a standard technique in computer graphics that is commonly used to map complex surface detail, in the form of images, to 3D models so that the latter appear more visually complex than they really are. A drawback is that when models animate, parts of them stretch or compress, and at those parts the mapped surface detail behaves like rubber due to distortion in the mapping introduced by the deformation. In this project, we develop methods to control this behaviour so that we can represent surface detail with heterogeneous elasticity characteristics that properly correspond to the portrayed materials. Thesis: Real-time Rendering of Complex, Heterogeneous Mesostructure on Deformable Surfaces
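The ‘parameterisation distortion’ the warps control can be measured per triangle via the singular values of the 2D deformation Jacobian between rest and deformed poses: singular values of (1, 1) mean the mapped texture stays perfectly rigid. A numpy sketch of that standard measure (the thesis’s GPU formulation differs):

```python
import numpy as np

def triangle_distortion(rest, deformed):
    """Singular values of the Jacobian mapping a rest triangle to its
    deformed pose; (1, 1) means the texture material appears rigid.

    rest, deformed : (3, 2) arrays of 2D triangle vertices
    """
    r = np.stack([rest[1] - rest[0], rest[2] - rest[0]], axis=1)
    d = np.stack([deformed[1] - deformed[0], deformed[2] - deformed[0]], axis=1)
    jac = d @ np.linalg.inv(r)              # 2x2 deformation gradient
    return np.linalg.svd(jac, compute_uv=False)

rest = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
stretched = np.array([[0, 0], [2, 0], [0, 1]], dtype=float)  # 2x stretch in u
print(triangle_distortion(rest, stretched))  # ~[2.0, 1.0]: rubbery, needs warping
```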
|
2018Robert Kosk |
Biomechanical Parametric Faces Modelling and Animation CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Professor Xiaosong Yang Industrial Partner: Humain Industrial Supervisor: Willemn Kokke Modelling and animation of high-quality digital faces remains a tedious and challenging process. Although sophisticated data capture and manual processing allow realistic results in offline production, there is demand in the rapidly developing virtual reality industry for fully automated and flexible methods. My project aims to develop a parametric template for physically-based facial modelling and animation, which will: automatically generate any face, either existing or synthetic; intuitively edit the structure of a face without affecting the quality of animation; reflect the non-linear nature of facial movement; and retarget facial performance, accounting for the anatomy of particular faces. The ability to generate faces with governing, meaningful parameters such as age, gender or ethnicity is a crucial objective for wider adoption of the system among artists. Furthermore, the template can be extended with numerous novel applications, such as animation retargeting driven by muscle activations, fantasy character synthesis or digital forensic reconstruction. Download Robert’s Research Profile
|
2021Andrew Lawrence |
Using Bayesian Non-Parametrics to Learn Multivariate Dependency Structures MSCA Ph.D (University of Bath): Academic Supervisors: Prof Darren Cosker, Prof Neill Campbell Andrew’s research at the University of Bath focused on unsupervised learning with Bayesian non-parametrics, specifically publishing work at NeurIPS and ICML on generative latent variable models capable of learning multivariate dependency structures. Outline of current research: Causal discovery, causal inference, fairness. The majority of my work is product-related research, so the work ends up in our main platform and is not disclosed in papers at conferences or in journals. Potential impact of current research: The most impactful aspect would be with respect to fairness. We are working on methods to assess and correct fairness in algorithms. For example, if a bank has an algorithm that determines whether a person is approved or rejected for a loan, we can check if the algorithm uses protected characteristics, such as gender, race, etc. to make this decision. If it does, that is unfair. However, it is not as simple as checking if it uses those features. There are often proxy variables (such as salary, postcode, etc.) that capture the sensitive information without specifically using gender/race in the decision. Thesis: Using Bayesian Non-Parametrics to Learn Multivariate Dependency Structures
|
2016Chris Lewin |
Constraint Based Simulation of Soft and Rigid Bodies CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Phil Willis, Dr Chris Williams, Dr Tom Waterson Industrial Partner: Electronic Arts Games Industrial Supervisor: Dr Mike Bassett This dissertation presents a number of related works in real-time physical animation, centred on the theme of constraint-based simulation. Methods used for real-time simulation of deformable bodies tend to differ quite substantially from those used in offline simulation; we discuss the reasons for this and propose a new position-based finite element method that attempts to produce more realistic simulations with these methods. We also consider and adapt other methods to make them more suitable for game physics simulation. Finally, we adapt some concepts from deformable body simulation to define a deformable rod constraint between rigid bodies that allows us to represent the kinematics of the human spine using fewer degrees of freedom than required with a strictly joint-based model. Rigid body physics has become a standard feature in modern video games. However, very few things in the real world behave in a rigid way; even metal-framed buildings and vehicles will deform under enough force. My project centred on efficiently simulating soft body behaviour in ways that are appropriate for use in games. The requirements of high performance and robustness tend to expose weaknesses in the standard implicit force-based methods; instead, we have adopted a fast position-based approach to simulating the deformation of finite element meshes. Thesis: Constraint based simulation of soft and rigid bodies View Chris’ Research Outputs https://www.ea.com/en-gb |
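Position-based approaches replace force integration with direct projection of particle positions onto constraints. A minimal sketch of the classic distance-constraint projection (generic position-based dynamics in the style of Müller et al. 2007, not EA’s code):

```python
import numpy as np

def project_distance(p1, p2, w1, w2, rest_length, stiffness=1.0):
    """One position-based projection of a distance constraint.

    p1, p2 : particle positions, shape (3,)
    w1, w2 : inverse masses (0 pins a particle in place)
    Returns corrected positions with |p1 - p2| moved towards rest_length.
    """
    delta = p1 - p2
    dist = np.linalg.norm(delta)
    if dist < 1e-9 or w1 + w2 == 0.0:
        return p1, p2
    # Correction along the constraint direction, split by inverse mass.
    corr = stiffness * (dist - rest_length) / (dist * (w1 + w2)) * delta
    return p1 - w1 * corr, p2 + w2 * corr

a, b = np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])
a, b = project_distance(a, b, 1.0, 1.0, rest_length=2.0)
print(a, b, np.linalg.norm(a - b))   # particles pulled to separation 2.0
```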
2023Jan Malte Lichtenberg |
Bounded Rationality in Reinforcement Learning MSCA Ph.D (University of Bath): Academic Supervisor: Dr Özgür Şimşek The broad problem I address in this dissertation is the design of autonomous agents that can efficiently learn goal-directed behavior in sequential decision-making problems under uncertainty. I investigate how certain models of bounded rationality—simple decision-making models that take into consideration the limited cognitive abilities of biological and artificial minds—can inform reinforcement learning algorithms to produce more resource-efficient agents. In the two main parts of this dissertation, I use different existing models of bounded rationality to address different resource limitations present in sequential decision-making problems. In the first part, I introduce a boundedly rational function approximation architecture for reinforcement learning agents to reduce the amount of training data required to learn a useful behavioral policy. In the second part, I investigate how Herbert A. Simon’s satisficing strategy can be applied in sequential decision-making problems to reduce the computational effort of the action-selection process. Thesis: Bounded Rationality in Reinforcement Learning |
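Simon’s satisficing rule can be stated in a few lines: rather than estimating every action’s value, examine actions one at a time and take the first whose estimate clears an aspiration level, paying evaluation cost only for what was examined. A toy sketch of that decision rule (illustrative only, not the dissertation’s algorithm):

```python
import random

def satisfice(actions, estimate_value, aspiration, rng=random):
    """Return the first action whose estimated value clears the aspiration
    level, together with how many actions had to be evaluated."""
    order = list(actions)
    rng.shuffle(order)
    evaluated = 0
    for a in order:
        evaluated += 1
        if estimate_value(a) >= aspiration:
            return a, evaluated
    # Fall back to the best of what was seen if nothing satisfices.
    return max(order, key=estimate_value), evaluated

values = {"a": 0.2, "b": 0.7, "c": 0.9, "d": 0.4}
action, cost = satisfice(values, values.get, aspiration=0.6)
print(action, "chosen after evaluating", cost, "action(s)")
```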
2019Nick Lindfield |
Deep Neural Networks for Computer-Generated Holography CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Wen Tang, Professor Feng Tian Industrial Partner: Vivid Q Industrial Supervisor: Andrzej Kaczorowski Computer-generated holography is a display technology that uses diffraction and interference of light to reconstruct fully three-dimensional objects. These objects appear three-dimensional because holograms produce depth cues that are processed by our brains in a way that is consistent with our experience of the natural world, unlike stereoscopic displays, which produce conflicting depth cues and often cause nausea. Holographic displays fall within the category of computational displays, for which the software element is the major factor dictating the properties of the final output image (such as image quality and depth perception). Yet the calculation of those holographic patterns is complex in both production and analysis. Neural networks are a potential method to simplify and speed up these processes while retaining a high level of quality. The main goal of this project is to develop an algorithm to determine the visual quality of computer-generated holograms. A secondary research direction is using neural networks to produce a hologram. Determining the quality of a hologram is a difficult task with few publicised solutions. This is because holograms viewed directly are essentially three-dimensional structures, continuous in all three spatial coordinates. Hence, existing quality evaluation methods need to be rethought to incorporate a much wider scope of the problem. Neural networks can be used to analyse highly complex collections of information in a way that is highly generic; instead of focusing on predetermined features, they can learn what to focus on based on context. Neural networks have been demonstrated to replicate more complex operations, producing output of comparable quality to the original in a much shorter time scale. Recently, the combination of holography and neural networks has received significant academic attention, including from MIT and Stanford. Therefore the secondary direction of this project is to explore the use of neural networks to compute holograms and correct for imperfections in realistic holographic projections, in real time. |
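For context on what ‘calculating a holographic pattern’ involves, the classical iterative baseline is the Gerchberg–Saxton algorithm, shown below with a simple single-FFT (far-field) propagation model; this is a generic textbook sketch, not Vivid Q’s method:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Phase-only hologram whose far-field intensity approximates the target."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    for _ in range(iterations):
        img = np.fft.fft2(field)
        # Keep the propagated phase, impose the target amplitude...
        img = target_amplitude * np.exp(1j * np.angle(img))
        field = np.fft.ifft2(img)
        # ...and constrain the hologram plane to phase-only modulation.
        field = np.exp(1j * np.angle(field))
    return np.angle(field)

target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0   # a bright square
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
recon /= recon.max()
print("mean squared reconstruction error:", float(np.mean((recon - target) ** 2)))
```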
2022Xiaoxiao Liu |
Continuous Learning in Natural Conversation Generation Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Prof Jian Chang As a crucial component of a medical chatbot, the natural language generation (NLG) module converts a dialogue act represented in a semantic form into a response in natural language. Continuous, meaningful conversation in a dialogue system requires not only understanding the dynamic content of the ongoing conversation, but also generating up-to-date responses according to its context. In particular, the conversation generation module should convert responses represented in semantic forms into natural language. Giving appropriate responses helps to increase affective interaction and encourages users to give more detailed information about their symptoms. In doing so, conversation generation better assists the diagnosis system. In this research, I will develop a medical conversation generation system and focus on enhancing the naturalness of the generated responses so that the user experience of the medical chatbot is improved. |
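As a concrete picture of what an NLG module’s input and output look like (a toy template-based realiser; the project itself targets learned, continuously updated generation), a dialogue act in semantic form can be converted to a natural-language response like so. The act names and slots below are hypothetical examples:

```python
# Toy template-based realiser: dialogue act (semantic form) -> response text.
TEMPLATES = {
    "request_symptom_duration": "How long have you been experiencing {symptom}?",
    "confirm_symptom":          "I understand you have {symptom}. Is that right?",
    "advise":                   "Given your {symptom}, I recommend {advice}.",
}

def realise(act):
    """Convert a dialogue act {'type': ..., 'slots': {...}} into text."""
    template = TEMPLATES[act["type"]]
    return template.format(**act["slots"])

act = {"type": "request_symptom_duration", "slots": {"symptom": "a headache"}}
print(realise(act))   # -> "How long have you been experiencing a headache?"
```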
2019Philip Lorimer |
Autonomous Robots for Professional Filming CDE EngD in Digital Entertainment (University of Bath): The typical production pipeline involves a considerable effort by industry professionals to plan, capture and post-produce an outstanding commercial film. Workflows are often heavily reliant on human input along with a finely tuned robotics platform. The research project explores the use of autonomous robots for professional filming, particularly investigating the use of reinforcement learning for learning and executing typical filming techniques. The primary aim is to design a fully autonomous pipeline for a robot to plan moving trajectories and perform the capture. |
2022Zack Lyons |
Virtual Therapy for Acquired Brain Injury Rehabilitation CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Leon Watts Industrial Partner: Designability Industrial Supervisor: Nigel Harris Virtual Therapy – A Story-Driven and Interactive Virtual Environment for Acquired Brain Injury Rehabilitation Human-computer interaction is concerned with understanding the goals of people in a target domain, documenting their motivations and challenges to ground investigations into how technology can be used to support their interactions. In this thesis, the domain of interest is that of neurobehavioural rehabilitation services for people with executive dysfunction arising from acquired brain injuries. For the clinical professionals and users of such services, the predominant goal is the reacquisition of functional and socio-cognitive skills to facilitate successful community reintegration. The gold standard in assessing and training executive skills is to place someone in community settings, to facilitate observation of their behaviours, strategies and emergent deficits. However, this comes with practical difficulties: such activities are irregular, costly and uncontrollable. Virtual reality uses immersive and interactive experiences to psychologically engage users in situations that are impractical to re-create in the real world. It aligns with the goals of neurobehavioural rehabilitation, which seeks to familiarise and observe the behaviours of service users in ecologically valid situations. In this thesis, we report on user-centred design research conducted with the Brain Injury Rehabilitation Trust to ensure our approach is theoretically sound and practicable. Through analysis of the literature and in situ observations we present an understanding of clinical activities framed through human-computer interaction, to establish clinically grounded frameworks to support clinician-service user interactions. These inform the development of an experimental platform, Virtuality Street, to demonstrate how virtual environments can expose key behavioural correlates of executive dysfunction and facilitate clinical observations of service users. Having developed an experimental platform that is grounded in clinical practice, we present a lab-based study with neurotypical participants to demonstrate Virtuality Street’s capacity to deliver challenges that are executive in nature and support the devising of strategies to complete socio-cognitive and functional tasks. We further report on demonstration sessions with clinical professionals involved in acquired brain injury rehabilitation, and three service users approaching the end of their rehabilitative programme. The feedback from these groups supports the overarching goal of this clinically motivated research, which is to build towards clinical validation of Virtuality Street as a therapeutic tool. Thesis: Towards Effective Virtual Reality Environments for the Behavioural Assessment of Executive Dysfunction View Zack’s Research Outputs |
2015Lindsey Macaulay-Lowe |
Investigating how computational tools can improve the production process of stop-motion animation CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Peter Hall Industrial Partner: Aardman Animations Industrial Supervisor: Philip Child Stop-motion animation is a traditional form of animation that has been practised for over 100 years. While the unique look and feel of stop-motion animation has been retained in modern productions, the production process has been modernised to take advantage of technological advancements. Modern stop-frame animation production integrates digital imaging technology and computational methods with traditional hand-crafted skills. This portfolio documents three projects undertaken at Aardman Animations, each investigated with the aim of improving efficiency in the stop-motion production process. Advancing state-of-the-art research is only one of the challenges when working in a production studio such as Aardman Animations. Additionally, findings must be integrated into the production pipeline. This research discusses the challenges and constraints faced when conducting research in this environment. In order for stop-motion animation to remain competitive it is vital that production companies stay up-to-date with technological advancements in research areas that can contribute to their production processes. I conclude by discussing whether technological advancements can help Aardman Animations in improving the efficiency of their stop-motion production pipeline. Why did you choose the EngD / CDE? The experience of working on industrial research and working towards academic goals helped me to develop a number of skills. My research was about the development of technical tools for traditional stop-motion animation production. Technological methods must ensure that the traditional hand-crafted look is retained. My work involved researching and developing computational tools that could be used to make improvements to the production pipeline. Aardman Animations was my host company. They create films, short-form animation, television series and commercials using both computer-generated and stop-motion animation. Their most famous characters are Wallace and Gromit and Morph. I was based in the commercials computer graphics department and my supervisor Philip Child is a Senior Technical Developer within that team. The main areas that I researched during my EngD were:
Solutions for simulating plasticine materials computationally. Thesis: Investigating how computational tools can improve the production process of stop-motion animation |
2021Tom Matko |
Computational Fluid Dynamics Modelling of Dissolved Oxygen in Oxidation Ditches CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Professor Jian Chang, Dr Zhidong Xiao, Jan Hoffman Industry Partner: Wessex Water Industrial Supervisor: John Leonard This research aims to reveal new knowledge about the factors that affect the hydrodynamics, dissolved oxygen (DO) and aeration performance of a wastewater oxidation ditch. The literature is reviewed on the Computational Fluid Dynamics (CFD) modelling of wastewater aeration tanks. This study develops a CFD model of an aerated oxidation ditch, taking into account two-phase gas-liquid flow, inter-phase oxygen mass transfer and dissolved oxygen. The main contributions to knowledge are the effect of bubble size distribution (BSD) and biochemical oxygen demand (BOD) distribution on the DO distribution. Species transport modelling predicts the BOD and DO distribution in the ditch. De-oxygenation of local dissolved oxygen by BOD is modelled by an oxygen sink that depends on the local BOD concentration. This is a novel approach to flow modelling for the prediction of the DO distribution. The local BOD concentration in the ditch may depend on either the local DO concentration or the local residence time. The numerical residence time distribution (RTD), heterogeneous flow pattern and DO distribution indicate that the flow behaviour in the ditch is non-ideal. Dissolved oxygen is affected by BOD distribution, bubble size, BSD, mechanical surface aeration and temperature. There is good agreement between the numerical simulation and both the observation of flow pattern and the measurement of mean DO. The BSD predicts a mean bubble size of around 2 mm, which is also the bubble size that best agrees with the measurements of DO. This study identifies that the BOD distribution and the BSD are key parameters that affect the DO distribution and improve the accuracy of the agreement with experimental data. In decreasing order of aeration performance are the air membrane diffuser, Fuch air jet aerator, Kessener brush surface aerator and Maguire hydro-jet aerator. Thesis: Computational Fluid Dynamics (CFD) Modelling of Dissolved Oxygen in Oxidation Ditches |
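The oxygen-sink coupling described above can be written, per well-mixed volume, as dDO/dt = kLa·(DOsat − DO) − k1·BOD·DO: aeration drives DO towards saturation while the local BOD consumes it. A zero-dimensional sketch with placeholder coefficients (the thesis resolves this spatially within a CFD model):

```python
import numpy as np

def simulate_do(do0=2.0, bod0=150.0, kla=0.15, do_sat=9.1, k1=0.001,
                dt=0.1, hours=48.0):
    """Explicit Euler integration of a well-mixed DO/BOD balance.
    Units: mg/L and hours; all coefficient values are placeholders."""
    do, bod = do0, bod0
    history = []
    for _ in range(int(hours / dt)):
        transfer = kla * (do_sat - do)   # aeration: inter-phase oxygen mass transfer
        demand = k1 * bod * do           # de-oxygenation by the local BOD
        do += dt * (transfer - demand)
        bod -= dt * demand               # assumes 1:1 BOD oxidation (placeholder)
        history.append(do)
    return np.array(history)

do = simulate_do()
print(f"DO after 48 h: {do[-1]:.2f} mg/L (minimum {do.min():.2f} mg/L)")
```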
2015Thomas Joseph Matthews |
Automated Proficiency Analysis and Feedback for VR Training MRes (Bournemouth University): Academic Supervisors: Professor Feng Tian, Professor Wen Tang Industrial Partner: AI Solve Industrial Supervisor: Tom Dolby With the advent of modern VR technology in 2016, its potential for medical simulation and training has been recognized. However, challenges like low user acceptance due to poor usability are frequently found, hampering widespread adoption. This research aims to address the usability of VR clinical skills simulations, particularly focusing on interaction design, and proposes improvements for higher learning outcomes and user retention. A literature review and a usability case study of an off-the-shelf clinical VR training application were conducted, revealing usability concerns and areas requiring improvement. The prevalent issues include difficulties with controls and hardware, and the ‘gulf of execution’ in a broader ‘possibility space’ – issues that extend beyond direct interaction designs. A market analysis further reinforced these findings, showing gaps in interaction affordances and pointing to design patterns and trends that could be improved for better usability and interaction. The synthesis of these findings indicates that the limitations of novel interaction schemes and understanding of the VR simulation’s ‘possibility space’ affect knowledge transferability. Given these issues and limitations in current VR clinical training simulations, this study outlines several Human-Centred Design recommendations for improvement, incorporating findings from wider VR design research. The research’s findings seek to facilitate the development of more user-centric VR training applications, ultimately leading to enhanced training of healthcare professionals and improved patient outcomes. The study sets a foundation for future interaction design work, addressing the primary usability issues and limitations in current VR clinical simulations. Thesis: Human-Centred Design for Improving VR Training of Clinical Skills View Thomas’ Research Outputs |
2021Ifigeneia Mavridou |
Emotion in Virtual Reality (VR) – Thesis title: Affective State Recognition in Virtual Reality CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Emili Balaguer-Ballester, Dr Ellen Siess Industrial Partner: emteq Industrial Supervisor: Dr. Charles Nduka The three core components of Affective Computing (AC) are emotion expression recognition, emotion processing, and emotional feedback. Affective states are typically characterized in a two-dimensional space consisting of arousal, i.e., the intensity of the emotion felt, and valence, i.e., the degree to which the current emotion is pleasant or unpleasant. These fundamental properties of emotion can not only be measured using subjective ratings from users, but also with the help of physiological and behavioural measures, which potentially provide an objective evaluation across users. Multiple combinations of measures are utilised in AC for a range of applications, including education, healthcare, marketing, and entertainment. As the uses of immersive Virtual Reality (VR) technologies are growing, there is a rapidly increasing need for robust affect recognition in VR settings. However, the integration of affect detection methodologies with VR remains an unmet challenge due to constraints posed by current VR technologies, such as Head Mounted Displays. This EngD project is designed to overcome some of these challenges by effectively integrating valence and arousal recognition methods in VR technologies and by testing their reliability in seated and room-scale fully immersive VR conditions. The aim of this EngD research project is to identify how affective states are elicited in VR and how they can be efficiently measured, without constraining movement or decreasing the sense of presence in the virtual world. Through a three-year-long collaboration with Emteq Labs Ltd, a wearable technology company, we assisted in the development of a novel multimodal affect detection system, specifically tailored towards the requirements of VR. This thesis describes the architecture of the system, the research studies that enabled this development, and the future challenges. The studies conducted validated the reliability of our proposed system, including the VR stimuli design, data measures and processing pipeline. This work could inform future studies in the field of AC in VR and assist in the development of novel applications and healthcare interventions. |
2021Youssef Alami Mejjati |
Creative Editing and Synthesis of Objects in Photographs Using Generative Adversarial Networks MSCA Ph.D (University of Bath): Academic Supervisor: Dr Kwang In Kim Image editing is traditionally a labour-intensive process involving professional software and human expertise. Such a process is expensive and time-consuming. As a result, many individuals are not able to seamlessly express their creativity. Therefore, there is a need for new image editing tools allowing for intuitive and advanced image edits. In this thesis we propose novel algorithms that simplify the image editing pipeline and reduce the amount of labour involved. We leverage new advances in artificial intelligence to bridge the gap between human-based edits and data-driven image edits. We build upon existing models learned from data and propose four new solutions that allow users to edit images without prior knowledge of image editing. Having completed my Ph.D, I now work in a startup called Synthesia.io. I am a senior research scientist and I lead a team that investigates photoreal digital human synthesis. Our goal is to disrupt the video generation industry: instead of using a camera, going to a studio and hiring actors, we develop software that allows users to skip all of those steps. Users can generate a video on our platform using text only; they just choose a synthetic actor and write the content of the video. Thesis: Creative Editing and Synthesis of Objects in Photographs Using Generative Adversarial Networks
|
2021Lazaros Michailidis |
Exploiting physiological changes during the flow experience for assessing virtual-reality game design CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Dr Emili Balaguer-Ballester, Dr Xun He, Dr Christos Gatzidis Industrial Partner: Sony Interactive Entertainment Industrial Supervisor: Tony Godar Immersive experiences are considered the principal attraction of video games. Achieving a healthy balance between the game’s demands and the user’s skills is a particularly challenging goal. However, it is a coveted outcome, as it gives rise to the flow experience – a mental state of deep concentration and game engagement. When this balance fractures, the player may experience considerable disinclination to continue playing, which may be a product of anxiety or boredom. Thus, being able to predict manifestations of these psychological states in video game players is essential for understanding player motivation and designing better games. To this end, we build on earlier work to evaluate flow dynamics from a physiological perspective using a custom video game. Although advancements in this area are growing, little consideration has been given to the interpersonal characteristics that may influence the expression of the flow experience. In this thesis, two angles are introduced that remain poorly understood. First, the investigation is contextualized in the virtual reality domain, a technology that putatively amplifies affective experiences, yet is still insufficiently addressed in the flow literature. Second, a novel analysis setup is proposed, whereby the recorded physiological responses and psychometric self-ratings are combined to assess the effectiveness of our game’s design in a series of experiments. The analysis workflow employed heart rate and eye blink variability, and electroencephalography (EEG), as objective assessment measures of the game’s impact, and self-reports as subjective assessment measures. These inputs were submitted to a clustering method, cross-referencing the membership of the observations with the self-report ratings of the players they originated from. Next, this information was used to effectively inform specialized decoders of the flow state from the physiological responses. This approach enabled classifiers to operate at high accuracy rates in all our studies. Furthermore, we addressed the reduction of a medium-resolution EEG sensor array to the minimal set required to decode flow. Overall, our findings suggest that the approaches employed in this thesis have wide applicability and potential for improving game design practices. |
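The decoding workflow described above (cluster the physiological responses, cross-reference cluster membership with self-reports, then train a specialised decoder) can be illustrated with a minimal scikit-learn sketch; the synthetic features and two-state labels are assumptions for illustration only, not the thesis’ data or model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic per-window physiological features (heart rate, blink rate,
# EEG band power) for two latent states; purely illustrative.
flow    = rng.normal([70, 10, 1.2], 0.4, size=(200, 3))
boredom = rng.normal([64, 18, 0.7], 0.4, size=(200, 3))
X = np.vstack([flow, boredom])
self_report = np.array([1] * 200 + [0] * 200)   # questionnaire stand-in

# Step 1: cluster the physiological responses without labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: cross-reference cluster membership with the self-reports,
# naming each cluster after its majority rating.
cluster_label = {c: int(self_report[clusters == c].mean() > 0.5) for c in (0, 1)}
y = np.array([cluster_label[c] for c in clusters])

# Step 3: train a specialised decoder of flow vs. non-flow.
print("decoder accuracy:",
      cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```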
2017Milto Miltiadou |
Efficient Accumulation Analysis and Visualisation of Full-Waveform LiDAR data in a Volumetric Representation with Applications to Forestry CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Neill Campbell Industrial Partner: Plymouth Marine Laboratory Industrial Supervisor: Dr Michael Grant Full-waveform (FW) LiDAR is a particularly useful source of information in forestry since it samples data between tree branches, but compared to discrete LiDAR, very few researchers exploit it due to the increased complexity. DASOS, an open-source program, was developed alongside this thesis to improve the adoption of FW LiDAR. DASOS uses voxelisation for interpreting the data, an approach fundamentally different from state-of-the-art tools. There are three key features of DASOS, reflecting the key contributions of this thesis. Firstly, visualisation of a forest to improve fieldwork planning. Polygonal meshes are generated using DASOS by extracting an iso-surface from the voxelised data. Additionally, six data structures are tested for optimising iso-surface extraction. The new structure, ‘Integral Volumes’, is the fastest, but the best choice depends on the size of the data. Secondly, the FW LiDAR data are efficiently aligned with hyperspectral imagery using a geospatial representation stored within a hashed table with buckets of points. The outputs of DASOS are coloured polygonal meshes, which improve the visual output, and aligned metrics from the FW LiDAR and hyperspectral imagery. The metrics are used for generating tree coverage maps, and it is demonstrated that the increased amount of information improves classification. The last feature is the extraction of feature vectors that characterise objects, such as trees, in 3D. This is used for detecting dead-standing Eucalypt trees in a native Australian forest for managing biodiversity without tree delineation. A random forest classifier, a weighted-distance KNN algorithm and a seed growth algorithm are used to predict the positions of dead trees. Noise in the field data prevented the results from improving as the number of training samples increased. It is nevertheless demonstrated that forest health assessment without tree delineation is possible. Cleaner training samples, adjusted for tree heights, would have improved prediction. Thesis: Efficient Accumulation, Analysis and Visualisation of Full-Waveform LiDAR in a Volumetric Representation with Applications to Forestry. View Milto’s Research Outputs |
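The name ‘Integral Volumes’ suggests the 3D analogue of summed-area tables; a minimal numpy sketch of that standard construction follows, in which one cumulative pass lets the sum over any axis-aligned box of voxels be read off in constant time. This is an assumed reading of the data structure, not code from DASOS.

```python
import numpy as np

def integral_volume(vox):
    """Cumulative sums along each axis give a 3D summed-volume table."""
    return vox.cumsum(0).cumsum(1).cumsum(2)

def box_sum(iv, lo, hi):
    """Sum of voxels in the half-open box [lo, hi) via inclusion-exclusion."""
    iv = np.pad(iv, ((1, 0), (1, 0), (1, 0)))   # pad so lo = 0 works
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (iv[x1, y1, z1] - iv[x0, y1, z1] - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0] + iv[x1, y0, z0] - iv[x0, y0, z0])

vox = np.random.rand(64, 64, 64)   # stand-in for voxelised waveform samples
iv = integral_volume(vox)
assert np.isclose(box_sum(iv, (8, 8, 8), (24, 24, 24)),
                  vox[8:24, 8:24, 8:24].sum())
```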
2021Valentin Miu |
Computer Vision with Machine Learning on Smartphones for Beauty Applications CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Oleg Fryazinov Industrial Partner: BeautyLabs Industrial Supervisors: Chris Smith & Mark Gerhard Over the past decade, computer vision has shifted strongly towards deep learning techniques with neural networks, given their relative ease of application to custom tasks, as well as their greatly improved results compared to traditional computer vision techniques. Since the execution of deep learning models is often resource-heavy, this leads to issues when using them on smartphones, which are generally constrained by their computing power and battery capacity. While it is sometimes possible to conduct such resource-heavy tasks on a powerful remote server receiving the smartphone user’s input, this is not possible for real-time augmented reality applications, due to latency constraints. Since smartphones are by far the most common consumer-oriented augmented reality platforms, this makes on-device neural network execution a highly active area of research, as evidenced by Google’s TensorFlow Lite platform and Apple’s CoreML API and Neural Engine-accelerated iOS devices. The overarching goal of the projects carried out in this thesis is to adapt existing desktop-oriented computer vision techniques to smartphones, by lowering the computational requirements or by developing alternative methods. In accordance with the requirements of the placement company, this research contributed to the creation of various beauty-related smartphone and web apps using Unity, as well as TensorFlow Lite and TensorFlow.js for the machine learning components. Beauty is a highly valued market, which has seen increasing adoption of augmented reality technologies to drive user-customized product sales. The projects presented include a novel 6DoF machine learning system for smartphone object tracking, used in a hair care app; an improved wrinkle and facial blemish detection algorithm and its implementation in Unity; as well as research on neural architecture search for facial feature segmentation, and makeup style transfer with generative adversarial networks. Thesis: Computer Vision with Machine Learning on Smartphones for Beauty Applications. View Valentin’s Research Outputs |
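As a concrete illustration of the on-device workflow mentioned above, a Keras model can be converted to TensorFlow Lite with post-training quantization roughly as follows. This is a generic sketch of the standard TensorFlow Lite conversion path with a stand-in model, not code from the thesis.

```python
import tensorflow as tf

# Small stand-in network; the real apps used custom vision models.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),   # e.g. a tiny pose/regression head
])

# Convert with default post-training quantization to shrink the model
# and speed up on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```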
2021Mark Moseley |
The Development of Assistive Technology to Reveal Knowledge of Physical World Concepts in Young People Who Have Profound Motor Impairments CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Prof Venky Dubey, Leigh McLoughlin Industrial Partner: Victoria Education Centre Industrial Supervisor: Sarah Gilling Cognitively able children and young people who have profound motor impairments and complex communication needs (the target group or TG) face many barriers to learning, communication, personal development, physical interaction and play experiences, compared to their typically developing peers. Physical interaction (and play) are known to be important components of child development, but this group currently has few suitable ways in which to participate in these activities. Furthermore, the TG may have knowledge about real-world physical concepts despite having limited physical interaction experiences, but it can be difficult to reveal this knowledge, and conventional assessment techniques are not suitable for this group, largely due to accessibility issues. This work presents a pilot study involving a robotics-based intervention which enabled members of the TG to experience simulated physical interaction and was designed to identify and develop the knowledge and abilities of the TG relating to physical concepts involving temporal, spatial or movement elements. The intervention involved the participants using an eye gaze-controlled robotic arm with a custom-made haptic feedback device to complete a set of tasks. To address issues with assessing the TG, two new digital Assistive Technology (AT) accessible assessments were created for this research, one using static images, the other video clips. Two participants belonging to the TG took part in the study. The outcomes indicated a high level of capability in performing the tasks, with the participants exhibiting a level of knowledge and ability much higher than anticipated. One explanation for this finding could be that they have acquired this knowledge through past experiences and ‘observational learning’. The custom haptic device was found to be useful for assessing the participants’ sense of ‘touch’ in a way which is less invasive than conventional ‘pin-prick’ techniques. The new digital AT accessible assessments seemed especially suitable for one participant, while results were mixed for the other. This suggests that a combination of ‘traditional’ assessment and a ‘practical’ intervention assessment approach may help to provide a clearer, more rounded understanding of individuals within the TG. The work makes contributions to knowledge in the field of disability and Assistive Technology, specifically regarding: AT accessible assessments; haptic device design for the TG; the combination of robotics, haptics and eye gaze for use by the TG to interact with the physical world; a deeper understanding of the TG in general; and insights into designing for and working with the TG. The work and information gathered can help therapists and education staff to identify strengths and gaps in knowledge and skills, to focus learning and therapy activities appropriately, and to change the perceptions of those who work with this group, encouraging them to broaden their expectations of the TG.
Further information about Livability Victoria Education Centre can be found on their website: https://www.victoria.poole.sch.uk/ Thesis: The development of assistive technology to reveal knowledge of physical world concepts in young people who have profound motor impairments. View Mark’s Research Outputs |
2019Elena Marimon Munoz |
Digital Radiography: Image Acquisition and Scattering Reduction in X-ray Imaging CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Hammadi Nait Charif Industrial Partner: PerkinElmer Industrial Supervisor: Philip A. Marsden Since the discovery of X-rays in 1895, their use in both medical and industrial imaging applications has gained increasing importance. As a consequence, X-ray imaging devices have evolved and adapted to the needs of individual applications, leading to the appearance of digital image capture devices. Digital technologies introduced the possibility of separating the image acquisition and image processing steps, allowing their individual optimization. This thesis explores both areas, by seeking to improve the design of the new family of Varex Imaging CMOS X-ray detectors and by developing a method to reduce the scatter contribution in mammography examinations using image post-processing techniques. During the CMOS X-ray detector product design phase, it is crucial to detect any shortcomings that the detector might present. Image characterization techniques are a very efficient method for finding these possible detector features. The first part of the thesis focused on taking these well-known test methods and adapting and optimizing them, so they could act as a red flag indicating when something needed to be investigated. The methods chosen in this study have proven very effective in finding detector shortcomings, and the designs have been optimised in accordance with the results obtained. With the aid of the developed imaging characterization tests, new sensor designs have been successfully integrated into a detector, resulting in the recent release to market of a new family of Varex Imaging CMOS X-ray detectors. The second part of the thesis focuses on X-ray mammography, the gold standard technique in breast cancer screening programmes. Scattered radiation degrades the quality of the image and complicates the diagnosis process. Anti-scatter grids, the main scattering reduction technique, are not a perfect solution. This study is concerned with the use of image post-processing to reduce the scatter contribution in the image, by convolving the output image with kernels obtained from simplified Monte Carlo simulations. The proposed semi-empirical approach uses three thickness-dependent symmetric kernels to accurately estimate the environment contribution to the breast, which has been found to be of key importance in the correction of the breast-edge area. When using a single breast thickness-dependent kernel to convolve the image, the post-processing technique can overestimate the scattering by up to 60%. The method presented in this study reduces the uncertainty to a 4-10% range for a 35 to 70 mm breast thickness range, making it a very efficient scatter modelling technique. The method has been successfully validated against full Monte Carlo simulations and mammography phantoms, where it shows clear improvements in terms of the contrast-to-noise ratio and variance ratio when its performance is compared against images acquired with anti-scatter grids. Thesis: Digital radiography: image acquisition and scattering reduction in x-ray imaging |
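The convolution-based correction described above can be sketched in a few lines. Here Gaussian kernels and scatter-fraction weights stand in for the Monte Carlo-derived, thickness-dependent kernels of the thesis; all values are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(sigma, size=101):
    """Normalised 2D Gaussian, a stand-in for a Monte Carlo scatter kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def scatter_correct(image, kernels, weights):
    """Estimate scatter as a weighted sum of convolutions and subtract it."""
    scatter = sum(w * fftconvolve(image, k, mode="same")
                  for k, w in zip(kernels, weights))
    return np.clip(image - scatter, 0, None)

image = np.random.rand(256, 256)                      # stand-in mammogram
kernels = [gaussian_kernel(s) for s in (5, 15, 40)]   # thickness-dependent
weights = (0.10, 0.08, 0.05)                          # scatter fractions (assumed)
primary = scatter_correct(image, kernels, weights)
```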
2018Neerav Nagda |
Asset Retrieval Using Knowledge Graphs and Semantic Tags
CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Prof Xiaosong Yang, Prof Jian Chang Industrial Partner: Absolute Post Industrial Supervisor: James Coore This project aims to let users search, view and retrieve digital assets within a database of the entire company’s works from a single application. There are three major challenges that this project aims to solve:
The current method is not specific: data can be found, but the result usually contains both the required data and a larger set of irrelevant data. The goal is to avoid the retrieval of irrelevant data, which will significantly reduce data transfer times.
This can be achieved by generating visual previews to see the contents of a file. The generation of semantic tags will allow for quicker and more efficient searching.
Some files may import or reference data from other files. This linked data can be addressed by creating a Semantic Web or Knowledge Graph (see the sketch after this list). Often there are entities that are not necessarily represented by a file, such as a project, but that have many connections to other entities. Such entities become handles in the Semantic Web which can be used to locate a collection of connected entities. The project spans several disciplines.
The integration of such a system in industry would significantly reduce searching and retrieval times for data. It could be used in many scenarios, for example:
A common task is to retrieve a project from archives. Most of the time the entire project does not need to be unarchived, so finding the specific data significantly reduces unarchiving times.
If a digital asset can be reused, it can be found through this system and imported into another project, saving the time of remaking previous work.
Searches can be filtered, for example finding all work produced in the previous day, or sorting works by date and time. Creating a live feed would allow quicker access to data when reviewing works.
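A minimal sketch of the knowledge-graph idea using networkx; the asset, project and tag names are invented for illustration and are not from the project.

```python
import networkx as nx

# Entities (projects, shots, files) and semantic tags become nodes;
# edges record containment, references and tagging. Names are invented.
g = nx.DiGraph()
g.add_edge("ProjectX", "shot_010", rel="contains")
g.add_edge("shot_010", "car_model.abc", rel="contains")
g.add_edge("shot_010", "city_env.abc", rel="references")
g.add_edge("car_model.abc", "tag:vehicle", rel="tagged")
g.add_edge("city_env.abc", "tag:environment", rel="tagged")

def assets_with_tag(graph, tag):
    """Find every asset connected to a semantic tag."""
    return [u for u, v, d in graph.edges(data=True)
            if v == tag and d["rel"] == "tagged"]

def project_assets(graph, project):
    """The project node acts as a handle: collect everything reachable."""
    return sorted(nx.descendants(graph, project))

print(assets_with_tag(g, "tag:vehicle"))   # -> ['car_model.abc']
print(project_assets(g, "ProjectX"))
```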
|
2015Maryam Naghizadeh |
Multi-Character Motion Retargeting for Large Scale Changes MSCA Ph.D (University of Bath): Academic Supervisors: Prof Darren Cosker, Prof Neill Campbell Multi-character motion retargeting (MCMR) aims at generating motion for multiple target characters given the motion data for their corresponding source subjects. Unlike single-character motion retargeting, MCMR algorithms should be able to retarget each character’s motion correctly while maintaining the interaction between them. Existing solutions focus on small-scale changes between interacting characters. However, many retargeting applications require large-scale transformations. For example, movies like Avatar (2009) use motion retargeting to drive characters that are much taller or shorter than the human actors controlling them. Current solutions in the industry require a significant amount of clean-up, increasing costs and post-processing time considerably. In this research, we propose a new algorithm for large-scale MCMR using space-time constraint-based optimisation. We build on the idea of interaction meshes, which are structures representing the spatial relationship among characters. We introduce a new distance-based interaction mesh that embodies the relationship between characters more accurately by prioritizing local connections over global ones. We introduce a stiffness weight for each skeletal joint in our optimisation function, which defines how undesirable it is for the interaction mesh to deform around that joint. This parameter increases the adaptability of our algorithm for large-scale transformations and reduces optimisation time considerably. Our optimisation function also incorporates a) a pose prior model, which ensures that the output poses are valid; b) a balance term, which aims at preserving balance in the output motion; and c) a distance adjustment element, which adapts the distance between characters according to their scale change. We compare the performance of our algorithm with the current state-of-the-art MCMR solution (baseline) for several motion sequences based on runtime, bone-length error, balance and pose validity metrics. Furthermore, we complete two more experiments to evaluate our method’s competency against the baseline. The first experiment involves converting retargeting results to an angular representation and measuring inverse kinematics (IK) error. For the second experiment, we conduct a user study and ask participants to rank the output of our method and the baseline according to their retargeting quality for various test sequences. Our results show that our method outperforms the baseline on runtime, balance, pose validity, IK error and retargeting quality score measures. The two methods display similar performance regarding bone-length error. Thesis: Multi-Character Motion Retargeting for Large Scale Changes |
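The entry lists the ingredients of the optimisation without giving its form; schematically, and with the weights and notation as assumptions, the objective could be written as:

```latex
\min_{\mathbf{p}} \; E_{\text{mesh}}(\mathbf{p})
  + w_{\text{prior}}\, E_{\text{prior}}(\mathbf{p})
  + w_{\text{bal}}\, E_{\text{balance}}(\mathbf{p})
  + w_{\text{dist}}\, E_{\text{distance}}(\mathbf{p}),
\qquad
E_{\text{mesh}}(\mathbf{p}) = \sum_{j} s_j
  \bigl\| L(\mathbf{p})_j - L(\mathbf{p}^{\text{src}})_j \bigr\|^2
```

where p denotes the target joint positions, L the Laplacian coordinates of the distance-based interaction mesh, and s_j the per-joint stiffness weight described above.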
2021Keji Neri |
Ph.D (University of Bath): I am part of the Mathematical Foundations of Computation research group. I shall be working on applying techniques from proof theory to existing proofs in order to extract more information from them. My supervisor is Thomas Powell, and I was attracted to Bath particularly by the project that was on offer! |
2019Kari Noriy |
Incremental Machine Speech Chain for Realtime Narrative Stories CDE Ph.D. (Bournemouth University): Academic Supervisor: Professor Xiaosong Yang Speech and text remain the main forms of communication for human-to-human interactions; they have allowed us to communicate and coordinate ideas. My research focuses on Human-Computer Interaction (HCI), namely the synthesis of natural-sounding speech for use in interactive story-driven experiences, allowing for naturally flowing conversation between a human and a computer in low-latency environments. Current mechanisms require the entire input sequence, so there is a significant delay between input and output, breaking the immersion. In contrast, humans can listen and speak in real time; with such a delay, they would not be able to converse. Another area of interest is the addition of the imperfections in synthesised speech that drive its believability; these include prosody, suprasegmentals, interjections, discourse markers, intonation, tone, stress, and rhythm. |
2022Thu Nguyen Phuoc |
Neural Rendering and Inverse Rendering using Physical Inductive Biases MSCA Ph.D (University of Bath): Academic Supervisors: Dr Yongliang Yang, Professor Eamonn O’Neill The computer graphics rendering pipeline is designed to generate realistic 2D images from 3D virtual scenes, with most research focusing on simulating elements of the physical world using light transport models or material simulation. This rendering pipeline, however, can be limited and expensive. For example, it still takes highly trained 3D artists and designers months or even years to produce high-quality images, games or movies. Additionally, most renderers are not differentiable, making them hard to apply to inverse rendering tasks. Computer vision investigates the inference of scene properties from 2D images, and has recently achieved great success with the adoption of neural networks and deep learning. It has been shown that representations learned by these computer vision models are also useful for computer graphics tasks. For example, powerful image-generative models are capable of creating images with a quality that can rival those created by traditional computer graphics approaches. However, these models make few explicit assumptions about the physical world or how images are formed from it and therefore still struggle in tasks such as novel-view synthesis, re-texturing or relighting. More importantly, they offer almost no control over the generated images, making it non-trivial to adapt them for computer graphics applications. In this thesis, we propose to combine inductive biases about the physical world with the expressiveness of neural networks for the tasks of neural rendering and inverse rendering. We show that this results in a differentiable neural renderer that can both achieve high image quality and generalisation across different 3D shape categories, as well as recover scene structures from images. We also show that with the added knowledge about the 3D world, unsupervised image generative models can learn representations that allow explicit control over object positions and poses without using pose labels, 3D shapes, or multiple views of the same objects or scenes. This suggests the potential of learning representations specifically for neural rendering tasks, which offer both powerful priors about the world and intuitive control over the generated results. Thesis: Neural Rendering and Inverse Rendering Using Physical Inductive Biases. View Thu’s Research Outputs |
2018Karolina Pakenaite |
An Investigation into Tactile Images for the Visually-Impaired EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Peter Hall My aim is to provide the visually impaired community with access to photographs using sensory substitution. I am investigating the functionality of photographs translated into simple pictures, which are then printed in tactile form. Some potential contributions could be the introduction of denotation and projection with regard to style. Beneficiaries could also extend beyond computing into other academic disciplines like Electronic Engineering and Education. Accessible design is essentially inclusive design for all. Sighted individuals find themselves tempted to touch art pieces in museums or galleries, and while many artworks were originally created to be touched, we often observe a cardinal no-touch rule to preserve them. Accessibility features may be designed for a particular group of the community, but they can and usually do end up being used by a wider range of people. Towards the end of my research, I hope to adapt my work for use by primary blind school children. To get simplified pictures, I recently tried translating photographs into two different styles: ‘Icons Representation’ and ‘Shape Representation’. For the Icons Representation of a photograph, I used a combination of object and salient detection algorithms to identify salient objects only. I used Mask R-CNN object detection and combined its output with a saliency map from the PiCANet detection algorithm, which gave us the probability that a given pixel belongs to a salient object within an image. All detected salient objects are replaced with corresponding simplified icons on a blank canvas of the same size as the input image. Geometric transformations are applied to the icons to avoid any overlaps. A background edge map was added to give further context about the image. For the Shape Representation of an object, I experimented with different image segmentation methods and replaced each segment with the most appropriate canonical shape, using a method introduced by Voss and Suße. That is, segments are normalised into a canonical frame by using a whitening transform to get a normalised shape. We then compared these normalised shapes with the canonical shapes in the library and decided which correlates the most. An inverse transform was then applied to the library shapes. In effect, the library shapes are moulded so that each closely matches its segment. We now have simplified images of objects using a combination of shapes. We plan to have these Shape Representations printed in 3D. Due to Covid-19, we were unable to test these tactile images with participants using touch, but a few obvious limitations were found. We will continue to investigate and improve our simplified images. Computer Vision will allow us to create autonomous functionality to translate photographs into tactile images, and we hope that this will reduce the cost of tactile image production. We will also use knowledge from the Psychology of Object Recognition and experiment with human participants to make our implementation as effective as possible for the real users. A combination of Computer Science and Psychology will prepare us to adapt our work for use in education for primary school children.
This could involve teaching congenitally blind children to understand the different sizes of objects that are rarely touched (e.g. an elephant or a mouse), or teaching them to indicate the distance of an object on paper by drawing objects that are far away smaller. |
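A minimal numpy sketch of the whitening step described above: segment points are normalised to a canonical frame, then correlated against library shapes. The angular-histogram score is a simplified stand-in for the actual correlation measure of the Voss and Suße method.

```python
import numpy as np

def whiten(points):
    """Map 2D segment points to a canonical frame: zero mean, identity covariance."""
    centred = points - points.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centred.T))
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # whitening transform
    return centred @ W.T

def match_score(seg_pts, canon_pts, bins=16):
    """Crude similarity: compare angular histograms of the whitened shapes."""
    def hist(p):
        ang = np.arctan2(p[:, 1], p[:, 0])
        h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), density=True)
        return h
    a, b = hist(whiten(seg_pts)), hist(whiten(canon_pts))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Invented example: an ellipse should match a circle after whitening.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle  = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[3 * np.cos(t), 0.5 * np.sin(t)]
print(match_score(ellipse, circle))   # close to 1.0
```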
2017Ralph Potter |
Programming models for heterogeneous systems with application to computer graphics CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Russell Bradford Industrial Partner: CodePlay Industrial Supervisors: Dr. Alastair Murray and Dr. Paul Keir For over a decade, we have seen a plateauing of CPU clock rates, primarily due to power and thermal constraints. To counter these problems, processor architects have turned to both multi-core and heterogeneous processors. Whilst the use of heterogeneous processors provides a route to reducing energy consumption, this comes at the cost of increased complexity for software developers. In this thesis, we explore the development of C++-based programming models and frameworks which enable the efficient use of these heterogeneous platforms, and the application of these programming models to problems from the field of visual computing. Two recent specifications for heterogeneous computing, SYCL and Heterogeneous System Architecture (HSA), share the common goal of providing a foundation for developing heterogeneous programming models. In this thesis, we provide early evaluations of the suitability of these two new platforms as foundations for building higher-level domain-specific abstractions. We draw upon two use cases from the field of visual computing, image processing and ray tracing, and explore the development and use of domain-specific C++ abstractions layered upon these platforms. We present a domain-specific language, deeply embedded within SYCL, that generates optimized image processing kernels. By combining simple primitives into more complex kernels, we are able to eliminate intermediate memory accesses and improve performance. We also describe Offload for HSA: a single-source C++14 compiler and programming model for Heterogeneous System Architecture. The pervasive shared virtual memory offered by HSA allows us to reduce execution overheads and relax constraints imposed by SYCL’s programming model, leading to significant performance improvements. Performance optimization on heterogeneous systems is a challenging task. We build upon Offload to provide RTKit, a framework for exploring the optimization space of ray tracing algorithms on heterogeneous systems. Finally, we conclude by discussing the challenges raised by our work and the open problems that must be resolved in order to unify C++ and heterogeneous computing. Thesis: Programming Models for Heterogeneous Systems with Application to Computer Graphics |
2022Kyle Reed |
Improving Facial Performance Animation using Non-Linear Motion CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Darren Cosker Industrial Partner: Cubic Motion Industrial Supervisor: Steve Caulkin Cubic Motion is a facial tracking and animation studio, most famous for its real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation. We are investigating the use of non-linear facial motion observed from 4D facial capture to improve the realism and robustness of facial performance capture and animation. As the traditional pipeline relies on linear approximations of facial dynamics, we hypothesise that using observed non-linear dynamics will automatically factor in subtle nuances such as fine wrinkles and micro-expressions, reducing the need for animator handcrafting to refine animations. Starting with a pipeline for 4D capture of a performer’s range of motion (or Dynamic Shape Space), we apply this information to various components of the animation pipeline, including rigging, blend shape solving, performance capture, and keyframe animation. We also investigate how, by acquiring the Dynamic Shape Spaces of multiple individuals, we can develop a motion manifold for the personalisation of individual expression that can be used as a prior for subject-agnostic animation. Finally, we validate the need for non-linear animation through comparison to linear methods and through audience perception studies. Thesis: Improving Facial Animation using Non-Linear Motion. View Kyle’s Research Outputs |
2021Abdul Rehman |
Machine learning for discerning paralingual aspects of speech using prosodic cues Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Professor Xiaosong Yang Computers are currently limited in their ability to understand human speech as humans do, because they lack an understanding of the aspects of speech other than words. The scope of my research is to identify the shortcomings in speech processing systems’ ability to process such speech cues and to look for solutions that enhance computers’ ability to understand not just what’s being said but also how it’s being said. |
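Prosodic cues of the kind described, such as the pitch contour and short-time energy, can be extracted with standard tooling; a generic librosa sketch follows, not code from the project.

```python
import librosa
import numpy as np

# Load any speech clip; librosa ships a short example recording.
y, sr = librosa.load(librosa.ex("libri1"), duration=5.0)

# Fundamental frequency (F0) contour via probabilistic YIN: the main
# acoustic correlate of intonation and tone.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)

# Short-time energy: a correlate of stress and rhythm.
rms = librosa.feature.rms(y=y)[0]

print("median F0 (Hz):", np.nanmedian(f0))
print("energy range:", rms.min(), rms.max())
```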
2017Alexandros Rotsidis |
Creating an intelligent animated avatar system CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Professor Peter Hall Industrial Partner: Design Central (Bath) Ltd t/a DC Activ / LEGO Industrial Supervisor: Mark Dason Creating an intelligent avatar: using augmented reality to bring 3D models to life. The project concerns the creation of a 3D intelligent multi-lingual avatar system that can realistically imitate (and interact with) Shoppers (Adults), Consumers (Children), Staff (Retail) and Customers (Commercial) as users or avatars. The avatars use different dialogue, appearance and actions based on given initial data and feedback on the environment and context in which they are placed, creating ‘live’ interactivity with other avatars and users. While store assistant avatars and virtual assistants are commonplace in present times, they act in an often scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. usually humanoid). This project is an exciting opportunity to apply technology and visual design to many different 3D objects, bringing them to life to guide and help people (both individually and in groups) learn from their mistakes in a safe virtual space and make better quality decisions, increasing commercial impact. View Alexandros’ Research Outputs: http://www.alexandrosrotsidis.com/ |
2018Olivia Ruston |
Designing Interactive Wearable Technology CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Mike Fraser, Professor Jason Alexander This research focuses on wearables and e-textiles, considering fashion design/construction processes and their socio-cultural impact. My most recent work has involved creating and experimenting with bodice garments to understand how information about their motion might help people learn about the way they move, so that they can learn to move better.
|
2023Yassir Saquil |
Machine Learning for Semantic-level Data Generation and Exploration MSCA Ph.D (University of Bath): Academic Supervisors: Dr Yongliang Yang, Dr Wenbin Li In the last decade, the users of many web platforms have provided a massive flow of multimedia content, which opened the possibility of enhancing the user experience on these platforms through data personalization. The personalization of data consists of presenting customized data representations to different users according to their preferences, and with the recent advances of deep learning models in computer vision, it becomes intriguing to explore the impact of these models on personalization tasks. For this purpose, this thesis studies the possibility of building personalized deep-learning models that benefit from users’ preferences to provide customized applications. The main challenges in this thesis reside in defining the context, methods, and applications of the studied data personalization tasks. The representation of users’ preferences should also be considered according to the available benchmarks in the literature. Our work in this thesis focuses on the personalization of generation, exploration, and summarization tasks using three main types of multimedia data: 2D images, 3D shapes, and videos, where we define users’ preferences at two levels: categorical annotated labels, and comparison-based semantic attributes, which are more intuitive given that it is much easier to compare objects than to assign a specific label from the user’s perspective. We begin our studies by investigating the usage of generative adversarial networks in 2D image generation tasks according to semantic attributes. These semantic attributes represent subjective measures, via pairwise comparisons of images by the user, for customized 2D image generation and editing. As an extension to this work, we explore generative adversarial networks for 3D meshes, where the user defines subjective measures to browse and edit 3D shapes. Lastly, we tackle the video summarization task, where we suggest a conditional ranking-based model that generates personalized summaries given a set of categorical annotated labels selected by the user, which enables the possibility of providing flexible and interactive summarization. Thesis: Personalized data generation and summarization using learned ranking models |
2017Marcia Saul |
A Two-Person Neuroscience Approach for Social Anxiety CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisors: Prof Fred Charles, Dr Xun He Industrial Partner: BrainTrainUK Industrial Supervisor: Stuart Black Can we use games technology and EEG to help us understand the role of interbrain synchrony in people experiencing the symptoms of social anxiety? A Two-Person Neuroscience Approach for Social Anxiety: Prospects into Bridging Intra- & Inter-brain Synchrony with Neurofeedback. My main fields of interest are computational neuroscience, brain-computer interfaces and machine learning, with the use of games in applications for rehabilitation and improving the quality of life of patients/persons in care. Social anxiety has become one of the most prominent anxiety disorders, with many of its symptoms overlapping with other mental disorders such as depression, autism spectrum disorder, schizophrenia and ADHD. Neurofeedback (NF) is well known to modulate these symptoms using a metacognitive approach of relaying a participant’s brain activity back to them for self-regulation of the target brainwave patterns. In this project, we explore the integration of intra- and inter-brain synchrony towards a more effective NF procedure. By using realistic multimodal feedback in the delivery of NF, we can amplify the concept of collaboration or co-operation during tasks, utilising the ‘power of two’ in two-person neuroscience to help reach our goal of synchronising brainwaves between two participants, aiming to alleviate the symptoms of social anxiety. |
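Interbrain synchrony between two participants is commonly quantified with the phase-locking value (PLV); a minimal numpy/scipy sketch of that standard measure on synthetic signals follows, and is not the project’s actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length signals (0 = none, 1 = full)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

fs = 250                                  # Hz, a typical EEG sampling rate
t = np.arange(0, 10, 1 / fs)              # 10 s of synthetic "EEG"
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t)        # shared 10 Hz alpha rhythm
brain_a = alpha + 0.5 * rng.standard_normal(t.size)
brain_b = np.roll(alpha, 5) + 0.5 * rng.standard_normal(t.size)  # lagged copy
print("PLV:", plv(brain_a, brain_b))      # high despite the noise
```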
2021Tom Smith |
Model-based hierarchical reinforcement learning for real-world control tasks Ph.D (University of Bath): Academic Supervisor: Dr Özgür Şimşek I’m returning to academia after 7 years in industry (mostly automotive), having completed my MSc in Robotics and Autonomous Systems at Bath last year before starting my PhD journey. I’m excited and optimistic about the impact autonomous systems and AI will have on our lives, and I think that learning systems are fundamental to this. I’m particularly interested in how agents can learn and apply hierarchical models of an environment to improve their performance. |
2013Tristan Smith |
Procedural content generation for computer games CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Julian Padget Industrial Partner: Ninja Theory Industrial Supervisor: Andrew Vidler Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web and investigate the possibility of using them with industry-standard games development tools. Thesis: Procedural Constraint-based Generation for Game Development. View Tristan’s Research Outputs |
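The idea of specifying generator behaviour without altering the code can be illustrated with a minimal data-driven grammar expansion; the spec below is invented for illustration and is not the thesis’ system.

```python
import random

# Generator behaviour lives in data, not code: changing the spec changes
# the output without touching the generator. The spec is invented.
SPEC = {
    "room":     [["entrance"], ["corridor", "room"], ["treasure"]],
    "corridor": [["trap"], ["junction", "corridor"], []],
}

def generate(symbol, spec, depth=0, max_depth=6):
    """Expand a symbol by randomly choosing one of its data-defined rules."""
    if symbol not in spec or depth >= max_depth:
        return [symbol]                     # terminal: emit as content
    out = []
    for child in random.choice(spec[symbol]):
        out.extend(generate(child, spec, depth + 1, max_depth))
    return out

random.seed(3)
print(generate("room", SPEC))
```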
2019Ben Snow |
Griffon Hoverwork Simulator for Pilot Training CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Prof Jian Chang Industrial Partner: Griffon Hoverwork Industrial Supervisor: Greg Dawson Griffon Hoverwork (GHL) are both pioneers and innovators in the hovercraft space. With over 50 years of experience making, driving, and collecting data about hovercraft, GHL has the resources to build a realistic and informative training simulator. We will design a virtual environment in which prospective hovercraft pilots can train, receive feedback, and have fun driving a physically realistic hovercraft. The simulator will incorporate the experience of GHL’s highly trained pilots and a wealth of craft data collected from real vehicles to provide a simulation tailored to the Griffon 2000TD craft. GHL’s training protocols will be used to provide specific learning objectives and give feedback to novice and professional pilots on all aspects of craft operation. Creating a realistic hovercraft model will also allow the simulation environment to be used as a research testbed for future projects.
|
2017Sean Soraghan |
A Perceptually Motivated Approach to Timbre Representation and Visualisation CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Alain Renaud Industrial Partner: ROLI Industrial Supervisor: Ben Supper Musical timbre is a complex phenomenon and is often understood in relation to the separation and comparison of different sound categories. The representation of musical timbre has traditionally consisted of instrumentation category (e.g. violin, piano) and articulation technique (e.g. pizzicato, staccato). Electroacoustic music places more emphasis on timbre variation as musical structure, and has highlighted the need for better, more in-depth forms of representation of musical timbre. Similarly, research from experimental psychology and audio signal analysis has deepened our understanding of the perception, description, and measurement of musical timbre, suggesting the possibility of more exact forms of representation that directly reference low-level descriptors of the audio signal (rather than high-level categories of sound or instrumentation). Research into the perception of timbre has shown that ratings of similarity between sounds can be used to arrange sounds in an N-dimensional perceptual timbre space, where each dimension relates to a particular axis of differentiation between sounds. Similarly, research into the description of timbre has shown that verbal descriptors can often be clustered into a number of categories, resulting in an N-dimensional semantic timbre space. Importantly, these semantic descriptors are often physical, material, and textural in nature. Audio signal processing techniques can be used to extract numeric descriptors of the spectral and dynamic content of an audio signal. Research has suggested correlations between these audio descriptors and different semantic descriptors and perceptual dimensions in perceptual timbre spaces. This thesis aims to develop a perceptually motivated approach to timbre representation by making use of correlations between semantic and acoustic descriptors of timbre. User studies are discussed that explored participant preferences for different visual mappings of acoustic timbre features. The results of these studies, together with results from existing research, have been used in the design and development of novel systems for timbre representation. These systems were developed both in the context of digital interfaces for sound design and music production, and in the context of real-time performance and generative audio-reactive visualisation. A generalised approach to perceptual timbre representation is presented and discussed with reference to the experimentation and resulting systems. The use of semantic visual mappings for low-level audio descriptors in the representation of timbre suggests that timbre would be better defined with reference to individual audio features and their variation over time. The experimental user studies and research-led development have highlighted specific techniques and audio-visual mappings that would be very useful to practitioners and researchers in the area of audio analysis and representation. Thesis: A perceptually motivated approach to timbre representation and visualisation. View Sean’s Research Outputs |
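As one concrete example of the mappings discussed, the spectral centroid, a well-documented correlate of the semantic descriptor ‘brightness’, can drive a visual brightness parameter; a generic librosa sketch follows, not code from the thesis.

```python
import librosa
import numpy as np

y, sr = librosa.load(librosa.ex("trumpet"))

# Spectral centroid per frame: the 'centre of mass' of the spectrum,
# commonly correlated with perceived brightness.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Map to a normalised visual brightness parameter in [0, 1],
# clipping the extremes so outliers do not dominate the mapping.
lo, hi = np.percentile(centroid, [5, 95])
brightness = np.clip((centroid - lo) / (hi - lo), 0.0, 1.0)
print(brightness[:10])
```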
2020Joanna Tarko |
Graphics Insertions into Real Video for Market Research CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Christian Richardt Industrial Partner: DC-Activ and Checkmate VR Industrial Supervisors: Rob Thorpe, Tim Jarvis Combining real videos with computer-generated content, either offline (compositing) or in real time (augmented and mixed reality, AR/MR), is an extensive field of research. It has numerous applications, including entertainment, medical imaging, education, sport, architecture, and marketing (advertising and commerce). However, even though merging real and virtual content is well established in marketing as a part of the retail environment, there seem to be no known applications of it in market research. The aim of market research is to help explain why a customer decides to buy a specific product. In a perfect scenario, study participants are placed in a real but fully controlled shopping environment, but in practice, such environments are very expensive or even impossible to build. Using virtual reality (VR) environments instead significantly reduces costs. VR is fully controllable and immersive, but CG models often lack realism. This research project aims to provide mixed-reality tools which combine real camera footage with computer-generated elements to create plausible but still controlled environments that can be used for market research. My work consists of the full graphics insertion pipeline for both perspective and 360° spherical cameras, with real-time user interaction with the inserted objects. It addresses the three main technical challenges: tracking the camera, estimating the illumination to light virtual objects plausibly, and rendering virtual objects and compositing them with the video in real time. Tracking and image-based lighting techniques for perspective cameras are well established both in research and industry. Therefore, I focused only on real-time compositing for perspective video. My pipeline takes camera tracking data and reconstructed points from external software and synchronises them with the video sequence in the Unity game engine. Virtual objects can be dynamically inserted, and users can interact with them. Differential rendering for image-based shadows adds to the realism of insertions. I then extend the pipeline to 360° spherical cameras with my implementation of omnidirectional structure from motion for camera tracking and scene reconstruction. Selected 360° video frames, after inverse tone mapping, act as spatially distributed environment maps for image-based lighting. As in the perspective camera case, differential rendering enables shadow casting, and the user can interact with inserted objects. The proposed pipeline enables compositing in the Unity game engine with correct synchronisation between the camera pose and the video, both for perspective and 360° videos, which is not available by default. This allows virtual objects to be inserted into moving videos, extending the state of the art, which had been limited to static videos. In user studies, I evaluated the perceived quality of virtual object insertions, and I compared their level of realism against purely virtual environments. Thesis: Graphics Insertions into Real Video for Market Research. View Joanna’s Research Outputs |
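The differential rendering step mentioned above follows the standard formulation: render the local scene with and without the virtual objects, and add their difference to the video frame so that synthetic shadows darken the real footage. A numpy sketch under that assumption:

```python
import numpy as np

def differential_composite(frame, render_with, render_without, obj_mask):
    """Standard differential rendering composite.

    frame:          real video frame, float RGB in [0, 1]
    render_with:    local scene rendered with the virtual objects
    render_without: the same local scene rendered without them
    obj_mask:       1 where a virtual object covers the pixel, else 0
    """
    # Objects replace the frame; elsewhere, the render difference carries
    # shadows and interreflections onto the real footage.
    delta = render_with - render_without
    out = obj_mask * render_with + (1 - obj_mask) * (frame + delta)
    return np.clip(out, 0.0, 1.0)

h, w = 4, 4   # toy frame for a quick check
frame = np.full((h, w, 3), 0.6)
rw = np.full((h, w, 3), 0.5); rw[1, 1] = 0.2   # object pixel
rwo = np.full((h, w, 3), 0.5)
mask = np.zeros((h, w, 1)); mask[1, 1] = 1.0
print(differential_composite(frame, rw, rwo, mask)[1, 1])
```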
2021Catherine Taylor |
Deformable Objects for Virtual Environments CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Prof Darren Cosker Industrial Partner: Marshmallow Laser Feast Industrial Supervisors: Robin McNicholas, Nell Whitely Improvements in both software and hardware, as well as an increase in consumer-suitable equipment, have resulted in great advances in the fields of virtual reality (VR) and augmented reality (AR). A primary focus of immersive research, using VR or AR, is bridging the gap between real and virtual. The feeling of disconnect between worlds largely arises from the means of interaction with the virtual environment and the computer-generated (CG) objects in the scene. While current interaction mechanisms (e.g. controllers or hand gestures) have improved greatly in recent years, there are still limitations which must be overcome to reach the full potential of interaction within immersive experiences. Thus, to create immersive VR and AR applications and training environments, an appropriate method for allowing participants to interact with the virtual environments and elements of that scene must be considered. There does not currently exist a platform to bring physical objects into virtual worlds without additional peripherals or the use of expensive motion capture setups, so to overcome this we need a real-time solution for capturing the behaviour of physical objects in order to animate CG representations in VR or add effects to real-world objects in AR. In this work, we consider different approaches for transporting physical objects into virtual and augmented environments, and collaborate with Marshmallow Laser Feast to facilitate novel and engaging interactions within their immersive experiences. To do so, we design an end-to-end pipeline for creating interactive VR props from physical objects, with a focus on non-rigid objects with large, distinct deformations such as bends and folds. In this pipeline, the behaviour of the objects is predicted using deep neural networks (DNNs). Our networks predict model parameters and use these to animate virtual representations of objects in VR and AR applications. We experiment with three different DNNs (a standard ResNet34, and our custom VRProp-Net and VRProp-Net+) and compare the outputs of each. We present both a fixed-camera solution and an egocentric solution which predicts the shape and pose of objects in a moving first-person view, allowing a flexible capture volume and offering more freedom in immersive experiences. Finally, motivated by the potential applications for hand-object tracking within mixed reality experiences, we design a novel dataset, EgoInteraction: the first large-scale dataset containing egocentric hand-object interaction sequences with 3D ground truth data for a range of rigid, articulated and non-rigid objects. Thesis: Deformable Objects for Virtual Environments. View Catherine’s Research Outputs |
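The entry names a standard ResNet34 among the networks compared; a minimal PyTorch sketch of using such a backbone to regress deformation-model parameters from an RGB frame follows. The parameter count and the regression head are assumptions for illustration, not the thesis’ architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

N_PARAMS = 32   # assumed size of the deformation parameter vector

class PropParamNet(nn.Module):
    """ResNet34 backbone regressing deformation-model parameters."""
    def __init__(self, n_params=N_PARAMS):
        super().__init__()
        self.backbone = resnet34(weights=None)
        # Swap the classification head for a regression layer.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_params)

    def forward(self, x):
        return self.backbone(x)

net = PropParamNet()
frame = torch.rand(1, 3, 224, 224)   # captured RGB frame
params = net(frame)                  # drives the virtual object's mesh
print(params.shape)                  # torch.Size([1, 32])
```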
2018Matthew Thompson |
Building abstractable story components with institutions and tropes CDE EngD in Digital Entertainment (University of Bath): Academic Supervisor: Dr Julian Padget Industrial Partner: Sysemia Industrial Supervisors: Steve Battle, Andy Sinclair Though much research has gone into tackling the problem of creating interactive narratives, no software has yet emerged that can be used by story authors to create these new types of narratives without having to learn a programming language or narrative formalism. Widely-used formalisms in interactive narrative research, such as Propp’s Morphology of the Folktale and Lehnert’s Plot Units, allow users to compose stories out of pre-defined components, but do not allow them to define their own story components, or to create abstractions by embedding components inside other components. Current tools for interactive narrative authoring, such as those that use Young’s Mimesis architecture or Facade’s drama manager approach, direct intelligent agents playing the roles of characters through the use of planners. Though these systems can handle player interactions and adapt the story around them, they are inaccessible to story authors who lack technical or programming ability. This thesis proposes the use of Story Tropes to informally describe story components. We introduce TropICAL, a controlled natural language system for the creation of tropes that allows non-programmer story authors to describe their story components informally. Inspired by Propp’s Morphology, this language allows for the creation of new story components and abstractions that allow existing components to be embedded inside new ones. Our TropICAL language compiles to the input language of an Answer Set solver, which represents the story components in terms of a formal normative framework, and hence allows for the automated verification of story paths. These paths can be visualised as branching tree diagrams in the StoryBuilder tool, so that authors can visualise the effect of adding different tropes to their stories, aiding the process of authoring interactive narratives. We evaluate the suitability of these tools for interactive story construction through a thematic analysis of story authors’ completion of story-authoring tasks using TropICAL and StoryBuilder. The participants complete tasks in which they have to describe stories with different degrees of complexity, finally requiring them to reuse existing tropes in their own trope abstractions. The thematic analysis identifies and examines the themes and patterns that emerge from the story authors’ use of the tool, revealing that non-programmer story authors are able to create their own stories using tropes without having to learn a strict narrative formalism. Thesis: Building Abstractable Story Components with Institutions and Tropes. View Matthew’s Research Outputs |
2019 Fabio Turchet |
Physics-based modelling, simulation, placement and learning for musculoskeletal animations CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Oleg Fryazinov Industrial Partner: MPC Industrial Supervisor: Dr Sara Schvartzman In character production for visual effects, the realism of deformations and flesh dynamics is a vital ingredient of the final rendered moving images shown on screen. This work is a collection of projects completed at the hosting company MPC London, focused on the main components needed for the animation of musculoskeletal systems: primitive modeling, physically accurate simulation, and interactive placement. Complementary projects are also presented, including the procedural modeling of wrinkles and a machine-learning approach for deformable objects based on deep neural networks. For primitive modeling, we propose an approach to generating muscle geometry complete with tendons and fibers from superficial patches sketched on the character skin mesh. The method utilizes the physics of inflatable surfaces and produces meshes ready to be tetrahedralized, that is, without interpenetrations. A framework for the simulation of muscles, fascia and fat tissues based on the Finite Element Method (FEM) is presented, together with the theoretical foundations of fiber-based materials with activations and how they fit into implicit Euler integration. The FEM solver is then simplified to achieve interactive rates, demonstrating the potential of interactive muscle placement on the skeleton to facilitate the creation of intersection-free primitives using collision detection and resolution. Alongside physics simulation for biological tissues, the thesis explores an approach that extends the Implicit Skinning technique with wrinkles based on convolution surfaces, by exploiting the gradients of the combination of bone fields. Finally, this work discusses a possible approach to learning physics-based deformable objects with deep neural networks that makes use of geodesic-disk convolutional layers. Thesis: Physics-based modelling, simulation, placement and learning for musculo-skeletal animations |
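As a pointer to how the implicit Euler integration used by such a framework works in its simplest form, here is a generic numerical sketch. It is not MPC’s solver: the single-spring system, matrices and step size are illustrative stand-ins.

```python
# Generic sketch of one linearised implicit-Euler step for a deformable
# system (not MPC's solver). K is the force Jacobian df/dx at x.
import numpy as np

def implicit_euler_step(M, K, x, v, f0, h):
    """Solve (M - h^2 K) v' = M v + h f0, then x' = x + h v'.
    M: (n, n) mass matrix; f0: (n,) current force; returns (x', v')."""
    A = M - (h * h) * K
    v_new = np.linalg.solve(A, M @ v + h * f0)
    return x + h * v_new, v_new

# Toy system: one mass on a linear spring anchored at the origin,
# f(x) = -k x, so K = -k. Stable even at a large step size.
k, m, h = 50.0, 1.0, 0.05
M, K = np.array([[m]]), np.array([[-k]])
x, v = np.array([1.0]), np.array([0.0])
for _ in range(10):
    x, v = implicit_euler_step(M, K, x, v, K @ x, h)
print(x, v)   # amplitude decays: implicit Euler adds numerical damping
```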
2021 Ruibin Wang |
Intelligent Dialogue System for Automatic Diagnosis Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Professor Xiaosong Yang The automatic diagnosis of diseases has drawn increasing attention from both research communities and the health industry in recent years. Because the conversation between a patient and a doctor can provide many valuable clues for diagnosis, dialogue systems are naturally used in the field of medical diagnosis to simulate the consultation process between doctors and patients. Existing dialogue-based diagnosis systems are mainly data-driven and rely heavily on statistical features extracted from large amounts of data, which are often not available. Previous work has indicated that using a medical knowledge graph in a diagnosis prediction system effectively improves the model’s prediction performance and its robustness against data insufficiency and noise. The aim of my project is to propose a new dialogue-based diagnosis system which can not only communicate efficiently with patients to obtain symptom information, but can also be guided by medical knowledge to make accurate diagnoses more efficiently. We propose a knowledge-based GCN dialogue system for automatic diagnosis. |
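The project’s network is not reproduced here, but a toy example can show the basic graph-convolution operation that knowledge-guided diagnosis builds on. The graph, node names and weights below are invented for illustration.

```python
# Hedged sketch (not the project's model): one graph-convolution layer
# propagating reported-symptom evidence over a toy symptom-disease graph.
import numpy as np

nodes = ["fever", "cough", "headache", "flu", "migraine"]  # illustrative
edges = [(0, 3), (1, 3), (2, 4), (0, 4)]                   # symptom-disease links

A = np.eye(len(nodes))                 # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))    # symmetric normalisation

H = np.zeros((len(nodes), 1))
H[[0, 1]] = 1.0                        # patient reports fever and cough
W = np.ones((1, 1))                    # trivial weights for the sketch

scores = np.maximum(A_hat @ H @ W, 0)  # one GCN layer: ReLU(A_hat H W)
for n, s in zip(nodes, scores.ravel()):
    print(f"{n}: {s:.2f}")             # 'flu' accumulates the most evidence
```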
2021 Asha Ward |
Music Technology for Users with Complex Needs CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Tom Davies Industrial Partner: Three Ways School Industrial Supervisor: Luke Woodbury Music is essential to most of us: it can light up all areas of the brain, help develop communication skills, help to establish identity, and allow a unique path for expression. However, barriers to access or gaps in provision can restrict access to music-making and sound exploration for some people. Research has shown that technology can provide unique tools for music-making, but that technology is underused by practitioners. This action research project details the development and design of a technological toolkit called MAMI (the Modular Accessible Musical Instrument technology toolkit) in conjunction with stakeholders from four research sites. Stakeholders included music therapists, teachers, community musicians, and children and young people. The overarching aims of the research were: to explore how technology was incorporated into practices of music creation and sound exploration; to explore the issues that stakeholders had with current music technology; to create novel musical tools that match criteria specified by stakeholders and address issues found in a literature review; to assess the effectiveness of these novel tools with a view to improving practices; and to navigate propagation of the practices, technologies, and methods used to allow for transferability into the wider ecology. Outcomes of the research include a set of design considerations that contribute to knowledge around the design and practical use of technological tools for music-making in special educational needs settings; a series of methodological considerations to help future researchers and developers navigate the process of using action research to create new technological tools with stakeholders; and the MAMI Tech Toolkit, a suite of four bespoke hardware tools and accompanying software, as an embodiment of the themes that emerged from the cycles of action research, the design considerations, and a philosophical understanding of music creation that foregrounds it as a situated activity within a social context. |
2021 Phil Wilkinson |
Exploring Expectations of Technology in Education CDE EngD in Digital Entertainment (Bournemouth University): Academic Supervisor: Dr Mark Readman Industrial Partner: IPACA Industrial Supervisor: Jackie Taylor This thesis explores the impact of expectations of technology on educational practices and the challenges in researching these impacts. Expectations of technology are not universal, but there is a prevalent solutionist perspective that provides a simplistic account of technology. The critical issue explored within this thesis is the way in which underlying ideological values inform constructions of what does, or does not, constitute legitimate educational practice with technology. It also highlights the impact of these constructions of legitimate practice on parents and educators. Overall, it presents an account of technology in education that is influenced by powerful forces of legitimation that lead to presumptions of deficiency. This is a portfolio thesis that explores the role of technology in children’s learning and development across three research settings. To begin, in Mediating Family Play I investigate the perceived impact of digital technology on parents’ presumed role in cultivating developmentally appropriate forms of play. In Game Makers I discuss the production of digital games for social change as a constructionist pedagogy and the ways in which different systems of meaning intersect in the classroom. Finally, Digital Families explores the imbalanced influence of school-based educational practices on the home. In presenting this thesis I draw heavily on my professional experience through a reflective account of my research trajectory. In doing so I document the uncovering of these underlying critical issues and the subsequent development of a reflexive, critical stance. In presenting the thesis in this way, along with the content covered, the contribution is two-fold. First, it contributes to existing critical discussions of educational technology. Second, it presents a transparent account of researching educational technology in practice that will be of use to other early career researchers, or to researchers and practitioners transitioning from technical backgrounds. Thesis: Purposing digital media for education: critically exploring values and expectations in applying digital media for children’s learning and development. View Phil’s Research Outputs |
2017 Steve Willey |
Improving the Pipeline for Stereo Post-Production CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Professor Phil Willis, Professor Peter Hall Industrial Partner: DNeg Industrial Supervisors: Jeff Clifford and Ted Waine We investigate some problems commonly found when dealing with stereo images. Working within the context of visual effects for films, we explore software solutions to issues arising with stereo images captured on set. These images originate from a wide variety of hardware which may or may not provide additional data support for post-production needs; generic software solutions are thus greatly to be preferred. This dissertation documents contributions in the following three areas. Each project was undertaken at Double Negative and investigated with the aim of improving the post-production pipeline for 3D films. Colour matching is the process whereby the colours of one view from a stereo pair are matched with those of the other view. This process is necessary because slight differences in hardware and viewing angle can result in some surprisingly large colour discrepancies. Chapter 3 presents a novel approach to colour matching between stereo pairs of images, with a new tool for visual effects artists given in section 6.2. Vertical alignment of stereo images is key to providing a comfortable experience for the viewer, yet we are rarely presented with perfectly aligned footage from the outset. In chapter 4 we discuss the importance of correcting misalignments for both the final audience and the artists working on these images. We provide a tool for correcting misalignments in section 6.3. Disparity maps are used in many areas of post-production, and so in chapter 5 we investigate ways in which disparity map generation can be improved for the benefit of many existing tools at Double Negative. In addition, we provide an extensive exploration of the requirements of 3D films in order to make them presentable in the cinema. Through these projects, we have provided improvements to the stereo workflow and shown that academic research is a necessary component of developing tools for the visual effects pipeline. We have provided new algorithms to improve the 3D experience for moviegoers as well as artists, and conclude by discussing the future work that will provide further gains in the field. Thesis: Improving the Pipeline for Stereo Post-Production |
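Colour matching admits a very simple global baseline, sketched below for illustration only; this is a textbook mean/standard-deviation transfer, not the novel approach of chapter 3, and production footage would call for more local and robust methods.

```python
# Illustrative global colour match between stereo views: shift one
# view's per-channel statistics onto the other's. Not the thesis method.
import numpy as np

def match_colours(src, ref):
    """Map src so each channel has ref's mean and standard deviation.
    src, ref: float arrays of shape (H, W, 3) with values in [0, 1]."""
    s_mu, s_sd = src.mean((0, 1)), src.std((0, 1)) + 1e-8
    r_mu, r_sd = ref.mean((0, 1)), ref.std((0, 1))
    return np.clip((src - s_mu) / s_sd * r_sd + r_mu, 0.0, 1.0)

left = np.random.rand(4, 6, 3)                 # stand-in left view
right = np.clip(left * 0.8 + 0.1, 0, 1)        # mismatched right view
matched = match_colours(right, left)
print(matched.mean((0, 1)), left.mean((0, 1))) # channel means now agree
```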
2022 Thomas Williams |
Exploring the Potential of Augmented Reality to Support People Living with Dementia to Complete Tasks at Home CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth Industrial Partner: Designability Industrial Supervisor: Dr Hazel Boyd Dementia affects more than 850,000 people in the UK, and this figure continues to rise year on year. People living with dementia often have difficulty completing activities of daily living (ADLs), leading to a reliance on family or professional carers. However, assistive technology, such as task-prompting tools, can support people living with dementia to maintain their independence and live at home for longer. Augmented Reality (AR) is an increasingly prevalent technology and has been used as a task-prompting tool in industrial settings to support complex maintenance and assembly tasks; task assistance has also been identified as a promising direction for domestic AR. Despite this, relatively little is known about the efficacy of augmentations for ADLs. The work in this thesis provides an initial exploration of the use of AR as a task-prompting tool to support people living with dementia to complete ADLs at home. Multiple stakeholders, including health professionals, a general adult population, older adults without cognitive impairment, and people living with dementia and their family carers, were included to develop a holistic understanding of the use of AR in a domestic setting, based on four studies carried out for this project. The first study consisted of in-person interviews with professionals with experience of working with people living with dementia. The second study was a lab experiment with older adults comparing four AR visual prompting techniques for five basic actions found in many ADLs. The third study involved the co-design of AR prompts in a kitchen context, and their evaluation with a general adult population using an online survey. The final study consisted of online interviews with people living with dementia and their family carers to explore the results of the previous three studies and how AR could be beneficial from the point of view of people with lived experience of dementia. The overall findings show that AR as a tool to support the prompting of domestic tasks was received positively by the participants of these studies. A combination of text, audio, and a ghost-hand image demonstrating the action to carry out could be most beneficial for people with dementia, but AR prompts should be easily customisable to cater for different abilities, preferences, and personalities. Furthermore, early introduction of AR will be key for uptake once the technology has been developed further. The potential of domestic AR to improve the lives of people affected by dementia and of those who support them with ADLs motivates future work in this promising research area. Thesis: Exploring the Potential of Augmented Reality to Support People Living with Dementia to Complete Tasks at Home |
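As a purely hypothetical illustration of that customisability finding, an AR prompt could be represented as a small configuration object; every field name, default and example value below is invented rather than drawn from the studies.

```python
# Hypothetical representation of a customisable AR prompt; all names
# and values are invented for illustration, not taken from the studies.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ARPrompt:
    step: str                       # the ADL step being prompted
    text: Optional[str] = None      # on-screen instruction
    audio: Optional[str] = None     # spoken cue, e.g. an audio file
    ghost_hand: bool = False        # animated hand demonstrating the action
    repeat_after_s: int = 30        # re-prompt if no progress after this long

# Combining text, audio and a ghost hand, per the findings above,
# while leaving each channel easy to switch off for a given person.
kettle = ARPrompt(step="fill the kettle",
                  text="Fill the kettle to the line",
                  audio="fill_kettle.ogg",
                  ghost_hand=True)
print(kettle)
```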
2018 Katarzyna Wojna |
Natural user interaction and multimodality CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Dr Christof Lutteroth, Dr Michael Wright Industrial Partner: Ultraleap Industrial Supervisor: Matt Corrall This project explores how multi-modal interaction can enable more meaningful collaboration with systems that use Natural User Interaction (e.g. voice, gesture, eye gaze, touch) as the primary method of interaction. In the first instance, the project explores Digital Out-of-Home systems; however, the research and insights generated in this application domain could transfer to virtual and augmented reality, as well as more broadly to other systems where Natural User Interaction is the primary method of interaction. One of the challenges of such systems is to design adaptive affordances so that a user knows how to interact with an information system while on the move, with little or no training. One possible solution is to provide multiple modes of interaction, and associated outputs, which work together to enable meaningful or “natural” user interaction: for example, a combination of eye gaze and gesture to “understand” that the user wishes to “zoom in” to a particular location on a map. Addressing this challenge, the project explores how multi-modal interaction (both input and output) can enable meaningful interactions between users and these systems. That is, how can a combination of one or more inputs and the associated outputs allow the user to convey intent, perform tasks and react to feedback, while the system provides meaningful feedback to the user about what it “understands” the user’s intended actions to be? “Mid-air haptic feedback technology produces tactile sensations that are felt without the need for physical interactions and bridges the gap with digital interactions, by making the virtual feel real. However, existing mid-air haptic experiences often do not reflect user expectations in terms of congruence between visual and haptic stimuli. To overcome this, we investigate how to better present the visual properties of objects, so that what one feels is a more accurate prediction of what one sees. In the following demonstration, we present an approach that allows users to fine-tune the visual appearance of different textured surfaces, and then match the corresponding mid-air haptic stimuli to improve visual-haptic congruence” (Eurohaptics 2020 demonstration video). View Katarzyna’s Research Outputs |
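A hedged sketch of the kind of fusion described, combining a held gaze fixation with a firm pinch to infer a “zoom here” intent, is shown below. The data shapes, thresholds and event names are illustrative assumptions, not Ultraleap’s API.

```python
# Illustrative multimodal fusion (invented thresholds, not a real API):
# a held eye-gaze fixation plus a firm pinch is read as "zoom in here".
from dataclasses import dataclass

@dataclass
class Gaze:
    x: float            # normalised screen coordinates, 0..1
    y: float
    fixation_ms: float  # how long the gaze has dwelt near (x, y)

@dataclass
class Gesture:
    kind: str           # e.g. "pinch", "swipe"
    strength: float     # 0..1 confidence/intensity from the hand tracker

def infer_intent(gaze, gesture, min_fix_ms=300.0, min_strength=0.7):
    """Fuse the two modalities: only a firm pinch during a held
    fixation is interpreted as intent to zoom at the gaze point."""
    if (gaze.fixation_ms >= min_fix_ms
            and gesture.kind == "pinch"
            and gesture.strength >= min_strength):
        return ("zoom_in", gaze.x, gaze.y)
    return None  # ambiguous input: do nothing rather than guess

print(infer_intent(Gaze(0.4, 0.6, 420.0), Gesture("pinch", 0.9)))
```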
2017 Michelle Wu |
Motion Representation Learning with Graph Neural Networks CDE Ph.D. (Bournemouth University): Academic Supervisors: Dr Zhidong Xiao, Dr Hammadi Nait Charif The animation of digital characters can be a long and demanding process: the human eye is very sensitive to unnatural motions, which means that animators need to pay extra attention to create realistic and believable animations. Motion capture can be a helpful tool here, as it allows movements performed by actors to be captured directly and converted into mathematical data. However, dealing with dense motion data presents its own challenges, and in practice studios have difficulty reusing the large collections of motion data available, often resorting, in the end, to capturing new data instead. To promote the recycling of motion data, time-consuming tasks (e.g. manual data cleaning and labelling) should be automated by developing efficient methods for classifying and indexing data to allow for the searching and retrieval of motions from databases. At the core of these approaches is the learning of a discriminative motion representation. A skeleton can naturally be represented as a graph, where nodes correspond to joints and edges to bones. However, many human actions require far-apart joints to move in coordination; to capture these internal dependencies between joints (even those without bone connections), we can leverage the potential of Graph Neural Networks to adaptively learn a model that extracts both spatial and temporal features. This will allow us to learn potentially richer motion representations that will form the basis for the tasks of motion classification, retrieval and synthesis. |
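The core spatial operation that such networks stack with temporal convolutions can be sketched in a few lines. The toy skeleton, the extra non-bone edge between the hands and the random weights below are illustrative, not the thesis model.

```python
# Illustrative spatial graph convolution over a toy 5-joint skeleton
# (not the thesis model). Note the extra, non-bone edge between the
# two hands: far-apart joints that often move in coordination.
import numpy as np

J = 5                                       # hips, spine, head, l-hand, r-hand
bones = [(0, 1), (1, 2), (1, 3), (1, 4)]    # skeletal connectivity
extra = [(3, 4)]                            # added non-bone dependency

A = np.eye(J)                               # adjacency with self-loops
for i, j in bones + extra:
    A[i, j] = A[j, i] = 1.0
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))         # symmetric normalisation

X = np.random.randn(J, 3)                   # per-joint 3D positions, one frame
W = np.random.randn(3, 8)                   # learnable feature projection

H = np.maximum(A_hat @ X @ W, 0.0)          # one layer: ReLU(A_hat X W)
print(H.shape)                              # (5, 8) neighbourhood-aware features
```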
2021 Huan Xu |
Common Sense Reasoning for Conversational AI Intel-PA Ph.D (Bournemouth University): Academic Supervisor: Professor Wen Tang Conversational AI allows computer chatbots to interact with people in a human-like way by bridging the gap between human language and computer language. However, existing conversational chatbots are mainly based on predefined command patterns, and making conversational AI behave in a human-like way remains a challenge. Applying common sense knowledge to conversational AI is a viable solution to this problem. With common sense reasoning, chatbots can better understand human conversation, drawing not just on context information but also on common knowledge. This can make communication between humans and computers straightforward and natural, and improve the customer experience by better understanding humans’ intentions. The goal of my research is to find a domain-specific knowledge hunting approach and to apply common sense knowledge to task-driven conversational AI, making the agent aware of common sense knowledge and, in turn, more human-like, providing better user experiences. |
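As a toy illustration of knowledge grounding, the sketch below retrieves commonsense triples relevant to a user utterance. The tiny hand-written store and the keyword-matching rule are stand-ins for the large-scale, domain-specific knowledge hunting the project targets.

```python
# Illustrative sketch only: surface commonsense triples relevant to an
# utterance, the kind of facts a dialogue agent could condition on.
COMMONSENSE = [
    ("umbrella", "UsedFor", "staying dry"),
    ("rain", "Causes", "getting wet"),
    ("restaurant", "UsedFor", "eating a meal"),
]

def relevant_facts(utterance):
    """Return triples whose head or tail is mentioned in the utterance."""
    text = utterance.lower()
    return [t for t in COMMONSENSE if t[0] in text or t[2] in text]

utterance = "It looks like rain, should I take an umbrella?"
for head, rel, tail in relevant_facts(utterance):
    print(f"{head} --{rel}--> {tail}")   # grounding for the agent's reply
```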
2018 Hashim Yaqub |
Reducing simulator sickness while maintaining presence in first-person head-mounted VR applications CDE EngD in Digital Entertainment (University of Bath): Academic Supervisors: Dr Paul Shepherd, Dr Leon Watts Industrial Partner: BMT Defence Services Industrial Supervisor: Simon Luck Although virtual reality (VR) head-mounted displays (HMDs) have been in use since the mid-1960s, the surge in public awareness of and access to VR has spurred increased interest across industries in the potential of VR as an interaction modality associated with high subjective presence. Many challenges need to be addressed through the disciplined application of research methods, especially combating VR sickness, if this potential is to be realised. This Engineering Doctorate thesis reports a series of investigations within the context of real-world development with a partner company (BMT Defence Services, a naval engineering consultancy). The primary interest of the thesis is the potential of VR for training, developing cases and uses for the technology. The target training modality was a portable set-up, i.e. sitting down with a laptop, HMD and a game controller. This set-up would prove beneficial for providing auxiliary training to personnel who are not always able to receive regular on-board training. It would also prepare people for situations which are difficult to simulate in real-world conditions. Example cases included familiarisation, line-of-sight tests, hazard recognition and evacuation procedures. An initial study of the VR HMD experience in a training scenario highlighted VR sickness as a key limiting factor for usability, thus focusing the research on identifying and reducing the factors which induce VR sickness. Prior research suggests that static field-of-view restrictions could help, but only at the cost of a loss of presence. There were no reported studies of the effects of restricting the field of view dynamically, so this thesis presents two investigations of dynamic field-of-view (DFOV) constriction triggered by movement in a virtual space. It was hypothesised that a reduction in FOV would reduce the induction of VR sickness. The risk, however, was that it might negatively influence presence, as the change in FOV could distract the user. This thesis reports the development of a method for adjusting FOV to reduce VR sickness without loss of presence. Two dynamic FOV constriction studies are reported. The first failed to demonstrate a clear effect, but subjective user reports suggested methodological and experiential issues in its design. Meanwhile, research into a similar method was published at the 3DUI Symposium at IEEE VR 2016: Fernandes & Feiner (2016) [1] demonstrated that dynamic FOV constriction can reduce VR sickness without compromising presence. However, their work used interaction scenarios with normal walking in an unchallenging virtual environment; users were not subject to the types of motion which the literature suggests are most likely to induce sickness. Consequently, the second DFOV constriction study tested VR sickness reduction in six more discomforting situations via involuntary movements and animations of the virtual character and camera. Many of these animations and movements are typical in first-person applications and yet are absent from VR applications; they include, for example, head-bobbing, falling animations, stumbling, and forward rolls. The aim was to test whether DFOV constriction could allow VR developers to include such facets in future development. It showed that extreme movements still generate VR sickness despite the use of DFOV constriction, but subjective reports suggest some users appear to benefit. Further research is recommended on introducing user control over the extent of DFOV manipulation. The thesis concludes with an evaluation of the state of the art in DFOV constriction as a general approach to immersive VR interactions, including how the human vestibular system may limit DFOV effectiveness as a means of controlling VR sickness. Thesis: Reducing Head Mounted Display VR Sickness Through Dynamic Field of View Constriction |
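The core DFOV idea can be sketched compactly: narrow the field of view as a function of virtual motion the vestibular system cannot confirm, and relax it smoothly when the user is still. The mapping, gains and limits below are illustrative stand-ins, not the thesis implementation.

```python
# Minimal sketch of dynamic FOV constriction (not the thesis code):
# map virtual motion to a target FOV, then ease toward it so the
# changing vignette is itself not a distraction.
def target_fov(linear_speed, angular_speed,
               fov_max=110.0, fov_min=70.0,
               k_lin=8.0, k_ang=0.5):
    """Map current virtual motion (m/s, deg/s) to a target FOV in
    degrees. Gains and limits are illustrative tuning values."""
    discomfort = k_lin * linear_speed + k_ang * angular_speed
    return max(fov_min, fov_max - discomfort)

def smooth_fov(current, target, dt, rate=60.0):
    """Move at most `rate` deg/s toward the target FOV."""
    step = rate * dt
    return current + max(-step, min(step, target - current))

fov = 110.0
for speed in [0.0, 2.0, 4.0, 2.0, 0.0]:     # linear speed over frames
    fov = smooth_fov(fov, target_fov(speed, 30.0), dt=1 / 90)
    print(round(fov, 1))                     # narrows as motion increases
```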