One of the biggest events on the calendar, the 42nd International Conference on Computer Graphics and Interactive Techniques – SIGGRAPH 2015, took place in Los Angeles, USA. Here CDE Research Engineer Rosie Campbell shares her experience of the conference.
SIGGRAPH 2015
This year, I attended SIGGRAPH 2015 as part of my EngD in Visual Computing with the Centre for Digital Entertainment (CDE). SIGGRAPH 2015 was the 42nd International Conference and Exhibition on Computer Graphics and Interactive Techniques and is widely regarded as the world’s premier computer graphics conference. It is a unique mix of academic research and industry application, featuring not only research papers but production talks, courses, a commercial exhibition and much more. For a first-time attendee like me, this can be somewhat overwhelming, and proper planning and time management are essential!
Here is a selection of my highlights…
‘Ready, Steady, SIGGRAPH!’ Introductory Session
I definitely wouldn’t have got as much out of my time at the conference had I not attended this introductory session. It went through the various session types (as I’ve described above), which were previously a bit of a mystery. It also emphasised that you literally cannot see everything, so try not to get too stressed by the ‘fear of missing out’. They recommended making a list of priorities – sessions you really want to attend plus some backups in case your first choice is full or not what you expected. The overall message was: do some research and plan your time as best you can, but also be open to the possibility of serendipity, both in terms of attending sessions and meeting new people.
Technical papers
Here are some of my favourite papers:
Decomposing Time-Lapse Paintings Into Layers (PDF) Jianchao Tan (George Mason University), Marek Dvoroznak, Daniel Sykora (CTU in Prague, FEE), Yotam Gingold (George Mason University)
This paper documents a system that lets an artist draw a picture with traditional materials (pen/paint/paper) and then digitally edit it in layers, as if the image had been created in photo-editing or drawing software. A fixed camera films the drawing or painting process, and the layers are then extracted computationally. It tackles some interesting problems along the way, such as occlusion of the image by the artist’s hands and colour shifts between frames due to varying lighting.
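The paper solves this with a careful optimisation, but the core intuition fits in a few lines of Python. The sketch below is purely illustrative (the function, thresholds and the ‘pause ends a layer’ heuristic are my own, not the authors’ method): difference successive frames, skip frames where too much changes at once (probably the artist’s hand), and group bursts of new strokes into layers.

```python
import numpy as np

def extract_layers(frames, diff_thresh=12.0, occlusion_frac=0.2):
    """Very rough layer extraction from a fixed-camera painting video.

    frames: time-ordered list of HxWx3 RGB frames.
    Returns a list of (colour, mask) layers, one per burst of strokes.
    """
    layers = []
    prev = frames[0].astype(float)
    colour = np.zeros_like(prev)
    mask = np.zeros(prev.shape[:2], dtype=bool)
    for frame in frames[1:]:
        frame = frame.astype(float)
        diff = np.abs(frame - prev).sum(axis=2)          # per-pixel change
        changed = diff > diff_thresh
        if changed.mean() > occlusion_frac:
            continue                  # huge change: likely the artist's hand
        if not changed.any() and mask.any():
            layers.append((colour.copy(), mask.copy()))  # a pause ends a layer
            colour[:] = 0
            mask[:] = False
        colour[changed] = frame[changed]                 # record new strokes
        mask |= changed
        prev = frame
    if mask.any():
        layers.append((colour, mask))
    return layers
```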
RingIT: Ring-Ordering Casual Photos of Temporal Events Hadar Averbuch-Elor, Daniel Cohen-Or (Tel Aviv University)
The idea behind this paper is that when something interesting happens, lots of people usually gather around and take photos. This technique orders and aligns those photos around the subject, producing a kind of ‘bullet-time’ effect.
Time-Lapse Mining From Internet Photos Ricardo Martin-Brualla (University of Washington), David Gallup (Google Inc.), Steven M. Seitz (University of Washington and Google Inc.)
This was a simple concept that produced a really effective time-lapse result. The algorithm uses a Google Image search to retrieve photos of a landmark, orders them by time, aligns them by computing depth maps and a 3D reconstruction of the scene, and then removes flicker and obstructions by tracking pixel colours over time.
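That last step – tracking pixel colours over time – is essentially robust temporal filtering. As a minimal sketch (my own simplification, assuming the frames have already been aligned into a single NxHxWx3 array), a sliding temporal median per pixel suppresses both lighting flicker and passers-by:

```python
import numpy as np

def stabilise_timelapse(aligned, window=15):
    """Suppress flicker and transient obstructions in an aligned time-lapse.

    aligned: NxHxWx3 array of registered frames, in time order.
    Each output pixel takes the median of its colour over a sliding
    temporal window, so brief occlusions and flicker drop out.
    """
    n = len(aligned)
    half = window // 2
    out = np.empty_like(aligned)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = np.median(aligned[lo:hi], axis=0)       # per-pixel median
    return out
```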
Real-Time Hyperlapse Creation via Optimal Frame Selection Neel Joshi, Wolf Kienzle, Mike Toelle, Matt Uyttendaele, Michael F. Cohen (Microsoft Research)
This is a continuation of Microsoft’s Hyperlapse research, which did the rounds a while ago. This time, they were experimenting with the trade-off between speed and smoothness, looking to create a decent-looking hyperlapse with no special hardware in a much shorter time than their previous efforts. You can download the software as an Android or iPhone app if you want to make some yourself. They even ran a real-time demo on footage of someone walking to the presentation!
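‘Optimal frame selection’ hints at a dynamic-programming formulation: score how well each pair of frames joins up, then find the cheapest path through the video whose skips stay near the desired speed-up. The sketch below is my own reconstruction of that general idea, not Microsoft’s actual algorithm (their join cost comes from estimated camera motion):

```python
def select_frames(cost, n_frames, target_skip=8, max_skip=16, w_speed=1.0):
    """Choose a smooth subset of frames for a hyperlapse.

    cost[i][j]: penalty for cutting from frame i straight to frame j
    (e.g. misalignment between the frames); lower means smoother.
    The quadratic term pulls skip lengths towards target_skip, which is
    exactly the speed-versus-smoothness trade-off.
    """
    INF = float("inf")
    best = [INF] * n_frames      # best[j]: cheapest path ending at frame j
    prev = [-1] * n_frames       # back-pointers to recover that path
    best[0] = 0.0
    for i in range(n_frames):
        if best[i] == INF:
            continue
        for j in range(i + 1, min(i + max_skip + 1, n_frames)):
            c = best[i] + cost[i][j] + w_speed * (j - i - target_skip) ** 2
            if c < best[j]:
                best[j], prev[j] = c, i
    path, j = [], n_frames - 1
    while j != -1:               # walk back from the final frame
        path.append(j)
        j = prev[j]
    return path[::-1]
```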
Lillicon: Using Transient Widgets to Create Scale Variations of Icons Wilmot Li (Adobe Research), Gilbert Bernstein (Stanford University)
This was a really neat, UX-led approach for improving the editing of SVGs. I liked how it turned the way a computer interprets an image into something far more sensible for humans, using concepts of negative space and ‘widgets’ to represent parts of images. It’s open source if you want to have a play.
Keynote
The keynote session was from Joi Ito, Director of the MIT Media Lab, and it was very thought-provoking.
Ito called the MIT Media Lab ‘anti-disciplinary’, i.e. for people with skills that don’t really fit into just one discipline. He said ‘if you could get a job somewhere else, don’t apply’.
Ito spoke about Shenzhen, which he called the Silicon Valley for manufacturing, and about the innovation taking place in this industry. However, the main part of his talk was about how bio is the new digital, how biology is programmable just like any other device and how powerful this could be – can we store movies in DNA?
He pointed out that biology now looks more like computer science than the subject we learned at school. But he urged us not to make the same mistake with young people as we did with computers and the internet – instead of demonising young hackers, we should encourage them and involve them in how to make hacking biology safe. It could prevent an extinction event!
VR Panels, talks and demos
VR was a huge theme at SIGGRAPH. I attended a panel discussion on the topic as well as a couple of other relevant talks and demos. Here are the main insights I gathered on VR.
Virtual Reality does not equal Head Mounted Display, although these are often conflated. A VR experience is more than hardware, it’s any technology that ‘takes you there’.
People think VR has recently had a resurgence, but in reality it has always been there, just in other, non-HMD forms. To take advantage of the full potential of VR, we need to start going beyond just ‘flying through pretty models’. Initiatives like Oculus Story Studio aim to showcase the variety of experiences possible in VR.
But there is a problem: the underlying infrastructure of VR is not conducive to creativity. It is easy to get lost in the clutter of the technology, and there is a high barrier to entry to creating experiences. This is compounded by the fact that VR tools are targeted at specific platforms, so the ecosystem is not very open. VR applications currently have a very short lifespan – not much more than a novelty. Is this due to a lack of compelling experiences, or to the physical sensation of being ‘in another world’?
Some interesting VR research going on:
– Using VR to extend the developer’s workspace into the world around you, creating a physical IDE
– Investigating how multiple users can interact in VR
– Developing a mixed reality platform for maths education – interacting with geometry
– ‘Never blind in VR’ – tackling the problem of not being able to see your physical surroundings while in a VR experience
Some insights from an agency creating VR experiences:
– VR experiences are interactive, even if they are just films. Because of this, we need real-time rendering. We can use technology from games because it runs at the required performance for VR
– To create a sense of a volume of space, or parallax, in VR, use particles (e.g. falling snow)
– A problem is getting people to realise they can look around – people are used to watching a fixed screen. Spatialised audio helps people to realise they are free to explore, as does grabbing their attention with something moving like a bouncy ball
– How do you get users to stay in the target area without jarring graphics or messages? Desaturating parts of the scene (a bit like ‘greying out’) is a subtle suggestion that you are moving out of the interactive area, and prompts the user to reposition – there is a small sketch of this idea after the list
– When users have an avatar in the virtual world but can’t control their avatar’s arms with their own, it can produce a very unnerving feeling of having ‘dead arms’. Instead, put them in a situation where they can’t use their arms, e.g. handcuffed behind their back.
– In VR, it is better to have avatars with no expression than ones that can have simple happy or sad expressions – this just emphasises their lack of expression
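As a concrete example of the desaturation cue mentioned above, here is a minimal sketch (entirely my own, assuming the renderer hands us each frame plus the user’s normalised distance from the centre of the play area). It blends pixels towards their luminance as the user approaches the boundary:

```python
import numpy as np

def edge_desaturate(rgb, dist_from_centre, fade_start=0.7, fade_end=1.0):
    """Desaturate a rendered frame as the user drifts out of the play area.

    rgb: HxWx3 float array in [0, 1].
    dist_from_centre: 0 at the centre of the interactive area, 1 at its edge.
    """
    # 0 inside the safe zone, ramping up to 1 at the boundary
    t = np.clip((dist_from_centre - fade_start) / (fade_end - fade_start), 0.0, 1.0)
    luma = rgb @ np.array([0.299, 0.587, 0.114])   # per-pixel luminance
    grey = np.repeat(luma[..., None], 3, axis=2)   # greyscale version of the frame
    return (1.0 - t) * rgb + t * grey              # lerp colour -> grey
```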
Interstellar Production Session
A particularly enjoyable moment was finding out about the visual effects and physics behind the film Interstellar. The visual effects were done by Double Negative, a company which hosts a few other CDE EngD students. The panel discussed the technical side of the visual effects, the physics they were inspired by, and the design challenges of creating a visual representation of things in more than 3 dimensions. Physicist Kip Thorne (who was scientific advisor on the film) was on the panel and talked about how he ended up publishing a number of scientific papers after being asked difficult physics questions by members of the film crew. I also enjoyed hearing about the iterative design process of visualising the ‘tesseract’ – a 4-dimensional hypercube.
Baby X
Baby X was by far the creepiest thing I saw at SIGGRAPH. It was presented as part of ‘Real-Time Live!’, a fantastic showcase of all kinds of real-time demos, and was the thing that stood out to me most. It features an incredibly realistic virtual baby (beyond the uncanny valley) which reacts to your input and learns, powered by an algorithm that simulates pleasure and pain responses. So when you smile and clap, the baby looks pleased and learns; when you leave it for too long, it starts to get visibly distressed. Like an extreme Tamagotchi…
The team behind this have also experimented with incredibly realistic virtual adults, to explore the importance of nonverbal communication. Currently, when we get instructions from a machine, we get only the linguistic information in the words themselves. When we communicate with each other, however, we use facial expressions, tone of voice and body language to fill our words with extra meaning.
The team demonstrated this power by changing the tone of voice and facial expressions of their realistic human avatars, while they issued mundane instructions that you might get from a computer, like ‘please enter your password’. It’s amazing how this can go from perfectly polite to hugely passive-aggressive with some tiny tweaks.
Shadertoy
Shadertoy is an online shader editor. Having been interested in shaders for a while, I was really excited to have a go at writing one for the first time. In fact, I was so eager I got there half an hour early and was first in line! As you can see from the photo below, it was very popular, with a large crowd standing at the back once the workstations were full. Despite a few setbacks – a keyboard with the stickiest keys in the world, very little light, and presenters going at lightning speed – I just about kept up and created a procedural landscape shader, which you can see here: https://www.shadertoy.com/view/4tlSDS
The large crowd at the Shadertoy workshop – and me eagerly at the front!
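My shader itself is written in GLSL on Shadertoy, but the heart of most procedural landscape shaders is the same trick: a smooth noise function summed over several octaves to build a fractal heightmap. Here is that core idea as a short Python sketch (an illustration of the general technique, not a port of my shader):

```python
import numpy as np

def value_noise(x, y, seed=0):
    """Smooth 2D value noise: hashed random values at integer lattice
    points, blended with a smoothstep weight in between."""
    def rand(ix, iy):
        h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
        h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
        return (h & 0xFFFF) / 65535.0                # value in [0, 1]
    ix, iy = np.floor(x).astype(np.int64), np.floor(y).astype(np.int64)
    fx, fy = x - ix, y - iy
    fx = fx * fx * (3 - 2 * fx)                      # smoothstep weights
    fy = fy * fy * (3 - 2 * fy)
    v00, v10 = rand(ix, iy), rand(ix + 1, iy)
    v01, v11 = rand(ix, iy + 1), rand(ix + 1, iy + 1)
    top = v00 * (1 - fx) + v10 * fx
    bottom = v01 * (1 - fx) + v11 * fx
    return top * (1 - fy) + bottom * fy

def terrain_height(x, y, octaves=5):
    """Fractal sum of noise octaves - the usual procedural-landscape trick."""
    h, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        h += amp * value_noise(x * freq, y * freq)
        amp, freq = amp * 0.5, freq * 2.0
    return h
```

A Shadertoy shader evaluates something like terrain_height per pixel in GLSL (often ray-marching the heightfield), but the noise-plus-octaves structure is the same.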
Exhibition and Interactive areas
A really enjoyable way to spend time was to wander around the commercial exhibition, the interactive demos and the studio spaces. Here is a selection of highlights from these areas.
The Maker Studio – including numerous laser cutters and 3D printers
Modular Instruments
Using light to create a visual ‘wobble’ effect on guitar strings
A 3D interactive interface for architecture or game design
Live motion capture
Laser cut guitars
HDR projector displays
‘Holographic’ TV
Summary
I’ve only scratched the surface of SIGGRAPH here, but I hope it’s clear that the conference has incredible depth and breadth, all of it of very high quality. I learnt much more than I was expecting and came back with lots of research ideas for my EngD.
SIGGRAPH was one of the best, most inspiring conferences I’ve been to, and I would highly recommend going if you get the chance and are interested in computer graphics.
Here are a few more pictures from the rest of the CDE team.