The seventh edition of the industry seminar Digital Storytelling was held at Filmens hus on Monday, May 14. As usual, the seminar gathered VFX artists, directors, producers, journalists and other interested parties from home and abroad to focus on how digital effects can advance storytelling across different platforms. This year's main guest was Sebastian Sylwan, chief technology officer at WETA Digital.
In the annual segment The Nordic VFX omelette, the industry is summed up and brought up to date on new projects from key players in the Nordic countries. The small Finnish game studio Theory Interactive presented its new game project Reset, with particular attention to its popular YouTube trailer. The Swedish VFX company FIDO was on hand with clips from three of its biggest recent projects: the German children's film Yoko, the Hollywood sequel Underworld: Awakening and, perhaps most exciting from our point of view, a clip from the massive Norwegian production Kon-Tiki. I cannot reveal much beyond saying that it looked promising.
Sebastian Sylwan, the seminar's main guest, has previously been chief technology officer at Digital Domain and Senior Film Industry Manager at the design software company Autodesk. He is currently chief technology officer at WETA Digital and has worked on some of the biggest VFX productions of recent years, including Avatar (2009), The Adventures of Tintin (2011), Rise of the Planet of the Apes (2011), X-Men: First Class (2011) and The Avengers (2012), as well as Ridley Scott's upcoming Prometheus and Peter Jackson's The Hobbit. Montages had the opportunity to sit down for a chat with him the day before the seminar:
The conversation below is presented in English, the language in which the interview was conducted.
M: We’re going to have your lecture tomorrow, but I’ve seen a few clips from some of your talks online. Your whole approach seems to view digital effects as the meeting point between art and science. Could you briefly sum up that idea?
SS: Well, I think in the history of art, there’s always been the intention of pushing the boundaries of what’s possible, and I think what that means is to look at the way the artists understood light, geometry, composition and so on. It’s easy to talk about painting since I’m a visual person, but the same applies to music with rhythm and harmony, or to poetry with pauses etc. You can really think of everything in the same way. It’s always a challenge to have a deeper form of communication, or a more direct or subtle form of communication. Maybe less so in the printed word, because it has had few innovations recently.
That is, until you think of Twitter, where we’ve seen a resurgence of the haiku poem in order to fit everything into 140 characters. I was at a conference in Stuttgart last week, and the Indian director Shekhar Kapur told me he stirred up a controversy in India with a simple Twitter message. The reason he said it worked so well was that he was making a somewhat political statement, while also having a story in 140 characters. So even in the spoken word or written word, you can see changes in the way the stories are told. The Gutenberg press, for example, pretty much brought us the three stages of dramatic narrative.
So if you look at things that way, there definitely is a brotherhood between technological advancement and artistic expression, and it’s been like that for ages. In my field (digital effects) we are now at the peak or the center of that encounter. We’re trying to understand the physics of light or the physics of movement in terms of cloth, muscles, hair and so on. We try to understand the natural phenomena in order to model them. We want to model them not just to replicate them, but also to control them, so that we can drive them to better express the creative intent of the director, the director of photography, the art director and so on.
M: You’ve previously said that digital effects have changed the linearity of film production. Could you explain what you mean by that?
SS: Traditionally, film production has evolved over the last 120 years to include specific tasks. Because film was expensive, you needed to plan very well what needed to be done when you were in front of the camera. And everything needed to work like clockwork. You needed the performance at the right time, you needed the sets to be ready at the right time, you needed the grips to not be walking in the middle of the stage, you needed both performances in a dialogue to be like clockwork, you needed the lighting to be right, you needed everything to have a sort of magic at that moment. And in order to do that, it required a lot of planning. There was a pre-production stage, where you had a limited number of people being creatively involved with the project, then there was the production phase, and then there was the post-production, where you did the subsequent processes: altering its colours, giving it a grade, editing it.
Nowadays, the visual effects part of that has grown a lot, compared to where it was even 10 years ago. That growth has been accompanied by a lot of technological advancements, and those advancements have enabled some of the creative parameters to be extended, experimented with and brought forward, involving impulses from various areas. In short, I think we are talking about a type of virtual production now. There isn’t really a checklist in my mind or a definition of what virtual production is or isn’t, but I think it’s basically about using virtual tools in order to make films. So it’s really production with the best tools available, and those tools enable communication between creative figures in the filmmaking process who would normally not talk to each other.
For example, an art director normally gets involved in the early phases of the process and then leaves the project, hoping that his content was conveyed well enough to be preserved throughout the production. But now, if you’re tracing digital assets early on, the input of that art director can be tied to a specific asset, and that data is going to flow with it probably all the way to the end. In the same way, you are able to provide visual input from the visual effects side, saying what will look better from that point of view.
M: So there is more of a simultaneity going on?
SS: There is a simultaneity, but also not. You can’t have 950 people at WETA Digital involved at once, so it’s more about making sure that the tools and workflows are made so that the communication and the input can be properly expressed. The other thing is that you can now change things later on. A leading example of that is Avatar, where a lot of creative content is created digitally. So you can go and revisit creative decisions very late in the process.
M: Speaking of Avatar and aesthetics in general, I’ve always found it fascinating how you can have digital effects enhancing the storyline to make it a seamless universe on one hand, but then you also have individual moments where the audiovisual effects are allowed to shine more. How do you balance between narrative storytelling and also getting the spectacle in the moment?
SS: I see visual effects as one of the tools in the arsenal of storytelling, and if we follow the metaphor of painting, the fact that you’re using a broad brush doesn’t mean that you cannot bend laws and use a detailed brush or water your colour down or mix them differently. The overall intent is to tell a better story, to follow the director and hopefully to provide as much input as possible so that he can make the best decisions. I think that the tools we create – even the tools to make the digital effects – need to be as transparent as possible. As much as I like technology, what matters is the intent.
M: It’s interesting that you say that, because Ridley Scott was recently interviewed in relation to the upcoming Prometheus, and he was asked by an audience member if he was going to go “old school” or “new school” with more CGI. He replied that no, he was going to go “sensible school”, because he wanted the interaction with the production design as well, not just the green screen. I’ve always been curious – how do you relate to all that, in terms of having production design, real-life footage and visual effects interact without one getting more weight than the others?
SS: I think it’s a case-by-case basis, and you really need to engage your brain when you make decisions about how things are going to work. I wasn’t part of WETA Digital at the time, but when we were doing King Kong (2005), we ended up putting Andy Serkis on top of a platform so that the eyeline from Naomi Watts would be the correct one, at the correct scale. And he had a microphone that would be so strongly amplified that the whole stage would pretty much shake, just to convey that sense of power. And we knew all that was going to be CG later on. We have an amazing selection of tools at our disposal, and what we need to preserve is just that: keeping our mind on the intent rather than the tool. There are plenty of ways to do that. On a case-by-case basis, we need to evaluate the situation and acknowledge that the actors are there; and they need to feel they’re there.
M: There is a fantastic shot in Tintin where the ship enters the London docks at night, and I noticed the background in particular. It was dark, you had light sources emanating from each house window, and I thought to myself – this must be real footage superimposed on the CG landscape. Can you enlighten me on that?
SS: There was no real footage in the film.
M: So that combination never happens?
SS: Nowadays, even if you have real footage, it’s probably easier to create it. We did that on The Avengers. I’m not quite sure about this, but I was talking to the visual effects supervisor, and at this point our pipeline is sophisticated enough that we can easily reconstruct environments and have it look good. Quickly.
M: And not only environments, but motion capture too. As I watched Rise of the Planet of the Apes, I was waiting for you guys to go overboard; to make the ape emote too much; to become too human. But that never happened. You managed to be restrained – to have the ape-like quality while at the same time making him a little ‘off’. Was that a challenge, to not go overboard?
SS: I don’t know, we definitely participated in the character design, and we bounced ideas around with the director. There was very much a creative collaboration in that. You don’t want it to go overboard, but the sensitivity of the character – what the character had to be – was certainly at the center of our attention for the whole year. And yes, we needed to have that edge of human nature without going overboard. I think that was a central point of the movie; we would really be making a large mistake if we were focusing on the visual effects themselves rather than on the creative intent.
M: There is this phenomenon now with virtual or digital cinematography as well. In Tintin it’s particularly evident with the Stereoscopic 3D and the swirling cameras everywhere. That’s also a challenge; to not go overboard with the possibilities of that. How do you see digital cinematography?
SS: Fortunately, I think that’s also the sensitivity of the time. If you look at how a modern concert is shot, or a modern series, there is a lot of that. The ubiquity of the digital camera has changed the aesthetics. Personally, I’m not too fond of Dogma-style filmmaking. Although I understand the principles and intent behind it, I think that when you turn an intent into a rigidity or a dogma, that’s when you’re kind of losing the soul of why you’re doing something. And I didn’t feel that in Tintin. Yes, sure, there was very much a Spielberg signature in it, but I actually like that. I think that is a great thing; there were also several transitions that said ‘Spielberg’. Those were actually created by the previs department, who had the idea and the knowledge and the study of Steven’s style, and then proposed things that he could use. Some of these gags were changed from what was originally proposed, but the seed was preserved. And I think that’s the seed of creative collaboration.
M: Is there anything that WETA is working on right now that could be a new, potential gamechanger?
SS: There definitely are a few cool things happening. I’m in a lucky position right now, because there’s been a public announcement on a project we’ve been working on for a year and a half – the sequels to Avatar. Avatar was the prototypical virtual production, and so after it finished, we started setting up a workflow, a pipeline and a series of tools that would make the next step. We have been working on this in a partnership between James Cameron’s Lightstorm Entertainment, Autodesk and ourselves. So that’s one of the things we’re working to define: bringing the technology together with the process.
M: So will the new Avatar be just as groundbreaking, or will it be an organic continuation?
SS: We’ll certainly try. Beyond that, there isn’t much more on the production side. The fact that Peter [Jackson] decided to shoot The Hobbit at 48 frames per second is certainly an area where there will be changes.
M: That’s a bit controversial in certain circles, isn’t it?
SS: There was certainly a lot of discussion around it. To me, the fact that people noticed there was a difference is the most important part. Anything audiences can discern can be used as a creative tool in one way or the other. You might disagree with the way it was used, but if it’s something you can notice, it’s something you can use to a creative intent. The same goes for stereo, colour, sound. If we are discussing resolutions and people go ‘I think that was scanned at 4K or 6K’, then maybe you’re going to be able to tell whether you’re seeing a digital projection or a film projection. In any case, Peter is pioneering it, so he will definitely push the boundaries. To a certain extent, the controversy is a…
M: …a PR thing?
SS: I don’t know if it’s PR. People definitely have an opinion and are certainly entitled to it. In order to judge, you need to see the final results. Otherwise, you’re judging the tool and not the final product.
M: OK, thank you so much for taking the time to talk to us.
SS: You’re welcome.