Our journal contains entries about our day-to-day work that might be of interest to our readers.
Among other things, we discuss our in-house software techniques, the progress of our films, or simply what is going on at Studio Lampion.
The idea for our animated short film Bagel 2 was first conceived by the founders of Studio Lampion shortly after hearing all the news about the European space mission "Beagle 2" in December 2003.
We found it quite amusing that while all one ever heard about the European Beagle was news of failure and tragedy, the American mission at the same time seemed to be doing a much better job.
We soon realized that the story could make for a cute animated short film and started sketching out our ideas.
Once we'd fleshed out a rough story, we went on to design our characters. In the beginning, there were several different designs for the protagonist Bagel 2, some of which you can see sketched out below. The look for Bagel 2 was only finalized a single day before the modelling phase started.
Fortunately, the story itself was already roughly fixed when we started drawing the storyboard, but now the time had come to judge, for every single shot, whether
a) it was required in order to tell the story and
b) it was possible to solve any technical issues that might arise in the relatively short timeframe of about 4 months.
We were aiming for a two minute film, and so we added, removed, changed and swapped shots until finally we were happy with the story and the length.
The storyboard was then scanned, cropped and inserted into an editing application to create an animatic. This was useful for seeing whether the story really worked, since it helps in timing the length of shots. Also, when people ask you what you're doing all the time, it's always nice to have at least something to show them.
By early April, all pre-production work had been completed and we could finally start modelling the "characters". As you can probably see, the models featured in Bagel 2 are relatively simple, so the construction process didn't take too long. All animate objects are modelled using NURBS; only the terrain and rocks are made of polygons.
Take a closer look at Bagel's antenna (or tail) in the above image: in the film, there is a shimmering light where here there is only a little green dot. We did it this way because it's much simpler to add glowing lights in post-production than to render them in 3D. In a compositor, all we had to do was key that specific shade of green and apply a glow effect there, giving us great control over the look and intensity of the light.
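To make the keying idea concrete, here is a toy sketch of it in plain Python. The marker colour, tolerance and glow strength are made-up values, not the ones we used, and a real compositor of course works on full frames with soft falloffs:

```python
# Toy sketch of keying a marker colour and glowing around it.
MARKER = (0, 255, 0)   # the specific shade of green painted on the antenna
TOL = 30               # per-channel tolerance for the key

def is_marker(pixel):
    """True if this pixel is close enough to the marker green to be keyed."""
    return all(abs(c - m) <= TOL for c, m in zip(pixel, MARKER))

def add_glow(image, radius=1, strength=80):
    """Brighten every pixel within `radius` of a keyed pixel."""
    h, w = len(image), len(image[0])
    keyed = [(y, x) for y in range(h) for x in range(w) if is_marker(image[y][x])]
    out = [row[:] for row in image]
    for y, x in keyed:
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[ny][nx] = tuple(min(c + strength, 255) for c in out[ny][nx])
    return out

# A tiny 3x3 "frame" with one marker-green pixel in the centre:
frame = [[(10, 10, 10)] * 3 for _ in range(3)]
frame[1][1] = (0, 255, 0)
glowed = add_glow(frame)
```

Because the key is found by colour rather than position, the glow automatically follows the dot through the animation, which is exactly why this is so much cheaper than rendering the light in 3D.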
Interestingly, one of the few stored textures used in the film is the body of the American space ship, "Bob". The flag and the details on him were painted in 2D.
Most other textures were created using procedural shaders, meaning that they are not stored in image files but are generated purely from mathematical formulas at render-time. We preferred procedurals because even though the original texture for the Martian landscape was huge (around 300 MB), it still wasn't large enough for close-up camera views. Procedurals, on the other hand, do not lose quality when zooming in, so they were the obvious choice here.
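As an illustration of why procedurals don't break down under magnification, here is a toy procedural texture in Python - a fractal sum of sines, not our actual shader, with made-up constants:

```python
import math

def procedural_texture(u, v, octaves=4):
    """Toy procedural texture: a fractal sum of sines. Because it is a pure
    function of the surface coordinates (u, v), there are no pixels to run
    out of - zooming in just means evaluating it at finer coordinates, so
    the detail never degrades."""
    value, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        value += amplitude * math.sin(frequency * (u * 12.9898 + v * 78.233))
        amplitude *= 0.5
        frequency *= 2.0
    # Amplitudes sum to 1.875 (1 + 0.5 + 0.25 + 0.125), so map to [0, 1]:
    return (value / 1.875 + 1.0) / 2.0
```

A stored 300 MB texture has a fixed resolution; this function costs no memory at all and returns a well-defined value for any (u, v) you throw at it.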
The detail on Bagel's tires and Bob's thrusters is created via displacement shaders. These are applied like bump maps, but with the difference that they actually move the geometry instead of just the normals of the surface. If you look closely at the image above, you can probably see that the tires on the left are completely flat, unlike those on the right.
Displacement shaders are great when you want to keep your models light and still retain a high level of detail. Beware, though: certain renderers take much longer to render displacement maps than bump maps. We rendered most scenes using the free RenderMan-compliant renderer 3delight, so rendering displacements wasn't much of a problem in our case.
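The difference between the two is easy to state in code. In this sketch (with a made-up tread pattern and hypothetical numbers), a height function actually moves each surface point along its normal; a bump map would instead leave the point where it is and only tilt the shading normal:

```python
import math

GROOVES = 24  # hypothetical number of tread grooves around the tire

def tread_height(angle, depth=0.05):
    """A made-up tread pattern: raised blocks alternating around the tire."""
    return depth if math.sin(GROOVES * angle) > 0 else 0.0

def displace(point, normal, h):
    """Displacement mapping: the surface point itself moves along its normal.
    A bump map would keep `point` fixed and only perturb `normal`, which is
    why bump-mapped tires still show a perfectly flat silhouette."""
    return tuple(p + n * h for p, n in zip(point, normal))

# A point on the rim of a unit-radius tire, with its outward normal:
point, normal = (1.0, 0.0, 0.0), (1.0, 0.0, 0.0)
angle = math.pi / (2 * GROOVES)          # this angle sits on a raised block
displaced = displace(point, normal, tread_height(angle))
```

Since the geometry genuinely moves, displaced detail shows up in silhouettes and shadows - at the cost of the extra render time mentioned above.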
Above you can see our lighting setup for the terrain; the vehicles were lit separately. In most shots the terrain was lit by some 15 individual light sources, in addition to the 10 or so needed to light each of the "characters".
If you look closely, you can make out the sky dome surrounding the landscape, which is used to create the colours of the sky. A single directional light was used to simulate the sun, which was also the only one casting shadows.
The novice reader might think that one directional light could simulate the sun well enough, but that's far from the truth: to effectively simulate outdoor lighting in a 3D program, you need either a technique called global illumination (GI) or an array of well-placed lights to fake it. We opted for the second approach because even though GI can give you more realistic and fancy results, the render times shoot through the roof, making it infeasible for animation.
Also, and this was even more important to us, the traditional way of placing lights gives you much more creative freedom in developing the look of your images.
Lighting is a time-consuming task where the most subtle of changes can make a big difference in the final image. While adding lights, we always had to keep track of the number of lights already in our scene, because every light source adds to the final render time.
Perhaps the worst part about rendering 3D imagery is the time required to actually see your output. Since every single frame of the film must be processed separately by the render program, you never know where a mistake or artifact may pop up until it's too late. Through thorough optimisation and tweaking, we were able to cut the render time down to an average of about 1 minute per frame, which is quite acceptable for a reasonably high-quality DVD-format image.
Still, even at a minute per frame, the roughly 4500 frames add up to some 75 hours of rendering for a single pass, and we simply didn't have the computing power to churn through them all in our little "render farm" consisting of our workstations and the awesome combined power of our family's respective personal computers. What we needed was a way to render the shots quicker, without reducing the overall quality!
Having lived in Japan, I was introduced to the way anime is created. I remembered that anime is produced in "layers": You generally have a background layer consisting of the scenery or location, and a foreground layer with the characters and any other moving objects.
For Bagel 2 we found we could employ this exact method to tremendously shorten the render times. Fortunately for us, on Mars there needn't be any movement in the background, so in shots without camera movement the terrain is rendered just once. The characters' animation is then rendered alone in a separate pass, eliminating the need to re-render the computationally intensive background for each frame. This technique reduced the render time by around two thirds per shot.
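A back-of-the-envelope cost model shows where the saving comes from. The 40 s / 20 s split below is purely hypothetical - the real ratio varied from shot to shot - but it illustrates how amortising the background gets you into two-thirds territory:

```python
def naive_cost(frames, bg_secs, fg_secs):
    """Everything - terrain and characters - re-rendered for every frame."""
    return frames * (bg_secs + fg_secs)

def layered_cost(frames, bg_secs, fg_secs):
    """Static background rendered once; only the characters per frame."""
    return bg_secs + frames * fg_secs

# Hypothetical 100-frame shot: 40 s of terrain + 20 s of characters per frame.
frames, bg, fg = 100, 40, 20
saving = 1 - layered_cost(frames, bg, fg) / naive_cost(frames, bg, fg)
```

The bigger the share of the frame budget eaten by the static background, the closer the saving gets to that background's fraction of the total.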
There is a problem with this approach, though: shadows cast on an object can obviously only appear if the object itself is visible in the scene.
But as mentioned, the background that would receive the shadows is only rendered once, so the shadow couldn't move along with the character. The practical solution is a special shader that receives shadows and renders everything else transparent, as if it weren't there.
After rendering, the file sequence containing the character and shadow can then be stacked as layers in a compositing program to create the illusion of a single intact image or, in this case, film.
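The stacking itself is the standard "over" operation on premultiplied RGBA pixels. Here is a one-pixel sketch of what the compositing program does for us (all colour values are made up):

```python
def over(fg, bg):
    """Porter-Duff "over": composite a premultiplied RGBA foreground pixel
    onto a background pixel. Where the foreground is transparent (low alpha)
    the background shows through; where it is opaque, it covers."""
    a = fg[3]
    return tuple(f + b * (1.0 - a) for f, b in zip(fg, bg))

# Terrain pass (opaque grey) and shadow-catcher pass (black at 50% alpha);
# everywhere the shadow layer has zero alpha, the terrain is untouched.
terrain = (0.5, 0.5, 0.5, 1.0)
shadow  = (0.0, 0.0, 0.0, 0.5)
shaded_terrain = over(shadow, terrain)   # terrain darkened by the shadow
```

The character pass is simply another "over" on top, so the three renders line up into what looks like a single intact frame.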
Some people have asked me how I made the particle effects in Bagel 2, such as smoke and fire. Well, I'm sure you'll be glad to hear that most of them were accomplished relatively easily using an application called Particle Illusion SE and a compositing program.
I received Particle Illusion SE for free with a magazine, and I must say it's a great time saver for working with 2D particles.
The way I did it was simple: I'd choose a preset from the library which more or less fit my requirements for a specific shot. Then I'd play with the various settings until it looked and behaved just the way I needed it to.
In After Effects, I often like to stack the same particle sequence in several layers with different blend modes, which gives an added level of control. In the scene below, most of the smoke was done using Particle Illusion. To create the smoke trail following Bagel, I animated the position of the smoke particle's source, which I found worked surprisingly well.
The shimmering light coming from Bagel's camera wasn't done in Particle Illusion, however: This is a true volumetric light rendered in 3D and composited later with the rest of the scene.
Most of the haze and fog is done using a pretty cheap trick: A simple fractal cloud layer in After Effects, with the blend mode set to "Add"...
We've heard lots of nice comments on the look and atmosphere of Bagel 2. It's a good thing nobody saw the images that came straight from the renderer - they looked bland and flat and lacked much of the appeal they have now. Thank god for post-processing!
Compare these two images: the left one is "untouched", in that it is exactly what our 3D renderer produced on hitting the "Render" button. The image on the right was post-processed in After Effects to add fog and depth of field to the background, blurred highlights for that dreamy feeling, and some quite intensive colour correction.
To achieve the haze and blur in the background, we had the renderer output a so-called depth map or z-map. This is a grayscale image of the scene in which the lightest parts are closest to the camera and the darkest are the farthest away. A post-processing application can use this map to mask certain effects.
Many compositing applications include plug-ins for use with depth maps, but the trouble is that there are several different depth-map formats, so it's quite likely you'll have to use a workaround like the one described here. For the haze, we used a "Solid Layer" in the colour of the fog and applied the inverted depth map as a mask. Then we played with the opacity and the blend modes until we were happy with the look - we found that an opacity of 50-75% coupled with the "Lighten" blend mode gives convincing results in most scenes, though you'll need to fiddle with the blend mode, as results vary heavily depending on the scene. Thanks to the depth mask, the fog gets thicker in the distance - and there are many more effects you can achieve with this method!
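Stripped of the application specifics, the fog trick boils down to blending a fog colour in by inverted depth. A per-pixel sketch in Python - the fog colour and opacity are made-up values, and a plain linear blend stands in for the "Lighten" mode:

```python
def add_fog(pixel, depth, fog=(0.75, 0.70, 0.65), opacity=0.6):
    """Blend `fog` into an RGB pixel by inverted depth: `depth` is 1.0 at
    the camera and 0.0 at the far plane, so (1 - depth) puts the most fog
    in the distance - just like masking a solid with the inverted z-map."""
    amount = (1.0 - depth) * opacity
    return tuple(c * (1.0 - amount) + f * amount for c, f in zip(pixel, fog))

near = add_fog((0.2, 0.2, 0.2), depth=1.0)   # at the camera: untouched
far  = add_fog((0.2, 0.2, 0.2), depth=0.0)   # far plane: 60% fogged
```

Swap the fog colour for a blur radius driven by the same (1 - depth) term and you have the depth-of-field variant described next.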
The depth of field effect is achieved in roughly the same manner, but this time by using an "Adjustment Layer" instead of the "Solid Layer" and applying a compound blur filter. This one is a little more work to get to look right, but I think you get the idea.
Well then, I hope we were able to give you a little insight into the making of our animated short film Bagel 2. The film is available on 3Dtotal's "Short Drawer DVD" or viewable online in the shorts section of this site - we'd be pleased if you had a look!
(Revised November 2007. Originally published on Vocanson in 2004)