Looking at some great articles from what I believe is one of the developers at DICE, and certainly one of the best environment artists out there (https://80.lv/articles/rendering-scanned-vegetation-in-real-time/), I decided to write a small piece on realism in my project and in CG in general.
A lot of people tend to believe that making “realistic” visuals means making something look… well, real, just like it is in real life. But what if I told you that some of the most “realistic” visuals that a ton of people take for granted are not realistic at all!
HDR lighting is perhaps one of the most ironically named visual effects, as it means opposite things in real life (photography/the way our eyes work) and in CGI. In real life, High Dynamic Range means that our eyes (and suuuuper expensive cameras) can view both bright lights and shaded objects with equal clarity: looking at the sky during the day does not make the grass look dark, and looking at the ground does not make the sky appear white with light. In this image you can see the effect on an SDR image and an HDR image respectively.
The SDR image is taken with a low-quality camera sensor that can only pick up light within a certain brightness range; as such, outdoor photos will either have an overexposed sky or an underexposed landscape. Our eyes, being more advanced than a phone camera, see images closer to the one on the right, as they can pick up a High Dynamic Range of light.
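The difference can be sketched in a few lines of NumPy. This is a minimal toy illustration (the radiance values are made up, and real tone mapping pipelines are far more involved): a clipped “SDR sensor” loses everything above its range, while a simple Reinhard-style tone mapping operator compresses the whole range into something displayable.

```python
import numpy as np

# Hypothetical linear HDR radiance values: dark grass up to bright sky.
hdr = np.array([0.05, 0.5, 1.0, 4.0, 16.0])

# Naive SDR capture: hard clip to the displayable [0, 1] range.
# Every bright value flattens to 1.0 -- the "overexposed sky" case.
sdr_clipped = np.clip(hdr, 0.0, 1.0)

# Reinhard tone mapping: compress the full HDR range into [0, 1),
# so both shadow detail and highlight detail survive.
tone_mapped = hdr / (1.0 + hdr)

print(sdr_clipped)   # [0.05 0.5  1.   1.   1.  ]
print(tone_mapped)   # bright values stay distinguishable, all below 1.0
```

The clipped version is what the “SR” photo above shows; the tone-mapped version is closer to what our eyes (or an HDR merge of several exposures) deliver.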
When the Source engine game Half-Life 2 was released by Valve, this was one of the features they talked about… and as you can see, the “feature” performed the opposite task (simulating a camera’s limited exposure rather than the eye’s full range), but the name stuck.
The reason I’m starting with this feature is that it was at this point that graphics started to look less “realistic”. Engines now, more than ever, have features that emulate the world as seen through the lens of a CAMERA, not through our eyes. Being more accustomed to watching things on TV, we in turn interpret such visual effects as realistic, despite the fact that we do not see the world like this at all.
At this point there is a massive list of features that we don’t see in real life that get used in games. These include:
- Lens Flares
- Anamorphic Bloom
- Chromatic Aberration
- Lens Distortions/Vignettes
- Color Correction (more of an artistic tool but it quickly made the jump from film to games)
- Bokeh Blur (it looks cool but your eyes don’t work like that)
- Motion Blur (we do see motion blur, but it’s waaaaaay different)
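To show how artificial these effects really are, here is a toy sketch of one of them. Chromatic aberration, a defect of physical lenses, can be faked in post by sliding the red and blue channels in opposite directions (a minimal NumPy sketch; the tiny synthetic “image” and the function name are my own, purely for illustration):

```python
import numpy as np

def chromatic_aberration(img, shift=1):
    """Fake lens chromatic aberration by shifting the red and blue
    channels horizontally in opposite directions.

    img: H x W x 3 float array in [0, 1].
    """
    out = img.copy()
    out[:, :, 0] = np.roll(img[:, :, 0], shift, axis=1)   # red channel right
    out[:, :, 2] = np.roll(img[:, :, 2], -shift, axis=1)  # blue channel left
    return out

# Tiny synthetic image: a white vertical stripe on black.
img = np.zeros((4, 8, 3))
img[:, 3:5, :] = 1.0

fringed = chromatic_aberration(img, shift=1)
# The stripe's edges now show red/blue fringes where the channels
# no longer line up -- a flaw of glass optics, not of human vision.
```

Real implementations do this per-pixel in a shader with a radial offset from the screen center, but the principle is the same: we deliberately add a lens defect that our eyes never produce.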
An easy example of this is the first two Crysis games from Crytek. Crysis 1, running on CryEngine 2, looked extremely good because it focused on visual effects that our eyes can detect… and then Crysis 2 threw in so much HDR, bloom and lens flare that seeing anything in the environment became a task.
(Crysis 1, clear image + motion blur)
(Crysis 2, Bloom + Blur + HDR = barely visible environment)
So what am I doing?
For my project I decided a long time ago that I would be pursuing movie-like photorealism. Why?
Well, for one thing, a sci-fi concept such as this has existed exclusively in cinema and games, where starship engines aren’t really on until they emit a massive lens flare that covers the screen.
Two, 80% of the UE4 engine’s tools and newer features, like FFT bloom convolution and the ACES (filmic) tonemapper, cater to this visual style and have in fact replaced things like the old tonemapper, which did a better job of simulating the eye’s perception of color.
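To make the “filmic” part concrete, here is Krzysztof Narkowicz’s widely used curve fit of the ACES filmic tonemapper. Note this is an approximation of the ACES curve, not UE4’s exact internal implementation, and the code around it is my own sketch:

```python
import numpy as np

def aces_filmic(x):
    """Narkowicz's approximation of the ACES filmic tonemapping curve.

    x: linear HDR values (non-negative array); returns values in [0, 1].
    """
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

# The filmic S-curve crushes shadows and rolls highlights off toward
# white -- the "cinematic" film-stock response, not the eye's response.
x = np.array([0.0, 0.18, 1.0, 4.0, 16.0])
print(aces_filmic(x))
```

The S-shape is exactly why it reads as “movie-like”: film stock has this kind of shoulder and toe, whereas the eye adapts locally and keeps a far flatter response across the scene.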
Three, lens flares, bloom and color correction allow for massive creative freedom when it comes to final project presentation. Games like Killzone 2/3 and Mass Effect have used some of these features to great effect, to the point where certain flares or color grading effects have become iconic (Mass Effect’s anamorphic bloom/flare, Killzone 2’s Helghast eye glow). So when someone says that stylized visuals (hint: they always refer to cartoon-looking characters) offer more visual variety, tell them they are wrong (Diablo 3 = WoW, Overwatch = *any Pixar film*, Crash Bandicoot = Jak & Daxter/Ratchet & Clank).