DOOM3 artist interview (Community Forums / Developer Stations)
| Don't know where I should put this link really. Anyway, NICE interview on how he did the 3D models etc. for DOOM3:|
| Some cool stuff there. I'll never understand how people can make 3d models so damn fast though... it took me forever just to make a low poly crate... and a tank... and a hover racing ship. |
| I didn't think they made every model in high-poly.. I thought some of the textures were plain bump maps.. jeez that would take a long time..|
Good interview though ^^
| "jeez that would take a long time"|
Three years to be exact. :-)
| He says he uses Relax UVs "in some other program" - I wonder what that other program would be? Unwrap3D maybe? |
| heh.. only other program that I know of at least ^^ |
Ultimate Unwrap 3D rulez!
| Max 5 had it via MAXScript; it was actually there internally, just not in the UI :?|
As far as I've seen, sadly, Relax is not as good in UU as it is in Max... (heh, I think this is the first time I've mentioned a UU downside... indeed it is...)
Max 6 has it properly now, I think, and then it was improved in Max 7 (way cool that 7 now has a normal-mapping workflow built in, the Edit Poly modifier, and CS4 comes included... character animation improved too...)
Deep UV has Relax UVs... a number of others do too...
But yep, UU rulez totally in any way...see, from that interview :
"-UV editing. There are a number of UV features that LightWave really needs improvements on in my opinion. The fact that UVs have to be unwelded causes damage all over the place."
Yep, that happened in Max 5 too; it was more uncomfortable there than in UU. In general the UV handling in UU feels faster to me, dunno. Both are my favourite UV mapping solutions.
And I am finding a fountain of power in XSI's UV mapping and polygon tools... and to think I purchased it for animation...!
Indeed, I'm for it for that reason.
There was a lot of debate about which tools were used for Doom 3... I heard Max, LW and Maya... I even have shots of winki animated in Maya... heck, hard to know the real truth... ;)
looks like they used LW a lot... it's a kickass modeller..
| Well considering the fact that this interview is on Newtek's website and the screenshots are from LW, I'd say that they did use LW :P ...And of course that leaked beta had .LWOs, so... |
| yup, must be. ;)|
And the interview is quite interesting... no "modellers" and "texturers"... in small companies everybody is a do-it-all... seems to be that way in the huge ones too... :)
The stuff is going to get more and more painful... how much did he say... two, three weeks for a character? And he's probably really fast..
the time is coming when absolutely everything needs to be hi-res modelled at thousands or millions of polys for normal maps... well, the next titles are probably like that... what a pain.
| yeah, I don't really see the point in millions of polys even for the kind of normal maps they are doing; it does seem a bit overkill. Although it would be faster to work super high poly in many cases, assuming you had fast enough machines to work with them in realtime.|
I can't wait for Max 7 as I've wanted an Edit Poly modifier for years; I used to avoid editing polys much of the time as I don't like collapsing the stack too early, but on the other hand it's so useful that you often say screw it and use it anyway.
Some of the tools the guy wishes he had are pretty standard in Max, which actually has better alternatives for level building etc. Haven't used LightWave extensively since 6.5 though, which was quite some time ago now.
| @sswift: Modeling is a piece of cake. I can knock off a great model pretty fast. It's texturing that takes forever. And even there, setting UV coordinates is pretty fast and easy. It's creating the texture image that takes me forever. Man, if I had a buddy to take care of all the texturing, so that I can concentrate on modeling and animating, I would be so set. |
| yeah, that's why a lot of developers have dedicated texture artists that create all the base textures that artists and level designers need to flesh out their level. |
I really like it that way as you can get so much done a lot faster, and have a consistent look between different environments. The level modelers get to do a lot of custom textures of course, but it certainly helps get the overall look and feel of a game down.
the smallest place I worked, we all did our own textures, and things would often bog down. At the place where we had one artist per environment type and a master texture artist, things went pretty well, but we all had to get time with the guy to explain what we wanted the look of the level to be, and hope he understood. But it worked pretty well once we had our couple of dozen base textures.
Another place had 1 level designer, for the entire game, and the game levels split into 6 pairs consisting of 1 texture artist and one modeler.
Usually they did all the vehicles, characters for their level too.
Anyway, in my experience one good dedicated texture artist goes a long way and things move along at a good pace.
Once we have money and leadfoot decides to get another artist, I think we'd probably end up trying for a good web designer and 2D texture artist who we might train into some 3D work as things develop. We already have a really nice game concept, but at the moment it's a bit too ambitious for a two-man, low-income team lol.
| Oh yeah, a 2D artist to do the textures and web design would be perfect. So would a sound/music person. Those are my main shortcomings as a game developer so people to cover those bases would be perfect complements.|
I just read the interview, and his closing lines are pretty funny. I'd not thought about it, but now Carmack has that cliche genius title of "rocket scientist." That and "brain surgeon" are the two cliche genius professions.
"He says he uses Relax UVs "in some other program" - I wonder what that other program would be? Unwrap3D maybe?"
Maya has a pretty good Relax UVs tool. When Brad was creating Relax UVs for Ultimate Unwrap he was using Maya's tool as a basis for comparison. He asked me to compare results between his tool and Maya a couple times.
| heh, you're right about the music person, although I think we'd probably take one on as a contractor. At the moment we're using Jeremy's musical talents, which although not great, aren't too shoddy either.|
BTW, the last place I worked was able to licence tracks from well-known artists for $800 - $1400; not bad really, and something else worth considering if you are making enough out of your business.
| It seems like they start with some nice map geometry pieces, and then totally destroy all the detail and use crappy bump maps.|
I have decided I really dislike the bump map look. It's more of a style than it is realism. Kind of like comic-book art or something.
I mean look at this texture. I'd prefer the Unreal artwork any day:
| "I have decided I really dislike the bump map look. It's more of a style than it is realism"|
I've been thinking/saying that for a while now. I have nothing wrong with bump maps per se, but the whole starkly-lit per-pixel dynamic lighting in recent games looks like ass to me.
| The only things I see wrong with that texture are that the lighting is too harsh, and the sign looks like it is transparent rather than being a separate element, which should be pretty bright. Also, there shouldn't be any stuff behind the grating; that should be added on a separate plane to give it a 3D look. |
| nitpicks |
| I think a lot of people are changing their definition of "good graphics" because of Doom 3. If some no-name company was using textures like that, they would get laughed at. |
| When I originally said I wasn't that impressed with the game from an artistic point of view, most people thought I was nuts. But there are a considerable amount of shortcuts that lower the final quality of the graphics in the game.|
I'd rather model a few more features and work on better shadows over curved surfaces. There's all kinds of weirdness in Doom's shadows, and some other games have actually done better in the realtime animated shadow department. I personally prefer to use optimized geometry rather than going overboard with the bump maps, and really don't see the need for these million-poly models except to sound good. I doubt anyone could really tell the difference between a 50k-70k poly character and these. I wouldn't be surprised if they used splines or NURBS and turned up the subdivisions to come up with crazy poly numbers with which to impress the media.
Still, it is cool that these kinds of games are being done, it's not 10 years since I first got interested in 3D rendered characters, and these are better than what was around back then.
3ds Max can already create your FX shaders, and this is supposed to be improved a lot in Max 7; in five years' time I expect we'll be creating materials the same way you might for a pre-rendered scene in your favourite 3D app.
Pretty cool really. You should be able to do a lot of that now; if the hardware is taking over the lighting and shadows, working with simple diffuse textures becomes a lot easier.
Anyway, I've started rambling, but things are looking like they should be fun in the future, and I don't think it will be too long that smaller devs like ourselves can make use of this kind of tech.
| "I wouldn't be surprised if they used splines or nurbs and turned up the subdivisions, to come up with crazy poly numbers with which to impress the media."|
Heh, I never thought of that but you're probably right. Hell, never mind NURBS; they could use subdivision surfaces (the guy in the interview even referred specifically to using MetaNURBS, the Lightwave term for subdivision surfaces) and set the number of polygons as high as they want without changing the appearance of the model at all.
| The idea that they are representing million-poly meshes with normal maps is cool, but the end result looks worse than a well-made texture. |
| Except that a well made texture won't react to lighting. |
| Who cares? The lighting is terrible; entirely too stark and contrasty. |
| Does this DOT3 normal mapping idea that Doom3 is using only work in dark scenes with a few light sources?|
Every screenshot with it only looks really cool when it's mostly dark, and from my experience in the game so far the brighter-lit areas do have a rather low-quality look compared to other recent games.
id, apparently. That's the entire purpose behind the tech: lighting.
| Hands up if you're saying dot3 lighting is bad in this thread *and* have made a game using it exclusively? I could never go back to per-vertex lighting... not if you offered me all the money in the West.|
don't knock a feature because id mis-used it. knock id.
| I think we're on about different things. Saying you can't mirror geometry etc. tells me you're talking about world-space normal mapping, as in Doom 3's lower-res-from-higher-res approach.|
Tangent-space mapping, on the other hand, means you just create a normal map version of each texture and give it the same UV coords as the texture.
No need for different normal maps depending on the face, etc.
I mean in cryo now, it's got an automatic loader that scans my level chunks, gets the texture names, finds their normal maps (by adding "Norm" to the name), and then retextures the chunk automatically... all normal mapped. Meaning the actual models don't even have to take it into account... you just model them flat with diffuse, then make the normal maps.
Actually got my first room done in meta... nice window... archway door... UV mapped and normal mapped, in about an hour. And this is me we're talking about ;)
Yet without that..i would have spent hours trying to get vertex lighting/lightmaps etc all working.
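The "Norm"-suffix auto-loader described above is easy to sketch. This is a guess at how such a loader might work, not cryo's actual code; the function names are mine:

```python
import os

def find_normal_map(texture_path, suffix="Norm"):
    """Given a diffuse texture path like 'brick.png', look for a sibling
    normal map named 'brickNorm.png' (the "Norm" naming convention
    mentioned above). Returns the normal map path, or None if absent."""
    base, ext = os.path.splitext(texture_path)
    candidate = base + suffix + ext
    return candidate if os.path.exists(candidate) else None

def retexture_chunk(texture_names):
    """For each diffuse texture used by a level chunk, pair it with its
    normal map when one exists on disk. Models need no special setup."""
    return {name: find_normal_map(name) for name in texture_names}
```

The point of the convention is that the mesh itself never references the normal map; the loader derives it purely from the diffuse texture's name.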
| Displacement mapping, on the other hand, is the bomb. |
| And suddenly every newbie to modelling starts using Lightwave. |
| yeah um spaced man, but you're not normal mapping characters; you're talking about tiling tangent maps for the levels, which are pretty simple. It gets a lot more complicated with a character, as it's pretty common to mirror parts of textures to the other side of the torso, something you can't do with normal maps from what I can tell.|
your levels probably just tile the same map all over a wall in a planar fashion; it gets more complicated with a skinned character or object though.
| Not really. You can just use cubic normal mapping on characters... similar to cubic lighting, only instead of rendering spherical lights to the cubic map, you render a cube from the inside, UV mapped 360 degrees with the per-pixel tangents. |
So per character = one 8-poly render to generate tangents for the entire mesh. No messy per-vertex stuff, all totally done on hardware... no CPU hit. Again with tiling normal maps (i.e. 1-for-1 with the diffuse maps. Well, saying that, the normal maps can be bigger or smaller than the diffuse; I've found it makes little difference, other than blur factor.)
And be it that, or the level technique I mentioned prior, it places no restrictions on uv mapping other than those already found.(Whatever they may be)
And this without vertex/pixel shaders of course..with them it becomes even easier.
I really wouldn't be using if this wasn't the case..I'm not about to box myself into a corner.
| " Displacement mapping, on the other hand, is the bomb. "|
will this avoid the low-poly silhouette of normal-mapped models? You know, it all looks hi-poly till you look at the model's outer contour...
I'm not clear on that point....
will this avoid the low-poly silhouette of normal-mapped models?
yeah, the silhouette is affected by the displacement map:
| I know, I know that's so in hi-res rendering, I've even used it... what I'm asking is whether it's done that powerfully in realtime 3D, so silhouettes are not low poly... or whether it might be using a dirty trick... |
| One thing I don't get (not that I've really looked into it either) is how exactly you displace the verts, as even pixel shaders 2.0 don't allow texture look-ups in the vertex stage.|
Do you pre-feed the normals at each vert (based on the normal-map pixel it occupies) into a spare UV-coord buffer stream?
It definitely seems to be the ideal blend of LOD/bump-mapping... if only we could cull triangles effectively in hardware based on distance... without CPU LOD... that's the dream.
| Displacement mapping is still really not that supported in HW... IMO the only card that has it is Matrox's, and those are far behind in speed compared to the more popular cards people use for gaming. HW disp mapping links:|
| Displacement mapping is the next big thing. It actually looks good, unlike bumpmaps, which I have to squint and look around at the light sources, to even verify that I am seeing anything. |
| looks like is actually what normal maps should be...for what I have read, is same concept than in rendering packages!|
Matrox Parhelia. Let's see if new cards start adding it.
I'd rather not have the normal maps' limits and ugly artifacts, and all that complex, imposed, limited workflow... yep, in UV mapping, seams, etc...
at the end of the day anyway, it's all about modelling in low and high res...
| so is displacement mapping the same as what 3D artists have been using for years in pre-rendered work? using greyscale for bump maps, and having the geometry subdivided based on complexity?|
At the end of the day, normal maps are a bit clunky and require a lot of extra work compared to straight rendering, in the name of keeping things low poly. Someday polys will no longer be an issue; I guess displacement mapping is going to be the start of using math in hardware to up the polycounts procedurally, without putting as much of a strain on the GPU as exporting all those polys manually.
That's if it works the way I think it does, without having had time to read those articles properly.
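The "greyscale drives geometry" idea in the quote above can be sketched in a few lines: tessellate a surface, then push each vertex along its normal by the height sampled at its UV. Unlike a normal map, the moved verts change the silhouette. A minimal sketch (flat quad, +Z normal, hypothetical `height` sampler):

```python
def displace_grid(n, height, scale=1.0):
    """Tessellate a unit quad in the XY plane into an (n+1) x (n+1)
    vertex grid, then displace each vertex along the surface normal
    (here simply +Z) by the greyscale height sampled at its UV.
    'height' is a function (u, v) -> value in [0, 1]."""
    verts = []
    for j in range(n + 1):
        for i in range(n + 1):
            u, v = i / n, j / n
            verts.append((u, v, height(u, v) * scale))
    return verts
```

Subdividing more (larger `n`) recovers more of the height detail as real geometry, which is exactly why the contour stops looking low poly.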
| Normal maps etc are a stop-gap solution. I can do per-pixel lighting entirely in pixel shaders, without the need for any normal maps, special cases. Have done.. Trouble is you limit yourself to pix shader 2.0+(I.e top of the range nvidia only) cards. Ati doesn't support the extended constructs needed.|
That's the future...100% programmability, ultra definition. (Normal maps are not ultra definition. They're texture defined..ugly)
The PS3 GPU works on this principle too, only much more powerful than current PC-bound shaders, as it converts every poly into sub-pixel micro-polys. Which is frankly... the coolest thing ever.
| well... bumpmapping does add a lil to the reality of a scene... take for instance a worn metallic skin of a plane... a B-52 for instance, with the ripples and dents of age...|
i like it, if not overly used...
even where it is implemented without tangential lighting effects, an object with a normal mapped texture as one of its components looks more 'real' than the same object with no normal mapping applied.
these implementations that do not require shader capable cards are better (in most cases) than not having it at all...
but i do see the points you guys are making... and since you guys are actually coding the bits, i have to defer to your consensus on this topic...
... but hey, if the video cards can't do it... or if the majority of user's video cards can't support it...
| I don't really see why you would need hardware at all for this. Just calculate a tessellated version of each surface for up-close viewing, and hide it when it's more than a few feet away. This seems to really only be for improving really up-close rendering. |
| The problem is the bottleneck when sending a massive amount of geometry to the videocard. You can avoid the bus limits by sending a lower res mesh and having the card bump up the detail. |
| Generally though, modern engines store all geo in AGP memory anyway. vivid for example lets you flush a model into AGP (once, not every frame), and come render time you just point the GPU at it.|
You can basically create Blitz-like banks in AGP memory and store verts/normals/color info etc. So it's more a case of using up extra memory for the up-close shots.
This is where you need a primitive processor..as you could construct geo on the fly based of textures, but not to be until G7's/ps3...
| Huh. That's an interesting point. I guess then the limitation is mostly memory on the videocard. |
| you're right Josh... |
i've seen some neat bumpmapping demos running around here, none of which rely on any shaders...
and there's a fascinating bloom normal-mapped demo that is just outta sight...
all run on my ole GeForce2...
not taking anything away from the shader fans... shader effects are cool indeed... but can't some neat graphics also be done without 'em... so that everyone (almost everyone) can play, regardless of their hardware...
| Yes, you can have neat graphics without them. But you can have truly amazing graphics with them (pixel shader 2, not 1.1 shaders, which are in my eyes... crap). |
| yeah well pixel shader 1.1 has the most support and is the most useful for indie games, and does all the basic features that you need to make a AAA game. Shader 2 stuff is just a bonus that's worth supporting if you're that way inclined, since most people playing your game will lose out. Better to support 1.1 for all your FX and have enhancements for higher-end systems. |
| Backwards compatiblity is another issue. But as far as pix shaders 2.0 go(It's easy to test what shaders are supported on a gpu) you can really do a TON you can't with pix shaders 1.|
iirc with pix shaders 1.0 you can look up textures and do simple multiplies etc., basically texture combiners with script-like access. Definitely useful, but pix shaders 2.0+ can be much more complex. I did a per-pixel lighting routine entirely in a pixel shader, for example, without using normal maps/dot3.
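The DOT3 technique the thread keeps referring to is just a per-texel Lambert term: sample a normal from the normal map, dot it with the light direction, clamp at zero. A CPU-side sketch of what the combiner/shader computes (pure Python lists standing in for textures, so nothing GPU-specific is assumed):

```python
def dot3_light(normal_map, light_dir):
    """Classic DOT3 per-pixel diffuse: for each texel,
    intensity = max(0, N . L), where N comes from a tangent-space
    normal map. 'normal_map' is a 2D list of unit (nx, ny, nz)
    tuples; 'light_dir' is a unit vector in the same tangent space."""
    lx, ly, lz = light_dir
    return [[max(0.0, nx * lx + ny * ly + nz * lz)
             for (nx, ny, nz) in row]
            for row in normal_map]
```

This is why bump-mapped surfaces "light up in the cracks" as a light passes: texels whose normals face the light get intensity near 1, texels facing away clamp to 0.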
| yeah, but you're still stuck with 1.1 for the time being, unless you're catering to 1 in 6 gamers. Unless you're doing a commercial game you can't really afford to do that, not when most indies are lucky to get 1 in 100 downloads of their demo turn into a purchase for a good game, and many convert considerably less, 0.1 or 0.3%. |
| Not really, you just disable fx on machines that don't support it. |
;do this shader
;do that shader.
People with lesser cards miss out, but they'd hardly gain by the pix shader 2.0 effects being dropped, so it makes no diff.
| The only reason I'd ignore 2 for the moment and concentrate on 1 is that people with an OK computer will expect to get the same graphics as your screenshots, and where you don't have any real marketing pushing your product, you don't want low ratings from 70% of the people trying to run your game.|
Although on the developer side of things it also has other problems, like a lot of artists can't do graphics for a shader 2 game, myself and bob3d included. And if the artist doesn't get a lot of control of the visual effects, the game usually loses out in the end..
| Well, you not having pixel shaders2.0.. that's more of a personal reason than a good reason.|
Next time you have a choice between an Xbox game and a 30-dollar g5... pick up the GPU :)
Heh, anyway...I wouldn't care what lower-spec people thought, I mean..obviously I care to a point..but not enough to intentionally hold back what I can achieve.
| what kind of art path does vivid currently have for artists to get control of the shaders and FX stuff without coding? That's something I'm quite interested in. |
probably quite easy for an experienced programmer to do something with IGame, and you can create .FX shaders in Max and export them from the material editor now, if your engine and exporter support .fx
| Well, for one they can do shaders either in pure cg, in any text editor, cg editor out there, or you can use cryoPrism, the script engine vivid uses, and just write shaders as you do normal funcs|
method normalFunc()
end method

shader normalMapping(int technique)
    select technique
        @ Bump_Map
            surface.uvset.u=
    end select
end shader
etc, then assign them in cryo..basically, no need to use b3d at all, other than the game's core engine/whatever.
+An app to do this in real time, on test media.
The new octree rendering system is really going to make it all possible on a grand scale too... you can load in a single 2-million-poly mesh, for example, and effectively cull 90% of unseen polys before even rendering... meaning you can up the ante shader-wise... throw in per-node leaf-based LOD and you're talking astronomically high poly levels ;)
anyways, time to watch sopranos, get myself a gun, blue moon's in my eye.(err..)
| heh, well the high-polycount streaming LOD is definitely cool if it works, and allows flexibility in the texturing and lighting department. Personally I prefer to use polys for medium detail where I can. |
Hoping to work on a more interesting 3d action game again soon.
are the .FX files D3D only? Not sure. But I've been noticing that a lot of games have a standard set of effects saved in a separate directory that materials call upon when setting up brushes in their material editors.
| Heh nah, octrees aren't related to streaming, far cooler than that. You can load any mesh... say you have an office|
complex, 1 million tris in total, 15 surfaces... the octree engine first creates a bounding box around the level. Then it halves the bounding box along each axis = 8 smaller boxes. Then it divides each of those 8 boxes the same way. And so on.
The trick is, it's done like a tree: if a big box can't be seen, none of the smaller boxes inside it can be seen either, so huge areas get culled with a few checks, while
small segments are culled accurately in tight situations.
Still sounds bulky though, right? Well, instead of keeping every sub-box, you check how many tris are in it. Below, say, 50? End the branch there. So by the time it's done, your single mesh will be converted into shape-fitting bounding boxes, and the rest ignored. So if you had a huge scene with only two spheres in it, you'd end up with about 3-4 nodes.
LOD comes in by generating lower-poly versions of each accepted node and switching them based on distance..
Throw in vis to cull nodes behind other nodes... and when a door shuts (dynamic node), all the scenery behind it is culled too, in a couple of checks, no maths.
Also, collisions use the same speed-ups...
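The subdivide-until-few-tris scheme described above can be sketched directly. This is a generic octree build, not vivid's actual code (names and the centroid-binning rule are my assumptions):

```python
def centroid(tri):
    """Centre point of a triangle given as three (x, y, z) tuples."""
    xs, ys, zs = zip(*tri)
    return (sum(xs) / 3, sum(ys) / 3, sum(zs) / 3)

def build_octree(tris, box, leaf_max=50, depth=0, max_depth=8):
    """Recursively split 'box' (an AABB: (min_corner, max_corner)) into
    up to 8 children, ending a branch as soon as it holds <= leaf_max
    triangles, as in the scheme described above. Empty octants are
    dropped entirely, so a huge box with two small clusters collapses
    to just a few nodes. Triangles are binned by centroid."""
    if not tris:
        return None
    if len(tris) <= leaf_max or depth == max_depth:
        return {"box": box, "tris": tris, "children": []}
    (x0, y0, z0), (x1, y1, z1) = box
    lo, hi = (x0, y0, z0), (x1, y1, z1)
    mid = ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)
    buckets = {}
    for t in tris:
        c = centroid(t)
        key = tuple(c[i] >= mid[i] for i in range(3))  # which octant
        buckets.setdefault(key, []).append(t)
    kids = []
    for key, sub in buckets.items():
        cmin = tuple(mid[i] if key[i] else lo[i] for i in range(3))
        cmax = tuple(hi[i] if key[i] else mid[i] for i in range(3))
        kids.append(build_octree(sub, (cmin, cmax),
                                 leaf_max, depth + 1, max_depth))
    return {"box": box, "tris": tris, "children": kids}

def count_leaves(node):
    """Number of terminal boxes actually kept for the scene."""
    if not node["children"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])
```

Visibility then walks the tree top-down: if a node's box fails the visibility test, its whole subtree is skipped, which is the "huge areas culled with a few checks" property.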
Rambled on, but didn't want people expecting something different in vivid... I'd get the blame ;p
as for .fx, that's DX. But there is CgFX, which I'm guessing is the same (never needed it).
| good points E... |
seeing as the gfx effects are subsidiary to the actual game, and PS 1 will be enough to astound many a gamer, who, up to this point, has rarely gotten even environment mapped scenery objects in a lot of indie stuff...
plus, the numbers really warrant making sure that your super high end gfx effects do have some fallback code to allow for a wider audience...
interesting discussion... i'd have to agree with your thoughts on this one for the immediate future.
| Post one link of something a pixel1.1 shader is doing that will astound anyway. I dare you. You can't , because they can't.|
if a Pixel shader1.1 games look nice, I doubt very much it has anything to do with the shaders. They're mostly a convience for artists at the pix1.1 stage, and that's such a bad reason to up your game's min systems. If it doesn't make the game better, it doesn't belong in the game. (Tits? Good. Always good. Pictures of enay eating cheese? Bad. Very bad.)
| hey VivMan... do you usually ask people to do something, but tell em they can't before you give em a chance to do it...|
first of all, i really don't understand why you are getting so worked up about all this...
a few months ago, you were merrily running around your fp scenes, happy to have lightmapping and a lil environment mapping to boot...
now all of a sudden, shaders rule!!! we can't live without em...
... and if that ain't bad enough... pixel shaders 2.0 is a MUST have or else your game is gonna be crap.
... just stop and listen to yourself for a second.
ok... some PS 1.x stuff... that ordinary people are doin...
some in progress work a guy from the 3DGS boards is doin...
another from the 3dgs boards... me thinks this is also PS 1.x... not bad heh...
a few TSE shader examples... not bad either, ey...
now lets see some of this PS 2.x drop dead and cry stuff...
... or maybe i should wait for PS 3.0 so the 2.0 looking stuff will run at a decent framerate.
| Yeah, I did get a little worked up ;)|
Anyway, those demos above look good. Maybe it's Cg, but when I had a pix shader 1.1 card (GF4-4600), nothing would compile on it.
Yet just about anything I can think of runs on this pix shader 2 one... even simple shaders need to be VERY simple to run on a pix 1.1 card..
as for pix shader 2.0 stuff, look on shadertech... for one, they have a realtime raytracer done on the GPU. nuff said ;)
| For all the talk about how awesome new graphics are, they haven't really made much progress. It might be technologically impressive, but the end result is not all that aesthetically different.|
Sometimes it seems like 3D developers forget that they are trying to represent the real world, and only think in terms of what the last generation of technology did. For example, how many times have you heard the term "photorealistic" applied to something that, if it were presented to you as a photo, you would call that person a liar?
| Shader 1 stuff is really useful for mask-blending different brush effects; masking lets you easily blend different terrain types in the same brush with unique UVs, and with little effort for the artist if they had something like Pipeline's Blitz material editor.|
if you can mask this way you can do nice things like simple corroded or worn pipes, where you have a rust texture, a shared bump, a specular and reflection mask, and a cubic environment map, and make nice industrial stuff that looks cool; on a GeForce 3 or higher it will look really nice, and it takes a lot of the hassle out of producing the work for the artists and coders involved.
For me to do that easily in Blitz now I need to have double geometry and brushes and risk a lot of sorting issues. Either that or rely on the coder to code each multipass rendered surface. I see shaders as a nice way to give developers freedom, without long convoluted workarounds using regular techniques that are very time-consuming and impractical to implement.
easy layered masking may not sound cool to a coder, but for an artist it's a feature that enables them to produce AAA-quality visuals without a lot of effort. It may still require 2 or even 3 passes in hardware and in effect double the actual rendered geometry, but it's a damn sight easier to have that done automatically than to do it all in code, or model duplicate geometry and pray that every poly is in the exact same world space to avoid sorting.
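The layered masking being described boils down to one per-texel lerp: a greyscale mask decides how much of each texture shows. A CPU-side sketch of what the multitexture pass computes (single channel for brevity; the function name is mine):

```python
def mask_blend(tex_a, tex_b, mask):
    """Per-texel lerp of two textures by a greyscale mask in [0, 1]:
    out = a * (1 - m) + b * m. This is the rust-on-pipes / terrain-blend
    trick described above: one blended surface instead of duplicated
    geometry and sorting headaches. Textures are 2D lists of floats."""
    return [[a * (1.0 - m) + b * m
             for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(tex_a, tex_b, mask)]
```

Where the mask is 0 you see texture A, where it's 1 you see texture B, and intermediate values give smooth transitions, which is why the artist only has to paint one greyscale map to get a worn or corroded look.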
I'm sure a lot of people will never use the fanciest shaders done in PS2; it's overkill, and the work required to implement them will be seen by a minority, when you should be trying to support the best features the majority of people can enjoy.
Anyway, your engine obviously does shader 1.1 so it's not a problem, but most people using 2.0 will probably be the usual tech demo creators, not those trying to make commercial-quality games for a broad demographic.
Hopefully if Vivid does well, there will be some 3rd party exporters created with it, that allow non coders and non blitz owners developing blitz games to make good use of it.
anyway, look forward to seeing some good graphics demos for your product. I'm hoping pudding supports BMax as well as he has Blitz3D, in which case I'll stick to the BMax rendering engine, as the way Pipeline works allows a Max user like me to throw together animated 3D, or 2D-in-3D, UIs in a couple of hours and experiment with most of what Blitz has to offer, getting the look I want without any coding whatsoever. I also don't have to rely on the coder to understand the look I'm trying to get in my artwork, which can sometimes be a real pain if you're relying on someone to code a lot of effects for you, leaving you powerless to tweak them and experiment.
Kind of like the glows and blooms in FMC.
I should probably read my post but I have to take off; if it sounds overly critical, it wasn't meant to, sorry. Really hope it is a big success :)
| Doesn't really matter too much about Pipeline; vivid has full support for the .b3d format anyway, and no Max exporter is going to be as good as a custom-made FX editor.|
I dunno about shaders 1.1, man... if I recall, the masking demo I sent you didn't even work? And that's a one-line Cg script ;p
As for demos for vivid, it's a good 'un. Using FMC's physics lib :)
| yeah, I think I should shut up on this thread lol; if we want to talk we can do it in pvt heh. I can see this thing turning into a huge dialogue preserved for posterity that only we get to read.|
The .b3d format doesn't really support an awful lot though, hence Pipeline etc. being so useful. Anyway, I'm outta here :)
| nah... it's cool... different perspectives and all...|
as long as we don't turn it into a religious war, it's cool... :)
| Interesting thread.|
In the end, I think the best graphics result from artists working well within whatever constraints they're given, going for a specific "look" rather than trying for realism and failing.
One of the problems with higher end 3D graphics capabilities is that artists are told to try for realism, and get something that looks kind of cool today, and horribly dated tomorrow. Meanwhile, good art direction produces graphics that look good years after something that was bleeding edge becomes dated.
E.g. Darkstone -- an old DX5 game -- used stylized character designs, painterly texture maps, and vertex mapped lighting. It still looks great today (not bleeding edge, but great). Vagrant Story, an old PlayStation game, which used nearly monochromatic "pencilled" texture maps still looks amazing by today's standards. OTOH a lot of more "realistic" games of similar or lesser vintage (e.g. Soldier of Fortune or Deus Ex) look like crap today -- striving for realism and failing to get it just gives your game a shorter shelf life.
By the same token, I think World of Warcraft will look good five years from now (good art direction) while EverQuest never looked good and neither will EQ2.
| Don't know, fellas. Bump maps make a huge difference to my eyes. I can always see the light reflecting in every little crevice, and just because some game has nice textures doesn't really do it for me... nice textures still look flat and lifeless, like a page out of a magazine... whereas even an ugly bump-mapped texture has depth and looks more 'real' to me. |
Textures just don't really do it no matter how detailed they are; they're always just kinda dead compared to bumps. They are a better artistic achievement, sure... but when I'm playing Halo I notice the bumps a lot, because laser fire increases the detail in the texture when it goes by. A regular texture doesn't change; it just kinda lights up... you don't see more detail.
In real life when something lights up, in all the cracks you see more detail... you get a broader picture of what's really there, which is intriguing. With bump maps you get a similar effect, like a mystery being uncovered as the light passes over it.
| That's one of the biggest reasons for me also. When my character fires I *need* to see the area lit up properly, not like it's all painted onto cardboard(Which is what non-bumpy mapping looks like now)|
Not to say regular normal mapping is the best way, but with vert/pixel shaders there's no such thing as one way.
| yeah... i've gotta agree... just having bumpmapping alone does wonders for the immersion factor in most indoor games...|
even landscapes look more real with it...
| I prefer the more 'stylised' look myself. Not saying that normal mapping and bump mapping don't have merit, but they all look kind of the same these days.|
Yeah I like realism with character.
| agree here. |
I think the solution will come if cards really reach the point of outputting the same exact effect as the displacement mapping used in high-res rendering, as that would be just like seeing the high-res version, not an often-too-obvious hack.
BTW, keep enjoying the UT2004 artwork, without all these new things...