Hardware based AA post process for software render

About Truespace Archives

These pages are a copy of the official truespace forums prior to their removal somewhere around 2011.

They are retained here for archive purposes only.

Hardware based AA post process for software render // Feature suggestions


Post by Jack Edwards // Jan 9, 2007, 3:29pm

Jack Edwards
Total Posts: 4062
I think it would be cool to have access to the video card's hardware-based antialiasing and filtering as a post process for software renders. The hardware is likely much better and faster at performing the antialiasing and anisotropic filtering than software anyway.


This cool feature might be relatively easy to implement, since DX save-to-file is already implemented.


-Jack.

Post by TomG // Jan 10, 2007, 2:42am

TomG
Total Posts: 3397
Bear in mind that anti-aliasing is dependent on rendering. In effect, if you render a 200x200 image at 2x anti-aliasing, you actually calculate a 400x400 render and then downsize it to 200x200.


This double size image lets you get more detail, and lets you do some blending when you downsize.


So hardware anti-aliasing only works when the hardware is doing the rendering - you can't (as far as I know) use the hardware to anti-alias the 2D result from another renderer, since you need to be rendering an image twice as large as the final one in order to get 2x anti-aliasing.


This is what happens in a game of course, your real-time image is rendered twice as large as your screen, then downsampled. This is why anti-aliasing takes up extra processing time and can be quite significant - it's not just a smoothing or blurring pass done on top of the 2D image, it is in fact the calculation of a more detailed render - and that requires the render engine to do it, so you would still need your Lightworks or V-Ray engine to render an image twice as large.


As a note, you might think "Well, why don't I just render to 400x400 with no anti-aliasing and downsize myself in Photoshop, if that's the case?" - and indeed you can, and indeed it generally gives better results than rendering 200x200 with 2x AA, and can be a bit faster too. For 3x anti-aliasing, render with none to 600x600 for our 200x200 image, and so on.
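The render-larger-then-downsize workflow described above can be sketched in a few lines. This is a minimal illustration using NumPy (not anything from trueSpace itself); the 400x400 array stands in for the oversized render, and the variable names are made up for the example:

```python
import numpy as np

# Synthetic 400x400 grayscale "render" standing in for the
# double-sized image (an RGB image would just add a channel axis).
hi_res = np.random.rand(400, 400)

# Downsize to 200x200 by averaging each 2x2 block of pixels --
# the same blend-on-downsize step described above for 2x AA.
h, w = hi_res.shape
lo_res = hi_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(lo_res.shape)  # (200, 200)
```

Each output pixel is the plain average of a 2x2 block; Photoshop's bicubic resize would weight the samples differently, but the principle is the same.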


Anyway, with that in mind, I am not sure that hardware AA can be helpful except with a hardware render, due to the nature of AA.


For anisotropic filtering, I have no idea how that is done, but as far as I know it is a shader effect which is again applied at render time, so it again requires the render engine and can't be added to a 2D image after the render engine is finished. All I can think of there is that you'd need to render using your offline engine, then redo the scene with real-time anisotropic shaders, save that real-time render, and blend the two - but that would have to be done manually. Again, since the effect comes from the render engine at the point at which it calculates light falling onto a surface, I'm not sure you can apply it as a post-process to a 2D image created by another render engine.


HTH!

Tom

Post by Jack Edwards // Jan 10, 2007, 7:05pm

Jack Edwards
Total Posts: 4062
Thanks for the reply Tom!


I'm going to do a bit more research on the technical aspects of the technology, because even with the resizing method, the sampling algorithm would make a large difference in the quality of the effect.


Okay quick trip to Wikipedia:

http://en.wikipedia.org/wiki/Anti-aliasing

and

http://en.wikipedia.org/wiki/FSAA


Looks like anisotropic filtering uses oversampling as well.

http://en.wikipedia.org/wiki/Anisotropic_filtering

The thing that was throwing me was: how can they oversample without taking up more video memory, and thus reducing the max video resolution, with each level of AA/AF? Looks like the solution is to render the oversampled pixels (sub-pixels), average them, then write only the averaged pixel value to the video buffer.
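That resolve step - average the sub-pixel samples and keep only one value per final pixel - can be written out as a small NumPy sketch. The function name and the k-factor interface are made up for illustration; real hardware does this per pixel in the pipeline, not over a whole buffer at once:

```python
import numpy as np

def resolve_samples(samples: np.ndarray, k: int) -> np.ndarray:
    """Collapse an (H*k, W*k, 3) buffer of sub-pixel samples into an
    (H, W, 3) image by averaging each k x k block. Only this averaged
    value needs to be written to the displayed frame buffer."""
    hk, wk, c = samples.shape
    return samples.reshape(hk // k, k, wk // k, k, c).mean(axis=(1, 3))

# 2x oversampling of a 4x4 final image: 8x8 grid of sub-samples.
samples = np.zeros((8, 8, 3))
samples[0, 0] = 1.0           # one bright sub-sample in the top block
final = resolve_samples(samples, 2)
print(final.shape)            # (4, 4, 3)
print(final[0, 0])            # [0.25 0.25 0.25] -- 1 of 4 samples lit
```

The averaged output is what gives edges their intermediate "smoothed" values.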


As far as implementing it goes, I would simply throw the image up as a texture on a plane in front of an orthographic camera and re-render it using the hardware (D3D), then save the result to file. It should only take a fraction of a second to generate the filtered image, and it doesn't even need to be displayed, since an offscreen DirectDraw buffer can be used to do the render.


Problem is, based on what you're saying and what I've read, there shouldn't really be any improvement in the image. I was curious, so I rendered out an image, loaded it as a texture, then did a DX screen capture (with 4x AA), and there is some perceived AA when comparing the results. I wonder if this comes more from texture compression or filtering before the AA, though.


You are right that this would realistically only be useful for performing an FSAA operation on a double-scale rendered image and downsampling it. Still, for rendering out an animation, if the hardware can give faster resampling with a more sophisticated algorithm developed by nVidia or ATI, then that might be useful. On the other hand, I did notice that the development version of GIMP has Lanczos resampling as an option, so there's always the option of rendering sequential double-size images and batch resampling the image sequence...
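The batch-resample idea is easy to script. Here is a minimal sketch using Pillow (a stand-in for GIMP's Lanczos option - this is not trueSpace functionality); the function name, filename pattern, and suffix are all hypothetical:

```python
import glob
from PIL import Image

def batch_downsample(pattern: str, size: tuple, out_suffix: str = "_aa"):
    """Lanczos-downsample every frame matching `pattern` to `size`,
    saving each result next to the original with `out_suffix` added."""
    for path in sorted(glob.glob(pattern)):
        frame = Image.open(path)
        small = frame.resize(size, Image.LANCZOS)
        root, ext = path.rsplit(".", 1)
        small.save(f"{root}{out_suffix}.{ext}")

# e.g. render frames at 1280x960, then:
# batch_downsample("frames/frame_*.png", (640, 480))
```

Lanczos weights samples with a windowed sinc, so it keeps edges noticeably crisper than the plain box average a naive downsize would give.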


-Jack.

Post by prodigy // Jan 15, 2007, 7:19am

prodigy
Total Posts: 3029
Another good one would be applying the Glow post process in hardware, at render time, over a final 2D render.. that, I think, is totally possible and useful..

:D

Another point: save it as a render layer... to use in Corel or Photoshop..
Awportals.com is a privately held community resource website dedicated to Active Worlds.
Copyright (c) Mark Randall 2006 - 2024. All Rights Reserved.