Path tracer advances

As announced on the mailing list before I left for the Informatica 2009 event in Havana, I have rewritten the path tracer code around a modular, black-box BxDF design (BRDFs, BTDFs, full BSDFs… can all be handled easily once implemented).
Since my return I have resumed its development and improved convergence: images now exhibit less noise with fewer samples per pixel than before.
For now I have implemented only one BRDF model, the modified Blinn-Phong model, and have run some tests with it. Nothing spectacular yet, but the longest journey begins with a single step.
I’m glad that the image quality is on par with the first versions of many path tracers floating around the web, so I’m not so lost 🙂
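
A black-box BxDF interface of that kind might look like the following sketch (Python, purely illustrative: the class and method names, and the normalization of the modified Blinn-Phong lobe, are my assumptions, not the actual tracer code):

```python
import math
from abc import ABC, abstractmethod

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

class BxDF(ABC):
    """Black box: the integrator only ever talks to this interface,
    so BRDFs, BTDFs and full BSDFs can be swapped in freely."""

    @abstractmethod
    def eval(self, n, wi, wo):
        """BxDF value f(wi, wo) at a surface point with unit normal n."""

    @abstractmethod
    def sample(self, u1, u2):
        """Map two uniform random numbers to an incoming direction
        in the local frame where n = (0, 0, 1)."""

class ModifiedBlinnPhong(BxDF):
    """Energy-normalized modified Blinn-Phong: a Lambertian term plus
    a (e + 2) / (2 pi) * cos(delta)^e specular lobe on the half-vector."""

    def __init__(self, kd, ks, e):
        self.kd, self.ks, self.e = kd, ks, e

    def eval(self, n, wi, wo):
        h = normalize(tuple(a + b for a, b in zip(wi, wo)))  # half-vector
        cos_h = max(0.0, dot(n, h))
        return self.kd / math.pi + \
               self.ks * (self.e + 2.0) / (2.0 * math.pi) * cos_h ** self.e

    def sample(self, u1, u2):
        # Uniform hemisphere sampling (pdf = 1 / (2 pi)); good enough
        # for a raw tracer, later replaced by importance sampling.
        z = u1
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * u2
        return (r * math.cos(phi), r * math.sin(phi), z)
```

The point of the abstract base is that the integrator never needs to know which scattering model it is bouncing off; adding a BTDF or a layered BSDF later means implementing the same two methods.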

Later I plan to implement the perfect mirror model and the refractor. One thing I have realized from the implementation is that a single model for all possible effects is neither possible nor even desirable: every rendering algorithm has pathological scenes where it fails, so other models must be available to work around them. The perfect mirror is the typical case: since its BRDF is a Dirac delta function, the probability that a light ray is reflected exactly along that direction tends to zero, and the path tracer stalls. If the ray tracer shoots rays randomly around a point (a spherical distribution), only the rays shot in exactly the reflection direction (a single point on the whole sphere) contribute, which is a vanishingly small probability.
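
Concretely, delta distributions like the perfect mirror are usually handled by flagging the BxDF and letting it hand back its single valid direction deterministically, rather than hoping a random sample hits it. A sketch (illustrative, hypothetical names):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class PerfectMirror:
    # The BRDF is a Dirac delta: zero everywhere except the mirror
    # direction, so evaluating it at randomly sampled directions always
    # returns 0 and the Monte Carlo estimate never converges.
    is_delta = True

    def sample(self, n, wo):
        """Return the one direction with nonzero contribution:
        wi = 2 (n . wo) n - wo."""
        k = 2.0 * dot(n, wo)
        wi = tuple(k * ni - oi for ni, oi in zip(n, wo))
        return wi, 1.0  # direction and weight; the delta and the pdf cancel

# In the integrator, delta lobes skip the random-sampling machinery:
#   if bxdf.is_delta:
#       wi, weight = bxdf.sample(n, wo)   # deterministic bounce
#   else:
#       wi = bxdf.sample(u1, u2)          # random bounce, divide by pdf
```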

Of course, there are still a lot of optimizations to be done. Currently there is no importance sampling, no stratified sampling, and many other things that would make the path tracer’s life easier; this is the raw path tracer in action!
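
To make the importance-sampling point concrete: for a diffuse surface under constant incoming radiance, uniform hemisphere sampling gives a noisy estimate of the irradiance, while cosine-weighted sampling (pdf = cos θ / π) cancels the cosine term in the integrand and removes the variance entirely. A minimal sketch (illustrative, not from the actual tracer):

```python
import math
import random

def uniform_hemisphere(u1, u2):
    """Uniform direction on the hemisphere around +z; pdf = 1 / (2 pi)."""
    z = u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def cosine_hemisphere(u1, u2):
    """Cosine-weighted direction around +z; pdf = cos(theta) / pi."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

def estimate_irradiance(sampler, pdf, n_samples, seed=1):
    """Monte Carlo estimate of the irradiance integral of cos(theta)
    over the hemisphere under constant radiance L = 1; exact answer: pi."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        wi = sampler(rng.random(), rng.random())
        total += wi[2] / pdf(wi)  # cos(theta) is just wi.z in this frame
    return total / n_samples
```

With `pdf = lambda wi: 1.0 / (2.0 * math.pi)` the uniform estimate hovers noisily around π, while with `pdf = lambda wi: wi[2] / math.pi` every cosine-weighted sample contributes exactly π: the same mechanism that trades noise for sample count in a real tracer.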


Fig. 1 – 200 samples per pixel (spp) – rendering time: 3 min

Fig. 2 – torus

Fig. 3 – 1000 spp with reflections – 30 min

Fig. 4 – 1000 spp with reflections – 43 min


Fig. 5 – 2000 spp – 33 min

Fig. 6 – ring – 1000 spp – 31 min


Fig. 7 – 3200 spp – 34 min

Fig. 8 – sky – 1000 spp – 10 min


13 thoughts on “Path tracer advances”

  1. Dennis F. says:

    Interesting … thank you for all the render time you waited through to make these pictures for us 😀

    For animations it’s not really useful, though.

    Thanks a lot for your work!!!

    Cya,
    Dennis

    P.S.
    It would be great to see some progress bars on this homepage for your different projects 😉


  2. JayV says:

    As always, great job Farsthary.

    @Dennis.
    As mentioned by Farsthary, it doesn’t have optimizations yet. And in any case, an unbiased renderer is rarely intended for use in animation. Call it Indigo, Luxrender, etc.

    Farsthary, keep it up man, your fantastic work is always appreciated.


  3. joel says:

    great work! I didn’t think someone could implement this so quickly… even you!
    I did some quick research on BRDF models and it really seems like a great solution, also because most models are suitable for real-time rendering (at low quality, of course), so that would be great for previewing.

    while looking around for info i found this:
    http://graphics.cs.ucf.edu/brdfshop/index.php

    it’s a BRDF plugin for Maya, so no real use to you, but it shows how nicely it can be integrated into a 3D application.
    So even though BRDF modeling is a science, their method makes it usable for a kid while keeping it physically plausible.

    Hope it’s of any use in terms of designing/planning for a possible blender integration in the future.


  4. n-pigeon says:

    Great, I’m always waiting for news from you 😀

    Dennis F., unbiased rendering algorithms aren’t good for animation. For animation, biased renderers are better: no noise and shorter render times.


  5. Arnie says:

    Sorry, I don’t understand what a path tracer should be good for.

    It’s unusable for serious work (ultra-slow and no animation).

    30-45 minutes for those little grainy pictures?

    If you want to render a picture at print resolution, you can go on holiday in the meantime.

    Farsthary, PLEASE continue your awesome work on photon-mapped global illumination for Blender. PLEASE!!!

    So many people are waiting for a fast GI solution in Blender.

    Yours sincerely

    Arnie


  6. Joel says:

    Arnie, the path tracer is useful because it’s physically correct, something other types of renderers aren’t.
    Having a correct one in Blender is great for comparing different render methods and optimizations.
    Also! When OpenCL arrives for Blender it’ll be possible to use GPU computing… that way unbiased rendering will see a BIG speed increase.

    Read the Blender mailing list (January and February) for a discussion about all this.


  7. Arnie says:

    Hi Joel,
    try Luxrender and Indigo.
    There is no need to compare different render methods.
    All are ultra-slow and you cannot make animations.

    And OpenCL is still far off in the distance.

    These days every renderer has a (fast) GI solution… except Blender.


  8. Ruddy says:

    Hi Arnie,

    As someone interested in animation, I totally understand your concerns.
    However, it is like Joel said, and moreover this allows Raul to master the fundamental algorithms. Finally, he does what he wants (we are not hiring him…).

    But he is also really interested in fast GI approximations like the facet-based, artifact-free method used by Brecht for approximated ambient occlusion (it only lacks color bleeding).
    There is still the problem of indoor scenes or meshes with extreme faces, which give artifacts with the latter method, or make the rendering-time benefit negligible once the settings are raised to avoid them…

    He’s also interested in temporal algorithms, like those used for GI in video games: the idea is to interactively bake textures with a given GI algorithm, the refresh being controlled by an algorithm that compares illumination discrepancies between frames (up to a 6x speed-up).
    http://www.graphics.cornell.edu/~jaroslav/papers/2007-caching_course/index.htm


  9. Ruddy says:

    …And as for photon mapping, I think this is currently in Matt’s hands for a stable basis, so Raul is waiting for him to avoid redundancy (it’s in a pre-alpha state; if Raul hadn’t spoken about it, Matt wouldn’t have told us so soon about his experiment).

