Perfect specular reflections

Hi again! 🙂

A path tracer is very simple in theory (and indeed a specialized path tracer producing good-looking pictures can be written in a few lines of code), but writing a powerful, general one is not an easy task. Currently I'm doing heavy research on sampling, which is the core of the path tracer, since smart sampling makes life easier and makes pictures look right, while raw sampling only makes things worse.


Fig. 1 – 160 samples per pixel (spp) – render time: 4 min 58 s


Fig. 2 – 550 spp –  17 min


Fig. 3 – tone mapping


Fig. 4 – 160 spp – 3 min 43 s


Fig. 5 – 550 spp – 11 min 20 s

I cannot give any dates yet, but as soon as the sampling core is stable enough I will implement a few more features and make a public release, before the code starts getting heavy.

One of the good things about a Monte Carlo render method is its low coupling with Blender's structure, allowing very easy plug-and-play!



A delayed post

Hi all! 🙂

After returning from the Havana event Informatica 2009 (February 9th–13th), I was so busy that I didn't have time to speak about it. I presented a report there, "Volumetricos en Blender" (Volumetrics in Blender), which summarized what volumetrics are and their development in Blender since the beginning of the project. However, because previous speakers took more time than allowed, I had to speak in a hurry 😦. Aside from that, I met excellent people, many of them related to Blender:

* The cool guys from UCI working on the FreeVIUX project (on the right side of the picture below). They have progressed a lot since our last meeting; they even have a 100-PC renderfarm 🙂


Abel and David with the Big Buck Bunny DVD offered by Grafixsuz (thanks again!)

* Junior Frometa, a good fellow who is fighting to make Blender the standard tool in a faculty that uses 3ds Max by default. Best wishes to them.

* A very smart guy (who, like me, is working alone) in game development. He also has a blog.


I'm the guy in front of the PC; I've changed my look a little since my last picture 🙂

Cheers to all, Farsthary


About realistic materials…

Hi all 🙂

I've been doing some research and tests on the path tracer, and as a result fewer samples per pixel (spp) are now required to reach a desired quality level compared with my previous implementation. Of course, for pure path tracers this is very scene-dependent, but in general about 5 times fewer samples are required: a scene that needed 1000 spp before now performs at a similar level with 200 spp.

The algorithm handles diffuse surfaces very well; now it is time to tackle reflective and refractive surfaces, and that's where some questions arise:

Many small path tracer projects around the web are designed to work with one very specific BRDF model, or support only a few BRDF implementations. But the design of the Russian Roulette (needed if we want to account for those effects, because pure statistics diminish low-probability states like specular reflections) should be tailored to the parameters of a light-scattering model. So, in order to design a general Russian Roulette, I need to implement a physically based material with all the parameters needed for BxDF functions.
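To make the Russian Roulette idea concrete, here is a hedged sketch (hypothetical helper, not the branch's code) of the standard unbiased formulation: a path survives with a probability tied to some material parameter, and survivors are reweighted by 1/p so the expected value of the estimator is unchanged.

```c
/* Russian Roulette path termination sketch. `reflectivity` stands for
   whatever scatter-model parameter the roulette is tailored to;
   `rand01` is a uniform random number in [0, 1). Returns the new path
   weight and sets *alive accordingly. */
double russian_roulette(double weight, double reflectivity,
                        double rand01, int *alive)
{
    double p = reflectivity;      /* survival probability */
    if (p < 0.05)
        p = 0.05;                 /* clamp: give rare-but-bright paths a chance */
    if (p > 1.0)
        p = 1.0;
    if (rand01 < p) {
        *alive = 1;
        return weight / p;        /* reweight: keeps the estimator unbiased */
    }
    *alive = 0;                   /* terminate the path */
    return 0.0;
}
```

The 1/p reweighting is exactly why the choice of p must match the BxDF model: a bad p does not bias the result, but it can explode the variance.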

I also need good formulas for implementing tone mappers, because path tracer images fall outside the [0, 1] range.
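As a starting point, the simple global Reinhard operator L/(1+L) is a common choice for this; a sketch (my own, applied per channel for simplicity, which is an assumption rather than how a production tone mapper would do it):

```c
/* Simple global Reinhard tone mapping: maps an unbounded HDR value
   into [0, 1). Applied per channel here for simplicity; real tone
   mappers usually work on luminance and preserve chromaticity. */
float tonemap_reinhard(float hdr)
{
    return hdr / (1.0f + hdr);
}
```

It compresses smoothly: values near 0 pass almost unchanged, 1.0 maps to 0.5, and very bright values approach but never reach 1.0.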

Tonemapped image with exrtools

Direct renderer image – 480 spp – 5 min 49 s

the show must go on 🙂


Rectangular Datasets and Still Frame features

New proposal to commit for Sim_physics

Hi all! 🙂

Currently the handling of voxel data is implemented without any kind of header, for the sake of simplicity and because it was built as a test framework for future development in that direction (datasets are one of the most powerful features of a volumetric engine). But voxel data has started to be used in some serious scientific projects to import datasets into Blender, allowing it to fit more easily into scientific pipelines than before. That's why at least a basic header should be implemented, which will allow:

– Automatic resolution detection: the user no longer needs to manually set up the resolution of the dataset, a very error-prone procedure.

– Rectangular datasets: previously only regular datasets (NxNxN sizes) were supported; now any combination of NxMxK sizes is possible.

– Still frame: another important feature is the possibility to render only a single frame of the dataset during a whole animation; that's why I added the “Still” feature.

In the future more improvements could and should be done in that area as Blender starts to be used in scientific pipelines thanks to its huge flexibility. The changes to the current implementation are minimal but greatly improve the usability of the voxel data feature.

The basic header implemented is:

typedef struct Header_vb { // Header could be extended here easily
    int resolX, resolY, resolZ; // dataset resolution on each axis
    int frames;                 // number of frames stored
} Header_vb;


The changes to the already implemented tools that make use of voxel data are also minimal: they only need to store the header at the beginning and take its size into account (as an offset) to properly index the dataset, and nothing more!

I have also reimplemented the basic simulator to store the header (it will not be compatible with previous builds). So, now the basic structure of a Blender-compatible dataset is:


The data format is a linear array of floats, where each element is addressed as before by the usual linear indexing formula.
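Since the formula itself was shown as an image, here is a sketch of the conventional linear addressing I would expect for such a dataset (the exact axis order in the branch may differ, so treat this as an assumption): voxel (x, y, z) of frame f, with the header skipped when reading the raw file.

```c
typedef struct Header_vb {        /* the basic header from above */
    int resolX, resolY, resolZ;
    int frames;
} Header_vb;

/* Conventional linear index of voxel (x, y, z) in frame f. When reading
   the raw file, sizeof(Header_vb) bytes must additionally be skipped
   before the float data begins. */
long voxel_index(const Header_vb *h, int f, int x, int y, int z)
{
    long per_frame = (long)h->resolX * h->resolY * h->resolZ;
    return f * per_frame
         + (long)z * h->resolX * h->resolY
         + (long)y * h->resolX
         + x;
}
```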


The UI looks like this:


Fig. 1 – Automatic resolution  detection


Fig. 2 – Still frame selected

For those too impatient to wait for inclusion into the Sim_physics branch, here is the "semi-manual" patch (made against Sim_physics rev18682).


Progress bars

Hi all 🙂

In order that viewers do not get lost, and following Dennis' advice (thanks for the idea! 🙂), qualitative progress bars have been added.

Aside from a lack of documentation, the volumetric renderer seems to be mature enough for inclusion in an official Blender release (stability, rational UI, flexibility... caution: these are not Ton's words, only strong assumptions based on objective facts!). Please note that these bars only deal with Blender, so even though there's already a Navier–Stokes simulator for realistic smoke/fire, it's nonetheless in the TODO state (Daniel?) because it's not integrated inside Blender.

For volumetric optimizations, adaptive step size still has to be implemented (in progress: the first builds had this feature, so the code is available, but the new, more robust renderer is implemented in a way that makes its addition less straightforward). For external shadows, the proposal has been postponed in order to design (in agreement with Broken/Ton) a more general system which can deal with buffered shadows instead of raytraced ones only. Photon mapping is a joint project with Matt Ebb, so work will resume as soon as he has a stable core to share, thus avoiding rewrites. In the meantime, writing an unbiased renderer allows a better understanding of the fundamental algorithms (see the last posts for more reasons) and will also provide a calibration tool inside Blender.

As previously said, do not fear dilution of the work: we have to wait for the volumetric inclusion in a next Blender version before doing more radical changes driven by optimization/better integration (see the last posts). In the meantime, keep learning the existing features (cf. the downloadable manual), whose interface won't change for an official release, and do VFX experiments using the particle system.

P.S.: remember, volumetrics use the raytracer, which is CPU-intensive, so if you use them in a scene, DO use layer rendering: render only the smoke/clouds with ray tracing activated, then the rest of the scene without it, then composite all the layers through Blender's powerful node system. For shadows from the volumetrics, you can fake them with dummy objects whose shape/alpha casts shadows similar to the volumetric ones...

Hope this will help and have fun! 🙂


Path tracer advances

As announced on the mailing list before leaving for the Havana event Informatica 2009, I have rewritten the path tracer code according to a modular black-box BxDF design (BRDF, BTDF, BSDF... could be easily handled once implemented).
Since my return I have resumed its development and improved convergence: with fewer samples per pixel, images now exhibit less noise than before.
For now I have implemented only one BRDF model, the modified Blinn–Phong model, and have made some tests with it. Not a big deal yet, but the longest journey begins with a single step.
I'm glad that the image quality is at the same level as the first versions of many path tracers floating around the web, so I'm not so lost 🙂

Later I plan to implement the perfect mirror model and the refractor. One thing I have realized from the implementation is that a single model to achieve all the possible effects is neither possible nor even desirable, because every rendering algorithm has pathological scenes where it will fail, so there should be other models to work around this. That's the typical case of the perfect mirror: since its BRDF is a Dirac function, the probability of a light ray being reflected exactly in that direction tends to zero, and path tracers hang. (If the raytracer shoots rays randomly around a point, i.e. with a spherical distribution, only those shot in the reflection direction, a single point on that sphere, will converge, which is a vanishingly low probability.)
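Because the mirror BRDF is a Dirac delta, the sampler cannot find the reflection direction by chance; it has to generate it deterministically. A minimal sketch (my illustration, assuming a unit surface normal):

```c
/* Perfect specular reflection: r = d - 2 (d . n) n,
   where d is the incoming direction and n the unit surface normal.
   This is the single direction the Dirac-delta mirror BRDF allows,
   so a path tracer must branch into it explicitly instead of
   sampling directions randomly. */
void reflect_dir(const double d[3], const double n[3], double r[3])
{
    double dn = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
    int i;
    for (i = 0; i < 3; i++)
        r[i] = d[i] - 2.0 * dn * n[i];
}
```

For example, a ray going straight down onto a floor (d = (0, 0, −1), n = (0, 0, 1)) bounces straight back up.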

Of course, there are a lot of optimizations to be done; currently there is no importance sampling, no stratified sampling, and none of the many other things that could make the life of a path tracer better. This is the raw path tracer in action!


Fig. 1 – 200 samples per pixel (spp) – rendering time: 3 min


Fig. 2


Fig. 3 – 1000 spp with reflections – 30 min


Fig. 4 – 1000 spp with reflections – 43 min


Fig. 5 – 2000 spp – 33 min


Fig. 6 – 1000 spp – 31 min


Fig. 7 – 3200 spp – 34 min


Fig. 8 – 1000 spp – 10 min


Support issues

Hi, this is a friend of Farsthary,

Due to some troubles, you can no longer donate through PayPal. In the meantime, if you want to show your gratitude, please support the Blender Foundation and improve your knowledge by buying quality learning material in the Blender shop.

Moreover, the foundation may support Farsthary in the future, so please send an additional email to the e-shop letting them know you support them thanks to Farsthary. For people interested in animation, I also recommend this really helpful book.



Vanishing fears

Hi all 🙂

Since there has been some concern among community users about me spreading myself across several projects, I want to make a clarification to fade those fears away 🙂. I'm not forgetting the volumetrics project in any way; in fact, I have a TODO list of commit proposals for it coming soon. It's just that volumetrics are now pretty fully featured (even I sometimes find it hard to keep track of its possibilities, so I can't imagine how a final user who hasn't followed its development feels when finding it 🙂), and we all know that in order to make a successful path toward official inclusion we have to make things as untraumatic as we can. Once volumetrics are in official trunk, it should be a lot less risky to keep adding shockwave inclusions 🙂

For example, full shadow support is a more complex problem than just adding raytraced shadows (it requires that the other shadow algorithms be supported as well; that's why Matt hasn't included it yet, pending a better design). For me, the next important major feature will be support for the full shader tree, as my previous build did, but in a better-designed way; it's just that this could also be a little traumatic, so it will do no harm to push it back until after the official inclusion. It will allow, for example, writing new volumetric textures on the fly with PyNodes 🙂

So, rather than making volumetrics heavier right now, it's better to take a break (I will soon commit some voxel-data-related stuff that is needed now in a real use case 🙂) and reuse this time on other related cool projects (rather than watching series and films or playing games 😉), like GI. Currently Matt is doing pretty well with the photon map and has lots of interesting ideas to implement (I will still make some experiments and propose them to him 🙂), but the path tracer is an interesting option, since among other things it will serve as a foundation for further developments in this field (and could ease the integration of other unbiased engines currently floating around cyberspace 🙂). It could also serve as a calibrator for the much-needed new physically based material types and for new render algorithms, provided they can be successfully implemented. And finally it could spot current problems, since bugs appear at code/run time and not at speculation time 😉

So don't worry: as long as life allows me, you will have Farsthary for a long time 🙂

PS: WOW! I'm flattered that someone thought I'm a group of 20 people hiding under Farsthary's name 🙂 hehe


Blender Internal + unbiased rendering

This weekend, while I was waiting for Matt's photon mapper refinements in order to avoid code duplication, I also started to code another extension to the Render Internal: bidirectional Monte Carlo path tracing, which will allow the most demanding users to fully solve the rendering equation with a controllable error bound 🙂. That's pretty similar to what Indigo and Kerkythea do.


Fig. 1 –  Path traced image generated in blender internal

The world of CG is very complex and broad, and that's good, because it aims to model both reality and surreality. For that reason there's no single render setup that works for every case you want to make; that's why having a fully featured renderer at hand is very important.

Many times I've run into these debate topics (though some hate them, they are by no means useless; people always need to compare, and things always change): speed vs accuracy vs quality vs realism vs NPR vs biased vs unbiased vs tools vs artists, and so on. All of them are terms in the life equation and together form a balance. In Spanish there's a saying known as the BBB phrase:

“Bueno, Bonito y Barato no existe”
“Good, Beautiful and Cheap don't exist together”

(like “Cheap, Fast and Good: pick two”)

While in real life there are lots of counterexamples (Linux is BBB, and of course our beloved Blender is BBB and has made a BBB movie 🙂), the deep meaning of the phrase is that for correlated variables, as you gain in one you lose in its opposite. To gain speed you need to sacrifice quality and accuracy. Realism is not always a goal, but it has driven the CG industry and is the engine of its development. NPR is free expression in CG; it allows artists to express themselves the way they want, or to represent only the essence of a thing, as cartoons do (Freestyle is a big plus for Blender).

Many artists undervalue the so-called intelligent renderers, i.e. photorealistic ones (V-Ray, Mental Ray, Final Render, Brazil, Indigo, Kerkythea and so on), saying that those renderers let artists skip learning the foundations of illumination and composition, since even the dumbest setups look cool in them. That's not necessarily true: intelligent renderers free the artist from low-level tasks so they can focus on the real artistic side. Every time people are freed from low-level tasks, higher arts and sciences can be done. Of course, the foundations will always be valid, and smart tools in the wrong hands will do less than dumb tools in expert hands. But there's a subtle detail that's often forgotten: tools do matter in artistic expression. One idea will be expressed differently with different tools and materials, and will have a different impact on the viewer accordingly (my father is an oil painter, my sister is a skilled drawer). It's the artist who creates the masterpiece, and with the same widespread general tools the final result will depend on the artist only; but it's the tool that, time-wise, limits and shapes the artist's expressive capability (painting the Mona Lisa in Paint instead of GIMP or Photoshop is indeed possible for a real artist, but how long would it take, and how constraining would it be? No layers for precise control, no gradient system: very rigid).

Biased vs unbiased renderers: here I will make a little stop, because there's a lot of confusion about those terms. Simply put, unbiased means that on average the results of the render converge to the correct solution (averaging any number of 1-spp unbiased renderings will converge to the correct image). That's why unbiased renderers are always taken as references for quality control: their results tend to the correct solution if the inputs are correct, or at least the same as those of the other rendering methods being compared. Unbiased renderers try to fully solve the rendering equation, a very intuitive equation that is nevertheless a monstrosity involving infinities whichever way you look at it, because nature is unlimited 🙂

And the best way to solve such an equation with a desired accuracy is through Monte Carlo methods, so typically the error in those renderers shows up as noise. On the other hand, biased renderers make a series of simplifications to the rendering equation to gain speed: deterministic raytracing (Blender already has one) doesn't account for diffuse interreflection effects (GI), caustics and so on. Deterministic raycasting (volumetrics), in development for Blender, adds another term of the rendering equation to the simplified raytracing equation. Photon mapping (also in development for Blender 🙂) further extends the capabilities of deterministic raytracers to include GI and caustics, but in a biased way: averaging any number of low-resolution photon-map renderings of caustics will not converge to a sharper, correct caustic.
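A toy illustration (mine, not render code) of what "unbiased" buys you: estimating the integral of f(x) = x² over [0, 1] (exact value 1/3) by averaging independent one-sample estimates. Each single sample is pure noise, but the average converges to the true value, just as averaging 1-spp renderings converges to the correct image.

```c
#include <stdlib.h>

/* Averages `n` one-sample Monte Carlo estimates of the integral of
   x^2 over [0, 1]. Every single estimate is noisy, yet the mean
   converges to 1/3 as n grows: the estimator is unbiased. */
double mc_integral_x2(unsigned long n, unsigned int seed)
{
    double sum = 0.0;
    unsigned long i;
    srand(seed);
    for (i = 0; i < n; i++) {
        double x = (double)rand() / (double)RAND_MAX;
        sum += x * x;           /* one unbiased 1-sample estimate */
    }
    return sum / (double)n;
}
```

A biased method, by contrast, would converge quickly to something systematically off from 1/3 no matter how many samples you average.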

The advantage of intelligently designed biased renderers is that they push speed without losing too much accuracy or quality. But brute-force unbiased renderers will always be the yardstick to measure them by, just as raycasting is the yardstick for all the other volumetric rendering methods (shear-warp, splatting, 3D texturing and so on), and they will always capture, at no extra cost, all the real-life behavior of light.

So this weekend, for learning purposes, in order to stress the flexibility of Blender's render design, and to set up a comparison baseline for the photon mapper (since I have found some artifacts derived from its biased nature):


Fig. 2 –  deterministic artifacts of caustics in photon mapping

I started the implementation of bidirectional Monte Carlo path tracing inside Blender, and I was happily surprised by how "easily" Blender can be extended with it. Of course, Blender currently doesn't have any physically correct material type (perhaps it soon will), so putting in the wrong input leads to "a correct answer to a wrong problem".
But still, as a proof of concept and a future possibility, it will be useful to have a path tracer integrated in Blender. That kind of renderer has its own user base, which Blender currently lacks, and the path tracer also completes the missing terms of the rendering equation, so there will be few features that Blender's Render Internal lacks compared to ANY renderer out there. One of the good things about being a CG programmer is that once you know a little about the underpinnings of how CG works, you realize that many of the marketing features that sell a product come at very little cost (I don't mean they were trivial to program; sometimes the simplest algorithm involves years of research and development!). Here is a small list of the features that come for "free" in Monte Carlo path tracing algorithms, simply by performing random sampling and recursion:

• Sampling a pixel over (x, y) prefilters the image and reduces aliasing.
• Sampling the camera aperture (u, v) produces depth of field.
• Sampling in time t (the shutter) produces motion blur.
• Sampling in wavelength λ simulates spectral effects such as dispersion.
• Sampling the reflection function produces blurred reflection.
• Sampling the transmission function produces blurred transmission.
• Sampling the solid angle of the light sources produces penumbras and soft shadows.
• Sampling paths accounts for interreflection.

(All of them are the flashy features of any unbiased renderer 😉)
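The first bullet, for instance, is literally one line of code: jittering the sample position inside each pixel (a hypothetical helper of mine, not Blender's sampler):

```c
/* Anti-aliasing by pixel-area sampling: instead of always shooting the
   ray through the pixel center, jitter it inside the pixel footprint.
   u1, u2 are uniform random numbers in [0, 1); averaging many such
   samples prefilters the image. */
void jitter_pixel_sample(int px, int py, double u1, double u2,
                         double *sx, double *sy)
{
    *sx = (double)px + u1;   /* sample stays inside [px, px+1) */
    *sy = (double)py + u2;   /* sample stays inside [py, py+1) */
}
```

Each of the other bullets is the same trick applied to a different integration variable (lens, shutter, wavelength, BRDF lobe, light area, path depth).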
So, imagine the day when you can fully solve the rendering equation in Blender without leaving it 🙂! Of course, none of this will be finished in a few days; remember the BBB phrase. Also, I'm currently on volumetrics (the priority) and photon mapping, so I can't promise anything yet. I'm just giving a preview of my development line in the Render module, which is headed toward fully solving the rendering equation with Blender's Render Internal.

This is also an ambitious project, like volumetrics, so help is very welcome. I have a lot to learn in the process, and as always my main slowdowns are the unknowns of Blender's internals, which I hope will shrink over time. In the process many subgoals should be met (some of them by me, some by others). I just coded a draft to test viability, and here are some renderings showing only GI; the main use cases of unbiased renderers are archviz, stills, referencing, etc.

Note: these pictures are the direct output of the path tracer; no other render algorithms or filters were applied.


Fig. 3 – 10 samples per pixel (spp) –  render time: 4.35 s


Fig. 4 – 100 spp –  34 s


Fig. 5 – 1000 spp – 5 min 34 s


Fig. 6 – 4000 spp –  22 min


Fig. 7 – 10000 spp –  56 min


Fig. 8 – 100 spp –  44 s



Fig. 10 – 4000 spp – 28 min

As always, I hope you like it.
Raúl Fernández Hernández (Farsthary)
