Some words

Hi,

This is a friend of Farsthary and one of his early supporters. One year has passed since he announced that he would develop a volumetric engine for Blender. Now, thanks to his and Matt Ebb’s hard work, this is a reality, and we all wish him good luck for a cum laude PhD.

The DURIAN Project guarantees that all this hard work will be fully integrated into an official Blender version and optimized for movie-quality VFX.

Support has played a huge role by showing Farsthary the community’s consideration and care, which kept him hopeful despite hardships like the terrible hurricanes.

This is why I am introducing today the sister sites of other very talented programmers, so that they may know similarly happy futures:

Janne Karhu for advanced particle effects: you can enjoy the hair/fur shown in the Big Buck Bunny open movie, available since Blender 2.46 mostly thanks to this guy. If you support him, be prepared for more incredible VFX based on the particle system, such as managing huge numbers of characters, like birds or people, in a convincing way.

Joseph Eagar for n-gon modelling in Blender. Here is a comment from Big Buck Bunny artist William Reynish:

I donated to the Bmesh project, and here’s why:
I expect it will make modeling much more enjoyable, faster and more efficient. Not having n-gons is one of the biggest drawbacks of Blender’s modeling system. Yes, you can achieve similar results without them, and yes, it’s best to have your final models be all quads, but what n-gons help to improve is workflow. Aside from n-gons, Bmesh should make it easier to create powerful modeling tools, more akin to the ones found in apps like Modo and Silo.
Here’s a video of Silo in action, using n-gons and other tools:
http://www.nevercenter.com/videos/tu…minotaur_3.mov
I see Bmesh as a very important project for Blender, especially now with 2.5 coming up with its new tool API. I hope Bmesh can make it into 2.5 – it also means developers don’t have to implement tools twice, once for 2.5 and once for Bmesh.

So let us keep supporting all these restless developers: they use a significant amount of their “lifespan” to give us beautiful CG tools…

regards


Another utility of vector input

Hi all!
A few weeks ago I implemented a simple patch for texture nodes: a vector input for every texture node type.
An important use of this feature is that it can transform (distort) a texture based on any input. That means an easy way of animating texture offsets and transformations, for cloud animations and so on, when animated textures, a time node, or simulated datasets (voxel data textures) are used as the input.
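The idea above can be sketched outside of Blender’s actual node code. This is a minimal illustrative model, not the real patch: `checker` stands in for any texture node, and `time_offset` stands in for whatever is wired into the new vector socket (a time node, an animated texture, voxel data, etc.); all names are hypothetical.

```python
import math

def checker(x, y, z, scale=4.0):
    """A toy procedural texture: a 3D checker pattern returning 0.0 or 1.0."""
    return float((int(math.floor(x * scale)) +
                  int(math.floor(y * scale)) +
                  int(math.floor(z * scale))) % 2)

def time_offset(x, y, z, t):
    """A toy vector input: a time-driven drift, e.g. for moving clouds."""
    return (0.1 * t, 0.0, 0.05 * t)

def eval_distorted(tex, vec_input, x, y, z, t):
    """Evaluate `tex` at coordinates shifted by the vector input.
    Feeding any signal into the offset is what lets a vector socket
    animate or distort an arbitrary texture."""
    dx, dy, dz = vec_input(x, y, z, t)
    return tex(x + dx, y + dy, z + dz)
```

Because the offset is just another input, swapping `time_offset` for a noise texture would give spatial distortion instead of animation, with no change to `eval_distorted` itself.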

Cheers and see you later! 🙂
Farsthary


Final Gathering at last ;)

Hi all!

Long time without news! I’m currently facing the most important time in a student’s life: graduation.
That’s why I posted previously that May and June would be a very busy time for me, so I have near zero time to spend on my current projects. But zero coding is something simply impossible to ask of myself 😉
So, to motivate me a little, last night I opened up the Blender source and reviewed again the long-awaited Final Gathering
for the photon mapping algorithm, and this time, with a fresh mind, I could finally implement it.
The principal parameter now added on top of the previous photon mapping build is the number of final gather samples.
My previous work on the pathtracer was a big help, because FG is in essence a hybrid of photon mapping and a pathtracer.
That’s why render times are longer than plain photon mapping images but shorter than pure pathtraced images
(though nothing is absolute in this world 🙂)

FG overcomes many limitations of photon mapping, at a price: it introduces high-frequency noise that can only be removed with many samples.

The essence of FG is very simple: first, build the photon map in a preprocessing stage, before rendering actually begins. Then, at render time, when a shading sample is requested, instead of looking up an illumination value in the photon map at a single point, many rays are traced over the hemisphere, illumination values are requested from the photon map at the intersection points, and finally all those samples are integrated.
(It is a single-bounce pathtracer over a photon map.)
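The render-time step described above can be sketched in a few lines. This is a hedged illustration of the general technique, not Blender’s C code: `trace_ray` and `photon_map_radiance` are assumed callbacks supplied by the renderer, and for simplicity the hemisphere is sampled around a fixed +Z normal with scalar radiance.

```python
import math
import random

def sample_hemisphere():
    """Sample a direction on the upper (+Z) hemisphere.
    A real renderer would orient this around the surface normal."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(max(0.0, 1.0 - u1 * u1))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), u1)

def final_gather(point, trace_ray, photon_map_radiance, n_samples=100):
    """One final-gather estimate: instead of reading the photon map at
    `point` directly, shoot `n_samples` rays over the hemisphere, read
    the photon map at each hit point, and average the results --
    a single-bounce path trace over the photon map."""
    total = 0.0
    for _ in range(n_samples):
        direction = sample_hemisphere()
        hit = trace_ray(point, direction)  # intersection point or None
        if hit is not None:
            total += photon_map_radiance(hit)
    return total / n_samples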
Once Blender’s internal renderer gets the benefits of faster raytracing, many projects will be boosted at the same time, like volumetrics and photon mapping, so things may get better in the future for all of us 🙂

The photon mapping build still lacks many features, like texture support, and it is very unstable and buggy for now. I will soon upload a new build with Final Gathering so that all of you can toy with it. 🙂

Take a look at some test images; I hope you will like them. With final gathering it is now necessary in many cases to tonemap the images; I show here only the untonemapped ones.

Cheers to all,
Farsthary

PS: WOW! I have missed posting here 🙂

FG 8 OSA 100 FG samples Time 1min 59s
FG 8 OSA 100 FG samples Time 2min 4s
FG 8 OSA 200 FG samples Lamp samples 16 T 5min 33s
FG 100 OSA 8 Time 2min 16s
FG 100 OSA 8 Time 2min 56s
OSA 8 FG samples 100 Time 1min 46s
Time 2min 41s