Procedural content generation is a fascinating topic. Once we understand the rules behind the creation of one type of content, we can generate an almost endless number of varying individual instances of that type. It is impressive how powerful the concept of self-similarity is!

One of my first tries in procedural content generation was a tree generator.
This simple but surprisingly effective algorithm generates pseudo-random trees of different types.
It uses C++11 features such as *std::function* to allow for a flexible design.
Each property of a tree generator, such as *branch lengths* or *branchiness*, is defined by a functor.
This makes it possible to use, for example, *probability distributions* or arbitrary mathematical functions to describe the properties of trees.
Once a tree generator is defined, it can generate random trees of one "species" - trees that are created from similar properties but are distinct individuals.
Each generator represents one species.
And just as a generator generates random trees, we can generate random generators by varying the parameters of their defining functors.
This leads to a vast variety of possible species and trees—not necessarily meaningful ones, of course.
The figures show some random trees produced by different random tree generators.
Please note that no fancy rendering algorithm was implemented or used.
It's just a geometry shader creating tree-like structures around the branch-representing line segments and some simple lighting in the fragment shader.
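To make the functor-based design more concrete, here is a minimal sketch of how such a species definition could look. The names, the structure and the concrete distributions are my illustration, not the original code:

```cpp
#include <functional>
#include <memory>
#include <random>

// Hypothetical sketch: every property of a species is a std::function,
// so it can be a constant, a probability distribution, or any formula.
struct TreeSpecies {
    std::function<float(int depth)> branchLength; // length of a branch at a given depth
    std::function<int(int depth)>   branchCount;  // "branchiness": children per branch
    int maxDepth = 4;
};

// Walk the recursive structure a species would produce; a real generator
// would emit line segments for the branches instead of just counting them.
int countBranches(const TreeSpecies& s, int depth = 0) {
    if (depth >= s.maxDepth) return 0;
    int total = 0;
    int n = s.branchCount(depth);
    for (int i = 0; i < n; ++i)
        total += 1 + countBranches(s, depth + 1);
    return total;
}

// One example species: branch lengths follow a normal distribution that
// shrinks with depth, and each branch forks into two or three children.
TreeSpecies makeExampleSpecies(unsigned seed) {
    auto rng = std::make_shared<std::mt19937>(seed);
    TreeSpecies s;
    s.branchLength = [rng](int depth) {
        std::normal_distribution<float> d(1.0f / (depth + 1), 0.1f);
        return d(*rng);
    };
    s.branchCount = [rng](int) {
        std::uniform_int_distribution<int> d(2, 3);
        return d(*rng);
    };
    return s;
}
```

Generating random generators then simply means drawing the parameters of these functors (means, variances, fork counts) from distributions of their own.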

Another procedural project originated from experiments with random terrain generation.
I experimented with *displacement* algorithms, *Perlin noise* and some variations like *ridged Perlin noise*—mostly to gain a better understanding of how these principles work.
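The ridged variant is a small transformation of the base noise: fold the signed noise around zero and invert it, so smooth hills become sharp ridges. The sketch below uses simple hash-based value noise as a stand-in for Perlin noise (the transformation is the same for both); the helper names are my own:

```cpp
#include <cmath>
#include <cstdint>

// Hash an integer lattice coordinate to a pseudo-random value in [0, 1].
static float hashToUnit(int x) {
    uint32_t h = static_cast<uint32_t>(x) * 2654435761u;
    h ^= h >> 16;
    return (h & 0xFFFF) / 65535.0f;
}

// 1D value noise: interpolate between hashed lattice values.
static float valueNoise(float x) {
    int   i = static_cast<int>(std::floor(x));
    float f = x - static_cast<float>(i);
    float t = f * f * (3.0f - 2.0f * f);     // smoothstep interpolation
    return hashToUnit(i) * (1.0f - t) + hashToUnit(i + 1) * t;
}

// Ridged variant: map to [-1, 1], take the absolute value, invert.
// Valleys of the base noise turn into sharp mountain ridges.
float ridgedNoise(float x) {
    float n = valueNoise(x) * 2.0f - 1.0f;   // signed noise in [-1, 1]
    return 1.0f - std::fabs(n);              // ridged result in [0, 1]
}
```

Summing several octaves of this function at increasing frequencies and decreasing amplitudes yields the familiar fractal terrain look.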
Besides the typical plane-based terrain I also implemented a simple sphere-based terrain generator.
Even though the approach is quite simple, the results are already pretty convincing—and could be used as the starting point for a procedural planet generation algorithm.
Nevertheless, to build convincing planets one needs to solve the problem of generating detail at every scale.
Computing details on demand is the only feasible solution to this problem.
In this case, Perlin noise and its relatives are superior to displacement strategies, because the value at a given point does not depend on "parent" values.
Hence, I started to work on a patch-based, parallel, on-demand implementation that calculates concrete graphical geometry only when the user is within reach of a patch.
Though the first results of my CPU-parallelized prototype are promising, higher levels of detail will require a GPU implementation in the future.
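The core idea can be sketched as follows. Because a noise-based height function depends only on the sample position, each patch can be generated independently, in any order, and in parallel. This is my illustration of the concept, not the original prototype; `std::async` stands in for whatever CPU parallelization is actually used, and the height function is a placeholder:

```cpp
#include <cmath>
#include <future>
#include <vector>

// Placeholder point-wise height function; ridged noise or similar
// would be used in practice.
float heightAt(float x, float z) {
    return std::sin(x * 0.1f) * std::cos(z * 0.1f);
}

// Compute one patch's height grid on demand. No "parent" data is needed:
// every sample depends only on its world-space position.
std::vector<float> generatePatch(int px, int pz, int res, float patchSize) {
    std::vector<float> heights(res * res);
    for (int j = 0; j < res; ++j)
        for (int i = 0; i < res; ++i) {
            float x = px * patchSize + i * patchSize / (res - 1);
            float z = pz * patchSize + j * patchSize / (res - 1);
            heights[j * res + i] = heightAt(x, z);
        }
    return heights;
}

// Kick off several patches concurrently, e.g. those the viewer approaches.
std::vector<std::vector<float>> generatePatchesAsync(int count, int res, float size) {
    std::vector<std::future<std::vector<float>>> jobs;
    for (int p = 0; p < count; ++p)
        jobs.push_back(std::async(std::launch::async, generatePatch, p, 0, res, size));
    std::vector<std::vector<float>> result;
    for (auto& j : jobs) result.push_back(j.get());
    return result;
}
```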