Sunday, July 22, 2018

Python Turtles and Trees


Recently I have been studying the phenomenon of self-similarity in naturally occurring networks. Self-similarity has implications for biology, geography, and many other sciences, yet the concepts that underpin it are relatively simple. In this post I will cover how one can explore this interesting topic and model it in Python. By the end we will have an algorithm fully capable of letting a turtle construct a tree-like structure.

The Turtle

Turtle interpretations have been a popular way to introduce newcomers to the science and art of programming. Essentially, the Turtle is a small drawing program where all the heavy lifting has been taken out of the way. The focus is then on learning to string together simple commands (such as move forward, turn left, etc.) to reach some end state. This helps train people to think about solving problems in the form of a generalizable algorithm, rather than a situation-specific list of commands. The Turtle library is actually more capable than most people ever use; indeed, even in this more advanced usage, we only need the basic movement commands and just a few of the 'advanced' features. I put that in quotes because even the advanced features of the Turtle library are built with a beginner in mind, and so are relatively easy to wrap your head around, unlike some other graphics libraries. Here are the Turtle commands we will utilize:
  • forward(int) - moves the turtle int pixels in the direction it is currently heading
  • left(float) - rotates the turtle counter-clockwise by float degrees
  • right(float) - rotates the turtle clockwise by float degrees
  • heading() - returns the turtle's current heading
  • goto(float, float) - moves the turtle to the provided x, y coordinates
  • seth(float) - changes the turtle's current heading to the provided float
  • penup()/pendown() - control whether the turtle will draw as it moves
  • showturtle()/hideturtle() - control whether the turtle will be displayed on the screen
  • pencolor(RGB color) - controls the line color drawn by the turtle
  • width(int) - controls the line thickness (also called stroke)
  • dot(int) - creates a circular dot with int diameter at the turtle's current location
There are a few other commands related to screen operation and initial position that I won't cover here. They are covered well in the documentation and other tutorials if you find you want to dive into them further.

Defining the Network

In this case, the networks I am talking about are not digital, but analog cell structures. One popular method of representing these networks is called L-Systems, after the man credited with defining them, Aristid Lindenmayer. Formally, an L-System is a set of symbols (called an alphabet) which each describe an action in a sequence. In addition to the alphabet we also define a starting state, called the axiom (more on this in a moment). L-Systems are generational algorithms which repeat a simple process a number of times (each repetition is called a derivation). The simple process follows what are referred to as replacement rules, which act to expand the axiom into a final set of instructions for our turtle to follow. This form is called a Turtle interpretation of an L-System, where each symbol in the genetic alphabet is given some instructional meaning for the turtle to use in its operation.

Axioms and Replacement Rules

To understand L-Systems you have to get a grasp of the two primary concepts: axioms and replacement rules. Let's assume for a second that we have an alphabet with 2 symbols in it ("A", "B"). The axiom can be any combination of these symbols. If we chose an axiom that was 2 symbols long, that would give 2^2 = 4 possible axioms. For simplicity let's start with the single-symbol axiom "B". Now we define how each step of the algorithm will behave. In this example let's create 2 rules:
A -> AB
B -> A

This is read as "replace the thing on the left with the thing on the right". These substitutions happen simultaneously. As you can probably tell, the rules will also expand the string as they go. We can represent these rules as if/else statements in Python easily.
def mutate(c):
    if c == "A":
        return "AB"
    else:
        return "A"
If we follow this algorithm for 5 derivations we will end up with the following progression of strings:
B # This is the axiom
A
AB # Each symbol treated independently and at the same instant
ABA
ABAAB
ABAABABA
I came up with a way that I think is fairly elegant to handle the string generation portion in Python:
def createLSystem(derivs, axiom):
    newstr = axiom
    for i in range(0, derivs):
        newstr = "".join([mutate(n) for n in newstr])
        print(newstr)
    return newstr
If we interpret the above as colors, say Green and Blue, then the string becomes a description of how to construct a line which goes Green, Blue, Green, ..., etc.
As previously mentioned, this adds meaning to the symbols in the alphabet as instructions for the turtle to follow. In this case we will change the pen color to match each character and travel straight forward a set distance.
def drawLsystem(aTurtle, instructions):
    for cmd in instructions:
        if cmd == "A":
            aTurtle.pencolor("green")
        else:
            aTurtle.pencolor("blue")
        aTurtle.forward(50)

The resulting color interpretation, traveling left to right.
What is interesting about this is that it resembles a basic version of the system used to describe the production of a single fragment of a multicellular filament of Anabaena catenula (a blue-green bacterium). A and B refer to two different cytological states (basically, the willingness of a cell to split). Under the microscope these filaments appear as a sequence of cylinders of different lengths, with cylinders of type A being longer than those of type B.

Self Referencing

One of the key factors that makes these systems interesting to me is that they all have some form of self-reference in the replacement rules. For example, the above rule set contains the rule A -> AB, which assures that rule will trigger again on every subsequent generation, since its output contains a copy of the character the rule is for.

Extending the Alphabet

Moving on from a simple strand of cells like the above example to a complex network like a tree is going to take more symbols to represent more concepts. We will scrap the current alphabet and rules and define new ones here:
  • F tells the turtle to move forward by the step size.
  • + tells the turtle to rotate delta degrees clockwise.
  • - tells the turtle to rotate delta degrees counterclockwise.
And the new replacement rule:
F -> F-F++F-F
As you can see, this one is going to expand rapidly; it has four references to itself in the one rule. You can think of this as replacing every forward step with that step, plus a rotation of delta to the right, another step, a rotation of 2*delta to the left, another step, a rotation of delta degrees to the right, and a final step forward. This has the effect of adding an angle to each previously flat edge of the system. We will define a delta of 60 degrees and a step size of 3 pixels.
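The string rewriting itself needs no turtle at all; a minimal sketch of the expansion (the function name is my own):

```python
# Expand the snowflake rule F -> F-F++F-F for a number of derivations.
# The rotation symbols + and - are terminals and pass through unchanged.
def expand_koch(axiom, derivations):
    rules = {"F": "F-F++F-F"}
    result = axiom
    for _ in range(derivations):
        result = "".join(rules.get(symbol, symbol) for symbol in result)
    return result
```

Because each F becomes four F's per derivation, an axiom of "F" contains 4^n forward moves after n derivations, which is why the curve's detail grows so quickly.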

After 5 derivations of the Snowflake Curve algorithm
This is actually a famous algorithm from a set of algorithms called Snowflake Curves. By varying the delta we can get sharper or smoother edges. By varying the step size of the turtle we can make the image scale up or down, but it will always retain its basic pattern (as long as the parameters do not change during run time). This group of shapes has come to be known as Fractal Geometry, or just Fractals.

The effect of lowering delta to 30 degrees for the above Snowflake Curve. Now the shape resembles a coastline or cloud.
There is a fantastic book which contains more axiom/rule combinations to create more of these famous fractal drawings. If you want to dive more into these shapes and the math that defines their structures check out "The Algorithmic Beauty of Plants".

Branching


Up to now we have only been able to move the turtle along a single path. However, many structures in nature (including trees like we are discussing here) experience branching. There is a lot of theoretical material on branching, and especially its relation to graph theory, so I won't dive really deeply into it. What we need to understand about branching for the purposes of this discussion is that it too seems to follow this form of symmetry. Overall, the individual leaves of a plant tend to have the same shape and relative size. In fact, it seems that all branching really does is create copies of the pattern in directions that intersect some main copy. Setting aside the slight variations for the moment, we only need to consider 3 more symbols in our alphabet:
  • [ Defines the start of a branching point. Commands following this refer to the growth of the branch.
  • ] Closes a branching point started by the most recent [ symbol.
  • X is used in the replacement rule to denote the site of a future branching node. 
There are two replacement rules in this system which I would like to discuss individually.
  • X -> F[+X][-X]FX
  • F -> FF
The first rule can be considered the branching rule. For each symbol X in the current iteration, we replace it with a step forward and then two new potential branching sites (at equal delta degrees to either side of their main ancestor, the step forward or stalk). Finally we add another step forward on the main ancestor branch, and a new future branching site (assuming there are more derivations to calculate).

The second rule can be viewed as the growth function, which is exhibited in the stalks of the branches between interconnecting nodes. As the derivations increase, these will grow exponentially: after 4 derivations any F present will be 16 times its original length. Once the last derivation has completed, the turtle can choose to ignore unexpanded X symbols, or use some other indicator such as a dot to denote the final segment of a branch. In this example I chose to just ignore them. Both of these rules contain self-referencing attributes.

The Axiom for this system is another interesting piece to consider.
  • X
That one symbol will grow on the next iteration to contain a stalk (F) and two other branching sites. This can be viewed intuitively as the seed of the tree, waiting to sprout its first leaves.
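The same string-rewriting sketch from earlier covers this system too; the only change is the rule table, with the bracket symbols passed through as terminals (the function name is my own):

```python
# Expand the branching system: X marks future branch sites, F grows,
# and the bracket symbols [ and ] pass through unchanged.
def expand_branching(axiom, derivations):
    rules = {"X": "F[+X][-X]FX", "F": "FF"}
    result = axiom
    for _ in range(derivations):
        result = "".join(rules.get(symbol, symbol) for symbol in result)
    return result
```

Starting from the seed axiom "X", one derivation yields "F[+X][-X]FX", and every bracket opened by the rule is closed by it, so the branch stack described below always balances.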

Drawing

There is one other thing I would like to point out, which a lot of the discussion on the topic glosses over (mostly because it has to do with the visual representation rather than the systemic one). That is branch thickness. If you look at a plant, the newer branches are smaller, thinner, and near the top and edges. When drawing the plant out, I added a count of how many branches deep I was and used this to taper the size a little bit the farther away from the trunk a branch was. This made the output look a little more plant-like as it tapers off.

Branching L-System. 7 derivations. delta=25.7 degrees. W/ colorization
The colorization applied to the output on the right functions in the same way the tapering algorithm works, in that it uses the count of distance from the root of the tree to determine the color to apply to each segment. In this case I used the Color class from the colour module to work out the gradients from a dark brown to a green. The maximum branch depth of any segment is 1 + the number of derivations performed in this scenario, so I use this value as the number of segments to divide the color gradient into.
from colour import Color

trunk_color = Color("#8A360F")
leaf_color = Color("#3D9140")
colors = list(trunk_color.range_to(leaf_color, derivations + 1))

This isn't a perfect algorithm but it gets the point across. When dealing with drawing these branches, we will want a way to store the position of the turtle at the beginning of a branch ([) so that when we finish with that branch (]) we can return the turtle to the splitting node and continue the rest of the pattern. This means placing the data onto a stack when a branch starts, and popping it off when the branch concludes. A list can be used as a simple pushdown stack in Python: when adding items, simply use the list.append() method to add the newest position information to the end of the list, then use list.pop(-1) to remove the last item added. This simulates the Last-In-First-Out nature of pushdown stacks. We will capture the turtle's x and y location, heading, stroke thickness, and current color. The last two are both controlled by the current layer count when the turtle is pushed onto the stack. Once this is captured, we increase the layer count by 1 and adjust the color and thickness to match the new layer. Inside the drawing function I have the code below to handle the process.
...
elif cmd == "[":
    print("putting turtle in tank")
    pos = {
        "x": aTurtle.xcor(),
        "y": aTurtle.ycor(),
        "heading": aTurtle.heading(),
        "color": aTurtle.pencolor(),
        "stroke": aTurtle.width(),
        "layer": curr_layer
    }

    curr_layer += 1
    aTurtle.pencolor(colors[curr_layer].rgb)
    turtle_tank.append(pos)
    aTurtle.dot(pos["stroke"] * 1.5)
    aTurtle.width(int(math.ceil(pos["stroke"] * 0.8)))

At every branching location, the turtle also creates a dot at the start of the branch in the above implementation. This often occurs at the joining of nodes on plants. Admittedly, the values for the tapering width and dot size ratios were just approximate guesses on my part. The inverse of this command pulls the last turtle position object off the list and uses it to reset all the stored values to the state they were in at that time.
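That inverse (the ] handler) can be sketched as a small helper; the function name and the penup/pendown wrapping are my own, since the original inlines this in the drawing loop:

```python
# Pop the most recent saved state and restore the turtle to it,
# without drawing a line back to the branching node.
def restore_branch_state(aTurtle, turtle_tank):
    pos = turtle_tank.pop(-1)         # Last-In-First-Out
    aTurtle.penup()                   # travel back without drawing
    aTurtle.goto(pos["x"], pos["y"])
    aTurtle.seth(pos["heading"])
    aTurtle.width(pos["stroke"])
    aTurtle.pencolor(pos["color"])
    aTurtle.pendown()
    return pos["layer"]               # caller resets curr_layer to this
```

Lifting the pen during the jump matters: without it the turtle would scribble a straight line from the branch tip back to the splitting node.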

Deterministic Vs. Stochastic Systems

Now, a few moments ago I casually set aside the 'slight' variations one sees in natural systems like plants. So far, I have treated these systems as if there were no chance of variation in structure. This is like saying a cell cannot mutate, or more generally that no factors outside of the replacement rules can ever occur. This is a great place to start because you are guaranteed to get the same results from your run as I do from mine. This is referred to as a deterministic system: all possible outputs can be determined simply by knowing the input and the algorithm functioning on it. These can be interesting, but don't really capture natural systems very well. As any researcher will tell you, it is possible to get the oddest results from seemingly deterministic systems. This happens so much, in fact, that it became the heart of what is known (somewhat incorrectly) as Chaos Theory. Instead, I think it is better to refer to it as the stochastic principle of experimentation. A stochastic system is one in which some portion of the output depends on random selection. Rather than having a single output, a stochastic system has a range of possible output states, each with some probability. Algorithmically, there are several areas where we can apply pseudo-random selection to introduce more natural variation into the network structures. A few options include:
  • Randomly selecting a delta between a minimum and maximum delta.
  • Randomly selecting between multiple replacement rules.
  • Randomly skipping a replacement with some probability.
  • Randomly replacing a symbol with one or more symbols outside of the replacement rules.
Each one of these has its own uses and implications. For example, if you decide to randomly select a delta, you have to consider further: do you select it once for each branch, once for each set of branching nodes, or only once, applying equal deltas to each branch? What are reasonable minimum and maximum deltas for the system you want to model? Selecting between multiple replacement rules is more like uniform cellular mutation: cells will often follow a given system 'most of the time', with some probability of behaving in a different manner. Coding up multiple rules for a symbol and selecting between them with this probability allows us to model these events more accurately. The option to skip a replacement rule for a symbol is intuitively like stunting the growth of a cell with that probability. This one should be used sparingly unless it has a specific effect on the system which you are trying to model. Finally, the option of replacing a random symbol (or semi-random, according to observed probabilities of an actual system) with one or more symbols chosen outside of the normal replacement rules is closer to single-cell mutation, where the function of that particular unit flips and can have a drastic impact on the outcome. Let's look at an example of each, using the above tree as a starting reference.

Stochastic Deltas

First, let's examine what happens if we set a minimum delta of 10 and a maximum delta of 60 and randomly select our delta for each branch individually, and then see what it looks like when we select randomly for each connecting node.
two runs of the same plant (right) with stochastic delta selections
Of course, these are just two examples of plants that could be generated using the same ending string when considering stochastic deltas. In fact, the number of possible plants is related to the number of nodes in the output and the range of values possible in the random selection process. Supposing we use a uniformly selected floating point number (with 2-digit precision) for the delta, we end up with a random range of 10.00 to 60.00 (roughly 5,000 possible values). If we count the number of nodes in the resulting instruction set after 7 derivations, we end up with 2,186. So the total number of possible plants is 5,000 ^ 2,186, or a really large number. However, lots of these plants would appear visually identical. Consider that there are entire subclasses of plants where every branching point has the exact same value except 1, or 2, or 3. Suffice it to say, portions of these plants would be indistinguishable from each other without a mathematical analysis of every single branching node. It should also be noted that the chance of randomly drawing two plants like this is fairly small as defined above; most resulting trees will be distinguishable, while maintaining similarities with most other trees in the set. Of course, with very high delta ranges like the above you can end up with something that barely resembles the starting figure at all. If you see these types of results, try shrinking the delta range back toward 25 degrees as roughly the mid point, as I find a range of around 20 to 30 degrees adds enough minor variation.
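A per-branch delta draw might look like the following sketch (the function name is mine; the two-decimal rounding matches the counting argument above):

```python
import random

# Draw a fresh delta for each branch, uniformly distributed between the
# minimum and maximum, rounded to two decimal places.
def random_delta(min_delta=10.0, max_delta=60.0):
    return round(random.uniform(min_delta, max_delta), 2)
```

Calling this once per branch gives every branch an independent angle; calling it once per run and reusing the value collapses the system back toward the deterministic case.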

Stochastic Replacement Rules

Now, let's return to a static delta and examine a stochastic set of replacement rules. I am going to add a second type of branching pattern and a second possible growth function, and select between each with a 50% probability. The new rule set looks like this:
  • X -> F[+X][-X]FX or FF[+X]F[-X]FX
  • F -> FF or FFF
This should give some drastically different final instruction sets and therefore different resulting trees.
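Choosing between the two productions is a small change to the mutate function; a sketch, with random.choice providing the 50/50 split:

```python
import random

# Each symbol with multiple productions has one picked uniformly at
# random (so 50% each here); all other symbols pass through unchanged.
stochastic_rules = {
    "X": ["F[+X][-X]FX", "FF[+X]F[-X]FX"],
    "F": ["FF", "FFF"],
}

def mutate_stochastic(symbol):
    if symbol in stochastic_rules:
        return random.choice(stochastic_rules[symbol])
    return symbol
```

Because the choice is made independently at every symbol on every derivation, two runs with the same axiom will almost never produce the same instruction string.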

3 runs of the stochastic rules (Left) and the original (Right)
You can see the effect of the staggered branching node rule combined with the occasional application of accelerated growth. The structure starts to look less like a leaf and more like a tree, viewed at a distance.

Genetic Mutation

The final stochastic system I want to point out is replacing an instruction with another random instruction. This can intuitively be thought of as a genetic mutation. When modeling this way, it is important to understand the probability of mutation and its effect on the replacement rules. As an example, let's add a mutation symbol C to the alphabet. C symbols have one rule and one rule only: spread. So we add the rule C -> CC. When being interpreted by the turtle, C cells will be ignored. Now, for the actual mutation: whenever the turtle begins to process any instruction, it randomly selects whether to follow that instruction with probability p, or replace it with a randomly selected symbol from the alphabet with probability 1-p. Even though it seems like a small change, if left as defined you will realize it can quickly corrupt the interpretation of the network altogether. Imagine what happens if we replace a [ with an F. Rather than starting a new branch, the code for that branch and its predecessor bleed together. Once the turtle reaches the former branch's closing ], it tries to return to a branching position that was never created...game over! I ended up excluding [ and ] from the replacement consideration, more for convenience than modeling. Even in this case, setting p too low can lead to drastic structural abnormalities. The trees generated below had a 5% chance of genetic mutation (which is insanely high, for demonstration purposes).
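One way to sketch the per-symbol mutation step, with the branch brackets excluded as described (the helper name and alphabet list are my own):

```python
import random

# With probability p, follow the instruction as written; otherwise swap
# it for a random symbol. Brackets are never replaced (or produced),
# since corrupting them breaks the branch stack.
MUTABLE_ALPHABET = ["F", "X", "C", "+", "-"]

def maybe_mutate(symbol, p=0.95):
    if symbol in "[]" or random.random() < p:
        return symbol
    return random.choice(MUTABLE_ALPHABET)
```

Applying this at interpretation time (as the turtle reads each symbol) rather than at derivation time means a mutation alters only the drawing, not the string that later derivations expand.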

Context in L-Systems

For the sake of this discussion, an L-System can show one of two properties. It can be "Context-Free", which means each rule focuses on a single symbol without regard to the symbols that surround it. Or, it may be "Context-Sensitive", which means the rule that is applied to a symbol depends on its neighboring symbols. As an example, take the system
  • Axiom = "ABCD[HI]EF"
  • Rules: 
    • D < E > F -> "DH" 
    • D < H > F -> "AB"
In this case "<" and ">" aren't symbols in the alphabet per se; they indicate the ordering of symbols needed to satisfy the condition. In Context-Sensitive branching L-Systems (like the above), the parsing is further complicated by the requirement that the symbols occur, in order, on the same branch. Looking at the axiom, we can see the main branch consists of "ABCDEF" and there is a sub-branch which contains "HI". The first rule reads: "If the symbol is an E which is preceded by a D and followed by an F, replace it with DH". This rule would apply to the E within the primary branch. The second rule would not be triggered when the parsing reached the "H" in the sub-branch, as it doesn't satisfy either condition. First, even though the H is physically preceded by a D in the output string, the logical separation introduced by the new-branch symbol keeps this from firing. Second, the symbol H is followed by the symbol I, not the symbol F. At the end of 1 derivation our output becomes "ABCD[HI]DHF". On the next derivation, the "H" near the end of the string would trigger the second rule and the output would become "ABCD[HI]DABF". At this point no further rules would apply to the system. Unlike this toy example, actual Context-Sensitive systems will tend to reference themselves in some manner, and therefore continue propagation.
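This branch-aware context lookup can be sketched as follows (the helper names are mine; per the convention above, crossing a [ into the parent breaks the left context, and reaching a ] ends the right context):

```python
# Find the same-branch neighbors of the symbol at index i, skipping over
# any complete [...] sub-branches that sit between them.
def left_context(s, i):
    j = i - 1
    while j >= 0:
        if s[j] == "]":          # skip a whole sub-branch to our left
            depth = 1
            j -= 1
            while j >= 0 and depth:
                depth += {"]": 1, "[": -1}.get(s[j], 0)
                j -= 1
        elif s[j] == "[":        # start of our own branch: no left context
            return None
        else:
            return s[j]
    return None

def right_context(s, i):
    j = i + 1
    while j < len(s):
        if s[j] == "[":          # skip a whole sub-branch to our right
            depth = 1
            j += 1
            while j < len(s) and depth:
                depth += {"[": 1, "]": -1}.get(s[j], 0)
                j += 1
        elif s[j] == "]":        # end of our branch: no right context
            return None
        else:
            return s[j]
    return None

# One derivation of the two context-sensitive rules D < E > F -> DH
# and D < H > F -> AB, with all substitutions made simultaneously.
def derive(s):
    out = []
    for i, c in enumerate(s):
        if c == "E" and left_context(s, i) == "D" and right_context(s, i) == "F":
            out.append("DH")
        elif c == "H" and left_context(s, i) == "D" and right_context(s, i) == "F":
            out.append("AB")
        else:
            out.append(c)
    return "".join(out)
```

Running derive on the axiom reproduces the two derivations traced above, and a third pass leaves the string unchanged, matching the observation that no further rules apply.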

Considering context while parsing systems allows us to do more complex modeling of propagation signals that travel through the system. There are two directions in which these signals will tend to propagate: acropetally (from the base to the tip) or basipetally (from the tip to the base).

Conclusion

I would like to conclude with a few examples I have generated that I enjoy. The right-most image in each is the deterministic interpretation. The left-most images are the same system with all 3 random selections applied (in much smaller amounts than exemplified above). With each I will include a description of the Axiom, Rules, and Delta which generated the interpretations.

n=7, delta=20.0 Node Replacing
  • Axiom: X
  • Rules:
    • X -> F[+X]F[-X]+X
    • F -> FF
n=5, delta=20.0 Edge Replacing
  • Axiom: F
  • Rules:
    • F -> F[+F]F[-F][F]

n=5, delta=22.5 Node Replacing
  • Axiom: X
  • Rules:
    • X -> F-[[X]+X]+F[+FX]-X
    • F -> FF
n=4, delta=22.5 Edge Replacing
  • Axiom: F
  • Rules:
    • F -> FF-[-F+F+F]+[+F-F-F]
n=5, delta=25.7 Edge Replacing
  • Axiom: F
  • Rules:
    • F -> F[+F]F[-F]F
