“What I cannot create, I do not understand” – Richard Feynman
When thinking about Artificial Life, we’re looking at abstracting the methods found in nature to digital entities – in essence, extracting the patterns that are found naturally and, in turn, making new forms of life. This is not life in the biological sense, which is contained in the realm of reality, but life in the metaphysical sense – blessed with certain philosophical gifts and traits.
But in a sense, this is merely the illusion of life. The actual blocks dance around wonderfully, even taking on patterns that appear naturalistic. But the blocks themselves have nothing invested in their existence. They are slaves to their surrounding ruleset; they do not improve or evolve. They are closer to atoms and molecules than living organisms.
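That fixed ruleset is tiny. A minimal sketch, assuming the standard Conway rules (a live cell survives with two or three live neighbours; a dead cell is born with exactly three):

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Survival with 2 or 3 neighbours, birth with exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates forever -- pattern without purpose.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

The blocks here have no state of their own beyond existing or not; the rule does everything, which is exactly the point.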
Second generation ALife creates discrete entities that are both self-serving and environmentally influenced. Conway’s game could be modified to include the same rules, but give blocks a kind of longevity – or self-preservation. Perhaps the ruleset could change based on the survival rates of individual blocks. Of course, because the environment has no connection to reality, this simulation would perhaps turn out duller than Conway’s own game.
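One hypothetical way to graft that longevity onto Conway’s rules – every name and parameter here is invented for illustration – is to give each block a small energy reserve it can burn to survive a hostile generation:

```python
from collections import Counter

def step_with_longevity(cells, birth_energy=2):
    """One generation; `cells` maps (x, y) -> remaining energy reserve."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    nxt = {}
    for cell, energy in cells.items():
        n = counts.get(cell, 0)
        if n in (2, 3):
            nxt[cell] = birth_energy      # comfortable: reserve refilled
        elif energy > 1:
            nxt[cell] = energy - 1        # hostile: burn reserve, persist
    for cell, n in counts.items():
        if n == 3 and cell not in cells:
            nxt[cell] = birth_energy      # standard birth rule
    return nxt

# A lone cell now lingers before dying, instead of vanishing at once.
lone = {(0, 0): 2}
print(step_with_longevity(lone))                       # {(0, 0): 1}
print(step_with_longevity(step_with_longevity(lone)))  # {}
```

This is still a fixed ruleset, of course – a further step would be letting `birth_energy` itself drift with survival rates, as speculated above.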
If we think of Artificial Life as an interesting diversion, what are the precepts and goals? Are we attempting to “grow” a living reality similar to our own? Are we trying to spot trees and tigers and humans amongst these digital puzzle pieces flitting around? Or are we attempting to discover completely new paradigms amidst the pattern storms?
Revealing new paradigms would ultimately be the most rewarding.
Therefore, not only will the ruleset evolve via mutation, but the interpretation of the ruleset will change due to memes floating among populations of genetically tied organisms. Convergent and divergent evolution would also occur based on the influence of memes.
However, we don’t want to hardcode these paradigms into the simulation. We want to abstract these processes to something altogether separate. In order to do this, first we must look at nature, and try to find a few important strands.
Genes and memes are remarkably similar – an idea first proposed by Dawkins, though one that had undoubtedly been floating around for a long time.
Both undergo evolution and change over time through forms of natural selection – genes are selected through prosperous sexual reproduction, while memes are selected through ideas that prosper in reality.
In computers, everything must be represented using binary code – 0s or 1s. In biology, DNA is the datatype of genes. Memes are another story, though one could say that Natural Language is their datatype.
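To make the point concrete: DNA’s four-letter alphabet maps neatly onto two bits per nucleotide. A sketch, with an arbitrary mapping of my own choosing:

```python
# Hypothetical 2-bit encoding; the particular assignment is arbitrary.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def encode(seq):
    """Pack a nucleotide string into an integer, 2 bits per base."""
    n = 0
    for base in seq:
        n = (n << 2) | CODE[base]
    return n

def decode(n, length):
    """Unpack `length` bases back out of the integer."""
    bases = [BASE[(n >> (2 * i)) & 0b11] for i in range(length)]
    return "".join(reversed(bases))

print(decode(encode("GATTACA"), 7))  # GATTACA
```

The bits are there either way; the interesting question is what happens to meaning when one of them flips.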
The problem lies in creating a digital datatype to store genetic information that can undergo mutation.
Digital datatypes, such as strings, integers, video and picture files, are very strict. Often they are hierarchical – organized by a “marching order” of bits. If the bits are misplaced, often the entire thing is rubbish, especially in more complex datatypes like images, video and polygons.
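This fragility is easy to demonstrate. Flip a single bit in a zlib-compressed blob and the whole thing becomes rubbish:

```python
import zlib

data = zlib.compress(b"a perfectly ordinary payload" * 10)
corrupt = bytearray(data)
corrupt[-1] ^= 0b00000001   # one misplaced bit, here in the checksum trailer

try:
    zlib.decompress(bytes(corrupt))
except zlib.error:
    print("one flipped bit, entire datatype destroyed")
```

A point mutation in DNA, by contrast, usually yields an organism that still works – sometimes an identical one.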
The genetic code is different. Compared to a digital datatype, it is remarkably robust and resilient. It is composed of triplets of nucleotides, each of which codes for an amino acid. The relation between the genetic code and an expressed phenotype is still being understood. Currently, correlations are known and proposed, but there is no defined “calculus” for these relations. We are without the API for the genetic datatype.
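Part of that robustness comes from the redundancy of the triplet code: several codons map to the same amino acid, so many point mutations are silent. A sketch using a small excerpt of the real codon table:

```python
# Excerpt of the standard RNA codon table: third-position "wobble"
# means all four GC_ codons give alanine, all four GG_ give glycine.
CODONS = {
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}

def translate(rna):
    """Read the sequence in triplets, returning the amino acid chain."""
    return [CODONS[rna[i:i + 3]] for i in range(0, len(rna), 3)]

print(translate("GCUGGA"))  # ['Ala', 'Gly']
print(translate("GCCGGA"))  # ['Ala', 'Gly'] -- mutated, same protein
```

A bit flip in a zip file is fatal; a base flip here often changes nothing at all.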
Though memes are much more transitory, ethereal and illusory, they work in the same way. If their datatype is Natural Language, the API is defined and reinterpreted every instant – in dictionaries, encyclopedias, magazines, books, television, films and conversation. The point of translation between abstraction (meme) and datatype (natural language) is once again impossible to find or know.
But like I said earlier, we are not attempting to replicate and model these existing datatypes; we are trying to learn and abstract them into new paradigms of ALife.
So we’ve come to the realization that these naturally occurring datatypes are not hierarchical or hardcoded. They are also not self-describing, at least in a visible way. When you think of certain digital datatypes – XML, for example – self-description is key. XML contains tags and attributes that describe the contained data in very concrete ways.
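A toy example of that self-description (the element and attribute names are invented):

```python
import xml.etree.ElementTree as ET

# The tags announce what the data *means* -- something DNA never does.
doc = ET.fromstring(
    '<organism species="blinker"><cell x="0" y="1"/></organism>'
)
print(doc.tag, doc.attrib["species"])  # organism blinker
print(doc.find("cell").attrib)         # {'x': '0', 'y': '1'}
```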
DNA doesn’t do this. It merely contains a list of nucleotides – the coded amino acids created and the resulting structures that are formed are emergent. Memes via natural language work in the same way – the vast breadth and scope of human understanding emerges from combinations of the finite number of words in the dictionary.
And finally, these datatypes theoretically emerged themselves from the unknown ether. Perhaps we can go all the way backwards, attempting to find a starting point of these pattern chains. Perhaps the impetus is the nuclear forces binding atoms, and the atomic forces bending and twisting complex molecules. Maybe that is the only ruleset, and all further datatypes and rulesets are mere metaphor and illusion.
But let’s be arbitrary and create. Let us define something as a starting point, experiment, and see where it goes. The first task – design a living digital datatype.
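As one arbitrary starting point – nothing more than a sketch, with every name hypothetical – a living datatype could be a flat symbol string paired with a mutation operator: no hierarchy, no self-description, just material for selection to act on.

```python
import random

ALPHABET = "ACGT"

def mutate(genome, rate=0.05, rng=random):
    """Return a copy with point mutations at the given per-symbol rate."""
    return "".join(
        rng.choice(ALPHABET) if rng.random() < rate else base
        for base in genome
    )

rng = random.Random(0)   # seeded so a run is repeatable
g = "ACGTACGTACGT"
for _ in range(3):
    g = mutate(g, rate=0.2, rng=rng)
    print(g)
```

What such strings *mean* – their phenotype – is deliberately left undefined here; that interpretation is exactly the API we said nature keeps hidden.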