Conway’s Game of Life (Evolutionary Algorithms Part 1)

This is the first in a series of posts about algorithms inspired by the natural world, building up from a simple cellular automaton to more complex evolutionary algorithms and their applications.

The following example dates from the 1970s and is one of the earliest attempts at describing natural processes with modern computer science. Conway’s Game of Life is a clear example of how complex patterns can emerge from very simple rules and boundary conditions. The full C# implementation is available here. The implementation provided sacrifices efficiency for readability – for a much more efficient open-source implementation see Golly.

Simulation Structure

var brd = GetInitialBoard();
var rules = GetDefaultRulesArray();
Console.WriteLine("Initial Board:");
OutputBoardToConsole(brd);
		
for(int i=1;i<=25;i++)
{
   Console.WriteLine(string.Format("Generation {0}:",i));
   IterateWithRules(brd,rules);
   OutputBoardToConsole(brd);
}
Console.WriteLine("End of Simulation");

This Life simulation is set to run for 25 generations to demonstrate the behavior of an oscillator traversing down the grid. Note the repeated call to the ‘IterateWithRules’ method – this is the method that performs the heavy lifting. The pattern is similar to that of optimization and search algorithms where the boundary conditions are defined independently of the algorithm itself. In this case, the boundary condition is running the simulation for 25 iterations, but it could just as easily depend on features of the grid (how many cells are alive/dead, whether specific shapes are present, percent of grid ‘filled’, etc.).

Images: the glider at generation 4 and at generations 21–25
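The run is bounded by a fixed iteration count here, but, as noted above, a grid-based stopping condition works just as well. A minimal sketch, assuming a hypothetical CountAliveCells helper (not part of the linked implementation), could replace the loop:

// Sketch only: run until the grid is empty or a safety cap is reached.
int generation = 0;
while (CountAliveCells(brd) > 0 && generation < 1000)
{
    generation++;
    Console.WriteLine(string.Format("Generation {0}:", generation));
    IterateWithRules(brd, rules);
    OutputBoardToConsole(brd);
}

// Hypothetical helper: foreach over a 2D array visits every element.
static int CountAliveCells(bool[,] board)
{
    int alive = 0;
    foreach (bool cell in board)
        if (cell) alive++;
    return alive;
}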

Simulation Rules

These are the rules of the game:

return new int[9]{0,0,1,2,0,0,0,0,0};

They are the default rules for Life. The array index represents the number of live neighbours for a given cell (0–8), and the value at each index describes the behaviour for that neighbour count. A value of 0 means the cell will die, a value of 1 means a living cell will survive, and a value of 2 means an empty/dead cell will come to life (ie populate) – and, as the code below shows, a living cell with that neighbour count also survives, since cells are only killed when the rule value is 0. Given these rules and a seed layout, the simulation is fully determined. This is all done in the IterateWithRules method:

public static void IterateWithRules(bool[,] board, int[] rulesArray)
{
	// Collect the changes first so every decision is based on the current generation.
	var dieList = new List<Tuple<int,int>>();
	var birthList = new List<Tuple<int,int>>();

	for (int i = 0; i < board.GetLength(0); i++)
	{
		for (int j = 0; j < board.GetLength(1); j++)
		{
			var currentCellVal = board[i,j];
			var neighbourCount = GetNeighbourCount(board, i, j);
			var rule = rulesArray[neighbourCount];
			if (currentCellVal && rule == 0)
				dieList.Add(new Tuple<int,int>(i, j));      // living cell dies
			else if (!currentCellVal && rule == 2)
				birthList.Add(new Tuple<int,int>(i, j));    // dead cell is born
		}
	}
	Console.WriteLine(string.Format("{0} cells died. {1} cells were born", dieList.Count, birthList.Count));

	// Apply the collected changes to the board in place.
	foreach (var d in dieList)
		board[d.Item1, d.Item2] = false;
	foreach (var b in birthList)
		board[b.Item1, b.Item2] = true;
}
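The GetNeighbourCount helper referenced above is not reproduced in this excerpt. A minimal sketch, assuming a bounded (non-wrapping) grid, might look like this:

static int GetNeighbourCount(bool[,] board, int row, int col)
{
    int count = 0;
    for (int dr = -1; dr <= 1; dr++)
    {
        for (int dc = -1; dc <= 1; dc++)
        {
            if (dr == 0 && dc == 0) continue; // skip the cell itself
            int r = row + dr, c = col + dc;
            if (r >= 0 && r < board.GetLength(0) &&
                c >= 0 && c < board.GetLength(1) &&
                board[r, c])
                count++;
        }
    }
    return count;
}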

Following this same logic (with much more optimized code) and an appropriate initial layout, a full-blown Turing machine has been built inside the Game of Life. It is a powerful example of how a simple, general-purpose rule set can generate arbitrarily complex emergent behavior.

Next Steps:

  • Optimize the implementation provided for:
    • a) a smaller memory footprint (consider using quadtrees)
    • b) more efficient iteration processing, utilizing a list of ‘changed’ cells carried forward from one iteration to the next.

Zero-Downtime Code Deployments with MEF

Adding production binaries to a .NET WebApp at runtime requires the application pool to restart. Unless you have the infrastructure to address this (ideally multiple servers and application instances set up for automatic fail-over, and a true Service Oriented Architecture with router services in front of your processing services), this means downtime for your users. For internal enterprise development, legacy applications, or proof-of-concept projects that outgrew their initial goals, not having the requisite infrastructure is a common scenario.

The approach described below quickly mitigates this issue by allowing your application to load some of its code components at runtime using the Managed Extensibility Framework (MEF). It relies on MEF to load assemblies that implement a shared interface at runtime, and on a RequestRouter object that routes the different requests to the appropriate implementation. However, it is no substitute for a more robust architecture – especially since MEF runs these assemblies within the application domain of your main application (with access to all of its in-memory objects, and the possibility that bad component code could crash your entire application).

Setting Up Extensible Components

None of this works without components that share an interface, for example:

A report interface for application-driven reports:

interface IReportDefinition
{
    string ReportName { get; } // unique identifier for this report definition
    ReportResponse ExecuteReport(ReportParameters parameters);
    bool AcceptsExecutionRequest(ReportParameters parameters);
}

Integration message handler interface:

interface IMessageHandler
{
    string MessageName { get; } // unique identifier for this type of message
    MessageResponse HandleMessage(Message msg);
    bool AcceptsExecutionRequest(Message msg);
}

Calculation handler interface:

interface ICalculationHandler
{
    string CalculationName { get; } // unique identifier for this type of calculation
    CalculationResult PerformCalculation(CalculationInputParameters parameters);
    bool AcceptsExecutionRequest(CalculationInputParameters parameters);
}

In a similar manner, each interchangeable component collection can be set up behind an interface. Another important detail is to keep the definition of the interface in a different assembly (ie project) than the implementations of the interface. This way the two get compiled into different DLLs, and different report definitions don’t need to depend on each other. So far, this is all standard object-oriented design.

Using MEF and Handling Requests

To tie MEF into the solution, we need to set up the correct attributes on the IReportDefinition implementations:

[Export(typeof(IReportDefinition)), PartCreationPolicy(CreationPolicy.NonShared)]
public class TestReportDefinition : IReportDefinition
{...}

The [Export] attribute specifies that instances of this class will be exported by MEF (to be imported elsewhere). A corresponding [ImportMany] attribute with the same type in the RequestRouter class identifies the collection that will hold the instances of all the different IReportDefinition implementations:

class RequestRouter
{
   ...
   [ImportMany(typeof(IReportDefinition), AllowRecomposition = true)]
   protected IEnumerable<IReportDefinition> ReportDefinitions { get; set; }

   ReportResponse RouteReport(ReportParameters parameters)
   {
       foreach (var repDef in ReportDefinitions)
           if (repDef.AcceptsExecutionRequest(parameters)) // find the first match
               return repDef.ExecuteReport(parameters);
       throw new ReportNotFoundException(parameters); // couldn't find the report
   }
   ...
}

The collection is iterated through whenever a request gets to the RouteReport method.

The ‘magic’ that instantiates the IReportDefinition objects (via .NET reflection) and loads them into the ReportDefinitions collection is the CompositionContainer.Compose() method. This is how you can tie it to two different extension directories:

var catalog = new AggregateCatalog();           
catalog.Catalogs.Add(new DirectoryCatalog("C:\\ExtensionsPath1\\"));
catalog.Catalogs.Add(new DirectoryCatalog("C:\\ExtensionsPath2\\"));
var batch = new CompositionBatch();
batch.AddPart(this); //populates the collection of composable parts in this object
CompositionContainer container = new CompositionContainer(catalog); 
container.Compose(batch);

Note: the Compose method can fail (if the assemblies cannot be loaded correctly, if the .NET versions don’t match, etc.), so it is a good idea to wrap it in a try/catch block and explicitly handle the reflection and composition exceptions.
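A minimal sketch of that defensive wrapping (the exception types shown are illustrative, not exhaustive) might be:

// Requires System.ComponentModel.Composition and System.Reflection.
try
{
    container.Compose(batch);
}
catch (CompositionException ex)
{
    // A part failed to compose (missing export, contract mismatch, etc.)
    Console.WriteLine(ex);
}
catch (ReflectionTypeLoadException ex)
{
    // One of the assemblies in the catalogs could not be loaded.
    foreach (var loaderException in ex.LoaderExceptions)
        Console.WriteLine(loaderException);
}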

Once MEF is set up to load assemblies from a set of folders, all we need to do is attach a FileSystemWatcher to those folders’ Changed and Created events and re-run the composition above to update our collection of report definitions.
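A sketch of that wiring, assuming the composition code above has been wrapped in a hypothetical Recompose() method on the same class, might look like:

// Requires System.IO. Keep a reference to each watcher so it isn't garbage collected.
private readonly List<FileSystemWatcher> _watchers = new List<FileSystemWatcher>();

private void WatchExtensionPath(string path)
{
    var watcher = new FileSystemWatcher(path, "*.dll");
    watcher.Created += (sender, e) => Recompose(); // new assembly dropped in
    watcher.Changed += (sender, e) => Recompose(); // existing assembly updated
    watcher.EnableRaisingEvents = true;
    _watchers.Add(watcher);
    // Alternatively, DirectoryCatalog.Refresh() can rescan an existing catalog.
}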

This approach will work correctly for any new assemblies dropped into the extension paths. But it will not pick up multiple copies of the same assembly unless the assembly is strongly named. The only workaround I could find is to rename the assembly before dropping it into the extensions folder (so it is treated as a new assembly) and to update the RouteReport method to pick the assembly with the latest modified date for the request.
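A hedged sketch of that ‘latest wins’ selection inside RouteReport (ordering the matching definitions by their assembly file’s last write time; System.Linq and System.IO assumed) could be:

// Sketch only: prefer the matching definition whose assembly was written most recently.
var candidate = ReportDefinitions
    .Where(r => r.AcceptsExecutionRequest(parameters))
    .OrderByDescending(r => File.GetLastWriteTime(r.GetType().Assembly.Location))
    .FirstOrDefault();
if (candidate == null)
    throw new ReportNotFoundException(parameters);
return candidate.ExecuteReport(parameters);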

Next Steps:

  • Put together a working example of the above in .NET Fiddle using a very small assembly (loaded from an in-line defined byte array)
  • Explore the Managed Add-in Framework (MAF) for better code isolation
  • Explore convention-based MEF (instead of using attributes)

 

Trying Tries (C# IDictionary Generic Trie)

Image: a 4-key Trie

Tries are a classic implementation of the symbol-table (or associative array) abstraction, allowing quick retrieval of values based on (typically string) keys. They are a useful way of implementing a dictionary and excel in cases where you want non-exact matches (ie the longest prefix match) or all of the keys that extend a given prefix (ie a drop-down that shows you all of the matching options as you start typing).

There are a few good C# Trie implementations available online (see here and here), but I did not see any that implement the generic C# IDictionary interface. My goal was to create a Trie implementation that could be used interchangeably with the default C# Dictionary, and that could also be used with non-string keys.

All of the code is available here. One significant caveat – the overhead I added to make the structure work with the IDictionary interface has a performance impact. In many cases, the default Dictionary implementation outperforms this Trie implementation (more details below). However, even with a relatively inefficient implementation, the Trie outperforms the hashtable-based .NET Dictionary for the longestPrefix method.

How Do Tries Work

Image 1: Example of a string Trie for ‘A’, ‘AD’, ‘ADD’, ‘ADDRESS’, ‘BAT’

A Trie is a type of tree where every arc represents a character and the path from the root to a value node represents a valid key.

Above is an example of a simple Trie, with the nodes marked in green being the accepting ‘value nodes’. The Trie data structure maintains a reference to the root node.

Finding a key means traversing down from the root node character by character. If we end up with a null reference or at a non-value node, the key is not in the Trie.
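The repository linked above is generic over the key type and implements the full IDictionary interface; the simplified, string-only sketch below (not the code from the repository) shows the node structure, insertion, lookup, and the longest-prefix walk:

// Requires System.Collections.Generic.
class TrieNode<TValue>
{
    public Dictionary<char, TrieNode<TValue>> Children = new Dictionary<char, TrieNode<TValue>>();
    public bool HasValue;   // true for the green 'value nodes' in Image 1
    public TValue Value;
}

class SimpleTrie<TValue>
{
    private readonly TrieNode<TValue> _root = new TrieNode<TValue>();

    public void Add(string key, TValue value)
    {
        var node = _root;
        foreach (char c in key)
        {
            TrieNode<TValue> child;
            if (!node.Children.TryGetValue(c, out child))
            {
                child = new TrieNode<TValue>();
                node.Children[c] = child;   // create the missing arc
            }
            node = child;
        }
        node.HasValue = true;
        node.Value = value;
    }

    public bool TryGetValue(string key, out TValue value)
    {
        value = default(TValue);
        var node = _root;
        foreach (char c in key)                         // walk one arc per character
            if (!node.Children.TryGetValue(c, out node))
                return false;                           // fell off the tree: key absent
        if (!node.HasValue) return false;               // reached a non-value node
        value = node.Value;
        return true;
    }

    // Longest key in the Trie that is a prefix of 'query' (null if none).
    public string LongestPrefix(string query)
    {
        var node = _root;
        string best = null;
        for (int i = 0; i < query.Length; i++)
        {
            if (!node.Children.TryGetValue(query[i], out node))
                break;
            if (node.HasValue)
                best = query.Substring(0, i + 1);
        }
        return best;
    }
}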

The Trie structure is built up as keys are added, and when a key is removed the appropriate nodes are pruned from the tree (for example, removing ‘ADDRESS’ from this Trie would remove 4 nodes):

Image 2: The Trie in Image 1 after removing the ‘ADDRESS’ key

The strength of the Trie data structure is that lookup time is proportional to the length of the key rather than to the number of elements in the data structure. For small keys (ie words in the English language) this is effectively O(1), vs. O(log n) for binary search, which puts the Trie on par with a hash table for average time complexity.

Performance vs. .NET Dictionary

The .NET Dictionary is an efficient hash table implementation with O(1) lookups. It has a lot less overhead than this Trie implementation and easily outperforms it for general CRUD operations. However, for the longest prefix operation the Trie wins every time. Since it does not need to scan the entire dataset, the Trie is over 100 times more efficient than the basic Dictionary implementation:

Chart: Trie vs. Dictionary – longest prefix performance

For finding all suffixes, however, the two scale similarly and the .NET Dictionary outperforms the Trie. This is likely due to the efficient implementation of string operations in .NET:

Chart: Trie vs. Dictionary – find all suffixes performance

Next Steps

  • This Trie implementation stores the entire string key in each node. This is very space-inefficient, but was necessary to fit the Trie into the .NET dictionary interface. I would like to find a way to store each letter only once.
  • The internal tree data structure relies on a Dictionary to connect each node to its children. With more knowledge of the alphabet, this can be optimized.
  • Implement ternary search tries.
  • Build a hybrid dictionary that uses both a Trie and a hashtable to optimize for all operations without sacrificing too much space.
  • Explore other, non-string use cases of Tries.
  • Since Tries are a specific type of finite state automaton (non-cyclical, a tree rather than a graph), look for more general solutions that can bridge both problem sets.

 

C# LSD Radix Sort

The Least Significant Digit (LSD) Radix sort algorithm is a classic near-linear-time sorting algorithm. It is a good example of how additional information about an algorithm’s inputs allows us to improve our code’s efficiency.

In the real world, for the overwhelming majority of applications, using the native .NET Array.Sort() implementation is efficient and adequate. The native sort algorithm implemented in the .NET library is a smart combination of three different sort algorithms (insertion sort, heapsort, and quicksort), chosen depending on the input parameters. This provides a worst-case runtime on the order of O(n log n), where n is the input size.

However, theory is very powerful, and for some applications, when you know more about the input (for example – the range and distribution of the population), you can achieve near-linear time sorting. Counting sort is a classic simple example of the concept, useful for sorting integers.

What follows is a C# implementation of LSD Radix sort. This algorithm sorts strings (or anything that can be represented as a string) in O(n*k) time, where k is the average length of each string key. For many key domains (DNA sequences, words in the English language, ISBNs, etc.), this means near-linear performance. The complete implementation is available here.

How the Algorithm Works

During each step we sort the strings according to one of their characters, starting from the rightmost character and working our way left to the first character:

Diagram: sorting steps, one character position at a time

Note that the number of these steps depends on the length of the longest key, which is why the performance of the algorithm is O(k*n).

Within each step, sorting the keys by one character is performed via bucket sort: we create R buckets (one bucket for each letter in our alphabet) and add the strings to the buckets in order. The buckets are then combined in order and the result is fed into the next step. In the C# implementation I used queues to represent the buckets:

Diagram: distributing keys into queues and merging them back in order
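A hedged sketch of this queue-based pass, assuming fixed-length lowercase a–z keys (a simpler setting than the linked implementation handles), could look like this:

// Requires System.Collections.Generic.
static string[] LsdRadixSort(string[] keys, int keyLength)
{
    const int R = 26;                                 // alphabet size (a-z)
    var buckets = new Queue<string>[R];
    for (int b = 0; b < R; b++)
        buckets[b] = new Queue<string>();

    var current = keys;
    for (int pos = keyLength - 1; pos >= 0; pos--)    // rightmost character first
    {
        foreach (var key in current)
            buckets[key[pos] - 'a'].Enqueue(key);     // distribute into buckets in order

        // Recombine the buckets in alphabet order; the queues preserve stability.
        var merged = new List<string>(current.Length);
        for (int b = 0; b < R; b++)
            while (buckets[b].Count > 0)
                merged.Add(buckets[b].Dequeue());
        current = merged.ToArray();
    }
    return current;
}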

LSD Radix Sort vs. Array.Sort()

The C# implementation of Radix LSD sort linked above performs much faster than the native .NET Array.Sort() implementation:

Chart: LSD Radix sort vs. Array.Sort() performance

LSD Radix sort also has the benefit of being stable (elements with the same key keep their relative order), and it allows for very easy reconfiguration of the alphanumeric ordering (by changing the alphabet definition).

However, at peak it does end up using roughly 2n + k memory, and for most cases the O(n log n) performance of the native algorithm is more than adequate. The LSD Radix algorithm also does not lend itself well to parallelization.

Optimization

The implementation provided can be optimized in a variety of ways:

  • Scan the input once for the alphabet. The implementation relies on us already knowing the alphabet and the length of the longest key string – a real-world implementation would likely need a quick pass over the input to gather this information upfront.
  • Use a more efficient data structure than queues – the algorithm spends a lot of time merging the queues between the different iterations. An array-based data structure that handles these merges in constant time (by modifying index references rather than copying values) would significantly improve the runtime of the algorithm.
  • Compound alphabets could be used (ie combining every two adjacent letters and increasing the number of queues at each step to R×R) to optimize performance for cases where you know some two-letter combinations are very common (ie consonants and vowels in the English language). This would come at the cost of much higher memory consumption.

Let me know if you have any ideas as to how the implementation could be improved.

 

DFS/BFS Graph Search with C#

This is a basic implementation of DFS/BFS graph search in C#. I have not seen many C# implementations (most educational examples use Java) that take advantage of the built-in .NET data structures (Stack vs. Queue).

The search method returns a list corresponding to the discovered path, in order. The sample dataset is set up to illustrate the different results: BFS will always return the shortest path (by number of edges).

Code: https://dotnetfiddle.net/HCR4O8
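The fiddle above has the full version; a minimal sketch of the core idea (the only difference between BFS and DFS is whether the frontier is a Queue or a Stack) might look like:

// Sketch: BFS over an adjacency-list graph. Swapping the Queue<int> for a Stack<int>
// (and Enqueue/Dequeue for Push/Pop) turns this into DFS.
static List<int> Bfs(Dictionary<int, List<int>> graph, int start, int goal)
{
    var frontier = new Queue<int>();
    var cameFrom = new Dictionary<int, int>();   // child -> parent, for path reconstruction
    frontier.Enqueue(start);
    cameFrom[start] = start;

    while (frontier.Count > 0)
    {
        var node = frontier.Dequeue();
        if (node == goal)
        {
            var path = new List<int>();
            for (var n = goal; n != start; n = cameFrom[n])
                path.Add(n);
            path.Add(start);
            path.Reverse();                      // path in order from start to goal
            return path;
        }
        foreach (var next in graph[node])
            if (!cameFrom.ContainsKey(next))     // visit each node at most once
            {
                cameFrom[next] = node;
                frontier.Enqueue(next);
            }
    }
    return null;                                 // no path between start and goal
}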