I'm fascinated by small-world networks and how adding a little randomness can dramatically change how connected a network is. Our own neural networks are structured a bit like this. I wanted to explore this on my own, so I wrote these programs to play around with random versus clustered linking. (My approach is based on the beta model of Watts and Strogatz.)
Clusters are like social cliques, where members of a clique link mostly to each other, so it takes many steps to move across a tightly clustered network. The first program, a ring network, has 100 nodes, and each node has two or four links (edges). (The goal is to start at node #0 and reach the node farthest from it, node #50.) The grid network defaults to 500 nodes, each with 5 neighbors.
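As a rough sketch of that structure (not the actual code behind these demos), here is how a ring lattice and a hop count between two nodes might look in JavaScript; ringLattice and steps are hypothetical helper names, and the node and link counts are parameters:

```javascript
// Build a ring lattice: n nodes, each linked to its k nearest neighbors
// (k/2 on each side). Returns an adjacency list (array of arrays).
function ringLattice(n, k) {
  const adj = Array.from({ length: n }, () => []);
  for (let i = 0; i < n; i++) {
    for (let j = 1; j <= k / 2; j++) {
      adj[i].push((i + j) % n);     // neighbor ahead on the ring
      adj[i].push((i - j + n) % n); // neighbor behind on the ring
    }
  }
  return adj;
}

// Breadth-first search: number of hops from node `start` to node `goal`.
function steps(adj, start, goal) {
  const dist = new Array(adj.length).fill(-1);
  dist[start] = 0;
  const queue = [start];
  while (queue.length > 0) {
    const v = queue.shift();
    if (v === goal) return dist[v];
    for (const w of adj[v]) {
      if (dist[w] === -1) {
        dist[w] = dist[v] + 1;
        queue.push(w);
      }
    }
  }
  return -1; // goal unreachable from start
}
```

With 100 nodes and two links each, steps(ringLattice(100, 2), 0, 50) returns 50; with four links it returns 25, since each hop can cover at most k/2 positions around the ring.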
Replace just a few near neighbors with random links (think of it, metaphorically, as adding diversity). Now it takes far fewer steps to move across the network. As the amount of randomness goes up linearly, the number of steps needed to traverse the network drops dramatically and non-linearly.
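Here is a minimal sketch of that rewiring step, reusing the hypothetical ringLattice adjacency list above; since the post notes these are directed graphs, each outgoing link is rewired independently:

```javascript
// Watts-Strogatz-style rewiring: with probability `beta`, redirect each
// outgoing link to a uniformly random node (avoiding self-links).
// Modifies the adjacency list in place.
function rewire(adj, beta) {
  const n = adj.length;
  for (let i = 0; i < n; i++) {
    for (let e = 0; e < adj[i].length; e++) {
      if (Math.random() < beta) {
        let target;
        do {
          target = Math.floor(Math.random() * n);
        } while (target === i);
        adj[i][e] = target; // replace the near neighbor with a random link
      }
    }
  }
}
```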
To explore these networks, change the amount of randomness, from highly structured at 0% to completely random at 100%. Even 5% randomness (the default here) yields an amazingly well-connected network.
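To see the non-linear drop for yourself, one might sweep the randomness and average the hop count over a few trials (again using the hypothetical helpers above; avgSteps is an assumed name):

```javascript
// Average hop count from node 0 to node 50 over several random rewirings.
function avgSteps(beta, trials = 20) {
  let total = 0, reached = 0;
  for (let t = 0; t < trials; t++) {
    const adj = ringLattice(100, 4);
    rewire(adj, beta);
    const s = steps(adj, 0, 50);
    if (s >= 0) { total += s; reached++; } // skip runs where #50 is unreachable
  }
  return reached > 0 ? total / reached : NaN;
}

// Sweep randomness from 0% to 100%.
for (const beta of [0, 0.01, 0.05, 0.1, 0.25, 0.5, 1.0]) {
  console.log(`beta = ${beta}: ~${avgSteps(beta).toFixed(1)} steps`);
}
```

At beta = 0 the path takes 25 hops (100 nodes, 4 links each); even a small beta cuts that sharply, which is the whole point of the small-world effect.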
These programs are recent reworkings of the original Processing programs (which won't run in current browsers). The new versions are written in JavaScript (with a little help from my AI friends). Also note that these are directed graphs, something that is not immediately apparent in the current renderings.
suggested reading (books)