This varies vastly, but I am thinking about a model
for a distributed system I am planning to write, which
will probably use Java (for many reasons).
My system will require massive CPU usage compared to the
rate of communication between nodes, which will be little
to none. The hope is that the nodes would all share the same
physical and logical Ethernet segment, that there would be
/NO/ single point of failure (at the node end, anyway), and
that they would communicate requests for critical sections
using broadcast. That may seem like a bad idea, but this
small test network is not intended to be connected to an
external network.
UDP packets will be sent both for the requests and for the
results returned in response to them.
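Something like the following is roughly what I have in mind
for the request side. It is only a sketch: the port number
and the plain-text payload format (nodeId:requestId:REQUEST)
are made up for illustration.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class CriticalSectionRequest {
        // Hypothetical port; any free UDP port on the private segment would do.
        static final int PORT = 4711;

        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                // Hypothetical payload format: nodeId:requestId:REQUEST
                byte[] payload = "node-1:42:REQUEST".getBytes(StandardCharsets.UTF_8);
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), PORT);
                // Every node on the shared Ethernet segment sees the request.
                socket.send(packet);
            }
        }
    }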
For a truly distributed system it is also advisable that each
node keep a state table of the current request/result matrix.
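Roughly what I mean by that state table, again only a sketch,
assuming each request is keyed by a nodeId:requestId string and
results are plain strings:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RequestStateTable {
        // One row per request heard on the segment; an empty string
        // means no result has been seen for that request yet.
        private final Map<String, String> table = new ConcurrentHashMap<>();

        // Record a request heard over broadcast.
        public void onRequest(String nodeId, int requestId) {
            table.putIfAbsent(nodeId + ":" + requestId, "");
        }

        // Record the result broadcast for that request.
        public void onResult(String nodeId, int requestId, String result) {
            table.put(nodeId + ":" + requestId, result);
        }

        public boolean hasResult(String nodeId, int requestId) {
            String r = table.get(nodeId + ":" + requestId);
            return r != null && !r.isEmpty();
        }
    }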
For a better discussion of distributed systems in general,
and distributed operating systems in particular,
I /highly/ recommend Distributed Operating Systems
by Andrew S. Tanenbaum.
Little ideas which will probably come to nothing,
but interesting concepts nonetheless.
My model as above would run on a private network using Amoeba
with a threaded Prolog compiler. I am sure that with some work
most (if not all) of what can be done in Prolog could be ported
to C, but the threading may have to be improved, my model as
described above would have to be built into Amoeba (possibly at
the kernel level), and a new time-sync system would be needed.
Distributed AI...prototype called borgnet.
Also, for critical paths a process would be taken on by two
nodes rather than one, and a vote on the result would be taken
over broadcast. If the vote were inconclusive, two other nodes
would be given the task and the previous results cached; the
best of the four results would then be accepted.
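Something along these lines for the vote, as a rough sketch;
the String result type and the strict-majority rule are my own
assumptions here:

    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class ResultVote {
        // Return the result a strict majority of nodes agrees on,
        // or empty if the vote is inconclusive and two more nodes
        // should be given the task.
        static Optional<String> tally(List<String> results) {
            Map<String, Long> counts = results.stream()
                    .collect(Collectors.groupingBy(Function.identity(),
                                                   Collectors.counting()));
            long needed = results.size() / 2 + 1; // strict majority
            return counts.entrySet().stream()
                    .filter(e -> e.getValue() >= needed)
                    .map(Map.Entry::getKey)
                    .findFirst();
        }

        public static void main(String[] args) {
            // First round: the two nodes disagree, so the vote is inconclusive.
            System.out.println(tally(List.of("A", "B")));          // Optional.empty
            // Second round: four cached results, best (majority) of four accepted.
            System.out.println(tally(List.of("A", "B", "A", "A"))); // Optional[A]
        }
    }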
Much of this system, mainly being
1) Ethernet Broadcast
2) Constant caching
may seem inefficient and overkill, however it does eliminate any
single point of failure (on the node side).