Pipeline Publishing, Volume 4, Issue 5
This Month's Issue:
Keeping Promises

Self-* Networks: Helping Networks
Help Themselves


“… the 3 key functions:
  1. Monitoring - all the nodes in the system or network must be instrumented… Like fractals, any node of a system might and I would say IS a system in itself so the same rules can be recursively applied down.

  2. Redundancy - to have a self healing system redundancy is mandatory. Call it fault tolerance, call it cluster, call it grid; it must be there.

  3. Central Control - the spine of the network or system has to be able to analyze, correlate and take decisions based on n-factor situations. Now, where existing architectures fail is in the ability to automate these central control tasks, and apply recovery rules with minimal human intervention.”

We see these as starting points toward control systems and patterns which embody situational awareness, self-similarity, self-*, and semantic self-organization.

There are commonalities in the approaches to designing self-* systems. These originate from the use of Internet middleware: most self-* design patterns liberally use the services developed for control and communication in the Internet, often extending and improving them. Principal among these is the Directory, which has been extended into the logical construct of the Registry. Originally, LDAP directories were the implementation mechanism, but databases behind a service interface can also be used.

The Registry is a semantic repository of metadata and collaborative patterns. It stores the pattern of service deployment and the initial state (startup data) for services. Think of the Registry as a static model of the entire self-* system. It can store references to all the hardware (called platforms) and the information necessary to securely log into these systems and deploy services on them. It stores the structure of logical domains in the system, the services needed in each domain, the service dependency mix for an application (so the application can be maintained as a unit), and the start state for any service that is launched.
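The Registry's role as a static model could be sketched as follows. This is a minimal in-memory illustration; all class, field, and service names here are invented for the example, and a production Registry would sit behind an LDAP directory or a database service interface rather than plain Python objects.

```python
# Minimal sketch of a Registry: a static model of platforms, domains,
# deployment patterns, and per-service start state. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    name: str
    start_state: dict                            # initial (startup) data for the service
    depends_on: list = field(default_factory=list)

@dataclass
class Registry:
    platforms: dict = field(default_factory=dict)  # host -> secure login/deploy info
    domains: dict = field(default_factory=dict)    # logical domain -> service names
    services: dict = field(default_factory=dict)   # service name -> ServiceEntry

    def register_service(self, domain, entry):
        self.services[entry.name] = entry
        self.domains.setdefault(domain, []).append(entry.name)

    def deployment_pattern(self, domain):
        """Return the static model: which services belong in a domain."""
        return [self.services[n] for n in self.domains.get(domain, [])]

reg = Registry()
reg.platforms["node-1"] = {"ssh_user": "deploy"}      # a hardware (platform) reference
reg.register_service("billing", ServiceEntry("rater", {"port": 9001}))
reg.register_service("billing", ServiceEntry("mediator", {"port": 9002},
                                             depends_on=["rater"]))
print([e.name for e in reg.deployment_pattern("billing")])  # → ['rater', 'mediator']
```

The dependency list is what lets the application pattern be maintained as a unit: regenerating one service can consult `depends_on` to find its collaborators.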

Most self-* systems use the same pattern of services both to launch services and to maintain their health. A collection of interacting service components is called an application pattern. The assumption is that any specific service component might (in fact, will at some point) fail, and when this is discovered, the system will regenerate the service on a healthy platform. All self-* systems are distributed systems and assume there is a pool of resources on which to deploy services. Today this pool is usually called a Grid and consists of many servers of similar (but not identical) characteristics.

If you are only reactive, you are bound into system eddies, facing the same problems over and over, or worse you become victim to Darwinian-like negative selection.

For example, if a specific server fails or becomes overloaded, the monitoring component discovers this and then regenerates all the lost services on healthy servers from the grid pool.
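That regenerate-on-failure behavior could be sketched as a simple placement loop. This is a hedged illustration, not any particular product's algorithm; the host and service names are invented, and "least loaded" here just means fewest deployed services.

```python
# Sketch of regeneration: move every service that was on a failed host
# to a healthy platform from the grid pool. All names are illustrative.
def regenerate(grid, deployments, failed_host):
    """Reassign services from failed_host to the least-loaded healthy servers."""
    healthy = [h for h in grid if h != failed_host]
    lost = [svc for svc, host in deployments.items() if host == failed_host]
    for svc in lost:
        # pick the healthy server currently hosting the fewest services
        target = min(healthy,
                     key=lambda h: sum(1 for x in deployments.values() if x == h))
        deployments[svc] = target
    return lost

grid = ["node-1", "node-2", "node-3"]
deployments = {"rater": "node-1", "mediator": "node-1", "gui": "node-2"}
regenerate(grid, deployments, "node-1")
print(deployments)   # rater and mediator now run on the surviving nodes
```

A real monitor would trigger this when a server's state report goes stale, and the launcher would redeploy the actual code rather than update a dictionary.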

In self-* systems, there are usually embedded behavioral constraints on services. A service must not assume anything about the platform on which it is to be deployed. Instead, the service is deployed into a virtual machine (or virtual grid middleware, such as a specialized application server or web server) that is deployed on every server in the grid pool. This virtual machine is usually called a Container. Services understand the Container artifact, and the Container can host any service. A Service Loader or Service Launcher looks at the Registry for the pattern of service deployment and then launches the services into containers to match this pattern. The service code must come from some form of code server. In Java systems this is usually an HTTP server delivering Java Jar files. So deploying a software update just becomes a matter of putting the new version of the code in a code server and revoking the old code's authentication. The updates are then distributed throughout the system.
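The Container and Service Launcher relationship might look like the following sketch. In a Java grid the code server would be an HTTP server delivering jar files; here a plain dictionary of constructors stands in for it, and every name is an assumption made for illustration.

```python
# Illustrative Container and Service Launcher following the pattern above.
# A dict of callables stands in for the code server; names are invented.
class Container:
    """One per grid server; can host any service."""
    def __init__(self, host):
        self.host, self.hosted = host, {}

    def deploy(self, name, code, start_state):
        self.hosted[name] = code(start_state)    # instantiate the service from its code

class ServiceLauncher:
    def __init__(self, registry_pattern, code_server):
        self.pattern = registry_pattern          # {service: start_state} from the Registry
        self.code_server = code_server           # {service: constructor} code-server stand-in

    def launch_all(self, container):
        for name, state in self.pattern.items():
            container.deploy(name, self.code_server[name], state)

class EchoService:                               # trivial stand-in service
    def __init__(self, state):
        self.state = state

c = Container("node-1")
ServiceLauncher({"rater": {"port": 9001}}, {"rater": EchoService}).launch_all(c)
print(c.hosted["rater"].state)                   # → {'port': 9001}
```

Because services only understand the Container artifact, swapping the code server's constructor for a new version is all a software update requires.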

Generally, services must contain an interface to the Registry that fetches their initial state, or an interface to a proxy service that passes them this information. Getting and using this data, the service loads and starts up. Services are also required to contain a management/control interface. This interface finds and actively links to a monitoring service: it registers with the monitoring service and says, "I'm alive, and watch that I stay alive," periodically passing current state information. In many systems the service leases its existence in the Container, and must renew that lease periodically to be considered alive. This is not an alarm, per se. The system assumes that unless state information in the monitor is current, the service is no longer available. When the state is no longer current, the service is regenerated elsewhere: the monitor requests the Service Launcher to deploy the service (again).
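The leasing idea can be made concrete with a small sketch: a service is "alive" only while its lease is current, and when the lease lapses the monitor would ask the launcher to redeploy it. The class names and the five-second lease term are illustrative assumptions, not a real system's values.

```python
# Sketch of lease-based liveness: a heartbeat renews the lease; a stale
# lease means the service is presumed gone. Names and terms are illustrative.
class Monitor:
    LEASE = 5.0                       # seconds a renewal stays valid (assumed value)

    def __init__(self):
        self.leases = {}              # service -> (last renewal time, reported state)

    def renew(self, service, state, now):
        """The service says: "I'm alive, and watch that I stay alive."."""
        self.leases[service] = (now, state)

    def expired(self, now):
        """Services whose lease has lapsed; candidates for regeneration."""
        return [s for s, (t, _) in self.leases.items() if now - t > self.LEASE]

m = Monitor()
m.renew("rater", {"queue_depth": 3}, now=0.0)
m.renew("mediator", {"queue_depth": 0}, now=4.0)
print(m.expired(now=6.0))   # rater last renewed at t=0, lease lapsed: ['rater']
```

Note that nothing here is an alarm: the monitor never hears "I failed," it simply stops hearing "I'm alive."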

Services need a way of finding the other services they may require in their role as components of an application. This is usually provided by a Discovery service. A service finds the Discovery service (through a well-known mechanism, such as a fixed address) and then registers itself as an active service. Service
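A Discovery service reachable at a well-known address might be sketched like this. The dictionary-based lookup is an assumption standing in for a real discovery protocol, and the endpoint strings are invented.

```python
# Sketch of a Discovery service: services register as active, peers look
# them up by role. All names and endpoints are illustrative.
class Discovery:
    def __init__(self):
        self.active = {}                    # role -> endpoint of an active service

    def register(self, role, endpoint):
        self.active[role] = endpoint        # a service announces itself as active

    def lookup(self, role):
        return self.active.get(role)        # peers find collaborators by role

WELL_KNOWN = Discovery()                    # the "known address" every service uses
WELL_KNOWN.register("rater", "node-3:9001")
print(WELL_KNOWN.lookup("rater"))           # → node-3:9001
```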


© 2006, All information contained herein is the sole property of Pipeline Publishing, LLC. Pipeline Publishing LLC reserves all rights and privileges regarding
the use of this information. Any unauthorized use, such as copying, modifying, or reprinting, will be prosecuted under the fullest extent under the governing law.