The main problem with computers nowadays is that the vast majority of them are based on the same computation model, the "von Neumann model", that we all know from school. In this article I am going to present a less known face of computability; basically, I am going to scratch the surface of networks of functions.

The idea behind such systems is to create networks of functions, which are functional units that can perform some operation on the input(s) and generate a **single output**. The idea of function networks is also used in artificial neural networks, with the big difference that in the case of neural networks the functions are built based on the training data.

If you are not very familiar with the notion of a function, I will try to explain it in a very simple fashion, ignoring all the formal aspects of the definition. We could say that a function is a transformation. A simple representation of a function can be seen in the image below; a function has some inputs, in this case just one, and a single output. A function performs an operation F, which can theoretically be any possible transformation, from simple addition up to very complex transformations. As an observation, the inputs and the output don't have to be primitive data (such as numbers); they can be anything you want, e.g. a very complex data model with very complex internal state and so on. Up to this point nothing seems very interesting; in fact these are things you may already know. What you may not know is that we can group such functions into a network to perform computation. So basically, programming such systems looks more like playing with Lego pieces.
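As a minimal sketch of this idea (in Python, with illustrative names that are mine, not part of any standard), each function node is just a callable with a single output, and "playing with Lego pieces" amounts to wiring the output of one node into the input of another:

```python
# A function node is just a transformation: input(s) -> single output.
# The names double, add_one and compose are illustrative examples.

def double(x):
    return 2 * x

def add_one(x):
    return x + 1

def compose(f, g):
    """Wire the output of node g into the input of node f."""
    return lambda x: f(g(x))

# A tiny two-node "network": x -> 2*x -> 2*x + 1
network = compose(add_one, double)
print(network(3))  # 7
```

The point is that the network is assembled from independent units, not written as one monolithic procedure.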

A very simple way to show the power of networks of functions is to define a simple function with N inputs which produces 1 as output only if the sum of the inputs is at least equal to a fixed threshold value (in short, F = 1 only if sum(inputs) >= threshold). In the next figure you can see an implementation of a logical AND. By changing the value of the threshold to 1 we obtain a logical OR. By combining such functions into networks we can obtain really complex functionality. The main power of such a network is its parallel nature: basically, every operation can be performed independently of the others.
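The threshold function above can be sketched in a few lines of Python (a hedged illustration; the names `threshold_unit`, `AND`, `OR` are mine). With two inputs, a threshold of 2 gives AND and a threshold of 1 gives OR, and the units can then be wired together into a small network:

```python
# A threshold unit: outputs 1 only if the sum of its inputs reaches
# the threshold value, exactly as described in the text above.

def threshold_unit(threshold):
    return lambda *inputs: 1 if sum(inputs) >= threshold else 0

AND = threshold_unit(2)  # both of the two inputs must be 1
OR  = threshold_unit(1)  # at least one input must be 1

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0

# Combining units into a small network: (a OR b) AND c
def small_network(a, b, c):
    return AND(OR(a, b), c)

print(small_network(1, 0, 1))  # 1
print(small_network(0, 0, 1))  # 0
```

Each unit only sees its own inputs, which is what makes the independent, parallel evaluation mentioned above possible.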

Another example that I like is writing the Fourier series using function networks. If you want to find out more about Fourier series, go to http://en.wikipedia.org/wiki/Fourier_series. The Fourier series is described by the following expression: f(x) = a0/2 + sum over n of (an·cos(n·x) + bn·sin(n·x)).

This can be represented using a network like the one in the following image; the main advantage of this **computability model** is that the sin and cos functions can be computed in parallel, in contrast with a traditional iterative model where the value of each sin and cos term is computed step by step.
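A rough Python sketch of this parallel evaluation (the helper names and the example coefficients are my own, assuming a thread pool as the parallel mechanism): each Fourier term is an independent function node, so all terms can be evaluated concurrently and then summed.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# One "function node": the n-th term of a truncated Fourier series.
def fourier_term(n, a_n, b_n, x):
    return a_n * math.cos(n * x) + b_n * math.sin(n * x)

def fourier_series(a0, a, b, x):
    # Each term depends only on its own inputs, so the terms can be
    # computed in parallel, mirroring the network in the figure.
    with ThreadPoolExecutor() as pool:
        terms = pool.map(lambda args: fourier_term(*args),
                         [(n + 1, a[n], b[n], x) for n in range(len(a))])
    return a0 / 2 + sum(terms)

# Illustrative coefficients: the classic square wave,
# where b_n = 4/(n*pi) for odd n and 0 otherwise.
N = 50
a = [0.0] * N
b = [4 / (n * math.pi) if n % 2 == 1 else 0.0 for n in range(1, N + 1)]
print(fourier_series(0.0, a, b, math.pi / 2))  # close to 1.0
```

An iterative model would walk through the terms one by one; here the loop over terms is only a description of the network, and the scheduler is free to evaluate the nodes in any order or all at once.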

Although this model is used mainly for computation, it can also be extended to other areas, for example as a model for adaptive ranking (http://portal.acm.org/citation.cfm?id=1062806).