
I believe most of us are familiar with the comparison between the computing power of the Apollo moon lander and a simple pocket calculator. The comparison is already a bit outdated – today, each of us carries more computing power in a pocket than the whole of NASA had at the time.

Still, none of us would be keen to travel to the Moon today relying on the computing power of a couple-of-hundred-dollar smartphone. Moore’s law has done its job – it has taught us to rely on the practically infinite computing power available to us.

There is one thing, though, that has not changed much since Neil Armstrong set foot on the Moon in 1969. Just a couple of months later, the groundwork for the Internet was laid as the first ARPANET connections were made, eventually resulting in the Internet as we know it. The protocols and the architecture evolved rapidly for a couple of decades, but the foundations of the Internet are still the same.

The original Internet architecture was based on the computing capacity available when the Internet backbone was established, starting in 1969 and spanning the next few decades. Today’s development paradigms are still shaped by the wars between the “netheads”, who wanted the Internet to be a loosely coupled network, and the “bellheads”, who preferred centralized control – see an interesting 1996 Wired article explaining these “infrastructure wars”.

As we now know, the netheads won those wars, making the Internet the decentralized, loosely controlled infrastructure available to all of us today. This has led us to understand the Internet as a “stupid” or “dumb” network whose only responsibility is to move information from one place to another. Today’s developers treat the Internet backbone as exactly such a utility, providing nothing more than the capability to transfer information. In this paradigm, the intelligence lives “on the edge” of the network. We are used to drawing the Internet in our diagrams as a cloud because we are not interested in its capabilities – the actual functionality is always drawn outside of that cloud and delivered from the edge of the network.

We build our applications and services in more or less centralized data centers, without realizing that we are using one of the world’s best inventions – an incredibly fault-tolerant and efficient network – only for the “dumb” tasks it was originally designed for. And with more than 40 years of Moore’s law in action, I think some of the Internet’s initial design principles should be considered outdated.

I agree with the netheads that the Internet should provide just the basic services, and that the intelligence should live on the edge of the network – in the applications that serve end users. What should be reconsidered, however, is the definition of “basic services”. Forty years ago, it was clear that the basic service was information exchange; instead of expensive and inefficient leased lines or telephone connections, the packet-switched network provided an effective and cost-efficient way to transfer data.

Now that Moore’s law has been thoroughly proven in practice, it is time to reconsider that “basic service”. Is data exchange enough? Or should the network also provide services related to data processing – freeing the “edge” designers to focus on application-level tasks instead of worrying about where to host their routine computing?

The Internet was initially designed around simple routers with very little computing capacity. Today the situation is different – just as we carry moon-flight-era NASA computing capacity in our pockets, the current Internet infrastructure could serve far more purposes than the backbone was originally designed for.

This computing capacity could, for instance, serve as a backbone for integration workflow engines. Vendors who provide integration services – technologies for exchanging and manipulating information between different applications – each build their own processing engines to take care of process control, execution, transformations and so on. They are all building essentially similar (but hardly interoperable) engines based on their own preferences.

Think about the possibility that instead of multiple vendor-specific engines, there were a single processing backbone – a beefed-up Internet that would not only transfer the data but also process it on the way. Think about the efficiencies – all the vendors could rely on the same processing service and focus on their own strengths, such as development tooling and adapters. Think about a network that would efficiently move data between applications across continents, without relying on a single, centralized workflow engine hosted in vendor-controlled data centers.

Think about iPaaS (integration platform as a service) transforming into iPaaNS (integration platform as a network service).
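
To make the idea a bit more concrete, here is a minimal sketch in plain Python – with entirely hypothetical names, not any vendor’s API – of what “processing data on the way” could look like: each forwarding hop may apply one step of an integration workflow, instead of a central engine doing all the work.

# A minimal sketch, with hypothetical names throughout, of processing data
# "on the way": every forwarding hop may apply one step of an integration
# workflow instead of a central engine doing all the work.

import json
from typing import Callable, Optional

Transform = Callable[[dict], dict]

class RelayNode:
    """A hop that forwards a message and can also transform it in transit."""

    def __init__(self, name: str, transform: Optional[Transform] = None,
                 next_hop: Optional["RelayNode"] = None):
        self.name = name
        self.transform = transform
        self.next_hop = next_hop

    def handle(self, payload: bytes) -> bytes:
        message = json.loads(payload)
        if self.transform is not None:
            message = self.transform(message)      # processing happens in transit
        out = json.dumps(message).encode()
        return self.next_hop.handle(out) if self.next_hop else out

# Example: map a CRM-style record to an ERP-style record, hop by hop.
erp_edge = RelayNode("erp-edge")
mapper = RelayNode("mapper",
                   transform=lambda m: {"customer_id": m["id"], "full_name": m["name"]},
                   next_hop=erp_edge)
crm_edge = RelayNode("crm-edge", next_hop=mapper)

print(crm_edge.handle(b'{"id": 42, "name": "Ada"}'))
# b'{"customer_id": 42, "full_name": "Ada"}'

In a real iPaaNS those hops would be network elements or peers rather than objects in a single process, but the division of labour would be the same: the transfer and the transformation happen in the same fabric.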

It is hard to change the way the Internet works, but it will eventually change. An iPaaNS-style service faces plenty of challenges – not the least of which are security and confidentiality – but these can be resolved. And if we cannot embed iPaaNS functionality directly into the Internet backbone, we can always start with an application-level peer-to-peer architecture that provides the common processing service, and let it move down the technology stack over time.
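
As a rough illustration of that application-level starting point, here is a toy sketch of such an overlay: peers advertise processing capabilities, and a message is routed through whichever peers offer the steps it needs. The Overlay class and its methods are invented for the example and stand in for real peer discovery and routing.

# A toy sketch of an application-level overlay providing a common processing
# service. All names here are hypothetical.

from typing import Callable, Dict, List

class Overlay:
    def __init__(self) -> None:
        self.peers: Dict[str, Callable[[dict], dict]] = {}

    def join(self, capability: str, handler: Callable[[dict], dict]) -> None:
        self.peers[capability] = handler        # a peer advertises one service

    def process(self, message: dict, steps: List[str]) -> dict:
        # Route the message through the peers offering each required step.
        for step in steps:
            message = self.peers[step](message)
        return message

overlay = Overlay()
overlay.join("anonymize", lambda m: {**m, "email": "***"})
overlay.join("enrich",    lambda m: {**m, "region": "EMEA"})

print(overlay.process({"email": "ada@example.com"}, ["anonymize", "enrich"]))
# {'email': '***', 'region': 'EMEA'}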

Stay tuned.


Topics: integrations, internet, iPaaS