Software development has always been a bit hairy. To be perfectly honest, we have only been doing it for around fifty years, and at some points we still do not really know what we are doing, or lack knowledge about best practices. Apart from that, our technology is moving at lightning speed, which of course makes it more difficult to predict where our industry will be in a number of years.
The technology sine wave
This can clearly be seen in the technology sine wave. In the early days we had powerful mainframes and dumb terminals. These were later replaced by fat clients. And slowly we are moving back to big mainframes (though we call them the cloud this time).
This pattern of moving back and forth is seen a lot in software development. At some point it is decided that something is too expensive or too slow, so that part is moved to another system. Once the bottleneck is cleared, the part is moved back. From this we can already learn that a solution that seems a good idea right now is likely to look terrible within a number of years.
In short: that is the current state of our industry.
Architect or real estate magnate?
Once upon a time, software was relatively simple; in fact, the industry was only just coming into existence. Once these programs started to grow, nobody actually knew how to manage that growth. Opting for the easiest solution, developers simply extended the existing programs.
This approach worked fairly well for quite some time. However, some time later (most often when the original developers had moved on), an organization could suddenly discover its software was not as scalable or maintainable as it had thought. In order to put off the cost of replacement, many of these applications (often running on the “blue-tie” IBM mainframes) were simply kept running.
Remote Procedure Call
Since these big pieces of software were incredibly hard to build and maintain (compare a single solid wall to a number of separate bricks), people at some point opted to split the applications. Instead of one big application, developers aimed for low coupling and high cohesion. The first iteration of this idea was the Remote Procedure Call (RPC), which originated in the Xerox Palo Alto Research Center in 1981. The researchers in question (Andrew D. Birrell and Bruce Jay Nelson) described how procedure calls were well understood, and how such calls could function over a network.
For many years this was the predominant way for software components to interact with each other. Support for it was added to many programming platforms, such as Java RMI.
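The core idea can be illustrated with a minimal, self-contained sketch using Python’s built-in XML-RPC support (a much later descendant of the original RPC work; the port number and the `add` procedure are arbitrary choices for this example):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Server side: expose an ordinary function as a remotely callable procedure.
server = SimpleXMLRPCServer(("localhost", 8899), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the network call look like a local
# procedure call, which is exactly the property RPC aims for.
proxy = xmlrpc.client.ServerProxy("http://localhost:8899")
result = proxy.add(2, 3)
print(result)
```

Note that the caller cannot tell from the call site that `add` runs in another process; the arguments and result are serialized and sent over the network behind the scenes.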
A number of years later, in the late nineties, the architectural pattern SOA (Service Oriented Architecture) emerged. SOA had simple principles: it aimed to decouple distinct components into separate services, and these services should communicate over the network in an interoperable way. Using this approach, the principle of low coupling and high cohesion is easier to maintain. A protocol supporting the principles of SOA was SOAP, an RPC variant based on messages formatted in XML (eXtensible Markup Language). XML itself is a “simplified” version of SGML (Standard Generalized Markup Language). The SOAP standard was part of the WS-* standards group, an enormous set of standards that together cover almost any network-related activity. Apart from a handful of widely used standards (SOAP, WSDL, SAML, and BPEL4WS), the group turned out to be overly complicated due to the sheer number of standards and the relations between them.
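To give an impression of what such a message looks like, here is a small, hypothetical SOAP 1.1 request envelope (the `GetPrice` operation and the `http://example.com/stock` namespace are made up for illustration):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>
```

The envelope and body structure is defined by the SOAP standard itself, while the payload inside the body is service-specific and is typically described by a WSDL document.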
Some software vendors advocated the use of intelligent middleware to reduce the number of connections between all the components. The best known of these middleware products became the Enterprise Service Bus (ESB). It consisted of a bus that could handle the routing of messages, the transformation of messages between different formats, and additional business logic. The ESB thereby became a very important component of the architecture.
However, the ESB later turned out to have problems of its own. First of all, the ESB became a very central and tightly coupled component. This introduced tighter coupling than intended by the principles of SOA. In fact, the ESB grew so big that it started to impede change and the addition of new services: instead of creating a proper new service, it was easier to bolt a component onto another component. The net result was that many organizations ended up with a group of large applications that were still difficult to maintain. These applications had become big monoliths of their own. Not a very happy result!
The full list of problems with SOA, and why for most organizations it did not live up to its expectations, will be described in a later blog post. In recent years, microservices have become a reasonable alternative to SOA.
This blog will describe:
- How microservices work,
- How you as an “architect” should work with microservices,
- How to define the space a service should operate in,
- How to communicate with all services,
- What your general development process should look like before you are ready for microservices.