For the longest time, I’ve felt that a component-based approach to operations and OSS is essential for re-use, flexibility, agility, cost reduction, and maintenance simplification. I also think that micro-services, SOA and orchestration are either the same thing, or aspects of the same design philosophy. Consequently, I use the fairly simple term “component-based”, because there are so many buzzwords that seem to carry lots of baggage – both good and bad.

Clearly, “micro-services” are in vogue. Yet services, as in “service-oriented architecture (SOA)”, can be useful whether they are “macro” or “micro”. Micro-services provide more granularity, and macro-services more simplicity – it’s a trade-off. Many of these can be exposed via an API. It’s very common to implement a SOA using internal APIs, and they are practically essential for external APIs. We can (and I do) think of digital services and digital collaboration as cross-company or cross-industry SOA. The ARG report “The Rise of Digital Ecosystems” goes into great depth on both the market evolution/needs of digital ecosystems, and the software capabilities and architecture needed to implement advanced commercial relationships and complex products/services.

Orchestration is the latest buzzword associated with fulfillment logic. Somehow, “orchestration” is considered new and different from yesterday’s operations logic. The Oxford dictionary defines it as “the planning or coordination of the elements of a situation to produce a desired effect” – which isn’t a bad definition in our context. Yet if we dig deeper, the difference is that modern orchestration is a) componentized and b) parametric and template-driven. So we are back to component-based execution, and good old-fashioned separation of logic from data – taught way back in the 1970s! So I suggest we focus not on the buzzword “orchestration”, but rather on the design tenets that make it flexible, modern and agile.
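To make “parametric and template-driven” concrete, here is a minimal sketch of the idea – the template is pure data, and a small generic engine fills in per-order parameters and invokes reusable components. All names (the template, the components, the placeholder syntax) are hypothetical illustrations, not any real product’s API:

```python
# Template-driven orchestration: logic (the engine) is separated from
# data (the template), and each step references a reusable component.
VPN_TEMPLATE = {
    "service": "enterprise-vpn",
    "steps": [
        {"component": "allocate_ip", "params": {"pool": "{{ip_pool}}"}},
        {"component": "configure_router", "params": {"bandwidth": "{{bandwidth}}"}},
    ],
}

# Reusable components, registered by name (stubs for illustration).
COMPONENTS = {
    "allocate_ip": lambda params: f"allocated from {params['pool']}",
    "configure_router": lambda params: f"router set to {params['bandwidth']}",
}

def render(value, variables):
    """Substitute {{name}} placeholders with per-order variables."""
    for name, v in variables.items():
        value = value.replace("{{" + name + "}}", str(v))
    return value

def orchestrate(template, variables):
    """Generic engine: walk the template, invoking each step's component."""
    results = []
    for step in template["steps"]:
        params = {k: render(v, variables) for k, v in step["params"].items()}
        results.append(COMPONENTS[step["component"]](params))
    return results

print(orchestrate(VPN_TEMPLATE, {"ip_pool": "10.0.0.0/24", "bandwidth": "1Gbps"}))
```

The point of the separation: a new service variant is a new template (data), not new code – the engine and the components are untouched.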

There’s a method to my madness here. The point is that many of the new methods and buzzwords are not new at all, and they are not specifically tied to virtualization, NFV or SDN. They are just plain, old, good design – and should have been adopted for all OSS long ago, but alas were not. This has two giant implications:

1. We have the opportunity to fix our collective OSS architectures.
2. We can apply the same principles and solution to existing network technologies as well as new ones.

I’d like to propose that it makes sense to create a componentized OSS architecture; to drive it via a catalog; to use it to quickly assemble products from service components; and to use it across four macro-domains:

1. The new virtualized network (including NFV and SDN)
2. The existing network, broken into SOA-like “services” and added to the library along with the new
3. The new cloud-based apps that CSPs are building, using mostly PaaS and SaaS
4. The capabilities that CSPs choose to expose to third parties via APIs – contributing them to “digital ecosystems”

Basically this means that everything becomes a “component” (micro-service, macro-service, …) and that each of those components can be orchestrated. In turn, we can compose (assemble) these components into larger services – combining a firewall, router, SDN flow and load balancer into an enterprise product. It also means that several business processes are similarly composed – fulfillment, testing, assurance, and even charging. In fact, if, as we define the “component”, we also create the fulfillment orchestration template, the test scripts, and the charging rule-sets, and on-board those, we have in effect fulfilled the DevOps vision. The service and its ops were co-developed, by the experts, to be executed whenever the service is needed.
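A sketch of what that catalog model might look like – each component carries its co-developed ops artifacts, and a product is simply a composition of components. The types, field names and artifact identifiers below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """A catalog component, on-boarded together with its ops artifacts."""
    name: str
    domain: str                # e.g. "virtualized", "legacy", "cloud", "api"
    fulfillment_template: str  # orchestration template, co-developed
    test_script: str           # test artifact, co-developed (the DevOps vision)
    charging_ruleset: str

@dataclass
class Product:
    """A larger service, composed (assembled) from catalog components."""
    name: str
    components: list

firewall = Component("vFirewall", "virtualized",
                     "fw-fulfill-v2", "fw-smoke-test", "fw-charge")
router = Component("vRouter", "virtualized",
                   "rtr-fulfill-v1", "rtr-smoke-test", "rtr-charge")

enterprise_product = Product("enterprise-secure-access", [firewall, router])

# The product's fulfillment plan falls out of composition: just collect
# the templates that were co-developed with each component.
plan = [c.fulfillment_template for c in enterprise_product.components]
print(plan)
```

Because fulfillment, testing and charging artifacts travel with the component, assembling a product automatically assembles its operations too.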

Let’s pull the last piece of the puzzle together. I assert that “legacy” network functionality, “virtualized” network functionality and services that are exposed to third parties can all follow this same formula and reside in the same catalog. Here’s also where the difference between micro- and macro-services may become important. Legacy technologies already have fulfillment flows, for example. These can be broken into re-usable flows that effect orchestration. In that way, they can have the same modularity and re-use as “modern” technologies. The one caveat is that it may be too complex, with marginal value, to break these large flows into micro-flows – therefore they may be “macro-services” – but they are still modular components that may be re-used.

Similarly, any service can be used internally, or potentially priced and exposed to third parties via an API. Same service, same ops logic, but possibly a different catalog entry (maybe it carries commercial terms that differ, or several price/quantity packages…). There is beauty in the simplicity – one catalog, one composition method, one library of services. For the first time, changes to one component create no ripple effects, since everything is loosely coupled. So SI and maintenance costs don’t skyrocket with changes.
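The “same service, different catalog entry” idea can be sketched in a few lines – one underlying service definition, with an internal entry and an external entry that adds illustrative commercial terms. The service name, field names and prices here are made up for the example:

```python
# One service definition, shared by every catalog entry that exposes it.
base_service = {"name": "ddos-protection", "fulfillment": "ddos-fulfill-v3"}

# Internal use: no commercial terms needed.
internal_entry = {**base_service, "exposure": "internal"}

# Exposed to third parties via an API: same ops logic, plus
# commercial terms attached at the catalog-entry level.
external_entry = {
    **base_service,
    "exposure": "api",
    "price_per_month": 499.0,  # illustrative commercial terms
    "sla": "99.9%",
}

# Both entries point at the exact same fulfillment logic.
assert internal_entry["fulfillment"] == external_entry["fulfillment"]
```

The commercial terms live on the catalog entry, not on the service itself, so pricing can change without touching the ops logic.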

I know this is a lot to digest, and this is an 800-word blog. I encourage everyone to read our full reports on the various topics: Digital Ecosystems, MANO (2016 update), Innovation & Service Creation, and Automation & Control Theory. Several are published while others are scheduled for the next few quarters.

Happy reading, and remember we have a chance to truly change the industry.

