An opinionated view on application modernization: Grow your container footprint and nurture your legacy

Franck Boudinet
9 min read · Jan 7, 2020


At the IBM Garage, we’ve engaged from the beginning with enterprises of all sizes to build new cloud-native applications that blend IoT, blockchain, and AI technologies. These innovative solutions focus on delivering the best possible user experience, helping our customers disrupt their market or differentiate themselves from their competitors. (See my earlier blog here.)

Naturally, recognizing how quickly the Garage Methodology can deliver concrete results, more and more businesses have come to us over the last several years for help modernizing their existing applications as well.

I’d like to reflect on these experiences, detail the entry points we’ve observed in these modernization journeys, describe approaches to get around some of the roadblocks, and share characteristics of successful transformations we’ve worked on.

Multiple entry points to application modernization

As everybody knows, there is no “one-size-fits-all” solution when it comes to application modernization. Even though categorizing the possible approaches to modernizing an application (described by Gartner as rehost, replatform, refactor, rearchitect, rebuild, and replace) is interesting, the most important thing to consider is the “why” behind the decision: for what reasons, for which purpose, and for what returns? Then, if the decision is to actually do something, the next question becomes: how do we get started? As Joel A. Barker says: “Vision without action is merely a dream. Action without vision just passes the time, but vision with action can change the world.”

While in the past, customers approached application modernization projects simply to keep pace with a new technology evolution — such as embracing a microservices architecture for their solution — today, most customers have checked that box and initiated projects for one or several of these reasons:

  • Adapting to support a new business model (e.g. exposing APIs to their partners or B2B customers to participate in the API economy)
  • Aligning with changes in the enterprise organization or with new market regulations
  • Enriching the application with new functions (e.g. requiring AI models or cognitive services integration)
  • Increasing agility and improving time-to-market when adding new features and functions with DevSecOps
  • Benefiting from the elasticity of the cloud to adapt to variable demands on their solution
  • Addressing issues like technical components becoming obsolete or being withdrawn
  • Resolving performance and scalability issues
  • Reducing the cost of doing any of the above (very often, if not always, the primary reason)

The following diagram describes a high-level view of the journey that many of our customers have embarked on, whether for one or a few applications or for an entire enterprise application portfolio.

As shown in this diagram, modernization will not always deliver a return on investment (ROI), and some applications should be retired or kept as is. But even for these, there might be some other aspects to consider, as we’ll discuss later in this blog.

For some of the applications, it actually makes a lot of sense to deliver early minimum viable products (MVPs), even during the assessment phase, to be in a better position to evaluate the cost of modernizing, start validating the potential benefits, and initiate or reinforce the DevOps transformation, depending on where the teams start.

Quickly understanding the as-is and defining the ideal to-be model, both functionally and technically, is key. Design thinking, along with architecture and inception workshops, is part of the IBM Garage Methodology and has proven very effective to either:

  • Decide whether to go for modernization with relevant information, or
  • Get the teams focused on clear outcomes and concrete next steps when embarking on the actual modernization activities

If the choice is made to modernize, the transformation can then be initiated through one or a series of MVPs, depending on the scope of the modernization and the complexity of the application. As an example, re-architecting and re-factoring does not necessarily mean months of effort, as in the Performance of Asset (P4A) case mentioned in this blog. There, being able to harvest and re-use years of investment in analytics implemented in Python, predicting downtime on assets, and to integrate them into the modernized, microservices-based solution, was a critical success factor.
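As a sketch of what harvesting such analytics can look like, the snippet below wraps a placeholder Python scoring function in a tiny HTTP service using only the standard library. The function, its formula, and the port are illustrative assumptions, not the actual P4A model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict_downtime(vibration: float, temperature: float) -> float:
    """Hypothetical harvested analytics: score the risk of asset downtime.

    Placeholder formula for illustration; the real function would be the
    existing Python analytics re-used as-is.
    """
    score = min(1.0, 0.01 * vibration + 0.005 * max(0.0, temperature - 60.0))
    return round(score, 3)

class PredictionHandler(BaseHTTPRequestHandler):
    """Expose the legacy function as a JSON-over-HTTP microservice."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"downtime_risk": predict_downtime(
            payload.get("vibration", 0.0), payload.get("temperature", 0.0))}
        body = json.dumps(result).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080) -> None:
    """Start the service (blocks); containerize and run behind the cluster router."""
    HTTPServer(("", port), PredictionHandler).serve_forever()
```

Packaged in a container image, a service like this becomes one more microservice in the modernized solution, while the analytics code itself stays untouched.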

Modernizing one or a few applications versus transforming an entire portfolio

Analytical thinkers might ask, “Why this question? If you can do it for one application, you can apply the same process to many.” However, as mentioned earlier, there is no one-size-fits-all. The ROI of migrating an application, versus containerizing, refactoring, rebuilding, or doing nothing, should be analyzed on a case-by-case basis. I can think of two examples:

  • A minor refactoring we proposed to a customer would dramatically change the performance of their application, enabling them to offer it to a large B2B client. Their original plan was to rewrite it entirely, as they had lost control of it with their application and maintenance services vendor.
  • Conducting a study to identify key patterns in the applications portfolio, then performing an MVP for selected applications from each of the discovered patterns, can provide a more accurate estimate of the entire move to cloud, better assess the ROI, and de-risk the project.

Even though modernizing one or a few applications can be done by focusing only on the applications and their business and technical contexts, our view is that it should absolutely include Continuous Integration, Continuous Delivery (if possible), and (for some applications) Continuous Deployment. When considering the move of an entire portfolio of applications, it becomes very important not only to select and integrate the right DevSecOps tools, but also to drive and generalize the cultural, organizational, and process changes across the enterprise with the right method.

As the next diagram shows, DevOps is not the exclusive preserve of cloud-native or modernized applications, and the DevSecOps generalization mentioned above can actually also bring a lot of value to migrated or legacy applications.

Optimizing legacy application lifecycle (or don’t forget your pets :-))

As mentioned earlier, for some bespoke applications developed in-house that are often critical to the business, there might be no ROI at all for migration or modernization. But there are always good benefits in establishing or reinforcing DevSecOps practices. This, in fact, consists of modernizing the way the applications are developed and operated.

The first benefit is getting the business, development, operations, and security teams to work together in an agile way with maximum alignment, focused on getting the next set of functions released to end users while maintaining SLOs/SLAs, security, and compliance requirements.

The other areas to consider for improvement are:

  • Automated application build and deployment
  • Unit and integration testing automation
  • Automated infrastructure provisioning and day 2 operations
  • Monitoring and insights on the entire application lifecycle, from code quality to test coverage to event monitoring at run time
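To make the testing-automation bullet concrete, here is a minimal sketch of a legacy business rule brought under automated unit tests. The function and its rule are hypothetical, chosen only to show the shape of the first step:

```python
import unittest

def invoice_total(amount: float, vat_rate: float = 0.2) -> float:
    """Hypothetical legacy business rule: VAT-inclusive total for an invoice line."""
    return round(amount * (1 + vat_rate), 2)

class InvoiceTotalTest(unittest.TestCase):
    """First tests pinned onto the legacy rule before any refactoring."""

    def test_standard_rate(self):
        self.assertAlmostEqual(invoice_total(100.0), 120.0)

    def test_zero_rate(self):
        self.assertAlmostEqual(invoice_total(50.0, vat_rate=0.0), 50.0)
```

Run with `python -m unittest` in a CI stage; once such tests gate every build, refactoring the legacy code becomes far less risky.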

Take automated infrastructure provisioning and day 2 operations: this includes transforming legacy infrastructure into infrastructure as code, managed with APIs directly from application DevOps toolchains. It significantly reduces development cycles and also helps provision production deployments with repeatability.
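As a sketch of what “managed with APIs directly from DevOps toolchains” can look like in practice, here is a minimal Python pipeline step that drives Terraform non-interactively. The variable-file name and working directory are assumptions for illustration:

```python
import subprocess
from typing import List

def terraform_cmd(action: str, var_file: str) -> List[str]:
    """Build the Terraform command line for a non-interactive pipeline stage."""
    if action not in ("plan", "apply", "destroy"):
        raise ValueError(f"unsupported Terraform action: {action}")
    cmd = ["terraform", action, f"-var-file={var_file}"]
    if action in ("apply", "destroy"):
        cmd.append("-auto-approve")  # never prompt inside a pipeline
    return cmd

def run_stage(action: str, workdir: str, var_file: str) -> int:
    """Run Terraform in the given infrastructure-as-code directory.

    Returns the Terraform exit code so the toolchain can fail the stage.
    """
    return subprocess.run(terraform_cmd(action, var_file), cwd=workdir).returncode

# Example (requires Terraform installed and an initialized workdir):
# run_stage("plan", "infra/", "prod.tfvars")
```

The same step works for repeatable production provisioning: the `.tfvars` file per environment is versioned alongside the application code.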

Obviously, in this world of virtual servers and virtual machines, resources are typically not immutable, and being able to perform automated day 2 operations (scaling capacity; adding, modifying, or removing storage; and so on) is also very important.

One could say that this is like creating a good old private cloud. However, it is much more than that: when it comes to production workloads, it can include premium services like automating the provisioning of security policies, firewall rules, monitoring and backup procedures, and registration in compliance databases.

In addition, some customers need these automation capabilities to span several locations (central, regional, and local data centers, and sometimes, as with telcos, even small “edge” locations), as well as several zones or regions on public clouds like Azure, AWS, GCP, or IBM Cloud, to really achieve their objectives.

Finally, the benefits of this automation toward infrastructure as code also apply to cloud-native and modernized application environments, to provision the underlying infrastructure within the hybrid and multicloud environment.

Getting the full benefit with a hybrid and multicloud management services platform

Let’s recap the key points discussed above:

  • Application modernization versus migration
  • DevSecOps generalization to modern, migrated, and legacy applications
  • Day 1 and day 2 operations automation

Let’s focus on the following “units of packaging” typically used to deploy components of legacy, migrated, and modernized applications or solutions:

  • Virtual servers on public clouds, or OpenStack or VMware virtual machines running on- or off-premises
  • Docker containers deployed on all sorts of Kubernetes distributions, offered either as a service by the main public cloud providers or installed on-premises or in public clouds
  • All running workloads, sometimes invoking SaaS service APIs

It becomes extremely useful to have a hybrid and multicloud management platform that provides services to deploy, move, manage, and monitor these workloads across the various zones of our deployment environments.

Let’s assume that company A has an existing Azure subscription and has modernized an application that:

  • Makes use of data that must not leave the existing data center when running in production
  • Has been enriched with a chatbot, built with IBM Watson conversation services as part of the modernization activities

Using the DevOps toolchain combined with the hybrid and multicloud management platform, CI/CD pipelines can deploy either:

  • To production, in the existing data center (where the data must remain)
  • To an OpenShift cluster as a service, for dev, test, or integration testing
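That routing decision, which environment a given pipeline stage targets, can be sketched in a few lines of Python. The cluster names below are illustrative assumptions, not real endpoints:

```python
# Hypothetical cluster names for company A's two deployment targets.
ON_PREM_CLUSTER = "openshift-onprem-prod"    # existing data center
MANAGED_CLUSTER = "openshift-azure-devtest"  # OpenShift as a service

def deployment_target(stage: str, data_must_stay_on_prem: bool) -> str:
    """Pick the target cluster for a CI/CD pipeline stage.

    Production workloads with data-residency constraints stay in the
    existing data center; everything else goes to the managed cluster.
    """
    if stage == "production" and data_must_stay_on_prem:
        return ON_PREM_CLUSTER
    return MANAGED_CLUSTER  # dev, test, integration, or unconstrained prod
```

In a real toolchain, the stage and residency flag would come from the pipeline configuration, and the returned name would select the kubeconfig context or the management platform’s placement rule.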

To finish this article, let’s look at how to build such a hybrid and multicloud management services platform as a set of microservices running on top of Red Hat OpenShift and leveraging IBM Cloud Pak for Multicloud Management (ICP4MCM).

It exposes APIs to execute day 1 (provisioning) and day 2 (management) automated functions on the virtual servers/machines and/or containers in which applications are deployed and/or packaged.

These APIs are consumed by DevOps toolchains to provide maximum agility in managing application pieces, from the infrastructure on or in which they are deployed all the way up to application-level artifacts.
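As an illustration only, a toolchain stage consuming such a day 1 API could build its request with the Python standard library. The platform URL, endpoint path, and payload fields are hypothetical, not an actual ICP4MCM API:

```python
import json
import urllib.request

PLATFORM_URL = "https://mcm-platform.example.com"  # hypothetical endpoint

def provision_request(kind: str, name: str, location: str) -> urllib.request.Request:
    """Build a day 1 provisioning request as a toolchain stage could send it.

    kind:     what to provision, e.g. "vm" or "cluster" (assumed vocabulary)
    name:     resource name chosen by the pipeline
    location: target zone, e.g. an on-prem data center or a cloud region
    """
    payload = json.dumps({"kind": kind, "name": name, "location": location})
    return urllib.request.Request(
        f"{PLATFORM_URL}/api/v1/provision",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (not executed here, the endpoint is fictitious):
# with urllib.request.urlopen(provision_request("vm", "app-db-01", "dc1")) as r:
#     print(r.status)
```

The same request shape, with a different path, would cover day 2 actions such as scaling or storage changes.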

It leverages technologies embedded in ICP4MCM such as:

  • Terraform and its providers for most clouds
  • Ansible, Ansible Tower, and the incredible community of roles, collections, and playbooks available for download
  • IBM Multicloud Manager, which provides user visibility, application-centric management (policy, deployments, health, operations), and policy-based compliance across clouds and clusters. With some of its unique features, it enables control of Kubernetes clusters deployed almost anywhere and helps ensure that clusters are secure, operate efficiently, and deliver the service levels that applications expect

By orchestrating these technologies, the platform provides immediate off-the-shelf value and enables adaptation to specific needs.
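A minimal sketch of that orchestration idea, provision with Terraform and then configure with an Ansible playbook, might look like this. The playbook, inventory, and directory names are made up for illustration:

```python
import subprocess
from typing import List

def stage_commands(playbook: str, inventory: str) -> List[List[str]]:
    """Return the ordered commands for a provision-then-configure stage."""
    return [
        ["terraform", "apply", "-auto-approve"],          # day 1: provision
        ["ansible-playbook", "-i", inventory, playbook],  # configure the result
    ]

def run_stage(workdir: str, playbook: str, inventory: str) -> bool:
    """Run each command in order inside workdir; stop on the first failure."""
    for cmd in stage_commands(playbook, inventory):
        if subprocess.run(cmd, cwd=workdir).returncode != 0:
            return False
    return True

# Example (requires both tools installed and an initialized workdir):
# run_stage("infra/", "site.yml", "hosts.ini")
```

An orchestration platform adds, on top of such a sequence, state tracking, access control, and the APIs discussed above.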

It can be integrated, through the same APIs as those used by the DevOps toolchains, with customers’ existing “service portals” as well as with other tools like ServiceNow, if needed.

Finally, metadata related to existing infrastructure resources can be imported into the platform to provide some automation of their management.

If you’ve reached this point, thanks for reading, and I hope this will be useful to you in one way or another.

Learn more about IBM Garage at ibm.com/garage.


Written by Franck Boudinet

IBM Garage for Cloud CTO for Europe
