Microservice Adoption

I worry that companies are deconstructing their monolithic applications into microservices because it’s trendy. There are places where microservices don’t make sense: they impart additional complexity without the application seeing much benefit from the architecture. While some challenges to microservice adoption are transient or can be addressed through business decisions, some are fundamental aspects of the architecture.

Microservices are (relatively) new. A company that has built and run many monolithic applications has network, hypervisor, OS, deployment, and application experts … but unless it hires in a container orchestration / API gateway expert or brings in a consulting team (real-world experience has been that “learning it” was left up to employee initiative and the global archive of IT knowledge that is the Internet), there isn’t a comparably deep knowledge base to support the new platform. Not an insurmountable problem, and frankly no different from how virtualization was introduced: there weren’t hypervisor experts at the time, and no one really understood the sizing/scaling intricacies. It was learned, but the first 6-12 months were rough.

High availability applications were physically designed to withstand failure. Our data centre has two unique circuits run to each rack, and dual power-supply servers are plugged into both the “A” and “B” circuits. Same with the network: NICs are teamed across two different switches. In switching to VMs, we had to ask where a given server actually runs (i.e. which host it’s on). Is every component of a redundant system co-located on a single hypervisor, or, for SAN-booted VMs, stored on a single SAN frame? Microservices will have a similar challenge: where is each service running, can the application as a whole survive a fault, and how do we recover from a major data centre failure?

Some of the places where I see microservices making development and operations more complex can be eliminated by business policy. Allowing individual service teams to dictate their own development language reduces mobility between teams: the Java guru for service A will spend time researching the C# equivalent if they move over to work on service B. And while it is possible to publish a general coding standard that covers all languages (how variables will be named, what comment blocks should look like, etc.), there are nuances to each language that a shared standard cannot cover. Using multiple development languages limits employee mobility, and it reduces a company’s ability to shift employees around to cover temporary resource shortfalls. Planned absences can be accounted for when selecting work for the next cycle, but emergencies happen.

Breaking an application into small component services can also create challenges in troubleshooting. There may be few people who have an end-to-end understanding of the application. Where munged data in monolithic application X means the development team for App X needs to debug and sort the issue … ten interacting microservices can mean ten groups each saying nothing’s wrong on their side and it’s everyone else’s problem. I’ve seen that occur frequently in infrastructure support: the app guy says it’s the server, the server guy says it’s the hypervisor, the hypervisor guy says it’s the SAN, and the SAN guy says it’s all good and someone should check with the network guys to see how those load balancers are doing.

Fundamentally, microservice architecture introduces additional components to run the application: the API gateway and container orchestration are functions that simply don’t exist in a monolithic deployment. These services themselves, as well as the supporting technologies that allow them to function, add complexity of their own.
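To make “additional component” concrete, here is a toy sketch of the routing job an API gateway performs. It is not any particular product’s implementation; the route table, ports, and service names are hypothetical, and a real deployment would use a purpose-built gateway (Kong, NGINX, a Kubernetes Ingress controller, and so on). The point is simply that this is a new piece of infrastructure someone has to configure, run, and debug.

```python
"""A toy API gateway, sketched with only the standard library.

The routes and backend ports are hypothetical placeholders; a real
gateway also handles auth, TLS, retries, rate limiting, etc.
"""
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.error import URLError

# path prefix -> internal service base URL (made-up ports)
ROUTES = {
    "/browse":   "http://127.0.0.1:8001",
    "/search":   "http://127.0.0.1:8002",
    "/checkout": "http://127.0.0.1:8003",
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Find the first route whose prefix matches and proxy to it.
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                try:
                    with urlopen(backend + self.path) as resp:
                        body = resp.read()
                    self.send_response(resp.status)
                    self.end_headers()
                    self.wfile.write(body)
                except URLError:
                    self.send_error(502, "backend unreachable")
                return
        self.send_error(404, "no route for " + self.path)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Gateway).serve_forever()
```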

As an example, the networking configuration behind making microservices available is not, in my experience, something with which developers are familiar. That isn’t a problem when dev teams only need out-of-box functionality and said functionality is working properly. I became involved with a container orchestration system because a friend’s dev team encountered failures where kube-proxy did not create the required iptables rules: a quick and easy thing for a Linux/Unix admin to identify and troubleshoot, but not something application developers had to think about in monolithic deployments. Since then, the same dev team has wanted to use multiple network interfaces, which the Kubernetes CNI plugin in use did not support.
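For reference, this is the sort of quick check a Linux admin might run on a node to see whether kube-proxy’s rules exist at all. It is a minimal sketch, assuming kube-proxy is running in its iptables mode (where it programs the KUBE-SERVICES and KUBE-SVC-* chains in the nat table), that the script is run as root on the node, and that the ClusterIP below is a made-up placeholder for the Service being debugged.

```python
#!/usr/bin/env python3
"""Rough check for kube-proxy's NAT rules on a node (iptables mode)."""
import subprocess
import sys

CLUSTER_IP = "10.96.0.20"   # hypothetical Service ClusterIP to look for


def nat_rules() -> str:
    # Dump the nat table; kube-proxy's service rules live here.
    return subprocess.run(
        ["iptables-save", "-t", "nat"],
        check=True, capture_output=True, text=True,
    ).stdout


def main() -> int:
    rules = nat_rules()
    if "KUBE-SERVICES" not in rules:
        print("No KUBE-SERVICES chain at all -- is kube-proxy running?")
        return 1
    hits = [line for line in rules.splitlines() if CLUSTER_IP in line]
    if not hits:
        print(f"KUBE-SERVICES chain exists, but no rules mention {CLUSTER_IP}")
        return 1
    print(f"Found {len(hits)} rule(s) referencing {CLUSTER_IP}:")
    for line in hits:
        print("  " + line)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```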

For an application where individual components have different utilization rates, microservice architecture makes sense. Think about a company that runs a major promotion: there will (hopefully) be a flood of customers browsing the web site, and the components that handle browsing and search need to grow significantly. The components that handle existing-user authentication, new-user registration, checkout, inventory updates, and shipping quote generation don’t need to scale at the same level, because only a fraction of that web traffic will actually convert to sales. There’s no need to spin up additional copies of the entire application just to handle users browsing product information; only the browse and search services need more instances.
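A toy calculation makes the point. The service names, request rates, and per-replica capacities below are invented numbers, not measurements from any real system; the only thing the sketch shows is that the browse and search tiers scale out during the promotion while authentication and checkout barely move.

```python
"""Per-component scaling during a promotion (made-up numbers)."""
import math

# requests per second each replica of a service can comfortably handle
CAPACITY_PER_REPLICA = {
    "browse": 200,
    "search": 150,
    "auth": 300,
    "checkout": 100,
}

def replicas_needed(load_rps: dict) -> dict:
    """Ceiling of load divided by per-replica capacity, minimum of 1."""
    return {
        svc: max(1, math.ceil(rps / CAPACITY_PER_REPLICA[svc]))
        for svc, rps in load_rps.items()
    }

# normal day vs. promotion day: browse/search traffic explodes,
# but only a small fraction converts into auth/checkout traffic
normal = {"browse": 400, "search": 250, "auth": 60, "checkout": 30}
promo  = {"browse": 6000, "search": 4000, "auth": 300, "checkout": 150}

print("normal:", replicas_needed(normal))
print("promo: ", replicas_needed(promo))
# In a monolith, every instance carries all of these functions, so the
# whole web farm would have to be sized for the browse/search peak.
```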

For an application where individual components require frequent updates, microservice architecture makes sense. The same goes for reliability: is there a component that suffers frequent failures, where having a pool of instances of that service available would increase the application’s uptime?
