Why network pros need a seat at the application-planning table

Application workflows can have a significant negative impact on cloud costs and application performance, problems that network pros could head off.


Over 90% of the network managers, executives, and planners I’ve interacted with in the last six months believe that they have little or no strategic influence on how their companies’ networks are evolving. That’s a pretty astonishing statistic, but here’s one that’s even more astonishing: Almost 90% of that same group say that their companies’ application cost overruns and benefit shortfalls were either predictable based on network behavior, or a direct result of mistakes that network professionals could have caught. It’s important for network professionals to get their seat back in those planning conferences, but it’s vital for their companies that they do so.

Wishing, so the song goes, won’t make it so. When something is explicit, things need to be done to bring it about, and that used to be true of networks. When it’s implicit...well...it just happens. Increasingly, networking and network planning are implicit concepts. When we design applications and select how and where they’re hosted, we’re at least constraining and often defining the network that supports them. And guess what? We’re doing that wrong a lot, maybe even most of the time. The solution is to engage network-think when everyone thinks all they need is to define applications and write code. The key to that is to stop thinking about networks and applications and start thinking workflows.

Applications talk with users on one side and databases on the other, via streams of messages called workflows. It’s easy to understand workflows in the data center, but how do you lasso a bunch of unruly cloud-agile components? In most cases, doing that requires that you consider how each application is structured, meaning where all its pieces and databases are hosted, and where its users are located. You then trace the workflows among all the elements, and that’s something a network pro would expect to do. Be sure to count all the flows in/out of each element.
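Here's a minimal sketch of that counting exercise, assuming a hypothetical application whose component names, placements, and flows stand in for whatever an inventory of a real system would turn up.

```python
# A minimal sketch of the workflow-tracing exercise described above.
# The components, placements, and flows are hypothetical, standing in
# for whatever an inventory of a real application would produce.
from collections import Counter

# Where each element is hosted (assumed placements for illustration)
placement = {
    "users": "edge",
    "gui_frontend": "cloud",
    "order_logic": "cloud",
    "pricing_service": "data_center",
    "orders_db": "data_center",
}

# Directed workflows between elements: (source, destination)
workflows = [
    ("users", "gui_frontend"),
    ("gui_frontend", "order_logic"),
    ("order_logic", "pricing_service"),
    ("order_logic", "orders_db"),
    ("pricing_service", "orders_db"),
]

# Count the flows in and out of each element
in_out = Counter()
for src, dst in workflows:
    in_out[src] += 1
    in_out[dst] += 1

for element, count in in_out.items():
    print(f"{element} ({placement[element]}): {count} flows")

# Flag the flows that cross a hosting boundary; these are the ones
# that will ride a cloud gateway and show up on the bill.
boundary_flows = [(s, d) for s, d in workflows if placement[s] != placement[d]]
print("Boundary-crossing flows:", boundary_flows)
```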

An application that’s a single component has only an in-and-out workflow. Add components, and you add workflows, and if it goes far enough, the chart of your cloud workflows might make you dizzy. Application design these days involves breaking applications down into components and then assigning those components to the cloud or to the data center. Inside the cloud and the data center, network technology, increasingly virtual network technology, connects these workflows. That means the componentization and component-placement decisions determine the network requirements, and it’s these decisions that software types tend to get wrong.

The reason is that they treat the cloud as an abstraction with implicit, invisible connectivity. They don’t think about what’s inside the cloud or how their design and componentization impact cost and quality of experience (QoE). A network professional can look at the workflows a multi-component cloud application creates, understand the implications, and ask the developers whether all these workflows, and the independent hosting they indicate, are really justified.

What justifies componentization in the first place? The best answer is scalability. Sometimes an entire application is scaled in and out, but if your application has pieces that are worked harder than others, it might make sense to let those pieces scale up and down as their workload varies. If no piece needs to scale separately, it’s questionable whether it should be broken out as an independent, scalable component at all. Was that how developers decided on componentization? Probably not, because breaking each function out into its own microservice is considered good development practice. Well, last fall one enterprise told me that their cloud-native implementation of an application took 10 times as long to respond to users and cost three times as much as it had in its legacy form. How good was that?

How about resilience? Don’t multi-component applications let you replace a broken component more easily? Actually, cutting down on componentization might improve availability. Network professionals know that if you need four pieces of equipment in a path in order to create a connection, the mean time between failures of the connection is shorter than it would be for a single component, because a failure of any of the components means the path fails. The same is true for application components. Developers need to think of their components the way network people think of devices, and thinking of a cloud application as an abstraction that you can then decompose in multiple ways is a great way to add network insight to application planning.
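The arithmetic behind that point is simple series-system math. The sketch below assumes independent, identical components, each 99.9% available with a 10,000-hour MTBF; both figures are illustrative, not drawn from any real device.

```python
# A rough sketch of the series-availability arithmetic behind that point.
# The 99.9% availability and 10,000-hour MTBF are assumptions for illustration.

component_availability = 0.999     # availability of one component (assumed)
mtbf_single_hours = 10_000         # MTBF of one component (assumed)

for n in (1, 2, 4, 8):
    # A path that needs all n components up is a series system:
    # its availability is the product of the component availabilities.
    path_availability = component_availability ** n
    # With exponential failure assumptions, n identical components in
    # series fail roughly n times as often as one.
    path_mtbf = mtbf_single_hours / n
    print(f"{n} components in series: "
          f"{path_availability:.4%} available, MTBF ~ {path_mtbf:,.0f} hours")
```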

It’s also a great way to think about the network itself. The internet and an MPLS VPN are abstractions. So are the data-center network and the cloud network. You can’t manipulate the internals of those things easily, only connect with them. That means that from the application perspective, network planning is really planning for the interconnection of abstract network services. That means thinking network gateways.

Network pros are used to gateways; MPLS VPNs, for example, use BGP gateway technology to support connections. In the case of cloud networks, there’s a wider variety of options. Many providers support the same sort of VPN gateway, but it’s more common to provide a connection through the internet via an encrypted tunnel. This connection, whether it’s used to connect the cloud to the data center (hybrid cloud) or to another cloud in a multi-cloud configuration, is almost always a chargeable service and also incurs a traffic/usage cost.

How much traffic flows into and out of the cloud depends in part on application design. Most enterprises use the cloud as a front-end, interacting with the user to provide a friendly GUI, then converting the interaction into a transaction to a data-center application or database. A single transaction might involve a dozen back-and-forth user interactions, and if the developer pulls a component of the GUI into the data center because the logic is available there, the gateway costs can rise alarmingly.
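To see how quickly that happens, here’s a back-of-the-envelope sketch. Every figure in it (transaction volume, message size, the per-gigabyte rate, and the hypothetical monthly_gateway_cost helper itself) is an assumption chosen for illustration, not any provider’s published pricing; the ratio between the two cases, not the absolute dollars, is the point.

```python
# A back-of-the-envelope sketch of the gateway-cost effect described above.
# Every figure here (transaction volume, message size, per-GB rate) is an
# assumption chosen for illustration, not any provider's actual pricing.

def monthly_gateway_cost(crossings_per_txn: int,
                         txns_per_day: int = 1_000_000,
                         bytes_per_crossing: int = 16_000,
                         rate_per_gb: float = 0.09) -> float:
    """Estimate the monthly data-transfer charge for gateway crossings."""
    gb_per_day = txns_per_day * crossings_per_txn * bytes_per_crossing / 1e9
    return gb_per_day * 30 * rate_per_gb

# GUI entirely in the cloud: only the final transaction crosses the gateway.
print(f"1 crossing per transaction:   ${monthly_gateway_cost(1):,.2f}/month")

# A GUI component pulled back into the data center: the dozen chatty user
# interactions now cross the gateway too, and the cost scales with them.
print(f"12 crossings per transaction: ${monthly_gateway_cost(12):,.2f}/month")
```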

That’s important because application developers often design their applications so that some components can be pulled from cloud to data center or pushed into the cloud, for reliability and scalability reasons. This is another source of unexpected performance problems and cost overruns because if you move a component, you also have to move the workflows.

All of this makes two important points. First, workflows are how application planners and programmers see networks. That’s what their software generates. Second, network professionals can talk with planners and programmers about the impact of their decisions on workflows, and the impact of workflow changes on cost and QoE. It’s that workflow meeting-of-the-minds that empowers network professionals. If the topics of workflows, costs, and QoE are linked, then the network pro can lead the discussion from their seat at the planning table.

Copyright © 2023 IDG Communications, Inc.
