Top Reasons to Leave Legacy NPM Behind

Alex Henthorn-Iwane

Summary

NPM appliances and difficult-to-scale enterprise software deployments were appropriate technology for their day. But 15 years later, we’re well into the era of the cloud, and legacy NPM approaches are far from the best available option. In this post we look at why it’s high time to sunset the horse-and-buggy NPM systems of yesteryear and instead take advantage of SaaS network traffic intelligence powered by big data.


Why It’s Time to Move On…

It’s high time to sunset the horse-and-buggy NPM systems of yesteryear and instead take advantage of SaaS network traffic intelligence powered by big data. With that in mind, here are a dozen things about legacy NPM that underscore the new reality: it’s time to migrate to a modern, advanced NPM solution.

1. Low fidelity

Legacy NPM systems are built on low-scale, single-server software architectures with highly constrained computing, memory, and storage resources. As a result, they can’t retain very much data, so they have to roll everything up into summaries and discard the underlying details. That robs you of data fidelity when you really need to dig in deep. It’s not just that it’s difficult or slow to figure things out — it’s impossible, because the details are no longer there.
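
To make that concrete, here’s a minimal Python sketch of how rollups destroy fidelity. The flow records and field names are invented for illustration, not any vendor’s actual schema:

```python
import pandas as pd

# Raw flow records: one row per flow, with full source/destination detail.
flows = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2017-06-01 10:02", "2017-06-01 10:17",
        "2017-06-01 10:41", "2017-06-01 10:55",
    ]),
    "src_ip": ["10.0.0.5", "10.0.0.9", "10.0.0.5", "10.0.0.7"],
    "dst_ip": ["198.51.100.1"] * 4,
    "bytes": [1_200_000, 900_000, 48_000_000, 700_000],
})

# A legacy system retains only the hourly rollup...
rollup = flows.resample("1h", on="timestamp")["bytes"].sum()
print(rollup)  # a single number for the hour: 50,800,000 bytes

# ...and discards the raw rows. After that, the question "which source
# IP caused the 10:41 spike?" is unanswerable: the src_ip detail no
# longer exists anywhere in the retained data.
```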

2. Highly constrained ‘analytics’

Legacy systems have limited computational power and memory. Even with relatively small sets of detailed data, they don’t allow you the freedom to ask the questions you need answered, particularly the ones you didn’t anticipate in advance. Instead, they limit your views to pre-defined reports.
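
As a sketch of what that freedom looks like when raw records are retained, consider an unanticipated question posed directly against the data (hypothetical schema and values):

```python
import pandas as pd

# Hypothetical raw flow records retained at full detail.
flows = pd.DataFrame({
    "src_ip":   ["10.0.0.5", "10.0.0.9", "10.0.0.5", "10.0.0.7"],
    "dst_port": [443, 53, 443, 123],
    "protocol": ["tcp", "udp", "tcp", "udp"],
    "bytes":    [1_200_000, 90_000, 48_000_000, 7_000],
})

# The question nobody anticipated: "UDP traffic by destination port,
# excluding DNS." Trivial to ask against raw data; impossible if the
# vendor's pre-defined report menu never included it.
answer = (
    flows[(flows.protocol == "udp") & (flows.dst_port != 53)]
    .groupby("dst_port")["bytes"]
    .sum()
)
print(answer)  # dst_port 123 -> 7,000 bytes
```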

3. Dead-end troubleshooting

When you don’t have a lot of details and you don’t have much analytical freedom, it doesn’t take long — three clicks? — for your troubleshooting workflows to hit a dead end. That leaves you in a blind alley with only your intuition and guesswork to figure things out.

4. Overly simplistic anomaly detection

With most NPM products, the alerting is as simplistic as the reporting. You’re generally able to define only one field and one metric to monitor, such as bps per destination IP. This simplistic approach leads to lots of false positives by alerting on normal traffic. When operators “de-tune” triggers to avoid a sea of red in their alarm management views, they get false negatives instead, missing significant anomalies and attacks.
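
Here’s a small Python sketch, with invented traffic numbers, of why a single static threshold can’t win — and how even a naive baseline-relative check does better:

```python
import statistics

# Mbps per interval: 400 is a normal nightly backup; 160 is a genuine
# (but modest) anomaly. All numbers are invented for illustration.
traffic_mbps = [80, 95, 400, 90, 85, 160]

STATIC_THRESHOLD = 300  # the one knob a legacy system gives you

for mbps in traffic_mbps:
    if mbps > STATIC_THRESHOLD:
        print(f"ALERT: {mbps} Mbps")
# Fires on the 400 Mbps backup (false positive) and stays silent on the
# 160 Mbps anomaly (false negative). "De-tuning" the threshold upward
# just trades one kind of error for the other.

# A baseline-relative check against this interface's recent history
# (here, a naive mean + 3 standard deviations) catches what the static
# threshold missed.
history = [80, 95, 90, 85, 88, 92]
cutoff = statistics.mean(history) + 3 * statistics.pstdev(history)
print(160 > cutoff)  # True: flagged against an ~88 Mbps baseline
```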

5. Operational blindness

With legacy tools, your users will often notice issues before you do. Even when problems are reported, they disappear, like footprints in the sand, before you can figure them out. Without complete, detailed historical data, there is no way to go back and examine the details from when the problem was actually occurring.

6. Inaccurate or no DDoS detection

Most NPM tools don’t offer DDoS detection. Those that do are highly inaccurate because they can’t do more than track a handful of different packet types. The result is tons of false negatives, allowing attacks to go undetected until they are wreaking havoc.
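
As a toy illustration (invented packet tuples, not a real detector), here’s what “tracking a handful of packet types” amounts to, and why it guarantees blind spots:

```python
# Each packet reduced to (protocol, tcp_flag); invented for illustration.
packets = [
    ("tcp", "SYN"), ("udp", None), ("udp", None), ("udp", None),
    ("udp", None), ("udp", None), ("tcp", "SYN"),
]

SYN_FLOOD_THRESHOLD = 1000  # the only attack signature the box knows

syn_count = sum(1 for proto, flag in packets if flag == "SYN")
if syn_count > SYN_FLOOD_THRESHOLD:
    print("ALERT: possible SYN flood")
# Scale the UDP entries up a millionfold and this detector still says
# nothing: a UDP amplification flood was never among the packet types
# it counts. Accurate detection needs many more dimensions (source and
# destination IPs, ports, packet sizes, rates) evaluated together.
```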

7. Siloed tools and costly infrastructure

If all of the limitations we’ve already covered aren’t frustrating enough, siloed tools, each requiring its own copy of the same dataset, are sure to multiply the frustration. Multiple tools mean separate UIs, few if any usable APIs, and islands of summarized, partial visibility. And once you accumulate a number of these silos, you have to buy costly packet brokers so they can all share limited SPAN ports. Now you’re running a network infrastructure just to support an ecosystem of siloed tools. Costly and ineffective.

8. Fragmented visibility

With all those siloed NPM tools, your visibility is fragmented. Maybe your team will swivel-chair between different systems and laboriously correlate by eyeball. Or maybe they’ll dump .csv files and work in Excel. Or maybe operators will just stay siloed in their separate knowledge sets.

9. Painful upgrades

Oh boy. Now that you’ve got all those tools and infrastructure, you’re in for some fun as various software upgrades need to be deployed. These upgrades are time-consuming and risky, so you only do them occasionally, if at all.

10. Feature lag, training gaps, and no mercy

With software upgrades deployed only occasionally, you won’t be getting new functionality very often; you’ll be months to a year or more behind the curve. That makes each upgrade more dramatic, time-consuming, and risky. You’ll be doing serial minor-to-major OS upgrades, and the sheer number of versions deployed in the field means the vendor can’t ensure that every upgrade path works. But that’s your fault, says the vendor: you waited too long. And when you do bite the bullet and catch up on upgrades, you face a whole raft of changes at once, which creates a training gap.

11. EOLs

Randomly dropped into all of those painful upgrade cycles, EOLs turn the complexity and frustration knobs up to eleven. Now you have to replace your NPM boxes with the latest generation, usually at the same cost as (or more than) the original products you purchased. And you thought software upgrades were painful?

12. Cloud and big data head fakes

Adding to the cruelty, legacy NPM vendors will do anything to keep their outdated architectures pumping out the dollars. So they’ll offer “cloud” and “big data” solutions that are at best band-aids and in some cases just pile more burden on you. Some NPM vendors offer VM-based versions of their venerable appliances, which means you get to deal with all of the same infrastructure issues. Other vendors will try to convince you to tack an open source big data “project” onto your deployment, while retaining all of the existing appliances.

If the above symptoms of legacy NPM sound all too familiar, there is a genuine solution: Kentik Detect, the industry’s first big data, multi-tenant SaaS for network traffic analytics. Kentik Detect unifies network visibility — operational troubleshooting, highly accurate anomaly and DDoS detection, and peering and capacity planning — into a single cost-effective solution that delivers ultra-fast answers to ad-hoc questions. Learn more on our product pages. Request a demo or start a free trial today and be up and running in fifteen minutes.
