BGP was once of interest mainly to ISPs and hosting providers, but it has become a protocol that every network engineer should know. In this conclusion to our four-part BGP tutorial series, we fill in a few more pieces of the puzzle, including when, and when not, it makes sense to advertise your routes to a service provider using BGP.
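To make the advertisement step concrete, here is a minimal Cisco IOS-style sketch of originating a single prefix toward a provider. The ASNs and addresses below are reserved documentation values, not taken from the post itself:

```
! Hypothetical example: AS 64496 advertises one aggregate to an upstream
! neighbor. The network statement only takes effect if a matching route
! exists locally, hence the static anchor route to Null0.
ip route 203.0.113.0 255.255.255.0 Null0
!
router bgp 64496
 network 203.0.113.0 mask 255.255.255.0
 neighbor 198.51.100.1 remote-as 64500
```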
Kentik CEO Avi Freedman joins Ethan Banks and Chris Wahl of the PacketPushers Datanauts podcast to discuss hot topics in networking today. The conversation covers the trend toward in-house infrastructure among enterprises that generate revenue from their IP traffic, the pluses and minuses of OpenStack deployments for digital enterprises, and the impact of DevOps practices on network operations.
In this post we continue our look at BGP — the protocol used to route traffic across the interconnected Autonomous Systems (AS) that make up the Internet — by clarifying the difference between eBGP and iBGP and then starting to dig into the basics of actual BGP configuration. We'll see how to establish peering connections with neighbors and how to list current sessions, with useful information about each.
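As a taste of what that configuration looks like, here is a minimal Cisco IOS-style sketch of bringing up an eBGP session; the local AS 64496, the neighbor address, and AS 64511 are hypothetical documentation values, not drawn from the post:

```
! Hypothetical eBGP peering: our router in AS 64496 peers with a
! directly connected neighbor in AS 64511.
router bgp 64496
 neighbor 198.51.100.1 remote-as 64511
 neighbor 198.51.100.1 description upstream-transit
```

Once the session is established, running "show ip bgp summary" returns the list of current sessions, showing each neighbor's AS, how long the session has been up, and how many prefixes have been received.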
BGP is the protocol used to route traffic across the interconnected Autonomous Systems (AS) that make up the Internet, making effective BGP configuration an important part of controlling your network’s destiny. In this post we build on the basics covered in Part 1, covering additional concepts, looking at when the use of BGP is called for, and digging deeper into how BGP can help — or, if misconfigured, hinder — the efficient delivery of traffic to its destination.
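One example of the misconfiguration risk mentioned above is leaking a full routing table to a provider. A common safeguard is an outbound prefix filter; the sketch below is a hypothetical Cisco IOS-style illustration, with names and values that are ours rather than the post's:

```
! Hypothetical outbound filter: permit only our own aggregate; the
! implicit deny at the end of the prefix list stops everything else,
! so internal or learned routes cannot leak upstream.
ip prefix-list ANNOUNCE-OUT seq 5 permit 203.0.113.0/24
!
router bgp 64496
 neighbor 198.51.100.1 prefix-list ANNOUNCE-OUT out
```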
In most digital businesses, network traffic data is everywhere, but legacy limitations on collection, storage, and analysis mean that the value of that data goes largely untapped. Kentik solves that problem with post-Hadoop Big Data analytics, giving network and operations teams the insights they need to boost performance and implement innovation. In this post we look at how the right tools for digging enable organizations to uncover the value that’s lying just beneath the surface.
Cisco’s recently announced Tetration Analytics platform is designed to provide large and medium data centers with pervasive real-time visibility into all aspects of traffic and activity. Analyst Jim Metzler says that the new platform validates the need for a Big Data approach to network analytics as network traffic grows. But will operations teams embrace a hardware-centric platform, or will they be looking for a solution that’s highly scalable, supports multiple vendors, doesn’t involve large up-front costs, and is easy to configure and use?
Cisco’s announcement of Tetration Analytics means that IT and network leaders can no longer ignore the need for Big Data intelligence. In this post, Kentik CEO Avi Freedman explains how that’s good for Kentik, which has been pushing hard for this particular industry transition. Cisco’s appliance-based approach is distinct from the SaaS option that most customers choose for Kentik Detect, but Tetration’s focus on Big Data analytics provides important validation that Kentik is on the right track.
Network security depends on comprehensive, timely understanding of what’s happening on your network. As explained by information security executive and analyst David Monahan, among the key value-adds of Kentik Detect are the ways in which it enables network data to be applied — without add-ons or additional charges — to identify and resolve security issues. Monahan provides two use cases that illustrate how the ability to filter out and/or drill down on dimensions such as GeoIP and protocol can tip you off to security threats.
CIOs focus on operational issues like network and application performance, uptime, and workflows, while CISOs stress about malware, access control, and data exfiltration. The intersection of these concerns is the network, so it seems evident that CIOs and CISOs should work together instead of clashing over budgets and tools. In this post, network security analyst David Monahan makes the case for finding a solution that can keep both departments continuously and comprehensively well-informed about infrastructure, systems, and applications.