
Network topology mapper bgp peering device

When it comes to analyzing network traffic for tasks like peering, capacity planning, and DDoS attack detection, there are multiple auxiliary sources that can be used to supplement flow information. These include SNMP, DNS, RADIUS, and streaming telemetry. BGP routing data is another important data source. BGP can enrich network traffic data to visualize traffic per BGP path for peering analytics, and also to inject routes that enable DDoS mitigation capabilities such as RTBH and Flowspec.

Last month at DENOG11 in Germany, Kentik Site Reliability Engineer Costas Drogos talked about the SRE team's journey during the last four years of growing Kentik's infrastructure to support thousands of BGP sessions with customer devices on Kentik's multi-tenant SaaS (cloud) platform. Costas shared various challenges the team overcame, the actions the team took, and, finally, key takeaways.

BGP at Kentik

Costas started off by giving a short introduction to how Kentik uses BGP, in order to develop the technical requirements, which include:

- Kentik peers with customers, preferably with every BGP-speaking device that sends flows to our platform.
- Kentik acts like a totally passive iBGP route-reflector (i.e., we never initiate connections to the customers) on servers running Debian GNU/Linux.
- BGP peering uptime is part of Kentik's contracted SLA: 99.99%.

At Kentik, we use BGP data not only to enrich flow data, so we can filter by BGP attributes in queries, but also to calculate lots of other analytics with routing data. For example, you can see how much of your traffic is associated with RPKI-invalid prefixes; you can do peering analytics; if you have multiple sites, you can see how traffic gets in and out of your network (Kentik Ultimate Exit™); and, eventually, you can perform network discovery. Moreover, each BGP session can be used as the transport to push mitigations, such as RTBH and Flowspec, triggered by alerting from the platform.

Scaling phases

Costas then shared how the infrastructure has been built out from the beginning to today as Kentik's customer base has grown.

Phase 1 - The beginning

Back in 2015, when we monitored approximately 200 customer devices, we started with 2 nodes in active/backup mode. The 2 nodes shared a floating IP that handled the HA/failover, managed by ucarp, an open-source implementation of OpenBSD's CARP (VRRP). This setup ran at boot time from a script residing in /root via rc.local. Obviously, this setup didn't go very far with the rapid growth of BGP sessions. After a while, one active node could no longer handle all peers, with the node getting overutilized in terms of memory and CPU. With Kentik growing quickly, the solution needed to evolve.

In order to fit more peers, we had to add extra BGP nodes. Looking at our setup, the first thing we did was to replace ucarp, because we observed scaling issues with more than 2 nodes. We developed a home-grown shell script (called 'bgp-vips') that communicates with a spawned exaBGP. This took care of announcing our floating BGP IP, which was now provisioned on the host's loopback interface. Each host then announced the route with a different MED, so that we had multiple paths available at all times. The next big step was to scale out the actual connections by allowing them to land on different nodes. Given that we only have one IP active on each node, the next step was to have this landing node act as a router for inbound BGP connections, with policy routing as the high-level design. On top of that, since our BGP nodes were identical, the distribution of sessions should be balanced. The issue we then had to think about was how to achieve a uniform enough distribution.
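The Phase 1 active/backup arrangement with a ucarp-managed floating IP can be sketched roughly as below. This is a minimal, hypothetical invocation: the interface name, addresses, vhid, password, and script paths are all placeholders, not the production configuration.

```shell
# Illustrative only (all values are placeholders). Both nodes run the
# same command; the elected CARP master brings up the floating service
# IP via the up-script, and the backup claims it if the master stops
# advertising.
ucarp --interface=eth0 --srcip=192.0.2.11 --vhid=10 --pass=s3cret \
      --addr=198.51.100.1 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh \
      --daemonize
```

The up-script would typically just add the floating address to the interface (e.g., with `ip addr add`), and the down-script would remove it.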

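The later 'bgp-vips'-plus-exaBGP scheme, where each host announces the loopback-provisioned service IP with a node-specific MED, could look roughly like the exaBGP static-route configuration below. The neighbor, ASNs, and addresses are invented for illustration; only the overall shape (one announcement per node, differing MEDs) reflects the setup described above.

```
# Hypothetical exaBGP configuration for one BGP node (all values are
# placeholders). Each node announces the same service IP from its
# loopback, but with its own MED, so the upstream router holds several
# deterministically ranked paths at all times.
neighbor 192.0.2.1 {
    router-id 192.0.2.11;
    local-address 192.0.2.11;
    local-as 65000;
    peer-as 65000;

    static {
        # node 1 announces med 100; node 2 would use med 200, and so on
        route 198.51.100.1/32 next-hop 192.0.2.11 med 100;
    }
}
```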

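One way to realize the "landing node acts as a router with policy routing" design on Linux is fwmark-based rules. The commands below are a hedged sketch under assumed addressing: the peer range, mark value, table number, and next-hop are all invented, and this is not necessarily how it was actually implemented.

```shell
# Sketch: on the landing node, steer one slice of the inbound BGP load
# (TCP/179) to another identical node via a dedicated routing table.
# 203.0.113.0/24, mark 2, table 102, and 192.0.2.12 are placeholders.

# Mark inbound BGP packets from one slice of the peer address space
iptables -t mangle -A PREROUTING -p tcp --dport 179 \
         -s 203.0.113.0/24 -j MARK --set-mark 2

# Packets carrying mark 2 are routed using table 102 ...
ip rule add fwmark 2 lookup 102

# ... which forwards them to the second BGP node
ip route add default via 192.0.2.12 table 102
```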

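For the closing question of how to achieve a uniform enough distribution of sessions over identical nodes, one common approach is stable hashing of the peer address: each peer is pinned to the same node across restarts, while peers spread roughly evenly. The snippet below is a generic sketch of that idea, not Kentik's actual selection logic.

```python
import hashlib
import ipaddress

def node_for_peer(peer_ip: str, num_nodes: int) -> int:
    """Map a peer's source IP to one of num_nodes identical BGP nodes.

    Hashing the packed address gives a stable, roughly uniform
    assignment (illustrative sketch only).
    """
    packed = ipaddress.ip_address(peer_ip).packed
    digest = hashlib.sha256(packed).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

# Example: distribute a /24 of hypothetical peers over 4 nodes and
# inspect how balanced the buckets come out.
counts = [0] * 4
for host in ipaddress.ip_network("203.0.113.0/24").hosts():
    counts[node_for_peer(str(host), 4)] += 1
print(counts)
```

A real deployment would also need a rebalancing story for when nodes are added or removed (e.g., consistent hashing), since a plain modulo reshuffles most peers when `num_nodes` changes.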