Marketing Manager at a manufacturing company with 5,001-10,000 employees
Real User
Top 20
May 17, 2023
Kentik is used to visualize Internet connectivity, particularly for network connections. It's an "as-a-service" solution. We have clients such as NTT and KDDI, major telecom providers in Japan.
Director - Site Reliability Engineering at GoDaddy.com
Real User
Dec 15, 2019
We use it almost exclusively for flow data, which we use for a variety of things, from network optimization and capacity planning to security events, including DDoS protection. We're using the SaaS version.
Area Controller at a computer software company with 5,001-10,000 employees
Real User
Dec 12, 2019
We use it for traffic management. When we want to set up a new location or a new market with our own CDN, we use it to scope what kind of internet traffic there is and what kinds of connections we should prepare.

We also use it for some alerting and reporting, for example if traffic shifts significantly on a link or toward a certain ISP. That could potentially tell us that there are problems or something we should check out.

We're not super-advanced users, but we also use the API in the product. We have some tooling that we've written around these use cases that pulls data from the Kentik database.

We send the flow data to Kentik, in their cloud. We don't have any software installed on-prem here or in our data centers. As a company, we've always tended toward not having to manage more hardware and software than necessary. We're extremely happy with having it in the cloud, and we're not afraid of sending this data to them in the cloud. We pretty much trust them.
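As a rough illustration of the kind of custom tooling this reviewer describes, the sketch below pulls a top-destination-ASN result from Kentik's query API with Python. The endpoint path, the X-CH-Auth-* header names, and the payload fields are assumptions to verify against Kentik's API documentation (Data Explorer can export the exact JSON for a given query); the environment variable names are made up.

```python
# Minimal sketch, assuming Kentik's v5 query API: fetch the top destination
# ASNs by traffic volume, the kind of data the reviewer's tooling pulls.
# The endpoint path, header names, and payload schema are assumptions to
# check against Kentik's API docs / Data Explorer's API export.
import os
import requests

API_URL = "https://api.kentik.com/api/v5/query/topxdata"  # assumed endpoint

headers = {
    "X-CH-Auth-Email": os.environ["KENTIK_EMAIL"],      # portal login email
    "X-CH-Auth-API-Token": os.environ["KENTIK_TOKEN"],  # API token from the portal
    "Content-Type": "application/json",
}

# Hypothetical query: top 10 destination ASNs by bytes over the last 24 hours,
# roughly what you'd look at when scoping connectivity for a new CDN location.
payload = {
    "queries": [
        {
            "bucket": "dst_asn_traffic",
            "query": {
                "dimension": ["dst_as"],
                "metric": "bytes",
                "lookback_seconds": 86400,
                "topx": 10,
            },
        }
    ]
}

resp = requests.post(API_URL, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```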
For our purposes, where we're at today, and even in the past, the primary value we get from Kentik is analyzing flows, pulling specific data, and understanding where our traffic is going, including which AS path it takes. It's mostly on-prem. We do some work with GCP and AWS, but the licensing was primarily based on the number of pieces of on-prem equipment that we actually attach it to. We have over 55 edge nodes and about 10 compute nodes.
Principal Engineer at a comms service provider with 501-1,000 employees
Real User
Oct 6, 2019
I work mainly in what's called Data Explorer, which is the free-form, "write your own database query with a GUI" part of the product for getting specific numbers out. I use it because I'm usually looking to solve very specific problems or to get very specific questions answered. I'm very familiar with the GUI and it does what I need it to do.

For our company, one of the major uses is in our sales organization; they run a lot of customer prospecting with it. Using the API stack, we ended up writing our own internal sales-tool webpage, which does a lot of queries on the back-end to get information from the on-prem database.

We are using the on-prem deployment. We did use the SaaS version initially for the trial, to see if it met our needs, but for production we decided to go with on-prem. The reason we went with on-prem (and I'm not involved in the purchasing aspects) was that at the level of our flows and data rates, when you're doing the cloud solution you're also paying for the hardware. I believe it was determined that a one-time cost for buying the hardware, and then just the software license, ends up being more cost-effective. I cannot speak to anyone else's pricing model or the amount of data they're sending; that may make it a very different equation. I have a feeling that on-prem would really only work for the really large customers.
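The back-end of an internal prospecting page like the one described above largely comes down to aggregating and ranking query results. The sketch below shows that shape of logic with made-up field names and sample rows; it is not Kentik's API or the reviewer's actual tool.

```python
# Sketch of the kind of back-end aggregation a sales-prospecting page might
# do with flow-query results: rank candidate peers by how much traffic is
# already exchanged with their ASN. Field names and sample rows are invented.
from collections import defaultdict

def rank_prospects(rows, top_n=5):
    """rows: iterable of dicts like {'dst_as': 64501, 'as_name': '...', 'bytes': 123}."""
    totals = defaultdict(int)
    names = {}
    for row in rows:
        totals[row["dst_as"]] += row["bytes"]
        names[row["dst_as"]] = row.get("as_name", "")
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(asn, names[asn], byte_count) for asn, byte_count in ranked[:top_n]]

sample = [
    {"dst_as": 64501, "as_name": "ExampleNet", "bytes": 9_000_000_000},
    {"dst_as": 64502, "as_name": "DemoISP", "bytes": 4_500_000_000},
    {"dst_as": 64501, "as_name": "ExampleNet", "bytes": 2_000_000_000},
]
for asn, name, total in rank_prospects(sample):
    print(f"AS{asn} {name}: {total / 1e9:.1f} GB")
```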
Manager, Automation Tools at a tech services company with 1,001-5,000 employees
Real User
Sep 26, 2019
We're using Kentik for flow data, so we can do things like peering management and interconnection research, as well as capacity management. We also use it fairly heavily in our tech cost-reporting so we can see things such as how many dollars per gigabyte and how much we're using. The deployment model is cloud, which Kentik provides.
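To make the dollars-per-gigabyte idea concrete, here is the basic arithmetic such a cost report performs. The commit price and traffic volume below are invented round numbers, not the reviewer's figures.

```python
# Illustrative cost-per-gigabyte arithmetic with made-up numbers.
monthly_cost_usd = 9_000       # e.g. a 10 Gbps commit at $0.90 per Mbps per month
bytes_delivered = 1_800e12     # ~1.8 PB delivered in the month, from flow totals

gigabytes = bytes_delivered / 1e9           # bytes -> GB (decimal)
cost_per_gb = monthly_cost_usd / gigabytes
print(f"${cost_per_gb:.4f} per GB")         # -> $0.0050 per GB
```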
The primary need is to really understand where our traffic is going: not just the transit ASNs, which we already know, but where else it is going and how much traffic we are sending to those other ASNs. Of course, DDoS is also another use case for us; we have identified DDoS attacks with it. And we're also using alerting now to help us understand when service owners are perhaps utilizing more than they should.
Kentik's AIOps Network Traffic Intelligence platform unifies network operations, performance, security, and business intelligence. With a purpose-built big data engine delivered as public or private SaaS, Kentik captures a high-resolution view of actual network traffic data and enriches it with critical application and business data, so every network event or analysis can be tied to revenue & costs, customer & user experience, performance & risk.
The solution is used for network monitoring. My clients monitor the performance of networks using Kentik.
We mainly use it for visibility into our traffic but we use it for DDoS detection as well.