We're a hosting company and, in this industry, it's inevitable that you're going to be attacked. We originally purchased the product back in early 2000 for the SQL monitoring. Over the years, DDoS has become a nuisance for the other companies we're hosting as well. We had originally purchased it just for internal use, to protect our own internal infrastructure, but we found an avenue to offer it to our customers as well. It has just grown from there.
It's deployed on-prem to protect our own infrastructure, and it's also part of the product that we sell to our customers to protect their services. We have a hybrid as well, as we use Arbor Cloud to protect our company's major assets if needed, as a type of over-capacity swing-over.
In terms of the visibility it provides into traffic up to the application layer with the Sightline with Sentinel product, it's really good given the data it's getting. If you're sampling traffic at the network edge, you don't get the full detail that you would if you were seeing every single packet, but you are getting a wide view of information, and at my level, working on the backbone, I need to see the grand scheme of things. If one customer is being scanned or penetrated in one way, it's not as important to me at the network layer as it is to somebody further down the stack. But if I'm seeing all the different scans coming in at the network layer, or bad actors that we have already identified trying to hit our infrastructure, that gives me a better idea of what's going on in my network, which is extremely important to me at that point. I can rally the troops to where I need them at that time.
We've gotten to the point where we have worked with this for so long that the protection provided by Sightline with Sentinel, across the different layers of our architecture from the network to the application, is automatic for us. There are very few adjustments that we need to make for customers, even with the wide range of customers that we have. We've been able to configure and templatize different aspects of the system to fit about 80 percent of our customers, without having to go in and fine-tune. And now, with the addition of the passive protection, we're able to go in and tune a template further, so that it matches the customer even better.
Another way it helps our organization function is that it has a GUI. I'm able to present information and walk different parts of our leadership through different aspects of attacks, and how we're blocking them. One of the biggest examples of that was my ability to show them, by deploying FlowSpec rules, how much traffic I was dropping at the network edge compared to how much traffic was actually coming into our networks. I showed them how it was saving us from having to upgrade capacity within the data center. It has become a backbone for different aspects of our environment.
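To give a rough sense of what a rule like that can look like (this is only an illustrative sketch, not the actual tooling behind the numbers above; the prefix, port, and the ExaBGP-style integration are assumptions), a FlowSpec discard rule can be pushed toward the edge routers from a small helper script:

```python
# Hypothetical sketch: emit a BGP FlowSpec "discard" rule in ExaBGP's API style.
# ExaBGP-type setups read announcements from a helper process's stdout; the
# prefix, protocol, and port below are invented examples, not real data.
import sys


def announce_discard(dst_prefix: str, protocol: str, dst_port: int) -> None:
    """Print a FlowSpec command that drops matching traffic at the network edge."""
    rule = (
        "announce flow route { "
        f"match {{ destination {dst_prefix}; "
        f"protocol {protocol}; "
        f"destination-port ={dst_port}; }} "
        "then { discard; } }"
    )
    sys.stdout.write(rule + "\n")
    sys.stdout.flush()  # the BGP speaker picks the command up from stdout


if __name__ == "__main__":
    # Example: drop a UDP reflection flood aimed at a customer prefix.
    # (A real helper would stay running so the BGP speaker keeps it attached.)
    announce_discard("203.0.113.0/24", "udp", 389)
```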
In addition, other security groups that may not be at the network level have the ability to go in, pull NetFlow from Arbor, and start looking for known signatures of bad actors out there, or known signatures of tools that they may have.
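As a loose illustration of that kind of lookup (the record fields and the blocklist are invented for the example, not the actual export format), exported flow records can be matched against a list of already-identified bad-actor addresses:

```python
# Hypothetical sketch: scan exported flow records for known bad actors.
# The flow fields and blocklist below are illustrative only; a real NetFlow
# export from the platform will look different.
from ipaddress import ip_address, ip_network

KNOWN_BAD_SOURCES = [
    ip_network("198.51.100.0/24"),  # example: previously identified scanner range
    ip_network("192.0.2.7/32"),     # example: single known bad actor
]


def is_known_bad(src: str) -> bool:
    addr = ip_address(src)
    return any(addr in net for net in KNOWN_BAD_SOURCES)


def flag_suspicious(flows: list[dict]) -> list[dict]:
    """Return flows whose source matches an already-identified bad actor."""
    return [f for f in flows if is_known_bad(f["src_ip"])]


if __name__ == "__main__":
    sample = [
        {"src_ip": "198.51.100.23", "dst_ip": "10.0.0.5", "dst_port": 22, "bytes": 4200},
        {"src_ip": "203.0.113.9", "dst_ip": "10.0.0.5", "dst_port": 443, "bytes": 1800},
    ]
    for flow in flag_suspicious(sample):
        print("suspicious flow:", flow)
```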
We're averaging about 1,900 attacks a day. And we're only looking at attacks that could affect our infrastructure. We don't offer this service to everybody within our data centers. Arbor was deployed to protect the infrastructure. There are still a lot of attacks that are getting through that we're not really worried about. We're only looking at the larger types of attacks and engaging them more.
And because this is pretty much automated, we are able to catch attacks now within five to 30 seconds. And in the world of hosting, every single millisecond counts. We offer 99 percent uptime. Without Arbor, we'd probably be around 75 to 80 percent uptime. Attacks are cheap nowadays. People can create a lot of bandwidth for a couple of dollars.
Arbor DDoS also consolidates visibility and the actions we need to take at the backbone level. Because we have 10 data centers spread out across the globe, with more coming in the future, it gives us better visibility not only into bad actors and traffic coming in, but also into how traffic is moving from one data center to another. Peer evaluation helps us see whether a peer is better utilized at one location than at another. It has also opened up a lot of traffic analysis that we weren't really utilizing for the point-to-point, data-center-to-data-center VPN services that we offer. Now we're able to see where we can adjust our bandwidth and save money, and other places where we need to raise bandwidth before it costs us money.
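As a simple sketch of the peer-evaluation idea (the peer names, sites, and byte counts are made up), per-peer traffic totals at each data center can be rolled up and compared to see where a relationship with a provider carries the most traffic:

```python
# Hypothetical sketch: roll up flow bytes per (peer, data center) to compare
# how well a peering relationship is utilized at each site. All values are
# invented for illustration.
from collections import defaultdict

# (peer, data_center, bytes_seen) tuples, e.g. derived from flow exports.
samples = [
    ("peer-A", "dc-frankfurt", 120_000_000_000),
    ("peer-A", "dc-dallas", 35_000_000_000),
    ("peer-B", "dc-frankfurt", 15_000_000_000),
    ("peer-B", "dc-dallas", 90_000_000_000),
]

totals: dict[str, dict[str, int]] = defaultdict(dict)
for peer, site, nbytes in samples:
    totals[peer][site] = totals[peer].get(site, 0) + nbytes

for peer, per_site in totals.items():
    busiest = max(per_site, key=per_site.get)
    print(f"{peer}: busiest at {busiest} ({per_site[busiest] / 1e9:.1f} GB observed)")
```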
It has also helped us get a better idea of future capacity planning, not only for current data centers, but for data centers that are going to be in different regions where our company is located.
And the biggest benefit for us, as a company, is the savings from peer evaluations: seeing where we can better utilize our relationships with different providers, and whether there is potential for mutual benefit across multiple data centers, globally.