I work in distributed simulation. It's a much smaller network, and it's a closed one. It's classified and sits behind encryption devices; you can't actually get to it from the outside.
We had a unique situation here. We are using this solution for distributed simulation for military purposes, where packet loss equates to a lack of credibility in your simulation. You basically can't trust the data if you lose packets.
After a lot of testing and verification, we decided that we cannot afford more than 1% packet loss. You can't say zero, because zero isn't achievable. We tested and concluded that if 1% packet loss is all we have, then that's what we can live with.
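The 1% budget check itself is simple arithmetic. As a minimal sketch (the counter names and values here are hypothetical, not from the actual deployment), loss can be computed from sent and received packet counts and compared against the threshold:

```python
# Hypothetical packet counters for one experiment window; the real
# numbers would come from the instrumented interfaces.
SENT = 1_000_000      # packets transmitted
RECEIVED = 993_500    # packets that arrived at the far end

# Loss as a percentage of packets sent.
loss_pct = (SENT - RECEIVED) / SENT * 100

# The credibility threshold described above: no more than 1% loss.
within_budget = loss_pct <= 1.0

print(f"loss = {loss_pct:.2f}% -> {'OK' if within_budget else 'NOT credible'}")
```

With these example counters the loss works out to 0.65%, which stays inside the budget.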
The question we had was how to measure that 1% data loss in a distributed environment. We instrumented all of the sites with this solution; there are about 25 sites connected to the hub, and we're collecting data on each interface. I worked with the development team in Australia, who gave us a script that exports the data so we can actually obtain it. We made our own dashboard out of it: a line chart that shows the usage on each interface, with all of the lines on one graph.
Statseeker didn't have that capability; I don't know if it does now. Rather than looking at 25 charts and trying to figure out which one might be going over 50 MB, we put all of them into one chart. We had one inbound and one outbound chart, and we normalized them to 50 MB so we could see, at a glance, if somebody was going over. One site going over ruins the whole experiment. This was a unique use of Statseeker.
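The combined-chart idea can be sketched in a few lines. This is a hypothetical illustration, not the actual export script: per-site readings are normalized against the 50 MB cap, so any value above 1.0 stands out at a glance, and the offending sites can be flagged programmatically as well.

```python
# Normalize hypothetical per-interface usage samples against the cap,
# mirroring the single combined chart: any normalized value > 1.0
# means that site went over.
CAP = 50.0  # the 50 MB limit used for normalization

# Example samples per site (same units as CAP); real data would come
# from the exported Statseeker measurements.
samples = {
    "site-01": [12.4, 18.9, 22.1],
    "site-02": [48.7, 51.3, 49.9],   # briefly exceeds the cap
    "site-03": [5.2, 6.8, 7.1],
}

# Scale every reading so the cap maps to 1.0.
normalized = {site: [v / CAP for v in vals] for site, vals in samples.items()}

# Flag any site with at least one over-cap reading.
overruns = sorted(site for site, vals in normalized.items()
                  if any(v > 1.0 for v in vals))

print("overruns:", overruns)
```

Plotting all the normalized series on one axis gives the one-glance view described above; the flagging step is just the same check done numerically.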
It's different from what most IT enterprise people use it for. They look at their environments one at a time: one connection from a server to a client. In our case, we wanted to see all of it combined into one. We can't afford any losses; even at the most insignificant site, a loss means losing the credibility of the event.
It's very expensive to set these up: a couple million dollars for each experiment, which runs for only two weeks out of the year. We can't afford any loss during that time, and the best choice for that was Statseeker. We exported the data in real time and put it on our own dashboard charts.