We push a lot of pricing data through here: market data from the street that we feed onto the event bus and distribute out with permissioning and controls, because access to some of that external data has to be restricted. We also have internal pricing information that we generate ourselves and distribute out. We have both server-based clients connecting and end-user clients on PCs, whether that's a desktop application or a back-end trading service, and we have about 25,000 to 30,000 connections to the different appliances globally. These two use cases, external market data and internal pricing, are direct messaging; fire-and-forget types of scenarios.
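To give a sense of what fire-and-forget looks like in code, here is a minimal sketch of a direct publish using Solace's Java (JCSMP) API. The broker host, message VPN, username, topic, and payload are all placeholders, not our actual configuration:

```java
import com.solacesystems.jcsmp.*;

public class DirectPricePublisher {
    public static void main(String[] args) throws JCSMPException {
        // Connection details are placeholders, not a real environment.
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://broker.example.com:55555");
        props.setProperty(JCSMPProperties.VPN_NAME, "markets");
        props.setProperty(JCSMPProperties.USERNAME, "pricing-app");

        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        XMLMessageProducer producer = session.getMessageProducer(
            new JCSMPStreamingPublishEventHandler() {
                public void responseReceived(String messageId) { /* not used for direct messages */ }
                public void handleError(String messageId, JCSMPException e, long timestamp) {
                    e.printStackTrace();
                }
            });

        // Direct delivery: fire-and-forget, no spooling or per-message acknowledgement.
        TextMessage msg = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        msg.setDeliveryMode(DeliveryMode.DIRECT);
        msg.setText("{\"symbol\":\"EURUSD\",\"bid\":1.0841,\"ask\":1.0843}");

        Topic topic = JCSMPFactory.onlyInstance().createTopic("md/fx/spot/EURUSD");
        producer.send(msg, topic);

        session.closeSession();
    }
}
```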
We also have what we call post-trade information, which is the guaranteed messaging piece for us. Once we book a trade, for example, that data, obviously, cannot be lost. It's a regulatory obligation to record that information, send it back out to the street, report it to regulators, etc. Those messages are all guaranteed.
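For post-trade data the difference is essentially the delivery mode and the destination. Building on the publisher sketch above (and reusing its session and producer), a guaranteed publish might look roughly like this; the queue name and payload are illustrative only:

```java
// Reuses the 'producer' from the direct-messaging sketch above.
TextMessage trade = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
trade.setDeliveryMode(DeliveryMode.PERSISTENT);  // broker spools and acknowledges the message
trade.setText("{\"tradeId\":\"T-000123\",\"instrument\":\"EURUSD\",\"qty\":1000000}");

// Send to a durable queue (name is illustrative) so the message survives restarts
// and waits until the downstream reporting consumer picks it up.
Queue queue = JCSMPFactory.onlyInstance().createQueue("q.posttrade.bookings");
producer.send(trade, queue);
```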
We also have app-to-app messaging, where an application team wants to send messages between its own application servers, sharing data within its application stack.
Those are the four big use cases that make up a large majority of the data.
But we have about 400 application teams using it. There are varied use cases and, from an API perspective, we're using Java, .NET, and C, as well as WebSockets with their JavaScript API. We have quite a variety of connections to the different appliances, each using it for slightly different use cases.
It's all on-prem, across physical appliances. Some of them live in our DMZ so that external clients can connect to those, but the majority, 95 percent of it, serves internal clients. It's deployed across Sydney, Singapore, Hong Kong, Tokyo, London, New York, and Toronto, all connected together. We are currently working on a cloud setup to connect the on-prem appliances with cloud-based virtual message routers, so everything stays connected together.
With the old platforms we were coming from, some changes were intrusive to make. For example, to add a new application into the environment, we would have to make a change that might cause some disruption to the environment. We only have very limited downtime for our environment, from just after midnight on Saturday to just before midnight on Sunday. That is our only change window for the week if we have to do something intrusive, which limited when we could truly make changes. On a lot of other vendors' platforms, to add things you've got to restart components and cause disruption.
The benefit of Solace is that we can add an application in the middle of the day with no disruption to anyone. It's purely based on our access-control lists and permissioning, so we can onboard applications during the middle of our business day. It's still under change control, but there's zero impact in doing it. For us, that is super-powerful. Whether we're adding users or adding applications, we can do it without causing any disruption. For a lot of other products, that's not the case. That's been a huge win for us.
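Mechanically, onboarding like that comes down to creating the new application's ACL profile and client username on the broker, which can be done at runtime over the SEMP v2 management API. A rough sketch in Java is below; the management host, credentials, VPN, and profile names are placeholders, and the exact endpoints and field names should be checked against the current SEMP documentation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class OnboardApp {
    public static void main(String[] args) throws Exception {
        // Management host, VPN name, and credentials are placeholders.
        String semp = "https://broker.example.com:943/SEMP/v2/config/msgVpns/markets";
        String auth = "Basic " + Base64.getEncoder().encodeToString("admin:changeme".getBytes());
        HttpClient http = HttpClient.newHttpClient();

        // 1. Create an ACL profile that only allows what the new application needs.
        String acl = "{\"aclProfileName\":\"newapp-acl\","
                   + "\"clientConnectDefaultAction\":\"allow\","
                   + "\"publishTopicDefaultAction\":\"disallow\","
                   + "\"subscribeTopicDefaultAction\":\"disallow\"}";
        post(http, semp + "/aclProfiles", auth, acl);

        // 2. Create the client username the application will connect with.
        String user = "{\"clientUsernameName\":\"newapp\",\"enabled\":true,"
                    + "\"aclProfileName\":\"newapp-acl\",\"clientProfileName\":\"default\"}";
        post(http, semp + "/clientUsernames", auth, user);
    }

    private static void post(HttpClient http, String url, String auth, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}
```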
In terms of application design, I've seen applications go live in less than a week, from the first line of code to putting something into production. It depends on how complex the application is. We have a central team that supports wrappers on top of the vendor's API, and we have example code bases showing a simple application built using our wrapper on top of Solace's API. A developer who joins our company knowing nothing about Solace can walk through our documentation, have a look at our wrappers, take some of our example code, and be off to the races pretty quickly. Getting up to speed is definitely not difficult.
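I can't share our actual wrapper, but to illustrate the idea, a stripped-down, hypothetical version that hides the JCSMP session behind simple publish and subscribe calls might look like this sketch:

```java
// Hypothetical shape of an in-house wrapper; illustrative only, not the bank's actual code.
import com.solacesystems.jcsmp.*;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public final class BusClient implements AutoCloseable {
    private final JCSMPSession session;
    private final XMLMessageProducer producer;
    private final List<Consumer<String>> handlers = new CopyOnWriteArrayList<>();
    private XMLMessageConsumer consumer;

    public BusClient(String host, String vpn, String username) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, host);
        props.setProperty(JCSMPProperties.VPN_NAME, vpn);
        props.setProperty(JCSMPProperties.USERNAME, username);
        session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();
        producer = session.getMessageProducer(new JCSMPStreamingPublishEventHandler() {
            public void responseReceived(String messageId) { }
            public void handleError(String messageId, JCSMPException e, long ts) { e.printStackTrace(); }
        });
    }

    /** Fire-and-forget publish of a text payload to a topic. */
    public void publish(String topic, String payload) throws JCSMPException {
        TextMessage msg = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        msg.setText(payload);
        producer.send(msg, JCSMPFactory.onlyInstance().createTopic(topic));
    }

    /**
     * Subscribe to a topic (wildcards allowed) and pass each text payload to the callback.
     * Kept deliberately simple: every handler sees every message; a real wrapper
     * would dispatch by topic and handle acknowledgements, reconnects, and so on.
     */
    public synchronized void subscribe(String topic, Consumer<String> handler) throws JCSMPException {
        handlers.add(handler);
        if (consumer == null) {  // JCSMP allows one message consumer per session
            consumer = session.getMessageConsumer(new XMLMessageListener() {
                public void onReceive(BytesXMLMessage msg) {
                    if (msg instanceof TextMessage) {
                        String text = ((TextMessage) msg).getText();
                        handlers.forEach(h -> h.accept(text));
                    }
                }
                public void onException(JCSMPException e) { e.printStackTrace(); }
            });
            consumer.start();
        }
        session.addSubscription(JCSMPFactory.onlyInstance().createTopic(topic));
    }

    @Override
    public void close() { session.closeSession(); }
}
```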
We might get a new user in our bank who is familiar with other messaging systems and who has preconceived ideas about how they want to do things. They might ask us, "How do I get access to this messaging system I used with my old organization? That's what I'm familiar with." Sometimes we have to do sessions with those people and say, "Okay, we're familiar with the systems you're talking about. We supported them in the past. Talk us through your use case, what it is you're trying to achieve." Once they explain their use cases, we can say, "Okay, great. We actually have this, and here's some example code and this is how to do it." Within a day, that person has gone from knowing nothing about it to saying, "Okay, you're absolutely meeting my application needs and now I'm educated on how this works." They're off and running very quickly.
We take all kinds of data onto the environment to share. Because the event bus is where every application starts, no one within the capital markets organization builds an application now without putting their data onto our bus in some way. It definitely lowers the barrier to sharing data and getting things up and running quickly. Similarly, teams can take data from other teams once they find out what's available. Someone might say, "I need all the FX prices in the bank. Oh, I can just subscribe to them from here. I don't even need to talk to the FX team." Teams can get up and running very quickly without having to spend a lot of time working with other groups to make that happen.
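That "just subscribe to them" step is essentially a one-line topic subscription. Using the hypothetical BusClient sketch from earlier (the topic taxonomy here is illustrative, not our real one), picking up every FX spot price could look roughly like this:

```java
// Inside a main method that declares 'throws Exception'.
// '>' is Solace's multi-level wildcard, so this matches every topic under md/fx/spot/.
try (BusClient bus = new BusClient("tcp://broker.example.com:55555", "markets", "analytics-app")) {
    bus.subscribe("md/fx/spot/>", payload -> System.out.println("FX tick: " + payload));
    Thread.sleep(60_000);  // keep the session alive while prices stream in
}
```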
By having all of that data together in one place, Event Broker has definitely reduced the amount of time it takes to get a new application onboarded. We came from a place with six or seven different systems, where we might have bridged some of them together in some way, but it wasn't one common environment. Now, application A comes online and starts putting its data out, and application B can get up to speed and start looking at that data. That is very quick and easy for us to do. All the messaging we do is self-describing: an application can look at the payload of a message and understand it without even needing to talk to the upstream application. We've gone from 8 x 1 Gig, 10 years ago, to 8 x 10 Gig today, and the reason is that we keep putting more and more data and applications on here. That continues to grow exponentially. If it wasn't easy to do, the data volume wouldn't be going up and we wouldn't have all these applications on here now. It's hard for me to say it has definitely increased productivity, because I don't own the application development piece, but, anecdotally, I would say it has.
Another area of benefit is that we're in the process of containerizing all of our applications at the moment, whether they'll run on-prem or in the public cloud. The underlying piece is that these containers, wherever they run, are going to need to share data between the different applications and then back to the users. The Solace event mesh and event brokers are the underlying lifeblood among all of these containers: they need some way of communicating with each other, and we see Solace as being that connection among all of them. All the different cloud environments have their own messaging, and we don't want to build applications that are specific to any one cloud; we want to be cloud-agnostic. To do that, we need a messaging system that is equally agnostic. Given that we already have a huge on-premise investment in all of our Solace infrastructure, we see the future of containerizing our applications going hand-in-hand with our messaging strategy around Solace, so we can be totally cloud-agnostic.
Technology, in the last 10 years, has probably become a lot more stable generally, but with the amount of data we put through these appliances and route globally every day, if our environment were down, capital markets wouldn't be operating for the bank. That's how critical it is. We can't afford to have any issues. At the same time, literally no application can run in our front office without this. If I look back 10 years ago, we might have had six or seven different distributed systems, all with their own problems. Now that we've consolidated all of that, there's a huge efficiency in sharing all our data between the different groups. It means we can get up to speed very quickly, but also, what we're enabling from a business perspective by sharing 120 billion messages a day is hugely valuable to our front office.