We use PubSub+ primarily for messages that involve long processing. For instance, we generate customer forms, which takes a long time. We place these requests in queues, process them, and return the results to queues. This ensures that messages are not lost despite the time-consuming nature of such tasks. We also use PubSub+ for audit trails, where we intercept requests, apply them, and store them in MongoDB, placing them in the queue before final processing.
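A minimal sketch of this queue-based pattern, assuming Solace's Java (JCSMP) API; the queue name, connection details, and form-generation step are illustrative placeholders rather than the reviewer's actual setup. The broker keeps the message on the queue until the consumer acknowledges it, so a slow or failed processing step does not lose the request.

import com.solacesystems.jcsmp.*;

public class FormRequestWorker {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcps://broker.example.com:55443"); // placeholder
        props.setProperty(JCSMPProperties.VPN_NAME, "default");                     // placeholder
        props.setProperty(JCSMPProperties.USERNAME, "forms-worker");                // placeholder
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        // Hypothetical durable queue holding form-generation requests.
        Queue queue = JCSMPFactory.onlyInstance().createQueue("q.forms.requests");

        // Client acknowledgement: the broker keeps the message until we ack it
        // after the long-running processing finishes.
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(queue);
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);

        FlowReceiver receiver = session.createFlow(new XMLMessageListener() {
            @Override public void onReceive(BytesXMLMessage msg) {
                generateCustomerForm(msg); // the time-consuming work
                msg.ackMessage();          // only now is the message removed from the queue
            }
            @Override public void onException(JCSMPException e) {
                e.printStackTrace();       // unacked messages are redelivered
            }
        }, flowProps);
        receiver.start();
        // In a real worker, block here to keep the session open and keep consuming.
    }

    private static void generateCustomerForm(BytesXMLMessage msg) {
        // Placeholder for the slow form-generation step.
    }
}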
Cloud Architect at a transportation company with 10,001+ employees
Real User
Top 5
Mar 17, 2023
There was a challenge in the market that needed addressing. While some tools could serve as event managers, they were not proper event brokers. Kafka, for instance, is referred to as an event broker but is not a proper one: if you want to use it as an event broker, you have to implement your own event manager, which can be quite complex. Kafka is also essentially a file-based broker that stores events in files on disk and then consumes them. In other words, an event is just a message stored in a file, and it is not reactive. If you introduce events into the system, you cannot react to them in any way. That creates a problem that needs to be addressed by implementing tools that can react to events or pre-defined topics. The way topics are created is also an issue. For instance, if you want to consume a specific topic, you have to create a new one, and you cannot filter events using the mechanism provided. If you wish to query events, there is no provision for it.

That is where PubSub+ comes in. It provides the option to query events and route messages from one topic to another, as well as client libraries to facilitate this process. We primarily use PubSub+ for event-driven applications, where we react to events and process them. For instance, when we receive an order, we react to that event and create multiple other events based on it. Because this processing is driven by events, we use PubSub+ Event Broker.
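A hedged sketch of that event-driven reaction, again using Solace's Java (JCSMP) API; the topic names, payloads, and connection details are hypothetical examples, not the reviewer's system:

import com.solacesystems.jcsmp.*;

public class OrderEventReactor {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555"); // placeholder
        props.setProperty(JCSMPProperties.VPN_NAME, "default");
        props.setProperty(JCSMPProperties.USERNAME, "order-reactor");
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        final XMLMessageProducer producer = session.getMessageProducer(
                new JCSMPStreamingPublishEventHandler() {
                    public void responseReceived(String messageID) { }
                    public void handleError(String messageID, JCSMPException e, long ts) {
                        e.printStackTrace();
                    }
                });

        XMLMessageConsumer consumer = session.getMessageConsumer(new XMLMessageListener() {
            public void onReceive(BytesXMLMessage orderEvent) {
                try {
                    // React to the incoming order event by publishing a derived event.
                    TextMessage invoiceEvent =
                            JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
                    invoiceEvent.setDeliveryMode(DeliveryMode.PERSISTENT);
                    invoiceEvent.setText("invoice requested for " + orderEvent.getDestination());
                    producer.send(invoiceEvent,
                            JCSMPFactory.onlyInstance().createTopic("billing/invoice/requested"));
                } catch (JCSMPException e) {
                    e.printStackTrace();
                }
            }
            public void onException(JCSMPException e) {
                e.printStackTrace();
            }
        });

        // Subscribe to all order-created events; '>' is Solace's multi-level topic wildcard.
        session.addSubscription(JCSMPFactory.onlyInstance().createTopic("orders/created/>"));
        consumer.start();
    }
}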
We use it as a central message bus to interconnect all our applications as well as for the transportation of market data. We're using the 3560s for the hardware appliances and version 9.3 for the software.
Head of Enterprise Architecture & Digital Innovation Lab at Hewlett Packard Inc.
Real User
Jun 21, 2020
We are using Event Broker to publish data across the enterprise, sharing transaction data updates in real time across the enterprise and, in some cases, telemetry data. We do use event mesh, but our use is limited. The reason is that we have our publishers and consumers on-prem while we have applications on AWS, Azure, and SaaS. It's a multicloud hybrid infrastructure, but the majority is still on-prem. We are slowly moving to AWS, Azure, and SaaS. As we expand to AWS and Azure, event mesh will be a key feature that we would like to leverage. We are using the latest version.
Senior Project Manager at a financial services firm with 5,001-10,000 employees
Real User
Jun 16, 2020
We're a capital markets organization, so we primarily use it for our trading algos' order management, streaming market data, and general application messaging. Those are our key use cases. Our other use cases are for guaranteed messaging, things where we absolutely need the resiliency of every message, as well as higher-performance streaming market data, meaning millisecond, latency-sensitive algorithm operations that are running as well. We also use it for general messaging and to displace some of our legacy messaging applications such as MQ, EMS, and things of that sort. We are standardized on Solace PubSub+; it's an architectural standard at our company.
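A brief sketch of how those two delivery qualities differ at the API level, using Solace's Java (JCSMP) API; the topic names are made-up examples, and session/producer setup is omitted as in the earlier sketches:

import com.solacesystems.jcsmp.*;

public class DeliveryModes {
    // Low-latency market data: direct (at-most-once) messaging, no broker persistence.
    static void publishTick(XMLMessageProducer producer, String payload) throws JCSMPException {
        TextMessage tick = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        tick.setDeliveryMode(DeliveryMode.DIRECT);
        tick.setText(payload);
        producer.send(tick, JCSMPFactory.onlyInstance().createTopic("md/equities/XYZ/trades"));
    }

    // Order flow that must survive outages: guaranteed (persistent) messaging.
    static void publishOrder(XMLMessageProducer producer, String payload) throws JCSMPException {
        TextMessage order = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        order.setDeliveryMode(DeliveryMode.PERSISTENT);
        order.setText(payload);
        producer.send(order, JCSMPFactory.onlyInstance().createTopic("orders/algo/XYZ/new"));
    }
}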
Managing Director at a financial services firm with 5,001-10,000 employees
Real User
Jun 15, 2020
We push a lot of pricing data through here: market data from the street that we feed onto the event bus and distribute using permissioning and controls. Some of that external data has to have controls on top of it so we can control access to it. We also have internal pricing information that we generate ourselves and distribute. So we have both server-based clients connecting and end-user clients on PCs. We have about 25,000 to 30,000 connections to the different appliances globally, from either servers or end-users, including desktop applications or back-end trading services. These two use cases are direct messaging, fire-and-forget types of scenarios.

We also have what we call post-trade information, which is the guaranteed messaging piece for us. Once we book a trade, for example, that data obviously cannot be lost. It's a regulatory obligation to record that information, send it back out to the street, report it to regulators, etc. Those messages are all guaranteed.

We also have app-to-app messaging where, within an application team, they want to be able to send messages from the application servers, sharing data within their application stack. Those are the four big use cases that make up a large majority of the data. But we have about 400 application teams using it. There are varied use cases and, from an API perspective, we're using Java, .NET, and C, and we're using WebSockets and their JavaScript API. We have quite a variety of connections to the different appliances, using it for slightly different use cases.

It's all on-prem across physical appliances. We have some that live in our DMZ, so external clients can connect to those. But the majority, 95 percent of it, is on-prem and for internal clients. It's deployed across Sydney, Hong Kong, Tokyo, London, New York, and Toronto, all connected together.
The first use case is technology operations tools. We are a best-of-breed monitoring shop. We have all kinds of tools that monitor things like storage, network, servers, and applications, and all types of stovepipes that do domain-specific monitoring. Each one of those tools was sold to us with what they called a single pane of glass for their stovepipe. However, none of the tools are actually publishing or sharing any of the events that they have detected. So we have been doing a poor job of correlating events to try and figure out what's going on in our operations.

Our use case was to leverage that existing investment. For about a year, we have been proving that we can build publishing adapters from these legacy monitoring tools, which are each valid in their own right, like storage monitoring tools, network monitoring tools, and application monitoring tools (like Dynatrace), some more modern than others. We have been building publishing adapters from those things so we can transport those events to an event aggregation and event correlation service. We're still trying to run through our list of candidates for what our event correlation will be, but the popular players are Splunk, Datadog, and Moogsoft, and ServiceNow has its own event management module.

From an IT systems management perspective, our use case is to have a common event transport fabric that spans multiclouds and is WAN optimized. What is important for me is topic wildcarding and prioritization/QoS. We want to be able to set some priorities on IT events versus real business events.

The second use case is more of an application focus. I'm only a contributor on the app side. I'm more of an infrastructure cloud architect and don't really lead any of the application modernization programs, but I'm a participant in almost all of them. E.g., we have application A and application B sitting side by side in our on-prem data center, and they happen to use IBM MQ Hub to share data as an integration. Application A wants to move to Azure. They are willing to make the investment to modernize the app, not a forklift, but some type of transformation event. Their very first question to us is, "I need to bring IBM MQ with me because I need to talk to app B, who has no funding and is not going to do anything." Therefore, our opening position is, "Let's not do that. Let's use cloud-native technology where possible when you're replatforming your application. Use whatever capability for asynchronous messaging that Azure offers you. Let's get that message onto the Azure Event Hub. Don't worry about it arriving where it needs to arrive because we'll have Solace do some protocol transformation with HybridEdge, essentially building a bridge between the Azure Event Hub and the MQ Hub that we have in our data center."

The idea is to build bridges between our asynchronous messaging hubs, and there's only a small handful of them, of which the Azure Event Hub is the most modern. We have an MQ Hub that runs on a mainframe and IBM DataPower appliances that serve as our enterprise service bus (ESB). Therefore, if we build bridges between those systems, our app modernization strategy is facilitated by a seamless migration to Azure.

The most recent version is what we installed about three weeks ago. The solution is deployed on Azure for now. We will be standing up some nodes in our on-prem data centers during the next phase, probably in the next six months. The plan is to use event mesh.
We're not using it as an event mesh yet, as we are only deployed with Azure. We want to position a Solace event mesh for the enterprise, but we're just now stretching into Azure. We're a little slow on the cloud adoption thing. We've got 1200 applications at CIBC with about four of them hosted in clouds: one at AWS and three at Azure. So, we're tiptoeing into Azure right now. We're probably going to focus our energy on moving stuff into Azure. The concept of a mesh has been socialized but, for now, because the volume of stuff outside our data center is so low, there's not a ton of enthusiasm for it, even though I might be shouting from the rooftops saying, "It's a foundational capability in a multicloud world." It looks like we're putting the funding for using it as an event mesh on the back burner.
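A small sketch of the topic wildcarding and prioritization/QoS requirement described in this review, using Solace's Java (JCSMP) API; the topic names are invented, and the priority handling assumes the receiving queues are configured to honor message priority:

import com.solacesystems.jcsmp.*;

public class ItEventTransport {
    // Subscribe to every alert from every monitoring stovepipe:
    // '*' matches a single topic level, '>' matches all remaining levels.
    static void subscribeToAlerts(JCSMPSession session) throws JCSMPException {
        session.addSubscription(JCSMPFactory.onlyInstance().createTopic("itops/*/alerts/>"));
    }

    // Publish an IT event at a lower priority than business events.
    // (Assumes the consuming queues are configured to respect message priority.)
    static void publishAlert(XMLMessageProducer producer, String payload) throws JCSMPException {
        TextMessage alert = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        alert.setDeliveryMode(DeliveryMode.PERSISTENT);
        alert.setPriority(2); // business events could be published with a higher value
        alert.setText(payload);
        producer.send(alert,
                JCSMPFactory.onlyInstance().createTopic("itops/dynatrace/alerts/app/checkout"));
    }
}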
Technology Lead at a pharma/biotech company with 10,001+ employees
Real User
Jun 10, 2020
We have a hybrid model because we have a lot of systems on-premise as well as a lot on the cloud. We have one instance of Solace in AWS Europe, and the other one is an on-premise setup in our data center, also in Europe.
Lead Manager at a manufacturing company with 10,001+ employees
Real User
Jun 9, 2020
One of our use cases at our global company went live recently. We have a lot of goods that move via sea routes. While there are other modes of transport, for the sea route in particular we wanted to track our shipments, their location, and that type of information, and generate some reports. Also, there are multiple applications which need this data. With Solace, we are bringing information in every minute (almost real-time) from our logistics partners and putting it on Solace. Then, from Solace, the applications that want to consume the information can take it. E.g., we are generating some dashboards in Power BI using this information. We are also pushing this information into our data lakes, where more reporting plus slicing and dicing is available. In the future, if more subscribers want this information, they will also be able to take it. We have both our private cloud and a version completely hosted on SaaS by Solace.
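A rough sketch of how such per-minute location updates could be published on a hierarchical topic, using Solace's Java (JCSMP) API; the carrier, container ID, topic layout, and payload are made-up examples. Because each consumer (for instance a Power BI feed or a data-lake loader) attaches its own queue with a topic subscription such as shipping/sea/>, new subscribers can be added later without changing the publisher.

import com.solacesystems.jcsmp.*;

public class ShipmentPositionPublisher {
    // Hypothetical topic hierarchy: shipping/sea/{carrier}/{containerId}/position
    static void publishPosition(XMLMessageProducer producer, String carrier,
                                String containerId, double lat, double lon) throws JCSMPException {
        TextMessage update = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
        update.setDeliveryMode(DeliveryMode.PERSISTENT); // keep updates if a consumer is down
        update.setText("{\"lat\": " + lat + ", \"lon\": " + lon + "}");
        String topic = "shipping/sea/" + carrier + "/" + containerId + "/position";
        producer.send(update, JCSMPFactory.onlyInstance().createTopic(topic));
    }
}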
Manager, IT at a financial services firm with 501-1,000 employees
Real User
Jun 9, 2020
We use it as a message bus for our different systems to connect to Solace on a pub/sub basis. We have about 10 systems interfacing with it. It is used for our critical payment systems, which are mostly online payment transactions. There are also messages for streaming and data warehouse information. We are using the Solace PubSub+ 3530 appliance and the AMI (Amazon Machine Image) version. We have a mixture of an on-premise deployment and a cloud deployment; the cloud part is mostly the AMI.
PubSub+ Platform supports real-time shipment tracking and IT event management in multiclouds, and connects legacy and cloud-native systems for application modernization. It's utilized for trading, streaming market data, and app-to-app messaging, enhancing event-driven architectures with reliable message queuing. Organizations adopt PubSub+ to efficiently transport events across hybrid and cloud environments, managing audit trails and long processing tasks. The platform aids integration through...
PubSub+ Event Broker is used to help manage architectures, such as real-time integrations of business processes and systems.
We are using PubSub+ Event Broker for specific application processing. We have used the solution in banking systems.
We are generating stock calls and then those are given to various other processes.