Network Operation Center Team Leader at a recruiting/HR firm with 1,001-5,000 employees
Real User
Jun 23, 2020
I think different shops may use the term differently. With regard to an industry standard, the other replies may be more appropriate.
I can tell you that where I work, we use SEUM (Synthetic End User Monitoring), UX, and Synthetic monitoring (all user-experience monitors) to mean simulating actual human activities and setting various types of validations. These validations may be load times for images, text, or pages, or validating an expected action based on the steps completed by the monitor. We target all aspects of the infrastructure/platform for standard monitoring, and then for any user-facing service we try to place at least one Synthetic/UX monitor on top of the process.
I often find the most value from our synthetics comes in the form of historical trending. A great example of a NOC win: patch X was applied, and we noticed a consistent 3-second increase in the time required to complete UX monitor step Y. Another value from synthetics is quickly assessing actual user impact. More mature orgs may have this all mapped out, but I have found that many NOCs will see alarms on several services yet not be able to determine what this means to the actual user community until feedback comes in via tickets or user-reported issues. Seeing the standard alarms tells me what is broken; seeing which steps are failing in the synthetics tells me what that means to our users.
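To make the idea of "validations" concrete, here is a minimal sketch of the kind of check such a monitor performs. The URL, threshold, and expected text are all hypothetical, and real SEUM tools script full multi-step browser flows rather than single HTTP requests:

```python
# Minimal sketch of a synthetic UX check with validations.
# PAGE_URL, MAX_LOAD_SECONDS, and EXPECTED_TEXT are illustrative only.
import time
import requests

PAGE_URL = "https://example.com/login"   # hypothetical target
MAX_LOAD_SECONDS = 3.0                   # illustrative load-time threshold
EXPECTED_TEXT = "Sign in"                # expected content on the page

def run_synthetic_check() -> dict:
    start = time.monotonic()
    resp = requests.get(PAGE_URL, timeout=10)
    elapsed = time.monotonic() - start

    # Validations: HTTP status, load time, and an expected page element.
    return {
        "status_ok": resp.status_code == 200,
        "load_time_ok": elapsed < MAX_LOAD_SECONDS,
        "content_ok": EXPECTED_TEXT in resp.text,
        "elapsed_seconds": round(elapsed, 3),
    }

if __name__ == "__main__":
    print(run_synthetic_check())
```

Trending the `elapsed_seconds` value over time is what surfaces the "patch X added 3 seconds to step Y" kind of win described above.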
I think that one of the great benefits of an open forum like this is getting to consider how each org does things. There are no wrong answers; some information just applies better to what you may be asking.
There is actually a place and a need for both synthetic and real user experience monitoring. If you look at the question from the point of view of what you are trying to learn, detect, and then investigate, the answer should be that you want to be proactive in ensuring a positive end-user experience.
I love real user traffic. There are a number of metrics that can be captured and measured, and what can be learned is controlled by the type and kind of data source used: NetFlow, logs, or Ethernet packets. Response time, true client user location, the application command executed, the response to that command from the application including exact error messages, direct indicators of server and client physical or virtual performance; the list goes on and on. It is highly valuable information for app-ops, networking, cloud, and data center teams.
Here is the challenge, though: you need real user traffic to measure user traffic. The number of transactions and users, the volume of traffic, and the path of those connections are great for measuring over time, for baselining, for triage, and for finding correlations between metrics when user experience is perceived as poor. The variation in those same metrics, though, makes them poor candidates for measuring efficiency and proactive availability. Another challenge is that real user traffic is now often encrypted, so exposing that level of data has a cost that is prohibitive outside of the data center, cloud, or co-lo. These aspects are often controlled by different teams, so coordinating translations and time intervals of measurements between the different data sources becomes a C-level initiative.
Synthetic tests are fixed in number, duration, transaction type, and location. A single team can administer them, but everyone can use the data. Transaction types and commands can be scaled up and down as needed for new versions of applications and microservices living in containers, virtual hosts, clusters, and physical hosts across co-los and data centers. These synthetic transactions also determine availability and predict end-user experience long before there are any actual end users. Imagine an organization that can generate transactions, and even make phone calls of all types and kinds in varying volumes, a few hours before a geographic workday begins. If there has been no version change in software and no change control in networking or infrastructure, yet there is a deviation from baseline or a failure to transact, IT has time to address the issue before a real user touches the systems or services. Transactions fixed in number and time are very valuable in anyone's math for comparisons and SLA measurements, and they do not need to be decrypted to get a command-level measurement.
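As a rough illustration of that pre-workday pattern, here is a sketch that fires a fixed set of transactions and compares each against a stored baseline. The endpoints, baseline values, and tolerance are all hypothetical stand-ins for scripted transactions:

```python
# Sketch of a pre-workday synthetic run compared against a baseline.
# Transaction names, endpoints, baselines, and tolerance are hypothetical.
import time
import requests

BASELINE_SECONDS = {"login": 1.2, "search": 0.8, "checkout": 2.5}
TOLERANCE = 1.5  # alert if a step takes 1.5x its baseline

TRANSACTIONS = {
    "login": "https://example.com/api/login-health",
    "search": "https://example.com/api/search-health",
    "checkout": "https://example.com/api/checkout-health",
}

def pre_workday_run() -> list[str]:
    alerts = []
    for name, url in TRANSACTIONS.items():
        start = time.monotonic()
        try:
            requests.get(url, timeout=10).raise_for_status()
        except requests.RequestException as exc:
            alerts.append(f"{name}: failed to transact ({exc})")
            continue
        elapsed = time.monotonic() - start
        if elapsed > BASELINE_SECONDS[name] * TOLERANCE:
            alerts.append(f"{name}: {elapsed:.2f}s vs baseline {BASELINE_SECONDS[name]}s")
    return alerts

if __name__ == "__main__":
    for alert in pre_workday_run():
        print("ALERT:", alert)  # raised before the first real user logs in
```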
Another thing to consider is that these synthetic tests also cover SaaS and direct cloud access, as well as third-party collaboration services (Webex, Zoom, Teams, etc.). Some vendors' offerings integrate with their real-user measurements and baselines out of the box, realizing the benefit of both and providing even more measurements, calculations, and faster triage. Others may offer integration points such as APIs or webhooks and leave it up to you.
The value and the ROI are not so much one or the other. Those determinations for an organization should be measured by how you respond to my original answer: "you want to be proactive in ensuring a positive end-user experience."
Synthetic monitoring and real user monitoring (RUM) are two very different approaches that can be used to measure how your systems are performing. While synthetic monitoring relies on automated, simulated tests, RUM records the behavior of actual visitors on your site and lets you analyze and diagnose the issues they actually encounter.
Synthetic monitoring is active, while Real User Monitoring is passive; that means the two complement each other.
Principal Architect, Payment Platform at Change Healthcare
Consultant
Jun 23, 2020
Synthetic monitoring simulates traffic from various geographic locations 24/7 at some regular frequency, say every 5 minutes, to make sure your services are available and performing as expected. In addition, running synthetic monitoring with alerts on critical services that depend on external connections, such as payment gateways, will help you catch issues with those external connections proactively and address them before your users experience any problem with your services.
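As an illustration, here is a minimal sketch of such a recurring check against both a service and an external dependency. The endpoints and interval are hypothetical, the alert hook is a stand-in for a real notification channel, and commercial tools run this from managed probes in many regions rather than a single loop:

```python
# Sketch of a recurring availability check on a service and an external
# dependency (e.g., a payment gateway). All endpoints are hypothetical.
import time
import requests

CHECK_INTERVAL_SECONDS = 300  # e.g., every 5 minutes
ENDPOINTS = {
    "service": "https://example.com/health",
    "payment_gateway": "https://gateway.example.com/ping",  # external dependency
}

def notify(message: str) -> None:
    print("ALERT:", message)  # stand-in for email/pager/webhook

def check_once() -> None:
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code != 200:
                notify(f"{name} returned HTTP {resp.status_code}")
        except requests.RequestException as exc:
            notify(f"{name} unreachable: {exc}")

if __name__ == "__main__":
    while True:  # real deployments use a scheduler per probe location
        check_once()
        time.sleep(CHECK_INTERVAL_SECONDS)
```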
Founder and Solution Architect at The APM Practice, LLC
Real User
Jun 23, 2020
In production, synthetics are best used when there is little or no traffic, to help confirm that your external access points are functioning. They can also be used to stress test components or systems: simulating traffic to test firewall capacity or message queue behavior, among many other cases. You can also use synthetics to do availability testing during your operational day, again usually directed at your external points. Technology for cloud monitoring is generally synthetics, and the ever-popular speedtest.net is effectively doing synthetics to assess internet speed. The challenge with synthetics is maintaining those transactions. They need to be updated every time you make changes in your code base (that affect the transactions) and to cover all of the scenarios you care about. There are also the hardware requirements to support the generation and analysis of what can quickly become thousands of different transactions. Often this results in synthetics being run every 30 minutes (or longer), which, of course, defeats their usefulness as an availability monitor.
Real User monitoring is just that: real transactions, not simulated. You use transaction volume to infer the availability of the various endpoints, and baselines for transaction type and volume to assess that availability. This eliminates the extra step of keeping the synthetics up to date and living with the intervals at which you have visibility into actual traffic conditions. But it takes extra work to decide which transactions are significant and to establish the baseline behaviors, especially when you have seasonality or time-of-day considerations that vary greatly.
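To make the baseline idea concrete, here is a sketch, under assumed data shapes, of flagging a drop in real-user transaction volume against a per-hour baseline, which is one simple way to handle time-of-day seasonality:

```python
# Sketch of inferring availability from real-user transaction volume,
# with a per-hour baseline for time-of-day seasonality. The data shapes
# are hypothetical; a real RUM pipeline would feed this.
import statistics

def build_hourly_baseline(history: dict[int, list[int]]) -> dict[int, tuple[float, float]]:
    """Map hour-of-day -> (mean, stdev) of historical transaction counts."""
    return {
        hour: (statistics.mean(counts), statistics.stdev(counts))
        for hour, counts in history.items()
        if len(counts) >= 2
    }

def volume_looks_degraded(current_count: int, hour: int,
                          baseline: dict[int, tuple[float, float]],
                          sigmas: float = 3.0) -> bool:
    """Flag when volume drops well below the usual level for this hour."""
    if hour not in baseline:
        return False  # not enough history to judge
    mean, stdev = baseline[hour]
    return current_count < mean - sigmas * stdev

# Example: 9 AM normally sees ~1000 transactions; 120 is a red flag.
history = {9: [980, 1020, 1010, 995, 1005]}
baseline = build_hourly_baseline(history)
print(volume_looks_degraded(120, hour=9, baseline=baseline))  # True
```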
However, I'm seeing that the best measure of transaction performance is to add user sentiment to your APM. Don't guess at what the transaction volume means - simply ask the user if things are going well, or not! This helps you narrow down what activities are significant, and thus what KPIs need to be in your baseline.
A good APM practice will use both synthetics and real-user monitoring, where appropriate! You do not choose one over the other. You have to be mindful of where each tool has its strengths, what visibility it offers, and the process it needs for effective use.
Service Assurance, Senior Manager at a computer software company with 1,001-5,000 employees
Real User
Aug 3, 2021
Actually, RUM gives you the value after the fact.
I mean, once a customer has been impacted, RUM will show it. Synthetic user monitoring keeps testing your service 24/7, and you will be notified if there is an issue, without requiring any real user to interact with your service. The two components complement each other.
S/W Technologies & Processes Unit Manager at Unisystems
Real User
Jun 24, 2020
Synthetic Monitoring refers to proactive monitoring of the performance and availability of applications' components and business transactions. With this technique, the availability and performance of specific critical business transactions per application are monitored by simulating user interactions with web applications and by running transaction-simulation scripts.
By simulating user transactions, the specific business transaction is constantly tested for availability and performance. Moreover, synthetic monitoring provides detailed information and feedback about the reasons for performance degradation and loss of availability, and with this information performance and availability issues can be pinpointed before users are impacted. Tools supporting synthetic monitoring normally include features like complete performance monitoring, continuous synthetic transaction monitoring, detailed load-time metrics, monitoring from multiple locations, and browser-based transaction recording.
On the other hand, Real User experience Monitoring (RUM) allows recording and observation of real end-user interactions with the applications, providing information on how users navigate the applications, what URLs and functions they use, and with what performance. This is achieved by recording time-stamped availability (status, error codes, etc.) and performance data from an application and its components. RUM also helps in identifying the most commonly used or most problematic business transactions, so they can be properly configured for synthetic monitoring, as described previously.
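As a small illustration of that last point, here is a sketch that mines RUM records for the most common and the most error-prone transactions, the natural candidates for synthetic scripts. The record format (timestamp, URL, status, duration) is hypothetical:

```python
# Sketch: pick synthetic-monitoring candidates from hypothetical RUM records.
from collections import Counter

rum_records = [
    # (timestamp, url, http_status, duration_ms) - illustrative sample
    ("2020-06-24T09:00:01Z", "/login", 200, 850),
    ("2020-06-24T09:00:03Z", "/search", 200, 420),
    ("2020-06-24T09:00:05Z", "/checkout", 500, 2300),
    ("2020-06-24T09:00:09Z", "/login", 200, 900),
    ("2020-06-24T09:00:11Z", "/checkout", 504, 2100),
]

visits = Counter(url for _, url, _, _ in rum_records)
errors = Counter(url for _, url, status, _ in rum_records if status >= 500)

print("Most visited:", visits.most_common(2))  # prime synthetic candidates
print("Most failing:", errors.most_common(2))  # problematic transactions
```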
In real-user monitoring the load on the systems is different every time, based on the total number of users, applications, batch jobs, etc., while in synthetic monitoring we use what we call a robot, firing the same transaction, for example, every hour. Because it is the same transaction every time, you can determine the performance of the transaction. If you do this in DevOps, you can monitor the transaction before actually going live and minimize the risk of performance problems before going into production.
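As a sketch of that DevOps use, here is a hypothetical pre-production gate that fires the same scripted transaction and fails the pipeline if it regresses past a budget. The staging URL and budget are illustrative:

```python
# Sketch of a pre-production performance gate: time the same scripted
# transaction and fail the build on regression. URL and budget are hypothetical.
import sys
import time
import requests

STAGING_URL = "https://staging.example.com/api/checkout-health"
BUDGET_SECONDS = 2.0  # performance budget for this transaction

def timed_transaction() -> float:
    start = time.monotonic()
    requests.get(STAGING_URL, timeout=10).raise_for_status()
    return time.monotonic() - start

if __name__ == "__main__":
    # Median of a few runs smooths out one-off network noise.
    runs = sorted(timed_transaction() for _ in range(5))
    median = runs[len(runs) // 2]
    print(f"median: {median:.2f}s (budget {BUDGET_SECONDS}s)")
    sys.exit(0 if median <= BUDGET_SECONDS else 1)  # nonzero fails the build
```

Because the transaction is identical on every run, any shift in its timing reflects the system, not the workload.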
Synthetic monitoring is a method of monitoring your applications by simulating users and directing the path taken through the application. This provides information about the uptime and performance of your critical business transactions and the most common paths in the application. The simple reality is that there is no easy way to combine the accessibility, coherence, and manageability offered by a centralized system with the sharing, growth, cost, and autonomy advantages of a distributed system. It is here, at this intersection, that businesses turn to IT development and operations teams for guidance; APM tools enable them to negotiate these gaps.
For synthetic monitoring, a probe may be used from various geographies to simulate the communication between the user and the application. SEUM (Synthetic End User Monitoring) can produce a TCP waterfall chart to measure the performance of the application at each step of the entire client-server transaction. Every step is recorded to measure response and transaction time. A baseline can be created to measure performance under ideal conditions, since the transaction recording can be initiated under different conditions: no load, and from different locations such as the local LAN, WAN connections, internet connections, VPN connections, etc. Since every step is recorded, website performance, for example the login screen, the time taken for login to complete with backend authentication, backend database queries, etc., can be measured individually. The development team can thereby build a performance-improvement program for optimal application performance. The baseline can be used to compare against real-time user experience and help with further fine-tuning.
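As a rough illustration of per-step measurement, here is a sketch that times each step of a hypothetical multi-step transaction and renders a crude waterfall. The steps, URLs, and credentials are placeholders; real SEUM tools capture this down to TCP and TLS timings:

```python
# Sketch of per-step timing for a multi-step client-server transaction,
# the raw material for a waterfall view. Steps and URLs are hypothetical.
import time
import requests

def timed_step(name: str, method: str, url: str, **kwargs) -> tuple[str, float]:
    start = time.monotonic()
    requests.request(method, url, timeout=10, **kwargs).raise_for_status()
    return name, time.monotonic() - start

steps = [
    timed_step("load login page", "GET", "https://example.com/login"),
    timed_step("authenticate", "POST", "https://example.com/api/auth",
               json={"user": "synthetic", "password": "example-secret"}),
    timed_step("dashboard query", "GET", "https://example.com/api/dashboard"),
]

total = sum(elapsed for _, elapsed in steps)
for name, elapsed in steps:
    bar = "#" * int(40 * elapsed / total)  # crude waterfall rendering
    print(f"{name:<18} {elapsed:6.3f}s {bar}")
```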