Storage and Backup Engineer at a tech vendor with 5,001-10,000 employees
Real User
2019-10-16T12:48:40Z
Oct 16, 2019
The backup speed depends on:
- number of concurrent I/O streams
- data type
- network
- read/write speed of backup repository
- whether data encryption is enabled
- terabytes of front end data to be backed up
The question is not detailed enough to size a highly scalable, high-throughput environment. To achieve 100Gbps throughput, you have to work through the information listed above.
For a very large environment, I strongly recommend using either NetBackup or CommVault.
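The factors listed above feed directly into a back-of-the-envelope sizing exercise. As a rough sketch (the numbers below are hypothetical), the throughput needed to move a given amount of front-end data within a backup window can be estimated like this:

```python
def required_throughput_gbps(front_end_tb, window_hours, dedup_ratio=1.0):
    """Rough network throughput (decimal Gbps) needed to move
    front_end_tb of data within window_hours, after reduction
    by client-side dedup/compression (dedup_ratio)."""
    effective_bits = front_end_tb * 8e12 / dedup_ratio  # 1 TB = 8e12 bits
    seconds = window_hours * 3600
    return effective_bits / seconds / 1e9  # bits/s -> Gbps

# e.g. 360 TB of front-end data in an 8-hour window, no reduction:
print(round(required_throughput_gbps(360, 8), 1))  # prints 100.0
```

In other words, sustained 100Gbps only makes sense once the front-end data, window, and reduction ratio are pinned down.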
I would suggest Veeam with the underlying storage being provided by a Pure FlashArray//C.
The FlashArray will provide the throughput you are after (it's all-flash), the encryption (FIPS 140-2 certified, NIST compliant), and data reduction (Veeam's isn't that great), which should provide price parity with spinning disk. It also provides immutability, which you may need, and is a certified solution with Veeam.
The other storage platform worth looking at is VAST Storage, which has roughly the same feature set as the Pure arrays but uses a scale-out, disaggregated architecture and wins hands down in the throughput race against the Pure arrays.
Technical Presales Consultant/ Engineer at Ingram Micro
MSP
Top 5
2019-10-23T13:31:01Z
Oct 23, 2019
There is no such thing as the best "anything", let alone backups. There are plenty of enterprise solutions on the market that can handle the load you mentioned, and it all comes down to your needs.
Hardware encryption might be more secure than software encryption (tougher to hack, but still hackable); however, it opens the door to vendor lock-in, which in certain situations can affect the recoverability of your data.
My advice is to focus on finding a backup solution that can guarantee the recoverability of your data in the event of a disaster, rather than on the best backup at 100Gbps with hardware encryption.
At the end of the day, what's the point of a backup solution if it can do all that you mentioned but fails you in the event of a disaster?
If you can give me more environment details, such as what kinds of platforms and apps are being used, I may be able to assist. Other than that, my answer is that there is no such thing as the best backup for 100Gbps with hardware encryption.
We live in a world where everything is software-defined and it's safe to say that that's the way everyone should go.
We use the smallest Cohesity cluster possible, with three nodes, and have 60 Gbps of available bandwidth. I assume that with more nodes you could get to 100Gbps. They have flash and an unbelievable filesystem. Do you have a use case for 12,500 megabytes per second of backup throughput? I'm having trouble envisioning an admin in charge of a source capable of that coming to a forum like this with your exact question!
Backup & Storage Architect at a pharma/biotech company with 10,001+ employees
Real User
2021-08-13T12:01:51Z
Aug 13, 2021
Nowadays Cisco and other vendors are coming out with 25 Gig and 100 Gig ports. Your physical setup, whether physical servers or ESXi hosts (including backup servers), should be planned so it can connect to these switches to get a 100 Gig pipe. Data Domain, HPE StoreOnce, and Quantum DXi support hardware encryption. Identify the right hardware model that supports the right I/O for your disk backups; this will eliminate your bottleneck once you have the 100 Gig network. On the software side, you can go with NetBackup, Veeam, or Commvault. Each has its own option to reduce the data flow through client-side deduplication.
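Client-side deduplication, which that answer mentions, works by having the client hash each block and send only the blocks the repository has not already stored. A simplified sketch (fixed-size blocks and SHA-256 hashes are assumptions here; real products typically use variable-size chunking):

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for illustration only

def dedup_blocks(data: bytes, seen: set) -> tuple:
    """Split data into blocks, hash each one, and count how many
    would actually be transferred (i.e. not already in `seen`)."""
    total = sent = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        total += 1
        if digest not in seen:
            seen.add(digest)  # repository now "knows" this block
            sent += 1
    return sent, total

seen = set()
payload = b"A" * BLOCK_SIZE * 100  # 100 identical blocks
sent, total = dedup_blocks(payload, seen)
print(f"{sent}/{total} blocks sent")  # prints 1/100 blocks sent
```

Highly redundant data therefore consumes far less of the 100 Gig pipe than the raw front-end numbers suggest, which is why the vendors' dedup options matter for this sizing.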
Senior Manager, Corporate Marketing at a tech services company with 1,001-5,000 employees
User
2019-10-18T12:39:43Z
Oct 18, 2019
It seems an object store with inline dedupe could fit, but it would need to be sized for the performance. Backup targets are typically tuned for ingest. Is the data dedupable or compressible? How much data are you looking to back up, and in how much time? How much data do you need to restore, and in how much time?
SAN and UNIX Administrator at a comms service provider with 1,001-5,000 employees
Real User
2019-10-17T13:41:10Z
Oct 17, 2019
Your question is not clear enough to calculate the best scenario, because there are many factors it depends on, such as:
-Backup of what: a physical or virtualized environment?
-Data type.
-Network speed on all devices.
-Storage type: flash or tape.
-What is the read/write speed of your disks/tape, AND the bus/controller speed that the disk is attached to?
-How many files, and how much data, are you backing up?
-Is your backup application capable of running multiple jobs and sending multiple streams of data simultaneously?
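The multi-stream question is worth dwelling on: aggregate throughput grows with the number of concurrent streams only until some shared component (network, repository disks) becomes the cap. A toy model, with hypothetical numbers:

```python
def aggregate_throughput(streams, per_stream_mbps, bottleneck_mbps):
    """Aggregate backup throughput in Mb/s: scales with concurrent
    streams until the slowest shared component (network link,
    repository disk, controller) caps it."""
    return min(streams * per_stream_mbps, bottleneck_mbps)

# 16 streams at 800 Mb/s each against a 10 Gb/s repository link:
print(aggregate_throughput(16, 800, 10_000))  # prints 10000 (repository-bound)
print(aggregate_throughput(4, 800, 10_000))   # prints 3200 (stream-bound)
```

So adding jobs or streams only helps until the question shifts back to the disk, bus, and network factors listed above.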
Some potential points for improvement might include:
Upgrading switches and Ethernet adapters to Gigabit Ethernet or greater.
Investing in higher-performing disk arrays or subsystems to improve read and write speeds.
Investing in LTO-8 tape drives, and considering a library if you are not already using one, so that you can leverage multiplexing (multiple streams) to tape.
To be able to reach those read and write speeds, other factors also play a role: for example, network topology, NIC speeds, and the backup client's speed of data delivery.
Aside from that, you'll need larger files to reach that speed, since with smaller files there is always a ramp-up time.
So there is no straightforward answer.
But what kind of data or machines are you trying to back up? Knowing the OS, DB, and type of apps will help to give a definite answer.
Solutions that will always deliver are NetBackup (all apps, OSes, and DBs), Backup Exec (MS apps, Windows and Linux, and some DBs), and Veeam.
Sales Engineer at a computer software company with 11-50 employees
Real User
2019-10-16T21:51:23Z
Oct 16, 2019
While we do not sell or offer backup software per se, we do work with a lot of providers like Commvault, Rubrik, Veeam, et al. I can say that a majority of our user base, large global companies with 10+ offices, use Rubrik and implement it with the generic S3 output pointing to s3.customer_name.rstor.io. Under the RStor pricing model, we do not charge for puts/gets, reads/writes, or ingress/egress fees. And with triple geographic replication as a standard offering, customer data moves fast in all regions over a super-fast network with multiple 100G+ connections LAGged together, transferring 1PB in 22.5 hours from SJC to LHR!
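As a sanity check on the 1PB-in-22.5-hours figure, the implied average transfer rate can be computed directly (decimal units assumed):

```python
def effective_gbps(petabytes, hours):
    """Average transfer rate in decimal Gbps for moving
    `petabytes` of data in `hours`."""
    bits = petabytes * 8e15  # 1 PB = 8e15 bits (decimal)
    return bits / (hours * 3600) / 1e9

# 1 PB moved in 22.5 hours:
print(round(effective_gbps(1, 22.5), 1))  # prints 98.8
```

That works out to roughly 98.8 Gbps sustained, i.e. a near-saturated 100G path, which is consistent with the multiple LAGged 100G links described.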
There are plenty of tools out there at the moment, many include features like data encryption, e-discovery, and instant restore.
For the current use case, small company/no data center - I would recommend Acronis.
The commercial version of the product even includes a proprietary feature called active protection that is a ransomware defense tool that is unlike anything else on the market.
There are a lot of details that you would need to provide to properly answer your question. How much and what type of data? Is the data being accessed in production at the same time as it is being ingested? Can you use an agent on a client to do things like client-side dedup? The source (storage) is most likely going to be the bottleneck. Without answers to these questions and many more, I am not sure I can answer properly.
Having been in the industry for 20+ years and constantly staying up on the technology, I can tell you that the fastest backup and recovery solution I have seen to date is:
Source - Pure Storage FlashArray (encryption is always on)
Data type - VMware 6.x VMDK and VVol (including VMs with DBs and apps)
Backup Software - NetBackup 8.x using CloudPoint snapshot manager
Storage Target - Pure Storage Flash Array over 16Gb FC
Backups and Recovery were extremely fast (seconds).
But with everything... the devil is in the details and mileage will vary.
Personally, I would recommend looking at ExaGrid. It is the fastest backup and recovery target and can scale from 3TB to 2PB. It can also have 1GbE, 10GbE, or 40GbE connection speeds. Any appliance greater than 7TB can come with encrypted disks. Any backup software works with it except the proprietary ones, such as Avamar, Rubrik, and Cohesity. Also, it has no forced obsolescence like the NetBackup or other such appliances.
Cognizant Storage Data Protection Engineer at a insurance company with 5,001-10,000 employees
User
2019-10-16T17:05:40Z
Oct 16, 2019
There are any number of good backup software options. If the environment covers both physical and virtual, I would suggest trying NetBackup 8.1 or EMC NetWorker. Commvault is also very good; however, its license and maintenance costs are higher than Veritas or EMC.
Apart from file-system-level backup, they are good at managing backup and recovery for Exchange, SQL, Oracle, and NAS-level backups.
The throughput totally depends on the media server bandwidth, switch connectivity, FC/FCoE connections, and efficient backup resources for the media servers, with high-level planning and design.
For an exact recommendation, as we are all suggesting, the details and capacity of the overall environment are required.
You might just approach the backup software vendor for a POC test and demo overview before you finalize any tool.
Senior Sales Executive at Hewlett Packard Enterprise
Real User
2019-10-16T10:22:11Z
Oct 16, 2019
There isn't one single backup target device/appliance doing 100Gbps throughput on the market. To achieve that number requires multiple appliances, like HPE StoreOnce. It also requires a lot from the primary disk array and infrastructure to provide 100Gbps, i.e. multiple mid-range/high-end all-flash disk arrays, etc.
System Administrator at a financial services firm with 10,001+ employees
Real User
2021-11-29T10:07:07Z
Nov 29, 2021
This is not quite the right question: there are different areas in backup solutions, like software, network, and storage, and they are measured by different metrics than simple network bandwidth.
You can buy the best hardware in the world and still not get good performance; e.g. streaming data can perform better on rotational drives than on flash drives. In a backup process, you must consider data size and type, storage type, IOPS and throughput, the backup time window, and much more.
We've gone for Cohesity. It's a clustering solution that does distributed backups over the nodes, which gives you more bandwidth as you grow the cluster. So it mostly depends on the amount of data you need to back up in the environment.
Possibly Commvault can do it, but what's the data type? If the files are too small, no backup program will show 100Gbps. Also, what kind of backup repository will you use? Maybe an all-flash repository can handle this I/O.
What are your RPO and RTO requirements, and what SLAs do you have for your clients? Backup and recovery aren't normally performance-driven, since it's not tier 1 storage.
NetBackup 8.x with the NetBackup 5340 appliance would be good for backing up any enterprise environment at high speed.
For more detailed suggestions, please share your backup environment: server count, types of backups, volumetrics per month, data retention, etc.
IT Architect at a computer software company with 51-200 employees
Real User
2019-10-16T09:44:19Z
Oct 16, 2019
Usually, you need to back up data, not the hardware. Please share your current design, and then we will be able to share our thoughts.
In general, it can be any backup software that you are familiar with; you just need to properly size the hardware for it.
Data backup involves copying and moving data from its primary location to a secondary location from which it can later be retrieved in case the primary data storage location experiences some kind of failure or disaster.
I don't think backup appliances with 100Gbps interfaces exist.
This speed is not needed for backups, as the network is hardly ever the bottleneck.
In my experience, I generally prefer NetBackup appliances for fast backup and recovery, including encrypted data.
Cohesity is the solution. Parallel ingest provides the best performance and an impenetrable file system, with native FIPS 140-2 encryption.
Look into the backup tools provided by Veeam or Zerto.
It's all dependent upon the server/storage technology being deployed.