Backup systems should be lightweight, secure (immutable), reliable, and require little to no maintenance, with excellent reporting for both executives and engineers. The skill level needed to deploy and operate one should be Google-like in its simplicity. Once data is protected, it should be locked against change and able to validate itself with 100% accuracy.
When evaluating backup and recovery software, one of the most crucial aspects to look for is reliability and efficiency in data protection. The software should be able to consistently and accurately back up your critical data, ensuring that no information is lost in the event of hardware failure, data corruption, or other disasters.
Another key consideration is the ease of use and user-friendliness of the software. A good backup and recovery solution should have an intuitive interface, making it simple for IT administrators to set up backup schedules, perform data recoveries, and manage the entire backup process without unnecessary complications.
Moreover, the speed of backup and recovery operations is of paramount importance. The software should be able to perform backups quickly without causing significant performance impacts on your systems, and it should also facilitate rapid data recovery to minimize downtime and ensure business continuity.
Additionally, scalability is a crucial factor, especially for growing organizations. The software should have the flexibility to handle increasing data volumes and infrastructure expansion without requiring a complete overhaul of your backup architecture.
Data security is another critical consideration. The software should offer robust encryption options to protect your sensitive information during both transit and storage. It should also comply with industry standards and regulations to ensure your data remains compliant and secure.
Data Integrity ensures that backups are accurate and uncorrupted, preserving continuity. Scalability allows the solution to grow with data needs, accommodating increasing amounts of data without degrading performance. Security Protocols are essential to protect sensitive information during the backup and recovery processes, implementing encryption and secure access controls.
Ease of Use highlights the importance of an intuitive design, ensuring that operators can quickly understand and manage the system efficiently. Support and Maintenance reflect the need for robust customer service and ongoing updates, preventing potential issues and keeping the solution optimized for performance. Each of these features contributes to a comprehensive and reliable Backup and Recovery solution.
Flexibility to fulfill a range of requirements:
- Recovery Time Objectives (RTO): how long the business can live without the data, and how the solution meets different restore-time requirements
- Recovery Point Objectives (RPO): how much data the business can afford to lose in different incident scenarios
- Backup Time Objectives (BTO): how efficiently the solution protects the data
- Resource utilization: how cost-efficient the solution is with resources; inline vs. post-process data reduction; progressive incremental forever, with or without rebuilding base data
- Maintenance tasks: data retention management, protecting the solution itself, online/offline upgrades, and so on
- Support from the vendor
- Price of the solution
- Licensing limits: gentlemen's agreements or hard limits
- The ability to use different retention policies, exclude content, use different storage targets, keep extra copies, etc.
- Security of the solution
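To make the RTO/RPO criteria concrete, here is a minimal sketch (illustrative only; the schedule figures are hypothetical) of how a fixed backup schedule maps to a worst-case recovery point:

```python
# Worst case: a change written just after a backup starts is only
# protected when the *next* backup completes.
def worst_case_rpo_hours(interval_h: float, duration_h: float) -> float:
    """Worst-case data loss window for a fixed backup schedule."""
    return interval_h + duration_h

def meets_rpo(interval_h: float, duration_h: float, rpo_target_h: float) -> bool:
    """Does a given schedule satisfy the business's RPO target?"""
    return worst_case_rpo_hours(interval_h, duration_h) <= rpo_target_h

# A 4-hour schedule where each backup takes 1 hour gives a 5-hour worst case:
print(worst_case_rpo_hours(4, 1))  # 5
```

The same arithmetic is what turns a business statement ("we can live without at most 6 hours of data") into a schedule requirement.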
Philosophy: Why back up data again if the data has not been changed?
The fastest way to protect data is to not back it up (again)
Philosophy: Why restore all data if you can restore only the data needed?
Instant recovery or restoring single objects
Integrating the backup process with applications such as PostgreSQL and Oracle, so that archive logs / WAL logs are protected the moment they are created, improves the RPO. This can be done using SPFS, a filesystem for Spectrum Protect.
Taking application-consistent snapshots stored on Spectrum Protect storage, using efficient data transfer (progressive block-level incremental forever), reduces the time needed to take backups and saves resources on both the backup server and the protected server. This can be done using SPIR - Instant Recovery for Spectrum Protect.
Restoring only what is needed can be performed with native backup software such as Spectrum Protect. Provisioning an application-consistent snapshot to a server and accessing the data while the restoration runs in the background can be done using SPIR - Instant Recovery for Spectrum Protect. This lets clients access the data directly, select only what they need to copy back to the origin, or use it as production data straight away.
Spictera Unified Storage is immutable storage with an agentless approach, designed for simplicity, security, and flexibility. With this solution one can protect any device, anywhere, using any media. Everything is easily managed centrally, with filtering (include/exclude rules), versioning, retention management, replication, data reduction in transit and at rest, encryption in use, in transit, and at rest, tiering, and many more features. Access is via file, bucket/object storage/S3, or VTL (Virtual Tape Library), with snapshots and instant restores. This is probably the only climate-smart, energy-efficient Green IT solution on the market that helps reduce CO2 emissions. www.spictera.com
PMO y CIO - Tecnologías de Información at a consumer goods company with 501-1,000 employees
Real User
Sep 19, 2019
There are several aspects:
1) The frequency at which you need the backups of the files, folders, and/or servers in question to run, since in theory this frequency is both your closest and your farthest recovery point at the same time.
Example 1: If you define a backup every four hours, then in case of a problem you will be able to recover what you backed up up to four hours ago.
2) The estimated size of what you need to back up vs. the time it takes to back it up.
Example 2: If you are going to back up 300 GB every four hours and the process takes 8 hrs (because your information is sent to an MDF / mirror site over an internet link or similar), then you will not be able to back up every 4 hours; you will have to do it every 8 or 9 hrs.
Example 3: If you are going to back up 50 GB every four hours and the process takes 1 hr (again sending your information to an MDF / mirror site over an internet link), then you will have no problem starting the next backup 4 hours later.
3) The application's ability to schedule (in sequence and/or in parallel) what you need to back up.
Example 4: Suppose some files, folders, and/or servers need to be backed up every 4 hours, others every 12 hrs, others every 24 hrs, and others perhaps every week. In this case you have to estimate the worst-case scenario very carefully: the moment when all of these backups coincide and slow the process, so that the subsequent scheduled backups still run without setbacks.
4) The flexibility of the application in running incremental or full backups.
Example 5: Here the question is what the application will do if a backup fails. Does the incremental portion that was not backed up start again from scratch? Does it leave a restart point for the process, and if so, how reliable is it? Will it force you to run a FULL backup that no longer takes 4 hrs but 24 hrs or more, so that your schedule has to be re-planned?
5) While it is true that restoration is the most relevant part, before that you must make sure that everything that should be backed up really is backed up.
In these aspects www.datto.com is what worked best for us.
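The window math in Examples 2 and 3 above can be sketched as a simple feasibility check (an illustration using the poster's hypothetical figures, not any product's logic):

```python
def backup_fits_window(size_gb: float, throughput_gb_per_h: float,
                       window_h: float) -> bool:
    """True if the data set can be transferred within the scheduled window."""
    return size_gb / throughput_gb_per_h <= window_h

# Example 2: 300 GB at ~37.5 GB/h takes 8 h, so a 4-hour schedule fails:
print(backup_fits_window(300, 37.5, 4))  # False
# Example 3: 50 GB at 50 GB/h takes 1 h, well inside a 4-hour window:
print(backup_fits_window(50, 50, 4))     # True
```

When the check fails, either the window must grow (Example 2's move to 8-9 hrs) or the transferred volume must shrink, which is where incrementals and deduplication come in.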
1. Data integrity
(e.g., fast recovery capability, a high restore success rate, and the ability to automatically verify or open backed-up data to confirm quickly that it can be restored correctly).
2. Data availability
(e.g. the ability to successfully back up in the backup window).
3. Integrate with the rest of the infrastructure
(e.g. automation, the ability to create scripts when backing up or restoring or syncing data).
4. Easy to use
(for example, an interface where the necessary functions are easy to find and jobs can be arranged into a process sequence).
5. Confidentiality, data encryption and data protection.
6. The ability to integrate with standards such as the General Data Protection Regulation (GDPR); centralized data management and uniform data control; access to backed-up data via token or USB smart card.
@Thang Le Toan (Victory Lee) look at progressive incremental forever techniques. Philosophy: Why back up data again if the data has not been changed?
The fastest way to back up data is to not back it up (again).
Excluding content is also something to check.
Progressive incremental forever also helps with restoration, as only one backup is needed to restore the data (no need to restore a full backup and all its incremental backups).
IBM Spectrum Protect has these features; see also SPIR - Instant Recovery for Spectrum Protect.
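The progressive-incremental-forever idea can be illustrated with a toy file-level selector (a sketch of the philosophy, not Spectrum Protect's actual mechanism): only files whose size or modification time differ from the server catalog are sent again.

```python
import os

def select_changed(paths, catalog):
    """Progressive incremental forever, file level: send only files whose
    size or mtime differs from the server catalog; unchanged files are
    already protected and are skipped entirely."""
    changed = []
    for path in paths:
        st = os.stat(path)
        signature = (st.st_size, st.st_mtime_ns)
        if catalog.get(path) != signature:
            changed.append(path)
            catalog[path] = signature  # catalog now reflects the new backup
    return changed
```

A restore then needs only the latest catalog entry per file, instead of a full backup plus a chain of incrementals.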
How is it supported? Are problems resolved by correcting issues, or do you have to wait for a new version or patch?
I distinctly prefer TSM/SP since, like my favorite tools, it is a tool (that requires understanding and higher-level thinking to properly configure; it is not in any way shrink-wrap software) and is limited primarily by imagination as opposed to product limitations.
The system should be smart and use few resources.
- Why take periodic full copies if the data has not changed?
The system should be able to mix different media types.
- Why power storage that is not being used?
The system should allow replication copies.
- If required, store a backup or archive copy in different locations.
Systems that use replication should be able to operate without knowledge of each other.
- Metadata about backups or archives should be kept at all replication sites.
Data reduction techniques are built in.
- Possible directly on the client, on the server, or a combination of both.
Encryption of storage or backup/archive.
- Using private keys, OS hardware, or the storage pools.
Easy to customize with policies
- What data to filter in or filter out
- Which media to use
- What retention to use, or versions
- How many copies
- What to encrypt
Are agent installations needed?
- How easy are they to use?
- How flexible are they?
- What techniques do they use? (Open-source databases have many different techniques, such as pg_dump/pg_basebackup/pg_probackup, to protect PostgreSQL.)
- Can the techniques be changed?
- If agentless, how does that work with transactional data? How do they access the database data?
- Instant restore (data available immediately), or do you have to wait?
Air gap protection / cyber and ransomware protection
- the ability to protect data from being destroyed
We found the Spictera solutions interesting, as they can mount the backup storage directly as a local drive letter or filesystem.
This makes traditional backup easier, as almost all applications can protect their data to a directory path.
And users do not have to learn how to use an agent, as they already know how, or can follow vendor-specific instructions to back up and restore the data.
It is easy to browse or copy existing backup copies if required, as everyone knows how to use a filesystem, right?
The data stored on the filesystem is protected against ransomware.
Spictera also has a solution for taking application-consistent snapshots, storing the backups on the IBM Spectrum Protect backup server using progressive block-level incremental forever (always incremental) techniques.
This reduces the energy needed for backups, and also for restores, since snapshot backups are reanimated as a local snapshot disk on the server, with nearly instant restore to the same or a new location on the same or a new server.
- Why wait for the data to be restored if one can use it immediately?
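The block-level incremental technique mentioned above can be sketched like this (a toy illustration, not SPIR's implementation): hash fixed-size blocks of an image and re-send only the blocks whose hashes changed since the previous backup.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB block size

def changed_blocks(image: bytes, previous_hashes: list) -> list:
    """Split an image into fixed-size blocks and return (index, block)
    pairs for blocks whose SHA-256 differs from the previous backup;
    unchanged blocks are never re-sent."""
    out = []
    for offset in range(0, len(image), BLOCK_SIZE):
        block = image[offset:offset + BLOCK_SIZE]
        index = offset // BLOCK_SIZE
        digest = hashlib.sha256(block).hexdigest()
        if index >= len(previous_hashes) or previous_hashes[index] != digest:
            out.append((index, block))
    return out
```

With this scheme an unchanged disk costs almost nothing to protect, which is the energy-saving point made above.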
When deploying backup solutions we look at features that work the way we expect them to.
Data should be deduplicated to retain quick efficient backups while actually being able to restore without issue. Restoring databases, mailboxes, and domain controllers is particularly difficult for some well-known vendors. We have observed many instances of potential clients having failed restores with "successful" backups. So, having reliable restores is a must. Test often!
Backups must be flexible to meet customer needs with custom retention times while providing quick restore options.
The UI must be easy to use or mistakes will be made during the configuration of backup jobs.
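The deduplication point above can be illustrated with a toy content-addressed store (a sketch, not any vendor's engine): identical chunks are stored once, no matter how many backups reference them.

```python
import hashlib

class DedupStore:
    """Toy content-addressed chunk store for deduplicated backups."""
    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> chunk bytes

    def put(self, chunk: bytes) -> str:
        key = hashlib.sha256(chunk).hexdigest()
        self.chunks.setdefault(key, chunk)  # stored only once
        return key

    def get(self, key: str) -> bytes:
        return self.chunks[key]

store = DedupStore()
k1 = store.put(b"same block")
k2 = store.put(b"same block")
print(k1 == k2, len(store.chunks))  # True 1
```

A restore simply walks the list of chunk keys for a backup and reassembles the data, which is why dedup must always be validated by test restores, not just by backup job status.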
PMO y CIO - Tecnologías de Información at a consumer goods company with 501-1,000 employees
Real User
Oct 15, 2020
Exactly. In line with what has been mentioned, I would add the order of priority I would give them, from my experience (and of course subject to your best judgment).
From the last backup of the data you have until the moment you invoke the contingency is your RPO (Recovery Point Objective), and from when you invoke the contingency until you restore the data is your RTO (Recovery Time Objective).
It seems to me that the RTO is more important to consider, because it is always the longest part of the process; if we view this as a critical path, the RTO is your critical path.
@Raul Garcia Yes, if the RTO correctly includes the time between when the incident happened and when everything is recovered. Some miss the time between "when it really happened" and when someone actually executes the restoration, and start counting only from the latter.
Also, a lot of people and vendors stop measuring recovery time when the data is back, and miss the part where humans need to figure out which data is missing so it can be re-entered into the system.
That remaining part can be very time-consuming.
So RPO is also very important. Sometimes it can be better to restore to an older state than a newer one, because we are humans.
A human may find it hard to remember what was done in the last hour, but may find it easier to remember what was done up to the lunch break.
A system that integrates with databases and secures transactional data immediately helps protect your data faster, with less data loss and an improved RPO.
One example is SPFS's way of storing data immediately on a Spectrum Protect server.
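For context, the "protect transaction logs the moment they are created" idea maps onto PostgreSQL's standard continuous WAL archiving; a minimal sketch (the /backup/wal destination is a placeholder, e.g. a mount point presented by a backup filesystem such as SPFS):

```
# postgresql.conf -- continuous WAL archiving (sketch)
wal_level = replica
archive_mode = on
# %p = path of the completed WAL segment, %f = its file name;
# /backup/wal is a hypothetical mount point on protected storage
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
```

Each 16 MB WAL segment is handed to the archive command as soon as it is complete, so the recovery point tracks the transaction log rather than the last scheduled backup.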
Lead Systems Software Engineer at Tucson Medical Center
Real User
Sep 18, 2019
It is really "Recovery" software, not "Backup" software, so it is the recovery features that are paramount.
Recovery considerations:
o What type of data do you need to easily recover:
- Single files from VMware images
- Entire VMware systems
- Needed OS support: Windows, AIX, Linux
- Exchange: database, single mailbox, single message
- SharePoint: database, document, or share
o Administration
- You don't want to need a dedicated "backup administrator".
+ Should be a quiet, reliable, background operation
+ Avoid solutions that depend on a Windows server that has to be maintained and patched
+ Avoid solutions that require expensive skill sets such as UCS/VMware/Linux/Windows/AIX etc.
+ Upgrades should be painless and done by support, not the customer
- Hoopless recovery
+ Should be so simple the help desk could perform recoveries
+ Self-serve sounds good and could be a plus, but experience has shown me that they call the help desk anyway
- Self-monitoring
+ Space issues, problems, and the like should raise alerts
+ Success should be silent
+ Should not have to check on it every day
o Quick recovery
- Recovery operations usually happen with someone waiting and are usually an interruption to your real work.
- You want it to recover while you are watching so you can tell the customer it is done and get back to your real job
Backup Considerations
o System Resources
- Should be kind to network
+ Incremental backups based on changed blocks not changed files
+ Should automatically throttle itself
- Memory and CPU utilization
+ Backing up should be a background process that does not require much CPU or Memory on the host being backed up
+ Primarily this is for client-based backups
- VMWare considerations
+ If using VMware snapshots, consider the CPU and I/O load on ESX servers... you may need more
o Replication
- If the backup host is local, a remote replica should be maintained at a remote location
- Replications should not monopolize WAN links
- Recovery should be easily accomplished from either location without jumping through hoops
Random thoughts
o It is OK to go with a new company as opposed to an established one
- Generally speaking, a backup/recovery solution has a capital life span of about 3 to 5 years
- Generally speaking, moving to a new backup solution is fairly straightforward
+ Start backing up to the new solution and let the old solution age out
- So it is OK to look at non-incumbent solutions
o When replacing an "incumbent", vendors will often give deep discounts
o Don't feel like you are stuck if an upgrade path looks like you are having to buy a solution all over again
o Do a real POC. It does not really take that much time or effort... or shouldn't. If it does, it is not the right solution
o At the end of the day, if it successfully and reliably backs up and recovers your data, it is a working solution
@PaulLemmons Yes, backing up data in a smart way does not mean one can restore it in a smart way. E.g., a VM image backup includes more data than needed, and it can be problematic to find and restore an individual object (file).
Mixing different architectures is also something to consider. We use mainframe, hypervisors (KVM/VMware/Xen), Windows, Linux, commercial Unix, BSD, iSeries, and Digital's VMS.
Some have requirements for vaulting offline media such as tape, and it is good if the software can handle these too.
For this we selected IBM Spectrum Protect.
To improve RPO, SPFS can be used to back up transaction logs immediately using Spectrum Protect.
Enterprise Manager Infrastructure and Operations at McGrath RentCorp
Real User
Sep 18, 2019
It depends on your operations structure; however, in all cases, a solution that can reliably back up your targeted data within your time window, and restore that data in a timeline that meets your business needs, is most important. If it can't do that task, it doesn't matter what it costs, how easy it is to integrate, or how intuitive the UI is.
Senior Principal Product Manager at Veritas Technologies
Real User
Sep 18, 2019
There are two questions here, really. One is technical, and the other is political.
So often, over the years, I have found that the political one is the hardest and the one that tends to have more sway. I have seen, so often, that companies will have global standards, and yet someone always seems to find a way to break those standards and do what they want... and this is the basis of the rise of the new data protection companies.
Once upon a time, there were mainframes, and it was easy. Then we had distributed systems and this is where fragmentation started. I personally had to unify a data protection infrastructure that had 13 different OS' and 5 different data protection products. Just as I did that, that company started a different business unit... and they chose a different data protection product.
Then we got virtualisation, and the teams that ran that environment often operated as a separate unit, and so chose their own backup product... which tended to be a new product, because these concentrated on just that one platform. This enabled them to be focused and, arguably, deliver a better solution... for that one platform.
Now we are seeing a plethora of solutions that are coming up and their concentration is cloud providers. Even AWS is getting in the game with a solution, but concentrating on their cloud. This is the new battle ground.
Technically, you can choose one solution. That solution must:
1) Guarantee restore
2) Back up within the required backup window
3) Cover traditional enterprise (which matters less and less), virtual/HCI, and cloud
4) Enable you to put that data wherever you need it, so that the restore can happen within the desired window
5) Be low cost to run; that is infrastructure, software, facilities, and people cost, not just software
6) Be scalable
Above all of this, though, a company needs the political will to force errant departments/people to bend to the corporate decision. Without that, the corporate environment will always be fragmented, will never get the best deal it can from whichever vendor it chooses, and will always waste time fighting off encroachments from other vendors.
Chief Information Officer at a financial services firm with 51-200 employees
Real User
Sep 18, 2019
There are a ton of great answers below. They highlight all the characteristics of a good backup solution and those characteristics are important. For me, the ability to restore successfully is the one key characteristic. Imagine a 100% secure, easy to use, centralized, deduplicated, inexpensive, fast backup solution that, when you go to restore from it does not work. Does it matter that it is fast and cheap? Does it matter if it is centralized or deduplicated? Not in my view. The key is the ability to restore, and everything else is specific to your needs.
ABP Food Group Infrastructure team lead at a retailer with 1,001-5,000 employees
Real User
Sep 18, 2019
The primary features that any backup solution needs to provide are:
1. Ease of deployment
2. Clarity of licensing and support from a commercial point.
3. Ease of restoration on file and server level.
4. Ability to store backups offline to prevent corruption in the event of a security breach.
5. The speed that these backups can be accessed and deployed needs to be documented for Business and Operations.
Once these points are covered, the other features are nice to have but not essential.
- Check the reliability of the backup software
- Which OSes need to be backed up
- Physical, virtual, or both
- What kind of infrastructure is set up
- How big the environment is
- How many servers every night, and the growth rate
- What the RPO and RTO are for recovery
- Any cluster backups for any OS
- Ease of use
- Expertise on the OS; the backup admin should have in-depth OS knowledge
The success of recovery: "You are only as good as your last backup."
Ease of the recovery process, speed of the backup, offsite replication options (DR plan), scalability.
Support from the vendor.
Works at an insurance company with 1,001-5,000 employees
Real User
Apr 23, 2019
Primarily RTO and RPO.
Backup performance with respect to speed and the deduplication mechanism.
An in-depth understanding of the backup process flow.
Ease of access/use/administration.
Good scope for automation & reporting on the management console.
- Data Integrity (e.g. ability to restore / restore success rate)
- Data Availability (e.g. ability to successfully backup within backup window)
- Integration with the rest of the infrastructure (e.g. automation, scripting capabilities)
- Ease of use
- Data Security
Senior Storage / Backup Administrator at a healthcare company with 10,001+ employees
Real User
Oct 25, 2017
"How will this product help us better meet the associated business requirements such as storage requirements (local and DR), data retention requirements (both internal and regulatory), and security?" is the first question I ask myself...
Then the Basics:
-------------------------------
- Deduplication / Compression / Encryption REQUIRED
- Restore Throughput for different types of data (File System, Virtual, SQL/Oracle, etc.)
- Reporting (not only backup/restore metrics, but overall health of the environment) & Custom Reports
- Automation Automation Automation
- Centralization
- Capacity Based Licensing
- How is the Technical Support?
- SLAs, RPO/RTO - how will this be affected?
Systems Administrator at a construction company with 51-200 employees
Real User
Oct 12, 2017
Speed to backup and restore
Tech support availability should something go wrong, especially during off hours.
Size of storage to fit your backups
Ease of management
1. It should integrate seamlessly with different operating systems and virtualization technologies.
2. Less administration after initial setup.
3. Easy and fast recovery of backup.
IT / IS Manager at a real estate/law firm with 1-10 employees
User
Nov 12, 2015
I feel the top three priorities for a backup solution are...
1. Recovery
2. Recovery
3. Recovery
All the KPIs, must-haves, should-haves, and nice-to-haves are important, but without good, reliable, and tested recovery they mean very little when you have to explain "There is nothing we can do to recover" to the CIO, the CFO, or Lenny in accounting.
Senior Storage and Backup Consultant & Technical Instructor at Free Lance
Consultant
Aug 4, 2015
I've installed a lot of different backup software for different customers, and from my personal point of view the right one is the one that best fits the customer's needs; you need to keep one eye on the current environment and one on the forecast growth in terms of data, technology, and budget.
From this point of view it is always better to go with one of the main players, since you will always have support, development, and a good portfolio of products. I've also found it very useful to ask my backup vendor's pre-sales team about specific solutions. If you have constraints, it is always better to specify them. Also, have your vendor run training on the solution before the implementation, not after; this will help clarify points that may be in a grey area so they can be fixed or implemented during installation and configuration of the product.
No single product will do the magic for you; you need to specify what you want and ask for it.
Network Administrator at a construction company with 501-1,000 employees
Real User
May 15, 2015
That in itself is a loaded question. Every company has different needs, so the first question that needs to be answered is what your needs are: data retention requirements, data importance, recovery time vs. revenue loss, network speeds from production servers to the backup server or servers, whether you are required to be HIPAA compliant; those are just a few of the first questions that come to mind. All of that aside, what if your backup option fails? Do you have a backup plan for the production server that you can't restore, because your only backup software didn't do its job right, it just failed, and the support technicians can't help you restore it? But that goes full circle to the first question: what are your needs?
- The RTO is vital;
- Latency data transfer, mainly remote and long distance backups;
- Performance on Restores and Recovers;
- Retention policies;
- Data recovery policies;
- Service Availability (24/7);
- And the last but not the least: The budget.
- The average daily or weekly change in your data
- A high level of recovery options
- Storage media type based on corporate needs
- The size & type of the data
Make sure the software functions as advertised... we tried Nakivo and were sorely disappointed; after 30 days they were still unable to make their own software function.
Director of Information Technology and Telecommunications at Câmara Municipal de Campinas
Real User
Mar 7, 2022
Before getting into the technical aspects, we need the solution to have a quality team, agile handling and resolution of incidents and problems, and extensive documentation.
Now, on the technical side, I believe that RPO and RTO rates are paramount to consider.
IBM Spectrum Protect Expert at a non-tech company with 10,001+ employees
Real User
Nov 29, 2021
Hello,
I think the most important thing is to know what your environment looks like and then challenge the tool against it, so that you do not multiply the solutions in your backup & recovery strategy.
Having too many solutions for the same purpose will give your admin colleagues headaches putting them in place, and even more so when you face a disaster to recover from, be it ransomware or a simple DC outage (whether on- or off-premises).
So when it comes to choosing a solution, I apply the KIS methodology with regard to teams and infrastructure. KIS stands for "Keep It Simple".
Business Development & Product Manager at Prianto Ltd
User
Nov 16, 2021
Choose a single vendor all-in-one solution with a Single Management Console that can accommodate all your requirements both now and in the future as you grow.
Start with basic backup and recovery with user self-service and immutability (now required by cyber insurers), but also consider future needs: can you later add direct-to-cloud backup for home users' workstations; backup for SaaS such as Microsoft 365, Google Workspace, or Salesforce; solutions for AWS, Azure, and cloud VM workloads; and DR from the location of your choice (on-prem, a second site or DC, or the vendor's cloud), adding DR (or DR-as-a-Service) as an add-on when you are ready to take that step?
Scalable Enterprise functionality, encryption, deduplication, customer service and pricing. There is only one.
I would really appreciate more targeted questions. And not from Content Directors @ IT Central Station. Let real users ask their questions. Every environment has different needs and urges.
Let real users speak what their key points to cover are. There are enough sites out there discussing general questions.
Manager - Cybersecurity & Cloud Solutions at Paramount Computer Systems
Reseller
Nov 16, 2021
There are many factors you have to take into consideration while selecting a backup and recovery vendor, as below:
1 - Compatibility with applications like SAP, databases, SharePoint, and Exchange, and with various operating systems and versions (Windows client and server, Linux, Solaris, etc.), in both physical and virtual environments.
2 - Performance of backup and restore; here you have to check compression/deduplication capabilities.
3 - Integration capability with other platforms like hyper-converged infrastructure or mainframe, if they exist.
4 - Ease of use of the management console, especially if you have more than one environment you want protected.
5 - Complete data protection SW/HW.
6 - Security features like ransomware protection, recovery verification, etc.
Senior System Engineer at a comms service provider with 201-500 employees
Real User
Oct 10, 2020
Your business needs must be the main driver for selecting your backup and recovery solution. Look to what are the needs and translate it in terms of RPO, RTO, and retention then reflect these measures to see what solution can achieve these measures for workloads.
An enterprise backup and recovery solution must be able to protect ANY workload. It must scale easily as data sets grow and shrink. Management should be autonomous: once the policies are in place and the source data is identified, the solution should protect, validate, report, and automatically test recovery without an admin babysitting it. The admin is there to set up policies and watch for alerts.
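Translating business needs into RPO, RTO, and retention targets turns vendor selection into a mechanical screening step. A minimal sketch of that idea follows; all names and figures here are illustrative assumptions, not data from any real product.

```python
# Hypothetical sketch: screen candidate backup solutions against
# business-derived RPO/RTO/retention targets. All names and numbers
# below are illustrative, not real product figures.

def meets_requirements(solution, rpo_minutes, rto_minutes, retention_days):
    """Return True if a candidate solution satisfies every target."""
    return (solution["rpo_minutes"] <= rpo_minutes
            and solution["rto_minutes"] <= rto_minutes
            and solution["retention_days"] >= retention_days)

candidates = [
    {"name": "A", "rpo_minutes": 60, "rto_minutes": 240, "retention_days": 365},
    {"name": "B", "rpo_minutes": 15, "rto_minutes": 60,  "retention_days": 2555},
]

# Business needs: lose at most 30 min of data, recover within 2 h, keep 7 years.
shortlist = [c["name"] for c in candidates
             if meets_requirements(c, 30, 120, 2555)]
print(shortlist)  # ['B']
```

The point is less the code than the discipline: if a requirement cannot be written down as a number, it cannot be verified against a vendor's claims.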
Business Development Manager at Hewlett Packard Enterprise
Real User
Sep 20, 2019
One should assess against proper expectations/requirements:
1. Compliance requirements and options to address them:
a. e.g. GDPR – do you prefer granular restores, or do you plan on restoring entire environments to conform to Article 11 for all your data protection processes, or granular restore of historical data (+90 days) using a self-healing portal?
2. SLA requirements for backup window (BW), RPO, and RTO:
a. Keeping in mind the flash revolution and the significantly higher UBER of flash media vs. magnetic disks, the need is for BW = RPO = RTO -> 0
b. additional use cases for the DP system:
i. cloning production environments for UAT, QA, and T&D purposes.
ii. supporting a DR solution using another site/cloud services
3. Assess what new technologies/systems/projects IT would likely incorporate in the next 24 months:
a. HCI, containers, cloud services, and archiving solutions, along with the existing environment and decommissioning plans
b. intelligent storage with integrated data protection services in hardware
c. DR solution compliance
In short, I would opt for a highly automated DP solution integrated with HA/DP, preferably implemented in hardware, rather than a sophisticated, feature-rich, and costly one. The main target is to reduce or eliminate maintenance windows while staying compliant and having a certified, supported environment/solution.
Principal Systems Engineer at a tech services company with 5,001-10,000 employees
Real User
Sep 20, 2019
When evaluating backup and recovery software, what aspect do you think is the most important to look for:
1. The different Applications that are used by the customer
2. Current Network used
3. Current Storage used
4. Customer expectation
5. Customer retention Policy requirements for current data
6. Customer Archiving requirements for old data
7. Data classification
8. Customer SLA to the business
9. Customer RTO and RPO
10. How much the customer wants to spend on Backup Software
The above is what I normally ask the customer, and based on the answers, it determines the best backup software the customer requires or is willing to pay for.
The primary considerations regarding backup are simply, “Is my data protected, available and current?”
Tape meets the first criteria, protection, but fails miserably on the availability and currency metrics – you have to find and recall the tape(s), condition them, load them, then begin the restore process.
Data Deduplication Appliances meet the protection and availability requirements, but typically suffer from poor data currency – you would be restoring relatively ‘stale’ data or simply losing hours if not days of work.
The only solution I have found that meets all 3 requirements is Zerto. It uses hypervisor-based continuous data replication, writing a journaled copy to disk for not only your database but all component VMs in your application. This enables 3 types of immediate recovery with as little as 15 to 30 seconds – seconds, not minutes, not hours – of data at risk:
1. You can recover at the file, volume, or individual VM level
2. You can recover the entire application in a remote site – your DR solution,
3. You can reverse sync the entire application back to the primary site.
Since all Zerto implementations are symmetrical, and since a failover takes only 15-20 minutes to ‘User Ready’, failing over is often the quickest way to restore an application to service.
A few other key points that are inherent with Zerto:
1) No agents on the VMs to manage and update
2) No snapshots – ever
3) No proxy servers to drive up costs
4) Point-In-Time Recovery enables you to recover from database corruptions as well as things like ransomware.
5) Older data, say older than 7 days can be automatically migrated to lower-cost storage or public cloud
6) Zerto works virtually identically in 4 modes:
- On-prem for VMware and Hyper-V,
- In a hybrid cloud mode between on-prem and Azure, AWS or other VMware Public Cloud providers (one of several hundred that support Zerto globally).
- In public cloud-resident or cloud-native apps
- Between public clouds, e.g. Azure to AWS
So while this may sound like an advertisement for Zerto, I assure you it is one of the very few applications I have ever seen that does what the vendor claims – out of the box. And when measured against the bar it sets, no other backup solution even comes close. So like my grandfather used to say, “If you’re telling the truth, you ain’t bragging.”
RPO and RTO are the most important things when choosing backup and recovery software, and I usually prefer to secure the data on its own file system during the backup for security.
I work for a backup vendor so I can't be considered as an unbiased adviser, however, from my experience with our partners they mostly look for ease of use and speed of recovery. When disaster strikes it could cost businesses millions in unwanted downtime. Hope this helps and thank you for the opportunity.
System Administrator at Bakhresa Group of companies
Real User
Sep 19, 2019
The most important feature of any recovery software is the flexibility of the solution, with a simple and easy recovery procedure.
Also, you should consider recovery time: the time taken to recover from the disaster.
Configuration of backups should be simplified
Backup software should support dedupe
Backup software should support easy recovery options
Data which is backed up should be reliable
Above all, vendor support should be provided for backup configuration and restoration.
Technical Presales at a tech services company with 11-50 employees
User
Sep 19, 2019
One of the key factors to consider is the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of the backup solution, depending on the individual corporation's policy on turnaround time for restoration as well as how current the data should be. Another important factor is the availability of a deduplication feature, as this has a direct impact on storage sizing for the backup retention period.
The cost of implementation and maintenance will grow as the RPO, RTO, and data capacity grow, regardless of which backup and recovery solution is adopted.
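The link between deduplication and storage sizing can be made concrete with a back-of-the-envelope estimate. This is a rough sizing sketch under simplified assumptions (one full plus daily incrementals, a uniform change rate, an assumed dedup ratio); all figures are illustrative.

```python
# Hypothetical sizing sketch: estimate repository capacity needed for a
# retention window, with and without deduplication. Figures are illustrative.

def repo_size_tb(full_tb, daily_change_rate, retention_days, dedup_ratio=1.0):
    """One full copy plus daily incrementals over the retention window,
    reduced by an assumed deduplication ratio (e.g. 4.0 means 4:1)."""
    raw = full_tb + full_tb * daily_change_rate * retention_days
    return raw / dedup_ratio

# 100 TB protected, 2% daily change, 30-day retention
without_dedup = repo_size_tb(100, 0.02, 30)                 # 160.0 TB
with_dedup = repo_size_tb(100, 0.02, 30, dedup_ratio=4.0)   # 40.0 TB
print(without_dedup, with_dedup)
```

Real dedup ratios vary widely with data type, so any such estimate should be validated against a vendor's sizing tool before purchasing storage.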
Freelance Web Developer, Ecommerce Developer at a tech services company
Real User
Sep 19, 2019
The backup software should offer:
- Flexible licensing.
- Comparison of the original to the backup (data integrity)
- Granular access
- Unattended backup to multiple locations
- Built-in malware checks (ability to check patterns)
- Data security – very important; any change in the size of the archive must be reported, including transfers and deletions.
- Continuity and support.
Commercial Manager at a computer software company with 11-50 employees
User
Sep 18, 2019
A complete, easy-to-manage, easy-to-deploy business continuity/disaster recovery solution, with backup verification, integration with different environments, and security.
Senior Consulting Project Manager with 11-50 employees
User
Sep 18, 2019
Hello Ariel, nice to meet you.
My concern on this subject is that Time to Recover must be part of the equation. No matter how long you have been recording your information, and no matter where you store it, everything must be recoverable as fast as possible. Nowadays there is no room for delays, and your business may fail due to a long outage.
As a result, the faster your selected tool can recover, the faster your business returns to the game.
That's it.
My best wishes to the IT Central Station community.
Sr. Product Marketing Manager at a tech services company with 1,001-5,000 employees
Real User
Sep 18, 2019
Does it meet your service-level objectives for backup windows, recovery point objectives, recovery time objectives, and retention?
- Across all the different types of systems and data in your environment (operating systems, applications, virtual machines, etc.)
- Against all the different things that can go wrong (human error / malicious behavior, system failure, site outage, regional outage, etc.)
If you have a complex environment, you'll want a solution that can manage a range of policies specific to the value of each data type to help reduce overall costs. You don't want to pay for a tier 1 solution for tier 3 data.
Responsabile Data Management DC Area Nord Ovest at a tech services company with 501-1,000 employees
Real User
Sep 18, 2019
I think the most important aspect when evaluating backup software is to start from the needs. What are the architectures, applications, servers, and databases to restore (not back up)? And who are the people that will be responsible for the backup service (their skills, etc.)?
Then you can put all the features in the right order.
1. Automating the backup process
2. Capturing all the data, compressing it, deduplication and saving only the new files
3. Fast recovery of data AND ability to prioritize specific data to recover based on user's needs
4. Business continuity - keep the business flowing even during the event of a cyber attack, system failure, ransomware attack, etc.
Follow these 2 simple statements and you will be able to narrow down the field to a handful of vendors: 1) Choose a vendor that deals in business backup and business backup ONLY. 2) The bitterness of poor quality remains long after the sweetness of low price is forgotten.
Technical Presales Consultant/ Engineer at Ingram Micro
MSP
Sep 18, 2019
The most important aspect to look for in backup and recovery software is its ability and flexibility to help achieve the best-practice 3-2-1 rule: 3 backup copies, 2 different media, 1 offsite/offline.
An all-in-one solution for VMware, Hyper-V, bare metal, cloud, replicas, application-aware processing, CBT, dedupe, tape backup, etc. Free for small offices, and commercial with affordable pricing for bigger environments. For example, Veeam Community Edition.
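The 3-2-1 rule mentioned above is mechanical enough to verify in code. A minimal sketch follows; the plan structure and media names are hypothetical, just to make the three conditions explicit.

```python
# Hypothetical sketch: verify a backup plan against the 3-2-1 rule:
# at least 3 copies, on at least 2 different media, with at least 1 offsite.
# The plan structure and media names are illustrative.

def satisfies_3_2_1(copies):
    """copies: list of dicts with 'medium' and 'offsite' keys."""
    total = len(copies)
    media_types = {c["medium"] for c in copies}
    offsite = sum(1 for c in copies if c["offsite"])
    return total >= 3 and len(media_types) >= 2 and offsite >= 1

plan = [
    {"medium": "disk",  "offsite": False},  # primary backup repository
    {"medium": "tape",  "offsite": False},  # second medium, same site
    {"medium": "cloud", "offsite": True},   # offsite (ideally immutable) copy
]
print(satisfies_3_2_1(plan))  # True
```

A single on-site disk copy fails all three conditions at once, which is exactly why the rule is a useful first screen.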
Sales Director Benelux bij Bacula Systems SA at Bacula Systems
Vendor
Feb 13, 2018
This must be it:
- Backup & Restore Reliability and Data Integrity
- Ease of implementation and user interface
- Cloud Integration
- DR capabilities
- Compliance reporting
- Role Based Access Management
- Deduplication
The obvious top answer is the reliability of restores.
After that there are many important factors: ease of use and maintainability, block-level backup and deduplication, encryption, flexibility in retention policies, backup sets, and access policies, performance, cost... the list can go on and on.
Senior Systems Engineer at a sports company with 51-200 employees
Real User
Jun 7, 2017
In order of importance:
- Restore Reliability and Data Integrity
- Ease of implementation and user interface
- Cloud Integration
- DR capabilities
- Compliance reporting
IT Supporter and Software developer with 51-200 employees
Vendor
Feb 9, 2017
- Can a backup resume after it suffers a catastrophic failure?
- Can a backup be split into parts by content for remote duplication?
- Can the backup application support multiple OSes?
- Ease of setup and continued use (and also how easy it is to get it off your systems)
- Useful and easy reporting
- Speed of backup
- Multiple recovery options
- Cost
Owner at a tech services company with 51-200 employees
Consultant
Mar 10, 2016
- clear consolidated reports to user groups (if and when was the last backup 100% successful)
- easy to deploy, use, and manage
- incremental forever if storing to the cloud and protecting large files / databases
- long term archiving / retention options (never purge deleted files, etc.)
- recovery time and recovery point - both less than an hour
- low system resources, reliable
Technical Services Specialist at IBM India Private Limited
MSP
Mar 1, 2016
Check the points below:
* Enough bandwidth at both ends (client and server)
* Set backup jobs outside production hours.
* Set alerts for disk space and backup job success/failure.
* Randomly test restores of backups every 15 days.
* Exclude video/ISO/MP3 files from the backup schedule.
Meets and supports the max and min configurations mentioned in the guides
Admin and other related docs are precise
RTOs and RPOs are defined properly
Is a popular solution in the market
Has strong tech support
Many blogs and whitepapers are written about the solution
Cost, scalability, and performance meet the standards
DR support and cluster-centric.
Supports HA and DRS.
Meets SLAs for resolving problems
Technical Sales Advisor/Deploy Agent at a tech services company
Real User
Jan 8, 2016
Support for the backup software; easy to use; an easy admin console; easy to deploy; and it must be a complete product that covers physical and virtual environments.
- Meeting SLAs in terms of RTO and RPO
- Short Backup window
- Recovery time near 0
- Deduplication
- DR backup Copy
- Virtual and Physical machines
- Costs
- Reliability and stability
- Ease of use
- uncomplicated processes
- quick and efficient backup and restore process
- user friendly
- easy search mechanism for files
- cost efficiency
- stability
1) How comfortable the backup software is to maintain in enterprise setups
2) Should meet RPO/RTO
3) Should be able to address customers' pain points, especially backup performance, space reduction, and high availability of data
4) Disaster recovery plan
5) Other advanced features like deduplication, replication, cloud, virtualization, etc.
Intuitive GUI
Quick setup
Vast compatibility of storage devices
Ability to backup to Mapped Network Drives
Most importantly... the time to restore whether it's one file or a complete image.
A backup solution is only as good as the ability to quickly recover from any disaster.
Sr. Pre-Sales Technical Consultant- Big Data Software at Hewlett Packard Enterprise
Vendor
Nov 13, 2015
Dependable
Ease of use, don't need a lot of fancy bling just a clean interface
Application integration
Ease of recovery of entire machine or single file
Single deduplication solution from end to end including target hardware
Reporting
Low hardware footprint
Quick recovery
Low resource consumption
Role based assignment
Bare metal recovery capability
VM support (optional)
Centralized management (A Must)
Support for UNC Paths and multiple targets
1) Reliability and stability
2) Support for both virtual and physical servers
3) Better than average support for Exchange and Active Directory
4) Good support for both disk based and tape library backup
Storage and Backup Engineer at Fidelity Investments
Real User
Oct 25, 2015
1) Quick restore options
2) Disk backup would be a better option than SAN.
3) Easy to install and configure
4) Good support for backend hardware, be it tape libraries, tape drives, media, etc.
It depends on what you need:
restores in a very short time, backups with zero disruption, or easy administration with a very short learning curve.
But finally, I think our most important criterion is the availability of reliable technical information.
CTO Infrastructure, Technical Support, & Operations at a healthcare company with 1,001-5,000 employees
Vendor
Sep 30, 2015
The most important thing to us is that the application must have an easy way to recover the backups. A good approach is to have an environment inside the application, or something similar, to test the backups. Another is to have clear steps for building a DRP, and not only a manual.
Another good thing is to have clear item selection for backup.
Backup to disk, D2D support.
Business Development Manager at a tech vendor with 51-200 employees
Vendor
Sep 18, 2015
The solution meets your Business's requirements specifically in terms of data security, data retention, speed of restoration when you need data back and confirmation that your data will be destroyed when you terminate the contract.
Systems Engineer at a engineering company with 501-1,000 employees
Real User
Aug 27, 2015
The most important thing is that a backup can be restored if an incident happens. If a backup cannot be restored, then it's useless.
To that end, it must at least have the following:
- Follows the 3-2-1 rule for backup
- Meets the organization's RPO/RTO
- Validates backup integrity
- Supports various backup formats
- Is extensible via API or third-party modules
Channel Development - Business Continuity at a tech services company with 501-1,000 employees
Consultant
Aug 20, 2015
Thoughts:
- When was the last time you completed a backup restore test?
- How current are your RTO and RPO targets?
- How current are your identified BU&R data sets?
- Where do you store your DR and BC plans, and who can access them, and how?
- Empower the user
- Explain the difference between backup (process-based) and recovery (task-based)
- How many BU&R applications do you need to cover your whole business?
- It is the company's responsibility to provide BU&R for data residing in the cloud.
Compatibility with your company's software, such as Oracle; without this you may find you don't really have a backup. A feature set that meets your requirements. Compression has proven to be another important feature as your data grows. Bare-metal restore is a must, and universal restore for driver replacement is genius. Speed and throughput are important but rely heavily on your network's capability, so quoted numbers may not reflect what you are actually capable of.
Bid Management / Consultancy / Operations / Service & Delivery coördination at a transportation company with 1,001-5,000 employees
Vendor
Aug 13, 2015
- Interoperability and coverage with/of the used environments
- Stability
- Ease of use in case of emergency
- The organization behind it
- License pricing
Easy to find and restore data. Recovery situations are sometimes stressful: the backup administrator may be offline and have to talk someone else through the procedure, and the exact backup client, date, or share name may not be known. A simple interface and fast catalog browsing are essential. An efficient, simple search index is an excellent add-on feature if you can get it. You won't appreciate this until the day you really need it.
System Engineer at a tech services company with 51-200 employees
Consultant
Jun 26, 2015
- Stable backup for critical applications and guaranteed recovery.
- Fast and reliable recovery.
- Ease of restore to different hardware and OS levels.
- Small footprint in the case of hourly backups.
Backup and restore time.
TCO/ROI (via dedupe capabilities, scalability, etc.)
Data security
Seamless integration into existing as well as future infrastructure/applications.
Good vendor support
Senior IT Infrastructure Analyst at a financial services firm with 501-1,000 employees
Vendor
May 29, 2015
- Fast recovery
- No impact on production when backups occur
- VMware integration
- Storage integration
- Multiple restore options
- Software-aware integration
Does it meet my RPO and RTO? Does it support backing up my apps in an app-/data-consistent way? Does it support the OSes in my environment? Does it support the virtualization technology I am using, and leverage its APIs to offload processing?
Storage specialist, Infrastructure Architect at Atea
Real User
May 6, 2015
Meeting SLA levels and guaranteeing recoverability within RPO/RTO.
Using systems that can scale and cope with future business within the expected lifetime of the system.
Designing a solution that is easy to maintain and support.
The people who maintain and run the backup should have clear responsibilities, and documentation should always exist.
Senior Network Engineer at a tech services company
Consultant
Feb 15, 2015
- Backup speed: how long does it take to make a backup?
- Restore speed
- Restore complexity
- Disaster recovery options
- Virtual machine capabilities
- Is restoring individual items (emails, calendar items, ...) easy?
- Is it possible to back up online content like Microsoft 365 files?
- Latency of data transfer, mainly for remote and long-distance backups;
- Performance of restores and recoveries;
- Retention policies;
- Data recovery policies;
- Service availability (24/7);
- And last but not least: the budget.
Data backup involves copying and moving data from its primary location to a secondary location from which it can later be retrieved in case the primary data storage location experiences some kind of failure or disaster.
Backup Systems should be lightweight, secure (immutable), reliable, and require little to no Maintenance with excellent reporting for execs and engineers. The skill level to operate and deploy should be Google-like in its simplicity. Once data is protected, it should be locked and prevented from change and be able to be self-validated with 100% accuracy.
When evaluating backup and recovery software, one of the most crucial aspects to look for is reliability and efficiency in data protection. The software should be able to consistently and accurately back up your critical data, ensuring that no information is lost in the event of hardware failure, data corruption, or other disasters.
Another key consideration is the ease of use and user-friendliness of the software. A good backup and recovery solution should have an intuitive interface, making it simple for IT administrators to set up backup schedules, perform data recoveries, and manage the entire backup process without unnecessary complications.
Moreover, the speed of backup and recovery operations is of paramount importance. The software should be able to perform backups quickly without causing significant performance impacts on your systems, and it should also facilitate rapid data recovery to minimize downtime and ensure business continuity.
Additionally, scalability is a crucial factor, especially for growing organizations. The software should have the flexibility to handle increasing data volumes and infrastructure expansion without requiring a complete overhaul of your backup architecture.
Data security is another critical consideration. The software should offer robust encryption options to protect your sensitive information during both transit and storage. It should also comply with industry standards and regulations to ensure your data remains compliant and secure.
The ease of creating immutable backups, with the ability to manage different backup schemes from a single console.
Key features to consider include:
Data Integrity ensures that backups are accurate and uncorrupted, preserving continuity. Scalability allows the solution to grow with data needs, accommodating increasing amounts of data without degrading performance. Security Protocols are essential to protect sensitive information during the backup and recovery processes, implementing encryption and secure access controls.
Ease of Use highlights the importance of an intuitive design, ensuring that operators can quickly understand and manage the system efficiently. Support and Maintenance reflect the need for robust customer service and ongoing updates, preventing potential issues and keeping the solution optimized for performance. Each of these features contributes to a comprehensive and reliable Backup and Recovery solution.
There are several things to consider:
The flexibility to fulfill requirements:
- Recovery Time Objectives (RTO: how to fulfill the business's different requirements for restoring data, i.e. "how long can the business live without the data")
- Recovery Point Objectives (RPO: how to fulfill the business's different requirements for how much data may be lost in different incidents)
- Backup Time Objectives (BTO: how efficient the solutions are at protecting the data)
- Resource utilization (how cost-efficient the solutions are with resource utilization): inline/post-process data reduction, progressive incremental forever with/without rebuilding base data
- Maintenance tasks on the solution (data retention management), protecting the solution, upgrading offline/online, ...
- Support from vendor
- Price of the solution
- Limitation of licenses, gentlemen's agreement, or hard limits
- The ability to use different retention policies, exclude content, use different storages, extra copies, etc.
- Security of the solution
Philosophy: Why back up data again if the data has not been changed?
The fastest way to protect data is to not back it up (again)
Progressive incremental forever (Always incremental)
Philosophy: Why restore all data if you can restore only the data needed?
Instant recovery or restoring single objects
Integrating the backup process with applications such as PostgreSQL, Oracle etc, so that the archive logs / WAL logs, etc will be protected immediately when it is created will improve the RPO. This can be done using SPFS - a filesystem for Spectrum Protect.
Taking application-consistent snapshots stored on Spectrum Protect storage using efficient data transfer (progressive block-level incremental forever), reduces the time to take backups, and saves resources on the backup server and the server protected. This can be done using SPFS - Instant Recovery for Spectrum Protect
Restoring only what is needed, can be performed by native backup software such as Spectrum Protect. Provisioning an application-consistent snapshot to a server and accessing the data while the restoration is performed in the background can be done using SPIR - Instant Recovery for Spectrum Protect. This helps clients to access data directly to select the data that is needed to copy to the origin or use as production data directly.
Spictera Unified Storage is immutable storage with an agentless approach, designed for simplicity, security, and flexibility. With this solution one can protect any device, anywhere, using any media. All of it is easily managed centrally, with filtering (include/exclude rules), versioning, retention management, replication, data reduction in transit and at rest, encryption in use, in transit, and at rest, tiering, and many more features. Access is via file, bucket/object storage/S3, or VTL (Virtual Tape Library), with snapshots and instant restores. This is probably the only climate-smart, energy-efficient, green-IT solution on the market that helps reduce CO2 emissions. www.spictera.com
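The "progressive incremental forever" philosophy above (never back up unchanged data again) can be sketched in a few lines. This is not Spectrum Protect's actual implementation, just the principle, using a hypothetical size-and-mtime comparison against the last recorded state.

```python
# Hypothetical sketch of progressive incremental forever: back up a file
# only if it changed since the last recorded state. Real products track
# state in a server-side catalog; this uses a plain dict for illustration.

def plan_backup(current, last_state):
    """current/last_state: {path: (size, mtime)}. Return paths to copy."""
    return [path for path, meta in current.items()
            if last_state.get(path) != meta]

last_state = {"/etc/app.conf": (1024, 1700000000),
              "/var/db/data":  (50_000_000, 1700000000)}
current = {"/etc/app.conf": (1024, 1700000000),       # unchanged -> skipped
           "/var/db/data":  (50_100_000, 1700090000),  # changed -> copied
           "/var/db/new":   (2048, 1700090000)}        # new -> copied
print(plan_backup(current, last_state))  # ['/var/db/data', '/var/db/new']
```

Production implementations also track deletions and use checksums rather than trusting timestamps alone, but the saving is the same: only the changed subset moves over the wire.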
There are several aspects:
1) The frequency with which you need the files, folders, and/or servers in question to be backed up, since this frequency is in theory your closest and farthest recovery point at the same time.
Example 1: If you define it as every four hours, then in case of a problem you will be able to recover what you backed up up to four hours ago.
2) The estimated size of what you need to back up vs. the time it takes to back it up.
Example 2: If you are going to back up 300 GB every four hours and the process takes 8 hrs. (because your information is sent to an MDF site mirror over an internet link or similar), then you will not be able to back up every 4 hours; you will have to do it every 8 or 9 hrs.
Example 3: If you are going to back up 50 GB every four hours and the process takes 1 hr. (because you send your information to an MDF site mirror over an internet link or similar), then you will have no problem when the next backup is due within 4 hours.
3) The application's ability to schedule (in sequence and/or in parallel) what you need to back up.
Example 4: Suppose some files, folders, and/or servers need to be backed up every 4 hrs., others every 12 hrs., others every 24 hrs., and others maybe every week. In this case you have to estimate the worst-case scenario very well: when the scheduled jobs coincide, the sum of what is being backed up slows the process, and you must ensure that the following scheduled backups still run without setbacks.
4) The flexibility of the application in executing incremental or full backups.
Example 5: Here it is about knowing what the application will do if a backup fails. Does the incremental part that did not get backed up start again from scratch? Does it leave a process restart point, and if so, how reliable is it? Will it force you to make a FULL backup that will no longer take 4 hrs. but 24 hrs. or more, so that your schedule has to be redrawn?
5) While it is true that restoration is the most important part, prior to this you must ensure that everything that should be backed up is properly backed up.
In these respects, www.datto.com is what worked best for us.
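The backup-window arithmetic in the examples above reduces to one rule: a schedule is only feasible if the job finishes before the next run is due. A minimal sketch, with an illustrative throughput figure chosen to reproduce the 300 GB / 8 h case:

```python
# Hypothetical sketch of backup-window feasibility: a backup frequency
# works only if the job completes within the scheduling interval.
# The throughput figure is illustrative.

def backup_hours(size_gb, throughput_gb_per_hr):
    """Time to move the data at the given effective throughput."""
    return size_gb / throughput_gb_per_hr

def feasible(size_gb, throughput_gb_per_hr, interval_hours):
    return backup_hours(size_gb, throughput_gb_per_hr) <= interval_hours

# Like Example 2: 300 GB over a link moving ~37.5 GB/h takes 8 h,
# so a 4-hour schedule is not feasible.
print(feasible(300, 37.5, 4))   # False
# Like Example 3: 50 GB at 50 GB/h takes 1 h, so every 4 hours is fine.
print(feasible(50, 50, 4))      # True
```

Effective throughput (after compression, dedup, and link contention) is the number to measure, not the raw link speed.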
1. Data integrity
(e.g., fast recovery capability, a high restore success rate, and the ability to automatically verify or open the data to be restored, to quickly confirm which backup data can be restored well).
2. Data availability
(e.g., the ability to complete backups successfully within the backup window).
3. Integration with the rest of the infrastructure
(e.g., automation, and the ability to create scripts when backing up, restoring, or syncing data).
4. Ease of use
(for example, an interface where necessary functions are easy to find, with steps arranged in a process sequence).
5. Confidentiality, data encryption, and data protection.
6. Ability to comply with standards such as the General Data Protection Regulation (GDPR), centralized data management, uniform data control, and access to backed-up data via token or USB smart card.
@Thang Le Toan (Victory Lee), look at progressive incremental forever techniques. Philosophy: why back up data again if the data has not changed?
The fastest way to back up data is to not back it up (again).
Excluding content is also something to check.
Progressive incremental forever also helps with restoration, as only one backup is needed to restore the data (no need to restore a full backup plus all its incremental backups).
IBM Spectrum Protect has these features, as does SPIR, instant recovery for Spectrum Protect.
The most important aspect is the time for the backup and restore to finish, and of course how easy it is to configure schedules, rules, policies, etc.
How is it supported? Are problems resolved by correcting issues, or do you have to wait for a new version or patch?
I distinctly prefer TSM/SP since, like my favorite tools, it is a tool (that requires understanding and higher-level thinking to properly configure; it is not in any way shrink-wrap software) and is limited primarily by imagination as opposed to product limitations.
The system should be smart and use few resources.
- Why take periodic full copies if the data has not changed?
The system should be able to mix different media types.
- Why power storage if the storage is not being used?
The system should allow replication copies.
- If required, store a backup or archive copy in different locations
A system that uses replication should be able to operate without the replicas needing knowledge of each other.
- metadata about backups or archives should be kept in all replication locations
Data reduction techniques should be built in.
- possible directly on the client or on the server, or a combination of both
Encryption of storage or backup/archive.
- using private keys, OS hardware, or the storage pools
Easy to customize with policies
- What data to filter in or filter out
- Which media to use
- What retention to use, or versions
- How many copies
- What to encrypt
Are agent installations needed?
- How easy are they to use?
- How flexible are they?
- What techniques do they use? (Open-source databases have many different techniques, such as pg_dump/pg_basebackup/pg_probackup to protect PostgreSQL.)
- Can the techniques be changed?
- If agentless, how does that work with transactional data? How do they access the database data?
The ability to restore
- individual object restores (single files, tables..)
- redirect restore to new place on same server
- redirect restore to new server
- instant restore (is the data available immediately, or do you have to wait?)
Air gap protection / cyber and ransomware protection
- the ability to protect data from being destroyed
We found the Spictera solutions interesting, as they can mount the backup storage directly as a local drive letter or filesystem.
This makes traditional backup easier, as almost all applications can protect data to a directory path.
And users do not need to learn how to use an agent, as they already know how, or can follow vendor-specific instructions to back up and restore the data.
It is easy to browse or copy existing backup copies if required; everyone knows how to use a filesystem, right?
The data stored on the filesystem is protected against ransomware.
Spictera also has a solution to take application-consistent snapshots, storing the backups using progressive block-level incremental forever (always incremental) techniques on the IBM Spectrum Protect backup server.
This reduces the resources needed to take backups, and also for restores, as the snapshot backups are reanimated as a local snapshot disk on the server, with a nearly instant restore to the same or a new location on the same or a new server.
- Why wait for the data to be restored if one can use it immediately?
Thanks for reading
When deploying backup solutions we look at features that work the way we expect them to.
Data should be deduplicated to retain quick efficient backups while actually being able to restore without issue. Restoring databases, mailboxes, and domain controllers is particularly difficult for some well-known vendors. We have observed many instances of potential clients having failed restores with "successful" backups. So, having reliable restores is a must. Test often!
Backups must be flexible to meet customer needs with custom retention times while providing quick restore options.
The UI must be easy to use or mistakes will be made during the configuration of backup jobs.
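"Test often!" can be partially automated. A hedged sketch of restore verification by content hash; the directory names and approach are illustrative, not any vendor's verification feature:

```python
import hashlib
import os

def file_sha256(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir, restored_dir):
    """Compare every file in source_dir against restored_dir by content hash.

    Returns a list of relative paths that are missing or differ. A
    "successful" backup job means little until a restore passes a check
    like this.
    """
    problems = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            restored = os.path.join(restored_dir, rel)
            if not os.path.exists(restored):
                problems.append(rel)   # file never made it back
            elif file_sha256(src) != file_sha256(restored):
                problems.append(rel)   # content differs: corruption
    return problems
```

Running something like this on a scheduled test restore is what turns "the job reported success" into "we know we can recover".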
RPO and RTO
Exactly, according to what is mentioned, I would add the order of priority that I would give them from my experience (and of course to your best consideration).
The window from the last backup you have until the moment you apply the contingency is your RPO (Recovery Point Objective), and from applying the contingency until you restore the data is your RTO (Recovery Time Objective).
It seems to me that the RTO deserves more attention because it is always the longest part of the process; if we view recovery as a critical path, the RTO is that critical path.
@Raul Garcia Yes, if RTO correctly includes the time between when the incident happened and when everything is recovered. Some miss the time between "when it really happened" and instead start counting from when someone executes the restoration.
Also, a lot of people and vendors stop measuring recovery time when the data is back, missing the part where humans need to figure out which pieces are missing so that data can be re-entered into the system.
That remaining part can be very time-consuming.
So RPO is also very important. Sometimes it can be better to restore to an older state than a newer one, because we are human.
A person may find it hard to remember what was done in the last hour, but easier to remember what was done up to the lunch break.
A system that can integrate with databases, securing the transactional data immediately, helps protect your data faster, with less data loss and an improved RPO.
One example is the SPFS way of storing data immediately on a Spectrum Protect server.
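The distinction above between measured and claimed recovery times can be made concrete with a small calculation; the timestamps are purely illustrative:

```python
from datetime import datetime

def recovery_metrics(last_backup, incident, service_restored):
    """Compute achieved RPO and RTO (in minutes) from an incident timeline.

    Achieved RPO = incident time minus last usable backup (data lost);
    achieved RTO = full restoration time minus incident time, counted
    from when the incident really happened, not from when someone
    started the restore.
    """
    rpo = (incident - last_backup).total_seconds() / 60
    rto = (service_restored - incident).total_seconds() / 60
    return rpo, rto

# Example: backup at 12:00, incident at 13:00, service back at 15:00.
rpo, rto = recovery_metrics(
    datetime(2024, 1, 1, 12, 0),
    datetime(2024, 1, 1, 13, 0),
    datetime(2024, 1, 1, 15, 0),
)
```

Here the achieved RPO is 60 minutes and the achieved RTO is 120 minutes; a vendor counting only from the start of the restore would report a smaller, misleading RTO.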
The most important thing is the speed and accuracy and flexibility of the recovery process.
It is really "Recovery" software, not "Backup" software. So it is the recovery features that are paramount.
Recovery considerations:
o What type of data do you need to easily recover:
- Single files from VMware images
- Entire VMware systems
- Needed OS Support: Windows, AIX, Linux
- Exchange: Database, single mailbox, single message
- SharePoint: Database or document or share
o Administration
- You don't want to have to have a dedicated "Backup administrator".
+ Should be a quiet, reliable, background operation
+ Avoid solutions that depend on a Windows host that has to be maintained and patched
+ Avoid solutions that require expensive skill sets such as UCS/VMWare/Linux/Windows/AIX etc
+ Upgrades should be painless and done by support not customer
- Hoopless recovery
+ Should be so simple the helpdesk could perform recoveries
+ Self serve sounds good and could be a plus but experience has shown me that they call the help desk anyway
- Self-monitoring
+ Space issues, problems, and such should alert
+ Success should be silent
+ Should not have to check on it every day
o Quick recovery
- Recovery operations usually happen with someone waiting and are usually an interruption to your real work.
- You want it to recover while you are watching so you can tell the customer it is done and get back to your real job
Backup Considerations
o System Resources
- Should be kind to network
+ Incremental backups based on changed blocks not changed files
+ Should automatically throttle itself
- Memory and CPU utilization
+ Backing up should be a background process that does not require much CPU or Memory on the host being backed up
+ Primarily this is for client-based backups
- VMWare considerations
+ If using VMware snapshots, consider the CPU and I/O load on ESX servers... you may need more
o Replication
- If the backup host is local, a remote replica should be maintained at a remote location
- Replications should not monopolize WAN links
- Recovery should be easily accomplished from either location without jumping through hoops
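The "changed blocks, not changed files" point above can be sketched with fixed-size block hashing. A minimal illustration; real products use tuned block sizes and server-side catalogs, and the 4 KiB size here is an assumption:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real products tune this

def changed_blocks(path, previous_hashes):
    """Return (changed, hashes): which fixed-size blocks of a file differ
    from the hashes recorded at the last backup.

    This is the core of block-level incrementals: only blocks whose hash
    changed need to cross the network, which is far kinder to WAN links
    than re-sending whole changed files.
    """
    changed = []
    hashes = []
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            hashes.append(digest)
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                changed.append(index)
            index += 1
    return changed, hashes
```

On the first run every block is "changed" (a full backup); on later runs only the blocks touched since the recorded hashes need to be sent.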
Random thoughts
o It is OK to go with a new company as opposed to an established one
- Generally speaking, a backup/recovery solution has a capital life span of about 3 to 5 years
- Generally speaking, moving to a new backup solution is fairly straightforward
+ Start backing up to the new solution and let the old solution age out
- So, it is OK to look at non-incumbent solutions
o When replacing an "incumbent", vendors will often give deep discounts
o Don't feel like you are stuck if an upgrade path looks like you are having to buy a solution all over again
o Do a real POC. It does not really take that much time or effort... or shouldn't. If it does, it is not the right solution
o At the end of the day, if it successfully and reliably backs up and recovers your data, it is a working solution
@PaulLemmons Yes, backing up data smartly does not mean that one can restore it smartly. E.g., a VM image backup includes more data than needed, and it might be problematic to find and restore an individual object (a file).
Mixing different architectures is also something to consider. We are using mainframes, hypervisors (KVM/VMware/Xen), Windows, Linux, commercial Unix, BSD, iSeries, and Digital's VMS.
Some have requirements for vaulting offline media such as tape,
and it is good if the software can handle these too.
For this we selected IBM Spectrum Protect.
To improve RPO, SPFS can be used to back up transaction logs immediately using Spectrum Protect.
It depends on your operations structure; however, in all cases, a solution that can reliably back up your targeted data within your time window, and restore that data in a timeline that meets your business needs, is most important. If it can't do that task, it doesn't matter what it costs, how easy it is to integrate, or how intuitive the UI is.
There are two questions here, really. One is technical, and the other is political.
So often, over the years, I have found that the political one is the hardest and the one that tends to have more sway. I have seen, so often, that companies will have global standards, and yet someone always seems to find a way to break those standards and do what they want... and this is the basis of the rise of the new data protection companies.
Once upon a time, there were mainframes, and it was easy. Then we had distributed systems and this is where fragmentation started. I personally had to unify a data protection infrastructure that had 13 different OS' and 5 different data protection products. Just as I did that, that company started a different business unit... and they chose a different data protection product.
Then we got virtualisation, and the teams that ran that environment often ran as a separate unit, and so chose their own backup product... which tended to be new products because they concentrated on just that one single platform. This enabled them to be focused and, arguably, deliver a better solution... for that one platform.
Now we are seeing a plethora of solutions that are coming up and their concentration is cloud providers. Even AWS is getting in the game with a solution, but concentrating on their cloud. This is the new battle ground.
Technically, you can choose one solution. That solution must:
1) Guarantee restores
2) Back up within the required backup window
3) Cover traditional enterprise (which matters less and less), virtual/HCI, and cloud
4) Enable you to put that data wherever you need it so that the restore can happen within the desired window
5) Be low cost to run; that is infrastructure, software, facilities, and people cost, not just software
6) Be scalable
Above all of this, though, is that a company needs the political will to force errant departments/people to bend to the corporate decision. Without that, the company will always be fragmented, will never be able to get the best deal it can from whomever the vendor is, and will always waste time fighting off encroachments from other vendors.
There are a ton of great answers below. They highlight all the characteristics of a good backup solution and those characteristics are important. For me, the ability to restore successfully is the one key characteristic. Imagine a 100% secure, easy to use, centralized, deduplicated, inexpensive, fast backup solution that, when you go to restore from it does not work. Does it matter that it is fast and cheap? Does it matter if it is centralized or deduplicated? Not in my view. The key is the ability to restore, and everything else is specific to your needs.
The most important things are RPO & RTO. Also, I prefer the same solution to support physical, virtual, and cloud workloads, all with one GUI management.
The primary features that any Backup solution needs to provide are :
1. Ease of deployment
2. Clarity of licensing and support from a commercial point.
3. Ease of restoration on file and server level.
4. Ability to store backups offline to prevent corruption in the event of a security breach.
5. The speed that these backups can be accessed and deployed needs to be documented for Business and Operations.
Once these points are covered, the other features are nice to have but not essential.
- Check for the reliability of backup software
- What OS needs to backup
- Physical or virtual or both
- What kind of infrastructure is setup
- How big the environment is
- How many servers every night and the growth rate
- What the RPO and RTO is for recovery
- Any cluster backup for any OS
- Ease of use
- Expertise in the OS. The backup admin should have in-depth knowledge of the OS.
The success of recovery: "You're only as good as your last backup."
Ease of the recovery process, speed of the backup, offsite replication options (DR plan), scalability.
Support from Vendor
Primarily RTO and RPO
Backup performance with respect to speed and the deduplication mechanism.
An in-depth understanding of the backup process flow.
Ease of access/use/administration.
Great scope of automation & reporting on management console.
Recovery reliability and performance.
- Data Integrity (e.g. ability to restore / restore success rate)
- Data Availability (e.g. ability to successfully backup within backup window)
- Integration with the rest of the infrastructure (e.g. automation, scripting capabilities)
- Ease of use
- Data Security
"How will this product help us better meet the associated business requirements such as storage requirements (local and DR), data retention requirements (both internal and regulatory), and security?" is the first question I ask myself...
Then the Basics:
-------------------------------
- Deduplication / Compression / Encryption REQUIRED
- Restore Throughput for different types of data (File System, Virtual, SQL/Oracle, etc.)
- Reporting (not only backup/restore metrics, but overall health of the environment) & Custom Reports
- Automation Automation Automation
- Centralization
- Capacity Based Licensing
- How is the Technical Support?
- SLAs, RPO/RTO - how will this be affected?
Speed to backup and restore
Tech support availability should something go wrong. Especially off hours times.
Size of storage to fit your backups
Ease of management
1. It should offer seamless integration with different operating systems and virtualization technologies.
2. Less administration after initial setup.
3. Easy and fast recovery of backup.
- Short backup windows
- Cloud ready
- Ease of use
- Fast and easy recovery
- Low data transfer latency, especially for remote backups
- Performance ....
I feel the top three priorities for a backup solution are...
1. Recovery
2. Recovery
3. Recovery
All the KPIs, must-haves, should-haves, and nice-to-haves are important, but without good, reliable, and tested recovery they mean very little when having to explain "There is nothing we can do to recover" to the CIO or CFO or Lenny in accounting.
Backup = Recovery
The solution must be simple to manage, developed by a well-known organization, and have top reviews in the IT industry.
- Backup window
- Deduplication
- LAN/WAN backup
- Restore time
- Virtual and Physical machines
- Granularity
Backup time
Restore time
Reporting
I've installed a lot of different backup software for different customers, and from my personal point of view the right one is the one that best fits the customer's needs. You need to keep one eye on the current environment and another on the forecast growth in terms of data, technology, and budget.
From this point of view, it is always better to go with one of the main players, since you will always have support, development, and a good portfolio of products. I've also found it very useful to ask my backup vendor's pre-sales team about specific solutions. If I have constraints, it is always better to specify them. Also, have your vendor deliver training on the solution before the implementation, not after. This will help you clarify points that may be in a grey area so they can be fixed/implemented during the installation and configuration of the product.
No single product will do the magic for you; you need to specify what you want and ask for it.
- Is it a resource-intensive solution?
- Data security
- Backup and restore time
- Usability (i.e., centrally managed)
- Cost to purchase and maintain
- Reliability to capture all data
- Speed of recovery
- Speed of backup
- No impact on users
- SLA to Backup/Restore Time
- Deduplication Ratio
- Replication by low speed links
- Fault tolerance / no single point of failure
That in itself is a loaded question. Every company has different needs. The first question that needs to be answered is what your needs are: data retention requirements, data importance, recovery time vs. revenue loss, network speeds from production servers to the backup server(s), whether you are required to be HIPAA compliant; those are just a few of the first questions that come to mind. All of that aside, what if your backup option fails? Do you have a backup plan for the production server that you can't restore because your only backup software didn't do its job right, it just failed, and the support technicians can't help you restore it? But that goes full circle to the first question: what are your needs?
- Backup Time
- Restore time
- Deduplication
- Replicate
- DR Capabilities
- Virtual & physical capabilities
- The RTO is vital
- Low data transfer latency, mainly for remote and long-distance backups;
- Performance of restores and recoveries;
- Retention policies;
- Data recovery policies;
- Service Availability (24/7);
- And the last but not the least: The budget.
- Backup time (backup window)
- Restore time
- A solution that reduces backup data size
- A solution that can make a DR backup copy
- The average of daily or weekly changes in your data
- A high level of recovery options
- Storage media type based on corporate needs
- The size & type of the data
Solving my backup problem:
- Short backup windows
- Easy recovery
- Storage Consent
- Role-based access management
Make sure the software functions as advertised... we tried Nakivo and were sorely disappointed; after 30 days they were still unable to make their own software function.
Before getting into the technical aspects, we need the solution to have a quality team, agile service and resolution of incidents and problems, and extensive documentation.
Now, on the technical side, I believe that RPO and RTO rates are paramount to consider.
Hello,
I think the most important thing is to know what your environment looks like and then challenge the tool against it, to ensure you do not multiply solutions in your backup & recovery strategy.
Having too many solutions for the same purpose will give your admin fellows headaches to put in place, and surely even more when you face a disaster to recover from, be it ransomware or a simple DC going down (no matter whether on- or off-premises).
So when it comes to choosing a solution, I'm applying the KIS methodology in regards to teams and infrastructure. KIS stands for "Keep It Simple".
Choose a single vendor all-in-one solution with a Single Management Console that can accommodate all your requirements both now and in the future as you grow.
Starting with basic backup and recovery with user self-service and immutability (now required by cyber insurers), but also consider future needs like can you add direct to cloud backup for Home users workstations later, Backup for SaaS like Microsoft 365, Google workspaces or Salesforce, solutions for AWS, Azure, Cloud VM workloads, DR from the location of your choice (on-prem, second site or DC, or vendor cloud) adding DR (or DR-as-a-Service) when you are ready to take that step as an add-on.
Scalable Enterprise functionality, encryption, deduplication, customer service and pricing. There is only one.
I would really appreciate more targeted questions. And not from Content Directors @ IT Central Station.
Let real users ask their questions. Every environment has different needs and urges.
Let real users speak what their key points to cover are.
There are enough sites out there discussing general questions.
There are many factors you have to put into consideration while selecting a Backup and recovery vendor as below:
1 - Compatibility with the applications, like SAP, databases, SharePoint, Exchange, and various operating systems and versions (Windows client and server, Linux, Solaris, etc.), in physical and virtual environments.
2 - Performance of the backup and restore, and here you have to check compression/deduplication capabilities.
3 - Integration capability with other platforms like Hyper-converged Infrastructure or mainframe, if exists.
4 - Ease of use of the management console, especially if you have more than one environment that you want protected.
5 - Complete data protection SW/HW
6 - Security features like ransomware protection, recovery verifications, etc.
7 - Licensing Model and price
8 - Vendor Support
Throughout my years of specializing in backup and DR, I found the below to be the sanest decision criteria when choosing a backup solution.
1. Where the metadata is stored, i.e. DB or backup storage
2. Backup air-gap/immutability features
3. Flexible backup and restore scenarios
4. Ease of use
5. Simple license scheme
6. Its ability to help achieve the 3-2-1 rule
7. Recovery verifications
8. Compression/deduplication capabilities
9. In-flight and at-rest encryption standards
10. After-sales support
11. Hybrid cloud readiness
12. Compatibility compliance
Data reliability is the most important thing if disaster recovery is ever needed.
Your business needs must be the main driver for selecting your backup and recovery solution. Look at what the needs are and translate them into RPO, RTO, and retention targets, then use these measures to see which solution can achieve them for your workloads.
Ease of use, and recovery time. That's it, not complicated at all.
An enterprise backup and recovery solution must be able to protect ANY workload. It must scale easily as data sets grow and shrink. Management should be autonomous. Once the policies are in place and the source data is identified, the solution should be able to protect, validate, report, and automatically test recovery without the babysitting of an admin. The admin is there to set up policies and watch for alerts.
Hi,
This seems like a "complicated" question.
My preferences are:
1. Reliability that ALL needed files are backed up (with reports)
2. Reliability of RESTORES
All other considerations are far less important.
Thanks
BB
The first things to look at are technology compatibility, ease of implementation, cost, support, and scalability.
One should assess against proper expectations/requirements:
1. Compliance requirements and the options to address them:
a. E.g. GDPR: do you prefer granular restores, do you plan on restoring entire environments to conform to Article 11 for all your data protection processes, or granular restores of historical data (90+ days) using a self-healing portal?
2. SLA requirements for BW, RPO, RTO:
a. Keeping in mind the flash revolution and the significantly higher UBER rate of flash media vs. magnetic disks, the need is for BW=RPO=RTO->0
b. Additional use cases for the DP system:
i. Cloning production environments for UAT, QA, and T&D purposes
ii. Supporting a DR solution using another site/cloud services
3. Assess what new technologies/systems/projects IT would likely incorporate in the next 24 months:
a. HCI, containers, cloud services, and archiving solutions, along with the existing environment and decommissioning plans
b. Intelligent storage with integrated data protection services in HW
c. DR solution compliance
In short, I would rather opt for a highly automated DP solution integrated with HA/DP capabilities, preferably implemented in HW, instead of a sophisticated, feature-rich, and costly one. The main target is to reduce/eliminate maintenance windows while being compliant and having a certified/supported environment/solution.
When evaluating backup and recovery software, what aspect do you think is the most important to look for:
1. The different Applications that are used by the customer
2. Current Network used
3. Current Storage used
4. Customer expectation
5. Customer retention Policy requirements for current data
6. Customer Archiving requirements for old data
7. Data classification
8. Customer SLA to the business
9. Customer RTO and RPO
10. How much the customer wants to spend on Backup Software
The above is what I normally ask the customer, and based on the answers, this determines the best backup software the customer requires or is willing to pay for.
The primary considerations regarding backup are simply, “Is my data protected, available and current?”
Tape meets the first criteria, protection, but fails miserably on the availability and currency metrics – you have to find and recall the tape(s), condition them, load them, then begin the restore process.
Data Deduplication Appliances meet the protection and availability requirements, but typically suffer from poor data currency – you would be restoring relatively ‘stale’ data or simply losing hours if not days of work.
The only solution I have found that meets all 3 requirements is Zerto. It uses hypervisor-based continuous data replication, writing a journaled copy to disk for not only your database but all component VMs in your application. This enables 3 types of immediate recovery with as little as 15 to 30 seconds – seconds, not minutes, not hours – of data at risk:
1. You can recover at the file, volume, or individual VM level
2. You can recover the entire application in a remote site – your DR solution,
3. You can reverse sync the entire application back to the primary site.
Since all Zerto implementations are symmetrical, and since a failover takes only 15-20 minutes to ‘User Ready’, failing over is often the quickest way to restore an application to service.
A few other key points that are inherent with Zerto:
1) No agents to the VM to manage and update
2) No snapshots – ever
3) No proxy servers to drive up costs
4) Point-In-Time Recovery enables you to recover from database corruptions as well as things like ransomware.
5) Older data, say older than 7 days can be automatically migrated to lower-cost storage or public cloud
6) Zerto works virtually identically in 4 modes:
- On-prem for VMware and Hyper-V,
- In a hybrid cloud mode between on-prem and Azure, AWS or other VMware Public Cloud providers (one of several hundred that support Zerto globally).
- In public cloud-resident or cloud-native apps
- Between public clouds, e.g. Azure to AWS
So while this may sound like an advertisement for Zerto, I assure you it is one of the very few applications I have ever seen that does what the vendor claims, out of the box. And when measured against the bar it sets, no other backup solution even comes close. So like my grandfather used to say, "If you're telling the truth, you ain't bragging."
RPO and RTO are the most important things when choosing backup and recovery software, and I usually prefer to secure the data on its own file system during the backup, for security.
I work for a backup vendor so I can't be considered as an unbiased adviser, however, from my experience with our partners they mostly look for ease of use and speed of recovery. When disaster strikes it could cost businesses millions in unwanted downtime. Hope this helps and thank you for the opportunity.
The most important feature of any recovery software is the flexibility of the solution and a simple, easy recovery procedure.
Also, you should consider recovery time: the time taken to recover from a disaster.
The most important thing is: Does the vendor tool meet business RPO and RTO?
Configuration of backups should be simple.
The backup software should support dedupe.
The backup software should support easy recovery options.
Data which is backed up should be reliable.
Most importantly, vendor support should be provided for backup configuration and restoration.
One of the key factors to consider will be the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of the backup solution, depending on the individual corporation's policy on the turnaround time for restoration as well as how current the data should be. Another important factor would be the availability of a deduplication feature, as this has a direct impact on storage sizing for the backup retention period.
The cost of implementation and maintenance will grow as the RPO, RTO, and data capacity requirements grow, regardless of whichever backup and recovery solution is adopted.
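The interplay of retention and deduplication on storage sizing can be put into rough numbers. A back-of-envelope sketch for an incremental-forever scheme; all inputs are assumptions to replace with your own measurements:

```python
def backup_storage_estimate(full_gb, daily_change_rate, retention_days, dedup_ratio):
    """Rough sizing for incremental-forever retention.

    Logical data kept = one full copy plus one increment per retained day;
    physical storage divides that by the deduplication ratio.
    """
    logical_gb = full_gb + full_gb * daily_change_rate * retention_days
    return logical_gb / dedup_ratio

# Example: 10 TB protected, 2% daily change, 30-day retention, 4:1 dedup
# -> (10000 + 10000*0.02*30) / 4 = 4000 GB of physical backup storage.
estimate = backup_storage_estimate(10000, 0.02, 30, 4)
```

Doubling retention or halving the dedup ratio moves the physical footprint accordingly, which is exactly the cost growth the comment above warns about.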
Recoverability is the most important.
Scalable, available , supporting application backups, easy manageable, easy integration, alerting and reporting.
The backup software must have:
- Flexible licensing
- The ability to compare the original to the backup (data integrity)
- Granular access
- Unattended backup to multiple locations
- Built-in malware checks (the ability to check patterns)
- Data security - very important. Any change in the size of the archive must be reported, including transfers and deletions.
- Continuity and support
A complete, easy to manage and easy to deploy, Business Continuity/Disaster Recovery Solution, with backup verification, integration with different environments and security
Ease of use, de-duplication (storage efficiency), ease of restore, policy driven
Hello Ariel, nice to meet you.
My concern regarding this subject is that the time to recover must be part of the equation. No matter how long you have recorded your information, and no matter where you store it, everything must be recoverable as fast as possible. Nowadays there is no room for delays, and your business may fail due to a long outage.
As a result, the faster your selected tool can recover, the faster your business returns to the game.
That's it.
My best wishes to the IT Central Station community.
Emerson Roberto Tavares
Does it meet your service level objectives for backup windows, recovery point objectives, recovery time objectives, and retention?
- Across all the different types of systems and data in your environment (operating systems, applications, virtual machines, etc.)
- Against all the different things that can go wrong (human error / malicious behavior, system failure, site outage, regional outage, etc.)
If you have a complex environment, you'll want a solution that can manage a range of policies specific to the value of each data type to help reduce overall costs. You don't want to pay for a tier 1 solution for tier 3 data.
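Tier-specific policies like those described can be sketched as a simple mapping; the tiers and all the numbers below are invented for illustration, not recommendations:

```python
# Illustrative tier-to-policy mapping: tier 1 data gets aggressive
# RPO/RTO and long retention; tier 3 gets cheap, relaxed settings.
POLICIES = {
    1: {"rpo_minutes": 15,   "rto_minutes": 60,   "retention_days": 365, "copies": 3},
    2: {"rpo_minutes": 240,  "rto_minutes": 480,  "retention_days": 90,  "copies": 2},
    3: {"rpo_minutes": 1440, "rto_minutes": 2880, "retention_days": 30,  "copies": 2},
}

def policy_for(dataset_tier):
    """Pick a protection policy by data tier, defaulting to the cheapest.

    Unknown tiers fall back to tier 3 so nothing is accidentally given
    (and billed for) tier 1 treatment.
    """
    return POLICIES.get(dataset_tier, POLICIES[3])
```

The point is the shape, not the values: classifying data first, then attaching a policy per class, is what keeps you from paying for a tier 1 solution for tier 3 data.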
I think the most important aspect when evaluating backup software is to start from the needs. What are the architectures, applications, servers, and databases to restore (not back up)? And also, who are the people that will be responsible for the backup service (their skills, etc.)?
Then you can put in the right order all the features.
1. Automating the backup process
2. Capturing all the data, compressing it, deduplication and saving only the new files
3. Fast recovery of data AND ability to prioritize specific data to recover based on user's needs
4. Business continuity - keep the business flowing even during the event of a cyber attack, system failure, ransomware attack, etc.
Follow these 2 simple statements and you will be able to narrow the field down to a handful of vendors: 1) Choose a vendor that deals in business backup and business backup ONLY. 2) The bitterness of poor quality remains long after the sweetness of low price is forgotten.
1) The easy way to backup, add or delete what we want to backup.
2) The easy and fastest way to restore the backup generated.
The most important aspect to look for in backup and recovery software is its ability and flexibility to help achieve the best practice 3-2-1 rule. 3 backup copies, 2 different mediums, 1 offsite/offline.
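The 3-2-1 rule above can even be checked mechanically against an inventory of backup copies. A sketch, with the inventory format invented for illustration:

```python
def satisfies_3_2_1(copies):
    """Check a set of backup copies against the 3-2-1 rule.

    copies: list of dicts like {"medium": "disk", "offsite": False}.
    Requires at least 3 copies, on at least 2 distinct media, with at
    least 1 copy offsite/offline. The dict shape is illustrative, not
    any product's inventory format.
    """
    media = {c["medium"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

# Example inventory: primary disk copy, local disk replica, offsite tape.
inventory = [
    {"medium": "disk", "offsite": False},
    {"medium": "disk", "offsite": False},
    {"medium": "tape", "offsite": True},
]
```

A periodic report built on a check like this makes 3-2-1 drift visible before a disaster does.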
An all-in-one solution for VMware, Hyper-V, bare metal, cloud, replicas, application-aware processing, CBT, dedupe, tape backup, etc. Free for a small office and commercial, at an affordable price, for larger environments. For example, Veeam Community Edition.
1. Total successful backups (number of clients / sum of data)
2. Ability to restore a part, individual files, or the total machine
The reliability and speed of the restores.
This must be it:
- Backup & Restore Reliability and Data Integrity
- Ease of implementation and user interface
- Cloud Integration
- DR capabilities
- Compliance reporting
- Role Based Access Management
- Deduplication
The obvious top answer is the reliability of restores.
After that there are many important factors: ease of use and maintainability, block-level backup and deduplication, encryption, flexibility in retention policies, backup sets, and access policies, performance, cost... the list can go on and on.
First off, the files must be easily restored; secondly, the option needs to be affordable.
Backup data integrity and availability
Most important is ease of use. Next is the speed of the restore. It is also very important to cover a broad range of platforms and storage.
Below would be my top three important aspects:
- Ease of use
- Continually evolving to meet new standards
- Good product documentation
With GDPR coming to the EU in May 2018, granular control over data is getting higher and higher in the list of things to need.
In order of importance:
- Restore Reliability and Data Integrity
- Ease of implementation and user interface
- Cloud Integration
- DR capabilities
- Compliance reporting
Ease of use, manageability, and speed
Security (ransomware/crypto, intrusion), redundancy, and data failure (degradation)
- Can a backup resume after it suffers a catastrophic failure?
- Can a backup be split into parts by content for remote duplication?
- Can a backup application support multiple OS?
- Speed
- Data protection
- Cost
Dependability. If the data is not there when needed, what is the point?
-Support for the client's total backup environment
-Ease of deployment
-Ongoing management and reporting
- Ease of set up and continued use (and also how easy it is to get it off your systems)
- Useful and easy reporting
- speed of backup
- Multiple recovery options
- cost
- clear consolidated reports to user groups (if and when was the last backup 100% successful)
- easy to deploy, use, and manage
- incremental forever if storing to the cloud and protecting large files / databases
- long term archiving / retention options (never purge deleted files, etc.)
- recovery time and recovery point - both less than an hour
- low system resources, reliable
Check the points below:
* Enough bandwidth at both ends (client and server)
* Schedule backup jobs during off-production hours
* Set alerts for disk space and for backup job success/failure
* Randomly test restores of backups every 15 days
* Exclude video/ISO/MP3 files from the backup schedule
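The exclusion point in that checklist can be sketched as a simple extension filter. The extensions and file names below are illustrative assumptions; real backup tools usually take exclusion patterns in their job configuration instead:

```python
# Minimal sketch: filtering video/ISO/MP3 files out of a backup candidate
# list by extension, as suggested in the checklist above.
EXCLUDED_EXTENSIONS = {".mp4", ".avi", ".mkv", ".iso", ".mp3"}

def backup_candidates(paths):
    """Return only the paths whose extension is not excluded."""
    return [p for p in paths
            if not any(p.lower().endswith(ext) for ext in EXCLUDED_EXTENSIONS)]

files = ["report.docx", "movie.mkv", "image.iso", "song.MP3", "db.bak"]
print(backup_candidates(files))  # ['report.docx', 'db.bak']
```

Matching on the lowercased path means `song.MP3` is excluded just like `song.mp3`.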
Meets and supports the max and min configurations mentioned in the guides
Admin and other related docs are precise
RTOs and RPOs are defined properly
Is a popular solution
Has strong tech support
Many blogs and whitepapers written around the solution
Cost, scalability, and performance meet the standards
DR support and cluster-centric
Supports HA and DRS
Meets SLAs for resolving problems
A good support team behind the backup software, an easy-to-use admin console, easy deployment, and it must be a complete product that covers both physical and virtual environments.
- Meeting SLAs in terms of RTO and RPO
- Short Backup window
- Recovery time near 0
- Deduplication
- DR backup Copy
- Virtual and Physical machines
- Costs
- Reliability and stability
- Ease of use
- uncomplicated process
- quick and efficient backup and restore process
- user friendly
- easy search mechanism for files
- cost efficiency
- stability
Easy, dependable and fast recovery/restore
Easy setup but numerous functionalities such as long-term retention, replication, application awareness, etc.
1) How comfortable the backup software makes maintaining enterprise setups
2) Should meet RPO/RTO
3) Should be able to address customers' pain points, especially backup performance, space reduction, and high availability of data
4) Disaster recovery plan
5) Other advanced features like deduplication, replication, cloud, virtualization, etc.
How fast it restores all the information
Quick setup
Time needed for backup
Scheduling options (daily, weekly, monthly, etc.)
How many dependencies it has
Intuitive GUI
Quick setup
Vast compatibility of storage devices
Ability to backup to Mapped Network Drives
Most importantly... the time to restore whether it's one file or a complete image.
A backup solution is only as good as the ability to quickly recover from any disaster.
Dependable
Ease of use, don't need a lot of fancy bling just a clean interface
Application integration
Ease of recovery of entire machine or single file
Single deduplication solution from end to end including target hardware
Reporting
Low hardware footprint
Quick recovery
Low resource consumption
Role based assignment
Bare metal recovery capability
VM support (optional)
Centralized management (A Must)
Support for UNC Paths and multiple targets
1) Reliability and stability
2) Support for both virtual and physical servers
3) Better than average support for Exchange and Active Directory
4) Good support for both disk based and tape library backup
1) Quick restore options
2) Disk backup would be a better option than SAN.
3) Easy to install and configure
4) Good support for backend hardware, be it tape libraries, tape drives, media, etc.
inline deduplication ratio
management interface
integration with SAN platform
Being able to install quickly and efficiently, create a working policy, begin backing up and test recoveries without a lot of customer intervention.
It must be able to back up huge amounts of data in the shortest time frame available, and the backed-up data must be reliable and easy to restore.
It depends on what you need: restores in a very short time, backups with zero disruption, or easy administration with a very short learning curve.
But finally, I think our most important criterion is the availability of reliable technical information.
The most important thing to us is that the application must have an easy way to recover the backups. A good approach is to have an environment inside the application, or something similar, to test the backups. Another is to have clear steps for building a DRP, and not only a manual.
Another good thing is to have clear item selection for backup.
Backup to disk, D2D support.
Reliability
Backup and restore time
Budget
The solution meets your Business's requirements specifically in terms of data security, data retention, speed of restoration when you need data back and confirmation that your data will be destroyed when you terminate the contract.
- Physical and virtual capability
- Backup and restore times
- Deduplication
- Budget
Deduplication
Check Consistency
Budget
Restore Time
Nice Support
Bandwidth optimization (latency)
Backup integrity assurance
Server performance optimization
Space optimization
Compatibility with my systems
Budget
Performance,
Restore time,
Management Tool,
Reports
The most important thing is that the backup can be restored when an incident happens. If a backup cannot be restored, then it's useless.
To that end, it must at least have the following:
- Follows the 3-2-1 rule for backups
- Meets the organization's RPO/RTO
- Validates backup integrity
- Supports various backup formats
- Extensible via API or third-party modules
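The integrity-validation point in the list above boils down to recording a checksum at backup time and re-verifying it before trusting a restore. A minimal sketch, assuming a made-up manifest format (real products bake this into their catalog):

```python
# Hedged sketch: backup integrity validation via stored checksums.
# The manifest dict and the file names are illustrative assumptions.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record(manifest: dict, name: str, data: bytes) -> None:
    manifest[name] = checksum(data)  # stored at backup time

def verify(manifest: dict, name: str, data: bytes) -> bool:
    # Re-hash what we read back and compare against the recorded value
    return manifest.get(name) == checksum(data)

manifest = {}
record(manifest, "payroll.db", b"payroll contents")
print(verify(manifest, "payroll.db", b"payroll contents"))  # True
print(verify(manifest, "payroll.db", b"corrupted bits"))    # False
```

If the verify step fails, the copy is silently corrupt and should never be offered for restore, which is exactly why the answer calls it useless otherwise.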
- Backup and Restore time.
- Performant
- High Availability
- TCO/ROI (via dedup capabilities, scalability, etc.)
- Data Security
- Centrally managed (though missing a remote-management feature)
The position of the solution in Gartner Magic Quadrant.
Thoughts-
- When was the last time you completed a backup restore test
- How current are our RTO and RPO targets
- How current are your identified BU&R data sets
- Where do you store your DR and BC plans and who/how can they be accessed
- Empower the user
- Explain the difference between backup (process based) and recovery (task based)
- How many BU&R applications do you need to cover your whole business
- It is the company's responsibility to provide BU&R for data residing in the cloud.
1st - compatibility issues (clustering, distributed architecture)
2nd - performance (deduplication ratio, speed)
3rd - consistent backup task and throughput
4th - features
The SLA has many dependencies. Even if the restore time is very short, it also depends on recovery procedures and company compliance requirements.
Compatibility with your company's software, such as Oracle. Without this you may find you don't really have a backup. A feature set that meets your requirements. Compression has proven to be another important feature as your data grows. Bare-metal restore is a must, and universal restore for driver replacement is genius. Speed and throughput are important but rely heavily on your network's ability, so quoted numbers may not reflect what you are capable of.
Reliability
Ease of use
Granularity
Good pricing
- Interoperability and coverage with/of the used environments
- Stability
- Ease of use in case of emergency
- The organization that is behind
- License pricing
No fuss backups
Quick reliable restores
Easy to find and restore data. Recovery situations are sometimes stressful. The backup administrator may be offline, and you may have to talk someone else through the procedure. The exact backup client, date, or share name may not be known. A simple interface and fast catalog browsing are essential. An efficient, simple search index is an excellent add-on feature if you can get it. You won't appreciate this until the day you really need it.
PCI, SEC, SOX, HIPAA certification
High Availability
Performance (short backup window)
Ease of use, easy to restore with options to choose what, when and where!
Short RPO time
Instant Virtualisation
Has secure off site backup
Support
- Stable backup for critical applications and guaranteed recovery.
- Fast and reliable recovery.
- Ease of restoring to different hardware and OS levels.
- Small footprint in the case of hourly backups.
It has to work without fail.
Does not require a lot of setup/support
-Is it dependable?
-Is it easy to use?
-Does it interfere with normal operations?
-Can the average user install it/backup/restore?
Backup and Restore time.
TCO/ROI (via dedup capabilities, scalability, etc.)
Data Security
Seamless integration into existing as well as future infrastructure/ application.
Good vendor support
- Ability to meet current SLAs around backup and recovery, now and with expected growth.
- Insight into what is being backed up
- Flexibility
- Speed
- Data protection
- Ease of use
- Fast recovery
- No impact on production when a backup occurs
- VMware integrated
- Storage integrated
- Multiple restore options
- Software-aware integration
- Availability
- Performance
- LAN/WAN bandwidth optimized
- Cloud ready
- scalable
- High availability
- Support backup of many system and application
- Meeting SLA levels and guaranteeing recoverability within RPO/RTO.
- Performance
- Latency
- Retention policies
- Data recovery policies
- Availability
file level recovery
LAN & WAN backup
Scalable network traffic
Central management
Does it meet my RPO and RTO? Does it support backing up my apps in an app/data-consistent way? Does it support the OSes in my environment? Does it support the virtualization technology I am using, and leverage its APIs to offload processing?
Client requirements
Support
Cost
Features
Easy to restore
A blend of RTO and RPO.
Ease of pre-failover and post-failover operations.
Meeting SLA levels and guaranteeing recoverability within RPO/RTO.
Using systems that can scale and cope with future business within the expected lifetime of the system.
Designing a solution that is easy to maintain and support.
The people who maintain and run the backups should have clear responsibilities, and documentation should always exist.
Recoverability of full systems virtual and physical.
Recoverability
Compression
Dedupe
Replicate
Error Reporting with corrective measures
-Availability
-RTOs
-Performance
-Cost
- Backup speed: how long does it take to make a backup?
- Restore speed
- Restore complexity
- Disaster recovery options
- Virtual machine capabilities
- Is restoring individual items (emails, calendar items, ...) easy?
- Is it possible to back up online content like Microsoft 365 files?
- Data transfer latency, mainly for remote and long-distance backups;
- Performance of restores and recoveries;
- Retention policies;
- Data recovery policies;
- Service availability (24/7);
- And last but not least: the budget.