We are resellers of VMware products. We sell both VMware vSphere and VMware vSAN.
This solution is used for infrastructure virtualization. It is deployed to get the most benefit out of our hardware or the customer's hardware.
It is easy to use.
The features are very rich.
I would like to see it more open to working with other platforms.
They could improve the pricing. The license could be cheaper.
The setup requires multiple components. It would be better to integrate them into one solution, especially for small businesses.
I have been acquainted with vSphere for two years.
It's a stable product.
It's a scalable solution. We have between 10 and 20 users.
Technical support is fine.
The initial setup is straightforward.
Depending on the environment, it can take a couple of days to complete the installation and configuration.
We have a team of two engineers to complete the entire setup.
It's a perpetual license paid on a yearly basis.
A customer can buy a license and support on a yearly basis.
The pricing should be more flexible and more affordable for the customer.
I can recommend vSphere to other users who are looking into implementing it.
I would rate VMware vSphere an eight out of ten.
Basically, our operations cluster runs under vSphere.
vRealize Operations Manager is the most valuable feature, but it is not embedded in vSphere; it is a separate product that works alongside vSphere. It is used for forecasting and checking the consumption of CPU, memory, and other resources. It has the capability to forecast based on history and give advice on consumption.
VMware vSphere is easy to use and easy to implement. Its learning curve is not sharp. Any engineer with little or medium knowledge of hypervisors and virtualizations can implement vSphere with a few clicks.
Its performance was an issue in version 6.5, but with the inclusion of the HTML5 client in vSphere version 6.7, the experience is seamless. In version 6.7, VMware introduced an HTML5-based web console, which has changed the console's responsiveness and improved the performance.
We are using the trial version of vRealize Operations. It would be nice if some of those capabilities could be included in future versions of vSphere, not as a part of vRealize Operations, but in vSphere itself. It can provide some kind of forecast about your resource consumption based on the actual workload and modeling or testing scenarios. It can give you some advice or tips for the future growth of your infrastructure.
I have been using this solution for three years.
It is stable enough. We are satisfied with its stability.
Its scalability seems to be straightforward. Obviously, you have to check if you have the correct license. Otherwise, you will have to change the license. Licensing can sometimes stop you from growing.
We currently don't have any plans to increase its usage.
We haven't raised any case with VMware so far. We haven't needed any help from their technical support.
We moved from legacy servers to VMware Hyperconverged Infrastructure. We were using the ESXi version from vSphere, and then we moved to the cluster version. We have multiple servers in one cluster.
One of the main reasons for choosing vSphere was that it is one of the most known hypervisors in the market. It is easy to use, easy to implement, and straightforward. It was very good for our proof of concept, and we went for it. Eventually, we moved to a cluster infrastructure. Obviously, we use vSphere as the hypervisor.
The initial setup was straightforward. The deployment took one to three days.
We bought local services for vSphere. The experience was good overall.
We have three people for its maintenance. They are system integrators and infrastructure administrators.
We also tested Microsoft Hyper-V, but at that time, it was unstable. It was not stable enough to be implemented in our environment. That's why we didn't use it.
I would recommend this solution. I would rate VMware vSphere a ten out of ten.
The primary use case is documentation.
I use this solution on AWS, which is pretty standard. It is fairly easy to use and has enhanced security.
From a feature set point of view, I am quite comfortable with it.
The pricing and tech support need improvement.
I have not scaled it very high. I have only used it in small implementations. I only have a total of 190 people using the solution.
The technical support is poor. We are in Australia, but we do not have the same level of support as the US and Europe.
Setting up this solution is not a problem.
The price is high. It would be nice if VMware made a price reduction.
I looked at native AWS as an option. My preference is Oracle VM versus this solution.
We are using the VMware vSphere product to virtualize our servers, and we have been very successful. We are very satisfied.
It provides a new environment in an expedient manner. It is a better use of resources between the servers. As we can use these resources better, it helps our TCO (Total Cost of Ownership) analysis.
We would like VMware to add the capacity to incorporate more equipment. We also think it could improve its hyper-converged capabilities.
It is very stable.
It is very scalable. We like that it is very functional and has the ability to handle hyper-convergence. There is the capacity to grow the environment by adding the same type of equipment, which really interests us.
I do not have experience with the technical support team.
We looked at Microsoft Hyper-V, but it does not have all of the capabilities of VMware vSphere.
I think that vSphere is an expensive solution.
It's made us a lot more agile. We don't have to acquire new hardware just to bring it up or utilize new services for our customers. It makes it a lot easier for my team to allocate resources for the other business teams at the company.
The most important feature for us is clearly the foundation it provides. In addition to that, we've found the High Availability and flexibility to be important as well.
I definitely could see some improvements in Operations Management. That's another product that they have, but it's lacking in a few things. I feel that it's not as aggressive as it should or could be. They have different levels built into it, but I think they should have more aggressive levels.
Another area of improvement would be the further development of graphics virtualization. They've started dabbling in that, it seems, but it definitely needs a lot more work. They need to make it a little quicker and better.
I could count on one hand the number of times I've had issues with it and it's generally been related to hardware faults.
It's been very much scalable. When we started using it, we only virtualized a handful of servers. We've since expanded it to virtualize about 90% of our infrastructure at this point.
Customer Service:
Not really applicable to my situation. I've always had a good relationship with the regional sales rep but I don't need to contact him very often.
Technical Support:
It's been a little bit hit-or-miss at times. I think that's related to who picks up the phone first. They always get my problems resolved, but sometimes it ends up being quicker for me to figure it out on my own than it is for them to get back to me. I'd probably rate technical support a 6 out of 10.
We evaluated Citrix, but in our testing, vSphere was definitely more stable. Once we got started with vSphere and saw what it could do, we liked it more and more.
The initial setup is pretty straightforward, but it can get complex as you want to use more features. When we first started, it was very, very simple, but we've since made it a lot more complex to account for redundancy.
We implemented using in-house talent.
Make sure you find a good reseller you can trust. I don't have any advice with regard to pricing, though, because the product is worth what you pay for it. I definitely feel like I'm getting good value.
Because there are multiple tiers, you want to make sure that you size your licensing appropriately. If you're going to have a stack, you're going to want to weigh the features that are available with the Enterprise versions versus the standard versions and really understand what you're going to get out of it.
Yes, we looked at XenServer, but we had issues with VM stability. This was over 8 years ago, though, so things have likely changed since then.
The most important accomplishment was the cost savings that were achieved by server consolidation and eliminating dependency on the physical server's environment. This also facilitated our disaster recovery by easy replication of the VM images from one site to another.
VMware's high availability, which supports our SLA, and VMware's on-the-fly features, like LUN expansion, P2V, and API integrations, are the most valuable features.
The solution could benefit from the ability to expand CPUs and memory across different physical nodes. A more mature dashboard is needed; currently, we rely on third-party VM management solutions, although most of the features have matured since we first started using it in 2007.
In the early years, we faced a few issues, but in the last four years, the environment has been quite stable.
The software has been scalable; most of it depends on the physical server's capacity.
Technical support has been excellent.
We did not use another solution; we started out with VMware and we now have Hyper-V and VMware.
The initial setup was straightforward.
Pricing needs to be competitive since Microsoft Hyper-V has come a long way; they are both around the same price range.
We did not evaluate other solutions, it was the only leading product in 2007.
If you need to meet your business SLA, then there is no second choice in virtualization to give you peace of mind; it is easy to manage, scalable, stable and has APIs to integrate with all the backup solutions.
Nice review! Curious about your VM replication - is your second site a DR facility such as SunGard? Do you utilize VMware's SRM?
Thanks!
PART I
Have you spent time searching the VMware documentation, online forums, venues, and books to figure out how to make a local dedicated direct attached storage (DAS) device (e.g. SATA or SAS) into a Raw Device Mapping (RDM)? Part two of this post looks at how to make an RDM using an internal SATA HDD.
Or how about how to make a Hybrid Hard Disk Drive (HHDD), which is faster than a regular Hard Disk Drive (HDD) on reads but offers more capacity and less cost than a Solid State Device (SSD), actually appear to VMware as an SSD?
Recently I had these and some other questions and spent some time looking around; this post highlights some great information I found for addressing the above VMware challenges and a few others.
The SSD solution comes via a post I found on fellow VMware vExpert Duncan Epping's Yellow Bricks site. If you are into VMware or server virtualization in general, and are particularly a fan of high availability, add Duncan's site to your reading list. Duncan also has some great books to add to your bookshelves, including VMware vSphere 5.1 Clustering Deepdive (Volume 1) and VMware vSphere 5 Clustering Technical Deepdive, which you can find at Amazon.com.
Duncan's post shows how to fool VMware into thinking that an HDD is an SSD for testing or other purposes. Since I have some Seagate Momentus XT HHDDs that combine the capacity (and cost) of a traditional HDD with read performance closer to an SSD (without the cost or capacity penalty), I was interested in trying Duncan's tip (here is a link to his tip). Essentially, Duncan's tip shows how to use the esxcli storage nmp satp and esxcli storage core commands to make a non-SSD look like an SSD.
______________________________________________________________________
The commands that were used from the VMware shell per Duncan’s tip:
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_local enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0
______________________________________________________________________
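If you want to confirm the change took effect, the same device list command can be rerun after the reclaim and filtered down to the flag in question. A quick check along these lines (the device ID below is the example from my setup; yours will differ):

______________________________________________________________________

# Verify the SSD flag after reclaiming (device ID is an example):
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0 | grep "Is SSD"
# The output should now report something like: Is SSD: true

______________________________________________________________________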
After all, if the HHDD is actually doing some of the work to boost performance and thus fool the OS or hypervisor into thinking it is faster than an HDD, why not tell the OS or hypervisor (in this case VMware ESX) that it is an SSD? So far I have not seen, nor do I expect to notice, anything different in terms of performance, as that improvement already occurred going from a 7,200 RPM (7.2K) HDD to the HHDD.
If you know how to determine what type of HDD or SSD a device is by reading its sense code and model number information, you will recognize the circled device as a Seagate Momentus XT HHDD. This particular model is a Seagate Momentus XT II 750GB with 8GB of SLC NAND flash SSD memory integrated inside the 2.5-inch drive.
Normally the Seagate HHDDs appear to the host operating system, or whatever they are attached to, as a Momentus 7,200 RPM SATA disk drive. Since there are no special device drivers, controllers, adapters, or anything else required, the Momentus XT HHDDs are essentially plug and play.
After a bit of time they start learning and caching things to boost read performance (read more about boosting read performance including Windows boot testing here).
Screen shot showing Seagate Momentus XT appearing as a SSD
Note that the HHDD (a Seagate Momentus XT II) is a 750GB 2.5" SATA drive that boosts read performance with the current firmware. Seagate has hinted that there could be a future firmware version to enable write caching or optimization; however, I have been waiting for over a year.
Disclosure: Seagate gave me an evaluation copy of my first HHDD a couple of years ago and I then went on to buy several more from Amazon.com. I have not had a chance to try any Western Digital (WD) HHDDs yet, however I do have some of their HDDs. Perhaps I will hear something from them sometime in the future.
For those who are SSD fans, or who actually have them: yes, I know SSDs are faster all around, and that is why I have some, including in my Lenovo X1. Thus, for write-intensive work, go with a full SSD today if you can afford one, as I have with my Lenovo X1, which enables me to save large files faster (less time waiting).
However, if you want the best of both worlds for a lab or other system that is doing more reads than writes, and you need as much capacity as possible without breaking the budget, check out the HHDDs.
Thanks for the great tip and information, Duncan. In part II of this post, read how to make an RDM using an internal SATA HDD.
PART II
In the first part of this post I showed how to use a tip from Duncan Epping to fool VMware into thinking that an HHDD (Hybrid Hard Disk Drive) was an SSD.
Now let's look at using a tip from Dave Warburton to make an internal SATA HDD into an RDM for one of my Windows-based VMs.
My challenge was that I had a VM with a guest to which I wanted to present a Raw Device Mapping (RDM) HDD, except the device was an internal SATA device. Given the standard tools and some of the material available, it would have been easy to give up and quit, since the SATA device was not attached to an FC or iSCSI SAN (such as my Iomega IX4, which I bought from Amazon.com).
Image of internal SATA drive being added as a RDM with vClient
Thanks to Dave's great post, which I found, I was able to create an RDM of an internal SATA drive and present it to an existing VM running Windows 7 Ultimate, and it is now happy, as am I.
Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).
For the device that I wanted to use, the device name was:
______________________________________________________________________
From the ESX command line I found the device I wanted to use which is:
t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5
Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/datastore1/rdm_ST1500L.vmdk
______________________________________________________________________
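If you are not sure which device name to feed vmkfstools, you can list the candidates from the ESX shell first. A rough sketch of one way to hunt down the name (the grep filter is just a convenience to hide the vml.* aliases):

______________________________________________________________________

# List the raw device names (the t10.* entries), skipping the vml aliases:
ls /vmfs/devices/disks/ | grep -v vml
# Cross-check model and size details before committing to a device:
esxcli storage core device list | grep -i "Display Name"

______________________________________________________________________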
Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.
Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. Also, I recommend trying or practicing on a spare or scratch device first, in case something gets messed up. I practiced on an HDD used for moving files around; after doing the steps in Dave's post, I added the RDM to an existing VM, started the VM, and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.
As per Dave's tip, vSphere Client did not recognize the RDM per se; however, after telling it to look at existing virtual disks and browsing the datastores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.
In case you are wondering why I want to make a non-SAN HDD an RDM vs. doing something else: simple, the HDD in question is a 1.5TB HDD that has backups on it that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device, if I have to, and access it via a non-VM-based Windows system.
Image of my VMware server with internal RDM and other items
Could I have accomplished the same thing using a USB-attached device accessible to the VM?
Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.
While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate Goflex SATA drives using a USB to SATA Goflex cable. I also have the Goflex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port including my Lenovo X1.
As a precaution, I used a different HDD that contained data I was not concerned about if something went wrong to test to the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is also backed up to removable media and to my cloud provider.
Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.
Meanwhile, it's time to get some other things done, as well as to continue looking for and finding good workarounds and tricks to use in my various projects. Drop me a note if you see something interesting.
Additional Information
Which Enterprise HDDs to use for a Content Server Platform
Another great tip - if you use Nimble Storage install the Nimble Connection Manager software on your hosts for the pathing management. It works very well with these devices for connectivity.
Obviously, data center virtualization is important for multiple reasons, but Horizon View is as well.
I would consider our deployment, at least from the college’s deployment, vanilla, meaning we do not leverage a lot of the technologies VMware offers. We do deploy HA + DRS clustering, but that is about the extent of it.
Our vITA environment does have its uniqueness, and we continually attempt to develop labs that can address most of the products/features available from VMware.
VMware-based solutions are designed for the consolidation of servers. Also, since we had to expand our market globally to support the expense of running our vITA program, we had to come up with a delivery method to teach these courses anywhere in the world.
By using Horizon View's virtual desktop technology as the portal for participants to gain access to our virtual lab environment along with use of live online meeting tools (currently we use Adobe Connect), we became early adopters of the course delivery method now known as VILT (Virtual Instructor Led Training).
Continue to develop products that address the SMB market.
I have used VMware products for ten years.
My initial use was to teach Operating Systems at Caldwell Community College & Technical Institute. Within a year after I began using it for curriculum courses, Google decided to build its largest datacenter in the world just out our backdoor. We were invited by Google to develop a program to train individuals how to become “Datacenter Technicians”. I became intimately involved with this due to my industry background and my use of open source products, including VMware. Due to the rapid turnover in courses, preparation of VMware’s Workstation product became too time consuming so I installed the VMware Server solution, which at that time was v3.5.
Primarily, since we were early adopters, there was little expertise available other than directly from VMware. That is one of VMware's strong points: they provide a wealth of information through their documentation (too much, perhaps) and their community forums.
Hardware compatibility issues, particularly early on, needed to be identified prior to attempting deployments. This is not really an issue with VMware products; their guides refer you to verify compatibility against the HCL, and now most vendors ensure their hardware complies. There were also issues arising from the integration of vSphere with SAN vendor hardware. Again, most of these issues occurred early on, due to our learning curve.
For the college, not only being “vanilla”, we are also not a huge institution so scalability is not an issue.
For our vITA program, we had to find ways to get the most from our available hardware. We initially had old equipment from the college as they increased the use of virtualization. I actually embrace this approach since I have been in the technology field for four decades. I consider it a challenge to get the most from limited resources. If you have ample resources, time and money, you should be able to accomplish most anything technologically. The skill/talent, at least from my point of view, is being able to accomplish this without the abundance of time/money/resources.
From the college's perspective, we have not had many occasions to contact VMware support directly. Some of this had to do with the relationship between myself, as the vITA Director, and the college's Network/System Administrator. I did the research and development, which is basically what I have done in both my industry and academic careers, so the college benefited from my lumps on the implementation on the production side.
With the vITA program, I was pretty much on my own, but I did have access to some VMware internal information.
We didn't use any previous solution for server virtualization. For desktop, the college still uses XenWorks, with minimal Horizon View deployment mainly due to manpower issues and comfort.
We were early adopters, so obviously there were complexities.
We did it in-house.
From my point of view, particularly in the IT industry, you need to be continually moving forward; otherwise, you are moving backwards or out. That is not to say there is no room for improvement in particular areas, for instance, in products that help the small business arena. From discussions I have had with VMware employees, they are aware of this and have introduced products, like VSAN, to help address it.
Get buy-in from other areas within your organization, which is typically an easy sell. But do it up front and identify a relatively small test deployment and the internal level of expertise. Then fill voids with either internal training or by establishing partnerships.
This is a logical diagram of our vITA Lab environment:
Ops Manager is a good product, but it also requires Orchestrator for automation. Be sure to check out other vendors if you are looking for this type of functionality. Very well written review of vSphere.