We use it for about 90 percent of our corporate network.
We have a separate vSphere for an ISP that we run on a private and public cloud, because we are an anti-cloud company.
It rides our entire corporate network. Everything inside of our corporate Windows domain (e.g., domain controller, database files, etc.) rides inside VMware.
In the last three years, we have moved from a physical to a virtual environment. We have removed the need for backups and going to the office at three in the morning to change a server. I do everything during my business hours. It gave me my life back.
The product is very scalable. Since it is a virtualized environment where all the compute rides, it doesn't care about what is riding under it. Therefore, you can expand or shrink it as much as you want.
Most of my support goes through my third-party. The person who helped us integrate VMware is the person who we also contact for support. They have an inside support guy with VMware. While it is a middle man type of thing, it has been pretty good so far.
We started out with Microsoft Hyper-V because it came with everything in their license. Even after adopting Hyper-V, we always had a small VMware environment. With some of the blade servers that came out from Dell and Cisco, we moved over to VMware because it utilizes all the back-end interconnects a lot better than Microsoft does. After that, we went full VMware.
I miss the Enterprise tier. When they went to Enterprise Plus, it increased the price. I was one of the guys that operated well inside the Enterprise tier. I paid a little bit more than standard but I got a lot more features. Enterprise Plus has a lot of things that I'll never use. So when they chopped that tier out, they kneecapped me.
If you go with a standard license, it's very affordable. If you start digging into how they price all of their add-ons compared to Hyper-V, you get into the mud, because Hyper-V bundles everything together. So, at least you can customize your pricing to exactly what you need, so that is a plus.
We evaluated Cisco and Dell. We have been moving more towards Cisco's computing. We did evaluate MikroTik for switching, since they have cheap switches.
Do your homework and build it from the ground up. Set up a plan to replace everything and start from the beginning as a fully virtualized environment. That way it won't bite you later, which is one thing we were worried about; we ended up having to do extra work because we took small steps into virtualization.
Most important criteria when selecting a vendor:
PART I
Have you spent time searching the VMware documentation, online forums, and books to figure out how to make a local dedicated direct attached storage (DAS) device (e.g., SATA or SAS) appear as a Raw Device Mapping (RDM)? Part two of this post looks at how to make an RDM using an internal SATA HDD.
Or how about making a Hybrid Hard Disk Drive (HHDD), which is faster than a regular Hard Disk Drive (HDD) on reads yet offers more capacity at less cost than a Solid State Device (SSD), actually appear to VMware as an SSD?
Recently I had these and some other questions and spent some time looking around, thus this post highlights some great information I have found for addressing the above VMware challenges and some others.
The SSD solution comes via a post I found on fellow VMware vExpert Duncan Epping's yellow-bricks site; if you are into VMware or server virtualization in general, and in particular a fan of high availability (general or virtualization specific), add Duncan's site to your reading list. Duncan also has some great books to add to your bookshelves, including VMware vSphere 5.1 Clustering Deepdive (Volume 1) and VMware vSphere 5 Clustering Technical Deepdive, both of which you can find at Amazon.com.
Duncan's post shows how to fake VMware into thinking that a HDD is an SSD, for testing or other purposes. Since I have some Seagate Momentus XT HHDDs that combine the capacity (and cost) of a traditional HDD with read performance closer to an SSD (without the cost or capacity penalty), I was interested in trying Duncan's tip (here is a link to his tip). Essentially, Duncan's tip uses the esxcli storage nmp satp and esxcli storage core commands to make a non-SSD look like an SSD.
______________________________________________________________________
The commands that were used from the VMware shell per Duncan’s tip:
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_local enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0
______________________________________________________________________
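As a quick sanity check after running the commands above, the device listing should report the drive as an SSD. The snippet below is a small, hypothetical Python sketch that parses `esxcli storage core device list` style output for the "Is SSD" flag; the sample output is illustrative, not captured from a real host.

```python
# Sketch: confirm the "Is SSD" flag flipped after reclaiming the device.
# SAMPLE_OUTPUT is a made-up example of device-list output, not real capture.
SAMPLE_OUTPUT = """\
mpx.vmhba0:C0:T1:L0
   Display Name: Local ATA Disk (mpx.vmhba0:C0:T1:L0)
   Device Type: Direct-Access
   Is SSD: true
   Is Local: true
"""

def is_flagged_ssd(device_list_output: str) -> bool:
    """Return True if the device listing reports 'Is SSD: true'."""
    for line in device_list_output.splitlines():
        # Split each "Key: value" line on the first colon only.
        key, _, value = line.strip().partition(":")
        if key.strip() == "Is SSD":
            return value.strip().lower() == "true"
    return False

print(is_flagged_ssd(SAMPLE_OUTPUT))  # True once the SATP rule is applied
```

On a real host you would feed this the actual command output rather than a sample string.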
After all, if the HHDD is actually doing some of the work to boost performance and thus fool the OS or hypervisor into thinking it is faster than a HDD, why not tell the OS or hypervisor (in this case VMware ESX) that it is an SSD? So far I have not seen, nor do I expect to notice, anything different in terms of performance, as that boost already occurred going from a 7,200 RPM (7.2K) HDD to the HHDD.
If you know how to tell what type of HDD or SSD a device is by reading its sense code and model number information, you will recognize the circled device as a Seagate Momentus XT HHDD. This particular model is a Seagate Momentus XT II 750GB with 8GB of SLC NAND flash SSD memory integrated inside the 2.5-inch drive.
Normally the Seagate HHDDs appear to the host operating system, or whatever they are attached to, as a Momentus 7200 RPM SATA disk drive. Since no special device drivers, controllers, adapters or anything else are required, the Momentus XT HHDDs are essentially plug and play.
After a bit of time they start learning and caching things to boost read performance (read more about boosting read performance including Windows boot testing here).
Screen shot showing Seagate Momentus XT appearing as a SSD
Note that the HHDD (a Seagate Momentus XT II) is a 750GB 2.5" SATA drive that boosts read performance with the current firmware. Seagate has hinted that a future firmware version could enable write caching or optimization; however, I have been waiting for a year.
Disclosure: Seagate gave me an evaluation copy of my first HHDD a couple of years ago and I then went on to buy several more from Amazon.com. I have not had a chance to try any Western Digital (WD) HHDDs yet, however I do have some of their HDDs. Perhaps I will hear something from them sometime in the future.
For those who are SSD fans, or who actually have them: yes, I know SSDs are faster all around, which is why I have some, including in my Lenovo X1. For write-intensive work, go with a full SSD today if you can afford one, as I have with my Lenovo X1, which enables me to save large files faster (less time waiting).
However, if you want the best of both worlds for a lab or other system that does more reads than writes, and need as much capacity as possible without breaking the budget, check out the HHDDs.
Thanks for the great tip and information, Duncan. In part II of this post, read how to make an RDM using an internal SATA HDD.
PART II
In the first part of this post I showed how to use a tip from Duncan Epping to fake VMware into thinking that a HHDD (Hybrid Hard Disk Drive) is an SSD.
Now let's look at using a tip from Dave Warburton to make an internal SATA HDD into an RDM for one of my Windows-based VMs.
My challenge was that I had a VM guest that I wanted to have a Raw Device Mapping (RDM) HDD accessible to it, except the device was an internal SATA device. Using the standard tools and reading some of the material available, it would have been easy to give up and quit, since the SATA device was not attached to an FC or iSCSI SAN (such as my Iomega IX4, which I bought from Amazon.com).
Image of internal SATA drive being added as a RDM with vClient
Thanks to Dave's great post, I was able to create an RDM of an internal SATA drive and present it to the existing VM running Windows 7 Ultimate. It is now happy, as am I.
Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).
For the device that I wanted to use, the device name was:
______________________________________________________________________
From the ESX command line, I found the device I wanted to use:
t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5
Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 \
/vmfs/volumes/datastore1/rdm_ST1500L.vmdk
______________________________________________________________________
Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.
Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. I also recommend trying or practicing on a spare or scratch device first, in case something gets messed up. I practiced on a HDD used for moving files around: after doing the steps in Dave's post, I added the RDM to an existing VM, started the VM and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.
As per Dave's tip, vSphere Client did not recognize the RDM per se; however, after telling it to look at existing virtual disks and browsing the datastores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.
In case you are wondering why I want to make a non-SAN HDD an RDM vs. doing something else: simple, the HDD in question is a 1.5TB HDD holding backups that I want to use as-is. The HDD is also BitLocker protected, and I want the flexibility to remove the device and access it from a non-VM-based Windows system if I have to.
Image of my VMware server with internal RDM and other items
Could I have accomplished the same thing using a USB-attached device accessible to the VM?
Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.
While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate Goflex SATA drives using a USB to SATA Goflex cable. I also have the Goflex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port including my Lenovo X1.
As a precaution, I tested the process using a different HDD containing data I was not concerned about, before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is backed up to removable media and to my cloud provider.
Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.
Meanwhile, it is time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects. Drop me a note if you see something interesting.
Additional Information
Which Enterprise HDDs to use for a Content Server Platform
Obviously data center virtualization is important to us for multiple reasons, but Horizon View is as well.
I would consider our deployment, at least the college's deployment, vanilla, meaning we do not leverage a lot of the technologies VMware offers. We do deploy HA + DRS clustering, but that is about the extent of it.
Our vITA environment does have its uniqueness, and we continually attempt to develop labs that can address most of the products/features available from VMware.
VMware-based solutions are designed for the consolidation of servers. Also, since we had to expand our market globally to support the expense of running our vITA program, we had to come up with a delivery method to teach these courses anywhere in the world.
By using Horizon View's virtual desktop technology as the portal for participants to gain access to our virtual lab environment along with use of live online meeting tools (currently we use Adobe Connect), we became early adopters of the course delivery method now known as VILT (Virtual Instructor Led Training).
Continue to develop products that address the SMB market.
I have used VMware products for ten years.
My initial use was to teach Operating Systems at Caldwell Community College & Technical Institute. Within a year after I began using it for curriculum courses, Google decided to build its largest datacenter in the world just out our backdoor. We were invited by Google to develop a program to train individuals how to become “Datacenter Technicians”. I became intimately involved with this due to my industry background and my use of open source products, including VMware. Due to the rapid turnover in courses, preparation of VMware’s Workstation product became too time consuming so I installed the VMware Server solution, which at that time was v3.5.
Primarily, since we were early adopters, there was little expertise available other than directly from VMware. This is one of VMware's strong points: they provide a wealth of information through their documentation (too much, even) and their community forums.
Hardware compatibility issues, particularly early on, needed to be identified prior to attempting deployments. This is not really an issue with VMware products; their guides refer you back to the HCL to verify compatibility, and now most vendors ensure their hardware complies. There were also issues arising from integrating vSphere with SAN vendor hardware. Again, most of these issues occurred early on due to our learning curve.
For the college, besides being "vanilla", we are also not a huge institution, so scalability is not an issue.
For our vITA program, we had to find ways to get the most from our available hardware. We initially had old equipment from the college as they increased the use of virtualization. I actually embrace this approach since I have been in the technology field for four decades. I consider it a challenge to get the most from limited resources. If you have ample resources, time and money, you should be able to accomplish most anything technologically. The skill/talent, at least from my point of view, is being able to accomplish this without the abundance of time/money/resources.
From the college side, we have not had many occasions to contact VMware support directly. Some of this had to do with the relationship between myself, as the vITA Director, and the college's Network/System Administrator. I did the research and development, which is basically what I have done both in my industry career and in my academic career, so the college benefitted from my lumps on implementation on the production side.
With the vITA program, I was pretty much on my own, but I did have access to some VMware internal information.
We didn't use any previous solution for server virtualization. For desktop, the college still uses XenWorks, with minimal Horizon View deployment mainly due to manpower issues and comfort.
We were early adopters, so obviously there were complexities.
We did it in-house.
From my point of view, particularly in the IT industry, you need to be continually moving forward; otherwise you are moving backwards or out. But that is not to say there is no room for improvement in particular areas, for instance, products that help the small business arena. In discussions I have had with VMware employees, they have acknowledged this and have introduced products, like VSAN, to help address this arena.
Get buy-in from other areas within your organization, which is typically an easy sell. But do it up front and identify a relatively small test deployment and the internal level of expertise. Then fill voids with either internal training or by establishing partnerships.
This is a logical diagram of our vITA Lab environment:
Truthfully, I'm not using many of the available features. My needs have been small in that we just needed to virtualize our environment and manage it effectively. VMware vSphere has served that purpose greatly. I’m sure what I get out of vSphere, though, could potentially be gained just as easily via other virtualization platforms available today, but at the time I felt those were too immature to risk. VMware just worked with little to no issue, so I trust them going forward.
The largest benefit for the companies that have used this is the consolidation of our physical server footprint. Never would I have thought I could run as many VMs on a single host as we do today.
Overall I’m very happy with what the product brings so I can’t suggest any major improvements. However, I’m very disappointed in VMware’s decision to push management to a web-based vCenter client and away from the standalone thick client. The web client is just terrible in so many ways, mainly on a performance basis. It is very slow. I also find the thick client much easier to navigate and work with my VMs. A large user population shares my sentiment as there are a number of posts in VMware’s forums regarding the issues with the web client. I hope VMware realizes this and either greatly enhances the web client or moves back to the thick client for management.
I have been using it since vSphere 4, so approximately five to six years.
I’m sure there were issues to contend with originally, but as the product matures it gets easier and easier.
It was pretty straightforward, from what I recall, but I did not do most of the initial setup. I assisted a colleague who took the reins.
Technical Support: I've rarely had to enlist support, but when I have, it's been what I would expect.
My first environment was set up by a single colleague with my assistance. The only advice I can really give is to really know your requirements for the systems and software you intend to virtualize and build a properly sized VM environment to host them. Oversubscribing resources is, in my opinion, the biggest concern and something that happens easily. Also factor in proper storage built to handle the I/O load of a virtual environment. Lastly, build your VM environment with an N+1 design so that if a host fails, the remaining host(s) can handle the load of all VMs that were running on the failed host, and always allow for 15% overhead of free resources under full load.
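To make the N+1 and 15%-overhead advice concrete, here is a rough back-of-the-envelope check in Python. The host counts and memory figures are made-up examples, and real sizing should of course consider CPU, storage and failover policies as well.

```python
# Rough N+1 sizing check: with one host down, the survivors must carry
# every VM while still keeping ~15% of their resources free.

def n_plus_1_ok(hosts: int, ram_per_host_gb: float,
                total_vm_ram_gb: float, overhead: float = 0.15) -> bool:
    """True if the cluster survives one host failure with headroom to spare."""
    if hosts < 2:
        return False  # a single host offers no redundancy at all
    surviving_capacity = (hosts - 1) * ram_per_host_gb
    usable = surviving_capacity * (1.0 - overhead)  # keep 15% free
    return total_vm_ram_gb <= usable

# Hypothetical example: 4 hosts x 256 GB RAM, 600 GB of VM RAM allocated.
print(n_plus_1_ok(4, 256, 600))   # 3 * 256 * 0.85 = 652.8 GB -> fits
print(n_plus_1_ok(3, 256, 600))   # 2 * 256 * 0.85 = 435.2 GB -> does not fit
```

The same arithmetic applies to vCPU or storage headroom; RAM is just the resource that tends to run out first.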
I really did not handle the financial aspects of my VM environments, but I do know VMware is pricey. These days, from a price point, I would take a hard look at MS Hyper-V as they are catching up with VMware fairly quickly.
When I first looked into virtualization it was back when VMware released vSphere 4. At that time I was interested in Citrix Xen and MS Hyper-V. I felt at the time VMware was the industry leader and was more mature so I trusted them above all others. I’ve been happy with the choice since, though for cost purposes I am really interested in Microsoft’s Hyper-V solution.
Cost considerations aside, be sure to properly scale your VM environment above all else. This is true regardless of product.
I agree with cchilderhose that v6 has significantly improved in responsiveness. I am currently the VMware IT Academy (vITA) Coordinator/Instructor for our community college: www.cccti.edu
We were invited in 2006 by VMware to assist in the development of the vITA program, which means I started with it in v3.5.
We have been somewhat forced to use the Web Client, since we have to instruct others on how to use VMware "features". When the Web Client was first introduced, even folks from within VMware did not have a lot of positive comments about it.
But no matter what we all get familiar with initially, change is change. With the vSphere Client you don't have to think about how to do something; you just do it. When first using the Web Client, I always felt as if I were stumbling around trying to find how to get where I needed to be to complete a task. Not good when you are trying to show others.
With the release of v6, and in particular Update 1, the Web Client operates much more responsively. In addition, now that I have been using it for two years, I accept that it does not function the same as the vSphere Client, and I have learned to be more proficient with it.
In fact, during the last section of the ICMv6 course we just finished teaching, I actually felt I was better at completing tasks with it than I am with the vSphere Client. I guess the comment here is: "Be patient, grasshopper."
The other thing of note is that tasks done in the vSphere Client do not always propagate correctly, for example, assignments of access control.
One question I have is: for those who are not interested in using PowerCLI or the vSphere Management Assistant (vMA), how will you manage a host directly once they eliminate the vSphere Client?
I like to make comparisons, something I do now that I have been in higher education for nearly 20 years, after 20 years in industry before that: if you give me an iPhone, I will stumble around trying to make a call, since I use an Android.
In the past, without virtualization, it normally took several hours to get a new server built, including cabling, racking and OS imaging. Now we can use templates with many OS flavours and get a new server running in a few minutes.
Currently we are struggling to keep our storage capacity under control. As we do not use thin provisioning, capacity is always a challenge, even though the actual space used by the guests is pretty low. We need to find a way to move to thin provisioning and keep it under control, implement automation in our capacity management, and set up threshold alerting.
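As a sketch of the kind of threshold alerting described above, the hypothetical Python function below flags a datastore when actual usage crosses a threshold, or when thin-provisioned allocations overcommit the capacity beyond a chosen ratio. The figures and thresholds are illustrative assumptions, not values from the environment described.

```python
# With thin provisioning, total provisioned space can exceed the datastore,
# so track both actual usage and the overcommit ratio.

def capacity_alerts(capacity_gb, used_gb, provisioned_gb,
                    used_threshold=0.80, overcommit_threshold=2.0):
    """Return a list of alert strings for a single datastore."""
    alerts = []
    used_pct = used_gb / capacity_gb
    overcommit = provisioned_gb / capacity_gb
    if used_pct >= used_threshold:
        alerts.append(f"used {used_pct:.0%} >= {used_threshold:.0%}")
    if overcommit >= overcommit_threshold:
        alerts.append(f"overcommit {overcommit:.1f}x >= {overcommit_threshold:.1f}x")
    return alerts

# Hypothetical 2 TB datastore examples:
print(capacity_alerts(2048, 1700, 3000))  # high actual usage -> one alert
print(capacity_alerts(2048, 900, 5000))   # heavy overcommit -> one alert
```

In practice a script like this would pull the numbers from the platform's inventory API rather than hard-coded values, and feed the alerts into whatever monitoring system is already in place.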
I've been using VMware for seven years.
We have no problems deploying vSphere.
So far, I have only had problems with stability when we implemented a third-party backup tool that uses VMware snapshots to take backups. It brought some instability to some guests at the beginning, and it took us some weeks to get to the root cause and bring things back to an acceptable level of stability.
This is a very good solution, and I work for a company with a global ELA with VMware. But as our contract and our entire commercial team are in the US, we have a limited number of vouchers for training and events in Brazil.
The technical support is good; they always provide good suggestions and technical recommendations, and the response time for critical problems has always been fair.
I have never tried a different server or datacenter virtualization solution before.
I was not a part of the initial set-up or project team. I work on the operations team and I have only been a part of migration team when moving from old versions to newer ones, but I have never had problems.
You should have good capacity management, and you might want at least two clusters to separate your guests by tiers, or a good understanding of resource pools, to keep resource utilization in good shape.
You should understand what is your demand and plan your capacity and resource allocation carefully to avoid double work in the near future.
With vSphere, we were able to consolidate just about every workload, server or desktop, which in turn allowed us to save a lot on hardware, power, and space. Also, of course, deploying new desktops and servers in minutes is a definite time saver.
Some modifications still require the CLI, directly on the host, such as SSL certificate management and reclaiming storage space on thin-provisioned disks (depending on the storage devices). It would save a lot of time if those had a simple GUI in vCenter.
I've used vSphere for more than three years in general, and a few months for version 6.0.
No issues with deployment so far.
A few months back, we had random crashes of PCoIP sessions on virtual desktops with more than one monitor. But it turned out to be a problem with vGPU drivers provided by NVIDIA. So with vSphere itself, we've had no stability issues.
The vCenter makes scalability pretty easy.
VMware’s customer service is very helpful when you need to find the right product for the right environment.
Technical Support: We had to call VMware once so far and they really followed through. They diagnosed a problem related to a third-party driver (NVIDIA) and obtained a patched version of the driver from the manufacturer for us. They were very efficient!
In my previous company, we used oVirt, the free-of-charge version of Red Hat Enterprise Virtualization, which turned out to be way more expensive than a solution like VMware in terms of both human and hardware resources.
The initial setup was very easy, very straightforward. The only downside of the process was replacing the auto-generated self-signed SSL certificate with an enterprise-CA-signed one, which had to be done manually via the CLI.
We implemented it ourselves.
Even though the initial cost of vSphere seems a bit high, it is really going to pay off by freeing time for teams and lowering your hardware costs. Regarding licensing, if you have any doubt, just ask VMware’s customer service to help you. Some editions and kits might already include all you need.
We evaluated Microsoft Hyper-V, but it seemed unfinished. Management tools are almost non-existent and hosts constantly need to be rebooted to install patches that are purely Windows related and have nothing to do with the virtualization itself.
For small infrastructures, start with the free vSphere Hypervisor. For small businesses, VMware vSphere Essentials Kits are inexpensive but limited to three hosts. So be sure you are not going to grow more than this for a while if you are considering this option. For medium-sized businesses and corporations, go for it. It will greatly reduce your operating costs.
As Chris and Karthik have mentioned, go step by step. Do you have enough hosts to handle your VMs while one host is updating? Also, you have to update the firmware on each of the hosts. I did a small environment (5 hosts, 140 VMs) and used the Dell Enterprise iDRAC to get into the UEFI boot of my newer hosts to update firmware remotely. Older hosts are a bit more difficult, but possible (such as by burning DVDs or USB sticks and using the iDRAC or iLO to boot for firmware updates).
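The "enough hosts" question above reduces to a quick arithmetic check: with one host in maintenance mode, can the survivors absorb its load? This hypothetical Python sketch assumes the evacuated load spreads evenly across the remaining hosts and that you want to stay under a chosen utilization ceiling; the load figures are made up.

```python
# Quick pre-update check: express each host's load as a fraction of its
# capacity, then see whether the cluster tolerates losing one host.

def can_evacuate_one(host_loads, ceiling=0.9):
    """True if, with one host evacuated, the average load on the
    surviving hosts stays at or under `ceiling`."""
    survivors = len(host_loads) - 1
    if survivors < 1:
        return False  # nowhere to evacuate to
    # Total cluster load redistributed over the surviving hosts.
    return sum(host_loads) / survivors <= ceiling

# Five hosts around 60% load: one can enter maintenance mode safely.
print(can_evacuate_one([0.6, 0.55, 0.65, 0.6, 0.58]))  # True
# Three hosts near 90% load: evacuating any one would overload the rest.
print(can_evacuate_one([0.85, 0.9, 0.88]))             # False
```

DRS admission control does a far more precise version of this per-VM; the sketch is only the back-of-the-envelope check to run before scheduling a rolling update.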
One of the things about VMware is that it runs really well and the hosts generally go unrestarted for quite a while, with the result that firmware for NICs, RAID and BIOS has typically been updated at least once in the meantime... and the newer VMware versions are tied to having the latest firmware.
VMware has announced version 6 (V6) of its software defined data center (SDDC) server virtualization hypervisor, vSphere, aka ESXi, along with companion software defined management and convergence tools.
As a refresher for those whose world does not revolve around VMware, vSphere and software defined data centers (believe it or not, there are some who exist ;), ESXi is the hypervisor that virtualizes the underlying physical machines (PMs) known as hosts.
The path to software defined data center convergence
Guest operating systems (or other hypervisors, via nesting) run as virtual machines (VMs) on top of the vSphere hypervisor host (e.g., the ESXi software). Various VMware (or third-party) management tools are used for managing the virtualized data center: initial setup and configuration, conversion from physical to virtual (P2V) or virtual to virtual (V2V), along with data protection, performance and capacity planning across servers, storage and networks.
VMware vSphere is flexible and can adapt to different sized environments, from small office home office (SOHO) or small SMB, to large SMB, SME, enterprise or cloud service provider. There is a free version of ESXi, along with paid versions that include support and added management tool features. Besides the ESXi vSphere hypervisor, other commonly deployed modules include vCenter administration along with the Platform Services Controller, among others. In addition, there are optional solution bundles to add support for virtual networking, cloud (public and private), data protection (backup/restore, replication, HA, BC, DR) and big data, among other capabilities.
VMware has streamlined the installation, configuration and deployment of vSphere along with its associated tools, which for smaller environments simply makes things easier. For larger environments, having to do less means being able to do more in the same amount of time, which results in cost savings. In addition to being easier to use, deploy and configure, VMware has extended the scaling capabilities of vSphere in terms of scaling out (larger clusters), scaling up (more and larger servers), as well as scaling down (smaller environments and ease of use).
A quick synopsis of the VMware vVOLs overview:
How data storage is accessed and managed via VMware today (read more here)
vVOLs are not LUNs like regular block storage (e.g., DAS or SAN) accessed via SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise, vVOLs are not accessed using any of the various object storage access methods (e.g., AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you, this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; on the other hand, some of you might be having a deja vu moment right about now.
A vVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them); likewise, it is not a NAS volume as you know it (or have heard of); nor is it an object in the context of what you might have seen or heard of, such as S3 among others.
Keep in mind that a VMware virtual machine is made up of its VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator on which all file systems and object repositories are built.
How VMware data storage is accessed and managed with vVOLs (read more here)
Here is the thing: vVOLs will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g., the data path) under vCenter management.
What the storage system presents back to ESXi will differ from normal SCSI LUN contents and will only be understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system, however, will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions: by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live.
Keep in mind that the storage system will still function as it normally would, just think of vVOL as another or new personality and access mechanism used for VMware to communicate and manage storage. Watch for vVOL storage provider support from the who’s who of existing and startup storage system providers including Cisco, Dell, EMC, Fujitsu, HDS, HP, IBM, NetApp, Nimble and many others. Read more about Storage I/O fundamentals here and vVOLs here and here.
Depending on your experiences, you might use revolutionary to describe some of the VMware vSphere V6 features and functionalities. On the other hand, if you have some déjà vu moments looking pragmatically at what VMware is delivering with V6 of vSphere, executing on their vision, evolutionary might be more applicable. I will leave it up to you to decide if you are having a déjà vu moment and what that might pertain to, or if this is all new and revolutionary, or something more along the lines of technolutionary.
VMware continues to execute on delivering the Virtual Data Center, aka Software Defined Data Center, paradigm by increasing functionality as well as enhancing existing capabilities with performance and resiliency improvements. These abilities enable the aggregation of compute, storage, networking, management and policies into a global virtual data center while supporting existing as well as new and emerging applications.
If you were not part of the beta to gain early hands-on experience with VMware vSphere V6 and associated technologies, download a copy to check it out as part of making your upgrade or migration plans.
Check out the various VMware resources, including community links, here
VMware vSphere Hypervisor getting started and general vSphere information (including download)
VMware vSphere data sheet, compatibility guide along with speeds and feeds (size and other limits)
VMware Blogs and VMware vExpert page
Various fellow VMware vExpert blogs, including vsphere-land, scott lowe, virtuallyghetto and yellow-bricks among many others, found at the vpad here.
StorageIO Out and About Update – VMworld 2014 (with Video)
VMware vVOL’s and storage I/O fundamentals (Storage I/O overview and vVOL, details Part I and Part II)
How many IOPs can a HDD or SSD do in a VMware environment (Part I and Part II)
VMware VSAN overview and primer, DIY converged software defined storage on a budget.
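On the "how many IOPs can a HDD do" question linked above, a rough back-of-envelope estimate is straightforward (the numbers below are assumed for illustration, not vendor specs): a single drive's random IOPS is bounded by its average seek time plus average rotational latency.

```python
# Back-of-envelope random IOPS estimate for a single HDD.
# All figures are illustrative assumptions, not measured values.
avg_seek_ms = 4.0                    # assumed average seek for a 15K drive
rpm = 15000
rotational_ms = (60000 / rpm) / 2    # half a revolution on average = 2 ms
service_ms = avg_seek_ms + rotational_ms
iops = 1000 / service_ms             # one I/O per service interval
print(round(iops))                   # -> 167 random IOPS per spindle, roughly
```

This is why spindle count (or SSD) matters so much in a consolidated VMware environment: many VMs multiplexed onto a few drives share that per-spindle budget.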
Overall VMware vSphere V6 has a great set of features that support both ease of management for small environments as well as the scaling needs of larger organizations.
Virtually anything. It doesn't matter if you're trying to balance and diversify an application that can't be done or won't be done otherwise, challenging the vendor in that regard, or you're looking to scale. Virtualization is almost the only way to scale both vertically and horizontally, because applications are often bound by linear growth where you need to throw more at them in order to increase capacity. Part of that is being able to ask for more resources on the fly: a lot of hot plug, a lot of hot add of memory, being very flexible within an environment in a way that traditional architecture from the past can't match. You can't take a hard drive or a motherboard out of a computer and put it in another one.
vCenter and VMware's products allow us to look at and focus on things we usually didn't have time for, because before, we were architecting solutions based on hardware. VMware is hardware agnostic, so it's only a question of how fast you want to go.
Being able to itemize by using vApps to set startup priorities, so that if you have dependent NFS and database mounts, applications won't come up before them. If you're a one-man shop, it lets you turn things on in a way where most people would have to sit there, wait for one machine to come up and then the next, watching the console. Peace of mind, that's what we really use VMware for.
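The startup-priority idea described above is essentially a dependency ordering. A minimal sketch (hypothetical service names; not the vSphere API, which expresses this through vApp start order settings) using a topological sort:

```python
# Sketch of vApp-style startup ordering: bring dependencies (NFS mount,
# database) up before the applications that need them.
# Service names are made up for illustration.
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Each key lists the services it depends on.
deps = {
    "app-server": {"database", "nfs-mount"},
    "database": {"nfs-mount"},
    "nfs-mount": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # -> ['nfs-mount', 'database', 'app-server']
```

Power the VMs on in `order` and nothing comes up before the mounts and databases it depends on, which is exactly the "no console babysitting" benefit described above.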
I would like to see much more application intelligence. Right now, storage vendors are doing that for us with products like XtremIO and storage IO cards inside something that has some application intelligence, to make MySQL and SQL work with storage that you can just buy. But it would be nice if VMware could characterize database platforms based on use cases; MySQL and SQL are very different. Being able to tell the difference between the two and say, "Hey look, this will work here, but it won't work there." That would be nice.
It's challenging using MySQL with vCenter because Linux as a whole is a latency-sensitive OS, so you're only as good as your slowest moving part. It doesn't matter if it's disk, memory or processor, and sometimes it's the shortest path to storage. In order to make MySQL work you need microsecond processing, and in some cases, when you have monolithic-sized databases, you need to be able to scale that at the same time.
So, unfortunately, with the way MySQL plays with storage and the way VMware is right now, that's where I went with the application intelligence. It's not taboo, if you will, but it doesn't really work well. You're not going to find a lot of use cases because, unfortunately, our business falls into a different sector, if you will, by running Linux as a primary OS.
So, better support for newer Linux kernels would always be great. The fact that they've released open-vm-tools and made it the supported platform for just about every Linux distribution out there shows that they're solving problems like the VMXNET3 adapter: if the driver's not there, the machine's not online. There have been some pitfalls, but VMware, as a company that supplies an application and an OS, has been able to solve a lot of those.
They are listening to the customer. It's very difficult to say what's still left because after today you never really know, that could change.
Currently we use vCenter Operations Manager, vCOPS if you will, and that drives our storage analytics based on what's performing, where our bottlenecks are, and how to quickly identify why something is slow. Is it memory? Is it compute? Is it solid-state disk? Where is the imbalance that's keeping your application from performing? We also use the vSphere Replication Appliance, along with vCenter Orchestrator, with a set-it-up-once mentality. The machine is created at primary site A, and then Orchestrator actually goes through the series of steps of doing the replication, setting it up, and getting that VM set up on the other side. A cheap and easy way of doing it for free.
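The triage described above ("is it memory, compute, or disk?") can be sketched as a simple classification over a few well-known vSphere counters. The thresholds below are made-up illustrations; real vCOPS analytics use dynamic baselines rather than fixed cutoffs.

```python
# Rough sketch of bottleneck triage over vSphere-style metrics.
# Thresholds are illustrative assumptions, not vCOPS's actual logic.
def find_bottleneck(metrics):
    if metrics["cpu_ready_pct"] > 5:
        return "compute"   # VM spends too long waiting for a physical CPU
    if metrics["mem_balloon_mb"] > 0:
        return "memory"    # ballooning means the host is under memory pressure
    if metrics["disk_latency_ms"] > 20:
        return "storage"   # sustained high datastore latency
    return "none"

sample = {"cpu_ready_pct": 1.2, "mem_balloon_mb": 0, "disk_latency_ms": 35}
print(find_bottleneck(sample))  # -> storage
```

In practice the value of a tool like vCOPS is doing this continuously across every VM and host, so the slow-moving part surfaces before users report it.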
Incredibly stable, to the point where there are times we may not know whether we're running at half capacity or full capacity because of a failover that happened on the back end. That says enough not only for architecting and choosing the right hardware vendors, it also shows that you can actually be failed over on your appliance and business still runs as normal. Things keep working. That constant, non-disruptive change, if you will.
In terms of evaluating technical support from VMware, you get what you pay for. You have 24/7 support, which lets you leverage call centers in Ireland and other locations, and they surprise me every time. There's always something I learn from every support case I've ever opened, even if it's just to kick the tires and make sure we're doing things right.
I was lucky enough to come into a virtualization shop. They pretty much didn't want to do the physical server aspect anymore because, again, it doesn't scale. Walking into a virtualized shop is very easy; winning that battle yourself can be very difficult. I've been on the other side a handful of times. It's really just showing the value, and where VMware can fix the problem. You've got to be very specific about what problem you're fixing. Is it latency? Is it processing power? Is it being able to provide DR? Is it being able to move your workloads to the cloud or to a different data center?
It's amazing how you only need DR a couple of months out of the year; you don't need it 12 months out of the year. Moving from a standard virtualization shop with everything on-prem to leveraging the cloud, that's the next step. When you ask me how I would introduce VMware, I think about introducing it now as a cloud-based service provider, not as an on-prem "hey, let's scale this very easily" pitch.
They've been able to push out little things like the management agent, which lets you work through vCenter, connect to all your hosts, and make automation very, very easy. On top of that, they give you the vCenter Appliance, so you're no longer tied to a SQL license. You don't have to worry about using SQL Express and running out of space, or running out of license space and then re-licensing it. They've also solved the upgrade path. Every time a new point release of vCenter comes out, I don't know how many times the Windows version blows up. It's good to see a company able to say, okay, you know what? Let's take a step back, use a very similar OS, and let you run vCenter just like ESXi, on the same platform.
Anything that solves a problem. Find out what your biggest problem is and see how VMware can help you solve it. There are more principal architects out there, especially with everything being added to the platform, who specialize in specific things. VMware has the capacity and the capability to help you solve that problem. Get the vendor involved; maybe not necessarily a service provider, but have VMware actually evaluate it. They're going to tell you what you're doing wrong.
We operate within the 10% of the market who don't use Windows. You've got to find somebody out there, and one of the biggest problems you'll find is that you won't find MySQL documentation in terms of what people are using and how they're using it. There's not a lot of information that people in the private sector, or even the public sector, are willing to share. They're still trying to figure it out themselves. Finding out who's successful is pretty much finding who's willing to write a review. That's something I'd like to contribute: putting what we're doing out there, letting other people who come to you guys and ask, "Who else is doing this?" know. We can't be the first people.
Another great tip: if you use Nimble Storage, install the Nimble Connection Manager software on your hosts for path management. It works very well with these devices for connectivity.