Vice President at a computer software company with 11-50 employees
User
2020-07-02T15:16:42Z
Jul 2, 2020
No performance implications. It's just a provisioning strategy...
In thick provisioning, if I need 1GB, I provision 1GB, even if only 10MB is being used. In thin provisioning, I initially provision 10MB and, as the need for more storage grows, I grow the volume with it, up to the maximum of 1GB...
Most everyone uses thin provisioning unless there's a specific reason not to.
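If it helps to see the bookkeeping, here is a minimal Python sketch of that 1GB example (purely illustrative, not any vendor's API; the sizes are the ones from the answer above):

# Toy model: "provisioned" is what the host sees, "consumed" is what
# the array actually gives up from its free space.
def consumed_gb(provisioned_gb, written_gb, thick):
    if thick:
        return provisioned_gb                # reserved in full, used or not
    return min(written_gb, provisioned_gb)   # grows with the data, capped at the provisioned size

print(consumed_gb(1.0, 0.01, thick=True))    # thick: 1.0 GB taken from the array
print(consumed_gb(1.0, 0.01, thick=False))   # thin: 0.01 GB (10MB) taken from the array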
Thin provisioning is a technique to better utilise the available physical storage space. With thick provisioning, all blocks of the allocated storage space are occupied and made available to the host, regardless of how much space is actually used.
With thin provisioning, storage space from a pool is made available to the host. The host "sees" the configured capacity, but only the space that has actually been written to is consumed. This enables better allocation and higher utilisation of the storage, as unused space can be used elsewhere.
Thin provisioning, therefore, has nothing to do with deduplication or compression; it is merely an optimized distribution of the available resources.
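A rough sketch of that pool behaviour, with made-up numbers rather than any particular array's logic:

# A 1TB pool serving three thin volumes: hosts see the configured sizes,
# while the pool only loses the space that has actually been written.
pool_tb = 1.0
volumes = [  # (configured TB, actually written TB)
    (0.5, 0.10),
    (0.5, 0.20),
    (0.5, 0.05),
]
configured = sum(c for c, _ in volumes)   # 1.5 TB promised to the hosts
consumed = sum(w for _, w in volumes)     # 0.35 TB really used
print(f"hosts see {configured:.2f} TB, pool still has {pool_tb - consumed:.2f} TB free")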
What I would like to say here is that thin provisioning should not really be needed in all-flash arrays, because these boxes come with far better features for space efficiency; one of the most relevant is built-in compression.
Almost all storage manufacturers provide guaranteed compression for all workloads starting from 3:1, without impacting performance. This was not possible with non-flash boxes.
Now, for the particular question: thin provisioning is used to save some space "for now". I think it should not be required when you already have three times more usable space after applying the guaranteed compression ratio, and you can often achieve even more than that.
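To make that arithmetic explicit (assuming the 3:1 figure quoted above; real ratios depend on the data):

# Effective usable capacity with a guaranteed compression ratio.
raw_tb = 10.0                    # physical flash capacity
guaranteed_ratio = 3.0           # e.g. the 3:1 mentioned above
effective_tb = raw_tb * guaranteed_ratio
print(effective_tb)              # 30.0 TB of logical data fits on 10 TB of flash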
However, if thin provisioning still has to be used, the performance impact may be visible when the thin-provisioned volume is presented to a database (Oracle/SQL/etc.). For workloads other than databases, there will be no noticeable impact on performance.
Generally, thin provisioning can be recommended where the data is NOT growing/changing rapidly, such as OS vDisks in virtualization/VMs, static applications, read-intensive workloads, etc., and is not recommended for workloads such as databases, backup & recovery volumes, DRS, data replication, NMS, system log and event management, etc.
Sr System Engineer at a computer software company with 501-1,000 employees
User
2020-09-23T04:23:06Z
Sep 23, 2020
Let's take an example.
Suppose I have 100GB of storage in my array and a customer requests a 150GB LUN/volume. We can provision the 150GB using thin provisioning, but only up to 100GB or less using thick provisioning. Of course, we have to keep about 10% free space for I/O and system operations.
There is no concept of over-provisioning with thick provisioning, but with thin provisioning we can end up over-provisioned if we allocate more space than is actually available. In that case we have to monitor the storage carefully and keep an eye on utilization...
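A small sketch of the kind of utilization check you would keep running in that over-provisioned case (the threshold and the written amount are example values, not vendor defaults):

# Over-subscription check for the 100GB pool / 150GB thin volume example.
pool_gb = 100.0
reserve = 0.10                        # keep ~10% free for I/O and system operations
provisioned_gb = 150.0                # promised to the host via thin provisioning
written_gb = 75.0                     # assumed amount actually consumed so far

oversubscription = provisioned_gb / pool_gb             # 1.5x over-provisioned
utilization = written_gb / (pool_gb * (1 - reserve))    # measured against the usable 90GB

if utilization > 0.80:
    print(f"warning: pool {utilization:.0%} full while over-subscribed {oversubscription:.1f}x")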
Thick and thin provisioning are a service-related configuration choice. As a simple example, you should use the thick option when the virtual hard disk is created to hold database storage, so that the disk is already fully allocated for heavy writing and transactions are not affected by the disk having to expand. Thin provisioning should be used when you are aiming to hold file-server data such as images, where the delay of growing the virtual hard disk file will not noticeably impact the data writes.
For heavy data writing, it is more suitable to use thick provisioning.
For heavy data reading, there is no problem using thin provisioning.
Thick provisioning can pre-write the parity into the RAID pool as an initialization process, so write commands (after the initialization has finished) will be much faster than with thin provisioning. Thin provisioning, on the other hand, helps make better use of the usable capacity; for environments that don't require low latency and don't know how much capacity will be needed at the very beginning of the implementation, such as an email server or cloud space, it is very handy.
Thick and thin don't make that much difference with all-flash arrays nowadays, because the system recognises "free" space and won't consume it physically, thanks to zero-detection features and UNMAP. BUT: there are still systems out there which show a performance impact with thin provisioning, related to the first mapping of a free block, and this can be noticeable in terms of user experience. If you are unsure, you can always use eager-zeroed thick provisioning, which for many operating systems is the most compatible setting to use. :-)
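For what it's worth, here is a toy model of that first-write penalty; the microsecond figures are invented purely for illustration:

# Toy latency model of the "first write to an unmapped block" effect that
# some systems still show with thin (or lazy-zeroed thick) volumes.
def write_latency_us(block_already_mapped, base_us=100, first_map_penalty_us=400):
    # Eager-zeroed thick disks behave like "already mapped" for every block.
    return base_us if block_already_mapped else base_us + first_map_penalty_us

print(write_latency_us(True))    # steady state, or eager-zeroed thick: 100
print(write_latency_us(False))   # first touch of a thin block: 500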
Option 1: Thick provisioning is "provisioning the storage space (100GB) now".
Option 2: Thin provisioning is "provisioning the storage space (100GB) on demand".
For option 1, the 100GB of disk space is immediately subtracted from the storage/back-end disk.
If your data is 1GB now and will grow to at most 50GB over three years, you are putting unnecessary load on the back-end storage, as it treats the full 100GB as in use from day one. However, the 100GB is already formatted and provisioned.
For option 2, the 100GB is provisioned at the front end but not immediately subtracted from the storage/back-end disk. Instead, it is consumed and filled up as required.
If your data is 1GB now and will grow to at most 50GB over three years, the back-end storage only ever carries the 50GB that has actually been written after three years.
However, the 100GB is not formatted on the back-end/storage; it is only provisioned at the front end/host. So whenever a read/write request hits a new block/cell, the storage system must also format that new block/cell, which results in extra I/O to and from the storage system.
Option 2 is better for the case "your data is 1GB now and will grow to at most 50GB over three years".
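A quick sketch of that scenario; the 100GB, 1GB and 50GB figures come from the example above, while the in-between growth values are assumed:

# Back-end capacity consumed over three years: thick reserves the full
# 100GB on day one, thin only ever holds what has been written so far.
growth_gb = {0: 1, 1: 15, 2: 30, 3: 50}   # assumed growth from 1GB to 50GB
for year, written in growth_gb.items():
    thick_used = 100
    thin_used = written
    print(f"year {year}: thick consumes {thick_used} GB, thin consumes {thin_used} GB of back-end disk")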
There is no benefit to thin provisioning if you need all of the 100GB right now.
I don't recommend using thin provisioning for databases. Only use it where you are sure you will get additional back-end space later on.
For all-flash storage systems, use built-in compression and deduplication instead. Thin provisioning should be used only for less critical or non-critical applications.
Thin provisioning actually makes you lazy about correctly sizing your storage system.
With thick provisioning, the allocated space is reserved completely. The provisioned capacity is immediately subtracted from the total capacity of the storage and is therefore no longer available.
With thin provisioning, the configured size is displayed to the host, but only as much capacity as has actually been used is consumed on the storage.
This makes over-provisioning possible, and the capacity of the storage can be better utilized.
Applications require shared block storage to be provisioned. The provisioning is by capacity per LUN (logical unit number) or volume. Thick provisioning means all of the capacity allocated is owned and tied up by that application whether it's used or not. Unused capacity is not sharable by other applications. Thin provisioning essentially virtualizes the provisioning so the application thinks it has a certain amount of exclusive capacity when in reality, it's shared. This makes the capacity more flexible and reduces over provisioning.
IT Solutions Architect at nds Netzwerksysteme GmbH
Real User
2020-07-03T13:37:18Z
Jul 3, 2020
First things first: you have to think about what your target is. As the other colleagues mentioned, when you use thick provisioning, all of the capacity is consumed at once! When you start with a thin-provisioned system, the system allocates just the capacity that is needed.
Unfortunately, storage systems with few intelligence features react like dumb systems: you are the admin, so you have to know how much you need. When you use thick provisioning, there might be no space left for the system to move data out of used regions into empty ones, for example to pack matching blocks together so the system can gain performance from such housekeeping. And when there is no capacity left, what happens? Correct: the system slows down until it freezes!
Intelligent systems, on the contrary, don't give you that many choices. You can choose thick provisioning without having to think about how much capacity you need to leave free for the system's maintenance duties (moving data around, for example), because the system already covers this in one of two ways:
One: it lets you over-provision the capacity up to a calculated threshold.
Two: on the contrary, it only shows you the space you can actually use, so you don't have to think about the issue described above, because the system knows what it needs and how often.
OK, there is also something in between: there are really interesting systems which can combine both approaches.
As mentioned, it depends primarily on the system in use; you have to know what kind of approach your system takes to holding data.
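As a rough sketch of those two behaviours (the reserve percentage and the over-provisioning cap are arbitrary example values, not any vendor's actual policy):

# Two ways an "intelligent" array can protect its own housekeeping space.
raw_tb = 100.0
housekeeping_reserve = 0.15        # assumed slice the system keeps for itself
max_oversubscription = 3.0         # assumed cap on thin over-provisioning

usable_tb = raw_tb * (1 - housekeeping_reserve)

# Possibility one: allow over-provisioning, but only up to a calculated threshold.
max_provisionable_tb = usable_tb * max_oversubscription

# Possibility two: simply show the admin only the net usable capacity.
print(f"shown to the admin: {usable_tb:.1f} TB usable, thin provisioning capped at {max_provisionable_tb:.1f} TB")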
The type of provisioning has nothing to do with the performance of the storage.