The main thing I like about BigQuery is the storage. We migrated trillions of records from on-premises to BigQuery. On-premises, we usually have to deal with insufficient storage, but with BigQuery we don't, because it is cloud storage and can hold any number of records. That is one advantage.
The next major advantage is the column length. On-premises, we have limits on column length, such as 10,000, and we have to design around that. With BigQuery, we don't need to define the column length at all; it expands or shrinks based on the records it receives.
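As a rough illustration of what that means in practice, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, table, and field names are hypothetical, not the ones from our migration. BigQuery's STRING type carries no declared width, so there is nothing like a 10,000-character limit to design for.

```python
from google.cloud import bigquery

client = bigquery.Client()

# No column width is declared anywhere; STRING fields simply hold
# whatever length of data arrives. (Names below are hypothetical.)
schema = [
    bigquery.SchemaField("customer_id", "INT64", mode="REQUIRED"),
    bigquery.SchemaField("customer_name", "STRING"),  # no VARCHAR(10000)-style limit
    bigquery.SchemaField("notes", "STRING"),          # grows or shrinks with the data
]

table = bigquery.Table("my-project.my_dataset.customer_dim", schema=schema)
table = client.create_table(table)
```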
I can give you a real-life example from our migration from on-premises to GCP. There was a dimension table with a considerable number of records, and when we queried it on-premises, in Apache Spark or Teradata, it took around half an hour to get the results. In BigQuery, it felt almost instant; we could get them in two or three minutes. That was very helpful for our engineers.
On-premises, we usually have to run a query and then go for a break while waiting for the results. That's not the case with BigQuery, because it returns results almost as soon as we run the query. That makes the work faster and saves a lot of time.
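For context, running such a query from the Python client is just a few lines; this is a sketch against the hypothetical customer_dim table above, not our actual query.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dimension-table query; BigQuery returns the rows without
# any capacity planning or manual tuning on our side.
sql = """
    SELECT customer_id, customer_name
    FROM `my-project.my_dataset.customer_dim`
"""

rows = client.query(sql).result()  # blocks until the query job finishes
for row in rows:
    print(row.customer_id, row.customer_name)
```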
It also performs well and has smart tuning. For example, if we need to perform some joins, BigQuery's smart tuning will tune the query itself and show us the best way it can be executed in the backend.
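One way to see what the backend is doing is to dry-run a query for a cost estimate and then inspect the execution plan of the finished job. The sketch below assumes the google-cloud-bigquery client and hypothetical orders and customer_dim tables; it is illustrative, not our production code.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT o.order_id, c.customer_name
    FROM `my-project.my_dataset.orders` AS o
    JOIN `my-project.my_dataset.customer_dim` AS c
      ON o.customer_id = c.customer_id
"""

# A dry run estimates the bytes scanned before the join actually executes.
dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
print(f"Estimated bytes processed: {dry.total_bytes_processed}")

# After a real run, the query plan shows how BigQuery handled each stage.
job = client.query(sql)
job.result()
for stage in job.query_plan:
    print(stage.name, stage.status)
```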
To be frank, the performance, reliability, and everything else have improved, even the downtime. On-premise servers usually have some downtime, but because BigQuery is multiregional, our data is stored in three different locations, so downtime is not really an issue.
For example, if the Atlantic Ocean location has some downtime, or the server there is down, we can use the data stored in Africa or somewhere else. We have three or four storage locations, and that's the main advantage.
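From the user's side, all of this comes down to the location chosen when the dataset is created; BigQuery then replicates the stored data within that location automatically. A minimal sketch, with a hypothetical project and dataset name:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Creating a dataset in a multi-region location; BigQuery takes care of
# replicating the data across data centers within that location.
dataset = bigquery.Dataset("my-project.analytics_dw")
dataset.location = "EU"  # multi-region; "US" is the other multi-region option
dataset = client.create_dataset(dataset, exists_ok=True)
print(dataset.location)
```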