We host applications on Amazon Web Services (AWS). We set up EC2 for running various applications and use S3 for data storage. For databases, we use MySQL and Microsoft API instead of AWS databases.
Information Security Analyst at a financial services firm with 201-500 employees
Real User
Top 10
Nov 22, 2024
I have worked with AWS Cloud since 2021. I have experience with AWS IAM for user management and love using EC2. I heavily use S3 and VPC. Additionally, I'm focused on cloud security products like security groups, policies, permissions, Security Hub, and Inspector.
I use Amazon S3 for storing objects, like files and documents. It is particularly valuable for personal projects, where I keep some of my files.
My primary use case for Amazon S3 includes using it for backups and deploying a website on an S3 bucket. Additionally, for my organization, it is used to deploy websites and leverage its architecture for deploying applications.
Storage Administrator at an insurance company with 501-1,000 employees
Real User
Top 5
Aug 16, 2024
One of the use cases is to store Veeam backups. S3 itself is more complex than Veeam, which is simpler. S3 is essentially object storage, a place to store data that you can set up however you want. We usually connect to S3 using the API. It requires an access key and a secret key, which we input into the application we're connecting from. Then, that application can see the S3 bucket as if it were local, and we can copy stuff into it.
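A minimal sketch of the access-key/secret-key connection described above, using the boto3 Python SDK; the bucket name, file name, and credentials here are illustrative placeholders, not the reviewer's.

    import boto3

    # The access key / secret key pair mentioned in the review.
    # In practice these usually come from environment variables or an IAM role.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...EXAMPLE",
        aws_secret_access_key="EXAMPLE_SECRET_KEY",
    )

    # "Copy stuff into it": upload a local backup file into the bucket.
    s3.upload_file("backup-2024-08-16.vbk", "example-backup-bucket",
                   "veeam/backup-2024-08-16.vbk")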
We use Amazon S3 primarily as a repository for storing objects such as images and files. It serves as a storage solution where we can efficiently save and manage the data we collect.
We have been using Amazon S3 to store some execution results. I am from a testing background. We use the solution to store execution results and whatever evidence we generate.
Cloud Engineer at a retailer with 10,001+ employees
Real User
Top 10
Apr 30, 2024
I have different use cases. One major one is hosting static content for our legacy front-end application. We deploy all the static files to an S3 bucket, which is configured for web hosting. Additionally, we have an Akamai layer on top of S3, so all requests first hit Akamai and are then routed to S3, according to our HTML routing, before reaching the backend service.
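For readers unfamiliar with S3 web hosting, this is a hedged sketch of how a bucket might be configured for static hosting and populated with front-end files using boto3; the bucket name and local "dist" folder are assumptions, and the Akamai layer the reviewer mentions sits outside this snippet.

    import boto3, os, mimetypes

    s3 = boto3.client("s3")
    bucket = "example-legacy-frontend"   # placeholder bucket name

    # Enable static website hosting on the bucket.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Deploy the static files (HTML/JS/CSS) into the bucket.
    for root, _, files in os.walk("dist"):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, "dist")
            content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
            s3.upload_file(path, bucket, key, ExtraArgs={"ContentType": content_type})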
We use S3 buckets to store logs from the application side. The logs stay on the server for seven days. After that, we transfer them to the S3 bucket, depending on the application, and the logs help us troubleshoot problems correctly. We can also host web apps in a bucket. If you have a domain name from Route 53, you can create a bucket over there and serve whatever is in the web application's folder. About 40 people at my company use S3.
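A rough illustration of the log-archiving flow described above (keep logs on the server for seven days, then move them to S3), written with boto3; the log directory, bucket, and prefix are assumptions.

    import boto3, os, time

    s3 = boto3.client("s3")
    LOG_DIR = "/var/log/myapp"       # assumed log location
    BUCKET = "example-app-logs"      # assumed bucket
    SEVEN_DAYS = 7 * 24 * 3600

    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        # Only move files that have been on the server longer than seven days.
        if os.path.isfile(path) and time.time() - os.path.getmtime(path) > SEVEN_DAYS:
            s3.upload_file(path, BUCKET, f"myapp/{name}")
            os.remove(path)  # free local space once the log is archived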
Sr full stack java developer at JPMorgan Chase & Co.
Real User
Top 10
Feb 17, 2023
We use Amazon S3 for data storage purposes. We use the stored data for restoration or backup when there is a data leakage in websites and mobile applications. For enterprise-level applications, the stored data is also used for data analytics, backup, recovery, etc.
We use the solution to serve our static web pages to the backend servers through CloudFront when a transformation is required. When a request reaches CloudFront, we have to mask some information in the response by transforming it. The solution masks or transforms the request so the backend can understand it. For example, we have a proxy server in between, the NGINX engine, and when a request hits the NGINX engine, it has a different format. The NGINX engine converts and secures the information before it hits CloudFront. We use a Lambda function, which is a Python program, so the code body stays in the Amazon S3 bucket, and when a response is returned from the backend server, we can mask some of the information, keeping only what is required for the end response.

I also use Amazon S3 for batch processing. Two gigabytes is a lot of data, especially if we are adding or updating information. If we have a Java application, it can take a long time to process two gigabytes of data. I split the data into multiple 30 or 50 MB files to make it easier to process. I created a manifest file that specified which rows belonged to which files. Then I used the solution's batch processing with Lambda to create a pipeline that performs batch operations.

I also use the solution for lifecycle management. The information goes from Standard storage to Infrequent Access, and after 30 days it is moved to Glacier.
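The lifecycle management described above (Standard to Infrequent Access, then on to Glacier) can be expressed as a bucket lifecycle rule; this boto3 sketch uses a placeholder bucket and prefix, and the day thresholds are illustrative rather than the reviewer's actual schedule.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-data-bucket",              # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "standard-to-ia-to-glacier",
                    "Filter": {"Prefix": "archive/"},  # assumed prefix
                    "Status": "Enabled",
                    "Transitions": [
                        # Move from Standard to Standard-IA after 30 days...
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        # ...and on to Glacier after 90 days.
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )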
Senior Database Administrator at Overonix Technologies
Real User
Nov 21, 2022
Our company uses the solution for backups, data pumps, files, and logs. Our projects connect to the solution rather than individual users connecting to it.
Senior Software and Cloud Engineer at Velocis Technologies LLC
Real User
Jul 19, 2022
We are building custom applications for the cloud. We are service providers. In terms of use cases, a company that wants to deploy its applications, instead of building servers on its own premises, can simply buy space on the AWS cloud rather than buying servers and setting up networks and everything. That way, you don't have to buy an expensive server; the servers are already online, and you just deploy your application. That's the main use case we are handling now. For example, if you have an in-house ERP and want to move everything to the cloud, we migrate clients from on-premises to the cloud. If you have heavy tasks, there are computations that require heavy, powerful machines.
Architect - Database Administration at Mitra Innovation
Real User
Top 5
Jun 3, 2022
One of our clients required certain files to be processed and stored, and they wanted a separate service with its own integration to capture or pull those files. We used S3 as the storage location; it served as cloud storage between the two services and as the integration point between them.
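As a sketch of the two-service handoff described above, one service might drop files into a shared prefix and the other might poll and pull them; every name here is hypothetical.

    import boto3

    s3 = boto3.client("s3")
    BUCKET, PREFIX = "example-integration-bucket", "incoming/"  # hypothetical

    # Service A: push a processed file for the other service to pick up.
    s3.upload_file("report.csv", BUCKET, PREFIX + "report.csv")

    # Service B: list what is waiting and pull it down.
    listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for obj in listing.get("Contents", []):
        s3.download_file(BUCKET, obj["Key"], "/tmp/" + obj["Key"].split("/")[-1])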
Presales Consultant - Solution Architect at Hewlett Packard Enterprise
Real User
Apr 5, 2022
I have used Amazon for my personal projects. I was using it for my personal things and for test purposes. I didn't have a specific use case; I just put some data in and read it back again.
Most of our files and some of our services connect using the Amazon service. Our application foundation is on that, and we are writing our solutions right now. After that, we will begin to sell it to other customers.
System Administrator at a tech services company with 11-50 employees
Real User
Nov 7, 2021
I am managing the S3 storage solution from Amazon, and from Microsoft I'm managing the Azure administration. As part of that, I need to create the virtual machines, a point-to-site VPN, Azure AD, AD synchronization with Windows Server, a database server, a mail server, and a web server. I administer all of that. We use Amazon S3 for storing files and documents on the cloud. Whenever users need files, they are downloaded from there; that is fully automated. After a few days, it can delete them; I've set up the policy for the automated deletion. Amazon S3 is great. We are using it for storage solutions.
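The automated deletion policy mentioned here maps onto an S3 lifecycle expiration rule; a hedged boto3 sketch, with the bucket name, prefix, and retention period chosen arbitrarily since the review only says "after a few days".

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-user-downloads",            # placeholder bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-downloaded-files",
                    "Filter": {"Prefix": "downloads/"},  # assumed prefix
                    "Status": "Enabled",
                    # Delete objects a few days after they are created.
                    "Expiration": {"Days": 5},
                }
            ]
        },
    )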
Senior Software Engineer at a tech services company with 501-1,000 employees
Real User
Jun 27, 2021
We use this solution to store the customer's historical data. Additionally, we are creating JSON-format data while running our application, and we push this JSON data to Amazon S3.
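A small boto3 sketch of pushing application-generated JSON to S3, along the lines described; the bucket, key layout, and payload are placeholders.

    import boto3, json
    from datetime import datetime, timezone

    s3 = boto3.client("s3")

    record = {"customer_id": 123, "event": "purchase", "amount": 42.5}  # example payload

    # Serialize the in-memory data to JSON and push it to the bucket.
    s3.put_object(
        Bucket="example-historical-data",                         # placeholder
        Key=f"events/{datetime.now(timezone.utc):%Y/%m/%d}/record.json",
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )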
We use it for static website hosting, storing data, and secondary data for some of the websites. We can then easily sync the data whenever we need to. Some of our websites are two or four pages of static content. We can simply host them by creating a bucket, putting the content of the website there, and making it available to the public with the Make Public feature.
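A hedged sketch of the "create a bucket, put the content there, make it public" flow for a small static site, using boto3; the bucket name, region, and local file paths are illustrative, and on current accounts the bucket's Block Public Access settings must be relaxed before a public-read policy takes effect.

    import boto3, json

    s3 = boto3.client("s3", region_name="eu-west-1")
    bucket = "example-static-site"                   # placeholder

    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )

    # Allow public policies on this bucket (blocked by default on new accounts).
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False,
            "IgnorePublicAcls": False,
            "BlockPublicPolicy": False,
            "RestrictPublicBuckets": False,
        },
    )

    # Public-read bucket policy, the programmatic equivalent of "Make Public".
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

    # Upload the static pages.
    s3.upload_file("site/index.html", bucket, "index.html",
                   ExtraArgs={"ContentType": "text/html"})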
Manager, IT Infrastructure and Data Center at Asian Paints
Real User
Jan 7, 2020
We have deployed Amazon S3 in two places, in Ireland and Northern Ireland. These act as storage for all of our data. We have up to three servers there; one of these is an SQL server and one is a Windows server.
We haven't released the Amazon S3 platform in a production environment. It's more for prototyping and POC so far. Normally we use it for data dumps. It's something similar to a data lake, wherein we use it to obtain the data.
Technical Director at a healthcare company with 5,001-10,000 employees
Real User
Sep 8, 2019
The primary use case of this solution is the storage of large amounts of data. It's a public cloud deployment model that anyone can deploy. There is no need to upgrade because there are no versions; it's a managed service, and it's hybrid. There are changes that we make internally. It's Amazon S3.
I use the solution to store data.
We are using the solution to host our website and manage the database.
I use the tool for my product deployment.
I use the solution for the storage of images and documents.
We use Amazon S3 to store source files. We load the data we ingest from the source system into the solution, which is then consumed on Snowflake.
Amazon S3 is used as a storage service. It's designed for 99.999999999% (eleven nines) durability; hence, our data is highly protected against failures.
We use the solution to store BLOB data, analytical data, and large files.
We are using Amazon services to create the SSX and other services.
We use this solution for cloud-based data storage.
My use case for Amazon S3 is storage, particularly for unstructured data.
We are using Amazon S3 mostly for block storage.
We use the solution for data blocking and analysis. It is also used as a storage solution, for gaining and maintaining access.
We use Amazon S3 to store objects, like data, in my environment.
We primarily use the solution for file storage.
We are very flexible with our use cases and it fits most clients.
Our company uses the solution for deep CI backups. We have thousands of customers who use the solution and access our application.
Our company has 30 staff members who use the solution for data storage.
We have multiple use cases for Amazon S3. We use Amazon S3 for storing static data and as a static site load balancer.
We predominantly use this solution for our data storage, primarily making DB backups. I'm a lead data engineer.
We are using Amazon S3 for image and video files.
Amazon S3 can be used for central storage, backups, and logs.
Amazon S3 is our file storage solution. We use it for all of the backups of our storage and databases.
I am using Amazon S3 mainly for daily backups.
We primarily use the solution in order to share static files among web applications and mobile applications.
I work as an integrator of Amazon S3.
The primary use case of this solution is for storage.