S3 is not a file system. It is an object store: it holds files for retrieval but does not store data in a transactional manner. I've written an earlier article on creating an S3 bucket, so in this article I'll focus on S3 characteristics and properties.
AWS S3 security:
S3 provides multiple security options/permissions, and if more than one permission type is selected, the option with the least privilege is applied to the bucket. To view the available permission options, select the S3 bucket and click on Permissions. You should see an output similar to the one shown below:
The first section we see is that of ACLs. They are the legacy permission option and provide functionality similar to that of NTFS permissions.
At the moment only my user sahil_eng008 is listed. We can add more users by clicking on Add users and selecting the grants we'd like to give them.
Next we have bucket policies. They offer a more granular set of permissions. Click on Bucket Policy.
This opens up a bucket policy editor and we can write our policy here.
But if we are not proficient in writing a policy in JSON in the editor, we have the option of using the policy generator. So let's click that.
This opens up the policy generator and we can set our required selections here for the policy.
The policy generator is available for services other than S3 as well, so we select the type of policy as S3 Bucket Policy. The principal is the ARN of the user to whom we'd like to apply the policy. The service will be Amazon S3, and the actions will be the permissions we wish to grant to the user. The ARN will be the ARN of the S3 bucket we are applying the policy to. Now click on Add Statement.
We get the below output telling us that our statement has been added and is being displayed.
Next we click on Generate Policy to generate the policy as a JSON document.
We need to copy this JSON document and paste it into our policy editor window.
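For reference, here's a rough sketch of what the generated policy could look like and how it might be applied outside the console with boto3. The bucket name my-demo-bucket, the account ID and the granted action are placeholders I've made up for illustration:

import json
import boto3

# Sketch only: bucket name, account ID and user are hypothetical placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUserGetObject",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/ssuri007"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-demo-bucket/*"
    }]
}

s3 = boto3.client("s3")
# Attach the policy to the bucket (equivalent to clicking Save in the editor).
s3.put_bucket_policy(Bucket="my-demo-bucket", Policy=json.dumps(policy))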
Now if I try to save the policy, it will error out because the user ssuri007 does not exist in my account.
We can also use IAM policies to restrict or grant access to S3 buckets; these too allow a granular security configuration.
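As a hedged illustration of the difference, an IAM policy is attached to the user rather than to the bucket. The user name, policy name and bucket below are hypothetical:

import json
import boto3

# Sketch: an identity policy attached to an IAM user instead of the bucket.
# User name, policy name and bucket are made-up placeholders.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="ssuri007",
    PolicyName="s3-read-demo-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-demo-bucket",
                         "arn:aws:s3:::my-demo-bucket/*"]
        }]
    })
)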
Finally, we can also use presigned URLs (Query String Authentication), wherein we provide users with a link to access S3 objects, but the link remains valid only for a limited period of time.
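With boto3, for example, a time-limited link can be generated along these lines (the bucket name, object key and expiry are placeholder values):

import boto3

s3 = boto3.client("s3")
# Generate a link to demo.txt that expires after one hour (3600 seconds).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-demo-bucket", "Key": "demo.txt"},
    ExpiresIn=3600,
)
print(url)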
Storage Class:
If we select an object, its properties pop up and we notice a section called Storage class.
The durability and reliability options provided by S3 are categorized by storage classes, with the Standard storage class offering the highest durability and reliability and the Reduced Redundancy Storage (RRS) class offering the lowest. The default storage class of an S3 bucket is Standard and cannot be modified, but we can modify the storage classes of the individual objects within the bucket.
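If we'd rather script the change than use the console, a minimal boto3 sketch could look like this; it rewrites the object over itself with a new storage class, and the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")
# Change an object's storage class by copying it over itself with a new class.
s3.copy_object(
    Bucket="my-demo-bucket",
    Key="demo.txt",
    CopySource={"Bucket": "my-demo-bucket", "Key": "demo.txt"},
    StorageClass="REDUCED_REDUNDANCY",
)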
Event notifications:
We can configure event notifications to perform an action when a particular event is triggered against the bucket. To configure this, we need to select the bucket, go to its properties and click on Events.
Once we click on Events, the below menu opens up. Here we can name the event, select the type of event, add prefix/suffix identifiers for the objects for which the event should be triggered, and finally select the AWS service to which the event notification should be sent.
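The same configuration can be sketched in boto3; the SNS topic ARN and the .jpg suffix filter below are assumptions made up for illustration:

import boto3

s3 = boto3.client("s3")
# Send a notification to an SNS topic whenever a .jpg object is created.
# The topic ARN and suffix filter are hypothetical placeholders.
s3.put_bucket_notification_configuration(
    Bucket="my-demo-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:my-demo-topic",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "suffix", "Value": ".jpg"}
            ]}},
        }]
    },
)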
Logging:
We can enable audit logs to view a record of actions performed on the bucket and its objects. To enable logging, select the bucket, go to its properties, click on Logging, check the box to enable logging, and click Save.
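A minimal boto3 sketch of the same setting, assuming a separate my-demo-log-bucket already exists to receive the logs:

import boto3

s3 = boto3.client("s3")
# Deliver access logs for my-demo-bucket to a separate logging bucket.
s3.put_bucket_logging(
    Bucket="my-demo-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-demo-log-bucket",
            "TargetPrefix": "logs/",
        }
    },
)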
Versioning:
This allows us to preserve and retrieve previous versions of our objects. To enable versioning, select the bucket, go to its properties, click on Versioning, check the box to enable versioning, and click Save.
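Programmatically, enabling versioning is a one-call sketch in boto3 (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
# Turn on versioning; S3 will keep every version of each object from now on.
s3.put_bucket_versioning(
    Bucket="my-demo-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)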
Versioning is also a prerequisite for enabling cross region replication, which is next.
Cross region replication:
Data stored in an S3 bucket in the Standard storage class is replicated across multiple devices within a facility and across multiple facilities within a region, but the replication stays within that same region. If for some reason we need to replicate the bucket or an object within it to a different region, we can do so with cross region replication, provided we have versioning enabled as mentioned earlier. To enable it, select the bucket, go to its properties, click on Cross-region replication, check the box to enable it, add the bucket/object and destination region information as required, and click Save.
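A rough boto3 sketch of such a replication rule is shown below; the IAM role that S3 assumes for replication and the destination bucket are hypothetical, and both buckets must already have versioning enabled:

import boto3

s3 = boto3.client("s3")
# Replicate everything in my-demo-bucket to a bucket in another region.
# The role and destination bucket ARNs are made-up placeholders.
s3.put_bucket_replication(
    Bucket="my-demo-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::my-demo-bucket-replica"},
        }],
    },
)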
Notice the pop-up in the lower left corner of the screen saying "Requester Pays". Data transfer out of S3 is chargeable, so if a vendor/client with their own AWS account has asked us to transfer the data contained in the S3 bucket, we can tell AWS to bill them for the data transfer cost rather than us.
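If we want to script this setting, a minimal boto3 sketch could look like this (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
# Make the requester, not the bucket owner, pay for downloads and transfer.
s3.put_bucket_request_payment(
    Bucket="my-demo-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)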
Life cycle management:
Typically, data residing in S3 is not meant for long term storage. Life cycle management allows us to apply very granular rules to automate the transfer of individual objects, the entire bucket, or older versions of objects to a cheaper, lower tier of storage such as Glacier after a set amount of time. We can also add rules to archive or expire older (noncurrent) versions of objects after a set time, and rules to delete objects from S3, or archived data from Glacier, entirely after a set period. To enable and add rules for life cycle management, select the bucket, go to the Management tab and click on Add lifecycle rule under Lifecycle.
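For illustration, a hypothetical lifecycle rule in boto3 might look like the sketch below: transition objects to Glacier after 30 days, expire noncurrent versions after 90 days, and delete current objects after a year. All the numbers and names are made-up examples:

import boto3

s3 = boto3.client("s3")
# Hypothetical rule: Glacier after 30 days, drop old versions after 90 days,
# delete current objects after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
            "Expiration": {"Days": 365},
        }]
    },
)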
This was the last property I intended to discuss in this article. I hope the article was informative, and I thank you for reading it.