How to Mount and Use Amazon S3 as a File System with S3 Files


Introduction

Amazon S3 Files makes your S3 buckets accessible as high-performance file systems, so you no longer have to choose between object storage economics and file system interactivity. This guide walks you through mounting an S3 bucket on AWS compute resources (EC2 instances, ECS/EKS containers, or Lambda functions) using the S3 Files feature. By the end, you'll have a working file system that automatically syncs changes to S3, supports NFS v4.1+ operations, and offers intelligent pre-fetching for optimal performance.


What You Need

  • AWS Account with permissions to create and manage S3 buckets, EC2 instances, or container clusters.
  • Basic familiarity with the AWS Management Console or AWS CLI.
  • An existing S3 general purpose bucket (or create a new one).
  • Compute resource (EC2 instance, ECS task, EKS pod, or Lambda function) running a Linux operating system with NFS client support.
  • Network connectivity between the compute resource and the S3 bucket (usually within the same region).
  • IAM role for your compute resource with the minimum required permissions: s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject (adjust for your use case).

Step-by-Step Guide

Step 1: Enable S3 Files on Your Bucket

Before mounting, you must enable the S3 Files filesystem on the target S3 bucket. This is a one-time configuration.

  1. Open the S3 Console and select your bucket.
  2. Go to the Properties tab and locate the S3 Files section.
  3. Click Enable S3 Files. Confirm any prompts.
  4. (Optional) Configure high-performance storage settings: choose whether to load full file data or metadata only. This affects caching behavior.

Tip: For most workloads, leaving the default settings is fine. Tune later based on access patterns (see Tips).

Step 2: Attach an IAM Role to Your Compute Resource

Your compute resource needs permissions to access the S3 bucket via the S3 Files interface.

  1. Create or update an IAM role with a policy that grants at least s3:ListBucket and s3:GetObject on the bucket and its objects.
  2. Attach this role to your EC2 instance (instance profile), ECS task definition, EKS service account, or Lambda execution role.
  3. If you plan to write data back to S3, add s3:PutObject and s3:DeleteObject permissions.
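The policy described in steps 1 and 3 can be sketched as a JSON document. Note that the bucket name (amzn-s3-demo-bucket) and the output path are placeholders to replace with your own values; s3:ListBucket applies to the bucket ARN while the object actions apply to objects under it:

```shell
# Write a minimal read/write policy for S3 Files access.
# amzn-s3-demo-bucket is a placeholder bucket name.
cat > /tmp/s3files-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
EOF
```

For a read-only workload, drop the ReadWriteObjects statement's s3:PutObject and s3:DeleteObject actions. The document can then be attached to your role with aws iam put-role-policy (role and policy names are yours to choose).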

Step 3: Install NFS Client on Your Compute Resource

S3 Files uses the NFS v4.1 protocol. Your compute resource must have an NFS client installed.

  • For Amazon Linux 2/2023: sudo yum install -y nfs-utils
  • For Ubuntu: sudo apt update && sudo apt install -y nfs-common
  • For Containers: Ensure the NFS client is included in your Docker image (e.g., apt-get install nfs-common).
  • For Lambda: You can use NFS in a custom runtime or container image with the client pre-installed.
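The distro-specific commands above can be wrapped in a small helper that prints the appropriate install command for the detected package manager. This is a sketch using only the package names listed above; adjust for other distros:

```shell
# Print the NFS client install command for the detected package manager.
nfs_client_install_cmd() {
  if command -v yum >/dev/null 2>&1; then
    echo "sudo yum install -y nfs-utils"                        # Amazon Linux 2/2023
  elif command -v apt-get >/dev/null 2>&1; then
    echo "sudo apt update && sudo apt install -y nfs-common"    # Ubuntu/Debian
  else
    echo "unknown package manager: install an NFS v4.1 client manually" >&2
    return 1
  fi
}
```

In a Dockerfile, run the printed command (without sudo) in a RUN step so the client is baked into the image.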

Step 4: Mount the S3 Bucket as a File System

Obtain the NFS export path from the bucket's S3 Files settings and mount it at a local directory on your compute resource.

  1. Create a local mount point directory: sudo mkdir -p /mnt/s3files
  2. Mount using the NFS export path. The format is mount.nfs4 -o sync,rw,hard,noatime <S3-Files-Endpoint>:/<bucket-name> /mnt/s3files.
  3. To make the mount persistent across reboots, add an entry to /etc/fstab with the appropriate options.
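The sequence above can be sketched as follows. The endpoint and bucket name are placeholders to substitute from your own S3 Files settings, and the fstab line mirrors the same options (with _netdev added so the mount waits for networking at boot):

```shell
# Placeholders: substitute values from your bucket's S3 Files settings.
S3FILES_ENDPOINT="<S3-Files-Endpoint>"
BUCKET="<bucket-name>"

sudo mkdir -p /mnt/s3files
sudo mount.nfs4 -o sync,rw,hard,noatime "$S3FILES_ENDPOINT:/$BUCKET" /mnt/s3files

# /etc/fstab entry for a persistent mount (one line):
# <S3-Files-Endpoint>:/<bucket-name>  /mnt/s3files  nfs4  sync,rw,hard,noatime,_netdev  0  0
```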

Once mounted, interact with the directory like any local folder: list files, create directories, read/write files. All changes sync back to S3 automatically.


Step 5: Verify and Test

Confirm the file system is working correctly.

  1. List the contents: ls /mnt/s3files – you should see your S3 objects.
  2. Create a test file: echo 'Hello S3 Files' > /mnt/s3files/test.txt
  3. Check in S3 console: the object test.txt should appear in your bucket.
  4. Modify the file with a text editor or shell redirection (e.g., echo 'More' >> /mnt/s3files/test.txt) – changes propagate to S3.
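The write/read round trip in steps 2–4 can be scripted as a reusable check. This only confirms file-system behavior on the mount; verifying that test.txt appears in the S3 console (step 3) is still a manual step:

```shell
# verify_s3files_mount DIR: write a test file into DIR and read it back.
verify_s3files_mount() {
  dir="$1"
  echo 'Hello S3 Files' > "$dir/test.txt" || return 1      # create test file
  [ "$(cat "$dir/test.txt")" = 'Hello S3 Files' ]          # read it back
}

# Usage after Step 4:
#   verify_s3files_mount /mnt/s3files && echo "mount OK"
```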

Step 6: Attach to Multiple Compute Resources (Optional)

S3 Files supports concurrent access from multiple instances. Simply repeat Step 4 on any number of EC2/ECS/EKS/Lambda resources using the same NFS export path. All clients see consistent data, and S3 remains the single source of truth. No need to duplicate data across clusters.
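When many resources share one export, it can help if each client guards its startup on the mount actually being present, so an application never writes into an empty local directory by mistake. A Linux-specific sketch that reads /proc/mounts:

```shell
# Return success only if the given directory is an NFS mount point.
is_nfs_mount() {
  awk -v d="$1" '$2 == d && $3 ~ /^nfs/ { found = 1 } END { exit !found }' /proc/mounts
}

# Example guard before an application uses the shared directory:
#   is_nfs_mount /mnt/s3files || { echo "S3 Files mount missing" >&2; exit 1; }
```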

Tips for Optimal Use

  • Performance Tuning: By default, frequently accessed files are cached on high-performance storage. For large sequential reads, S3 Files serves directly from S3 to maximize throughput. If your workload involves many small files, consider pre-loading metadata to reduce latency.
  • Intelligent Pre-fetching: Enable the intelligent pre-fetching option (available in bucket settings) to have the file system anticipate your access patterns. This reduces read latency for common operations.
  • Cost Management: Only the data stored on high-performance storage incurs additional costs. Use the metadata only setting for archives or rarely accessed data. Monitor storage usage with CloudWatch metrics.
  • Security: Always use IAM roles instead of long-term credentials. Restrict NFS access via security groups (ensure port 2049 is open only to necessary sources).
  • Backward Compatibility: You can still use S3 via API/console while the file system is mounted. Changes made from any interface are reflected in real time.
  • Logging: Enable S3 server access logs to audit file operations performed through the file system.

By following these steps, you can seamlessly integrate S3 into your existing workflows as a native file system, combining the durability and cost savings of object storage with the interactivity of a local filesystem.