S3 client
This page contains examples with the S3 client. See the client introduction for a more detailed description of how to use a client. You may also want to read the authentication documentation to understand the many ways you can authenticate with AWS.
The S3 package can be installed with Composer.
composer require async-aws/s3
A new client object may be instantiated as follows:
use AsyncAws\S3\S3Client;
$s3 = new S3Client();
The authentication parameters are read from the environment by default. You can also specify an AWS access key ID and secret:
use AsyncAws\S3\S3Client;
$s3 = new S3Client([
    'accessKeyId' => 'my_access_key',
    'accessKeySecret' => 'my_access_secret',
    'region' => 'eu-central-1',
]);
For all available options, see the configuration reference.
The client supports presigning requests so that a URL can be handed to a party without AWS credentials, allowing them to download a file within the next X minutes. Read more about presigning here.
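As a quick illustration, here is a minimal sketch of presigning a GetObject request; the bucket name, key, and expiry below are placeholders:
use AsyncAws\S3\Input\GetObjectRequest;
use AsyncAws\S3\S3Client;
$s3 = new S3Client();
// Describe the request we want to presign (placeholder bucket and key).
$input = new GetObjectRequest([
    'Bucket' => 'my-company-website',
    'Key' => 'invoice.pdf',
]);
// Generate a URL that stays valid for the next 60 minutes.
$url = $s3->presign($input, new \DateTimeImmutable('+60 min'));
echo $url;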
Note: There is a SimpleS3Client that might be easier to work with for common use cases.
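For example, assuming the async-aws/simple-s3 package is installed, a sketch of the simplified API could look like this (bucket and key names are placeholders):
use AsyncAws\SimpleS3\SimpleS3Client;
$simpleS3 = new SimpleS3Client();
// Upload a string (or a resource) in a single call.
$simpleS3->upload('my-company-website', 'robots.txt', "User-agent: *\nDisallow:");
// Download and read the content back as a string.
$content = $simpleS3->download('my-company-website', 'robots.txt')->getContentAsString();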
Usage
Upload files
If you want to upload a 1 GB file, you really don't want to put that file in memory before uploading. You want to do it in a smarter way. AsyncAws allows you to upload files using a string, a resource, a closure, or an iterable. See the following examples:
use AsyncAws\S3\S3Client;
$s3 = new S3Client();
// Upload plain text
$s3->PutObject([
    'Bucket' => 'my-company-website',
    'Key' => 'robots.txt',
    'Body' => "User-agent: *\nDisallow:",
]);
// Upload with stream
$resource = \fopen('/path/to/big/file', 'r');
$s3->PutObject([
    'Bucket' => 'my-company-website',
    'Key' => 'file.jpg',
    'Body' => $resource,
]);
// Upload with Closure
$fp = \fopen('/path/to/big/file', 'r');
$s3->PutObject([
    'Bucket' => 'my-company-website',
    'Key' => 'file.jpg',
    'ContentLength' => filesize('/path/to/big/file'), // This is important
    'Body' => static function (int $length) use ($fp): string {
        return fread($fp, $length);
    },
]);
// Upload with an iterable
$files = ['/path/to/file1.txt', '/path/to/file2.txt'];
$s3->PutObject([
    'Bucket' => 'my-company-website',
    'Key' => 'file_merged.jpg',
    'ContentLength' => array_sum(array_map('filesize', $files)), // This is important
    'Body' => (static function () use ($files): iterable {
        foreach ($files as $file) {
            yield file_get_contents($file);
        }
    })(),
]);
When using a Closure, it's important to provide the ContentLength property. This information is required by AWS and cannot be guessed by AsyncAws. If ContentLength is absent, AsyncAws will read the whole output before sending the request, which could have a performance impact.
Download files
When you download a file from S3, AsyncAws gives you a ResultStream, which can be used as a string, as a resource, or iterated over. This allows you to handle larger files without having them in memory.
// download a file and use it directly as string
$result = $s3->GetObject([
    'Bucket' => 'my-company-website',
    'Key' => 'metadata.json',
]);
$metadata = json_decode($result->getBody()->getContentAsString());
// download a big file and save it efficiently
$result = $s3->GetObject([
    'Bucket' => 'my-company-website',
    'Key' => 'bunny.mkv',
]);
$fp = fopen('/path/to/big_file.mkv', 'wb');
stream_copy_to_stream($result->getBody()->getContentAsResource(), $fp);
// use an iterable to perform some business logic on chunks while downloading (or show a progress bar)
$result = $s3->GetObject([
    'Bucket' => 'my-company-website',
    'Key' => 'orders.csv',
]);
$fp = fopen('/path/to/orders.csv', 'wb');
foreach ($result->getBody()->getChunks() as $chunk) {
    fwrite($fp, $chunk);
    $progress->advance();
}
Virtual Hosted-Style Requests
When calling AWS endpoints, AsyncAws uses Virtual Hosted-Style Requests: the bucket name is part of the endpoint's host. To change this behavior and use "path-style endpoints" instead, set the pathStyleEndpoint parameter to true when initializing the client.
use AsyncAws\S3\S3Client;
$s3 = new S3Client(['pathStyleEndpoint' => true]);
Chunked body
When sending data to AWS endpoints, AsyncAws splits the content into multiple chunks. This avoids reading the file twice (normally required to compute the signature), which could be a performance issue when the file is really big or when the uploaded content is not a file (e.g. streamed from an HTTP request). However, some third-party services that claim to be "S3-compatible", such as OpenStack Swift, do not support chunked bodies. To change this behavior, set the sendChunkedBody parameter to false when initializing the client.
use AsyncAws\S3\S3Client;
$s3 = new S3Client(['sendChunkedBody' => false]);
Non-AWS S3 endpoints
To use the S3Client with, for example, DigitalOcean Spaces, you need to initialize the S3Client with your endpoint.
use AsyncAws\S3\S3Client;
$s3 = new S3Client([
    'endpoint' => 'https://fra1.digitaloceanspaces.com',
    'pathStyleEndpoint' => true,
]);