Amazon::S3::Bucket(3pm) | User Contributed Perl Documentation | Amazon::S3::Bucket(3pm) |
NAME¶
Amazon::S3::Bucket - A container class for a S3 bucket and its contents.
SYNOPSIS¶
use Amazon::S3;

# creates bucket object (no "bucket exists" check)
my $bucket = $s3->bucket("foo");

# create resource with meta data (attributes)
my $keyname = 'testing.txt';
my $value   = 'T';

$bucket->add_key(
  $keyname, $value,
  { content_type        => 'text/plain',
    'x-amz-meta-colour' => 'orange',
  }
);

# list keys in the bucket
$response = $bucket->list
  or die $s3->err . ": " . $s3->errstr;

print $response->{bucket} . "\n";

for my $key ( @{ $response->{keys} } ) {
  print "\t" . $key->{key} . "\n";
}

# check if resource exists.
print "$keyname exists\n" if $bucket->head_key($keyname);

# delete key from bucket
$bucket->delete_key($keyname);
DESCRIPTION¶
METHODS AND SUBROUTINES¶
new¶
Instantiates a new bucket object.
Pass a hash or hash reference containing various options:
- bucket (required)
- The name (identifier) of the bucket.
- account (required)
- The Amazon::S3 object (representing the S3 account) this bucket is associated with.
- buffer_size
- The buffer size used for reading and writing objects to S3.
default: 4K
- region
- If no region is set and "verify_region" is set to true, the region of the bucket will be determined by calling the "get_location_constraint" method. Note that this will decrease performance of the constructor. If you know the region or are operating in only 1 region, set the region in the "account" object ("Amazon::S3").
- logger
- Sets the logger. The logger should be a blessed reference capable of providing at least a "debug" and "trace" method for recording log messages. If no logger object is passed the "account" object's logger object will be used.
- verify_region
- Indicates that the bucket's region should be determined by calling the "get_location_constraint" method.
default: false
NOTE: This method does not check if a bucket actually exists unless you set "verify_region" to true. If the bucket does not exist, the constructor will set the region to the default region specified by the Amazon::S3 object ("account") that you passed.
Typically a developer will not call this method directly, but will work through the interface in Amazon::S3, which handles bucket object creation.
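As a sketch, a bucket object can be constructed directly; the credential environment variables and bucket name below are illustrative:

```perl
use Amazon::S3;
use Amazon::S3::Bucket;

my $s3 = Amazon::S3->new(
  { aws_access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
    aws_secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY},
  }
);

# region is looked up with a "get_location_constraint" call
# because verify_region is true
my $bucket = Amazon::S3::Bucket->new(
  { bucket        => 'my-bucket',
    account       => $s3,
    verify_region => 1,
  }
);
```

In most programs the equivalent call through the account object, "$s3->bucket('my-bucket')", is all you need.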
add_key¶
add_key( key, value, configuration)
Write a new or existing object to S3.
- key
- A string identifier for the object being written to the bucket.
- value
- A SCALAR string representing the contents of the object.
- configuration
- A HASHREF of configuration data for this key, generally the HTTP headers you want to pass to the S3 service. The client library adds all necessary headers on its own; entries in the configuration hash override the headers the library would send, or add headers that are not typically required for S3 interactions.
- acl_short (optional)
- In addition to the HTTP headers described above, this HASHREF can contain an "acl_short" key to set the permissions (access) of the resource without a separate "set_acl" call or an ACL XML document. See the documentation in "set_acl" for the values and usage.
Returns a boolean indicating the success or failure of the call. Check "err" and "errstr" for error messages if this operation fails. To examine the raw output of the response from the API call, use the "last_response()" method.
my $retval = $bucket->add_key( 'foo', $content, {} );

if ( !$retval ) {
  print STDERR Dumper(
    [ $bucket->err, $bucket->errstr, $bucket->last_response ] );
}
add_key_filename¶
The method works like "add_key" except the value is assumed to be a filename on the local file system. The file will be streamed rather than loaded into memory in one big chunk.
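A minimal sketch (the file name and content type are illustrative):

```perl
# stream a local file to S3 without loading it into memory
$bucket->add_key_filename(
  'backups/dump.sql',
  '/tmp/dump.sql',
  { content_type => 'application/sql' }
) or die $bucket->errstr;
```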
copy_object %parameters¶
Copies an object from one bucket to another bucket. Note that the bucket represented by the bucket object is the destination. Returns a hash reference to the response object ("CopyObjectResult").
Headers returned from the request can be obtained using the "last_response()" method.
my $headers = { $bucket->last_response->headers->flatten };
Throws an exception if the response code is not 2xx. You can get an extended error message using the "errstr()" method.
my $result = eval {
  return $bucket->copy_object(
    key    => 'foo.jpg',
    source => 'boo.jpg'
  );
};

if ($@) {
  die $bucket->errstr;
}
Examples:
$bucket->copy_object(
  key    => 'foo.jpg',
  source => 'boo.jpg'
);

$bucket->copy_object(
  key    => 'foo.jpg',
  source => 'boo.jpg',
  bucket => 'my-source-bucket'
);

$bucket->copy_object(
  key     => 'foo.jpg',
  headers => { 'x-amz-copy-source' => 'my-source-bucket/boo.jpg' }
);
See CopyObject for more details.
%parameters is a list of key/value pairs described below:
- key (required)
- Name of the destination key in the bucket represented by the bucket object.
- headers (optional)
- Hash or array reference of headers to send in the request.
- bucket (optional)
- Name of the source bucket. Default is the same bucket as the destination.
- source (optional)
- Name of the source key in the source bucket. If not provided, you must provide the source in the "x-amz-copy-source" header.
head_key $key_name¶
Returns a configuration HASH for the given key. If the key does not exist in the bucket, "undef" will be returned.
The HASH contains the same members returned by "get_key", although "value" will be empty since the request is made with the HTTP HEAD method.
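For example, to inspect an object's metadata without fetching its contents (the key name is from the SYNOPSIS):

```perl
if ( my $head = $bucket->head_key('testing.txt') ) {
  print 'content-type: ', $head->{content_type}, "\n";
  print 'etag: ',         $head->{etag},         "\n";
}
```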
delete_key $key_name¶
Permanently removes $key_name from the bucket. Returns a boolean value indicating the operation's success.
delete_keys @keys¶
delete_keys $keys¶
Permanently removes keys from the bucket. Returns the response body from the API call. Returns "undef" on non '2xx' return codes.
See "Deleting Amazon S3 objects" <https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjects.html>.
The argument to "delete_keys" can be:
- a list of key names
- an array reference of key names (scalars)
- an array reference of hash references, where each hash contains the keys "Key" and optionally "VersionId"
- a hash reference of options containing the keys "keys" (one of the array forms above) and optionally "quiet" (enable the S3 quiet mode, which omits per-key results from the response)
- a callback (code reference) that returns the next key and optionally its version id, returning an empty list when there are no more keys to delete
Examples:
# delete a list of keys
$bucket->delete_keys(qw(foo bar baz));

# delete an array of keys
$bucket->delete_keys( [qw(foo bar baz)] );

# delete an array of keys in quiet mode
$bucket->delete_keys( { quiet => 1, keys => [qw(foo bar baz)] } );

# delete an array of versioned objects
$bucket->delete_keys( [ { Key => 'foo', VersionId => '1' } ] );

# callback
my @key_list = ( 'foo' => 1, 'bar' => 3, 'biz' => 1 );

$bucket->delete_keys(
  sub {
    return ( shift @key_list, shift @key_list );
  }
);
When using a callback, the keys are deleted in bulk. The "DeleteObjects" API is only called once.
delete_bucket¶
Permanently removes the bucket from the server. A bucket cannot be removed if it contains any keys (contents).
This is an alias for "$s3->delete_bucket($bucket)".
get_key $key_name, [$method]¶
Takes a key and an optional HTTP method and fetches it from S3. The default HTTP method is GET.
The method returns "undef" if the key does not exist in the bucket and throws an exception (dies) on server errors.
On success, the method returns a HASHREF containing:
- content_type
- etag
- value
- @meta
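For example (the key name is from the SYNOPSIS):

```perl
my $object = $bucket->get_key('testing.txt');

if ($object) {
  print 'type: ', $object->{content_type}, "\n";
  print 'etag: ', $object->{etag},         "\n";
  print 'body: ', $object->{value},        "\n";
}
```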
get_key_filename $key_name, $method, $filename¶
This method works like "get_key", but takes an added filename that the S3 resource will be written to.
list¶
List all keys in this bucket.
See "list_bucket" in Amazon::S3 for documentation of this method.
list_v2¶
See "list_bucket_v2" in Amazon::S3 for documentation of this method.
list_all¶
List all keys in this bucket without having to worry about 'marker'. This may make multiple requests to S3 under the hood.
See "list_bucket_all" in Amazon::S3 for documentation of this method.
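A sketch that visits every key, however many paged requests that takes under the hood:

```perl
my $response = $bucket->list_all
  or die $s3->err . ': ' . $s3->errstr;

for my $key ( @{ $response->{keys} } ) {
  printf "%s\t%s\n", $key->{key}, $key->{size};
}
```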
list_all_v2¶
Same as "list_all" but uses the version 2 API for listing keys.
See "list_bucket_all_v2" in Amazon::S3 for documentation of this method.
get_acl¶
Retrieves the Access Control List (ACL) for the bucket or resource as an XML document.
- key
- The key of the stored resource to fetch. This parameter is optional. By default the method returns the ACL for the bucket itself.
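For example (the key name is illustrative):

```perl
# ACL of the bucket itself, as an XML document
my $bucket_acl = $bucket->get_acl;

# ACL of a single object in the bucket
my $object_acl = $bucket->get_acl('testing.txt');
```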
set_acl¶
set_acl(acl)
Sets the Access Control List (ACL) for the bucket or a resource in the bucket. Requires a HASHREF argument with one of the following keys:
- acl_xml
- An XML string which contains access control information which matches Amazon's published schema.
- acl_short
- Alternative shorthand notation for common types of ACLs that can be used in place of an ACL XML document.
According to the Amazon S3 API documentation, the recognized "acl_short" values are defined as follows:
- private
- Owner gets FULL_CONTROL. No one else has any access rights. This is the default.
- public-read
- Owner gets FULL_CONTROL and the anonymous principal is granted READ access. If this policy is used on an object, it can be read from a browser with no authentication.
- public-read-write
- Owner gets FULL_CONTROL, the anonymous principal is granted READ and WRITE access. This is a useful policy to apply to a bucket, if you intend for any anonymous user to PUT objects into the bucket.
- authenticated-read
- Owner gets FULL_CONTROL, and any principal authenticated as a registered Amazon S3 user is granted READ access.
- key
- The key name to apply the permissions. If the key is not provided the bucket ACL will be set.
Returns a boolean indicating the operation's success.
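For example, making a single object publicly readable using the shorthand notation (the key name is illustrative):

```perl
$bucket->set_acl(
  { key       => 'testing.txt',
    acl_short => 'public-read',
  }
) or die $bucket->errstr;
```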
get_location_constraint¶
Returns the location constraint (the region the bucket resides in) for a bucket. Returns "undef" if there is no location constraint.
Valid values that may be returned:
af-south-1 ap-east-1 ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-south-1 ap-southeast-1 ap-southeast-2 ca-central-1 cn-north-1 cn-northwest-1 EU eu-central-1 eu-north-1 eu-south-1 eu-west-1 eu-west-2 eu-west-3 me-south-1 sa-east-1 us-east-2 us-gov-east-1 us-gov-west-1 us-west-1 us-west-2
For more information on location constraints, refer to the documentation for GetBucketLocation <https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html>.
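A sketch of a common idiom: buckets created in the us-east-1 region report no location constraint, so fall back to that region when "undef" is returned.

```perl
# defined-or fallback for buckets in the classic region
my $region = $bucket->get_location_constraint // 'us-east-1';
print "bucket region: $region\n";
```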
err¶
The S3 error code for the last error the account encountered.
errstr¶
A human readable error string for the last error the account encountered.
error¶
The decoded XML string as a hash object of the last error.
last_response¶
Returns the last "HTTP::Response" to an API call.
MULTIPART UPLOAD SUPPORT¶
From Amazon's website:
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
See <https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html> for more information about multipart uploads.
- Maximum object size 5TB
- Maximum number of parts 10,000
- Part numbers 1 to 10,000 (inclusive)
- Part size 5MB to 5GB. There is no minimum size limit on the last part of your multipart upload.
- Maximum number of parts returned for a list parts request - 1000
- Maximum number of multipart uploads returned in a list multipart uploads request - 1000
A multipart upload begins by calling "initiate_multipart_upload()". This will return an identifier that is used in subsequent calls.
my $bucket = $s3->bucket('my-bucket');

my $id = $bucket->initiate_multipart_upload('some-big-object');

my $part_list = {};
my $part      = 1;

my $etag = $bucket->upload_part_of_multipart_upload(
  'some-big-object', $id, $part, $data, length $data );

$part_list->{ $part++ } = $etag;

$bucket->complete_multipart_upload( 'some-big-object', $id, $part_list );
upload_multipart_object( ... )
Convenience routine "upload_multipart_object" that encapsulates the multipart upload process. Accepts a hash or hash reference of arguments. If successful, returns a reference to a hash that contains the part numbers and etags of the uploaded parts.
You can pass a data object, callback routine or a file handle.
- key
- Name of the key to create.
- data
- Scalar object that contains the data to write to S3.
- callback
- Optionally provide a callback routine that will be called until you return a buffer with a length of 0. Your callback receives no arguments but should return a tuple consisting of a reference to a scalar that contains the data to write and a scalar that represents the length of that data. Once you return a zero-length buffer the multipart process will be completed.
- fh
- File handle of an open file. The file must be greater than the minimum chunk size for multipart uploads otherwise the method will throw an exception.
- abort_on_error
- Indicates whether the multipart upload should be aborted if an error is encountered. Amazon will charge you for the storage of parts that have been uploaded unless you abort the upload.
default: true
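Two sketches, one reading from a file handle and one serving fixed-size chunks of an in-memory scalar via a callback (the file name, key names, and chunk size are illustrative):

```perl
# from a file handle; the file must exceed the minimum part size
open my $fh, '<:raw', '/tmp/big-file.bin' or die $!;

my $parts = $bucket->upload_multipart_object(
  key => 'big-file.bin',
  fh  => $fh,
);

# from a callback: return ( \$chunk, length $chunk ) until done,
# then a zero-length buffer to finish the upload
my $data       = 'x' x ( 10 * 1024 * 1024 );
my $offset     = 0;
my $chunk_size = 5 * 1024 * 1024;

$parts = $bucket->upload_multipart_object(
  key      => 'big-object',
  callback => sub {
    my $chunk = substr $data, $offset, $chunk_size;
    $offset += length $chunk;
    return ( \$chunk, length $chunk );
  },
);
```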
abort_multipart_upload¶
abort_multipart_upload(key, multipart-upload-id)
Abort a multipart upload
complete_multipart_upload¶
complete_multipart_upload(key, multipart-upload-id, parts)
Signal completion of a multipart upload. "parts" is a reference to a hash of part numbers and etags.
initiate_multipart_upload¶
initiate_multipart_upload(key, headers)
Initiate a multipart upload. Returns an id used in subsequent calls to "upload_part_of_multipart_upload()".
list_multipart_upload_parts¶
List all the uploaded parts of a multipart upload
list_multipart_uploads¶
List multipart uploads in progress
upload_part_of_multipart_upload¶
upload_part_of_multipart_upload(key, id, part, data, length)
Upload a portion of a multipart upload
- key
- Name of the key in the bucket to create.
- id
- The multipart-upload id returned by the "initiate_multipart_upload" call.
- part
- The next part number (part numbers start at 1).
- data
- Scalar or reference to a scalar that contains the data to upload.
- length (optional)
- Length of the data.
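Putting the low-level calls together, a sketch that streams a file to S3 part by part (the file name and key are illustrative; every part except the last must be at least 5MB):

```perl
open my $fh, '<:raw', '/tmp/big-file.bin' or die $!;

my $id = $bucket->initiate_multipart_upload('big-file.bin');

my %part_list;
my $part = 1;

# read and upload 5MB parts until EOF
while ( my $length = read $fh, my $data, 5 * 1024 * 1024 ) {
  my $etag = $bucket->upload_part_of_multipart_upload(
    'big-file.bin', $id, $part, $data, $length );

  $part_list{ $part++ } = $etag;
}

$bucket->complete_multipart_upload( 'big-file.bin', $id, \%part_list );
```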
SEE ALSO¶
Amazon::S3
AUTHOR¶
Please see the Amazon::S3 manpage for author, copyright, and license information.
CONTRIBUTORS¶
Rob Lauer Jojess Fournier Tim Mullin Todd Rinaldo luiserd97
2023-11-30 | perl v5.36.0 |