| Interface | Description |
|---|---|
| BucketDeploymentProps | (experimental) Properties for `BucketDeployment`. |
| DeploymentSourceContext | (experimental) Bind context for `ISource`s. |
| ISource | (experimental) Represents a source for bucket deployments. |
| ISource.Jsii$Default | Internal default implementation for `ISource`. |
| SourceConfig | (experimental) Source information. |
| UserDefinedObjectMetadata | (experimental) Custom user-defined metadata. |
| Class | Description |
|---|---|
| BucketDeployment | (experimental) `BucketDeployment` populates an S3 bucket with the contents of .zip files from other S3 buckets or from local disk. |
| BucketDeployment.Builder | (experimental) A fluent builder for `BucketDeployment`. |
| BucketDeploymentProps.Builder | A builder for `BucketDeploymentProps`. |
| BucketDeploymentProps.Jsii$Proxy | An implementation for `BucketDeploymentProps`. |
| CacheControl | (experimental) Used for the HTTP cache-control header, which influences downstream caches. |
| DeploymentSourceContext.Builder | A builder for `DeploymentSourceContext`. |
| DeploymentSourceContext.Jsii$Proxy | An implementation for `DeploymentSourceContext`. |
| ISource.Jsii$Proxy | A proxy class which represents a concrete javascript instance of this type. |
| Source | (experimental) Specifies bucket deployment source. |
| SourceConfig.Builder | A builder for `SourceConfig`. |
| SourceConfig.Jsii$Proxy | An implementation for `SourceConfig`. |
| UserDefinedObjectMetadata.Builder | A builder for `UserDefinedObjectMetadata`. |
| UserDefinedObjectMetadata.Jsii$Proxy | An implementation for `UserDefinedObjectMetadata`. |
| Enum | Description |
|---|---|
| ServerSideEncryption | (experimental) Indicates whether server-side encryption is enabled for the object, and whether that encryption is from the AWS Key Management Service (AWS KMS) or from Amazon S3 managed encryption (SSE-S3). |
| StorageClass | (experimental) Storage class used for storing the object. |
---
Status: Experimental
This library allows populating an S3 bucket with the contents of .zip files from other S3 buckets or from local disk.
The following example defines a publicly accessible S3 bucket with web hosting enabled and populates it from a local directory on disk.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Bucket websiteBucket = new Bucket(this, "WebsiteBucket", new BucketProps()
.websiteIndexDocument("index.html")
.publicReadAccess(true));
BucketDeployment.Builder.create(this, "DeployWebsite")
.sources(asList(s3deploy.Source.asset("./website-dist")))
.destinationBucket(websiteBucket)
.destinationKeyPrefix("web/static")
.build();
This is what happens under the hood:

1. When this stack is deployed (via `cdk deploy` or a CI/CD pipeline), the contents of the local `website-dist` directory are archived and uploaded to an intermediary assets bucket. If there is more than one source, each is uploaded individually.
2. The `BucketDeployment` construct synthesizes a custom CloudFormation resource of type `Custom::CDKBucketDeployment` into the template. The source bucket/key is set to point to the assets bucket.
3. The custom resource downloads the .zip archive, extracts it, and issues `aws s3 sync --delete` against the destination bucket (in this case `websiteBucket`). If there is more than one source, the sources are downloaded and merged before this step.
The following source types are supported for bucket deployments:
- Local .zip file: `s3deploy.Source.asset('/path/to/local/file.zip')`
- Local directory: `s3deploy.Source.asset('/path/to/local/directory')`
- Another bucket: `s3deploy.Source.bucket(bucket, zipObjectKey)`
To create a source from a single file, you can pass AssetOptions to exclude
all but a single file:
s3deploy.Source.asset('/path/to/local/directory', { exclude: ['**', '!onlyThisFile.txt'] })
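In the Java bindings documented here, the same single-file trick can be sketched with the Map-based asset options used in the later cache-control examples (the construct ID and the `destinationBucket` variable are illustrative; like the other examples, this is generated-style code that has not been compiled):

```java
// Deploy only onlyThisFile.txt: exclude everything, then re-include one file.
BucketDeployment.Builder.create(this, "DeploySingleFile")
.sources(asList(Source.asset("/path/to/local/directory",
        Map.of("exclude", asList("**", "!onlyThisFile.txt")))))
.destinationBucket(destinationBucket)
.build();
```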
By default, the contents of the destination bucket will not be deleted when the
BucketDeployment resource is removed from the stack or when the destination is
changed. You can use the option retainOnDelete: false to disable this behavior,
in which case the contents will be deleted.
Configuring this has a few implications you should be aware of:

- Changing the logical ID of the `BucketDeployment` construct without changing the destination (for example due to refactoring, or an intentional ID change) will result in the deletion of the objects. This is because CloudFormation will first create the new resource, which has no effect, and then delete the old resource, which deletes the objects, since the destination hasn't changed and `retainOnDelete` is false.
- When the destination bucket or prefix is changed, all files in the previous destination will first be deleted and then uploaded to the new destination location. This could have availability implications for your users.
- If the destination bucket is not dedicated to the specific `BucketDeployment` construct (i.e. it is shared by other entities), we recommend always configuring the `destinationKeyPrefix` property. This prevents the deployment from accidentally deleting data that it didn't upload.
- If the destination bucket is dedicated, it might be reasonable to skip the prefix configuration, in which case we recommend removing `retainOnDelete: false` and instead configuring the `autoDeleteObjects` property on the destination bucket. This avoids the logical ID problem mentioned above.
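The non-retaining setup described above can be sketched as follows (the construct ID, prefix, and `destinationBucket` variable are illustrative; like the other examples, this is generated-style code that has not been compiled):

```java
// With retainOnDelete(false), removing this resource (or changing its
// destination) also deletes the deployed objects, so a dedicated key
// prefix limits what the deployment can touch.
BucketDeployment.Builder.create(this, "DeployWithoutRetention")
.sources(asList(s3deploy.Source.asset("./website-dist")))
.destinationBucket(destinationBucket)
.destinationKeyPrefix("deployed/")// only objects under this prefix are managed
.retainOnDelete(false)// delete contents when the resource is removed
.build();
```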
By default, files in the destination bucket that don't exist in the source will be deleted
when the BucketDeployment resource is created or updated. You can use the option prune: false to disable
this behavior, in which case the files will not be deleted.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
BucketDeployment.Builder.create(this, "DeployMeWithoutDeletingFilesOnDestination")
.sources(asList(s3deploy.Source.asset(path.join(__dirname, "my-website"))))
.destinationBucket(destinationBucket)
.prune(false)
.build();
This option also enables you to specify multiple bucket deployments for the same destination bucket & prefix, each with its own characteristics. For example, you can set different cache-control headers based on file extensions:
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
BucketDeployment.Builder.create(this, "BucketDeployment")
.sources(asList(Source.asset("./website", Map.of("exclude", asList("index.html")))))
.destinationBucket(bucket)
.cacheControl(asList(CacheControl.fromString("max-age=31536000,public,immutable")))
.prune(false)
.build();
BucketDeployment.Builder.create(this, "HTMLBucketDeployment")
.sources(asList(Source.asset("./website", Map.of("exclude", asList("*", "!index.html")))))
.destinationBucket(bucket)
.cacheControl(asList(CacheControl.fromString("max-age=0,no-cache,no-store,must-revalidate")))
.prune(false)
.build();
You can specify metadata to be set on all the objects in your deployment.
There are two types of metadata in S3: system-defined metadata and user-defined metadata. System-defined metadata has a special purpose; for example, cache-control defines how long to keep an object cached. User-defined metadata is not used by S3 itself, and its keys always begin with x-amz-meta- (this prefix is added automatically).
The following example sets both user-defined and system-defined metadata:
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Bucket websiteBucket = new Bucket(this, "WebsiteBucket", new BucketProps()
.websiteIndexDocument("index.html")
.publicReadAccess(true));
BucketDeployment.Builder.create(this, "DeployWebsite")
.sources(asList(s3deploy.Source.asset("./website-dist")))
.destinationBucket(websiteBucket)
.destinationKeyPrefix("web/static")// optional prefix in destination bucket
.metadata(Map.of("A", "1", "b", "2"))// user-defined metadata
// system-defined metadata
.contentType("text/html")
.contentLanguage("en")
.storageClass(StorageClass.INTELLIGENT_TIERING)
.serverSideEncryption(ServerSideEncryption.AES_256)
.cacheControl(asList(CacheControl.setPublic(), CacheControl.maxAge(Duration.hours(1))))
.build();
You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.cloudfront.*;
import software.amazon.awscdk.services.cloudfront.origins.*;
Bucket bucket = new Bucket(this, "Destination");
// Handles buckets whether or not they are configured for website hosting.
Distribution distribution = new Distribution(this, "Distribution", new DistributionProps()
.defaultBehavior(new BehaviorOptions().origin(new S3Origin(bucket))));
BucketDeployment.Builder.create(this, "DeployWithInvalidation")
.sources(asList(s3deploy.Source.asset("./website-dist")))
.destinationBucket(bucket)
.distribution(distribution)
.distributionPaths(asList("/images/*.png"))
.build();
The default memory limit for the deployment resource is 128MiB. If you need to
copy larger files, you can use the memoryLimit configuration to specify the
size of the AWS Lambda resource handler.
NOTE: a new AWS Lambda handler will be created in your stack for each memory limit configuration.
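A deployment that needs more memory for large files might be configured as in the following sketch (the 512 MiB value and the names are illustrative; like the other examples, this is generated-style code that has not been compiled):

```java
BucketDeployment.Builder.create(this, "DeployLargeFiles")
.sources(asList(s3deploy.Source.asset("./large-assets")))
.destinationBucket(destinationBucket)
.memoryLimit(512)// MiB; each distinct limit creates its own Lambda handler
.build();
```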
Note that by default, when the `BucketDeployment` resource is removed from the stack, the contents are retained in the destination bucket (#952).
The custom resource is implemented in Python 3.6 in order to be able to leverage the AWS CLI for `aws s3 sync`. The code is under `lib/lambda` and unit tests are under `test/lambda`.
This package requires Python 3.6 at build time in order to create the custom resource Lambda bundle and test it. It also relies on a few bash scripts, so it might be tricky to build on Windows.
Copyright © 2021. All rights reserved.