Package software.amazon.awscdk.services.s3.deployment

AWS S3 Deployment Construct Library


---

cdk-constructs: Stable



This library allows populating an S3 bucket with the contents of .zip files from other S3 buckets or from local disk.

The following example defines a publicly accessible S3 bucket with web hosting enabled and populates it from a local directory on disk.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Bucket websiteBucket = Bucket.Builder.create(this, "WebsiteBucket")
         .websiteIndexDocument("index.html")
         .publicReadAccess(true)
         .build();
 
 BucketDeployment.Builder.create(this, "DeployWebsite")
         .sources(asList(Source.asset("./website-dist")))
         .destinationBucket(websiteBucket)
         .destinationKeyPrefix("web/static")
         .build();
 

This is what happens under the hood:

  1. When this stack is deployed (either via cdk deploy or via CI/CD), the contents of the local website-dist directory will be archived and uploaded to an intermediary assets bucket. If there is more than one source, they will be individually uploaded.
  2. The BucketDeployment construct synthesizes a custom CloudFormation resource of type Custom::CDKBucketDeployment into the template. The source bucket/key is set to point to the assets bucket.
  3. The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case websiteBucket). If there is more than one source, the sources will be downloaded and merged pre-deployment at this step.

Supported sources

The following source types are supported for bucket deployments:

  1. Source.bucket(bucket, zipObjectKey): a .zip archive stored in another S3 bucket.
  2. Source.asset("/path/to/local/file.zip"): a local .zip file.
  3. Source.asset("/path/to/local/directory"): a local directory, which is archived into a .zip file before upload.
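
As a sketch of the bucket-source variant (the source bucket name and object key here are illustrative, not from this page):

 // Sketch: deploy the contents of a .zip object held in another S3 bucket.
 IBucket sourceBucket = Bucket.fromBucketName(this, "SourceBucket", "my-source-bucket");
 
 BucketDeployment.Builder.create(this, "DeployFromBucket")
         .sources(asList(Source.bucket(sourceBucket, "website.zip")))
         .destinationBucket(destinationBucket)
         .build();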

To create a source from a single file, you can pass AssetOptions to exclude all but a single file:
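
For example, a sketch in the style of the other examples on this page (the construct ID and file name are illustrative):

 // Sketch: deploy only index.html by excluding everything else in ./website
 BucketDeployment.Builder.create(this, "DeploySingleFile")
         .sources(asList(Source.asset("./website", Map.of("exclude", asList("*", "!index.html")))))
         .destinationBucket(destinationBucket)
         .build();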

Retain on Delete

By default, the contents of the destination bucket will not be deleted when the BucketDeployment resource is removed from the stack or when the destination is changed. You can use the option retainOnDelete: false to disable this behavior, in which case the contents will be deleted.

Configuring this has a few implications you should be aware of:

  1. Logical ID Changes: changing the logical ID of the BucketDeployment construct without changing the destination (for example, due to refactoring or renaming) will result in the deletion of the objects. CloudFormation first creates the new resource and then deletes the old one, and deleting the old resource empties the destination.

General Recommendations

Shared Bucket

If the destination bucket is not dedicated to the specific BucketDeployment construct (i.e., it is shared by other entities), we recommend always configuring the destinationKeyPrefix property. This prevents the deployment from accidentally deleting data that it did not upload.

Dedicated Bucket

If the destination bucket is dedicated, it might be reasonable to skip the prefix configuration. In that case, we recommend removing retainOnDelete: false and instead configuring the autoDeleteObjects property on the destination bucket. This avoids the logical ID problem mentioned above.
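
As a sketch (construct IDs are illustrative), a dedicated bucket configured this way might look like:

 // Sketch: a dedicated destination bucket that empties itself when the stack is deleted.
 // autoDeleteObjects requires RemovalPolicy.DESTROY on the bucket.
 Bucket destinationBucket = Bucket.Builder.create(this, "DedicatedBucket")
         .autoDeleteObjects(true)
         .removalPolicy(RemovalPolicy.DESTROY)
         .build();
 
 BucketDeployment.Builder.create(this, "DeployToDedicatedBucket")
         .sources(asList(Source.asset("./website-dist")))
         .destinationBucket(destinationBucket)
         .build();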

Prune

By default, files in the destination bucket that don't exist in the source will be deleted when the BucketDeployment resource is created or updated. You can use the option prune: false to disable this behavior, in which case the files will not be deleted.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 BucketDeployment.Builder.create(this, "DeployMeWithoutDeletingFilesOnDestination")
         .sources(asList(Source.asset("./my-website")))
         .destinationBucket(destinationBucket)
         .prune(false)
         .build();
 

This option also enables you to specify multiple bucket deployments for the same destination bucket and prefix, each with its own characteristics. For example, you can set different cache-control headers based on file extensions:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 BucketDeployment.Builder.create(this, "BucketDeployment")
         .sources(asList(Source.asset("./website", Map.of("exclude", asList("index.html")))))
         .destinationBucket(bucket)
         .cacheControl(asList(CacheControl.fromString("max-age=31536000,public,immutable")))
         .prune(false)
         .build();
 
 BucketDeployment.Builder.create(this, "HTMLBucketDeployment")
         .sources(asList(Source.asset("./website", Map.of("exclude", asList("*", "!index.html")))))
         .destinationBucket(bucket)
         .cacheControl(asList(CacheControl.fromString("max-age=0,no-cache,no-store,must-revalidate")))
         .prune(false)
         .build();
 

Objects metadata

You can specify metadata to be set on all the objects in your deployment. There are two types of metadata in S3: system-defined metadata and user-defined metadata. System-defined metadata has a special purpose; for example, cache-control defines how long to keep an object cached. User-defined metadata is not interpreted by S3, and its keys always begin with x-amz-meta- (the prefix is added automatically).

System-defined metadata keys include the following: cache-control, content-disposition, content-encoding, content-language, content-type, expires, server-side-encryption, storage-class, website-redirect-location, sse-kms-key-id, and sse-customer-algorithm. They can be set via the corresponding BucketDeployment properties:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Bucket websiteBucket = Bucket.Builder.create(this, "WebsiteBucket")
         .websiteIndexDocument("index.html")
         .publicReadAccess(true)
         .build();
 
 BucketDeployment.Builder.create(this, "DeployWebsite")
         .sources(asList(Source.asset("./website-dist")))
         .destinationBucket(websiteBucket)
         .destinationKeyPrefix("web/static") // optional prefix in destination bucket
         .metadata(Map.of("A", "1", "b", "2")) // user-defined metadata
 
         // system-defined metadata
         .contentType("text/html")
         .contentLanguage("en")
         .storageClass(StorageClass.INTELLIGENT_TIERING)
         .serverSideEncryption(ServerSideEncryption.AES_256)
         .cacheControl(asList(CacheControl.setPublic(), CacheControl.maxAge(Duration.hours(1))))
         .build();
 

CloudFront Invalidation

You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 import software.amazon.awscdk.services.cloudfront.*;
 import software.amazon.awscdk.services.cloudfront.origins.*;
 
 
 Bucket bucket = new Bucket(this, "Destination");
 
 // Handles buckets whether or not they are configured for website hosting.
 Distribution distribution = Distribution.Builder.create(this, "Distribution")
         .defaultBehavior(BehaviorOptions.builder().origin(new S3Origin(bucket)).build())
         .build();
 
 BucketDeployment.Builder.create(this, "DeployWithInvalidation")
         .sources(asList(Source.asset("./website-dist")))
         .destinationBucket(bucket)
         .distribution(distribution)
         .distributionPaths(asList("/images/*.png"))
         .build();
 

Memory Limit

The default memory limit for the deployment resource handler is 128 MiB. If you need to copy larger files, use the memoryLimit property to increase the memory size of the AWS Lambda resource handler.

NOTE: a new AWS Lambda handler will be created in your stack for each memory limit configuration.
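
A sketch (the construct ID and asset path are illustrative):

 // Sketch: raise the handler's memory (in MiB) for deployments with large files.
 BucketDeployment.Builder.create(this, "DeployLargeFiles")
         .sources(asList(Source.asset("./large-assets")))
         .destinationBucket(destinationBucket)
         .memoryLimit(512)
         .build();

Per the note above, each distinct memoryLimit value adds its own Lambda handler to the stack.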

Notes

Development

The custom resource is implemented in Python 3.6 in order to leverage the AWS CLI for aws s3 sync. The code is under lib/lambda and unit tests are under test/lambda.

This package requires Python 3.6 at build time in order to create and test the custom resource Lambda bundle. It also relies on a few bash scripts, so it might be tricky to build on Windows.

Roadmap


Copyright © 2021. All rights reserved.