stacker.hooks package

Submodules

stacker.hooks.aws_lambda module

stacker.hooks.aws_lambda.select_bucket_region(custom_bucket, hook_region, stacker_bucket_region, provider_region)[source]

Returns the appropriate region to use when uploading functions.

Selects the appropriate region for the bucket to which Lambda payloads are uploaded.

Parameters:
  • custom_bucket (str, None) – The custom bucket name given via the bucket kwarg of the aws_lambda hook, if any.
  • hook_region (str) – The contents of the bucket_region argument to the hook.
  • stacker_bucket_region (str) – The contents of the stacker_bucket_region global setting.
  • provider_region (str) – The region being used by the provider.
Returns:

The appropriate region string.

Return type:

str
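The selection precedence can be sketched as follows (a minimal re-implementation for illustration, inferred from the parameter descriptions above; not the exact source):

```python
def select_bucket_region(custom_bucket, hook_region, stacker_bucket_region,
                         provider_region):
    """Return the region for the Lambda payload bucket.

    A custom bucket is paired with the hook's bucket_region; otherwise
    the global stacker_bucket_region applies. In either case the
    provider's region is the final fallback.
    """
    region = hook_region if custom_bucket else stacker_bucket_region
    return region or provider_region
```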

stacker.hooks.aws_lambda.upload_lambda_functions(context, provider, **kwargs)[source]

Builds Lambda payloads from user configuration and uploads them to S3.

Constructs ZIP archives containing files matching specified patterns for each function, uploads the result to Amazon S3, then stores objects (of type troposphere.awslambda.Code) in the context’s hook data, ready to be referenced in blueprints.

Configuration consists of some global options and a dictionary of function specifications. In the specifications, each key indicates the name of a function (used for generating names for artifacts), and the value determines which files to include in its ZIP (see details below).

Payloads are uploaded to either a custom bucket or stacker's default bucket, with the key containing their checksum, to allow repeated uploads to be skipped in subsequent runs.

The configuration settings are documented as keyword arguments below.

Keyword Arguments:
 
  • bucket (str, optional) – Custom bucket to upload functions to. Omitting it will cause the default stacker bucket to be used.
  • bucket_region (str, optional) – The region in which the bucket should exist. If not given, the region will either be that of the global stacker_bucket_region setting, or else the region in use by the provider.
  • prefix (str, optional) – S3 key prefix to prepend to the uploaded zip name.
  • follow_symlinks (bool, optional) – Whether symlinks should be followed and included in the ZIP artifact. Default: False
  • payload_acl (str, optional) – The canned S3 object ACL to be applied to the uploaded payload. Default: private
  • functions (dict) –

    Configurations of desired payloads to build. Keys correspond to function names, used to derive key names for the payload. Each value should itself be a dictionary, with the following data:

    • path (str):
      Base directory or path of a ZIP file of the Lambda function payload content.

      If it is not an absolute path, it will be considered relative to the directory containing the stacker configuration file in use.

      When a directory, the files it contains will be added to the payload ZIP, according to the include and exclude patterns. If no patterns are provided, all files in the directory (respecting default exclusions) will be used.

      Files are stored in the archive with path names relative to this directory. So, for example, all the files contained directly under this directory will be added to the root of the ZIP file.

      When a ZIP file, it will be uploaded directly to S3. The hash of the whole ZIP file will be used as the version key by default, which may cause spurious rebuilds when building the ZIP in different environments. To avoid that, explicitly provide a version option.

    • include (str or list[str], optional):
      Pattern or list of patterns of files to include in the payload. If provided, only files that match these patterns will be included in the payload.

      Omitting it is equivalent to accepting all files that are not otherwise excluded.

    • exclude (str or list[str], optional):
      Pattern or list of patterns of files to exclude from the payload. If provided, any files that match will be ignored, regardless of whether they match an inclusion pattern.

      Commonly ignored files are already excluded by default, such as .git, .svn, __pycache__, *.pyc, .gitignore, etc.

    • version (str, optional):
      Value to use as the version for the current function, which will be used to determine if a payload already exists in S3. The value can be any string, such as a version number or a git commit.

      Note that when this value is set, you must change it manually to force a payload to be re-built and re-uploaded.

Examples

pre_build:
  - path: stacker.hooks.aws_lambda.upload_lambda_functions
    required: true
    enabled: true
    data_key: lambda
    args:
      bucket: custom-bucket
      follow_symlinks: true
      prefix: cloudformation-custom-resources/
      payload_acl: authenticated-read
      functions:
        MyFunction:
          path: ./lambda_functions
          include:
            - '*.py'
            - '*.txt'
          exclude:
            - '*.pyc'
            - test/

from troposphere.awslambda import Function
from stacker.blueprints.base import Blueprint

class LambdaBlueprint(Blueprint):
    def create_template(self):
        code = self.context.hook_data['lambda']['MyFunction']

        self.template.add_resource(
            Function(
                'MyFunction',
                Code=code,
                Handler='my_function.handler',
                Role='...',
                Runtime='python2.7'
            )
        )

stacker.hooks.ecs module

stacker.hooks.ecs.create_clusters(provider, context, **kwargs)[source]

Creates ECS clusters.

Expects a “clusters” argument, which should contain a list of cluster names to create.

Parameters:
  • clusters (list) – a list of cluster names to create.

Returns: boolean for whether or not the hook succeeded.
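For example, the hook might be configured like this (cluster names are placeholders):

```yaml
pre_build:
  - path: stacker.hooks.ecs.create_clusters
    required: true
    args:
      clusters:
        - app-cluster
        - worker-cluster
```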

stacker.hooks.iam module

stacker.hooks.iam.create_ecs_service_role(provider, context, **kwargs)[source]

Used to create the ecsServiceRole, which currently has to be named exactly that, and so cannot be created via CloudFormation. See:

http://docs.aws.amazon.com/AmazonECS/latest/developerguide/IAM_policies.html#service_IAM_role

Returns: boolean for whether or not the hook succeeded.
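A minimal hook configuration might look like this (sketch; beyond the standard provider/context, no arguments are documented for this hook):

```yaml
pre_build:
  - path: stacker.hooks.iam.create_ecs_service_role
    required: true
```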

stacker.hooks.iam.ensure_server_cert_exists(provider, context, **kwargs)[source]
stacker.hooks.iam.get_cert_contents(kwargs)[source]

Builds parameters with server cert file contents.

Parameters: kwargs (dict) – The keyword args passed to ensure_server_cert_exists, optionally containing the paths to the cert, key, and chain files.
Returns: A dictionary containing the appropriate parameters to supply to upload_server_certificate; an empty dictionary if there is a problem.
Return type: dict

stacker.hooks.keypair module

stacker.hooks.keypair.create_key_pair(ec2, keypair_name)[source]
stacker.hooks.keypair.create_key_pair_from_public_key_file(ec2, keypair_name, public_key_path)[source]
stacker.hooks.keypair.create_key_pair_in_ssm(ec2, ssm, keypair_name, parameter_name, kms_key_id=None)[source]
stacker.hooks.keypair.create_key_pair_local(ec2, keypair_name, dest_dir)[source]
stacker.hooks.keypair.ensure_keypair_exists(provider, context, **kwargs)[source]

Ensure a specific keypair exists within AWS.

If the key doesn’t exist, upload it.

Parameters:
  • provider (stacker.providers.base.BaseProvider) – provider instance
  • context (stacker.context.Context) – context instance
  • keypair (str) – name of the key pair to create
  • ssm_parameter_name (str, optional) – path to an SSM store parameter to receive the generated private key, instead of importing it or storing it locally.
  • ssm_key_id (str, optional) – ID of a KMS key to encrypt the SSM parameter with. If omitted, the default key will be used.
  • public_key_path (str, optional) – path to a public key file to be imported instead of generating a new key. Incompatible with the SSM options, as the private key will not be available for storing.
Returns:

In case of failure, False; otherwise a dict containing:
  • status (str): one of “exists”, “imported” or “created”
  • key_name (str): name of the key pair
  • fingerprint (str): fingerprint of the key pair
  • file_path (str, optional): if a new key was created, the path to the file where the private key was stored

Return type:

dict or bool
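For example (the keypair name and parameter path are placeholders; ssm_parameter_name and public_key_path are mutually incompatible, so only the SSM variant is shown):

```yaml
pre_build:
  - path: stacker.hooks.keypair.ensure_keypair_exists
    required: true
    args:
      keypair: my-app-keypair
      ssm_parameter_name: /infra/keypairs/my-app-keypair
      # ssm_key_id: alias/my-kms-key  # optional; default KMS key if omitted
```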

stacker.hooks.keypair.get_existing_key_pair(ec2, keypair_name)[source]
stacker.hooks.keypair.import_key_pair(ec2, keypair_name, public_key_data)[source]
stacker.hooks.keypair.interactive_prompt(keypair_name)[source]
stacker.hooks.keypair.read_public_key_file(path)[source]

stacker.hooks.route53 module

stacker.hooks.route53.create_domain(provider, context, **kwargs)[source]

Create a domain within route53.

Returns: boolean for whether or not the hook succeeded.
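A hook configuration for this might look like the following (hedged: the argument name domain is an assumption, and the domain shown is a placeholder):

```yaml
pre_build:
  - path: stacker.hooks.route53.create_domain
    required: true
    args:
      domain: example.com
```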

stacker.hooks.utils module

stacker.hooks.utils.full_path(path)[source]
stacker.hooks.utils.handle_hooks(stage, hooks, provider, context)[source]

Used to handle pre/post_build hooks.

These are pieces of code that we want to run before/after the builder builds the stacks.

Parameters:
  • stage (str) – The current stage (pre_build, post_build, etc).
  • hooks (list) – A list of stacker.config.Hook containing the hooks to execute.
  • provider (stacker.providers.base.BaseProvider) – The provider the current stack is using.
  • context (stacker.context.Context) – The current stacker context.
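The dispatch loop can be sketched roughly as follows (hypothetical: hooks are shown as plain dicts with an already-resolved callable, whereas the real implementation resolves the dotted path on each stacker.config.Hook object):

```python
def handle_hooks_sketch(stage, hooks, provider, context):
    """Run each enabled hook for the given stage, stopping if a
    required hook fails (returns a falsy value)."""
    for hook in hooks:
        # Hooks default to enabled; a disabled hook is skipped entirely.
        if not hook.get("enabled", True):
            continue
        # In stacker, hook["path"] is a dotted path resolved to a callable;
        # here we assume it has already been resolved.
        fn = hook["callable"]
        result = fn(provider=provider, context=context,
                    **hook.get("args", {}))
        # A falsy result from a required hook aborts the run.
        if not result and hook.get("required", True):
            raise RuntimeError(
                "required hook failed during stage %s" % stage)
```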

Module contents