
Easily create, deploy, orchestrate and manage tasks in AWS

CloudReactor provides Dockerfiles and scripts that enable you to get up and running with a local Python development environment, deploy code seamlessly to your AWS environment, and monitor, manage and orchestrate deployed tasks with CloudReactor – all in record time and with a minimum of fuss.


Table of contents

  1. Getting Started
    1. Set up AWS infrastructure, link to CloudReactor
    2. Deploy example tasks to AWS
  2. The example tasks
  3. Development workflow
    1. Running the tasks locally
    2. More development options
  4. Deploying your own tasks
    1. Adding new tasks
    2. Removing tasks
  5. Next steps
  6. Contact us

Getting Started

First, we have to set up the serverless AWS infrastructure (ECS) where your tasks will run, and link it to CloudReactor. More specifically:

  1. Infrastructure to run tasks in your AWS environment: ECS cluster, VPC, etc.
  2. A role in AWS that allows CloudReactor to schedule and manage tasks that you deploy
  3. Letting CloudReactor know what that role and other AWS settings are

You might already have some of this set up (e.g. an ECS cluster, or a VPC) – or you might not. Either way, we’ve created a super easy AWS Setup Wizard that can ensure you have everything you need. It takes < 15 minutes.

Because the AWS Setup Wizard will be setting up ECS clusters, VPCs, subnets etc., you’ll need to run it with AWS Administrator user privileges. The code behind the wizard can be inspected at the above link.

If you don’t want to use the Setup Wizard for some reason, you can refer to the manual setup instructions.

Either run the wizard, or complete manual setup, before moving to the next step.

Note: setting up an AWS user with deployment permissions

Note that below, we’ll be using an AWS user to deploy tasks to AWS ECS.

This AWS user must have permissions to deploy Docker images to ECR and to create tasks in ECS. You can either:

  1. Use an admin user or a power user with broad permissions; or
  2. Create a user and role with specific permissions for deployment.

If you’re using an admin or power user, feel free to skip to the next step.

However, if you want to create a new user and role, we’ve prepared a “CloudReactor AWS deployer” CloudFormation template with all the necessary permissions. In AWS, select the CloudFormation service, and upload this template. It will create a user with all the necessary permissions (and output user credentials – save these for use later!).

For more details, see AWS permissions required to deploy.

Deploy example tasks to AWS

With the underlying infrastructure set up, we can now go ahead and deploy tasks to AWS and have them managed by CloudReactor.

We do this by providing a “quickstart” repo. The repo contains simple toy tasks, as well as scripts that enable easy deployment to AWS. You can replace these toy tasks with your own scripts.

First, fork this repo. Then, once forked, clone it locally:

git clone [URL of your forked repo]

This repo contains a Dockerfile that defines a container with all the dependencies (Python, Ansible, the AWS CLI, etc.) required to build and deploy your tasks. Our next step is to build this “local” container, and then use it to deploy tasks from your local machine. This is the most straightforward way to configure and deploy, since:

  • you don’t need to have python installed directly on your machine
  • you don’t need to add another set of dependencies to your libraries
  • you can deploy irrespective of your OS (e.g. if you’re running Windows).

Note: You can also use this method on an EC2 instance that has an instance profile containing a role that has permissions to create ECS tasks. When deploying, the AWS CLI in the container will use the temporary access key associated with the role assigned to the EC2 instance.

However, if you want to deploy natively – perhaps you have Python installed (possibly in a VM), and you want to use Python directly to deploy – see this section.

Otherwise, let’s continue:

  1. Install Docker Compose. Once installed, run it (if using Windows or Mac, the “Docker Desktop” installation includes Docker Compose, so just install / run that)
  2. AWS configuration: Copy deploy/docker_deploy.env.example to deploy/docker_deploy.env
    • Fill in your AWS access key, access key secret, and default region. The AWS keys used here must be for a user with privileges to deploy tasks to AWS ECS, as mentioned above.
    • The access key and secret should be for the AWS user you plan to deploy with, possibly created above in “Note: setting up an AWS user with deployment permissions”.
    • You may also populate this file with a script you write yourself, for example with something that uses the AWS CLI to assume a role and gets temporary credentials.
    • If you are running this on an EC2 instance with an instance profile that has deployment permissions, you can leave this file blank.
  3. CloudReactor configuration: Copy deploy/vars/example.yml to deploy/vars/<environment>.yml, where <environment> is the name of the Run Environment created in CloudReactor above (e.g. staging, production)
    • Open the .yml file you just created, and enter your CloudReactor API key next to api_key; or, if you’re not using CloudReactor, set enabled: false instead
  4. Build the Docker container that will deploy the project.
    • In a bash shell, run:
       ./docker_build_deployer.sh <environment>
    • In a Windows command prompt, run:
       docker_build_deployer.bat <environment>

      <environment> is a required argument, which is the name of the Run Environment and .yml file created immediately above.

    This step is only necessary once, unless you add additional configuration to deploy/Dockerfile.

  5. With the deployment container created, we can deploy the tasks
    • In a bash shell, run:
       ./docker_deploy.sh <environment> [task_names]
    • In a Windows command prompt, run:
       docker_deploy.bat <environment> [task_names]

      In both of these commands, <environment> is a required argument: the name of the Run Environment. [task_names] is an optional argument: a comma-separated list of tasks to deploy. In this project, this can be one or more of task_1, file_io, etc., separated by commas. If [task_names] is omitted, all tasks will be deployed.

    To troubleshoot deployment issues:

    • In a bash shell, run:
       ./docker_deploy_shell.sh <environment>
    • In a Windows command prompt, run:
       docker_deploy_shell.bat <environment>

      These commands will take you to a bash shell inside the deployer Docker container, where you can re-run the deployment script and inspect the files it produces in the build/ directory.
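For reference, a filled-in deploy/docker_deploy.env might look like the following. This is a hedged sketch: the values are placeholders, and the variable names shown are the standard environment variables the AWS CLI reads, which is how the deployment container is assumed to pick up your credentials:

```shell
# deploy/docker_deploy.env -- placeholder values, do not commit real credentials
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AWS_DEFAULT_REGION=us-west-2
```

Remember that this file can be left blank if you are deploying from an EC2 instance whose instance profile already has deployment permissions.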

The example tasks

Successfully deploying this example project will push two ECS tasks to AWS. You can log into CloudReactor to see these tasks.

The code for each task is in the ./src folder, i.e. ./src/task_1.py and ./src/file_io.py. Feel free to take a look.

Next, open ./deploy/vars/common.yml – you’ll see entries for both task_1 and file_io. You can think of this as a manifest of tasks to push to ECS; the CloudReactor deployment script you just ran will look for the files defined here, push them to ECS, and register them with CloudReactor.

These tasks have the following behavior:

  • task_1 prints 30 numbers and exits successfully. While it does so, it updates the “successful” count and the “last status message” shown in CloudReactor, using the CloudReactor status updater library. It is configured to run daily via deploy/vars/common.yml.
  • file_io uses non-persistent file storage to write and read numbers
  • web_server uses a Python library dependency (Flask) to implement a web server, and shows how to link an AWS Application Load Balancer (ALB) to a service. It requires that an ALB and target group be set up already, so it is not enabled by default (i.e. it is commented out in the ./deploy/vars/common.yml file).
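To give a flavor of what these tasks do, here is a minimal, self-contained sketch in the spirit of file_io. This is not the exact code in ./src, just an illustrative approximation: it writes numbers to a temporary (non-persistent) file, reads them back, and cleans up.

```python
# Illustrative sketch of a file_io-style task (not the exact ./src/file_io.py code).
import os
import tempfile


def write_and_read_numbers(count=10):
    """Write `count` numbers to a non-persistent temp file, then read them back."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            for i in range(count):
                f.write(f"{i}\n")
        with open(path) as f:
            numbers = [int(line) for line in f]
        return numbers
    finally:
        os.remove(path)  # the storage is non-persistent; clean up after ourselves


if __name__ == "__main__":
    print(write_and_read_numbers(5))  # -> [0, 1, 2, 3, 4]
```

In the deployed task, the file lives on the ECS task's ephemeral storage, so it disappears when the task exits.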

Development workflow

Running the tasks locally

The tasks are set up to be run with Docker Compose, via docker-compose.yml. For example, you can build the Docker image that runs the tasks by typing:

docker-compose build

(You only need to run this again when you change the dependencies required by the project.)

Then to run, say task_1, type:

docker-compose run --rm task_1

Docker Compose is set up so that changes to the environment file in deploy/files/ and to the files in src will be available without rebuilding the image.
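That “no rebuild needed” behavior typically comes from bind mounts in the Compose file. A simplified sketch of what such a service definition might look like (the service name, command, and container path here are illustrative, not necessarily the exact contents of this repo's docker-compose.yml):

```yaml
# Illustrative sketch only -- not the repo's exact docker-compose.yml
services:
  task_1:
    build: .
    command: python src/task_1.py
    volumes:
      # Bind-mounting src/ makes local code edits visible inside the
      # container immediately, without rebuilding the image
      - ./src:/home/appuser/src
```

Only dependency changes (e.g. a new library in the image) require re-running docker-compose build.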

When ready to deploy, as before:

  • In a bash shell, run:

      ./docker_deploy.sh <environment> [task_names]
  • In a Windows command prompt, run:

      docker_deploy.bat <environment> [task_names]

task_names is optional; if omitted, all tasks defined in ./deploy/vars/common.yml will be pushed.

More development options

See the development guide for instructions on how to debug, add dependencies, and run tests and checks.

Deploying your own tasks

Adding new tasks

Now that you have deployed the example tasks, you can move your existing code to this project. To add your own task:

  1. Place the task code itself in a new file in ./src, e.g. new_task.py
  2. Add a configuration block for the task in deploy/vars/common.yml, below task_name_to_config. A minimal configuration block is:

       new_task:
         <<: *default_task_config
         command: "python src/new_task.py"

    • `<<: *default_task_config` allows new_task to inherit properties from the default task configuration
    • `command: "python src/new_task.py"` contains the command to run (in this case, executing new_task.py with python)
    • Additional parameters include the run schedule (a cron expression), retry parameters, and environment variables; see the additional configuration documentation for details.
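To make this concrete, here is what a minimal task file might look like. The filename src/new_task.py is the hypothetical one used above; the only real assumption is that a task is an ordinary Python script that runs to completion and exits with status 0 on success:

```python
# src/new_task.py -- a hypothetical minimal task
import sys


def main():
    # Do some trivial work so the task has observable output
    total = sum(range(1, 11))
    print(f"Sum of 1..10 is {total}")  # prints "Sum of 1..10 is 55"
    return 0  # exit status 0 signals success to ECS / CloudReactor


if __name__ == "__main__":
    sys.exit(main())
```

Once the file and its configuration block are in place, re-run the deployment script with new_task as the task name to push just this task.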

Removing tasks

Delete tasks from within the CloudReactor dashboard. This will also remove the task from AWS.

You should also remove the reference to the task (and, if you want, the task code itself) from ./deploy/vars/common.yml. Otherwise, running ./docker_deploy.sh <environment> (without task names) will (re-)push all tasks – which might include tasks you had intended to remove.

For example, if you want to delete the task_1 task, open ./deploy/vars/common.yml and delete the entire task_1: block, i.e.:

  task_1:
    <<: *default_task_config
    description: "This description shows up in CloudReactor dashboard"
    command: "python src/task_1.py"
    schedule: cron(9 15 * * ? *)
    wrapper:
      <<: *default_task_wrapper
      enable_status_updates: true

Next steps

Contact us

Hopefully, this example project has helped you get up and running with ECS and CloudReactor. Feel free to reach out to us to set up an account, or if you have any questions or issues!