
Why CloudReactor?

The volume and complexity of data generated by companies are always growing: your Customer Success team just set up a ticket-tracking system and wants to pull metrics from it. Engineering wants to monitor data deliveries to customers, or from suppliers. Finance wants to join marketing data with revenue data to measure lifetime value by channel.

All these requests require ETL tasks to be written, and over time those tasks accumulate. Keeping track of the current status of each task becomes more burdensome, and the frequency of failures increases. When a task fails or encounters an error, you need to quickly figure out which task it was and why.

CloudReactor provides a simple way to deploy, monitor, manage and orchestrate tasks.

How is CloudReactor different from other schedulers?

We aim to make the end-to-end developer experience as painless as possible.

  • Minimize developer headaches: our Quickstart repo comes with a Python Docker container. Containerization ensures that every engineer on your team develops in an identical environment, and that production runs in that same environment too.
  • Deploying to AWS ECS: deploy to AWS with a single command.
  • Running tasks on ECS Fargate: ECS Fargate is serverless, which eliminates server maintenance and reduces cost (see below for more).
  • Monitoring and managing tasks: the CloudReactor SaaS dashboard lets you view historical executions, change run schedules, and stop/start tasks with a few clicks. Once CloudReactor is integrated with a PagerDuty account, alert methods can be set up for any task or workflow with a single click.
  • Orchestrating tasks: our drag-and-drop workflow composer lets you create a workflow in seconds.
  • System maintenance: CloudReactor is hosted, meaning you don’t need to set up or maintain any additional servers, databases, message queues, etc.
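
As a sketch of what a containerized task environment like the one above might look like, here is a minimal Dockerfile for a Python task image. This is illustrative only, assuming a `requirements.txt` and a `src/main.py` entry point; it is not the actual contents of the CloudReactor Quickstart repo:

```dockerfile
# Illustrative sketch — not the actual Quickstart Dockerfile.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the task code itself
COPY src/ ./src/

# The container runs one task and exits, as a scheduled ECS task expects
CMD ["python", "-m", "src.main"]
```

Because every engineer builds from the same image definition, "works on my machine" problems largely disappear, and the image you test locally is the one that runs in production.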

Why should I use ECS?

Using AWS ECS with the Fargate execution method to run tasks has several benefits:

  1. Near-infinite scalability: run as many task instances as you want at any time.
  2. No EC2 instances or other server hardware to manage: no security patches, library upgrades, compatibility issues, or downtime to deal with. This means greater reliability and more time for development.
  3. Running Docker images leads to more predictable, isolated execution. Here’s a good summary of the advantages of Docker.
  4. Reduced costs: you only pay for the CPU/memory you reserve while your tasks are running.
  5. Reliable scheduling handled by AWS, without a separate scheduling server (and therefore no risk of that server hanging).
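
To make the cost point concrete, here is a minimal ECS task definition fragment for Fargate. The task-level `cpu` and `memory` values are the reservation you pay for only while the task runs; the family name and image URI below are placeholders, not values from any real deployment:

```json
{
  "family": "nightly-etl",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "etl",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:latest",
      "essential": true
    }
  ]
}
```

Here 512 CPU units (0.5 vCPU) and 1024 MiB of memory are reserved per run; Fargate only allows certain CPU/memory combinations, so the two values must be chosen together.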