The hidden cost of reusable Terraform
Why reuse breaks down as requests scale.
PRESENTED BY INCIDENT.IO

AI is starting to show up as a meaningful catalyst in SRE workflows. Incident.io recently hosted a product showcase that digs into what “AI‑powered reliability” actually looks like in practice.
If you’ve been wondering whether AI can really reduce toil, automate operational overhead, or shrink the time it takes to resolve 3 a.m. incidents by 80%, this session tackles those claims with real workflows and helpful demos.
For teams exploring how AI fits into their reliability stack, this session is a chance to dive deeper. View the playback now.
Hey there,
Imagine you are a platform engineer, and a developer asks for an S3 bucket for testing. You write the Terraform configuration, apply it, and confirm everything works.
A week later, another request comes in, followed by more, and before long you are managing dozens of configuration files with small variations, version drift, and constant updates just to keep environments aligned.
What started as simple infrastructure quickly turns into ongoing maintenance. The pattern is familiar, and it does not scale.
In today’s issue, we highlight:
How Kratix (an open-source platform framework) removes repetitive infrastructure setup
What Promises are and why they simplify platform management
How Kratix handles resource requests in a scalable, consistent way
Let's dive in.
Was this email forwarded to you? Subscribe here to get weekly updates delivered straight to your inbox.
Why traditional workflows break down.
Traditional infrastructure workflows struggle when consistency and scale are required. Every team needs similar resources, but small differences in configuration lead to duplication, drift, and manual fixes.
Kratix addresses this by letting platform teams define their platform once, using a Promise model. Developers then request resources through an API, while Kratix handles provisioning automatically behind the scenes.
A Promise captures the standards your platform team wants to enforce and applies them the same way every time. Instead of maintaining separate configurations for each bucket, Jenkins instance, or database, you define a single Promise that works across environments.
This shift changes how teams operate day to day:
Standardized resource provisioning without duplicated configuration
Fewer errors caused by manual changes and version conflicts
Faster delivery of resources without giving up platform control
By clearly separating platform definition from resource consumption, Kratix enables teams to move faster while maintaining reliable and aligned infrastructure that meets organizational standards.

How Promises enable self-service.
At the core of Kratix is the Promise, the mechanism that allows developers to request resources without relying on manual setup from the platform team.
A Promise is a YAML definition created by the platform team that specifies:
The type of resource to provision, such as S3 buckets or Jenkins instances
The parameters developers can customize, like names or sizes
The provisioning logic, whether implemented with Helm, Terraform, or custom scripts
Promises are implemented as Kubernetes Custom Resource Definitions, extending the cluster with platform-specific capabilities. Once deployed to the Kratix control plane, each Promise becomes a resource developers can request directly.
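As a rough sketch, a minimal Promise for the S3 bucket example could look like the following. The group name (`marketplace.example.com`), resource names, and pipeline image are placeholders, not from the article; the overall shape follows Kratix's v1alpha1 Promise format, which pairs an API definition with provisioning workflows.

```yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: s3bucket                                # placeholder Promise name
spec:
  # The API developers use to request a bucket: a standard Kubernetes CRD.
  api:
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: s3buckets.marketplace.example.com   # placeholder group
    spec:
      group: marketplace.example.com
      scope: Namespaced
      names:
        kind: S3Bucket
        plural: s3buckets
        singular: s3bucket
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    bucketName:                 # the one parameter exposed
                      type: string
  # The provisioning logic: a pipeline container that could run
  # Terraform, Helm, or custom scripts against each request.
  workflows:
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: instance-configure
          spec:
            containers:
              - name: provision
                image: myorg/s3-provisioner:v0.1.0   # placeholder image
```

The key design point is that the CRD is the developer-facing contract, while everything under `workflows` stays under the platform team's control.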
When a request is submitted, Kratix manages the full lifecycle:
Validating the request against defined standards
Dispatching execution to worker clusters
Updating the request with status and outputs, such as URLs or credentials
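To make that lifecycle concrete, a developer's request against the Promise above would be an ordinary custom resource that they apply with `kubectl`; the names and fields below are illustrative, not from the article. Kratix validates it, runs the configure workflow on a worker cluster, and writes status and outputs back to the same object.

```yaml
apiVersion: marketplace.example.com/v1alpha1   # group defined by the Promise (placeholder)
kind: S3Bucket
metadata:
  name: team-a-test-bucket
  namespace: default
spec:
  bucketName: team-a-testing                   # the only knob the platform team exposed
```

The developer never touches Terraform: they submit this resource and read provisioning results off its status.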
With the control plane coordinating requests and worker clusters handling execution, Kratix scales cleanly as demand increases while keeping standards consistent.
We walk through how Promises move from definition to real resource requests in the full guide, which you can read here.
Learn more about Kratix
Explore real-world Kratix use cases, integrations, and insights to streamline platform engineering and empower developers.
Setting up MongoDB with Kratix and Port - A practical walkthrough of creating a MongoDB Promise and integrating it with Port's developer portal.
Kratix + Backstage: Upgrade Your Portal to a Platform - Learn how to connect Kratix Promises with Backstage to give developers a self-service interface for platform resources.
The Missing Middle: Why Platform Orchestration is the Key to Better Developer Platforms - Explores why platform teams need orchestration tools like Kratix to bridge the gap between infrastructure and developer needs.
And it’s a wrap!
See you Friday for the week’s news, upcoming events, and opportunities.
If you found this helpful, share this link with a colleague or fellow DevOps engineer.
Divine Odazie
Founder of EverythingDevOps