Announcing the Open Source Terraform Provider for OpenAI

We wrote and open sourced a Terraform provider for OpenAI resources.
As with any Terraform provider, the reason is simple: past a certain scale, ClickOps'ing your configuration becomes messy and leads to inconsistent configurations, and inconsistent configurations are both a security and a productivity problem.
While developing this provider, we've learned that there are two sides to the OpenAI APIs. Let's talk about both.
OpenAI Administration APIs
Administration API endpoints let you manage your organization: creating projects, granting project access, creating API keys, sending out invitations, configuring rate limits and more. This is probably the least exciting part of the API, but at the same time the most important one for Infrastructure as Code. We (and I hope you too) want to neatly manage all our OpenAI projects, service accounts and memberships, and keep them nicely connected with other cloud providers.
Our primary focus for this release was to get the administrative resources right - which was not that straightforward, given that, for example, there are no endpoints for provisioning users (but there are for inviting users), and there is no API for deleting a rate limit (thus requiring us to track limits and restore them to their initial default state on deletion).
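To illustrate that rate limit quirk, here is a hedged sketch of lowering a project's rate limit; the resource and attribute names below are assumptions based on the underlying Admin API fields, so check the provider documentation for the exact schema:
resource "openai_rate_limit" "gpt4_requests" {
  # Hypothetical schema - exact resource and attribute names may differ;
  # the project is defined in the example further below.
  project_id                = openai_project.claimora-mini.id
  model                     = "gpt-4.1-2025-04-14"
  max_requests_per_1_minute = 500
}
On terraform destroy, the provider restores the limit to its tracked default, since the API offers no delete operation.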
The first release of the provider will let you move user management entirely to Terraform, but keep in mind that inviting new users and granting those users additional access currently needs to be done in two steps.
To access the administration APIs, you need to use a special Admin API key, which is different from the key you would use for all other APIs. The OpenAI Terraform provider supports both API keys at the same time; you can set them with the OPENAI_ADMIN_KEY and OPENAI_API_KEY environment variables.
A quick example of how you could provision your OpenAI project:
terraform {
  required_version = ">= 1.12.2"

  required_providers {
    openai = {
      source  = "mkdev-me/openai"
      version = ">= 1.0.4"
    }
  }
}

provider "openai" {
  # Configuration will be automatically loaded from environment variables
}

resource "openai_project" "claimora-mini" {
  name = "claimora-mini"
}

resource "openai_project_service_account" "claimora-mini" {
  project_id = openai_project.claimora-mini.id
  name       = "claimora-mini-production"
}

resource "openai_invite" "claimora-mini-owners" {
  for_each = toset(["kirill@mkdev.me"])

  email = each.value
  role  = "owner"

  projects {
    id   = openai_project.claimora-mini.id
    role = "owner"
  }
}
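After terraform apply, the invited owners receive an email from OpenAI; once they accept the invitation, a follow-up apply can grant them additional access - that's the two-step flow mentioned above.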
OpenAI Platform APIs
Once you provision a project - via Infrastructure as Code - you can have some fun with over 20 resources that can make generative AI calls and pass their output to other resources. How about we try some vibe coding entirely with Terraform, mixing the AWS provider with the OpenAI provider?
To do that, we are going to use two resources: openai_model_response and openai_image_generation. The full GenAI flow will be:
Via the Responses API, create a prompt for generating an image;
Use this prompt to generate the image;
Again via the Responses API, create a tiny Lambda function that renders an HTML page with the image included.
resource "openai_model_response" "logo_prompt" {
model = "gpt-4.1-2025-04-14"
input = <<EOF
Create a prompt to generate super fun logo for a new Terraform OpenAI provider.
EOF
}
resource "openai_image_generation" "logo" {
prompt = openai_model_response.logo_prompt.output["text"]
model = "dall-e-3"
n = 1
size = "1024x1024"
}
resource "openai_model_response" "my_app" {
model = "gpt-4.1-2025-04-14"
input = <<EOF
Create a simple Lambda function handler, in Ruby, that returns a beautiful marketing HTML page,
that explains why Terraform OpenAI provider is the best way to use OpenAI in your infrastructure.
Use Tailwind CSS for styling.
Assume that the function is exposed over Function URL,
make sure to comply with the response format defined here: https://docs.aws.amazon.com/lambda/latest/dg/urls-invocation.html.
Answer with the code only, no other text, no markdown, no nothing, just Ruby code.
Make sure to use the logo generated by DALL-E 3 in the HTML page, logo URL is ${openai_image_generation.logo.data[0].url}.
EOF
}
The next step is to package this new code as an archive:
data "archive_file" "my_app" {
type = "zip"
source {
content = openai_model_response.my_app.output["text"]
filename = "${path.module}/function.rb"
}
output_path = "${path.module}/function-${var.index}.zip"
}
Note how openai_model_response.my_app.output is just a map of strings, containing whatever the OpenAI API returned. We decided against exposing fancier typed attributes for API responses, as that would limit the use cases you might want to build.
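Since the output is a plain map of strings, you can wire any field into other providers. Here is a minimal sketch using the hashicorp/local provider to dump the generated code to disk, for example to review it before deploying:
resource "local_file" "raw_response" {
  # Persist the generated Ruby code locally so you can inspect it.
  content  = openai_model_response.my_app.output["text"]
  filename = "${path.module}/generated-function.rb"
}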
Finally, we can deploy this code as an AWS Lambda function:
resource "aws_lambda_function" "my_app" {
filename = data.archive_file.my_app.output_path
function_name = "vibe-coded-lambda-${var.index}"
role = aws_iam_role.my_app.arn
source_code_hash = data.archive_file.my_app.output_base64sha256
handler = "modules/function.lambda_handler"
runtime = "ruby3.3"
}
resource "aws_lambda_function_url" "my_app" {
function_name = aws_lambda_function.my_app.function_name
authorization_type = "NONE"
}
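To grab the resulting endpoint right after terraform apply, you can expose the Function URL as an output:
output "app_url" {
  # Public URL of the vibe-coded landing page.
  value = aws_lambda_function_url.my_app.function_url
}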
This way, we are not only vibe coding our application - we also deploy it and expose it to the outside world in the same step. And to make it even more fun, we can wrap this code in a module and execute it 10 times:
module "main" {
count = 10
index = count.index
source = "./modules"
}
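For this to work, the module itself needs to declare the index variable used in the archive and function names above:
variable "index" {
  # Keeps the zip file and Lambda function name unique across the 10 instances.
  type = number
}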
The result is 10 different landing pages for our OpenAI Terraform Provider. Let's look at three examples:
Example 1
Example 2
Example 3
Infrastructure as Code with Generative AI
While the example above was built quickly as a fun way to demonstrate the features of this provider, it also hints at the powerful new platform engineering capabilities you could offer.
A similar, but more sophisticated, self-service module could enable your development teams - or even teams without any developers yet - to go from an idea to a deployed application in one prompt.
On one side, you have good old Terraform, neatly wrapping up all the infrastructure resources.
On the other side, within the same Terraform code, you can deploy a quick demo or even an initial release of your idea, tiny internal applications and whatnot. Depending on how you wrap the execution of your infrastructure code, you could roll out a whole internal GenAI app builder, running on top of the trusted, compliant and secure infrastructure you already built yourself with Terraform.
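As a purely illustrative sketch - the module name and variables here are hypothetical, not part of this provider - such a self-service interface could be as small as:
module "internal_app" {
  source = "./modules/genai-app" # hypothetical self-service module

  # One prompt in, one deployed application out.
  prompt = "Build an internal tool that tracks our conference talk submissions."
}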
Give the OpenAI Terraform Provider a try!
The Terraform OpenAI Provider is the first major open source project mkdev puts out into the world. Our initial focus was covering the administration APIs, but we also tried to cover all of the other APIs that OpenAI provides.
There are definitely some bugs to be fixed, some resources that don't function that well yet, and plenty of room for documentation improvements.
If you find this provider useful and feel like you can contribute - please do; we are happy to review and release any PRs that make this provider better. And if you just stumble upon a problem and don't have the capacity to submit a fix - just open an issue in the repo and we'll look into it.