Extending Terraform with custom providers

by Pedro Santos

April 19, 2022


Note: this is a fairly advanced topic. It assumes you have some experience with Go and understand the Terraform state and resource life-cycle.

One of Terraform’s most significant drawbacks is that there is no clean way of injecting custom functionality. The canonical solution is to use a local-exec provisioner combined with a shell script. In my opinion, this approach falls short for the following reasons:

  • The script in a local-exec provisioner contains implicit dependencies that need to be installed on any machine running the Terraform code. For example, if the local-exec provisioner calls a Node.js script, any machine deploying the infrastructure must have a compatible version of Node.js.
  • The script in a local-exec provisioner may break cross-compatibility between operating systems. For example, calling a bash script may not work on your colleague’s computer running Windows.
  • local-exec does not follow the same lifecycle rules as the provider resources. It does not interact with the tfstate nor understand when to create, update, or destroy its resources.

In my opinion, the cleanest way to add moderately to highly custom functionality is to create your own Terraform provider.

A simple example

A quick disclaimer: this example does not represent a real use case. My idea was first and foremost to create an example exposing the most functionality with the least code. Hence, some parts may look out of place or forced, and the code may not always follow best practices.

Say that, for some deployment, we’d like to create a local file with custom data. For that functionality, we’ve decided to write a custom provider. Our Terraform code should look something like this:

terraform {
  required_providers {
    myfile = {
      version = "=0.1.0"
      source  = "myorg.com/custom/myfile"
    }
  }
}

provider "myfile" {
  encoding = "utf8"
}

resource "myfile_file" "this" {
  path     = "some/path/to/file.txt"
  contents = "Some contents"
}

Implementing the provider

Our code will be structured as follows:

myfile-provider
├── go.mod
├── go.sum
├── main.go
└── myfile
    ├── client.go
    └── provider.go

The file main.go contains the entry point of our provider, client.go includes a simple filesystem client, and provider.go contains most of the provider functionality. The files go.mod and go.sum are needed for Go modules.
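
The implementation of client.go will not be shown in full; for reference, here is a minimal sketch of the interface the rest of the code assumes. The method names and signatures are inferred from how the resource functions below use the client, so the actual client.go may differ.

package myfile

// FileClient is the interface the provider code relies on.
// This sketch is inferred from how the resource functions use
// the client; the real client.go may differ.
type FileClient interface {
	// Create writes a new file at path with the given contents.
	Create(path, contents string) error
	// Read returns the current contents of the file at path.
	Read(path string) (string, error)
	// Update overwrites the file at path with new contents.
	Update(path, contents string) error
	// Delete removes the file at path.
	Delete(path string) error
	// Owner returns the filesystem owner of the file at path.
	Owner(path string) (string, error)
}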

The main.go file is defined as follows:

package main

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
	"terraform-provider-myfile/myfile"
)

func main() {
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() *schema.Provider {
			return myfile.Provider()
		},
	})
}

We’ll be using HashiCorp’s SDK to create our provider. The function Provider is defined in myfile/provider.go as follows:

package myfile

import (
	"context"
	"github.com/hashicorp/go-cty/cty"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider for custom file handling
func Provider() *schema.Provider {
	return &schema.Provider{
		Schema: map[string]*schema.Schema{
			"encoding": {
				Type:             schema.TypeString,
				Optional:         true,
				Default:          "utf8",
				ValidateDiagFunc: validateEncoding,
				Description:      "File Encoding",
			},
		},
		ConfigureContextFunc: providerConfigure,
		ResourcesMap:         getProviderResources(),
	}
}

//...

The parameter Schema defines the configuration options of the provider. In our case, we have a single option named encoding. We’ll use this option to define the file encoding, for example UTF-8, base64, etc. This option is an optional parameter that defaults to utf8. For this example, we’ve only implemented support for UTF-8 encoding. With the validateEncoding function, we ensure no other option is allowed and use the diag functionality to provide a helpful error message.

func validateEncoding(v interface{}, path cty.Path) diag.Diagnostics {
	encoding := v.(string)
	if encoding != "utf8" {
		return diag.Diagnostics{
			diag.Diagnostic{
				Severity: diag.Error,
				Summary:  "Only supported option is 'utf8'",
			},
		}
	}
	return diag.Diagnostics{}
}

The provider configuration step is defined in the function passed to ConfigureContextFunc. In this function we create our file client; the configuration parameters are passed in through the d argument. Since it’s not relevant to the Terraform functionality, I will not show the implementation of the file client. For now, it is enough to assume this client has an interface to create/update/read/delete a file and an interface to get the file owner.
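
For completeness, here is a minimal sketch of what providerConfigure could look like, assuming a hypothetical NewFileClient constructor in client.go that takes the encoding; the real implementation may differ.

func providerConfigure(
	ctx context.Context, d *schema.ResourceData,
) (interface{}, diag.Diagnostics) {
	// Read the provider-level configuration option.
	encoding := d.Get("encoding").(string)

	// NewFileClient is a hypothetical constructor from client.go.
	// The value returned here is passed to every resource function
	// as the m argument.
	client := NewFileClient(encoding)

	return client, diag.Diagnostics{}
}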

The parameter ResourcesMap in the function Provider defines the configuration options for the provider resources. We define a function named getProviderResources to return this parameter:

func getProviderResources() map[string]*schema.Resource {
	return map[string]*schema.Resource{
		"myfile_file": {
			Schema: map[string]*schema.Schema{
				"path": {
					Type:        schema.TypeString,
					Required:    true,
					ForceNew:    true,
					Description: "File path",
				},
				"contents": {
					Type:        schema.TypeString,
					Required:    true,
					ForceNew:    false,
					Description: "File contents",
				},
				"owner": {
					Type:        schema.TypeString,
					Computed:    true,
					Description: "File owner",
				},
			},
			CreateContext: resourceFileCreate,
			ReadContext:   resourceFileRead,
			UpdateContext: resourceFileUpdate,
			DeleteContext: resourceFileDelete,
		},
	}
}

We implement a single resource named myfile_file. This resource will have the following configuration parameters:

  • path: The path of the file. The file must not exist before being created by Terraform. This field is a required parameter. Changing this will create a new resource.
  • contents: The file contents. This field is a required parameter. Changing this will update the file in-place.
  • owner: The filesystem owner of the file created. This field is a computed parameter and serves as an output.

The last four parameters defined in this schema refer to the four lifecycle steps used by Terraform: create, read, update, and delete.

Let’s first look at how to implement the resource creation step, as defined in the function resourceFileCreate:

func resourceFileCreate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	path := d.Get("path").(string)
	contents := d.Get("contents").(string)
	client := m.(FileClient)

	err := client.Create(path, contents)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(path)

	owner, err := client.Owner(path)
	if err != nil {
		return diag.FromErr(err)
	}
	d.Set("owner", owner)

	return diag.Diagnostics{}
}

First, we get the schema parameters from the d argument. The m argument will be the value returned by the configuration step - in our case, the file client. We create the file using our file client, use the file path as the unique resource ID, and set the output parameter owner.

When we request a plan or apply, Terraform refreshes the state and decides what needs to change. This refresh step uses the read functionality, as implemented in the resourceFileRead function:

func resourceFileRead(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	path := d.Get("path").(string)
	client := m.(FileClient)

	contents, err := client.Read(path)
	if err != nil {
		return diag.FromErr(err)
	}
	if err = d.Set("contents", contents); err != nil {
		return diag.FromErr(err)
	}
	return diag.Diagnostics{}
}

Terraform will then compare the refreshed file contents to the contents declared in the configuration. If they differ, it will perform an in-place update, as defined in the resourceFileUpdate function:

func resourceFileUpdate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	path := d.Get("path").(string)
	contents := d.Get("contents").(string)
	client := m.(FileClient)

	err := client.Update(path, contents)
	if err != nil {
		return diag.FromErr(err)
	}
	return diag.Diagnostics{}
}

Finally, the deletion step is defined in the resourceFileDelete function:

func resourceFileDelete(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	path := d.Get("path").(string)
	client := m.(FileClient)

	err := client.Delete(path)
	if err != nil {
		return diag.FromErr(err)
	}
	return diag.Diagnostics{}
}

Testing the provider

I’ve shared the complete example on GitHub. Download the code with:

$ git clone https://github.com/MilheiroSantos/terraform-provider-example.git

The easiest way to test the provider is to place your compiled provider in Terraform’s implied local mirror directory. This directory is, among other possibilities:

  • For Linux and macOS: $HOME/.terraform.d/plugins
  • For Windows: %APPDATA%/terraform.d/plugins

The path expected for the custom provider is:

<TF_DIR>/<HOSTNAME>/<NAMESPACE>/<TYPE>/<VERSION>/<OS>_<ARCH>/your_provider

You can define HOSTNAME, NAMESPACE, and TYPE as you prefer. The VERSION should follow semver guidelines. The OS can be linux for Linux targets, darwin for macOS, or windows for Windows machines. The ARCH can be amd64 for x86-64 architectures or arm64 for ARMv8 machines.

On my side, I’m using Linux on amd64, so I’ll create the path with:

$ mkdir -p ~/.terraform.d/plugins/myorg.com/custom/myfile/0.1.0/linux_amd64/

Now we build the provider with:

$ go build terraform-provider-myfile && mv terraform-provider-myfile ~/.terraform.d/plugins/myorg.com/custom/myfile/0.1.0/linux_amd64/

Finally, we create the Terraform code leveraging the custom provider:

terraform {
  required_providers {
    myfile = {
      version = "=0.1.0"
      source  = "myorg.com/custom/myfile"
    }
  }
}

provider "myfile" {
  encoding = "utf8"
}

resource "myfile_file" "this" {
  path     = "${path.module}/here.txt"
  contents = "Hello Word!"
}

output "file_path" { 
  value = myfile_file.this.path 
}

output "file_contents" { 
  value = myfile_file.this.contents 
}

output "file_owner" { 
  value = myfile_file.this.owner
}

Run a terraform init and terraform apply:

$ terraform init && terraform apply -auto-approve

If all steps run successfully, you should see a file named here.txt in the root of your Terraform code with the contents ‘Hello World!’.

Shipping the provider

Unlike modules or state files, providers cannot be pulled directly from a git repository or blob storage: the registry functionality only supports HTTP endpoints implementing the provider registry protocol.

If you can share the provider with the community, the easiest way is to push it to Terraform’s public registry.

A quick workaround is to use a git repository, blob storage, or FTP server to store your custom providers. As an initial step in your deployment pipeline, copy the files into Terraform’s implied local mirror directory.

For more complex functionality, there are other possibilities. You could look into HashiCorp’s paid private registry, an open-source registry implementation, or even roll your own!

No matter what method you use to ship the provider, there are two important caveats:

  • Every time you update the provider, make sure you also update the provider’s version using the semver convention.
  • Make sure you compile your provider for every OS and architecture used by your team. Go’s compiler supports cross-compiling by setting a few environment variables.

As an example, the following snippet builds an executable for a 32-bit (386) Windows machine:

$ GOARCH=386 GOOS=windows go build terraform-provider-myfile

Further reading

This blog post barely scratches the surface of what is possible with custom providers. For example, we did not talk about custom data sources.

For more information, HashiCorp provides excellent documentation for custom providers here. You can also find a more in-depth tutorial here.

For more information about the custom provider file format and allowed locations, click here.

When compiling a provider, Terraform supports the same operating systems and architectures as the Go compiler. See here for the list of allowed values.

Finally, you can find more information about the private registry protocol here.

Happy coding!