
How we planned for Workspace Segmentation in Terraform

Part 2: Planning and configuring workspace segmentation across GitHub, Terraform, and Okta

Introduction

A brief recap of what we covered in last week’s blog post:

  • Introduction to Terraform:

  • I began using Terraform for AWS in 2017 and expanded to IT services like Okta after its Terraform provider was released in 2019.

  • Challenges with Legacy Terraform:

    • Inflexible, complex code designed for specific infrastructure.
    • Scaling issues addressed via custom scripts.
    • Single pipeline for all resources and confusing workflows.
  • Solutions Implemented:

    • Workspace Segmentation:
      • Divided into Core (critical automation) and Standard (team-specific features).
      • Development cycle: Local → Preview → Production.
      • Branch protections for stability.
    • Scalable Code:
      • Dynamic, flexible, and heavily documented.
      • Handles drift intelligently (corrects critical drift, tolerates controlled drift).
    • Monitoring & Alerts:
      • Dashboards for drift detection, resource states, and run metrics.
      • Automations for re-applying Terraform on drift.
  • Implementation Plan:

    • The seven-part series covers planning, workspace segmentation, automation, and security policy management.
  • Community Resources:

    • Engage with the #okta-terraform MacAdmins Slack channel for support.

Last week, I went over the history of our Terraform setup and the ideas we had while planning it out. This week, I will introduce our requirements for bringing Terraform into our environment and the segmentation across GitHub, Terraform, and Okta.

The Implementation

The best first step is to have a long conversation around the build pipelines, segmentation, and expected results, captured in Architectural Decision Records. The concerns that were raised:

  • Utilizing multiple GitHub repos wouldn’t reduce risk or surface issues; it would just add unnecessary complexity
    • For example, even if we split out several repos, each would still need a significant API key, just as the “main” repo does
    • Spreading code across multiple places could cause drift and conflicting code situations.
  • Utilizing “prod” and “preview” folders could cause drift and code alignment issues, especially when having to make “emergency fixes” in the production environment.
  • Emergency/break-glass fixes, which are common in coding environments, would cause longer-term issues by allowing drift and unwanted changes/configurations. We wanted to prevent this.
  • We want to have the ability to allow auto-run in our Terraform Workspaces where needed.
  • We need to abide by the EU’s DORA Regulation and, eventually, the US equivalent when they create their requirements.

Flowchart

Workspace Segmentation in Github

Directory Structure

The setup we will use allows a single repo to serve both preview and production while supporting multiple Terraform workspaces.

$ tree
.
├── Architectural Decision Records
│   ├── file1.MD
│   ├── file2.MD
│   ├── file3.MD
│   └── template.md
├── CODEOWNERS
├── README.md
├── prod
│   ├── groups-ad-hoc-requests.tf
│   ├── main.tf
│   └── variables.tf
└── core-groups
    ├── groups-admin.tf
    ├── groups-business-structure.tf
    ├── groups-core.tf
    ├── groups-countries.tf
    ├── groups-default-schema.tf
    ├── groups-legal-entities.tf
    ├── groups-offices.tf
    └── main.tf

Additionally, we can continue to create folders for configurations we need to auto-run, for example:

├── core-policies-authentication
├── core-users
├── core-branding

This way, the org can pair each folder with a separate workspace in Terraform, keep self-contained Terraform configurations (i.e., those that don’t depend on outputs or inputs from other workspaces), and auto-run the ones that need it, as sketched below. If we do not need auto-run, we can still use the prod folder for any other necessary items that will be run on demand or when needed.
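
To make the folder-to-workspace link concrete, here is a minimal sketch of how such a workspace could be declared with the hashicorp/tfe provider, assuming Terraform Cloud/Enterprise; the organization, repo identifier, and OAuth token variable are placeholders rather than our actual values:

variable "github_oauth_token_id" {
  type        = string
  description = "VCS OAuth token ID from the Terraform Cloud organization settings (placeholder)"
}

# Sketch: one workspace per repo folder, with auto-apply for self-contained folders
resource "tfe_workspace" "core_groups" {
  name              = "core-groups"
  organization      = "example-org" # placeholder organization
  working_directory = "core-groups" # the folder inside the repo
  auto_apply        = true          # enables auto-run for this workspace

  vcs_repo {
    identifier     = "example-org/okta-terraform" # placeholder repo
    branch         = "main"
    oauth_token_id = var.github_oauth_token_id
  }
}

Folders that should only run on demand would simply set auto_apply = false.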

Feature, Preview, and Main Branches and Protections

With the setup above, we avoid needing a separate preview/staging environment folder, which is what we want. Our preview and production environments should be direct mirrors of each other. Preview receives the configurations first to validate the configuration, changes, and any other setup before they go to production. Subsequently, we can test new features and services directly against our preview environment. Branch protections on main and preview keep that flow enforced; a sketch of managing them as code follows.
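
Since branch protections are central to this flow, here is a minimal sketch of how they could be managed as code with the integrations/github provider; the repository name and review count are illustrative assumptions, not our actual settings:

terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# Sketch: protect both long-lived branches (repository name and review count are illustrative)
resource "github_branch_protection" "protected" {
  for_each = toset(["main", "preview"])

  repository_id = "okta-terraform" # accepts the repository name or node ID
  pattern       = each.value

  required_pull_request_reviews {
    required_approving_review_count = 1
  }

  enforce_admins = true
}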

So, how do we deal with accidental PRs from Feature X > main? Once we are sure we have the necessary configuration set up and everything is working according to plan, we will create a GitHub Action that does one of two things:

  1. Auto-close the PR, commenting that it was opened against the wrong base and informing them they need to reopen it into Preview.
  2. Using GitHub’s API, automatically re-target the PR from the main branch to preview.

You can see the code below.

name: Enforce Branch PR Restrictions

on:
  pull_request_target:
    types:
      - opened
      - reopened
      - synchronize
      - edited

permissions:
  pull-requests: write # Ensure the workflow can comment and edit pull requests
  contents: read

jobs:
  advanced-actions:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Extract branch information
        id: branch_info
        run: |
          echo "source_branch=${{ github.event.pull_request.head.ref }}" >> $GITHUB_ENV
          echo "target_branch=${{ github.event.pull_request.base.ref }}" >> $GITHUB_ENV

          # Check if PR needs adjustments
          if [[ "${{ github.event.pull_request.base.ref }}" == "main" && "${{ github.event.pull_request.head.ref }}" != "preview" ]]; then
            echo "action_required=true" >> $GITHUB_ENV
          else
            echo "action_required=false" >> $GITHUB_ENV
          fi

      - name: Comment on Invalid PRs
        if: env.action_required == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "Commenting on invalid PR."
          gh pr comment ${{ github.event.pull_request.number }} \
            --body "## 🚨 Incorrect PR Merge Attempt Detected

            This PR targets the **main** branch from \`${{ github.event.pull_request.head.ref }}\`.

            ---

            ### ⚠️ Reasoning:
            Only PRs to **main** from **preview** are allowed. This ensures all changes go through the proper testing environment in **preview** before being merged into **main**.

            ---

            ### ✅ What Happens Next:
            This GitHub Action will automatically update your target branch to **preview**.

            ---

            ### 🙌 Your Options:
            If you would prefer to handle this process manually:
            1. **Close this PR**.
            2. Open a new PR targeting the **preview** branch.
            3. Follow the proper testing and review process before attempting to merge into **main**.

            Thank you for ensuring a smooth and efficient development workflow! 🚀"

      - name: Reassign Branch (Optional)
        if: env.action_required == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "Reassigning PR target to 'preview'."
          pr_url="${{ github.event.pull_request.url }}"
          curl -X PATCH \
            -H "Authorization: token $GH_TOKEN" \
            -H "Accept: application/vnd.github.v3+json" \
            "$pr_url" \
            -d '{"base":"preview"}'

Cleaning up git branches

GitHub can keep branches tidy by automatically deleting a PR’s head branch after it has been merged. This generally works great; however, since we will constantly PR preview against main, preview would be auto-deleted.

So let’s disable that feature and handle it in GitHub Actions instead; a sketch of turning the setting off as code is below. This way, the functionality is also change-managed if we ever need to adjust it.
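
If the repository settings are themselves managed in Terraform, turning the setting off could look like the following minimal sketch (the repository name is a placeholder, and an existing repository would be imported rather than created; the same toggle also exists in the repository’s settings UI):

terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# Sketch: turn off GitHub's automatic head-branch deletion (repository name is a placeholder)
resource "github_repository" "okta_terraform" {
  name                   = "okta-terraform"
  delete_branch_on_merge = false # the Action below decides what to delete instead
}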

Adding to the code above, create another file in .github/workflows and populate it with the following content:

name: Delete PR'ed Branches Except Preview or Main

on:
  pull_request:
    types:
      - closed

permissions:
  contents: write # Required for the workflow token to delete branch refs

jobs:
  delete-branches:
    if: github.event.pull_request.merged == true # Only run when the PR is merged
    runs-on: ubuntu-latest

    steps:
      - name: Determine the Head Branch
        id: branch-check
        run: echo "branch_name=${{ github.event.pull_request.head.ref }}" >> $GITHUB_ENV

      - name: Delete Head Branch (If Not Preview or Main)
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH_NAME: ${{ env.branch_name }}
        run: |
          if [ "${BRANCH_NAME}" != "preview" ] && [ "${BRANCH_NAME}" != "main" ]; then
            echo "Branch '${BRANCH_NAME}' is neither 'preview' nor 'main'. Proceeding with deletion."
            curl -X DELETE \
              -H "Authorization: token $GITHUB_TOKEN" \
              -H "Accept: application/vnd.github.v3+json" \
              https://api.github.com/repos/${{ github.repository }}/git/refs/heads/${BRANCH_NAME}
          else
            echo "Branch '${BRANCH_NAME}' is 'preview' or 'main'. Skipping deletion."
          fi

Workspace Segmentation in Terraform

Importing stock Okta Configurations in Preview and Prod

We do not need any wrappers here; we will just be using Terraform natively. Thankfully, Terraform 1.7 added support for a for_each loop in import blocks.

So, how do we write these imports so they work for both the preview and production environments, depending on the Terraform workspace?

Note

In the example below, the keys inside “import_ids” (preview and prod) must match the names of your Terraform workspaces, since the map is indexed by terraform.workspace.

locals {
  import_ids = {
    preview = {
      enhanced_dynamic_zone = "no*****************"
      exempt_ip_zone        = "no*****************"
      legacy_ip_zone        = "no*****************"
      blocked_ip_zone       = "no*****************"
    }
    prod = {
      enhanced_dynamic_zone = "nzo*****************"
      exempt_ip_zone        = "nzo*****************"
      legacy_ip_zone        = "nzo*****************"
      blocked_ip_zone       = "nzo*****************"
    }
  }[terraform.workspace]
}

## !!! The configuration below is for Okta Created Network Zones, which must be imported and managed via TF.
## !!! Any future Network Zones should be added after this segment of the code block.
## !!! --- Begin Okta Created Network Zone Block ---
## TODO: You can add IP Addresses below under the "exempt_ip_zone" resource.
## TODO: You can add IP Addresses to the "blocked_ip_zone" if needed.

# Local variable for consistent resource configuration
locals {
  network_zones = {
    enhanced_dynamic_zone = {
      name                          = "DefaultEnhancedDynamicZone"
      type                          = "DYNAMIC_V2"
      usage                         = "BLOCKLIST"
      status                        = "ACTIVE" # Set to ACTIVE or INACTIVE
      ip_service_categories_include = []       # Optional IP service categories for DYNAMIC_V2
      ip_service_categories_exclude = []       # Optional IP service categories for DYNAMIC_V2
    }
    exempt_ip_zone = {
      ### Note: this Zone is used to exclude an IP Address from ALL blocks within Okta
      ### Use this sparingly and cautiously. Addresses added here should only be added after being
      ### approved through our process that is outlined here: {YET TO BE CREATED}
      name     = "DefaultExemptIpZone"
      type     = "IP"
      gateways = ["3.4.5.6/32"]
      usage    = "POLICY"
      status   = "ACTIVE"
      proxies  = [] # Optional proxies for IP zones
    }
    legacy_ip_zone = {
      name     = "LegacyIpZone"
      type     = "IP"
      gateways = ["2.3.4.5/32"]
      usage    = "POLICY"
      status   = "ACTIVE"
      proxies  = [] # Optional proxies for IP zones
    }
    blocked_ip_zone = {
      name     = "BlockedIpZone"
      type     = "IP"
      gateways = ["1.2.3.4/32"]
      usage    = "BLOCKLIST"
      status   = "ACTIVE"
      proxies  = [] # Optional proxies for IP zones
    }
  }
}

# Import Block for IDs (Run Only Once)
import {
  for_each = local.import_ids
  to       = okta_network_zone.default[each.key]
  id       = each.value
}

# Resource Definitions
resource "okta_network_zone" "default" {
  for_each = local.network_zones

  name                          = each.value.name
  type                          = each.value.type
  usage                         = try(each.value.usage, "POLICY")
  status                        = try(each.value.status, "ACTIVE") # Default to ACTIVE if not specified
  gateways                      = try(each.value.gateways, [])
  proxies                       = try(each.value.proxies, [])                       # Optional proxies configuration
  ip_service_categories_include = try(each.value.ip_service_categories_include, []) # Optional for DYNAMIC_V2
  ip_service_categories_exclude = try(each.value.ip_service_categories_exclude, []) # Optional for DYNAMIC_V2
}
## !!! --- End Okta Created Network Zone Block ---

With this setup, we can run the imports once and remove the import blocks after the state contains the resources. Of course, this will vary for every “stock” application, but it should be a reasonably good example of how this works for each environment. As previously mentioned, each folder inside the GitHub repo is linked to a Terraform workspace, in combination with the GitHub branches; a sketch of that link follows.
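
As a point of reference for that link, each folder’s main.tf can carry a cloud block that selects its workspaces by tag; this is a sketch assuming Terraform Cloud with tag-based workspace selection, and the organization and tag names are placeholders:

terraform {
  cloud {
    organization = "example-org" # placeholder organization

    workspaces {
      # Matches this folder's workspaces, e.g. one preview and one prod workspace
      # both tagged "core-groups"; `terraform workspace select` picks between them.
      tags = ["core-groups"]
    }
  }
}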

Workspace Segmentation in Okta

Preview vs. Prod Environments

While I think it is a relatively straightforward concept, the setup above gives us the following:

  • Everything runs through our preview environment first.
  • We can do release-based setups of our Okta preview and/or production environments, using CalVer or SemVer notation, or a variation of the two.
  • We are required to test our environment before pushing to production.
  • We provide safety in the configuration.

We now have a Terraform configuration where changes land in preview first and production requires a PR. We can safely Terraform applications and other configurations in our preview environment before pushing them to production.

Local Development

So, how do we develop features for Preview?

When developing features and changes for the preview environment, an individual can’t develop properly against a shared Okta environment if several people or features are being worked on simultaneously; the Terraform state would be in constant conflict. This means each team member needs their own local Okta environment, which in turn means setting up Terraform locally for development.

To execute that, we will use override.tf files (kept out of the repo via .gitignore) to specify our individual environments locally, while the real terraform apply still runs in the cloud, as sketched below.
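
As a minimal sketch of what that can look like (the org name and base URL are illustrative, and the main configuration still declares the okta provider and its variables), a developer’s git-ignored override.tf might contain:

# override.tf -- listed in .gitignore; overrides the provider configuration locally
# so plans and applies point at a personal Okta developer org.
provider "okta" {
  org_name = "dev-123456"      # placeholder personal developer org
  base_url = "oktapreview.com" # developer orgs live on oktapreview.com
  # The API token is supplied via the OKTA_API_TOKEN environment variable
  # rather than being committed to code.
}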

And that is it

This covers all of the situations we encountered while converting certain pieces of our GitHub, Terraform, and Okta environments. I’ll also be publishing a new blog post next week about automatically creating and destroying department groups, so be on the lookout.

A lot will be covered over the next several parts, summing up how we have Terraformed certain pieces of our Okta environment. If you have questions and are looking for a community resource, I heavily recommend reaching out to #okta-terraform on the MacAdmins Slack, as I would say at least 30% (note: I made this statistic up) of the organizations using Terraform hang out in this channel. Otherwise, you can always find an alternative unofficial community for assistance or ideas.
