Introduction
As part 6 has yet to be released pending approval from my work, here is a brief overview of part 5.
- Overview: Addressing Okta’s 100 network zone limit through automation with Terraform.
- Network Zones Challenges: Managing SaaS services, office networks, and proxies efficiently under zone limits.
- Cloud Services: Automating IP address handling for API key restrictions using dynamic data sources.
- Office Network Zones: Leveraging Meraki Terraform provider for dynamic office gateway IP configurations.
- Proxies: Automating proxy blocklists and policy categories with Okta’s new dynamic IP service features.
- Key Improvements:
- Incorporating multiple data formats (JSON, plaintext) and sources (local, public, private).
- Chunking IP ranges to respect Okta’s 100 gateway limit per zone.
- Transitioning to Okta’s new features like IP Exempt Zones to handle false positives.
- Limitations: Issues with Terraform Cloud environment variables and Okta’s IP blocklist sensitivity.
- Best Practices:
- Avoid overreaching categories like “All IP Services” to minimize false positives.
- Leverage IP Exempt Zones for static IPs sparingly.
- Utilize debugging outputs to inspect and validate configurations dynamically.
- Resources: Contact #okta-terraform on MacAdmins for community insights, or visit alternative forums.
- Future Topics: Automating security policies (authentication, password, authenticator policies).
This section will cover user and group schemas, as well as creating dependency files and outputs so that other teams know what is available for them to use. This will also be the last section of the original series; more may come after this.
The Implementation
Manage Profile Schemas
For various reasons, it is best to manage your profile schemas through Terraform instead of ClickOps. Primarily, it makes it easier to update schemas and to share that information with other teams, so that if they try to codify or develop against Okta (e.g., creating an internal SCIM-based application), the developer knows what data they will have and what the context is.
User Schemas
First, import the core user attributes in Okta so you can manage priority, ownership, visibility, and so on. You will want to start with the information found on this page; these are the base (or standard) user schema properties in Okta.
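A sketch of what that can look like for a single base property (the import ID format should be verified against the provider documentation for the version you are running):

```hcl
# Placeholder configuration for an existing base property ("firstName" is one of
# Okta's standard base schema properties). After adding this, bring the existing
# property under management with something like:
#   terraform import okta_user_base_schema_property.first_name firstName
# (confirm the import ID format against the okta provider docs).
resource "okta_user_base_schema_property" "first_name" {
  index = "firstName"
  title = "First name"
  type  = "string"
}
```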
And then subsequently use the following:
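Something along these lines, where the ownership and permission values are assumptions you should adjust for how your org actually masters profile data:

```hcl
# Sketch of a fully managed base property. The bottom three arguments control
# whether the attribute is mandatory, which system masters it, and what end
# users can do with it; the values shown here are assumptions, not recommendations.
resource "okta_user_base_schema_property" "last_name" {
  index       = "lastName"
  title       = "Last name"
  type        = "string"
  required    = true
  master      = "PROFILE_MASTER"
  permissions = "READ_ONLY"
}
```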
The bottom three are the most important for most of the base resources.
For custom user schema properties, you will want to inspect this page. An example of the code (you can use a similar import resource as above if you would like) can be found below.
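Here is a minimal example, with a made-up costCenter attribute standing in for whatever your organization actually needs:

```hcl
# Hypothetical custom user profile attribute; the name, description, and
# ownership settings are illustrative only.
resource "okta_user_schema_property" "cost_center" {
  index       = "costCenter"
  title       = "Cost Center"
  type        = "string"
  description = "Cost center assigned by Finance"
  master      = "PROFILE_MASTER"
  permissions = "READ_ONLY"
  required    = false
}
```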
And there you go, your user schemas are now managed via Terraform.
While you could manage this through modules, I think the benefit is fairly limited here: the complexity a module adds outweighs whatever cleanliness it brings to the user schema itself. You very likely do not need much standardization or iteration here, and given that these schemas should not be updated often and the code is rarely re-used, a module just did not make sense.
Group Schemas
For group schemas, the process is the same as above, just slightly more straightforward, as there are no configurable options for the base schema properties. That means we can simply create custom ones as we need them.
However, the documentation appears to be somewhat misleading, as multiple items are seemingly only available for external groups (e.g., application-imported groups) and would not apply to Terraform- or Okta-managed groups. Based on the Okta API documentation, I would recommend just using the base of what is shown in the API example for now, shown below:
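In Terraform terms, that works out to keeping things close to the core arguments, along the lines of:

```hcl
# Keep group schema properties to the base arguments; the external-group
# options in the provider docs generally will not apply to Okta- or
# Terraform-managed groups. Values here are placeholders.
resource "okta_group_schema_property" "example" {
  index       = "exampleProperty"
  title       = "Example Property"
  type        = "string"
  description = "A basic custom group attribute"
}
```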
From there, you can create a custom group schema property as needed for your environment without much of an issue. As an example:
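Here is a hypothetical one, a group attribute recording which team owns the group; swap in whatever your environment actually needs (the arguments mirror the user schema property resource, so confirm them against the provider docs):

```hcl
# Hypothetical custom group attribute; the name, description, and settings are
# made up for illustration.
resource "okta_group_schema_property" "owning_team" {
  index       = "owningTeam"
  title       = "Owning Team"
  type        = "string"
  description = "Team responsible for the lifecycle of this group"
  master      = "OKTA"
  required    = false
}
```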
Creating Dependency Files for Other Teams
Now that this is all done, it could be problematic if another team starts to change something we depend on; not necessarily schemas or resources, but the data from the Source of Truth we use. That said, schema changes would be problematic as well.
So, how do we create files such as a CSV, a PDF, a Markdown-compatible table, or some other format that other teams can ingest, look at, or use?
Simple: we utilize outputs to create the following formats:
- JSON
- CSV
- Markdown
- Parquet
These are then hosted in both GitHub and a data storage bucket of our choosing. The URLs stay consistent, but the data can be updated routinely, so anytime another team needs to consume the data, we can point them to the same URLs.
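A rough sketch of the pattern, reusing the hypothetical costCenter property from earlier and the local provider; the property list, filenames, and layout are placeholders rather than our exact code:

```hcl
# Sketch: render the managed schema properties into files other teams can consume.
locals {
  user_schema_docs = [
    {
      index       = okta_user_schema_property.cost_center.index
      title       = okta_user_schema_property.cost_center.title
      type        = okta_user_schema_property.cost_center.type
      description = okta_user_schema_property.cost_center.description
    }
  ]
}

# Markdown table for humans.
resource "local_file" "user_schema_markdown" {
  filename = "${path.module}/docs/user_schema.md"
  content = join("\n", concat(
    ["| Attribute | Title | Type | Description |", "| --- | --- | --- | --- |"],
    [for p in local.user_schema_docs : "| ${p.index} | ${p.title} | ${p.type} | ${p.description} |"]
  ))
}

# CSV for anything that prefers tabular data.
resource "local_file" "user_schema_csv" {
  filename = "${path.module}/docs/user_schema.csv"
  content = join("\n", concat(
    ["index,title,type,description"],
    [for p in local.user_schema_docs : "${p.index},${p.title},${p.type},${p.description}"]
  ))
}

# JSON as a plain output; CI can publish it to GitHub or a storage bucket.
output "user_schema_json" {
  value = jsonencode(local.user_schema_docs)
}
```

Because the content is generated from the same resources Terraform manages, the files only change when the schema actually changes, which doubles as a lightweight change log for the teams consuming them.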
And that is it
A lot has been covered here over the past several weeks, which sums up how we have built out certain pieces of our Okta environment using Terraform. If you have questions and are looking for a community resource, I would heavily recommend reaching out to #okta-terraform on MacAdmins, as I would say at least 30% (note: I made this statistic up) of the organizations using the Okta Terraform provider hang out in that channel. Otherwise, you can always find an alternative unofficial community for assistance or ideas.