Tutorial Part 5: Compliance as Code - Securing the Blueprint
Translate security and regulatory requirements into executable code.
Goal: In this section, we will apply security and compliance policies directly to the graph. The rules in the data/compliance/ directory are processed after the architectural model is built, allowing them to mutate the graph to enforce controls. We’ll translate requirements into executable code that modifies our hybrid application’s architecture and supply chain.
Step 17: Enriching Connections with Security Requirements
A connection in the graph shows that two components talk to each other, but it doesn’t describe how. We can use compliance rules to enrich a relationship with specific security requirements.
Let’s mandate that the connection between our order-frontend application and its aurora-main database must use TLS 1.3.
Create your first compliance file:
data/compliance/dora.toml
audit_id = "DORA-APP-SEC"
audit_name = "DORA Application Security Policy"
[[control]]
id = "SEC-TRANSIT-01"
name = "Database Encryption in Transit"
# A key-value store for control parameters.
[control.config]
min_tls_version = "1.3"
status = "mandatory"
[[control.target]]
# This pattern targets an existing RELATIONSHIP.
# We find the 'CONNECTS_TO' relation (which we renamed in Part 2).
relation_origin_type = "application"
relation_target_type = "database"
# Filter to apply this only to the specific connection we care about.
relation_origin_match_on = [ { property = "name", value = "order-frontend" } ]
# Copy these keys from [control.config] onto the properties of the matched relation.
properties_from_config = ["min_tls_version", "status"]
Run rescile-ce serve.
Result: The graph now explicitly documents the encryption requirements for this specific data-in-transit connection. This makes the security control auditable directly from the blueprint.
Verify by querying the properties of the relation:
query VerifyConnectionEnrichment {
  application(filter: {name: "order-frontend"}) {
    name
    connects_to {
      # The 'properties' block queries the relation itself.
      properties {
        relation
        controls # The importer groups compliance properties under 'controls'
      }
      node {
        name
      }
    }
  }
}
The result will show the controls property on the edge containing an object with min_tls_version: "1.3".
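For reference, the response should have roughly this shape (the exact envelope, such as whether application returns a list or a single object, is an assumption):

```json
{
  "data": {
    "application": [
      {
        "name": "order-frontend",
        "connects_to": [
          {
            "properties": {
              "relation": "CONNECTS_TO",
              "controls": { "min_tls_version": "1.3", "status": "mandatory" }
            },
            "node": { "name": "aurora-main" }
          }
        ]
      }
    ]
  }
}
```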
Step 18: Attaching Security Policy Nodes
Another common pattern is to attach a new node representing a policy to an existing asset. This is useful for modeling requirements that apply to a component but don’t sit in a data path, such as multi-factor authentication (MFA).
First, create a simple identity.csv asset file.
data/assets/identity.csv
name,role,privileged
app-admin,administrator,true
app-user,user,false
Now, add a control to your dora.toml file that finds all privileged identities and attaches a policy node to them.
data/compliance/dora.toml (add this block)
[[control]]
id = "SEC-MFA-01"
name = "MFA for Privileged Identities"
[control.config]
policy_name = "MFA-Mandatory"
factors = ["TOTP", "FIDO2"]
[[control.target]]
# 1. Find the source resources to apply the policy to.
origin_resource_type = "identity"
match_on = [ { property = "privileged", value = true } ]
# 2. For each match, define a new 'security_policy' resource to create.
[control.target.resource]
type = "security_policy"
name = "mfa-policy-for-{{ origin_resource.name }}"
properties_from_config = ["policy_name", "factors"]
# 3. Define the relation to link the identity to the new policy.
[control.target.relation]
type = "GOVERNED_BY"
Run rescile-ce serve.
Result: The app-admin identity now has a GOVERNED_BY relationship to a new (security_policy) node. This creates a clear, queryable link between privileged identities and the MFA policies that must apply to them.
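You can verify the attachment with a query. This sketch assumes the schema exposes the new relation as a lowercase governed_by field, mirroring the connects_to convention from Step 17:

```graphql
query VerifyMfaPolicy {
  identity(filter: {name: "app-admin"}) {
    name
    governed_by {
      node {
        name        # expect "mfa-policy-for-app-admin"
        policy_name # expect "MFA-Mandatory"
        factors
      }
    }
  }
}
```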
Step 19: Linking Existing Assets for Policy Enforcement
Sometimes a policy requires creating a connection between two existing sets of resources. For example, ensuring all internet-facing applications send logs to a central aggregator.
First, create an asset for the logging aggregator.
data/assets/logging_aggregator.csv
name,type
central-splunk,splunk-enterprise
Now, add a control to dora.toml that links our edge zone application (order-frontend) to this aggregator.
data/compliance/dora.toml (add this block)
[[control]]
id = "AUDIT-LOG-01"
name = "Centralized Logging for Edge Applications"
[[control.target]]
# 1. Find all applications in the 'edge' network zone.
origin_resource_type = "application"
match_on = [ { property = "network_zone", value = "edge" } ]
# 2. Find the single, existing logging aggregator to link to.
# Note: 'match_on' with no 'name' means we are finding an existing node, not creating one.
[control.target.resource]
type = "logging_aggregator"
match_on = [{ property = "name", value = "central-splunk" }]
# 3. Create a 'LOGS_TO' link between them.
[control.target.relation]
type = "LOGS_TO"
Run rescile-ce serve.
Result: The order-frontend application now has a LOGS_TO relationship pointing to the central-splunk node. The blueprint now enforces a policy-driven relationship between two existing resources and makes it auditable.
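A quick verification query, again assuming the lowercase relation-field convention (LOGS_TO becomes logs_to):

```graphql
query VerifyCentralLogging {
  application(filter: {name: "order-frontend"}) {
    name
    logs_to {
      node {
        name # expect "central-splunk"
        type # expect "splunk-enterprise"
      }
    }
  }
}
```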
Step 20: Inserting Security Intermediaries (WAF)
A powerful compliance pattern is to topologically insert a security component into an existing data path. Let’s place a Web Application Firewall (WAF) in front of our internet-facing order-frontend application.
First, let’s create a simple edge_gateway asset.
data/assets/edge_gateway.csv
name,routes_to
internet-gw,order-frontend
Now, add a control to dora.toml that will intercept the connection from the gateway to the application and insert a WAF.
data/compliance/dora.toml (add this block)
[[control]]
id = "SEC-WAF-01"
name = "WAF for Internet-Facing Applications"
[control.config]
waf_provider = "aws-cloud"
ruleset = "aws-managed-rules-core"
[[control.target]]
# 1. Find the relationship to intercept: from an 'edge_gateway' to an 'application'.
relation_origin_type = "edge_gateway"
relation_target_type = "application"
# Filter for the specific application we want to protect.
relation_target_match_on = [{ property = "name", value = "order-frontend" }]
# 2. Define the new 'web_application_firewall' resource to insert.
[control.target.resource]
type = "web_application_firewall"
name = "waf-for-{{ target_resource.name }}"
properties_from_config = ["waf_provider", "ruleset"]
Run rescile-ce serve.
Result: The graph topology has been automatically updated. rescile found the original (edge_gateway) --[routes_to]--> (application) edge, deleted it, created the new (web_application_firewall) node, and created two new edges. The path is now (edge_gateway) --> (waf) --> (application), providing a precise and auditable architectural view of the security control.
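To confirm the new topology, you can walk the path from the gateway. This sketch assumes the two replacement edges reuse the original routes_to relation type and that the field follows the lowercase convention:

```graphql
query VerifyWafInsertion {
  edge_gateway(filter: {name: "internet-gw"}) {
    name
    routes_to {
      node {
        name         # expect "waf-for-order-frontend"
        waf_provider # expect "aws-cloud"
        routes_to {
          node {
            name # expect "order-frontend"
          }
        }
      }
    }
  }
}
```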
Step 21: Auditing the Supply Chain
Let’s combine compliance with the supply chain risk we modeled in Part 4. Instead of using another compliance rule, we can leverage rescile’s reporting engine to generate a remediation ticket. Reports are declarative templates that query the completed graph to produce structured data artifacts.
This pattern is powerful because it separates the act of observing a compliance violation (flagging the application in the model/compliance phase) from the act of generating an artifact from that observation (creating a ticket in the report phase).
Create a new report definition file in your data/reports/ directory.
data/reports/remediation_tickets.toml
# This report processes 'application' resources to generate remediation tickets.
origin_resource_type = "application"
# Define some static data available to the template for the ticket.
ticket_config = { ticket_system = "jira", priority = "Highest", assignee = "app-sec-team" }
[[output]]
# The new resource type for the generated report artifact.
resource_type = "remediation_ticket"
# Dynamically name the ticket based on the application it's for.
name = "ticket-for-{{ origin_resource.name }}"
# This rule only applies to edge applications with a critical vulnerability.
match_on = [
{ property = "has_critical_vulnerability", value = true },
{ property = "network_zone", value = "edge" }
]
# The 'template' block defines the structure of the generated resource's properties.
# The rendered JSON string becomes the set of properties for the new 'remediation_ticket' resource.
template = """
{
  "ticket_system": "{{ ticket_config.ticket_system }}",
  "priority": "{{ ticket_config.priority }}",
  "assignee": "{{ ticket_config.assignee }}",
  "summary": "Critical vulnerability found in {{ origin_resource.name }}",
  "application_name": "{{ origin_resource.name }}"
}
"""
Run rescile-ce serve.
Result: The rescile engine processes the report definition after the model and compliance phases are complete. It finds the order-frontend application (which was flagged with has_critical_vulnerability in Part 4 and matches the network_zone), and generates a new (remediation_ticket) node. The properties of this new node are the JSON object rendered from the template.
The blueprint now demonstrates a proactive security posture by automatically generating actionable tasks based on a correlation of supply chain risk (has_critical_vulnerability) and architectural context (network_zone = 'edge').
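To make the template rendering concrete, here is a minimal Python emulation of the placeholder substitution. The engine's real template syntax is likely richer (the naive string replacement below is an assumption, for illustration only); the point is that the rendered JSON string is parsed into the new node's property set:

```python
import json

def render(template: str, context: dict) -> str:
    """Naive emulation of '{{ scope.key }}' placeholder substitution."""
    for scope, values in context.items():
        for key, value in values.items():
            template = template.replace("{{ %s.%s }}" % (scope, key), str(value))
    return template

# The template body from remediation_tickets.toml.
template = """
{
  "ticket_system": "{{ ticket_config.ticket_system }}",
  "priority": "{{ ticket_config.priority }}",
  "assignee": "{{ ticket_config.assignee }}",
  "summary": "Critical vulnerability found in {{ origin_resource.name }}",
  "application_name": "{{ origin_resource.name }}"
}
"""

# Static report data plus the matched application from the graph.
context = {
    "ticket_config": {"ticket_system": "jira", "priority": "Highest", "assignee": "app-sec-team"},
    "origin_resource": {"name": "order-frontend"},
}

# The rendered JSON becomes the properties of the new remediation_ticket node.
properties = json.loads(render(template, context))
print(properties["summary"])  # Critical vulnerability found in order-frontend
```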
This automated ticket creation becomes actionable when external systems can query it. For example, a script connected to a ticketing system like Jira could run the following GraphQL query periodically to find new remediation tasks:
query GetRemediationTickets {
  remediation_ticket {
    name
    ticket_system
    priority
    assignee
    # The schema automatically creates a reverse link from the ticket
    # back to the application that needs remediation.
    application {
      node {
        name
        has_critical_vulnerability
        network_zone
      }
    }
  }
}
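A minimal polling script might look like the following sketch. The endpoint URL and the response envelope are assumptions about your deployment; the transport itself (an HTTP POST with a JSON "query" field) is standard GraphQL-over-HTTP:

```python
import json
import urllib.request

# Hypothetical endpoint for the rescile-ce GraphQL API; adjust to your deployment.
GRAPHQL_URL = "http://localhost:8080/graphql"

QUERY = """
query GetRemediationTickets {
  remediation_ticket {
    name
    ticket_system
    priority
    assignee
  }
}
"""

def extract_tickets(payload: dict) -> list:
    """Pull the ticket objects out of a GraphQL response payload."""
    return payload.get("data", {}).get("remediation_ticket", [])

def fetch_tickets(url: str = GRAPHQL_URL) -> list:
    """POST the query to the GraphQL endpoint and return the ticket list."""
    body = json.dumps({"query": QUERY}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_tickets(json.load(resp))

# A ticketing-system sync loop would call fetch_tickets() on a schedule and
# open a Jira issue for any ticket name it has not seen before.
```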
Step 21a: Overriding Configuration for Compliance
A key requirement for enterprise governance is the ability for compliance policies to enforce a desired state, even if the underlying configuration specifies otherwise. Let’s demonstrate this with a backup policy. In our infrastructure model, we defined our self-hosted oracle-db1 with a default backup_policy of "none". Our Business Continuity Plan (BCP), however, mandates daily backups for all self-hosted databases. We can create a compliance rule that finds and overwrites this property.
Create a new compliance file to enforce this.
data/compliance/bcp.toml
audit_id = "BCP-POLICY"
audit_name = "Business Continuity Policy"
[[control]]
id = "BACKUP-ENFORCE-01"
name = "Enforce Daily Backups for Self-Hosted Databases"
[[control.target]]
# Find self-hosted databases where backup is not 'daily'.
origin_resource_type = "database"
match_on = [
{ property = "type", value = "self-hosted" },
{ property = "backup_policy", not = "daily" }
]
# This rule targets the *same node* by reusing its name. The effect is to
# merge/overwrite properties, thereby enforcing the desired state.
[control.target.resource]
type = "database" # The same type as the origin
name = "{{ origin_resource.name }}" # Target the origin node itself
[control.target.resource.properties]
backup_policy = "daily"
backup_enforced_by = "BCP-POLICY" # Add an audit trail property
Run rescile-ce serve.
Result: Query the oracle-db1 database node. You will see that its backup_policy is now "daily" and it has a new backup_enforced_by property with the value "BCP-POLICY". The compliance engine has effectively overridden the configuration from the asset file to enforce the policy as code.
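The enforcement is straightforward to verify, since both properties live directly on the node:

```graphql
query VerifyBackupEnforcement {
  database(filter: {name: "oracle-db1"}) {
    name
    backup_policy      # expect "daily"
    backup_enforced_by # expect "BCP-POLICY"
  }
}
```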