Tutorial Part 2: Architectural Modeling

Define the hybrid cloud blueprint using declarative architectural rules.


Part 2: Architectural Modeling - Building the Hybrid Cloud Blueprint

Goal: In this section, we will build a multi-layered architectural model that mirrors a real-world enterprise stack. We will chain models together, creating new resources from the output of previous models. This demonstrates how to build a rich, detailed blueprint from a small set of initial assets.

Step 3: Modeling the Technology Stack (OS & Platform)

Our first step is to model the technology stack for our applications. We’ll start by adding os and architecture data to our asset file, and then create two chained models: one to create (os) resources, and a second to create (platform) resources from the OSes.

First, update application.csv with the new columns.

data/assets/application.csv

--- a/data/assets/application.csv
+++ b/data/assets/application.csv
@@ -1,5 +1,5 @@
-name,function,domain,datastore
-dataguard,replication,core,OracleDB
-zabbix,monitoring,operation,PostgreSQL
-asseteditor,management,access,
-openbao,encryption,operation,
+name,function,domain,datastore,os,architecture
+dataguard,replication,core,OracleDB,ol7,x86_64
+zabbix,monitoring,operation,PostgreSQL,linux,i686
+asseteditor,management,access,,linux,workerd
+openbao,encryption,operation,,oci,gke
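To sanity-check the new columns before feeding the file to rescile, you could parse it with a few lines of Python (a hypothetical quick check, not part of the tool):

```python
# Parse the updated CSV and confirm each row carries the new columns.
import csv
import io

CSV_TEXT = """\
name,function,domain,datastore,os,architecture
dataguard,replication,core,OracleDB,ol7,x86_64
zabbix,monitoring,operation,PostgreSQL,linux,i686
asseteditor,management,access,,linux,workerd
openbao,encryption,operation,,oci,gke
"""

rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
for row in rows:
    # Every application should now declare an os and an architecture.
    assert row["os"] and row["architecture"], row["name"]

print([(r["name"], r["os"], r["architecture"]) for r in rows])
```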

Now, create a model to derive (os) resources from this new data. This model copies the architecture property from the application onto the new os resource, which is crucial for the next model in the chain.

data/models/os.toml

origin_resource = "application"

[[create_resource]]
create_from_property = "os"
relation_type = "USES_IMAGE"
[create_resource.properties]
description = "The {{property.value}} operating system is the master control program for this stack, acting as an intermediary between applications and the hardware."
# Copy the architecture from the source application to the new os resource
architecture = "{{origin_resource.architecture}}"
created = "{{now(utc=true) | date(format='%Y-%m-%dT%H:%M:%SZ')}}"
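Conceptually, the derivation this model performs can be sketched in Python (a simplified illustration of the assumed semantics, not rescile's actual engine):

```python
# For each application, create (or reuse) an 'os' node named after the
# application's 'os' property, copy 'architecture' onto it, and link the
# application to it with a USES_IMAGE edge.
applications = [
    {"name": "dataguard", "os": "ol7", "architecture": "x86_64"},
    {"name": "zabbix", "os": "linux", "architecture": "i686"},
]

os_nodes, edges = {}, []
for app in applications:
    # One node per distinct os value; later writers would overwrite earlier ones.
    os_nodes[app["os"]] = {"name": app["os"], "architecture": app["architecture"]}
    edges.append((app["name"], "USES_IMAGE", app["os"]))

print(os_nodes["ol7"])  # {'name': 'ol7', 'architecture': 'x86_64'}
```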

Next, create the platform model. Critically, its origin_resource is os, not application. It reads the (os) nodes created by the previous model to build the next layer of our blueprint. This model also demonstrates defining a variable (system_container) for use within the template logic.

data/models/platform.toml

# This model reads from 'os' resources created by os.toml
origin_resource = "os"

# Define a variable to make the Tera logic more readable and maintainable.
system_container = "oci"

[[create_resource]]
resource_type = "platform"
relation_type = "DEFINED_BY"
# Use templating to create a unique name for the platform
name = "{{origin_resource.name}}_{{origin_resource.architecture}}"
[create_resource.properties]
os = "{{origin_resource.name}}"
architecture = "{{origin_resource.architecture}}"
description = "The {{origin_resource.name}} is provisioned on a {{origin_resource.architecture}} platform."
# Use Tera 'if/else' logic with our variable to determine the platform function
function = "{% if origin_resource.name == system_container %}container{% else %}server{% endif %}"
created = "{{now(utc=true) | date(format='%Y-%m-%dT%H:%M:%SZ')}}"
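The name template and the if/else function logic above behave like this Python sketch (assumed semantics, not rescile internals):

```python
# The platform name is "<os>_<arch>"; the function is 'container' when the
# os name equals the system_container variable, otherwise 'server'.
SYSTEM_CONTAINER = "oci"

def platform_for(os_node):
    return {
        "name": f"{os_node['name']}_{os_node['architecture']}",
        "function": "container" if os_node["name"] == SYSTEM_CONTAINER else "server",
    }

print(platform_for({"name": "ol7", "architecture": "x86_64"}))
# {'name': 'ol7_x86_64', 'function': 'server'}
print(platform_for({"name": "oci", "architecture": "gke"}))
# {'name': 'oci_gke', 'function': 'container'}
```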

Run rescile-ce serve and explore the graph.

Result: Our blueprint now has a clear technology stack: (application) --[USES_IMAGE]--> (os) --[DEFINED_BY]--> (platform). We have successfully demonstrated model chaining.

graph TD
    App["application<br>dataguard"] --> OS["os<br>ol7"]
    OS --> Plat["platform<br>ol7_x86_64"]

Step 4: Modeling the Cloud Environment (Providers & Networks)

Now we’ll model the environment within our logical (domain) resources. We’ll create (provider) and (network) resources, showing how to embed configuration data inside a model and use conditional logic.

Create a model to define the service providers for our domains. This model introduces two advanced concepts:

  1. A local TOML table (operator) is used to store rich configuration data directly inside the model.
  2. A Tera for loop is used to look up the correct provider for each domain and construct its properties.

data/models/provider.toml

origin_resource = "domain"

# Store rich configuration data inside the model file as a map.
operator = { core = { name = "oracle", function = "cloud" }, operation = { name = "zabbix-corp", function = "application" }, access = { name = "google", function = "cloud" } }

[[create_resource]]
# We exclude certain domains from having a provider created.
match_on = [ { property = "name", not = "internet" }, { property = "name", not = "operation" } ]
resource_type = "provider"
relation_type = "BASELINE_TEMPLATE"
# Use a Tera for loop to find the right provider name from the map and construct a unique name.
name = "{% for key, value in operator %}{% if key == origin_resource.name %}{{ value.name }}_{{ value.function }}{% endif %}{% endfor %}"
[create_resource.properties]
domain = "{{ origin_resource.name }}"
function = "{% for key, value in operator %}{% if key == origin_resource.name %}{{ value.function }}{% endif %}{% endfor %}"
operator = "{% for key, value in operator %}{% if key == origin_resource.name %}{{ value.name }}{% endif %}{% endfor %}"
description = "A service provider offers on-demand computing services, like storage, databases, networking, and software, typically over the internet on a pay-as-you-go basis."
created = "{{now(utc=true) | date(format='%Y-%m-%dT%H:%M:%SZ')}}"
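The Tera for loop here is effectively a map lookup; in Python terms (a conceptual sketch, not rescile internals):

```python
# The 'operator' table from the model, as a plain dict.
operator = {
    "core": {"name": "oracle", "function": "cloud"},
    "operation": {"name": "zabbix-corp", "function": "application"},
    "access": {"name": "google", "function": "cloud"},
}

def provider_name(domain):
    # Equivalent of the loop: find the entry whose key matches the domain
    # and build "<name>_<function>".
    entry = operator[domain]
    return f"{entry['name']}_{entry['function']}"

print(provider_name("core"))  # oracle_cloud
```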

Next, create the network model. This model has two rules with different match_on conditions to create differently configured networks for operational vs. other domains.

data/models/network.toml

origin_resource = "domain"

# Rule 1: Create a management network for the 'operation' domain
[[create_resource]]
match_on = [ { property = "name", value = "operation" } ]
resource_type = "network"
relation_type = "DEFINED_BY"
name = "{{origin_resource.name}}_network"
[create_resource.properties]
function = "management"
domain = "{{origin_resource.name}}"
description = "The {{origin_resource.name}} network is a virtual, on-demand infrastructure for management services."
cidr = "172.16.0.0/12"
created = "{{now(utc=true) | date(format='%Y-%m-%dT%H:%M:%SZ')}}"

# Rule 2: Create a standard network for all other domains (excluding connectivity domains)
[[create_resource]]
match_on = [ { property = "name", not = "operation" }, { property = "function", not = "connectivity" } ]
resource_type = "network"
relation_type = "DEFINED_BY"
name = "{{origin_resource.name}}_network"
[create_resource.properties]
function = "{{origin_resource.function}}"
domain = "{{origin_resource.name}}"
description = "The {{origin_resource.name}} network is a virtual, on-demand infrastructure that connects users and applications to computing resources."
cidr = "192.168.0.0/16"
created = "{{now(utc=true) | date(format='%Y-%m-%dT%H:%M:%SZ')}}"
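The two match_on rules partition the domains like this Python sketch (assuming `value` means equals, `not` means not-equals, and domains matching neither rule get no network):

```python
import ipaddress

def pick_cidr(domain):
    if domain["name"] == "operation":
        return "172.16.0.0/12"           # Rule 1: management network
    if domain.get("function") != "connectivity":
        return "192.168.0.0/16"          # Rule 2: standard network
    return None                          # connectivity domains: no network

# Validate the chosen range with the stdlib before putting it in the graph.
net = ipaddress.ip_network(pick_cidr({"name": "core", "function": "business"}))
print(net)  # 192.168.0.0/16
```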

Step 5: Modeling the Underlying Infrastructure & Its Providers

We’ve defined the hosting platform, but where does that platform live? Let’s model the final infrastructure layer, such as on-premise datacenters and cloud regions, and their providers.

Create the infrastructure models. These rules read the compute resources ((onprem_vm) and (managed_k8s_cluster) nodes) and the (database) nodes, and link them to their physical or virtual locations, such as on-premise datacenters and cloud regions. Because TOML does not allow the same top-level key (here, origin_resource) to be defined twice in one file, each origin resource gets its own model file.

data/models/infrastructure_vm.toml

# Rule 1: Link on-premise VMs to our on-prem datacenter.
origin_resource = "onprem_vm"

[[create_resource]]
# No 'match_on' means this rule applies to all 'onprem_vm' resources.
resource_type = "onprem_datacenter"
relation_type = "LOCATED_IN"
name = "dc01-frankfurt"
[create_resource.properties]
provider = "equinix"

data/models/infrastructure_k8s.toml

origin_resource = "managed_k8s_cluster"

# Rule 2: Link managed Kubernetes clusters to a cloud region.
[[create_resource]]
resource_type = "cloud_region"
relation_type = "LOCATED_IN"
name = "eu-central-1"
[create_resource.properties]
provider = "aws-cloud"

# Rule 4: Replicate production Kubernetes clusters to a DR region for business continuity.
[[create_resource]]
# Find the production EKS cluster for the order-frontend
match_on = [ { property = "name", value = "eks-for-order-frontend" } ]
resource_type = "cloud_region"
relation_type = "REPLICATED_TO"
name = "eu-west-1" # Our DR region in Ireland
[create_resource.properties]
provider = "aws-cloud"
is_dr_site = true

data/models/infrastructure_db.toml

# Rule 3: Model where our databases are hosted.
origin_resource = "database"

# Self-hosted databases run on on-premise VMs.
[[create_resource]]
match_on = [ { property = "type", value = "self-hosted" } ]
resource_type = "onprem_vm"
relation_type = "HOSTED_ON"
name = "db-vm-for-{{ origin_resource.name }}"
[create_resource.properties]
os_provider = "redhat"
hardware_provider = "dell"

# Also, update the self-hosted database node itself with its default backup policy.
[[create_resource]]
match_on = [ { property = "type", value = "self-hosted" } ]
resource_type = "database"
name = "{{ origin_resource.name }}"
relation_type = "_UPDATED_WITH_BACKUP_POLICY"
[create_resource.properties]
backup_policy = "none"

# Managed-service databases are hosted in a cloud region.
[[create_resource]]
match_on = [ { property = "type", value = "managed-service" } ]
resource_type = "cloud_region"
relation_type = "HOSTED_IN"
name = "eu-central-1" # Re-uses the existing node
[create_resource.properties]
provider = "aws-cloud"

# Also, update the managed database node itself with its backup policy.
[[create_resource]]
match_on = [ { property = "type", value = "managed-service" } ]
resource_type = "database"
name = "{{ origin_resource.name }}"
relation_type = "_UPDATED_WITH_BACKUP_POLICY"
[create_resource.properties]
backup_policy = "daily"
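A generic match_on evaluator could look like this sketch (assumed semantics: `value` is an equality test, `not` an inequality test, `exists` a presence test, and every clause in the list must hold):

```python
def matches(resource, match_on):
    for clause in match_on:
        actual = resource.get(clause["property"])
        if "value" in clause and actual != clause["value"]:
            return False
        if "not" in clause and actual == clause["not"]:
            return False
        if "exists" in clause and (actual is not None) != clause["exists"]:
            return False
    return True

db = {"name": "aurora-main", "type": "managed-service"}
print(matches(db, [{"property": "type", "value": "managed-service"}]))  # True
print(matches(db, [{"property": "type", "value": "self-hosted"}]))      # False
```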

Run rescile-ce serve.

Result: We now have a rich, multi-layered graph that clearly distinguishes between on-premise and cloud infrastructure stacks, complete with provider information at each layer.

You can trace the full dependency chain with a GraphQL query:

query TraceHybridStacks {
  onPremApp: application(filter: {name: "billing-legacy"}) {
    name
    onprem_vm {
      node {
        name
        onprem_datacenter {
          node {
            name
            provider
          }
        }
      }
    }
  }
  cloudApp: application(filter: {name: "order-frontend"}) {
    name
    managed_k8s_cluster {
      node {
        name
        cloud_region {
          node {
            name
            provider
          }
        }
      }
    }
  }
}

Step 6: Deriving Multiple Resources with create_from_property

Sometimes, a single property in our asset data represents a list of components. rescile can automatically “unfurl” this property into multiple distinct nodes.

First, add a microservices column to order-frontend in your application.csv file.

data/assets/application.csv (add microservices column)

name,type,network_zone,environment,database,supported_runtimes,microservices
billing-legacy,monolith,backend,production,oracle-db1,"java, tomcat",
order-frontend,container,edge,production,aurora-main,"nodejs","auth-service,payment-service,order-api"
image-processor,function,backend,production,,"nodejs, python",
billing-legacy-dev,monolith,backend,development,oracle-db1,"java, tomcat",

Now, create a model that uses create_from_property. This powerful directive tells rescile to look at the microservices property. It will automatically create a (microservice) node for each comma-separated value.

data/models/microservice.toml

origin_resource = "application"

[[create_resource]]
# This rule only applies to applications that have a 'microservices' property.
match_on = [ { property = "microservices", exists = true } ]
# The name of the property to read from.
create_from_property = "microservices"
# The label of the relationship to the new nodes.
relation_type = "COMPOSED_OF"
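The "unfurl" behaviour can be sketched as follows (assumed semantics, not rescile's actual implementation):

```python
# Split a comma-separated property into one edge per value.
def unfurl(resource, prop, relation):
    raw = resource.get(prop, "")
    values = [v.strip() for v in raw.split(",") if v.strip()]
    return [(resource["name"], relation, v) for v in values]

app = {"name": "order-frontend",
       "microservices": "auth-service,payment-service,order-api"}
for edge in unfurl(app, "microservices", "COMPOSED_OF"):
    print(edge)
# ('order-frontend', 'COMPOSED_OF', 'auth-service')
# ('order-frontend', 'COMPOSED_OF', 'payment-service')
# ('order-frontend', 'COMPOSED_OF', 'order-api')
```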

Run rescile-ce serve.

Result: The order-frontend application node now has three new outgoing COMPOSED_OF edges, one for each microservice, modeling its composition from a single flat property.

graph TD
    App["application<br>order-frontend"]
    App -- COMPOSED_OF --> S1["microservice<br>auth-service"]
    App -- COMPOSED_OF --> S2["microservice<br>payment-service"]
    App -- COMPOSED_OF --> S3["microservice<br>order-api"]

Step 7: Deriving Resources from Related Nodes

The create_from_property directive can also derive nodes from properties on related resources. This allows you to model how capabilities from one component (like a database) are leveraged by another (the application).

First, add a features column to database.csv.

data/assets/database.csv

name,type,vendor,features
oracle-db1,self-hosted,oracle-corp,
aurora-main,managed-service,aws-cloud,"autoscaling,serverless"

Now, create a model for application. This rule will:

  1. Look at the application’s connected database (property_origin = "database").
  2. Read the features property from that database.
  3. Create (cloud_feature) nodes for each feature.
  4. Create the new relationship from the application itself, not from the database (relation_origin = "origin_resource").

data/models/application.toml

origin_resource = "application"

[[create_resource]]
# Look for properties on the connected 'database' node.
property_origin = "database"
# Read the 'features' property from the database.
create_from_property = "features"
# Create the new edge starting from the 'application' (this model's origin_resource).
relation_origin = "origin_resource"
relation_type = "LEVERAGES"
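The indirect derivation can be sketched as follows (assumed semantics; the databases map is illustrative):

```python
# Read 'features' from the application's connected database, but attach the
# LEVERAGES edges to the application itself.
databases = {"aurora-main": {"features": "autoscaling,serverless"}}

def leverages(app):
    db = databases.get(app["database"], {})
    feats = [f.strip() for f in db.get("features", "").split(",") if f.strip()]
    return [(app["name"], "LEVERAGES", f) for f in feats]

print(leverages({"name": "order-frontend", "database": "aurora-main"}))
# [('order-frontend', 'LEVERAGES', 'autoscaling'), ('order-frontend', 'LEVERAGES', 'serverless')]
```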

Run rescile-ce serve.

Result: The order-frontend application is now linked to (cloud_feature) nodes for autoscaling and serverless, even though that data originated on the connected database. This powerful pattern lets you model indirect capabilities and dependencies.

graph TD
    App["application<br>order-frontend"] --> DB["database<br>aurora-main<br>{features: '...'}"]
    App -- LEVERAGES --> F1["cloud_feature<br>autoscaling"]
    App -- LEVERAGES --> F2["cloud_feature<br>serverless"]

Step 8: Improving Semantics with retype_relation

In Step 2, rescile automatically created a relationship from (application) to (database) and named it database, after the column header. While functional, this isn’t very descriptive. We can use retype_relation to improve the semantics.

Add this block to your data/models/application.toml file.

data/models/application.toml (add this block)

# Find the auto-generated relationship from the 'database' property and rename it.
[[retype_relation]]
property_key = "database"
new_type = "CONNECTS_TO"
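If you picture the graph's edges as (source, type, target) triples, retype_relation amounts to this sketch (a conceptual illustration, not rescile's storage model):

```python
# Rename every edge whose type equals the auto-generated property key.
def retype(edges, property_key, new_type):
    return [(s, new_type if t == property_key else t, d) for s, t, d in edges]

edges = [("order-frontend", "database", "aurora-main")]
print(retype(edges, "database", "CONNECTS_TO"))
# [('order-frontend', 'CONNECTS_TO', 'aurora-main')]
```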

Run rescile-ce serve.

Result: The relationship between applications and databases is now labeled CONNECTS_TO. The graph is more readable and better communicates the architectural intent. You can verify this with the same GraphQL query from Step 2; the field will now be named connects_to instead of database.

We have now built a sophisticated, multi-layered architectural model from simple data files and declarative rules, capturing technology, providers, and location.