8 posts tagged with "IaC"

· 4 min read

Understanding roles is integral to applying the principle of least privilege to GCP environments.

A quick primer on roles in GCP

A Role in GCP is a collection of permissions to services and APIs on the platform. Roles are "bound" to principals or members (users, groups and service accounts).

A collection of these bindings is referred to as a "policy", which is scoped at a particular level: organisation, folder, project, or resource.

There are three types of roles - Primitive Roles, Predefined Roles and Custom Roles.

Primitive (or Basic) Roles

These are legacy roles set at a GCP project level which include Owner, Editor, and Viewer. These are generally considered to be excessive in terms of permissions and their use should be minimised if not avoided altogether.

Predefined Roles

These are roles with fine grained access to discrete services in GCP. Google has put these together for your convenience. In most cases predefined roles are the preferred mechanism to assign permissions to members.

Custom Roles

Custom roles can be created with a curated collection of permissions if required. Reasons for doing so include:

  • if the permissions in predefined roles are excessive for your security posture
  • if you want to combine permissions across different services and cannot find a suitable predefined role (although assigning multiple predefined roles to a given member is generally preferred)

Anatomy of an IAM Policy

An IAM Policy is a collection of bindings of one or more members (user, group or service account) to a role (primitive, predefined or custom). Policies are normally expressed as JSON objects as shown here:

{
  "bindings": [
    {
      "members": [
        "group:project-admins@my-cloud-identity-domain.com"
      ],
      "role": "roles/owner"
    },
    {
      "members": [
        "serviceAccount:provisioner@my-project.iam.gserviceaccount.com",
        "user:javen@avensolutions.com"
      ],
      "role": "roles/resourcemanager.folderViewer"
    }
  ]
}

Groups are Google Groups created in Cloud Identity or Google Workspace (formerly known as G Suite).

Application of policies is an atomic operation, which will overwrite any existing policy attached to an entity (org, folder, project, resource).

Querying Roles with StackQL

Predefined and primitive roles are available through the roles resource in StackQL (google.iam.roles), which returns the following fields (as shown by DESCRIBE google.iam.roles):

| Name | Description |
|------|-------------|
| name | Name of the role, in the format roles/[{service}.]{role} for predefined or basic roles, or qualified for custom roles, e.g. organizations/{org_id}/roles/[{service}.]{role} |
| description | An optional, human-readable description for the role |
| includedPermissions | An array of permissions this role grants (only displayed with view = 'FULL') |
| etag | Output only, used internally for consistency |
| title | An optional, human-readable title for the role (visible in the Console) |
| deleted | A read-only boolean field showing the current deleted state of the role |
| stage | The current launch stage of the role, e.g. ALPHA |
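To see this field listing for yourself, run the DESCRIBE statement against the resource:

DESCRIBE google.iam.roles;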

Get the name for a role

Often, you may know the "friendly" title for a role, like "Logs Bucket Writer", but you need the actual role name to use in an IAM policy (in this case, roles/logging.bucketWriter). A simple query to find this using StackQL is shown here:

SELECT name
FROM google.iam.roles
WHERE title = 'Logs Bucket Writer';
/* RETURNS:
|----------------------------|
| name                       |
|----------------------------|
| roles/logging.bucketWriter |
|----------------------------|
*/

Conversely, if you have the name but want the friendly title you could use:

SELECT title
FROM google.iam.roles
WHERE name = 'roles/logging.bucketWriter';
/* RETURNS:
|--------------------|
| title              |
|--------------------|
| Logs Bucket Writer |
|--------------------|
*/

Wildcards can also be used with the LIKE operator. For example, to get the name and title for each predefined role in the logging service, you could run:

SELECT name, title
FROM google.iam.roles
WHERE name LIKE 'roles/logging.%';

Get the permissions for a role

To return the includedPermissions you need to add the following WHERE clause:

WHERE view = 'FULL'

An example query to list the permissions for a given role is shown here:

SELECT includedPermissions
FROM google.iam.roles
WHERE view = 'FULL' AND
name = 'roles/cloudfunctions.viewer';
/* RETURNS
["cloudbuild.builds.get","cloudbuild.builds.list",...]
*/

A more common challenge is that you know a particular permission, such as cloudfunctions.functions.get, and you want to know which roles contain it. To do this, you could run the following query:

SELECT name, title
FROM google.iam.roles
WHERE view = 'FULL'
AND includedPermissions LIKE '%cloudfunctions.functions.get%';

Creating custom roles and more...

In forthcoming articles, we will demonstrate how you can create custom roles using StackQL INSERT operations, as well as how you can construct a simple IAM framework to manage and provision access to resources in GCP. Stay tuned!

· 2 min read

I grappled with Terraform for the better part of a day trying to provision a GKE Autopilot cluster in a Shared VPC service project. I was able to do this with StackQL in 2 minutes; this is how...

Before starting, you will need the following to use GKE Autopilot in your Shared VPC:

  • control plane IP address range
  • control plane authorized networks (if desired)
  • the host network and node subnet you intend to use
  • pod and services secondary CIDR ranges

(all of the above would typically be pre-provisioned in the Shared VPC design and deployment; a rough sketch of where these values land in the cluster request body is shown below)
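For orientation only, this is roughly where those values end up in the cluster creation request body. The field names are from the GKE v1 API; all values shown here are placeholders, and the request generated by the console in the steps below is authoritative:

{
  "cluster": {
    "name": "autopilot-cluster-1",
    "autopilot": { "enabled": true },
    "network": "projects/my-host-project/global/networks/my-shared-vpc",
    "subnetwork": "projects/my-host-project/regions/australia-southeast1/subnetworks/my-node-subnet",
    "ipAllocationPolicy": {
      "clusterSecondaryRangeName": "pods-range",
      "servicesSecondaryRangeName": "services-range"
    },
    "privateClusterConfig": {
      "enablePrivateNodes": true,
      "masterIpv4CidrBlock": "172.16.0.16/28"
    },
    "masterAuthorizedNetworksConfig": {
      "enabled": true,
      "cidrBlocks": [ { "displayName": "on-prem", "cidrBlock": "10.0.0.0/8" } ]
    }
  }
}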

Step 1: Using the GCP console, navigate to your service project, then go to Kubernetes Engine --> Clusters --> Create --> GKE Autopilot --> Configure. Enter all of the desired configuration options (including the network configuration specified above). Do not select CREATE.

Step 2: At the bottom of the dialog used to configure the cluster in the console, use the Equivalent REST button to generate the GKE Autopilot API request body.

Step 3: Supply this as input data to a StackQL INSERT command, either via an iql file or as inline configuration. Optionally, you can convert this to Jsonnet and parameterise it for use in other environments.

<<<json
{
  "cluster": {
    ..from equivalent REST command..
  }
}
>>>

INSERT INTO google.container.`projects.locations.clusters`(
  parent,
  data__cluster
)
SELECT 'projects/my-svc-project/locations/australia-southeast1',
  '{{ .cluster }}'
;
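Once the INSERT is submitted, you can check on the cluster with a query along these lines (a quick sketch; it assumes the resource exposes name and status fields and accepts the same parent used above):

SELECT name, status
FROM google.container.`projects.locations.clusters`
WHERE parent = 'projects/my-svc-project/locations/australia-southeast1';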

Easy!

· 2 min read

Jsonnet is a fantastic configuration language as discussed in Using Jsonnet to Configure Multiple Environments. Going slightly beyond the basics, this article is an introduction to anonymous functions and the map and format methods in the Jsonnet standard library.

Similar to map methods in various other functional programming languages or data processing frameworks, map in Jsonnet evaluates a named or anonymous function for each element within an array. map is a higher order function, meaning it is a function that takes another function as an argument. Its signature is shown here:

std.map(func, arr)

The func argument can be a named function or an unnamed (anonymous) function. arr is an input array, which can include embedded dictionaries or other lists as well.
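As a trivial illustration, here is map applied with an anonymous function that doubles each element:

std.map(function(x) x * 2, [1, 2, 3]) // evaluates to [2, 4, 6]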

In this example, I am templating some config for a NAT gateway in GCP for use in a StackQL routine, where I have a list of external IPs that need to be formatted as Google selfLink URLs. This is a perfect use for the map method, as well as the format method, which is similar to printf or equivalent functions found in various other languages. The easiest way to use format is similar to the way you would invoke it in Python:

"%s/%s/%s" % [string1, string2, string3]

Putting it all together in the following practical example, the input Jsonnet is shown below, followed by the rendered JSON output. x represents an element of the extIps array, and the anonymous function returns the fully qualified selfLink URL for each element.

{
  local project_id = 'myproject-123',
  local region = 'australia-southeast1',
  local self_link_prefix = 'https://compute.googleapis.com/compute/v1/projects',
  local extIps = [{name: 'syd-extip1', region: region}, {name: 'syd-extip2', region: region}],

  nats: [
    {
      name: 'nat-config',
      natIpAllocateOption: 'MANUAL_ONLY',
      natIps: std.map((function(x) "%s/%s/regions/%s/addresses/%s" % [self_link_prefix, project_id, x.region, x.name]), extIps),
      sourceSubnetworkIpRangesToNat: 'ALL_SUBNETWORKS_ALL_IP_RANGES'
    },
  ],
}
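For reference, the above evaluates to the following JSON (the locals do not appear in the output):

{
  "nats": [
    {
      "name": "nat-config",
      "natIpAllocateOption": "MANUAL_ONLY",
      "natIps": [
        "https://compute.googleapis.com/compute/v1/projects/myproject-123/regions/australia-southeast1/addresses/syd-extip1",
        "https://compute.googleapis.com/compute/v1/projects/myproject-123/regions/australia-southeast1/addresses/syd-extip2"
      ],
      "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES"
    }
  ]
}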

more to come...

· 2 min read

It's easy enough for anyone to deploy a Cloud Storage bucket in Google Cloud; this can be done through the console, gcloud, Terraform, or StackQL as shown here: Deploying and Querying GCS Buckets using StackQL. It is also easy to inadvertently allow users to set public ACLs on a bucket, thereby making its contents publicly visible by default. There is an easy way to prevent this from happening: using public access prevention.

Let's work through a real life scenario using StackQL.

Step 1: Run a query to find buckets which do not have public access prevention enforced

Run the following StackQL query from the shell or via exec:

SELECT name,
  JSON_EXTRACT(iamConfiguration, '$.publicAccessPrevention') as publicAccessPrevention
FROM google.storage.buckets
WHERE project = 'myco-terraform';
/* returns
|-----------------|------------------------|
| name            | publicAccessPrevention |
|-----------------|------------------------|
| myco-tf-nonprod | unspecified            |
|-----------------|------------------------|
| myco-tf-prod    | enforced               |
|-----------------|------------------------|
*/

We can see from the query results that the myco-tf-nonprod bucket does not have public access prevention enforced; let's fix it... using StackQL.

Step 2: Configure public access prevention for a bucket

Run the following StackQL procedure to enforce public access prevention:

EXEC google.storage.buckets.patch
  @bucket = 'myco-tf-nonprod',
  @@json = '{
    "iamConfiguration": {
      "publicAccessPrevention": "enforced"
    }
  }';

Step 3: Confirm public access prevention is enforced

Run the first query again, and you should see that the desired result is in place.

SELECT name,
  JSON_EXTRACT(iamConfiguration, '$.publicAccessPrevention') as publicAccessPrevention
FROM google.storage.buckets
WHERE project = 'myco-terraform';
/* returns
|-----------------|------------------------|
| name            | publicAccessPrevention |
|-----------------|------------------------|
| myco-tf-nonprod | enforced               |
|-----------------|------------------------|
| myco-tf-prod    | enforced               |
|-----------------|------------------------|
*/

Easy!

· 5 min read

In the previous post, we showed you how to enable usage and storage logging for GCS buckets. Now that we have enabled logging, let's load and analyze the logs using BigQuery. We will build up a data file, vars.jsonnet, as we go and show the queries step by step; at the end, we will show how to run this as one batch using StackQL.

Step 1: Create a BigQuery dataset

We will need a dataset (akin to a schema or database in RDBMS parlance), which is basically a container for objects such as tables or views. The code to do this is shown here, with its input values templated from vars.jsonnet:

INSERT INTO google.bigquery.datasets(
  projectId,
  data__location,
  data__datasetReference,
  data__description,
  data__friendlyName
)
SELECT
  '{{ .projectId }}',
  '{{ .location }}',
  '{ "datasetId": "{{ .datasetId }}", "projectId": "{{ .projectId }}" }',
  '{{ .description }}',
  '{{ .friendlyName }}'
;
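Assuming you save the statement above to a file (create_dataset.iql is just an example name), you can run it using the same pattern as the later steps:

stackql exec -i ./create_dataset.iql --iqldata ./vars.jsonnet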

Step 2: Create usage table

Let's use StackQL to create a table named usage to host the GCS usage logs. The schema for the table is defined in a file named cloud_storage_usage_schema_v0.json, which can be downloaded from the location provided; it is supplied to the statement below via the {{ .table.usage.schema }} variable:

/* create_table.iql */

INSERT INTO google.bigquery.tables(
  datasetId,
  projectId,
  data__description,
  data__friendlyName,
  data__tableReference,
  data__schema
)
SELECT
  '{{ .datasetId }}',
  '{{ .projectId }}',
  '{{ .table.usage.description }}',
  '{{ .table.usage.friendlyName }}',
  '{"projectId": "{{ .projectId }}", "datasetId": "{{ .datasetId }}", "tableId": "{{ .table.usage.tableId }}"}',
  '{{ .table.usage.schema }}'
;

Run the following to execute the StackQL command with the input data from vars.jsonnet:

stackql exec -i ./create_table.iql --iqldata ./vars.jsonnet

Step 3: Load the usage data

We have a BigQuery dataset and a table, so let's load some data. To do this, we will need to create and submit a load job, which we can do by inserting into the google.bigquery.jobs resource as shown here:

/* bq_load_job.iql */

INSERT INTO google.bigquery.jobs(
  projectId,
  data__configuration
)
SELECT
  'stackql',
  '{
    "load": {
      "destinationTable": {
        "projectId": "{{ .projectId }}",
        "datasetId": "{{ .datasetId }}",
        "tableId": "{{ .table.usage.tableId }}"
      },
      "sourceUris": [
        "gs://{{ .logs_bucket }}/{{ .object_prefix }}"
      ],
      "schema": {{ .table.usage.schema }},
      "skipLeadingRows": 1,
      "maxBadRecords": 0,
      "projectionFields": []
    }
  }'
;

Run the following to execute:

stackql exec -i ./bq_load_job.iql --iqldata ./vars.jsonnet
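With the usage data loaded, you can analyze it by submitting query jobs to the same google.bigquery.jobs resource. The following is a minimal sketch only; the aggregate query and the omission of a destination table are assumptions, and the results of such a job would be retrieved via the BigQuery console or API:

/* bq_query_job.iql */

INSERT INTO google.bigquery.jobs(
  projectId,
  data__configuration
)
SELECT
  'stackql',
  '{
    "query": {
      "query": "SELECT count(*) AS request_count FROM `{{ .projectId }}.{{ .datasetId }}.{{ .table.usage.tableId }}`",
      "useLegacySql": false
    }
  }'
;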

Clean up (optional)

If you want to clean up what you have done, you can do so using StackQL DELETE statements, as provided below:

NOTE: To delete a BigQuery dataset, you need to delete all of the tables contained in the dataset first, as shown in the following example:

-- delete table(s) 

DELETE FROM google.bigquery.tables
WHERE projectId = '{{ .projectId }}'
AND datasetId = '{{ .datasetId }}'
AND tableId = '{{ .table.usage.tableId }}';

-- delete dataset

DELETE FROM google.bigquery.datasets
WHERE projectId = '{{ .projectId }}'
AND datasetId = '{{ .datasetId }}';