Categories
News

Maintaining Business Continuity Amid Epidemic Response

After a tumultuous week that concluded with President Trump declaring a national emergency in response to novel coronavirus concerns, companies have been enacting their business continuity plans and instructing their employees to work remotely, some for the first time. As a company that has enabled remote work since our founding, we want to share how we ensure our people work remotely in the most effective and secure manner. We offer these lessons learned to help those mitigating these disruptions and challenges.

How We Do Business Securely

Live Anywhere, Work Anywhere

Our employees work from home, but they also work wherever our clients need them. This means we understand how to work securely from home, from our office, and from our clients’ offices. In normal circumstances, those locations also include hotels, airport lounges, and coffee shops. If you have questions or concerns, we can help.

Cloud-Based Availability

We help companies securely migrate to the cloud, and it’s been our home since the beginning. Everything we use for internal operations and client-facing business is cloud-based – allowing Trility to operate seamlessly from anywhere with no disruption. Cloud security and reliability are our No. 1 priority. With the majority of breaches in 2019 caused by cloud storage misconfigurations, we must remain diligent and never compromise when it comes to security.

Secure Device Management

As your business transitions to working from home, reinforce device management policies and determine whether they are secure enough. Our team is geographically distributed across the country. As a best practice, consider some of these approaches we use:

  • Hard drives must have full disk encryption
  • Screens must be locked when you are away from the keyboard
  • Screens must be set to auto-lock after five minutes if you are away from the keyboard and forget to lock
  • Do not store customer data locally to the laptop
  • Don’t attach your laptop to insecure public wireless networks – whenever a VPN is available, use it
  • Use modern operating systems and tools and keep them current at all times

Lines of Communication

The days of 8-to-5 operating hours are gone. Maintain several lines of communication in order to meet the needs of clients, partners, and employees. We use Slack, Zoom, Microsoft Teams, Confluence, Jira, G Suite (Google’s productivity suite), and several more to align with each client’s preferred communication tools. Find one that works for your business.

  • Have dynamic communication tools serve as the foundation for most written communications – preferably not email.
  • Use communication tools that allow for one-to-one, group, team, and enterprise-wide communications through various channels – video and voice-capable ones are ideal.  
  • Provide a stream of news and updates to employees throughout the organization.
  • Integrate the communication tool via APIs to your various productivity tools like your project management, ERP, CRM, or issue tracking systems.  
  • For dynamic document collaboration, use cloud-based tools with traceable version control.

Redundant Systems

No one person or location holds information, and backups are always in place. Whether it’s a person, a process, or a line of code, we have systems and backups for completing work, and documentation is always provided because we never want our clients handcuffed to us.

Start Simple. Then Automate. And Always Be Secure.

Under normal circumstances, we help companies defend or extend their market share in an era of rapid disruption by simplifying, automating, and securing each iteration. The era of disruption has been disrupted. We are here to help.

The Internet is full of noise right now. Big problems need to be solved, and they can’t all be Googled.

Need a Sounding Board?

If you are in need of sound advice, reach out to our team. If you aren’t sure who to contact, email marketing@trility.io and they will direct you to the right person.

About Trility

For those wanting to defend or extend their market share in an era of rapid disruption, Trility simplifies, automates, and secures the journey and has a proven history of reliable delivery results. Headquartered in Des Moines, Iowa, with teams in Omaha, Neb., and Chicago, Ill., our people live everywhere and work where needed to help clients navigate their business evolution with certainty.

Categories
Cloud & Infrastructure

Part IV: Complex Practical Examples of DevOps Unit Testing

In my previous article, I provided a simple example of mocking an AWS resource using localstack and testing with the Python terraform-compliance module. In this article, I will provide a more extensive example that uses kitchen-terraform and terraform-compliance to deploy the following resources in the AWS us-east-1 and us-west-2 regions.

  1. VPC
  2. Subnet
  3. Internet Gateway
  4. Route Table
  5. Route Table Association
  6. Security Group
  7. Key Pair
  8. 2 X EC2 Instance

To begin this example, you will need the following:

  1. Terraform 
  2. Ruby
  3. Python3
  4. Python3 virtualenv module
  5. An AWS account with credentials configured in ~/.aws
  6. An AWS role or user with at least the following minimum permissions:
{
 "Version": "2012-10-17",
 "Statement":
   [
     {
       "Sid": "Stmt1469773655000",
       "Effect": "Allow",
       "Action": ["ec2:*"],
       "Resource": ["*"]
     }
   ]
}

Next, we need to set up a Python3 virtual environment, activate the environment and install the python terraform-compliance module.

which python3
/Library/Frameworks/Python.framework/Versions/3.8/bin/python3
cd ~
mkdir virtualenvs
cd virtualenvs
virtualenv terraform-test  -p /Library/Frameworks/Python.framework/Versions/3.8/bin/python3
source terraform-test/bin/activate
pip install terraform-compliance

Now, we need to create a projects directory and download the sample code from GitHub.

cd ~
mkdir projects
cd projects
git clone git@github.com:rubelw/terraform-kitchen.git
cd terraform-kitchen

Now we are ready to run our tests by executing the ‘execute_kitchen_terraform.sh’ script.

This script will perform the following functions:

  1. Install bundler
  2. Install required gems
  3. Create public and private key pair
  4. Initialize terraform project
  5. Test terraform plan output against terraform-compliance features
  6. Execute kitchen test suite
  • kitchen destroy centos(us-east-1)
  • kitchen create centos(us-east-1)
  • kitchen converge centos(us-east-1)
  • kitchen verify centos (us-east-1)
  • kitchen destroy centos(us-east-1)
  • kitchen destroy ubuntu(us-west-2)
  • kitchen create ubuntu(us-west-2)
  • kitchen converge ubuntu(us-west-2)
  • kitchen verify ubuntu(us-west-2)
  • kitchen destroy ubuntu(us-west-2)
./execute_kitchen_terraform.sh

This script will begin by checking if bundler is installed, and then installing the necessary ruby gems.

Successfully installed bundler-2.1.4
Parsing documentation for bundler-2.1.4
Done installing documentation for bundler after 2 seconds
1 gem installed
Fetching gem metadata from https://rubygems.org/.........
Fetching gem metadata from https://rubygems.org/.
Resolving dependencies..
…
Using kitchen-terraform 5.2.0
Bundle complete! 1 Gemfile dependency, 185 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.

Next, the script will check whether the public/private key pair exists in the test/assets directory; if not, it will create the key pair.

checking if test/assets directory exists
Generating public/private rsa key pair.
Your identification has been saved in test/assets/id_rsa.
Your public key has been saved in test/assets/id_rsa.pub.
The key fingerprint is:
SHA256:0oryWP5ff8kBwQPUSCrLGlVMFzU0rL7TQtJSi6iftyo Kitchen-Terraform AWS provider tutorial
The key's randomart image is:
+---[RSA 4096]----+
|       ooo*X=    |
|       ..o. *o   |
|      o .  . o   |
|     o +  o .    |
|    . +.S= . .   |
|     +.o+ =   .  |
|  . +..  +.o . o |
|   *E  ...+.. +  |
|  . o+=+o. o..   |
+----[SHA256]-----+

Next, the script will test the terraform project using the Python terraform-compliance module and the features located in test/features.

The script begins by checking whether the terraform project has been initialized and, if not, initializing it.

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "random" (hashicorp/random) 2.1.2...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.51.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

After terraform initialization, the script will execute ‘terraform plan’ and output the plan in JSON format. It will then test the plan output against the features in the test directory.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.reachable_other_host will be created
  + resource "aws_instance" "reachable_other_host" {
      + ami                          = "ami-1ee65166"
      + arn                          = (known after apply)
      + associate_public_ip_address  = true
      + availability_zone            = (known after apply)
…
Plan: 11 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: myout

To perform exactly these actions, run the following command to apply:
    terraform apply "myout"

terraform-compliance v1.1.11 initiated

🚩 Features	: /terraform-kitchen/test/features
🚩 Plan File	: /terraform-kitchen/myout.json

🚩 Running tests. 🎉

Feature: security_group  # /terraform-kitchen/test/features/security_group.feature
    In order to ensure the security group is secure:

    Scenario: Only selected ports should be publicly open
        Given I have AWS Security Group defined
        When it contains ingress
        Then it must only have tcp protocol and port 22,443 for 0.0.0.0/0

1 features (1 passed)
1 scenarios (1 passed)
3 steps (3 passed)

You may be asking: why do we need both terraform-compliance features and kitchen-terraform fixtures for our testing? The purpose of terraform-compliance features is to maintain a repository of global, enterprise-level features and tests that get applied to all projects. For example, the test displayed above checks that security groups only open ports 22 and 443; no other ports should be open in the security group.

The kitchen-terraform fixtures and tests are designed for unit testing a single terraform project and are not applied to every terraform project.

Continuing with the script execution, the script will now run the kitchen-terraform tests. It begins by attempting to destroy any existing terraform state in the applicable region.

-----> Starting Test Kitchen (v2.3.4)
-----> Destroying <complex-suite-centos>...
$$$$$$ Verifying the Terraform client version is in the supported interval of >= 0.11.4, < 0.13.0...
$$$$$$ Reading the Terraform client version...
       Terraform v0.12.21
       + provider.aws v2.51.0
       + provider.random v2.1.2
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Initializing the Terraform working directory...
       Initializing modules...
       
       Initializing the backend...
       
       Initializing provider plugins...
       
       Terraform has been successfully initialized!
$$$$$$ Finished initializing the Terraform working directory.
$$$$$$ Selecting the kitchen-terraform-complex-suite-centos Terraform workspace...
$$$$$$ Finished selecting the kitchen-terraform-complex-suite-centos Terraform workspace.
$$$$$$ Destroying the Terraform-managed infrastructure...
       module.complex_kitchen_terraform.random_string.key_name: Refreshing state... [id=none]
…
       Destroy complete! Resources: 11 destroyed.
$$$$$$ Finished destroying the Terraform-managed infrastructure.
$$$$$$ Finished destroying the Terraform-managed infrastructure.
$$$$$$ Selecting the default Terraform workspace...
       Switched to workspace "default".
$$$$$$ Finished selecting the default Terraform workspace.
$$$$$$ Deleting the kitchen-terraform-complex-suite-centos Terraform workspace...
       Deleted workspace "kitchen-terraform-complex-suite-centos"!
$$$$$$ Finished deleting the kitchen-terraform-complex-suite-centos Terraform workspace.
       Finished destroying <complex-suite-centos> (3m31.75s).
-----> Test Kitchen is finished. (3m32.88s)

The script will then initialize the terraform working directory and select a new terraform workspace.

-----> Starting Test Kitchen (v2.3.4)
-----> Creating <complex-suite-centos>...
$$$$$$ Verifying the Terraform client version is in the supported interval of >= 0.11.4, < 0.13.0...
$$$$$$ Reading the Terraform client version...
       Terraform v0.12.21
       + provider.aws v2.51.0
       + provider.random v2.1.2
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Initializing the Terraform working directory...
       Upgrading modules...
       - complex_kitchen_terraform in ../../..
       
       Initializing the backend...
       
       Initializing provider plugins...
       - Checking for available provider plugins...
       - Downloading plugin for provider "random" (hashicorp/random) 2.1.2...
       - Downloading plugin for provider "aws" (hashicorp/aws) 2.51.0...
       
       Terraform has been successfully initialized!
$$$$$$ Finished initializing the Terraform working directory.
$$$$$$ Creating the kitchen-terraform-complex-suite-centos Terraform workspace...
       Created and switched to workspace "kitchen-terraform-complex-suite-centos"!
       
       You're now on a new, empty workspace. Workspaces isolate their state,
       so if you run "terraform plan" Terraform will not see any existing state
       for this configuration.
$$$$$$ Finished creating the kitchen-terraform-complex-suite-centos Terraform workspace.
       Finished creating <complex-suite-centos> (0m16.81s).
-----> Test Kitchen is finished. (0m17.97s)

The next step in the script is to run ‘kitchen converge’. This step will converge the platforms defined in the kitchen.yml file.


Finally, the script will execute ‘kitchen verify’ to test the deployed project against the test suite.

-----> Starting Test Kitchen (v2.3.4)
-----> Setting up <complex-suite-centos>...
       Finished setting up <complex-suite-centos> (0m0.00s).
-----> Verifying <complex-suite-centos>...
$$$$$$ Reading the Terraform input variables from the Kitchen instance state...
$$$$$$ Finished reading the Terraform input variables from the Kitchen instance state.
$$$$$$ Reading the Terraform output variables from the Kitchen instance state...
$$$$$$ Finished reading the Terraform output variables from the Kitchen instance state.
-----> Starting verification of the systems.
$$$$$$ Verifying the 'local' system...

Profile: complex kitchen-terraform (complex_suite)
Version: 0.1.0
Target:  local://

  ✔  state_file: 0.12.21
     ✔  0.12.21 is expected to match /\d+\.\d+\.\d+/
  ✔  inspec_attributes: static terraform output
     ✔  static terraform output is expected to eq "static terraform output"
     ✔  static terraform output is expected to eq "static terraform output"


Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
Test Summary: 3 successful, 0 failures, 0 skipped
$$$$$$ Finished verifying the 'local' system.
…
$$$$$$ Finished verifying the 'remote' system.
$$$$$$ Verifying the 'remote2' system...
DEPRECATION: AWS resources shipped with core InSpec are being moved to a resource pack for faster iteration. Please update your profiles to depend on git@github.com:inspec/inspec-aws.git . Resource 'aws_vpc' (used at /private/tmp/terraform-kitchen/test/integration/complex_suite/controls/aws_resources.rb:11)
DEPRECATION: AWS resources shipped with core InSpec are being moved to a resource pack for faster iteration. Please update your profiles to depend on git@github.com:inspec/inspec-aws.git . Resource 'aws_subnets' (used at /private/tmp/terraform-kitchen/test/integration/complex_suite/controls/aws_resources.rb:16)
DEPRECATION: AWS resources shipped with core InSpec are being moved to a resource pack for faster iteration. Please update your profiles to depend on git@github.com:inspec/inspec-aws.git . Resource 'aws_security_group' (used at /private/tmp/terraform-kitchen/test/integration/complex_suite/controls/aws_resources.rb:22)

Profile: complex kitchen-terraform (complex_suite)
Version: 0.1.0
Target:  aws://

  ✔  aws_resources: VPC vpc-00aa64d66abfa8e9c
     ✔  VPC vpc-00aa64d66abfa8e9c is expected to exist
     ✔  VPC vpc-00aa64d66abfa8e9c cidr_block is expected to eq "192.168.0.0/16"
     ✔  EC2 VPC Subnets with vpc_id == "vpc-00aa64d66abfa8e9c" states is expected not to include "pending"
     ✔  EC2 VPC Subnets with vpc_id == "vpc-00aa64d66abfa8e9c" cidr_blocks is expected to include "192.168.1.0/24"
     ✔  EC2 VPC Subnets with vpc_id == "vpc-00aa64d66abfa8e9c" subnet_ids is expected to include "subnet-000c991d9264c3a5f"
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f is expected to exist
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f is expected to allow in {:ipv4_range=>"198.144.101.2/32", :port=>22}
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f is expected to allow in {:ipv4_range=>"73.61.21.227/32", :port=>22}
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f is expected to allow in {:ipv4_range=>"198.144.101.2/32", :port=>443}
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f is expected to allow in {:ipv4_range=>"73.61.21.227/32", :port=>443}
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f group_id is expected to cmp == "sg-0bcdd1f63ba2a4b6f"
     ✔  EC2 Security Group sg-0bcdd1f63ba2a4b6f inbound_rules.count is expected to cmp == 3
     ✔  EC2 Instance i-0db748e47640739ea is expected to exist
     ✔  EC2 Instance i-0db748e47640739ea image_id is expected to eq "ami-ae7bfdb8"
     ✔  EC2 Instance i-0db748e47640739ea instance_type is expected to eq "t2.micro"
     ✔  EC2 Instance i-0db748e47640739ea vpc_id is expected to eq "vpc-00aa64d66abfa8e9c"
     ✔  EC2 Instance i-0db748e47640739ea tags is expected to include {:key => "Name", :value => "kitchen-terraform-reachable-other-host"}


Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
Test Summary: 17 successful, 0 failures, 0 skipped
$$$$$$ Finished verifying the 'remote2' system.
-----> Finished verification of the systems.
       Finished verifying <complex-suite-centos> (0m43.58s).
-----> Test Kitchen is finished. (0m44.76s)

The last step in the script is the ‘kitchen destroy’.  This will destroy all AWS resources instantiated for the test.

-----> Starting Test Kitchen (v2.3.4)
-----> Destroying <complex-suite-centos>...
$$$$$$ Verifying the Terraform client version is in the supported interval of >= 0.11.4, < 0.13.0...
$$$$$$ Reading the Terraform client version...
       Terraform v0.12.21
       + provider.aws v2.51.0
       + provider.random v2.1.2
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Initializing the Terraform working directory...
       Initializing modules...
       
       Initializing the backend...
       
       Initializing provider plugins...
       
       Terraform has been successfully initialized!
$$$$$$ Finished initializing the Terraform working directory
…
       module.complex_kitchen_terraform.aws_vpc.complex_tutorial: Destroying... [id=vpc-00aa64d66abfa8e9c]
       module.complex_kitchen_terraform.aws_vpc.complex_tutorial: Destruction complete after 1s
       
       Destroy complete! Resources: 11 destroyed.
$$$$$$ Finished destroying the Terraform-managed infrastructure.
$$$$$$ Selecting the default Terraform workspace...
       Switched to workspace "default".
$$$$$$ Finished selecting the default Terraform workspace.
$$$$$$ Deleting the kitchen-terraform-complex-suite-centos Terraform workspace...
       Deleted workspace "kitchen-terraform-complex-suite-centos"!
$$$$$$ Finished deleting the kitchen-terraform-complex-suite-centos Terraform workspace.
       Finished destroying <complex-suite-centos> (2m47.02s).
-----> Test Kitchen is finished. (2m48.17s)

The script will then perform the same steps with Ubuntu instances in the us-west-2 region.

Future of Infrastructure Testing and Standards

In summary, I hope you have enjoyed this four-part series on infrastructure testing. While these articles only covered specific situations and scenarios for infrastructure testing and deployments, I hope they prompt your organization to open a discussion about the future direction of infrastructure testing and standards.

Read the Entire DevOps Testing Series


Categories
Cloud & Infrastructure

Part III: Practical Examples of DevOps Unit Testing

In my last two articles, I’ve talked conceptually and theoretically about the need for DevOps testers.

Part I: Does DevOps Need Dedicated Testers?
Part II: 2019 Cloud Breaches Prove DevOps Needs Dedicated Testers

In this article, I will provide practical examples of unit testing.

Since public cloud storage seems to be a common problem, I will begin with an example unit test for a terraform project that creates a simple S3 bucket.

First, we need to install localstack, so we can test AWS locally.

pip install localstack
export SERVICES=s3
export DEFAULT_REGION='us-east-1'
localstack start

In a new console/terminal and new directory, create a simple terraform project. The provider.tf file should point to the localstack ports.

provider "aws" {
	region = "us-east-1"
	skip_credentials_validation = true
	skip_metadata_api_check = true
	s3_force_path_style = true
	skip_requesting_account_id = true
	skip_get_ec2_platforms = true
	access_key = "mock_access_key"
	secret_key = "mock_secret_key"
	endpoints {
    	s3 = "http://localhost:4572"
	}
}

resource "aws_s3_bucket" "b" {
  bucket = "test"
  acl    = "private"

  tags = {
	Name    	= "My bucket"
	Environment = "Dev"
  }
}

Deploy the terraform project.

terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_s3_bucket.b: Refreshing state... [id=test]

------------------------------------------------------------------------


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
  	+ acceleration_status     	= (known after apply)
  	+ acl                     	= "private"
  	+ arn                     	= (known after apply)
  	+ bucket   	               = "test"
  	+ bucket_domain_name      	= (known after apply)
  	+ bucket_regional_domain_name = (known after apply)
  	+ force_destroy           	= false
  	+ hosted_zone_id          	= (known after apply)
  	+ id                          = (known after apply)
  	+ region                  	= (known after apply)
  	+ request_payer           	= (known after apply)
  	+ tags                    	= {
      	+ "Environment" = "Dev"
      	+ "Name"	    = "My bucket"
    	}
  	+ website_domain          	= (known after apply)
  	+ website_endpoint        	= (known after apply)

  	+ versioning {
      	+ enabled	= (known after apply)
      	+ mfa_delete = (known after apply)
    	}
	}

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

$ terraform apply
aws_s3_bucket.b: Refreshing state... [id=test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
  	+ acceleration_status     	= (known after apply)
  	+ acl                     	= "private"
  	+ arn         	            = (known after apply)
  	+ bucket                  	= "test"
  	+ bucket_domain_name      	= (known after apply)
  	+ bucket_regional_domain_name = (known after apply)
  	+ force_destroy           	= false
  	+ hosted_zone_id          	= (known after apply)
  	+ id                      	= (known after apply)
  	+ region                  	= (known after apply)
  	+ request_payer           	= (known after apply)
  	+ tags                    	= {
      	+ "Environment" = "Dev"
      	+ "Name"    	= "My bucket"
    	}
  	+ website_domain          	= (known after apply)
  	+ website_endpoint        	= (known after apply)

  	+ versioning {
      	+ enabled	= (known after apply)
      	+ mfa_delete = (known after apply)
    	}
	}

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions in workspace "kitchen-terraform-base-aws"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.b: Creating...
aws_s3_bucket.b: Creation complete after 0s [id=test]

Create a test.py file with the following code to test the deployment of the S3 bucket.

import boto3


def test_s3_bucket_creation():
    s3 = boto3.client(
        's3',
        endpoint_url='http://localhost:4572',
        region_name='us-east-1'
    )
    # Call S3 to list current buckets
    response = s3.list_buckets()

    # Get a list of all bucket names from the response
    buckets = [bucket['Name'] for bucket in response['Buckets']]

    assert len(buckets) == 1

Test that the bucket was created.

$ pytest test.py
=============================================================== test session starts ===============================================================
platform darwin -- Python 3.6.0, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /private/tmp/myterraform/tests/test/fixtures
plugins: localstack-0.4.1
collected 1 item

test.py .
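
The same test file could be extended with additional rules. As a hedged sketch, the check below asserts the bucket is not publicly readable; it assumes the bucket name "test" used above and that your localstack version supports the get_bucket_acl call.

def test_s3_bucket_is_private():
    s3 = boto3.client(
        's3',
        endpoint_url='http://localhost:4572',
        region_name='us-east-1'
    )
    # No grant on the bucket should be given to the public AllUsers group
    public_uri = 'http://acs.amazonaws.com/groups/global/AllUsers'
    grants = s3.get_bucket_acl(Bucket='test')['Grants']
    assert all(grant['Grantee'].get('URI') != public_uri for grant in grants)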

Now, let’s destroy the S3 bucket.

$ terraform destroy
aws_s3_bucket.b: Refreshing state... [id=test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_s3_bucket.b will be destroyed
  - resource "aws_s3_bucket" "b" {
  	- acl                     	= "private" -> null
  	- arn                     	= "arn:aws:s3:::test" -> null
  	- bucket                  	= "test" -> null
  	- bucket_domain_name      	= "test.s3.amazonaws.com" -> null
  	- bucket_regional_domain_name = "test.s3.amazonaws.com" -> null
  	- force_destroy           	= false -> null
  	- hosted_zone_id          	= "Z3AQBSTGFYJSTF" -> null
  	- id                      	= "test" -> null
  	- region                  	= "us-east-1" -> null
  	- tags                    	= {
      	- "Environment" = "Dev"
      	- "Name"    	= "My bucket"
    	} -> null

  	- object_lock_configuration {
    	}

  	- replication_configuration {
    	}

  	- server_side_encryption_configuration {
    	}

  	- versioning {
      	- enabled	= false -> null
      	- mfa_delete = false -> null
    	}
	}

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_s3_bucket.b: Destroying... [id=test]
aws_s3_bucket.b: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.

Next, we will install the terraform-compliance python module.

pip install terraform-compliance

Next, we will set up the directory for our test.

mkdir features
cd features

Next, make a file named s3.feature inside the features directory with the following content.

Feature: test

  In order to make sure the s3 bucket is secure:

  Scenario: No public read
    Given I have AWS S3 Bucket defined
    When it contains acl
    Then its value must not match the "public-read" regex

Now, we will return to the project’s root directory and run a terraform plan to get the plan output in JSON format.

terraform plan -out=myout
terraform show -json myout > myout.json

Lastly, we will test the terraform project against the feature file to see if the project is compliant.

$ terraform-compliance -p /tmp/junk/myout.json -f /tmp/junk/features
terraform-compliance v1.1.7 initiated

🚩 Features : /tmp/junk/features
🚩 Plan File : /tmp/junk/myout.json

🚩 Running tests. 🎉

Feature: test  # /tmp/junk/features/s3.feature
	In order to make sure the s3 bucket is secure:

	Scenario: No public read
    	Given I have AWS S3 Bucket defined
    	When it contains acl
    	Then its value must not match the "public-read" regex

1 features (1 passed)
1 scenarios (1 passed)
3 steps (3 passed)

As you will notice from the results, all tests passed because the S3 bucket deployed is private.

While these are just basic examples, they are intended to demonstrate the concept of unit testing infrastructure-as-code, and testing for various rules.

Read the Entire DevOps Testing Series


Categories
News

Nathan Levis Joins Trility as Senior Sales Engineer

Trility Consulting® is proud to announce Nathan Levis has joined the Trility team as a Senior Sales Engineer. In this role, Levis will help identify and craft solution engagements for clients to simplify, automate, and secure their paths forward.

Leveraging his breadth of technical expertise and an open-minded approach, Levis will focus on building partnerships to ensure organizations defend or extend their market share in an era of rapid disruption.


A Holistic View of Business

“Nathan brings invaluable Cloud, DevOps, Software Engineering, and agile experience, coupled with a keen interest in holistic business impact to help companies deliver on their most important technology-enabled priorities. He will be a great asset for our clients and for the Trility team.”

Brody Deren, Chief Strategy Officer for Trility

Trility’s outcome-based delivery method means clients receive observations, recommendations, and options to iterate for the best, highest-priority outcome. Levis will help build upon this proven approach and ensure we continue to deliver over and over again on our promises – meeting time, budget, and defined scopes that align with business and technical requirements. 

Comprised of technologists and business consultants, Trility helps organizations of all sizes achieve business and technology outcomes while equipping them for the next iteration in these areas of focus:

  • Cloud and Infrastructure
  • Product Design and Development
  • Information Security
  • Data Strategy and Management
  • Internet of Things (IoT)
  • Operational Modernization

About Trility

For those wanting to defend or extend their market share in an era of rapid disruption, Trility simplifies, automates, and secures the journey and has a proven history of reliable delivery results.

Headquartered in Des Moines, Iowa, with teams in Omaha, Neb., and Chicago, Ill., our people live everywhere and work where needed to help clients navigate their business evolution with certainty.

Categories
Cloud & Infrastructure

Part II: 2019 Cloud Breaches Prove DevOps Needs Dedicated Testers

To prove that DevOps needs a tester, you need look no further than IdentityForce.com’s list of the biggest breaches of 2019: review the types of breaches involved and investigate why they occurred.

A large percentage of the breaches were related to misconfigured cloud storage and a lack of multi-factor authentication for access to systems.

So who is primarily at fault: development, operations, security, networking, or DevOps?

While there could be many reasons for ‘open’ cloud storage and single-factor authentication to systems, I would suggest these are DevOps-related mistakes: DevOps failed to (1) properly test the security configuration of cloud storage prior to deployment, and (2) set up multi-factor authentication for accessing systems and scan images for proper authentication to systems.

The Last Line of Defense Before Deployment Is the Continuous Integration/Continuous Delivery Pipeline

Some may argue that operations, security and/or networking departments are at fault, but the last line of defense before deployment is the Continuous Integration/Continuous Delivery (CI/CD) pipeline, which should include the application of common rule-sets and tests and is primarily the responsibility of DevOps.

Terraform Sentinel, Nexpose, Other Tools

Others will argue that proper CI/CD tools, such as Terraform Sentinel or Nexpose, or setting up AWS Config rules and using OpsWorks, will prevent these issues; they would be partially correct. These tools provide a layer of security and protection similar to application vulnerability scanning tools, but they do not replace unit testing or integration testing.

Unit Testing Ensures Actual Results Meet Expected Ones

The purpose of unit testing is to ensure the actual results match the expected results. Using public cloud storage as an example, the infrastructure project that creates the cloud storage should contain unit tests that include a:

  1. check for existence 
  2. check authorizations
  3. check security settings 

Upon deployment of the project, the CI/CD pipeline will execute the unit tests and, if they pass, perform integration testing.
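
As a concrete illustration, here is a minimal pytest sketch of those three checks, in the same boto3 style used in the practical examples elsewhere in this series. The bucket name, the localstack endpoint, and the expectation of default encryption are assumptions for illustration, not values from any real project; older localstack versions may not support every call.

import boto3

ENDPOINT = 'http://localhost:4572'   # localstack S3 endpoint (assumed)
BUCKET = 'test'                      # hypothetical bucket name


def _s3():
    return boto3.client('s3', endpoint_url=ENDPOINT, region_name='us-east-1')


def test_bucket_exists():
    # Check for existence: head_bucket raises an error if the bucket is missing
    _s3().head_bucket(Bucket=BUCKET)


def test_bucket_is_not_public():
    # Check authorizations: no grant should be given to the public AllUsers group
    public_uri = 'http://acs.amazonaws.com/groups/global/AllUsers'
    grants = _s3().get_bucket_acl(Bucket=BUCKET)['Grants']
    assert all(grant['Grantee'].get('URI') != public_uri for grant in grants)


def test_bucket_has_default_encryption():
    # Check security settings: default encryption should be configured
    config = _s3().get_bucket_encryption(Bucket=BUCKET)
    assert config['ServerSideEncryptionConfiguration']['Rules']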

Integration Testing for Cross-Boundary Access

The purpose of integration testing is to test individual units combined as a group. From an infrastructure perspective, this means testing cross-boundary access, permissions, and functionality. Continuing with the public cloud storage example, and assuming the cloud storage has a permission allowing another account to access it, there would need to be an integration test in which an external account accesses the cloud storage – but who writes this code, and how do they know they need to write it?

This is where the concept of a DevOps tester is most applicable. Two separate infrastructure projects have been deployed: one for an account that depends on cloud storage in a separate account, and one for the cloud storage itself. Ideally, DevOps should have recognized the dependency when creating the account and written a unit test that checks the permission against a mocked-up storage account. Someone would then need to write a separate integration test that runs in the CI/CD pipeline upon completion of both deployments.
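
A hedged sketch of what such a cross-account integration test might look like is shown below. The role ARN and bucket name are hypothetical placeholders, and the sketch assumes the consuming account reaches the shared storage by assuming an IAM role.

import boto3

CONSUMER_ROLE_ARN = 'arn:aws:iam::111111111111:role/consumer-role'  # hypothetical
SHARED_BUCKET = 'shared-storage-example'                             # hypothetical


def test_external_account_can_read_shared_storage():
    # Assume the role the consuming account would use to cross the account boundary
    creds = boto3.client('sts').assume_role(
        RoleArn=CONSUMER_ROLE_ARN,
        RoleSessionName='integration-test'
    )['Credentials']

    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )

    # The cross-boundary permission is verified if listing the bucket succeeds
    s3.list_objects_v2(Bucket=SHARED_BUCKET, MaxKeys=1)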

Managing the inter/intra project dependencies, ordering, and priority of various infrastructure projects could become very overwhelming for DevOps, and is one of the primary reasons a DevOps tester is needed. Currently, I’m only seeing minimal infrastructure unit-testing, and not seeing any coordinated integration testing across infrastructure projects.

When developers first began performing unit and integration testing, they performed these functions themselves. As the need arose, organizations hired software testers, and those testers took on more and more of the testing responsibilities until software testing fully matured. DevOps is no different from normal software development and is still maturing as a concept.

Infrastructure as Code Maturity Will Require Quality Gateways

As Infrastructure as Code becomes the norm, unit testing and integration testing will become more common. Eventually, we will mature to the point where we are evaluating infrastructure code for code quality and preventing deployments that do not meet quality gateways.

The bottom line: Infrastructure as Code will eventually mature to include unit tests and integration testing and become very similar to a normal software development lifecycle. Organizations should begin to refine their own strategy for how this maturation will occur and who will be responsible for infrastructure testing.

In my next article, publishing tomorrow, I provide practical examples of DevOps unit testing.

Read the Entire DevOps Testing Series


Categories
Cloud & Infrastructure

Part I: Does DevOps Need Dedicated Testers?

As a DevOps/Cloud Engineering professional, and human being, I will make eight mistakes for every 100 words typed. This means I make hundreds, if not thousands, of mistakes each week. 

So how do I catch my mistakes? I would like to say I write good unit and integration tests for my infrastructure-related code and have over 90 percent code coverage, but this would be a lie. 

In fact, if you’re like most DevOps and cloud engineering professionals, you are not expected to write unit and integration tests and instead rely on external tools to catch infrastructure-related errors. So, why aren’t the same unit and integration testing procedures, which are applied to application code, being applied to infrastructure code?

So, why aren’t the same unit and integration testing procedures, which are applied to application code, being applied to infrastructure code?

While the infrastructure team can use resources like Terraform, localstack, and terraform-compliance to mock and test resources, they cannot mock the platforms and services that will live within the infrastructure. Thus, infrastructure teams do actual deployments to the development environment in order to test their infrastructure.

Unfortunately, from a developer’s perspective, the development environment is ‘production’: it is expected to be stable and always available. Developers do not want downtime because the infrastructure team is deploying and testing an infrastructure change – and breaks something.

So, how do we resolve this conflict, in the simplest way possible (assuming the development environment is used 24 hours per day)?

I’ve had good results applying the same software testing strategy used for applications to the infrastructure code base.

By having infrastructure-related unit and integration tests written and run against the infrastructure code prior to deployment to a development environment, you can help ensure infrastructure changes will not break the development environment.

Infrastructure Unit tests might include:

  1. Testing the resource is created and has the proper parameters
  2. Testing pipeline logic to handle exceptions

Infrastructure Integration tests might include:

  1. Testing connectivity
  2. Testing security
  3. Testing permissions

Application/Platform/Service integration tests might include:

  1. Testing Network Access Control Lists
  2. Testing Security Groups
  3. Testing Route Tables
  4. Testing Permissions
  5. Testing for infrastructure controlled keys
  6. Testing for shared resources, and access to shared resources
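
As one hedged illustration of the security group item above, the sketch below checks that any ingress rule open to the world only exposes an allowed port. The group name, region, and allowed ports are hypothetical placeholders, not values from a real project.

import boto3

ALLOWED_PORTS = {22, 443}  # hypothetical policy: only SSH and HTTPS open to the world


def test_security_group_only_exposes_allowed_ports():
    ec2 = boto3.client('ec2', region_name='us-east-1')          # region assumed
    groups = ec2.describe_security_groups(
        Filters=[{'Name': 'group-name', 'Values': ['web-sg']}]  # hypothetical group name
    )['SecurityGroups']

    for group in groups:
        for rule in group['IpPermissions']:
            open_to_world = any(
                ip_range.get('CidrIp') == '0.0.0.0/0'
                for ip_range in rule.get('IpRanges', [])
            )
            if open_to_world:
                # Any rule open to the world must be on an explicitly allowed port
                assert rule.get('FromPort') in ALLOWED_PORTS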

Writing Good Tests Requires Infrastructure, Architectural Knowledge 

While development software testers could write Application/Platform/Service tests, they may not have the infrastructure and architectural knowledge to understand how to write good tests. Instead, a DevOps Software Tester team should be responsible for coordinating with all development software testers for infrastructure-related integration tests.

The infrastructure-related integration tests would then become part of the infrastructure deployment pipeline.

For example, before any infrastructure-related changes are deployed to the ‘development’ environment, the infrastructure should be deployed to a new environment and validated. Once all tests pass, the infrastructure is deployed. In addition, as with application code, infrastructure code should have at least 90 percent code coverage for all infrastructure resources, contain good infrastructure-related integration tests, and have 90 percent coverage for application-related integration tests.

While this solution does not guarantee your development environment will never have an outage, it applies a consistent, organization-wide testing strategy for all code and should help catch many infrastructure-related coding mistakes.

It also provides an additional career path for software testers to enhance their knowledge and skills, and teach DevOps and cloud engineers how to do proper testing.

Or, you can continue to deploy infrastructure-as-code and not write any unit or integration tests.

Read the Entire DevOps Testing Series

To further support this growing need, I will publish three more articles in the coming days.


Categories
Operational Modernization

A Basis for Automation: Predictable, Repeatable, Auditable Workflow

This article was previously published on LinkedIn.

I remember a math teacher from long ago telling the class, “Until you know how to do the math with a pencil and eraser, you cannot use a calculator.” And I have subscribed to this logic my entire life journey. Learn everything at the most basic level. Understand why, not just what. Optimize the information after I understand it so I can then learn even more. This is one of the great experiences that made me want to be a lifelong learner.

Another was a physics teacher sitting beside me, walking me through a book from the Smithsonian Institution on space. His questions to me, as we thumbed through the pages looking at high-resolution pictures of the planets, were simple: “Can you imagine what it would take to travel to that planet?” “What do you think it would take to live on that planet?” And I, thereafter, sought to understand what it would take to solve these problems. This experience taught me, “Think very big and do not be intimidated by large, opaque, complex, high-risk problems.”

I also remember, with great sadness, when I realized there was more to learn than I would live long enough to understand and value.

All Big Things Become Small Things

So, I looked for methods of discovering, organizing, and leveraging greater volumes of information in my everyday life. I needed a way to discover and appreciate the depth and breadth of a seemingly infinite set of bodies of knowledge while accepting I only have one lifetime to do it. I needed a way to take big things and break them into small things so I could choose what, when, where, why, and under what circumstances they may directly apply, or cross-apply, to other endeavors in my life.

Common words for organizing big things into little things include paradigms, frameworks, patterns, templates, reference architectures, taxonomies, decomposition, configuration management, deconstruction, classifications, and even a table of contents.

Figure 1. Big things decompose into small things.

To the dismay of all logophiles out there, I’m going to condense this discussion to one word – patterns.

I look for reusable patterns in everything. Patterns to understand things around me. Patterns to organize. Patterns to do work. Patterns to assess risk. Patterns to deliver. Life is full of reusable patterns.

Life is full of reusable patterns.

So, when you’re trying to understand something big, break it down into parts and pieces. It will be easier to understand, organize, prioritize, and build back up again at scale because you will discover one or more patterns. Whether something is broken down to the atomic level or to user classes, epics and user story threads depends upon your project.

Work Has A Predictable Pattern

Most people understand the idea of work. I am cold. I put on a coat. I am warm. I have dirty dishes. I wash them. I have clean dishes.

State A. Do something. State B.

And many people want to see the results they want to see – right now.

I want pecan pie right now. I want it to look and taste just like what my Grandmother used to make for me on holidays and birthdays. My Grandmother has passed away; I don’t have her recipe, and I failed to ask her to teach me how to make the pie. I want it nonetheless. Now. Exactly the way she used to make it. With whipped cream. Do not fail me, maker of pecan pie.

What I want and what I can get is dependent upon an ability to:

  • See the big thing (pecan pie);
  • See the parts and pieces (ingredients); and
  • Understand the method (order and method of operations).

Whether I’m making Grandmother’s pecan pie, building a barn or implementing a continuous delivery pipeline in the cloud, they all require the same steps, in the same order, to begin and complete the work.

The basic pattern of work looks like this:

  • a request for work,
  • entry criteria (criteria by which we agree to start work),
  • a method of doing and a method of checking work,
  • exit criteria (criteria defining “done and acceptable”),
  • and a deliverable.
Figure 2. “Do Work” – A simple view of how work happens for one person.

People Do Not Perform Consistently

The unpredictable, non-conforming, variability of people is one of the things that makes life an adventure. People and their culture are rich and unique, and our memories and the stories we tell about them form a vibrant tapestry.

When it comes to work, if we know the pattern of work, then why do so many hard-working teams fail to deliver, let alone deliver well?

Humans are not automatons. Humans are, by nature, variably behaved, expressed and experienced. For 100 people to complete 50 individual tasks 100 times in a predictable, repeatable and auditable manner is a pretty tall order. Which may explain why a grande caramel macchiato retrieved from the same chain store in different cities tastes different so often. There is a recipe, an order of steps and method. Nonetheless, sometimes the coffee tastes like the highly caffeinated liquid weight gain I expected and other times a painful waste of money.

If there is a pattern and the work is human and manual, there will be variable results. Human results are variable. Why does only one team win the annual football bowl game while others do not? More interestingly for our conversation, explain why that same team doesn’t win every single year of their existence. Variability.

We know what a predictable, repeatable pattern of work looks like and we know how human variability can impact that pattern in everyday life.

Now let’s look at what happens when we have many people.

Work Patterns Get Complicated As They Scale

Assembling around an objective to achieve a result is seen in all of nature. It isn’t unique to humans.

Bees. Ants. Animal packs. Sports teams. Military units. Projects.

For humans, work becomes more complicated as the number of people involved increases. With more people come more units of work, more steps, and more latency between steps.

Consider the following behavioral pattern many of us see in organizations, “This team does work, then this team does work, then this team does work.” Have you seen this before?

Queue work, start work, do work, hand-off. Repeat.

When we watch ants decompose food, there appears to be a constant flow of activity. When we watch bees collect honey, we see the same characteristics. Flow.

When we watch people on projects, it simply isn’t the same.

Imagine being in the passenger seat of an old 1968 F-150 pickup while a teenager is learning to drive a stick-shift. Now imagine every time said teen pops the clutch, taps the accelerator, hits the brakes, or all of them at the same time, your head rocks back and forth between the glass behind you and dash in front of you. There was no padding in the dashboard. Learning to control the clutch is a practiced, learned behavior. Learning to push the accelerator while letting off the clutch is also a learned behavior.

Flow is a learned behavior. Flow must be sought on-purpose.

Now, imagine smacking your face on the dashboard, glass or both every time work moves from person to person on a project in your company. Start. Go fast. Stop. Start. Stop suddenly. Kiss the dashboard.

What if your face was the barometer of your organization’s flow? Imagine increasing the number of people, concurrent projects and tasks to span the company and you are the only person who hits glass and dashboard for all projects, all people, all steps, all starts and stops. Need a helmet?

If you think I’m exaggerating to make a point, I am. A little. The stick shift story is real. I feel bad for my dad and all the watermelons we ran over in the field that day. He ended up sitting in the seat sideways with arms against the seat and dash in a self-defense position. If there had been predictable start and stop patterns that day, perhaps he could have navigated the situation more enjoyably. I still remember the look on his face.

Figure 3. “Do Work and Wait” – A simple view of work for multiple people.

When it comes to performing work and delivering results, the ideal experience achieves a smooth flow of work being performed, with little to no wait times in between steps, and smooth transitions.

Wait time and unpredictable transitions likely cost my dad time on this earth realized as dynamic, premature aging. Wait time and unpredictable transitions cost companies time and money.

The “start and stop” method is also known by some as the “throw it over the wall” principle.

“My part is done. Worked for me. Good luck!”

We want flow like ants and bees. We more likely experience starting, stopping and pain like my dad while teaching me to drive a stick-shift.

Manufacturing Controls Flow Through Batch Sizes

If you and I are on the same page regarding the value of flow versus kissing a dashboard with your face, let’s talk about how to get there.

Decades ago, the manufacturing industry began the use of assembly lines (automation) to increase flow, throughput, predictability, quality and manage their scaling economics. They received another boost in productivity and value when they moved from large to small batches along the same assembly line. Wait times decreased and flow increased.

And due to batch sizes, their ability to adapt to change increased.

In the manufacturing world, wait times (inventory in a wait state) are considered unrealized revenue and therefore waste. Manufacturing supply chains, therefore, seek to eliminate waste. They build things to make money. They do not build things to store them in the supply chain. The key? Flow.

Figure 4. Too much in-flight, undelivered work is unrealized revenue.

If warehouses full of product are considered unrealized revenue and therefore waste in the manufacturing industry, how do we then categorize in-flight, incomplete, undelivered or otherwise unfinished software solutions in companies? What do you think that implies with regards to the numbers of in-flight user stories or numbers of in-flight software projects?

What do you think happens when we introduce the idea of rework?

Work Always Has Rework

Ideally, when we run projects, things always go as planned. And when they don’t? We end up dealing with two subjects that weren’t in the original plan – technical debt and refactoring.

Technical debt is defined as work you know you need to do now but decide to kick down the road until later. This creates additional work in the backlog. Just like interest on debt, the longer it sits there, the more time, complexity and/or cost it will take to address. Technical debt is work.

Refactoring is defined as changing, modifying or otherwise evolving something from a previously acceptable state of existence to a new and improved state of existence for the purposes of delivering desired value. Refactoring is also work.

Figure 5. Rework – “I found a problem. What do I do with it?”

When John finishes his task, the deliverables move down-line for Jane to complete her task. Jane finds a problem with the inherited deliverable and either fixes it, ignores it or sends it back up-line for John’s eventual attention.

If Jane fixes it on behalf of John, was it correct and complete? If she ignores it, will it be found and addressed later? By whom? If she sends it back up-line, how will John know? And when will John get to the refactoring work given his existing backlog of prioritized work?

The problem discovered by Jane impacts her ability to complete planned work. And depending upon her decision, it will become work for one or more others.

Now multiply John’s and Jane’s experience by the numbers of people, teams, projects, stories, and associative decisions to acknowledge, fix, send it back up-line to someone else’s queue or ignore it altogether.

This churn contributes to wait times between steps. And if a person doesn’t plan for rework, it also contributes to cranky people.

Rework happens. Plan for it. Manage impact to flow by decomposing all work into small, edible pieces. Manage your batch sizes. Seek flow.

How Do We Achieve Flow?

To consider automation, we have to first understand work, batch sizes, and flow. Otherwise, with automation, all we’re really doing is taking bad things, making them go faster and calling it digital transformation.

Steps to achieve flow:

1. Manage batch sizes. Break big things down into small things.

2. Minimize and eliminate wait times between steps and people.

3. Plan for, invite, and accept rework. Manage it through batch sizes.

4. Automate.

5. Repeat.

Automation is not the goal. Predictable, repeatable, auditable flow is the goal.

Automation is only the medium.

My math teacher has made sense for a very long time.

Pencil first. TI-88 programmable calculator second.

Read Basis for Automation, Part II, to learn the steps for ensuring a project team addresses requirements and prioritizes backlog before automating a continuous delivery pipeline.

Interested in Examining Flow?

Trility joined Bâton Global for a podcast series on Human-Centric Transformation. In the four-part series, they discuss how leaders and technology can simplify and automate processes for team members, stakeholders, and customers.

Before you listen, view our companion infographics highlighting key takeaways as these organizations discuss how to better respond to industry headwinds and pressures.

Categories
Leadership

Defining Key Attributes of Exceptional Delivery Managers

This article was originally published on LinkedIn.

Monster team sizes, long delivery timelines, embarrassing expenditures, more headaches than deliverables, hypothetical and yet intangible value, unknown compliance and team attrition. You thought you had the right leadership team in place to deliver your desired outcomes. Now you’re wondering.

Reliable delivery is something we all seek in our organizations. We all face the same questions when approving priorities and efforts, allocating money, forming projects, teams and, in particular, appointing leaders.

We all ask: “What problem do I actually need to solve?” And, “Who can I depend upon to make sure this happens?”

It always comes down to leadership. While it seems like it should be easy, finding the right person is actually hard. We don’t know we’re staffed incorrectly until we’re already heading off the road (or in the ditch).

Title history, degrees, professional certifications, training programs, and certificates of completion should, in theory, separate people who can deliver from people who might not or cannot. In my experience, all of those things point to someone who desires learning, advancement, and success, yet they don’t always equate to great attitudes, aptitudes, abilities, or results. In other words, those things are often false positives.

Then how do we increase the probability of finding someone who will predictably and repeatably deliver value for our organizations?

If you’re looking for a shortcut, I don’t have one. I do, however, have some experience-based recommendations. And if you take the time to follow these steps, the return-on-investment window is long.

Your company is on a journey of growth, opportunity, change, aggressive pursuits, adaptation, highs, lows, easy days, hard days and sometimes ludicrous days. You need someone leading your projects who is on a journey just like your company. In the fight, not just studying it over a weekend for a two-day certificate of completion and calling it good enough.

For me and my teams, there are three classes of information I explore when considering teammates as members of our delivery teams. It isn’t foolproof. However, it has been very reliable. I have found great people who delight our teams and clients. I ask and research the following areas:

  • What behavioral attributes do we want exhibited in our company and people?
  • What knowledge attributes do we expect leaders to gain or bring with them?
  • What experience attributes do we expect leaders to gain or bring with them?

A challenge for anyone trying to find the right people: with so many titles, words, certifications, methods, philosophies, influencers, founders, books, conferences, and so on, how do we know which ones are meaningful at all, let alone for our unique context?

For example, what is the meaningful difference between a Program, Project or Product Manager? When do we use a Scrum Master or Product Owner versus a Delivery Manager? If I have a Scrum Master on my delivery team, am I good? Do today’s Agile titles mean we’re doing things new and better? Are pre-Agile ideas less valuable? Are PMI certifications outdated while Scaled Agile certifications are actually the best solutions for our tomorrow?

I believe these are all interesting philosophical conversations we can have over a pot of tea, but not the most important problem to solve. We want to hire a great person, not a great bowl of word soup.

We want great people who predictably, repeatably deliver value in our teams, across our projects, in our companies, and with our clients. If we’re debating certifications and titles, let alone hiring based upon them, we’re discussing the wrong subject. We want people who are illustrated by their past and desired journey rather than defined by their past alone.

The below attribute lists are our ideal target lists. Given everyone is on a journey, we’re looking for people who bring these attributes with them, are on a journey to attain them, or have the right attitude and aptitude to be taught.

1. Look for People with Healthy Behaviors

At the end of the day, our teams, projects, and clients will be a reflection of the people we hire. We want people who want to win. People who never quit. People who regularly bring out the best in themselves and everyone around them. People who will never stop yearning to become more today than they were yesterday and expect the same of everyone around them.

2. Look for Diverse Bodies of Knowledge Awareness

It is fine to be an expert in a body of knowledge. Even expected. However, to believe that one body of knowledge will transcend industry, context, and time is small, limited thinking. We look for people capable of more than one thing; otherwise our results will be limited by the one thing that person knows. Find people who pursue knowledge, have broad interests, and are lifelong learners. If you don’t know what all of these things are, why you care, or when you would use them, get busy.

3. Look for Diverse Experience

There is value in experience. For a life-long learner, experience is the ultimate teacher. We look for people who have breadth and depth of experience because we like people with larger and larger experiential data sets upon which to reflect, learn, and apply their realizations.

4. Log Aggregation, Machine Learning, and Artificial Intelligence

To borrow a few of the redundant and/or popular terms of the day, companies increasingly pursue log aggregation, data lakes, data warehouses, and data cubes.

  • How do I get the data out?
  • How do I put it all in the same place?
  • How do I correlate, corroborate or otherwise discover patterns and relationships which reveal new ways of seeing, thinking, deciding and acting thereafter?
  • If I put all of my data in one place and start using machine learning to process, organize, and extrapolate meaning as my data set grows, how do I use it?
  • And, if I want an artificial intelligence (AI) to begin making decisions for me where it makes sense, how do I leverage that as well?

I submit to you that an exceptional delivery manager encompasses all of these things including data aggregation, constant learning, and an intelligent decision layer.

At the same time that companies pursue these ideas in technology, they overlook the same qualities in experienced Delivery Managers.

Look at the below picture. Consider that the knowledge, behavioral and experiential attributes are the ever-growing data pool of a great Delivery Manager. Consider that your Delivery Manager is your machine learning solution which continues to derive patterns and possibilities by constantly increasing the data pool with new knowledge while continually churning the data, relationships, realizations, and decision possibilities thereafter. Consider that your Delivery Manager becomes an increasingly valuable AI seeing, hearing, learning, thinking, deciding AND thereafter acting on your behalf.

A two-day “how-to-deliver” certification course will not get you, your Delivery Manager, or your company and clients where you want to go. It is only a blip in the data pool: a valid experience that led to specific acquired knowledge, a very small, singular moment of data on a long journey. Do it anyway. And then do 10 more.

5. Do Your Job to Enable Exceptional Delivery Managers

No matter who you hire, all Delivery Managers will need to know your desired outcomes and any particular constraints that matter to you and your organization. Look at it as defining done (desired outcomes) and the parameters of the game (methods and tools).

Your company and teams need to know where they are meant to go and under what conditions they can travel and arrive there.

An experienced Delivery Manager will notice whether you have these in place and whether they are clear and achievable, and will help create, modify, manage, and complete them accordingly. Their job is to see the entire company, not just the problem of the moment.

What does that look like? Let’s look at a snippet of a conversation between a senior leader at your company and an exceptional Delivery Manager being considered for hire.

Senior Leader at your company speaking: “Hello Janice. I’m happy you’ve considered ABZ Company for your next adventure. We’re currently a USD 50MM pharmaceutical company on track to be an 80MM company in the next five years. We have adopted Scaled Agile for our preferred technology delivery framework, love the Agile space, but need help becoming more educated, experienced, and successful along the way. Most of our tool-sets are modern, our folks have been training on many things in the last three years and we have clear goals we’d like achieved over the next 18 months. We’ve been having quality and compliance problems with our deliverables, and I’m not sure how we need to fix this using our current tools and methods. What are your thoughts?”

Exceptional Delivery Manager Janice: “It sounds like you are experiencing quite a bit of success with both the company and its cultural transformation. Either one is hard alone; doing both at the same time, and well, says a lot about the leadership and people in this company. Impressive.

It is outstanding that you know where you are and where you want to go. And it is outstanding that you know how you’d like to get there using the Agile body of knowledge, new tools, and retraining your people for the future. Well done.

You mentioned you’ve been having challenges with quality and compliance. Of course, I have many questions and cannot pretend to fully understand your company in such a short period of time. However, I wonder: since your company has adopted Scaled Agile to help with delivery behaviors, have you also introduced evolutionary ideas for the engineering and information security teams? In other words, Scaled Agile is designed as a delivery framework; it is not, and is not designed to be, an engineering or information security body of knowledge. You have to look elsewhere for those things. Teach me about the engineering changes that have been introduced to date.”

You, as a senior leader, are continually faced with more questions than answers and are always looking for options and recommendations that lead to choices. If you don’t know something, you tend to look in places where the data pool is deeper and wider than the one you currently possess.

Look to and hire exceptional Delivery Managers. They are the embodiment of the ever-increasing pools of aggregated data, machine learning, and AI you seek. Just as no software is ever done, so it is with exceptional Delivery Managers. Yesterday was good. Today will be better. Tomorrow, better again.


I drink a lot of caffeinated coffee and tea. And I’m on airplanes a lot. Drinking coffee and tea. I’m making a commitment to write more articles in 2020 – and increase the number of speaking engagements at which I drink coffee and tea. It is material we discuss every day at Trility and with our clients. It is material that you may find helpful as well. If you’d like to keep informed, and even interact, please connect or follow me on LinkedIn. Or we can send you an email.

We are also always looking for system thinkers to join us – those who can see the larger landscape and do the work as well. If this resembles you, email us.

Categories
News

Gerard Forbes Joins Team

Trility Consulting® is proud to announce Gerard Forbes has joined the Trility team as Director of Business Development in Omaha. In this role, Gerard is responsible for business development, strategic and client partnerships in Nebraska. His focus is to build trusted relationships by understanding when and how our team can help organizations create predictable, repeatable, and auditable digital solutions – one iteration at a time.

Gerard Forbes, Director of Business Development in Omaha

A Proven Approach

Forbes relies on his diverse professional and technology background to help clients understand and define challenges and align services to outcomes that help those clients take leaps forward. 

“Gerard is going to be a great asset for Trility and our clients,” said Brody Deren, Chief Strategy Officer for Trility. “He has a great ability to meet clients where they are, understand both their business and technical challenges and opportunities, and help them find the best approach to solving their most critical problems.”

Gerard will leverage his experience working for a technology consulting and recruiting services firm, where he additionally played roles in leadership and corporate training in the consumer packaged goods and retail industries. He also serves as an adjunct professor at the University of Nebraska-Omaha, where he instructs business students on leadership theory and application.

About Trility

Trility is a collection of value-driven advisors, technologists, and business people who have critical skills and experience to help clients win in the modern digital economy. We solve complex problems, help guide clients down roads they’ve never ventured, and offer solutions in Cloud & Infrastructure, Product Design and Development, Data Strategy, IoT, Information Security, and Operational Modernization.

Trility’s proven approach is to provide observations and recommendations along the way, presenting options for clients to iterate for the best, high-priority outcome. We never compromise on security and leverage our team’s full-stack expertise and ability to train your team to question security requirements from Day 1 and build with continuous delivery everywhere.

Interested in learning more?

Connect with Gerard Forbes on LinkedIn, email or call him at (402) 212-7835.

Categories
Information Security

Information Security Can’t Rely on Pinky Swears

This article was originally published on LinkedIn.

“We hire great people” is something we all hear companies regularly communicate.

How do you feel about a hypothetical company that believes the risk of an information security breach is low largely because they hire good people? In other words, their information security strategy is to hire good people and trust them individually to do the right thing. Maybe they even sign a paper pinky swearing they’ll always do what’s right.

Let’s say this hypothetical company houses some of the most sensitive data about you and your family or company that exists. Your information is passed around via email or attachments inside and outside the company. Information is even passed between teammates via chat tools sometimes. Said information is also accessible, editable and exchangeable between partner/vendor companies in the background. This data is unencrypted when stored (at rest) and when passed around (in transit).

Do you know about companies like this? Is this your company? Is a company like this minding your personal data?

While information security is everyone’s responsibility, it is first the responsibility of the company itself. Hiring great people does not alleviate, or defer, the responsibility an organization has to be compliant with information security policies, legislation and industry best practices. If we can’t trust a company to do the right thing, why would we value their brand?

Interesting Things We’ve Heard Through the Years:

  • “Our people, vendors, and partners do the right thing. That’s why we work with them. I don’t think we have anyone in the company who would abuse our customer data.”
  • “First, we must get product to market and prove the idea is viable. We’ll validate viability of our product by customer adoption velocity and demand for new features. If the numbers suggest customers want to buy and use our product, then we’ll figure out what security we need thereafter.”
  • “We’re going to wait and see if this policy/legislation has any teeth. If we start getting fined for non-compliance, then we’ll begin considering if, how and to what extent we need to invest in information security.”
  • “Our industry is not very interesting to most folks. We don’t believe our company, products or services are really a threat to anyone. And we believe the likelihood of being attacked or otherwise exploited is pretty low to non-existent. We’ll wait until it makes sense before investing in some of the information security measures we hear about. It all sounds so expensive anyway.”
  • “It is actually cheaper for us to pay the fines.”
  • “Our customers don’t know any better.”

“Security first” or “security by design” is a choice. And it must first be the choice of the governing board and company leadership before it will become a reality for employees, partners and vendors. If it is not a top-down, constantly communicated, verifiable expectation, it does not exist.

7 Steps to Become a Security-First Organization

1. Internally declare that your company will become “security-first”

When initiatives start at the bottom of the company, they risk dying out due to lack of energy, resources, and attention. Sometimes they risk burning out the people trying to get the changes implemented as hope turns into apathy. It is the proverbial “fight against the man.”

As a Board or Senior Leadership of a company, what is important to you is important to the company. If it isn’t, that is a different problem altogether.

For a company to become a security-first focused organization, the declaration of importance, direction, and expected actions must come from Senior Leadership first.

An example “From the CEO” communication:

“Folks, effective immediately, we will put security, privacy, and compliance first in our daily operations. This means with every product, service, interaction, and communication, internally and externally, we will consider what must be secured, how it must be secured and under what conditions we must secure it – data, systems, teams, company and client interests inclusive. It is not a task to accomplish and be done. This must be our DNA. It must be our daily lifestyle. And it will take time to get to a proper baseline of competency and time to maintain, evolve and increase it.

From this day forward there will exist training expectations that must be pursued and accomplished monthly, quarterly and annually. Look for them in your Learning Management System (LMS) assignments. All roles, titles, and capacities. No exceptions. Me included.

And from this day forward you will see our CISO take a more prominent role in defining our pursuits, our strategies and validation of our compliance readiness. We as a leadership team choose to proactively educate our teams, protect our assets and behave in a manner expected by our Founders and those who have come before us to build this great company.

Thank you for your commitment to being the best.”

Top-down declarations become realities.

2. Determine what industry regulations apply to your company

Information Security / Regulatory Compliance is a career. And there is a shortage of people who do this type of work. Find them. Hire them. Leverage them. Knowing what you must align to will save you money. Knowing what you need not align to will also save you money.

There are quick determinants that help flesh out direction, follow-up actions, and investment. The road will be neither short nor easy, but this list will help point you toward what matters, when it matters, and to what extent.

  • In what industry do you operate?
  • Is your business localized to your state only? Your country only?
  • Do you do business internationally? What countries?
  • Do you exchange money with customers?
  • Do you ask for and store personally identifiable information?
  • Are you working with non-governmental organizations? Charities? Governments? Militaries? Public companies? Private companies?
  • Have you failed any previous compliance audits?
  • Have you been fined by a third-party organization for non-compliance?

3. Determine what industry best practices will help your company

You may discover your information security folks want impenetrable castle walls, which eventually mean your employees are unable to use the bathroom in the name of security. An extreme.

You may also discover your engineers want the freedom to use anything at any time for any reason in the name of innovation, digital transformation or being competitive. Probable.

And your business unit leaders? You’re expecting them to grow the business, delight the industry and client base. They want to do whatever is necessary and appropriate to meet the goals expected of them as well.

Security, innovation and growth are not mutually exclusive. They must be collaborative and it will require constant, purposeful and involved leadership. Otherwise, it is just theater.

Regulated industries communicate best practices and compliance expectations, which makes it easier to know what matters and what doesn’t. Where your time will be spent is determining how tightly to dial up the security requirements on your operation and how they will impact friction, flow, deliverable velocity and value from the organization.

Unregulated industries still have communicated best practices and compliance recommendations. In the absence of all knowledge, ask the following questions of your Chief {Information Officer, Information Security Officer, Product Officer, Technology Officer}:

  • Against what information security / regulatory compliance standards must we be measuring ourselves?
  • How are we training our people to be predictably and repeatably compliant with these expectations in our everyday lives?
  • How can we regularly prove that what we expect is actually being employed?
  • How do we culturally make security and compliance a behavioral assumption versus a Learning Management System (LMS) assigned task?

4. Implement role-based security awareness training

No one is exempt from information security. No person, role or title. Like leadership and teams, security is a “we” endeavor.

Not all roles in the company have the same requirements. Some roles are specialized while others are more general. Below is a simplification of this idea.

Specialized: Information Security folks may say higher-level things like confidentiality, integrity, and availability. They may roll out policies, procedures and learning courses while facilitating internal and third-party audits. They’ll even be discussing Plans of Actions & Milestones (POA&M or POAM) items resultant from audits. They’ll need to know frameworks, behaviors, implementations, monitoring methods, and reaction/response ladders and industry standards like NIST-CSF, PCI-DSS, HIPAA and so many more.

Specialized: Engineers who focus on infrastructure, networks, data, and software technology stacks need to know about the what, but more importantly, they need to understand the why and how as they do their work. For example: data encryption at rest and in transit, authorization and authentication, securing failover infrastructures, hybrid cloud solutions, bring-your-own-device security, separation of duties, least privilege, and need-to-know principles. There is more than one way to implement any one of these concepts, and Engineers need to know them.
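As one hedged illustration of a single implementation choice, the sketch below uses Python and boto3 (an AWS-based stack is assumed; the bucket name is hypothetical) to turn on default encryption at rest for an S3 bucket and attach a policy that denies unencrypted, non-TLS transport:

```python
import json
import boto3  # assumes an AWS-based stack; other clouds have equivalents

s3 = boto3.client("s3")
bucket = "example-sensitive-data-bucket"  # hypothetical bucket name

# Encryption at rest: require server-side encryption by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Encryption in transit: deny any request that does not arrive over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

The same two controls could just as well be expressed in Terraform, CloudFormation, or another cloud entirely; the point is that the engineer needs to know both the what and at least one how.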

Generalized Awareness: Everyone else.

Figure 1. The diagram above demonstrates at a high level how role-based security awareness training could be rolled out and that everyone is a part of it. No one ever gets to be “clueless.”

5. Include the information security role in solution delivery teams

Whether your company calls them Scrum, Strike, Agile, Product, or Project Teams, the team construct used to deliver an idea from inception to conclusion often contains multiple roles and therefore multiple people.

In order to become a security-by-design or security-first company, your teams must be shaped to enable the desired outcome, which suggests that an information security/regulatory compliance expert must be included from project inception through the course of the project.

This conversation is less about the recipe for roles and teams and more about the desired outcome. Context-driven teams influenced by desired outcomes.

Strike Team Delivery Model
Figure 2. Trility’s preferred team pattern is the use of a Strike Team that always includes an Information Security/Regulatory Compliance expert involved throughout the lifecycle of the project or product. While we tend to construct teams based upon the desired project outcomes, we include an Information Security expert on the team by default.

If the information security people are technical, they may be helpful with design, development, and implementation every step of the way, all day every day. If the information security people are non-technical, they may be more aptly leveraged in a principle-based guidance role during iteration planning, stand-ups, demos and reviews to ensure the project continues to move forward between the fences.

Either way, there must be a full-time champion for the company and clients in terms of privacy, compliance and best practices to achieve the desired outcome.

6. Determine how you will proactively test your ongoing compliance

There are any number of methods to test ongoing compliance. Blind trust. Word of mouth. Internal (infrequent) manual inspection. Third-party annual inspections. Or continuously through automation.

Our typical practice is to identify what attributes of compliance must continually exist and automate those attributes into a series of tests that are called, executed, logged, and tagged every time new infrastructure and applications are built. When non-compliance happens, alert someone (as shown below); otherwise, keep moving. We have some examples out there in the ether for you to thoughtfully consider.
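For example, a minimal compliance check of this kind, sketched here in Python with boto3 (an AWS environment is assumed), could run on every build to verify that no security group leaves SSH open to the world, log the result, and fail the build on a violation:

```python
import logging
import sys

import boto3  # assumes an AWS environment; equivalent checks exist elsewhere

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("compliance")

def ssh_open_to_world(ec2_client) -> list:
    """Return the security group IDs that allow SSH (port 22) from 0.0.0.0/0."""
    offenders = []
    for group in ec2_client.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            covers_ssh = (
                rule.get("IpProtocol") == "-1"  # "all traffic" rules
                or rule.get("FromPort", -1) <= 22 <= rule.get("ToPort", -1)
            )
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if covers_ssh and open_to_world:
                offenders.append(group["GroupId"])
    return offenders

if __name__ == "__main__":
    offenders = ssh_open_to_world(boto3.client("ec2"))
    if offenders:
        log.error("Non-compliant security groups (SSH open to world): %s", offenders)
        sys.exit(1)  # fail the build; wire this exit code to your alerting
    log.info("SSH exposure check passed")
```

Run on every build and wired into alerting, a handful of checks like this becomes the “continuously through automation” option from the list above.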

Automated Security Tests
Figure 3. The diagram above shows how you can build in automated security/compliance tests such that every build now has the capability of logging activity, events, alerts, and compliance status.

7. Attach quality and compliance tools to the delivery pipeline

Continuous delivery pipeline behaviors are not new; widespread awareness and adoption simply takes time to expand across industries, companies, leaders, and teams. As more companies implement continuous delivery principles, more of the work that used to be excluded because it took too much time, or was performed manually, in arrears, and infrequently, will be automated, providing real-time information radiators.

Look for vendors and tools that are API-driven and have a great online community, openly available developer and administrative documentation, and active tool support. These tools enable you to perform automated analysis-refactor loops now versus waiting until later and hoping for the best. It is worth your money to know your risk exposure now.

Continuous delivery pipeline with security built-in
Figure 4. This diagram illustrates where in the continuous delivery pipeline predictable, repeatable, and auditable security behaviors may be baked into the solution delivery process now versus waiting until later.
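As one hedged sketch of what such a pipeline gate might look like, the snippet below (Python; the report path and severity field names are assumptions, since every scanner formats its output differently) reads the JSON findings report produced by whatever scanning tool your pipeline runs and fails the build when findings exceed an agreed threshold:

```python
import json
import sys
from collections import Counter

# Assumptions for this sketch: the scanner stage before this one wrote its
# findings to findings.json, and each finding carries a "severity" field.
# Adjust the path and field names to match your tooling.
REPORT_PATH = "findings.json"
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def main() -> int:
    with open(REPORT_PATH) as report:
        findings = json.load(report)

    counts = Counter(item.get("severity", "UNKNOWN").upper() for item in findings)
    print(f"Scan summary: {dict(counts)}")

    blocking = sum(counts[severity] for severity in BLOCKING_SEVERITIES)
    if blocking:
        print(f"Build gate failed: {blocking} blocking finding(s).")
        return 1  # non-zero exit fails the pipeline stage
    print("Build gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The scanner itself is whatever your vendor evaluation produces; the gate simply makes its findings a first-class, build-failing citizen of the pipeline rather than a report someone reads in arrears.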

Hire great people. Cast a vision, communicate desired outcomes, define clear objectives, give them the resources to be successful, give them rules of engagement and stay involved.

Great people make mistakes. And even great people sometimes do not know what to do. Security frameworks help mitigate oversights and mistakes and provide guidance when people are in new, different, and complex situations.


I drink a lot of caffeinated coffee and tea. And I’m on airplanes a lot. Drinking coffee and tea. I’m making a commitment to write more articles in 2020 – and increase the number of speaking engagements at which I drink coffee and tea. It is material we discuss every day at Trility and with our clients. It is material that you may find helpful as well. If you’d like to keep informed, and even interact, please connect or follow me on LinkedIn. Or we can send you an email.

We are also always looking for system thinkers to join us – those who can see the larger landscape and do the work as well. If this resembles you, email us.