Google Next ’18

My favorite talk was from a guy named Martin Nally (https://www.linkedin.com/in/martinpnally) on “Designing Quality APIs”. << used the British style of putting the period outside of the quotes for clarity’s sake

He was phenomenal and has inspired me to stop doing so much in BASH and make a hard move to Python. I know Python, but I use BASH for roughly everything since I am good at it. Having the same proficiency with Python would be a huge benefit and would allow me to work on better things. That wasn’t the point of his talk but it was my next step…

Nally had a great line that was something like: “If you only have an hour, go listen to Hamming’s talk on important work. I am hoping that you have two hours so that you can listen to mine, too.”

Spend the two hours well and listen to both:

Nally Talk
Hamming Talk

Transcript of the Hamming talk – http://www.cs.virginia.edu/~robins/YouAndYourResearch.html

Breakdown of the Hamming talk – http://calnewport.com/blog/2014/10/15/how-to-win-a-nobel-prize-notes-from-richard-hammings-talk-on-doing-great-research/

Oliver Wendell Holmes, Jr — “A mind that is stretched by a new experience can never go back to its old dimensions.”

#googlenext18

Using an existing (PCF) CredHub in Concourse

We have an existing CredHub in PAS 2.0 and I wanted to use it with Concourse so that I can pull newly rotated keys along with my custom values.

So here is what I did:

# Get authenticated so that you can create a new credhub client:

$ uaac target https://IPOFCREDHUBHOST:8443 --skip-ssl-validation

https://OPSMAN/api/v0/deployed/director/credentials/uaa_login_client_credentials
{"credential":{"type":"simple_credentials","value":{"identity":"MYLOGINNAME","password":"PASSWORDHERE"}}}

https://OPSMAN/api/v0/deployed/director/credentials/uaa_admin_user_credentials
{"credential":{"type":"simple_credentials","value":{"identity":"MYADMINNAME","password":"PASSWORDHERE"}}}
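If you want to script these lookups, the credential JSON is easy to pick apart. Here is a minimal sketch using sed against the hypothetical response shape shown above (jq would be cleaner if you have it installed):

```shell
# Hypothetical response body from the uaa_login_client_credentials endpoint
resp='{"credential":{"type":"simple_credentials","value":{"identity":"MYLOGINNAME","password":"PASSWORDHERE"}}}'

# Pull out the identity and password fields with sed
identity=$(printf '%s' "$resp" | sed -n 's/.*"identity":"\([^"]*\)".*/\1/p')
password=$(printf '%s' "$resp" | sed -n 's/.*"password":"\([^"]*\)".*/\1/p')
echo "identity=$identity"
```

This keeps you from hand-copying secrets out of a browser window when you rotate them.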

$ uaac token owner get login -s PASSWORDHERE #### the SECRET from the UAA LOGIN CLIENT
User name: MYADMINNAME #### the username from the UAA ADMIN USER
Password: ********* #### the SECRET from the UAA ADMIN USER goes here
Successfully fetched token via owner password grant.
Target: https://OPSMAN:8443
Context: MYADMINNAME, from client login

# Add a credhub client:

$ uaac client add --name MYCREDCLIENT --scope uaa.none --authorized_grant_types client_credentials --authorities "credhub.write,credhub.read"
Client ID: MYCREDCLIENT
New client secret: **************
Verify new client secret: **************
scope: uaa.none
client_id: MYCREDCLIENT
resource_ids: none
authorized_grant_types: client_credentials
autoapprove:
authorities: credhub.write credhub.read
name: MYCREDCLIENT
required_user_groups:
lastmodified: 1528
id: MYCREDCLIENT
created_by: 027

# Now get logged in:

$ uaac token client get MYCREDCLIENT -s PASSWORDHERE
$ credhub api OPSMANIP:8844 --skip-tls-validation
$ credhub login --client-name=MYCREDCLIENT --client-secret=PASSWORDHERE
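As an aside, the CredHub CLI also reads its connection settings from environment variables (CREDHUB_SERVER, CREDHUB_CLIENT, CREDHUB_SECRET, CREDHUB_CA_CERT, per the CLI docs; worth confirming on your CLI version), which keeps the secret off the command line and out of shell history:

```shell
# Export the connection settings once instead of passing flags each time.
# Values here are the same placeholders used above.
export CREDHUB_SERVER=https://OPSMANIP:8844
export CREDHUB_CLIENT=MYCREDCLIENT
export CREDHUB_SECRET=PASSWORDHERE
# With these exported, a bare `credhub login` authenticates without flags.
echo "will log in as $CREDHUB_CLIENT against $CREDHUB_SERVER"
```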

# Now see what is already in CREDHUB:

$ credhub f
credentials:
- name: /bosh_dns_health_client
version_created_at:

$ credhub set -n /concourse/test --type value --value MYVALUE
id: e4bd-324-34-34b5
name: /concourse/test
type: value
value: MYVALUE
version_created_at:

$ credhub get -n /concourse/test
id: e4bd-324-34-34b5
name: /concourse/test
type: value
value: MYVALUE
version_created_at:

$ credhub delete -n /concourse/test
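If you end up seeding more than a couple of values, a tiny loop over the `credhub set` pattern above saves typing. This is a dry-run sketch (the helper name and the example key/value pairs are made up; drop the `echo` to actually run the commands):

```shell
# Dry-run sketch: print a `credhub set` command for each NAME=VALUE pair.
# Remove the `echo` to execute instead of print.
seed_secrets() {
  for kv in "$@"; do
    name=${kv%%=*}
    value=${kv#*=}
    echo credhub set -n "/concourse/$name" --type value --value "$value"
  done
}

seed_secrets test=MYVALUE db_password=supersecret
```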

# ADD the lines after ###### NEW STUFF STARTS HERE ###### to your
# concourse.yml file, then redeploy Concourse (my deploy command is below)

instance_groups:
- name: web
  instances: 1
  azs: [z1]
  networks: [{name: ((network_name))}]
  stemcell: HERE
  vm_type: ((web_vm_type))
  jobs:
  - release: concourse
    name: atc
    properties:
      log_level: debug
      token_signing_key: ((token_signing_key))
      external_url: ((external_url))
      ###### NEW STUFF STARTS HERE ######
      credhub:
        url: https://OPSMAN:8844
        client_id: MYCREDCLIENT
        client_secret: PASSWORDHERE
        tls:
          insecure_skip_verify: false
          credhub_ca_cert: |
            -----BEGIN CERTIFICATE-----
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            KEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEYKEY
            -----END CERTIFICATE-----
      ###### NEW STUFF ENDS HERE ######

^^^^ the credhub_ca_cert is from https://OPSMAN/api/v0/deployed/products/cf-BLAH/credentials/.properties.credhub_tls ((the second key))
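To avoid hand-copying the PEM out of that API response, you can script the extraction. A sketch against a hypothetical (truncated) response body; the `cert_pem` field name is an assumption, so check it against what your Ops Manager actually returns:

```shell
# Hypothetical, truncated shape of the credhub_tls credential response.
# The cert_pem field name is an assumption -- verify against your output.
resp='{"credential":{"value":{"cert_pem":"-----BEGIN CERTIFICATE-----\nKEYKEY\n-----END CERTIFICATE-----"}}}'

# Extract the PEM string, then turn the \n escapes back into real newlines
pem=$(printf '%s' "$resp" | sed -n 's/.*"cert_pem":"\([^"]*\)".*/\1/p')
printf '%b\n' "$pem"
```

The `printf '%b'` step matters: the API delivers the certificate as a single JSON string with literal `\n` escapes, but the manifest needs real newlines in the block scalar.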

# After this, you just redeploy your Concourse… You probably shouldn't use my command verbatim — yours will be different

bosh -e BOSHDIRECTORIP deploy -d concourse concourse.yml \
  -l ../versions.yml \
  --vars-store cluster-creds.yml \
  -o operations/static-web.yml \
  --var web_ip=WEBSERVERIP \
  --var external_url=https://CONCOURSEFQDN \
  --var network_name=default \
  --var web_vm_type=small \
  --var db_vm_type=medium \
  --var db_persistent_disk_type=db \
  --var worker_vm_type=worker \
  --var deployment_name=concourse

##### I did have to play with this a bit, and the easiest way to see what was happening was to `bosh -e ENV -d concourse ssh web/STUFFSTUFFSTUFF` and then watch the stdout and stderr logs under /var/vcap/sys/log/atc/.

Also, I created a little pipeline to test with:

test.yml:
jobs:
- name: credhub-test
  plan:
  - do:
    - task: credhub-test
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: ubuntu
        run:
          path: sh
          args:
          - -exc
          - |
            echo "Did it work? $TEST_PARAM"
        params:
          TEST_PARAM: ((test))

# Then I added the value to credhub like above:
$ credhub set -n /concourse/test --type value --value "It worked!"

# Tested with:
$ fly -t MYENV login
$ fly -t MYENV set-pipeline -p CredHubTest -c test.yml
$ fly -t MYENV unpause-pipeline --pipeline CredHubTest
$ fly -t MYENV trigger-job -j CredHubTest/credhub-test -w

started CredHubTest/credhub-test #3

initializing
Pulling ubuntu@sha256:4592d67b6d…

running sh -exc echo "Did it work? $TEST_PARAM"

+ echo Did it work? It worked!
Did it work? It worked!
succeeded

# Make your victory lap around the office and be thrilled to get your SECRETS off of your local machine

Uploading a specific stemcell when PCF Opsman fails

$ bosh2 -e PCF login
Using environment '10.0.0.10'

Email (): USERNAME
Password (): PASSWORD

Successfully authenticated with UAA

Succeeded
$ bosh2 -e PCF upload-stemcell ./light-bosh-stemcell-3445.30-aws-xen-hvm-ubuntu-trusty-go_agent.tgz
Using environment '10.0.0.10' as user 'USERNAME' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

######################################################### 100.00% 177.57 KB/s 0s
Task 26821

Task 26821 | 05:12:14 | Update stemcell: Extracting stemcell archive (00:00:00)
Task 26821 | 05:12:14 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 26821 | 05:12:21 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 26821 | 05:12:21 | Update stemcell: Uploading stemcell bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3445.30 to the cloud (00:00:11)
Task 26821 | 05:12:32 | Update stemcell: Save stemcell bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3445.30 (ami-7660be0b light) (00:00:00)

Task 26821 Started Thu Apr 5 05:12:14 UTC 2018
Task 26821 Finished Thu Apr 5 05:12:32 UTC 2018
Task 26821 Duration 00:00:18
Task 26821 done

Succeeded

Installing Pivotal Cloud Foundry patch pipelines

I wasn’t too sure about using upgrade pipelines for PCF, but it turns out that I was being too risk-averse. I set these up in a test environment and found that they cover all of my secret fears most thoroughly 🙂

Do read the readme as it contains good stuff.

Download the pipelines from https://network.pivotal.io/products/pcf-automation/. I used 0.23, unzipped it to ~/github/pcf-pipelines, and then cd'd into that directory (cd ~/github/pcf-pipelines).

I created a file with the base credentials needed for these pipelines to run in BLAH1:

~/upgrade_pipeline.yml
iaas_type: aws
pivnet_token: BLAHBLAHBLAHBLAHBLAHBLAH
aws_access_key_id: BLAHBLAHBLAH
aws_secret_access_key: BLAHBLAHBLAHBLAHBLAHBLAHBLAHBLAH
aws_region: us-east-1
aws_vpc_id: vpc-BLAHBLAH
# Use opsman_client_id and opsman_client_secret if you log in to Ops Manager
# with UAA. Otherwise use opsman_admin_username and opsman_admin_password if
# you log in directly to Ops Manager with simple auth.
opsman_client_id:
opsman_client_secret:
opsman_admin_username: BLAH
opsman_admin_password: BLAHBLAHBLAH
opsman_passphrase: BLAHBLAHBLAH
opsman_domain_or_ip_address: opsman.BLAHBLAH.BLAH

Set up a quick ENV variable:
# ENV=BLAH1; target=https://BLAH1
-or-
# ENV=BLAH2; target=https://BLAH2

Login to Concourse
# fly -t $target login

For Upgrade-ops-manager
~/${ENV}_upgrade_pipeline-OPSMAN.yml

opsman_major_minor_version: ^1\.09\..*$
existing_opsman_vm_name: opsman-1.09

# fly -t $target set-pipeline --pipeline upgrade-opsman-pl-23 --config upgrade-ops-manager/aws/pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline-OPSMAN.yml
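The product_version_regex values below do the actual pinning, so it is worth sanity-checking a regex locally before a pipeline chases the wrong release line. A quick sketch with grep -E (the version strings are invented for the example):

```shell
# Same pin as opsman_major_minor_version above: only the 1.09.x line matches
regex='^1\.09\..*$'

for v in 1.09.12 1.10.3 1.9.1; do
  if printf '%s\n' "$v" | grep -Eq "$regex"; then
    echo "$v matches"
  else
    echo "$v skipped"
  fi
done
```

Note that `1.9.1` does not match `1\.09\.` — the zero is literal, which is easy to miss when a pipeline silently finds nothing to upgrade.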

For ERT
~/${ENV}_upgrade_pipeline-ERT.yml

product_globs: "cf*pivotal"
product_version_regex: ^1\.09\..*$

# fly -t $target set-pipeline --pipeline upgrade-ert-pl-23 --config upgrade-ert/pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline-ERT.yml

For Rabbit
~/${ENV}_upgrade_pipeline-Rabbit.yml

product_slug: pivotal-rabbitmq-service
product_globs: "*pivotal"
product_version_regex: ^1\.9\..*$

# fly -t $target set-pipeline --pipeline upgrade-rmq-pl-23 --config upgrade-tile/pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline-Rabbit.yml

For AWS Service Broker
~/${ENV}_upgrade_pipeline-Service-Broker.yml

product_slug: aws-services
product_globs: "*pivotal"
product_version_regex: ^1\.1\..*$

# fly -t $target set-pipeline --pipeline upgrade-service-broker-pl-23 --config upgrade-tile/pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline-Service-Broker.yml

For Redis
~/${ENV}_upgrade_pipeline-Redis.yml

product_slug: p-redis
product_globs: "*pivotal"
product_version_regex: ^1\.8\..*$

# fly -t $target set-pipeline --pipeline upgrade-redis-pl-23 --config upgrade-tile/pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline.yml --load-vars-from ~/${ENV}_upgrade_pipeline-Redis.yml