Set up Kubernetes and Vault for Boundary
In this tutorial you will take on the role of the operations team to deploy Boundary, Vault, and Kubernetes.
Prerequisites
This tutorial requires you to have completed the Connect to Kubernetes using Boundary introduction tutorial.
Deploy Kubernetes
(Persona: operations)
minikube is a CLI tool that provisions and manages the lifecycle of a single-node Kubernetes cluster locally on your system.
Deploy a Kubernetes cluster using minikube.
Open a new terminal session.
Create a new working directory in your home directory called boundary-kubernetes to complete the lab exercises. Execute all commands from this working directory unless otherwise specified.
$ mkdir ~/boundary-kubernetes && cd ~/boundary-kubernetes/
Start a Kubernetes cluster.
$ minikube start
😄  minikube v1.25.2 on Darwin 12.3
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=8100MB) ...
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
The initialization process takes several minutes as it retrieves any necessary dependencies and executes various container images.
Verify the status of the Minikube cluster.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Kubernetes is now set up.
Start a pod that represents a production workload a developer may need to view.
$ kubectl run nginx --image=nginx
pod/nginx created
Deploy Boundary
(Persona: operations)
HashiCorp Boundary is an identity-aware proxy aimed at simplifying and securing least-privileged access to cloud infrastructure.
In this workflow you will test integrating Kubernetes with HCP Boundary.
Launch the HCP Portal and login.
From the Overview page, click Boundary in the left navigation menu.
Click Deploy Boundary.
In the Instance Name text box, provide a name for your Boundary instance.
Under the Create an administrator account section, enter the Username and Password for the initial Boundary administrator account. You will use the administrative username and password to authenticate with Boundary.
Note
The Boundary instance is publicly accessible. Be sure to use a non-standard username (e.g. not root or administrator) and create a strong password.
Click Deploy.
Wait for the instance to initialize before proceeding.
Click the copy icon for the Cluster URL in the Getting started with Boundary section.
Return to the terminal you started Kubernetes in and set the BOUNDARY_ADDR environment variable to the copied URL.
$ export BOUNDARY_ADDR=<actual-boundary-address>
HCP Boundary is now set up.
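Before moving on, you can sanity-check the value you pasted. The following helper is an illustrative sketch (not part of the Boundary CLI) that assumes the standard HCP cluster URL shape of https://<uuid>.boundary.hashicorp.cloud:

```shell
# Illustrative helper: returns success only when the argument looks like an
# HCP Boundary cluster URL. A failure usually means a truncated or mistyped paste.
check_boundary_addr() {
  case "${1:-}" in
    https://*.boundary.hashicorp.cloud) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage:
# check_boundary_addr "$BOUNDARY_ADDR" || echo "BOUNDARY_ADDR looks wrong" >&2
```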
Deploy Vault
(Persona: operations)
Vault is an identity-based secrets and encryption management system. Vault can generate secrets on demand for some systems, such as AWS and Kubernetes.
Select the appropriate tab to deploy an HCP Vault Dedicated cluster or deploy Vault in dev mode.
Launch the HCP Portal and login.
From the Overview page, click Vault in the left navigation menu.
From the Vault overview click Create cluster under the Start from scratch section.
Select your preferred cloud provider.
Click the Vault tier pull down menu and select Development.
Click the Cluster size pull down menu and select Extra Small.
Under the Network section, accept or edit the Network ID, Region selection, and CIDR block for the HVN.
Leave Cluster accessibility set to Public.
Security consideration
All new development tier Vault Dedicated clusters are configured with public access enabled by default. This means clients can connect from anywhere. For production tiers (starter, standard, and plus) private access will be enabled by default. This means you can only connect from a transit gateway or peered VPC (AWS) or VNet (Azure).
Under the Basics section, accept or edit the default Cluster ID (vault-cluster).
Under Templates, select Start from scratch.
Click Create cluster.
Wait for the cluster to initialize before proceeding.
Under Quick actions, click Public Cluster URL.
Return to the terminal you started Kubernetes in and set the VAULT_ADDR environment variable to the copied URL.
$ export VAULT_ADDR=<public_cluster_URL>
Return to the Overview page and click Generate token.
Within a few moments a new token will be generated.
Copy the Admin Token.
Return to the terminal you started Kubernetes in and set the VAULT_TOKEN environment variable to the copied token.
$ export VAULT_TOKEN=<admin_token>
Set the VAULT_NAMESPACE environment variable to admin.
$ export VAULT_NAMESPACE=admin
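The remaining steps assume VAULT_ADDR, VAULT_TOKEN, and VAULT_NAMESPACE are all set in this terminal. A small sketch of a fail-fast check (the helper name and behavior are illustrative, not part of Vault):

```shell
# Illustrative helper: fail with a message when a required environment
# variable is unset or empty, so later steps do not fail more obscurely.
require_env() {
  name="$1"
  eval "value=\${$name:-}"
  if [ -z "$value" ]; then
    echo "missing required variable: $name" >&2
    return 1
  fi
}

# Usage:
# require_env VAULT_ADDR && require_env VAULT_TOKEN && require_env VAULT_NAMESPACE
```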
Open a new terminal window, and start a proxy to expose the Kubernetes API.
$ kubectl proxy --disable-filter=true
Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on 127.0.0.1:8001
Leave this terminal open with the proxy running.
Open a new terminal window, and start ngrok to create a tunnel to the proxy listening on port 8001.
Warning
ngrok is used to expose the Kubernetes API to Vault Dedicated. Using --scheme=http exposes the API without encryption to avoid TLS certificate errors. For production workloads, use a private peering or transit gateway connection with trusted certificates.
$ ngrok http --scheme=http 127.0.0.1:8001
Example output:
ngrok                                                 (Ctrl+C to quit)

Session Status                online
Account                       username (Plan: Free)
Update                        update available (version 3.0.5, Ctrl-U to update)
Version                       3.1.1
Region                        United States (us)
Latency                       32.791235ms
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://d12b-34-567-89-10.ngrok.io -> 127.0.0.1:8001

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
Leave this terminal open with ngrok running.
Copy the ngrok forwarding address.
Return to the terminal you started Kubernetes in and set an environment variable for the ngrok forwarding address.
$ export KUBE_API_URL=<actual-address-from-ngrok>
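A quick shape check can catch a mistyped forwarding address before Vault configuration fails later. This sketch assumes the free-tier address form shown in the ngrok output above (subdomains under ngrok.io); the helper itself is illustrative, not an ngrok feature:

```shell
# Illustrative helper: succeed only when the argument looks like an ngrok
# forwarding address of the form http(s)://<subdomain>.ngrok.io.
is_ngrok_url() {
  case "${1:-}" in
    http://*.ngrok.io|https://*.ngrok.io) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage:
# is_ngrok_url "$KUBE_API_URL" || echo "KUBE_API_URL does not look like an ngrok address" >&2
```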
Vault Dedicated is now set up.
Leave the current terminal window open.
In a new terminal, start Vault in dev mode.
$ vault server -dev -dev-root-token-id=root
==> Vault server configuration:

Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Go Version: go1.19.1
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: false, enabled: false
Recovery Mode: false
Storage: inmem
Version: Vault v1.13.0-dev1, built 2022-09-26T14:39:49Z
Version Sha: 2a7c3f2f76e6fd6a7f8622ea68d82bcf9dcf9686

==> Vault server started! Log data will stream in below:

# ...snip...

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variables:

$ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate.

Unseal Key: PLV0OXO9VmF5VB8qAnq4pQIGzWkzzYypRNcDtrhSSgU=
Root Token: root

Development mode should NOT be used in production installations!
Dev mode starts Vault listening on port 8200 and uses the -dev-root-token-id parameter to set the root token to root.
Return to the terminal you started Kubernetes in and set the VAULT_ADDR environment variable.
$ export VAULT_ADDR='http://127.0.0.1:8200'
Set the VAULT_TOKEN environment variable.
$ export VAULT_TOKEN=root
Open a new terminal window, and start a proxy to expose the Kubernetes API.
$ kubectl proxy --disable-filter=true
Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on 127.0.0.1:8001
Leave this terminal open with the proxy running.
Return to the terminal you started Kubernetes in and set an environment variable for the Kubernetes API URL.
$ export KUBE_API_URL=http://127.0.0.1:8001
Vault is now set up.
Deploy an HCP Boundary worker
An HCP worker deployed on the same network as Vault is required for integrating private Vault clusters with HCP Boundary. To learn more about setting up self-managed workers, refer to the Self-Managed Worker Registration with HCP Boundary tutorial.
Download the Boundary Enterprise binary
Download the Boundary Enterprise binary to the ~/boundary-kubernetes/ directory.
You can manually download the latest binary for your operating system by navigating to the Boundary releases page. The example below demonstrates downloading the binary using the command line.
Note
The binary version should match the version of the HCP control plane. Check the version of the control plane in the HCP Boundary portal, and download the appropriate version using wget. The example below installs the 0.13.2 version of the Boundary Enterprise binary.
Below is an example of downloading and unzipping the Boundary Enterprise binary on Ubuntu, MacOS, and Windows.
The following command downloads the Boundary Enterprise binary and unzips it to the current directory.
$ wget -q https://releases.hashicorp.com/boundary/0.13.2+ent/boundary_0.13.2+ent_linux_amd64.zip ;\
sudo apt-get update && sudo apt-get install unzip ;\
unzip *.zip
Once downloaded, verify the version of the boundary binary.
$ ./boundary version
Version information:
Build Date: 2023-08-04T12:29:52Z
Git Revision: b1f75f5c731c843f5c987feae310d86e635806c7
Metadata: ent
Version Number: 0.13.2+ent
The following command downloads the Boundary Enterprise binary and unzips it to the current directory.
$ wget -q https://releases.hashicorp.com/boundary/0.13.2+ent/boundary_0.13.2+ent_darwin_amd64.zip ;\
/usr/bin/unzip *.zip
Once downloaded, verify the version of the boundary binary.
$ ./boundary version
Version information:
Build Date: 2023-08-04T12:29:52Z
Git Revision: b1f75f5c731c843f5c987feae310d86e635806c7
Metadata: ent
Version Number: 0.13.2+ent
The following command downloads the Boundary Enterprise binary and unzips it to the current directory.
$ Invoke-WebRequest -OutFile worker.zip https://releases.hashicorp.com/boundary/0.13.2+ent/boundary_0.13.2+ent_windows_amd64.zip ;
Expand-Archive -Path worker.zip -DestinationPath .
Once downloaded, verify the version of the boundary binary.
$ .\boundary.exe version
Version information:
Build Date: 2023-06-07T16:41:10Z
Git Revision: fb4ed58459d555d480e70ddc20d2639c26ad0f8f
Metadata: ent
Version Number: 0.13.2+ent
Ensure the Version Number matches the version of the HCP Boundary control plane, so the worker supports the latest HCP Boundary features.
Write the worker config
Next, create a new file named hcp-worker.hcl in the ~/boundary-kubernetes/ directory.
$ touch ~/boundary-kubernetes/hcp-worker.hcl
Open the file with a text editor, such as Vi.
Paste the following configuration into the worker config file:
disable_mlock = true
hcp_boundary_cluster_id = "<cluster-id>"
listener "tcp" {
address = "127.0.0.1:9202"
purpose = "proxy"
}
worker {
auth_storage_path = "/home/myusername/boundary/hcp-worker"
tags {
type = ["worker", "dev"]
}
}
Update the cluster id in the hcp-worker.hcl file:
The <cluster-id> on line 3 can be determined from the UUID in the HCP Boundary Cluster URL. For example, if your Cluster URL is https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud, then the cluster id is c3a7a20a-f663-40f3-a8e3-1b2f69b36254.
The auth_storage_path should match the full path to the ~/boundary-kubernetes/hcp-worker directory, such as /home/myusername/boundary/hcp-worker.
disable_mlock = true
hcp_boundary_cluster_id = "<cluster-id>"
listener "tcp" {
address = "127.0.0.1:9202"
purpose = "proxy"
}
worker {
auth_storage_path = "/Users/myusername/boundary-kubernetes/hcp-worker"
tags {
type = ["worker", "dev"]
}
}
Update the cluster id in the hcp-worker.hcl file:
The <cluster-id> on line 3 can be determined from the UUID in the HCP Boundary Cluster URL. For example, if your Cluster URL is https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud, then the cluster id is c3a7a20a-f663-40f3-a8e3-1b2f69b36254.
The auth_storage_path should match the full path to the ~/boundary-kubernetes/hcp-worker directory, such as /Users/myusername/boundary-kubernetes/hcp-worker.
disable_mlock = true
hcp_boundary_cluster_id = "<cluster-id>"
listener "tcp" {
address = "127.0.0.1:9202"
purpose = "proxy"
}
worker {
auth_storage_path = "C:/Users/myusername/boundary-kubernetes/hcp-worker"
tags {
type = ["worker", "dev"]
}
}
Update the cluster id in the hcp-worker.hcl file:
The <cluster-id> on line 3 can be determined from the UUID in the HCP Boundary Cluster URL. For example, if your Cluster URL is https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud, then the cluster id is c3a7a20a-f663-40f3-a8e3-1b2f69b36254.
The auth_storage_path should match the full path to the ~/boundary-kubernetes/hcp-worker directory, such as C:/Users/myusername/boundary-kubernetes/hcp-worker.
Save this file.
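If you prefer not to copy the UUID by hand, the cluster id can also be derived from the cluster URL with POSIX parameter expansion. A sketch using the example URL from this tutorial (the variable names are illustrative):

```shell
# Derive the cluster id from an HCP Boundary cluster URL: strip the scheme,
# then keep everything before the first dot of the hostname.
cluster_url="https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud"
host="${cluster_url#https://}"   # remove the leading "https://"
CLUSTER_ID="${host%%.*}"         # drop everything from the first "." onward
echo "$CLUSTER_ID"               # c3a7a20a-f663-40f3-a8e3-1b2f69b36254
```

In your own session you would substitute "$BOUNDARY_ADDR" for the example URL.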
Start the worker
With the worker config defined, start the worker server. Provide the full path to the worker config file (such as /home/myusername/boundary/hcp-worker.hcl).
$ ./boundary server -config="/home/myusername/boundary/hcp-worker.hcl"
==> Boundary server configuration:
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: true, enabled: false
Version: Boundary v0.13.2+ent
Version Sha: b1f75f5c731c843f5c987feae310d86e635806c7
Worker Auth Current Key Id: preaching-favored-verbalize-widen-duchess-relish-jurist-sly
Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRmUHY1BGSRA6cePp8RWHQFUYSrf3hnDw4ETPswFnMrcxx6tq7BUWD5azGULzPecPicuYGD6qg3qYvaGRgHgKwvh9FLY9Gu891KSj8hAef19JjHog8d7qpo9f9KoiwrhfcV2YxGyVu1P943656iNGCFHWiBR3ofsyTatQ7fzcMV2ciKtuYYGfx4FfiRStnkAzoE98RdR2LeCk2huRkFt7ayeeWVfD7Awm8xaZfFJn4pYRJwu2LRBeNs915warEBaS8XHXSKoi3cRUYif8Qu
Worker Auth Storage Path: /home/ubuntu/boundary-kubernetes/hcp-worker
Worker Public Proxy Addr: 127.0.0.1:9202
==> Boundary server started! Log data will stream in below:
{"id":"Rb7PMMBdZa","source":"https://hashicorp.com/boundary/ip-172-31-84-100/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address e58fe114-7624-431c-994d-b6670e90b09f.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2022-09-19T22:16:38.770232603Z"}
$ ./boundary server -config="/Users/myusername/boundary/hcp-worker.hcl"
==> Boundary server configuration:
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: true, enabled: false
Version: Boundary v0.13.2+ent
Version Sha: b1f75f5c731c843f5c987feae310d86e635806c7
Worker Auth Current Key Id: preaching-favored-verbalize-widen-duchess-relish-jurist-sly
Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRmUHY1BGSRA6cePp8RWHQFUYSrf3hnDw4ETPswFnMrcxx6tq7BUWD5azGULzPecPicuYGD6qg3qYvaGRgHgKwvh9FLY9Gu891KSj8hAef19JjHog8d7qpo9f9KoiwrhfcV2YxGyVu1P943656iNGCFHWiBR3ofsyTatQ7fzcMV2ciKtuYYGfx4FfiRStnkAzoE98RdR2LeCk2huRkFt7ayeeWVfD7Awm8xaZfFJn4pYRJwu2LRBeNs915warEBaS8XHXSKoi3cRUYif8Qu
Worker Auth Storage Path: /Users/myusername/boundary-kubernetes/hcp-worker
Worker Public Proxy Addr: 127.0.0.1:9202
==> Boundary server started! Log data will stream in below:
{"id":"Rb7PMMBdZa","source":"https://hashicorp.com/boundary/ip-172-31-84-100/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address e58fe114-7624-431c-994d-b6670e90b09f.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2022-09-19T22:16:38.770232603Z"}
$ .\boundary.exe server -config="C:\Users\myusername\boundary\hcp-worker.hcl"
==> Boundary server configuration:
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: true, enabled: false
Version: Boundary v0.13.2+ent
Version Sha: b1f75f5c731c843f5c987feae310d86e635806c7
Worker Auth Current Key Id: preaching-favored-verbalize-widen-duchess-relish-jurist-sly
Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRmUHY1BGSRA6cePp8RWHQFUYSrf3hnDw4ETPswFnMrcxx6tq7BUWD5azGULzPecPicuYGD6qg3qYvaGRgHgKwvh9FLY9Gu891KSj8hAef19JjHog8d7qpo9f9KoiwrhfcV2YxGyVu1P943656iNGCFHWiBR3ofsyTatQ7fzcMV2ciKtuYYGfx4FfiRStnkAzoE98RdR2LeCk2huRkFt7ayeeWVfD7Awm8xaZfFJn4pYRJwu2LRBeNs915warEBaS8XHXSKoi3cRUYif8Qu
Worker Auth Storage Path: /Users/myusername/boundary-kubernetes/hcp-worker
Worker Public Proxy Addr: 127.0.0.1:9202
==> Boundary server started! Log data will stream in below:
{"id":"Rb7PMMBdZa","source":"https://hashicorp.com/boundary/ip-172-31-84-100/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address e58fe114-7624-431c-994d-b6670e90b09f.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2022-09-19T22:16:38.770232603Z"}
The worker will start and begin attempting to connect to the upstream Controller.
The worker also outputs its authorization request as the Worker Auth Registration Request token. This will also be saved to a file, auth_request_token, defined by the auth_storage_path in the worker config.
Note the Worker Auth Registration Request: value. This value can also be found in the ~/boundary-kubernetes/hcp-worker/auth_request_token file. Copy this value.
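Instead of copying the token from the terminal, you can read it from the auth_request_token file mentioned above. A sketch of a small helper (the function name and usage are illustrative, not part of the Boundary CLI):

```shell
# Illustrative helper: print the registration token saved by the worker,
# or fail with a message if the file does not exist yet.
load_worker_token() {
  token_file="$1"
  if [ ! -f "$token_file" ]; then
    echo "token file not found: $token_file" >&2
    return 1
  fi
  cat "$token_file"
}

# Usage (path follows the auth_storage_path used in this tutorial):
# export WORKER_TOKEN="$(load_worker_token ~/boundary-kubernetes/hcp-worker/auth_request_token)"
```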
Register the worker with HCP
HCP workers can be registered using the Boundary CLI or Admin Console Web UI.
Authenticate to HCP Boundary as the admin user.
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI - a new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Once logged in, navigate to the Workers page.
Notice that only HCP workers are listed.
Click New.
Scroll down to the bottom of the New Worker page and paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
Ensure that the BOUNDARY_AUTH_METHOD_ID and BOUNDARY_ADDR variables are set.
$ export BOUNDARY_AUTH_METHOD_ID=copied-value-from-boundary-ui; export BOUNDARY_ADDR=copied-value-from-hcp-portal
Log into the CLI, providing the admin login name and admin password when prompted.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_ZbB6UXpW3B
Expiration Time: Mon, 13 Feb 2023 12:35:32 MST
User ID: u_ogz79sV4sT
The token was successfully stored in the chosen keyring and is not displayed here.
Next, export the Worker Auth Request Token value as an environment variable.
$ export WORKER_TOKEN=<Worker Auth Registration Request Value>
$ export WORKER_TOKEN=<Worker Auth Registration Request Value>
The token is used to issue a create worker request that authorizes the worker with Boundary and makes it available. Currently worker creation is only supported for workers with an authorization token.
Now, create the HCP worker:
$ boundary workers create worker-led -worker-generated-auth-token=$WORKER_TOKEN
Worker information:
Active Connection Count: 0
Created Time: Tue, 27 Sep 2022 16:20:21 MDT
ID: w_2yjenAlGvV
Type: pki
Updated Time: Tue, 27 Sep 2022 16:20:21 MDT
Version: 1
Scope:
ID: global
Name: global
Type: global
Authorized Actions:
no-op
read
update
delete
add-worker-tags
set-worker-tags
remove-worker-tags
The worker logs should now show a successful authentication event:
$ docker logs boundary-worker-hcp
==> Boundary server configuration:
Cgo: disabled
Listener 1: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: true, enabled: false
Version: Boundary v0.13.2+ent
Version Sha: b1f75f5c731c843f5c987feae310d86e635806c7
Worker Auth Current Key Id: peddling-exploring-playing-tripping-morphine-ammonium-subsiding-observant
Worker Auth Registration Request: pdZ5SAAebKa9DmnokkNu5EuBPbKECPFbRtxRpPL8uTVyUHz77jE1tvoHxdPWshGfbXEz8RoQ7hFpDK88nAzqZjysgetKHcskDnAW3WGmLnN42KUb4YbvU5UyXCNkr1yrjCekUrzSuaEQQEyFGp4g7R4h9ZCg2caTzERqCyg8SuPcyzpphHDtErpWKuFF6JVUiCuKaN2E1qTxNjsTuvnYpikD6vpMfsg8D3dHVhqH1vU4dFz64pjYkmr1Vp4b4BiuiBchcpdhegUnMRW2xBPYfUm99AojYsVge8j9Yjm
Worker Auth Storage Path: /boundary-kubernetes/hcp-worker
Worker Public Proxy Addr: 127.0.0.1:9202
==> Boundary server started! Log data will stream in below:
{"id":"mdMRjeMyL5","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address cf8ecb28-11df-468b-a093-8c5807bb5652.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:13:16.09562876Z"}
{"id":"YYyh4B2xTl","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).controllerDialerFunc","data":{"msg":"worker has successfully authenticated"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:24.417419275Z"}
{"id":"e8piDOzNc1","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"error","data":{"error":"rpc error: code = Unavailable desc = last connection error: connection error: desc = \"transport: Error while dialing node is not yet authorized\"","error_fields":{},"id":"e_sW2xqiJJml","version":"v0.1","op":"worker.(Worker).sendWorkerStatus","info":{"msg":"error making status request to controller"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:24.420569017Z"}
{"id":"KOpQeo8GTx","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"error","data":{"error":"status error grace period has expired, canceling all sessions on worker","error_fields":{},"id":"e_mmLXDKAWys","version":"v0.1","op":"worker.(Worker).sendWorkerStatus","info":{"grace_period":15000000000,"last_status_time":"2022-09-27 22:13:16.092815366 +0000 UTC m=+0.232876928"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:24.4207781Z"}
{"id":"EcvpxxQMqO","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).sendWorkerStatus","data":{"msg":"Upstreams after first status set to: [2e5750e5-59d0-8655-2d57-2bc07642143c.proxy.boundary.hashicorp.cloud:9202 b25ab36d-99b2-68bf-108e-25ac1582288f.proxy.boundary.hashicorp.cloud:9202 d6da51d4-89ab-4521-d8f5-109b45681775.proxy.boundary.hashicorp.cloud:9202]"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:26.995860237Z"}
{"id":"ZtPAnCcxe6","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).controllerDialerFunc","data":{"msg":"worker has successfully authenticated"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:27.36120034Z"}
{"id":"rBrS6MBLSN","source":"https://hashicorp.com/boundary/c0d799c7b255/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.checkHcpConnection","data":{"msg":"validated HCP Boundary upstream"}},"datacontentype":"application/cloudevents","time":"2022-09-27T22:20:27.463751132Z"}
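Rather than scanning the full stream by eye, the log text can be filtered for the authentication event. A minimal sketch (the helper name `check_worker_auth` is illustrative, not part of Boundary or Docker; the pattern matches the log message shown above):

```shell
# check_worker_auth: scan worker log text on stdin and report whether a
# successful authentication event appears.
check_worker_auth() {
  if grep -q 'worker has successfully authenticated'; then
    echo "worker authenticated"
  else
    echo "no auth event found"
  fi
}

# Usage, with the container name from this tutorial:
#   docker logs boundary-worker-hcp 2>&1 | check_worker_auth
```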
Validate lab setup
The tutorials in this series use environment variables to simplify the provided commands.
Verify all necessary environment variables are set.
$ printenv | grep 'VAULT_\|BOUNDARY_\|KUBE_'
BOUNDARY_ADDR=https://6a6eade6-example.boundary.hashicorp.cloud
VAULT_ADDR=https://vault-cluster-exampe-012034567.06f0568a.z1.hashicorp.cloud:8200
VAULT_TOKEN=hvs.CAESIPA-K6F9TfY5Vm2nfObyzYum-peHhXPuYzX_BsybIKJMGicKImh2cy4wVXN4NWpyN3A4NUJ
VAULT_NAMESPACE=admin
KUBE_API_URL=http://d12b-34-567-89-10.ngrok.io
If you are missing any of the environment variables, go back and verify each product is running and set the required variables.
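Any missing variables can be exported before continuing. The values below are placeholders, not working endpoints; substitute the addresses and token from your own deployment:

```shell
# Placeholder values only -- replace each with the value from your own
# HCP Boundary cluster, Vault cluster, and ngrok tunnel.
export BOUNDARY_ADDR="https://your-cluster-id.boundary.hashicorp.cloud"
export VAULT_ADDR="https://your-vault-cluster.hashicorp.cloud:8200"
export VAULT_TOKEN="hvs.your-admin-token"
export VAULT_NAMESPACE="admin"                # HCP Vault Dedicated only
export KUBE_API_URL="http://your-tunnel.ngrok.io"
```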
HCP Vault Dedicated
VAULT_NAMESPACE is only required when using Vault Dedicated and will not be present when following the Vault Dev mode workflow.
Verify connectivity by authenticating to Boundary. Enter the admin username and password when prompted.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):

Authentication information:
  Account ID:      acctpw_NgTnYJHTls
  Auth Method ID:  ampw_PqQpz2sqvx
  Expiration Time: Wed, 19 Jul 2023 09:52:02 EDT
  User ID:         u_09ja9DkXo3

The token was successfully stored in the chosen keyring and is not displayed here.
Verify connectivity to Vault.
$ vault login token=$VAULT_TOKEN
WARNING! The VAULT_TOKEN environment variable is set! The value of this
variable will take precedence; if this is unwanted please unset VAULT_TOKEN or
update its value accordingly.

Success! You are now authenticated. The token information displayed below is
already stored in the token helper. You do NOT need to run "vault login" again.
Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.EXamPl3t7782QvHbHatL2f56i98VpKePzgqvHGicKImh2cy55bXZyMUVseWNZa00yem9pM3NuaHppRnQuOXpoQ0UQ9gE
token_accessor       tzwWshH6PwGHIFWq1dCCN2Xz.9zhCE
token_duration       5h54m30s
token_renewable      false
token_policies       ["default" "hcp-root"]
identity_policies    []
policies             ["default" "hcp-root"]
Verify connectivity to Kubernetes.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/username/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 12 Jul 2023 10:04:18 EDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://127.0.0.1:63060
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Wed, 12 Jul 2023 10:04:18 EDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
...snip...
A local minikube cluster will be listed under contexts.
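To pull just the active context out of that output, a small filter over the kubeconfig text works; `current_ctx` is an illustrative helper, not a kubectl subcommand:

```shell
# current_ctx: read kubeconfig YAML on stdin and print the value of the
# top-level "current-context" key.
current_ctx() {
  awk '/^current-context:/ {print $2}'
}

# Usage, against the output shown above:
#   kubectl config view | current_ctx
```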
Next steps
Boundary, Vault, and Kubernetes have been deployed and are ready to be configured.
In the Connect to Kubernetes using Boundary configuration tutorial, you will configure Kubernetes, configure Vault for Kubernetes, and configure Boundary to broker credentials from Vault to the Kubernetes cluster.