Expose Gateway and configure HTTPS¶
Learn how to make deployKF accessible to your users, and how to configure HTTPS/TLS.
Help Us Improve
This guide covers an incredibly broad topic with near limitless possible implementations. If you see anything incorrect or missing, please help us by raising an issue!
Expose the Gateway Service¶
deployKF uses Istio for networking. Clients access deployKF through the Pods of an Istio Gateway Deployment, via a Kubernetes Service named `deploykf-gateway`. This Service needs to be accessible from outside the cluster to allow users to access the deployKF dashboard and other tools.
Public Internet
The default Service type is `LoadBalancer`, which may expose your deployKF Gateway to the public internet (depending on how your Kubernetes cluster is configured).
You should seriously consider the security implications of exposing the deployKF Gateway to the public internet. Most organizations choose to expose the gateway on a private network, and then use a VPN or other secure connection to access it.
You can expose the `deploykf-gateway` Service in a few different ways, depending on your platform and requirements:
Use kubectl port-forward¶
If you are just testing the platform, you may use `kubectl port-forward` to access the Service from your local machine.
Step 1 - Modify Hosts
You can't access deployKF using `localhost`, `127.0.0.1`, or any other IP address. Without the correct HTTP `Host` header, deployKF won't know which service you are trying to access. You must update your hosts file to resolve `deploykf.example.com` and its subdomains to `127.0.0.1`.
Edit the hosts file on your local machine (where you run your web browser), NOT the Kubernetes cluster itself.
On Linux and macOS, the `/etc/hosts` file can ONLY be edited by a user with root privileges.
Run the following command to open the hosts file in a text editor:
sudo nano /etc/hosts
# OR: sudo vim /etc/hosts
Add the following lines to the END of your `/etc/hosts` file:
127.0.0.1 deploykf.example.com
127.0.0.1 argo-server.deploykf.example.com
127.0.0.1 minio-api.deploykf.example.com
127.0.0.1 minio-console.deploykf.example.com
If you have configured a custom domain, replace `deploykf.example.com` with your custom domain.
On Windows, the hosts file can ONLY be edited by the Administrator user.
Run this PowerShell command to start an Administrator Notepad:
Start-Process notepad.exe -ArgumentList "C:\Windows\System32\drivers\etc\hosts" -Verb RunAs
Add the following lines to the END of your `C:\Windows\System32\drivers\etc\hosts` file:
127.0.0.1 deploykf.example.com
127.0.0.1 argo-server.deploykf.example.com
127.0.0.1 minio-api.deploykf.example.com
127.0.0.1 minio-console.deploykf.example.com
If you have configured a custom domain, replace `deploykf.example.com` with your custom domain.
Step 2 - Port-Forward the Gateway
The `kubectl port-forward` command creates a private tunnel to the Kubernetes cluster. Run the following command on your local machine to expose the `deploykf-gateway` Service on `127.0.0.1`:
kubectl port-forward \
    --namespace "deploykf-istio-gateway" \
    svc/deploykf-gateway 8080:http 8443:https
If the platform suddenly stops responding in your browser, press `CTRL+C` to stop the port-forward, and then run the command again (kubernetes/kubernetes#74551).
The deployKF dashboard should now be available on your local machine at:
https://deploykf.example.com:8443/
Remember that you can NOT access deployKF using `localhost` or `127.0.0.1`!
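To quickly check that the port-forward and your hosts file entries are working, you might run a command like the following from your local machine (a sketch, assuming the default `deploykf.example.com` base domain; `-k` is needed because the default gateway certificate is self-signed):

```bash
# send a request to the dashboard through the port-forward
# -k skips certificate validation (the default gateway certificate is self-signed)
curl -k -I https://deploykf.example.com:8443/
```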
Use a LoadBalancer Service¶
Most Kubernetes platforms provide a LoadBalancer-type Service that can be used to expose Pods on a private or public IP address. How you configure a LoadBalancer Service will depend on the platform you are using, for example:
AWS - Network Load Balancer
The AWS Load Balancer Controller is commonly used to configure LoadBalancer services on EKS.
Tip
AWS EKS does NOT have the AWS Load Balancer Controller installed by default.
Follow the official instructions to install the controller.
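For reference, a minimal installation with Helm might look something like the following sketch. It assumes you have already created the IAM role and the `aws-load-balancer-controller` ServiceAccount described in the official instructions, and `MY_CLUSTER_NAME` is a placeholder:

```bash
# add the official eks-charts Helm repository
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# install the controller into the 'kube-system' namespace
# (assumes the 'aws-load-balancer-controller' ServiceAccount already exists with IRSA configured)
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=MY_CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```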
For example, you might set the following values to use a Network Load Balancer (NLB):
deploykf_core:
deploykf_istio_gateway:
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "LoadBalancer"
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "external"
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
## for external-dns integration (if not `--source=istio-gateway` config)
#external-dns.alpha.kubernetes.io/hostname: "deploykf.example.com, *.deploykf.example.com"
## for static private IP addresses
#service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "192.168.XXX.XXX, 192.168.YYY.YYY"
#service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-XXX, subnet-YYY"
## for static public IP addresses
#service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXX, eipalloc-YYY"
#service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-XXX, subnet-YYY"
## the ports the gateway Service listens on
## - defaults to the corresponding port under `gateway.ports`
## - these are the "public" ports which clients will connect to
## (they impact the user-facing HTTP links)
##
ports:
http: 80
https: 443
Google Cloud - Network Load Balancer
GKE has a LoadBalancer Service type, which is configured with annotations like `networking.gke.io/load-balancer-type`.
For example, you might set the following values to use an INTERNAL Passthrough Network Load Balancer:
deploykf_core:
deploykf_istio_gateway:
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "LoadBalancer"
annotations:
networking.gke.io/load-balancer-type: "Internal"
## for external-dns integration (if not `--source=istio-gateway` config)
#external-dns.alpha.kubernetes.io/hostname: "deploykf.example.com, *.deploykf.example.com"
## for static IP addresses
#loadBalancerIP: "192.168.XXX.XXX"
#loadBalancerSourceRanges: ["192.168.XXX.XXX/32"]
## the ports the gateway Service listens on
## - defaults to the corresponding port under `gateway.ports`
## - these are the "public" ports which clients will connect to
## (they impact the user-facing HTTP links)
##
ports:
http: 80
https: 443
MetalLB
MetalLB is a popular LoadBalancer implementation for bare-metal Kubernetes clusters.
For example, you might set the following values to use MetalLB:
deploykf_core:
deploykf_istio_gateway:
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "LoadBalancer"
annotations:
## for static IP addresses (specific IP)
metallb.universe.tf/loadBalancerIPs: 192.168.XXX.XXX
## for static IP addresses (from IP pool)
#metallb.universe.tf/address-pool: my-pool-XXX
## for external-dns integration (if not `--source=istio-gateway` config)
#external-dns.alpha.kubernetes.io/hostname: "deploykf.example.com, *.deploykf.example.com"
## the ports the gateway Service listens on
## - defaults to the corresponding port under `gateway.ports`
## - these are the "public" ports which clients will connect to
## (they impact the user-facing HTTP links)
##
ports:
http: 80
https: 443
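If you use the `metallb.universe.tf/address-pool` annotation, MetalLB must already have a matching address pool defined. A minimal sketch of such a pool (assuming MetalLB v0.13+ with its CRD-based configuration; the pool name and address range are placeholders):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-pool-XXX
  namespace: metallb-system
spec:
  ## the range of addresses MetalLB may assign to LoadBalancer Services
  addresses:
    - 192.168.XXX.XXX-192.168.YYY.YYY
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-pool-XXX-l2
  namespace: metallb-system
spec:
  ## announce addresses from the pool above on the local network (L2 mode)
  ipAddressPools:
    - my-pool-XXX
```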
Use a Kubernetes Ingress¶
Most Kubernetes platforms provide an Ingress class that can expose cluster services behind an application-layer load balancer. How you configure an Ingress will depend on the platform you are using, for example:
AWS - Application Load Balancer
The AWS Load Balancer Controller is commonly used to configure Ingress resources on EKS.
Tip
AWS EKS does NOT have the AWS Load Balancer Controller installed by default.
Follow the official instructions to install the controller.
Because ALB does NOT support TLS-passthrough, you must manually create an AWS Certificate Manager (ACM) wildcard certificate for your domain. The `alb.ingress.kubernetes.io/certificate-arn` Ingress annotation will be used to select the certificate and allow the Ingress to terminate TLS before forwarding to the Gateway Service.

| Hostname | Certificate Field |
| --- | --- |
| `*.deploykf.example.com` | CN, SAN |
| `deploykf.example.com` | SAN |
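For reference, you might request such a certificate with the AWS CLI like this (a sketch, assuming DNS validation and the default `deploykf.example.com` base domain):

```bash
# request a wildcard certificate for the deployKF base domain
# (replace 'deploykf.example.com' with your base domain)
aws acm request-certificate \
  --domain-name "*.deploykf.example.com" \
  --subject-alternative-names "deploykf.example.com" \
  --validation-method DNS
```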
For example, you might set the following values to use an Application Load Balancer (ALB):
deploykf_core:
deploykf_istio_gateway:
## this value adds arbitrary manifests to the generated output
##
extraManifests:
- |
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: deploykf-gateway
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTPS
## the 'deploykf-gateway' service has a named "status-port" pointing to Istio's 15021 health port
## see: https://istio.io/latest/docs/ops/deployment/requirements/#ports-used-by-istio
alb.ingress.kubernetes.io/healthcheck-port: "status-port"
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-path: "/healthz/ready"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/certificate-arn: |
arn:aws:acm:REGION_NAME:ACCOUNT_ID:certificate/CERTIFICATE_ID
spec:
ingressClassName: alb
rules:
- host: "deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
- host: "*.deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
## these values configure the deployKF Istio Gateway
##
gateway:
## the "base domain" for deployKF
## - this domain MUST align with your Ingress hostnames
## - this domain and its subdomains MUST be dedicated to deployKF
##
hostname: deploykf.example.com
## the ports that gateway Pods listen on
## - for an Ingress, these MUST be the standard 80/443
## - note, defaults from 'sample-values.yaml' are 8080/8443
##
ports:
http: 80
https: 443
## these values configure TLS
##
tls:
## ALB does NOT forward the SNI after TLS termination,
## so we must disable SNI matching in the gateway
matchSNI: false
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
## WARNING: must be "NodePort" if "alb.ingress.kubernetes.io/target-type" is "instance"
type: "ClusterIP"
annotations: {}
Google Cloud - Application Load Balancer
GKE has an Ingress class that can be used to configure Ingress resources for external or internal access.

In the following example, we are configuring the GKE Ingress to use the same TLS certificate as the deployKF Gateway Service (found in `Secret/deploykf-istio-gateway-cert`). Later in this guide, you will learn how to make this certificate valid rather than self-signed.
Warning
Google Managed Certificates are only supported by EXTERNAL Application Load Balancers (ALB). Because using an EXTERNAL ALB would expose deployKF to the public internet, we instead strongly recommend configuring cert-manager to generate a valid certificate.
For example, you might set the following values to use an INTERNAL Application Load Balancer:
deploykf_core:
deploykf_istio_gateway:
## this value adds arbitrary manifests to the generated output
##
extraManifests:
- |
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: deploykf-gateway
annotations:
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
## NOTE: this secret is created as part of the deployKF installation
- secretName: "deploykf-istio-gateway-cert"
rules:
- host: "deploykf.example.com"
http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: "deploykf-gateway"
port:
name: https
- host: "*.deploykf.example.com"
http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: "deploykf-gateway"
port:
name: https
## these values configure the deployKF Istio Gateway
##
gateway:
## the "base domain" for deployKF
## - this domain MUST align with your Ingress hostnames
## - this domain and its subdomains MUST be dedicated to deployKF
##
hostname: deploykf.example.com
## the ports that gateway Pods listen on
## - for an Ingress, these MUST be the standard 80/443
## - note, defaults from 'sample-values.yaml' are 8080/8443
##
ports:
http: 80
https: 443
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "NodePort"
annotations:
cloud.google.com/app-protocols: '{"https":"HTTPS","http":"HTTP"}'
## this annotation may be required if you are using a Shared VPC
## https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#shared_vpc
#cloud.google.com/neg: '{"ingress": true}'
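Once the Ingress has been reconciled, you can check that it has been assigned an address (a sketch, assuming the Ingress is created in the `deploykf-istio-gateway` Namespace; the address may take a few minutes to appear):

```bash
kubectl get ingress deploykf-gateway --namespace deploykf-istio-gateway
```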
Nginx - Kubernetes Community
Many clusters are configured with the Nginx Ingress Controller made by the Kubernetes community.
Warning
There are two independent Nginx Ingress Controller projects, each with their own configuration options. We have guides for both, so make sure you are using the correct one:

- `kubernetes/ingress-nginx` - made by the Kubernetes community
- `nginxinc/kubernetes-ingress` - made by NGINX, Inc.
In the following example, we are configuring the Nginx Ingress to use the same TLS certificate as the deployKF Gateway Service (found in `Secret/deploykf-istio-gateway-cert`). Later in this guide, you will learn how to make this certificate valid rather than self-signed.
For example, you might set the following values:
deploykf_core:
deploykf_istio_gateway:
## this value adds arbitrary manifests to the generated output
##
extraManifests:
- |
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: deploykf-gateway
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
            ## nginx will NOT proxy the SNI by default
## see: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-certificate-authentication
nginx.ingress.kubernetes.io/proxy-ssl-name: "deploykf.example.com"
nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
## this config is needed due to a bug in ingress-nginx
## see: https://github.com/kubernetes/ingress-nginx/issues/6728
nginx.ingress.kubernetes.io/proxy-ssl-secret: "deploykf-istio-gateway/deploykf-istio-gateway-cert"
spec:
ingressClassName: nginx
tls:
## NOTE: this secret is created as part of the deployKF installation
- secretName: "deploykf-istio-gateway-cert"
rules:
- host: "deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
- host: "*.deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
## these values configure the deployKF Istio Gateway
##
gateway:
## the "base domain" for deployKF
## - this domain MUST align with your Ingress hostnames
## - this domain and its subdomains MUST be dedicated to deployKF
##
hostname: deploykf.example.com
## the ports that gateway Pods listen on
## - for an Ingress, these MUST be the standard 80/443
## - note, defaults from 'sample-values.yaml' are 8080/8443
##
ports:
http: 80
https: 443
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "ClusterIP"
annotations: {}
Nginx - NGINX, Inc.
You may be using the Nginx Ingress Controller made by NGINX, Inc.
Warning
There are two independent Nginx Ingress Controller projects, each with their own configuration options. We have guides for both, so make sure you are using the correct one:

- `kubernetes/ingress-nginx` - made by the Kubernetes community
- `nginxinc/kubernetes-ingress` - made by NGINX, Inc.
In the following example, we are configuring the Nginx Ingress to use the same TLS certificate as the deployKF Gateway Service (found in `Secret/deploykf-istio-gateway-cert`). Later in this guide, you will learn how to make this certificate valid rather than self-signed.
For example, you might set the following values:
deploykf_core:
deploykf_istio_gateway:
## this value adds arbitrary manifests to the generated output
##
extraManifests:
- |
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: deploykf-gateway
annotations:
            ## this annotation must be set to the name of the Service
## it tells Nginx to talk to the Service over HTTPS
nginx.org/ssl-services: "deploykf-gateway"
spec:
ingressClassName: nginx
tls:
## NOTE: this secret is created as part of the deployKF installation
- secretName: "deploykf-istio-gateway-cert"
rules:
- host: "deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
- host: "*.deploykf.example.com"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: "deploykf-gateway"
port:
name: https
## these values configure the deployKF Istio Gateway
##
gateway:
## the "base domain" for deployKF
## - this domain MUST align with your Ingress hostnames
## - this domain and its subdomains MUST be dedicated to deployKF
##
hostname: deploykf.example.com
## the ports that gateway Pods listen on
## - for an Ingress, these MUST be the standard 80/443
## - note, defaults from 'sample-values.yaml' are 8080/8443
##
ports:
http: 80
https: 443
## these values configure TLS
##
tls:
## nginx does NOT forward the SNI after TLS termination,
## so we must disable SNI matching in the gateway
matchSNI: false
## these values configure the deployKF Gateway Service
##
gatewayService:
name: "deploykf-gateway"
type: "ClusterIP"
annotations: {}
There are a few important things to note when using an Ingress:
Considerations when terminating TLS at the Ingress
If you put the deployKF Gateway behind a proxy which terminates TLS (like AWS ALB), you will probably need to disable SNI Matching. This is because most proxies don't forward the original request's Server Name Indication (SNI) to the backend service after TLS termination.
To disable SNI Matching, set `deploykf_core.deploykf_istio_gateway.gateway.tls.matchSNI` to `false`:
deploykf_core:
deploykf_istio_gateway:
gateway:
tls:
matchSNI: false
Ingress must talk to the Gateway over HTTPS
By default, the deployKF Gateway redirects all HTTP requests to HTTPS. This means any proxy you place in front of the gateway will need to talk to the gateway over HTTPS.
By default, the deployKF Istio Gateway uses a self-signed certificate. To make your proxy trust this certificate, you will probably need to do ONE of the following:
- Configure a valid certificate for the gateway
- Trust the certificate in `Secret/deploykf-istio-gateway-cert` (Namespace: `deploykf-istio-gateway`)
- Trust the CA found in `Secret/selfsigned-ca-issuer-root-cert` (Namespace: `cert-manager`)
- Disable backend certificate validation in your proxy
If your proxy is simply unable to use HTTPS backends, and you don't require end-to-end encryption, you may disable the automatic redirection by setting `deploykf_core.deploykf_istio_gateway.gateway.tls.redirect` to `false`:
deploykf_core:
deploykf_istio_gateway:
gateway:
tls:
redirect: false
Configure DNS Records¶
Now that the deployKF Gateway Service has an IP address, you must configure DNS records which point to it.
You can't use the Gateway's IP address
You can't access deployKF using the Gateway's IP address. This is because deployKF hosts multiple services on the same IP address using virtual hostname routing.
Base Domain and Ports ¶
You will need to tell deployKF which hostnames to use, and which ports to listen on.
The following values set the base domain to `deploykf.example.com` (the default), and the ports to `80` and `443` (not the default):
deploykf_core:
deploykf_istio_gateway:
gateway:
## the "base domain" for deployKF
## - this domain and its subdomains MUST be dedicated to deployKF
##
hostname: deploykf.example.com
## the ports that gateway Pods listen on
## - 80/443 are the defaults, but if you are using 'sample-values.yaml'
## as a base, the defaults are 8080/8443, so you will need to
## override them to use the standard ports
##
ports:
http: 80
https: 443
#gatewayService:
## the ports the gateway Service listens on
## - defaults to the corresponding port under `gateway.ports`
## - these are the "public" ports which clients will connect to
## (they impact the user-facing HTTP links)
##
#ports:
# http: ~
# https: ~
Depending on which tools you have enabled, the gateway may serve the following hostnames:
| Hostname | Description |
| --- | --- |
| `deploykf.example.com` | Base Domain (dashboard and other apps) |
| `argo-server.deploykf.example.com` | Argo Server |
| `minio-api.deploykf.example.com` | MinIO API |
| `minio-console.deploykf.example.com` | MinIO Console |
Use External-DNS ¶
External-DNS is a Kubernetes controller that automatically configures DNS records for Kubernetes resources. The following steps explain how to install and configure External-DNS to set DNS records for the deployKF Gateway Service.
Step 1 - Install External-DNS
The External-DNS documentation provides instructions for installing External-DNS on various platforms.
The following table links to the documentation for some popular DNS providers:
| Cloud Platform | DNS Provider Documentation |
| --- | --- |
| Amazon Web Services | Route53 |
| Google Cloud | Cloud DNS |
| Microsoft Azure | Azure DNS, Azure Private DNS |
| Any | Cloudflare, Akamai Edge DNS |
Deletion of DNS Records
Unless the `--policy=upsert-only` argument is used, External-DNS will delete DNS records when a resource is deleted (or changed in a way that would affect the records). Records take time to propagate, so you may experience downtime if you delete resources and then recreate them.
Step 2 - Configure External-DNS
There are a few ways to configure External-DNS so that it sets DNS records for the deployKF Gateway Service.
Option 1 - Istio Gateway Source:
You may configure External-DNS to extract hostnames from Istio `Gateway` resources. If you do this, a separate DNS record is created for each domain selected by the deployKF Istio `Gateway`.
To connect External-DNS with Istio, you will need to (see the sketch below):

- Update your `Deployment/external-dns` to include the `--source=istio-gateway` start argument
- Update your `ClusterRole/external-dns` to allow access to Istio `Gateway` and `VirtualService` resources
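A minimal sketch of these two changes is shown below. It assumes a typical External-DNS installation; your Deployment and ClusterRole names, and your other arguments, may differ:

```yaml
## additional container argument for Deployment/external-dns
## (alongside your existing --provider, --domain-filter, etc.)
args:
  - --source=istio-gateway

## additional rules for ClusterRole/external-dns
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["gateways", "virtualservices"]
    verbs: ["get", "watch", "list"]
```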
Option 2 - Ingress Source:
You may configure External-DNS to automatically extract the domain names from Kubernetes `Ingress` resources. If you do this, a separate DNS record is created for each hostname in the Ingress.
To connect External-DNS with Ingress, you will need to:

- Update your `Deployment/external-dns` to include the `--source=ingress` start argument
- Update your `ClusterRole/external-dns` to allow access to Kubernetes `Ingress` resources
Option 3 - Manual Annotation:
You can manually configure External-DNS by annotating the `Service` or `Ingress` resource with the `external-dns.alpha.kubernetes.io/hostname` annotation. Multiple hostnames can be specified in a single annotation using a comma-separated list.
The annotation can be set in one of the following ways:

- Service: setting the `deploykf_core.deploykf_istio_gateway.gatewayService.annotations` value (see the example below)
- Ingress: manually annotating your Ingress resource
See the manually create DNS records section for information about which records to create.
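For example, to set the annotation on the Service through deployKF values (a sketch, assuming the default `deploykf.example.com` base domain):

```yaml
deploykf_core:
  deploykf_istio_gateway:
    gatewayService:
      annotations:
        ## tells External-DNS to create records for the base domain and all subdomains
        external-dns.alpha.kubernetes.io/hostname: "deploykf.example.com, *.deploykf.example.com"
```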
Manually Create DNS Records¶
Alternatively, you may manually configure DNS records with your DNS provider. The following steps explain how to manually create DNS records for the deployKF Gateway Service.
Step 1 - Get Service IP
You will need to find the IP address of the deployKF Gateway Service. This can be done by running the following command:
kubectl get service deploykf-gateway --namespace deploykf-istio-gateway
The `EXTERNAL-IP` field will contain the IP address of the deployKF Gateway Service.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deploykf-gateway LoadBalancer 10.43.24.148 172.23.0.2 15021:30XXXX/TCP,80:30XXXX/TCP,443:30XXXX/TCP 1d
Step 2 - Configure DNS Records
You can now configure DNS records with your DNS provider that target the IP address of the deployKF Gateway Service.
You need to create records for BOTH the base domain AND subdomains. You can avoid the need to specify each subdomain by using a wildcard DNS record, but you will still need to specify the base domain.
For example, you might set the following DNS records:
| Type | Name | Value |
| --- | --- | --- |
| A | `*.deploykf.example.com` | IP Address of the deployKF Gateway Service |
| A | `deploykf.example.com` | IP Address of the deployKF Gateway Service |
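After creating the records, you can confirm they resolve to the Gateway's IP address, for example with `dig` (`nslookup` works similarly):

```bash
# both commands should print the IP address of the deployKF Gateway Service
dig +short deploykf.example.com
dig +short argo-server.deploykf.example.com
```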
Propagation Time
DNS records can take time to propagate, so you may experience downtime if you delete records and then recreate them.
Configure TLS Certificates¶
deployKF uses cert-manager to manage TLS certificates.
Existing cert-manager Installation
If your cluster already has a cert-manager installation, you should follow these instructions to disable the deployKF cert-manager installation and use your own.
Use Let's Encrypt with Cert-Manager ¶
By default, the deployKF Gateway will use a self-signed certificate. Therefore, if you are not using an external proxy to terminate TLS, you will likely want to configure a valid TLS certificate for the gateway itself.
For almost everyone, the best Certificate Authority (CA) is Let's Encrypt. The following steps show how to use Let's Encrypt to generate valid TLS certificates for the Gateway.
Step 1 - Connect Cert-Manager to DNS Provider
Because deployKF uses a wildcard `Certificate`, you MUST use the `DNS-01` challenge to verify domain ownership (rather than `HTTP-01`). This requires you to configure cert-manager so that it is able to create DNS records.

The cert-manager documentation provides instructions for configuring `DNS-01` challenges for various DNS providers. The following table links to the documentation for some popular DNS providers:
| Cloud Platform | DNS Provider Documentation |
| --- | --- |
| Amazon Web Services | Route53 |
| Google Cloud | Cloud DNS |
| Microsoft Azure | Azure DNS |
| Any | Cloudflare, Akamai Edge DNS |
ServiceAccount Annotations
To use Pod-based authentication with your DNS Provider (for example, to use IRSA on EKS), you may need to annotate the cert-manager ServiceAccount.
Custom ServiceAccount annotations may be applied to the embedded cert-manager with the `deploykf_dependencies.cert_manager.controller.serviceAccount.annotations` value:
deploykf_dependencies:
cert_manager:
controller:
## EXAMPLE: for Azure AD Workload Identity
#podLabels:
# azure.workload.identity/use: "true"
serviceAccount:
annotations:
## EXAMPLE: for AWS IRSA
#eks.amazonaws.com/role-arn: "arn:aws:iam::MY_ACCOUNT_ID:role/MY_ROLE_NAME"
## EXAMPLE: for GCP Workload Identity
        #iam.gke.io/gcp-service-account: "GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
## EXAMPLE: for Azure AD Workload Identity
#azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
#azure.workload.identity/tenant-id: "00000000-0000-0000-0000-000000000000"
Step 2 - Create a ClusterIssuer
Once cert-manager is connected to your DNS provider, you must create a `ClusterIssuer` resource that can generate certificates for your domain from Let's Encrypt.
For example, you may create a `ClusterIssuer` resource like this when using Google Cloud DNS (note, this example uses the Let's Encrypt staging server; for browser-trusted certificates, use the production server at `https://acme-v02.api.letsencrypt.org/directory`):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: my-cluster-issuer
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: user@example.com
privateKeySecretRef:
name: letsencrypt-staging
key: tls.key
solvers:
- dns01:
cloudDNS:
project: my-project-id
serviceAccountSecretRef:
name: my-service-account-secret
key: service-account.json
selector:
dnsNames:
- "*.deploykf.example.com"
- "deploykf.example.com"
Issuer Kind
Most cert-manager examples show an `Issuer` resource. Note that any issuer may be converted to its equivalent cluster version by changing the `kind` field from `"Issuer"` to `"ClusterIssuer"` and removing the `metadata.namespace` field.
Step 3 - Configure the Istio Gateway
Once you have a `ClusterIssuer` resource that can generate certificates for your domain, you must configure the deployKF Istio Gateway to use it. This is done with the `deploykf_dependencies.cert_manager.clusterIssuer` values.

For example, if you created a `ClusterIssuer` named `my-cluster-issuer`, you would set the following values:
deploykf_dependencies:
cert_manager:
clusterIssuer:
      ## setting 'enabled: false' tells deployKF that you will provide your own ClusterIssuer
enabled: false
## this value should match the name of your ClusterIssuer
issuerName: "my-cluster-issuer"
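After applying these values, you can check that cert-manager has issued the gateway certificate (a sketch, assuming the gateway `Certificate` lives in the `deploykf-istio-gateway` Namespace; the `READY` column should eventually become `True`):

```bash
kubectl get certificates --namespace deploykf-istio-gateway
```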
Use a Custom Certificate¶
If you already have a valid TLS certificate for your domain, and don't want to configure cert-manager, you will need to use an Ingress to terminate TLS before the Gateway using your certificate.
Please note, not all Ingress controllers support reading a Kubernetes Secret for TLS termination, so this process may vary depending on your platform.
Step 1 - Get your TLS Certificate
Create a wildcard certificate with the following fields, replacing `deploykf.example.com` with the base domain you have configured.

| Field | Value |
| --- | --- |
| Common Name (CN) | `*.deploykf.example.com` |
| Subject Alternative Name (SAN) | DNS Name: `*.deploykf.example.com`, DNS Name: `deploykf.example.com` |
You will need the `.crt` and `.key` files for your TLS certificate.

The `.crt` file should contain the public certificate in PEM format; it should look something like this:
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
The `.key` file should contain the private key in PEM format; it should look something like this:
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
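You can verify that your certificate has the expected CN and SAN fields with `openssl` (a sketch; the file path is a placeholder):

```bash
# print the subject and validity dates of the certificate
openssl x509 -in /path/to/tls.crt -noout -subject -dates

# print the SAN entries (requires OpenSSL 1.1.1 or newer)
openssl x509 -in /path/to/tls.crt -noout -ext subjectAltName
```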
Step 2 - Create Kubernetes Secret
First, you will need to create a Kubernetes Secret containing your TLS certificate and key.
The following command will create a Secret named `my-tls-secret` in the `deploykf-istio-gateway` Namespace:
kubectl create secret tls "my-tls-secret" \
--cert /path/to/tls.crt \
--key /path/to/tls.key \
--namespace "deploykf-istio-gateway"
Step 3 - Configure the Ingress
Next, you will need to configure an Ingress resource to use the `my-tls-secret` Secret for TLS termination. You may start from one of the above examples and update the `tls` field to use your Secret.
For example, your updated Ingress resource will probably look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: deploykf-gateway
namespace: deploykf-istio-gateway
annotations:
...
...
spec:
ingressClassName: XXXXXX
tls:
## this tells the Ingress to use `my-tls-secret` for TLS termination
- secretName: "my-tls-secret"
## NOTE: some Ingress controllers allow multiple certificates to be specified
## for different hostnames
#- secretName: "other-tls-secret"
# hosts:
# - "*.deploykf.example.com"
# - "deploykf.example.com"
rules:
...
...
Terminate TLS before the Gateway¶
It is common to terminate TLS at a proxy in front of the deployKF Gateway. For example, you might be using an Ingress to expose the deployKF Gateway Service (like AWS ALB), or have a proxy like Cloudflare in front of your cluster.
In both of these cases, it may be unnecessary to configure a valid TLS certificate for the deployKF Gateway, because off-cluster clients will see the certificate of the proxy, not the Gateway itself.
In-Mesh Traffic to Gateway
When Pods inside the Istio mesh make requests to the gateway hostname/ports, this traffic bypasses your public LoadBalancer/Ingress and goes directly to the Gateway Deployment Pods (through the mesh).
Therefore, even if your Ingress has its own valid TLS termination (e.g. from AWS ALB), in-mesh Pods will see the certificate of the Istio Gateway itself (which by default is self-signed).
Why does this happen?
Traffic from in-mesh Pods gets intercepted by the Istio sidecar because of this `ServiceEntry`, and because we enable Istio's DNS Proxying feature by setting `ISTIO_META_DNS_CAPTURE` and `ISTIO_META_DNS_AUTO_ALLOCATE` to `true`.
How can I prevent these TLS errors?
All core deployKF apps are configured to trust the default self-signed certificate (e.g. oauth2-proxy). However, your own in-mesh apps will need to do ONE of the following (unless you use a valid certificate):
- Disable Istio DNS Proxying on your app's Pods:
    - Set the `proxy.istio.io/config` Pod annotation to `{"proxyMetadata": {"ISTIO_META_DNS_CAPTURE": "false", "ISTIO_META_DNS_AUTO_ALLOCATE": "false"}}` (see the sketch below)
- Disable certificate validation in your app:
    - See your app's documentation for information on how to do this.
- Trust the CA found in `Secret/selfsigned-ca-issuer-root-cert` (Namespace: `cert-manager`):
    - See your app's documentation for information on how to do this.
    - Note, we create a trust-manager `Bundle` for this CA by default; all Namespaces with the label `deploykf.github.io/inject-root-ca-cert: "enabled"` will have a `ConfigMap` named `deploykf-gateway-issuer-root-ca-cert` with a key named `root-cert.pem` containing the CA certificate.
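For the first option (disabling Istio DNS Proxying), a minimal sketch of the Pod annotation is shown below, applied to a Deployment's Pod template; the `my-app` name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        ## disable Istio DNS Proxying for this Pod only
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "false"
            ISTIO_META_DNS_AUTO_ALLOCATE: "false"
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```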
Created: 2023-08-16