Helm Chart Components

Helm chart components allow you to deploy any Helm chart from a public or connected repo. Please refer to the Helm reference for full configuration options.

Configuring a Helm component

To configure a Helm component, specify a repo, the required configuration values, and the version to deploy.

[[components]]
name = "helm_chart"
type = "helm_chart"
helm_chart_version = "1.6.3"
[components.connected_repo]
directory = "helm_chart"
repo = "<your-org>/<your-repo>"
branch = "main"

You can configure Helm components to use either a public repo (using a public_repo block) or a private GitHub repo (using a connected_repo block). Read more about VCS configuration here.

Open Source and Private Charts

Nuon supports any Helm chart that can be accessed using Git. It is common for apps to use a combination of public, open-source Helm charts for deploying standard components and private Helm charts for application-specific configuration.
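As a sketch, the two kinds of charts can live side by side in one app config. The chart names, repos, and directories below are placeholder assumptions, and the public_repo block's fields are assumed to mirror connected_repo:

```toml
# public, open-source chart from a public Git repo (placeholder names)
[[components]]
name = "redis"
type = "helm_chart"
chart_name = "redis"
[components.public_repo]
directory = "charts/redis"
repo = "<public-org>/<public-repo>"
branch = "main"

# private, application-specific chart from a connected GitHub repo
[[components]]
name = "app"
type = "helm_chart"
chart_name = "app"
[components.connected_repo]
directory = "components/app"
repo = "<your-org>/<your-repo>"
branch = "main"
```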

Helm Value Configuration

One of the most important parts of deploying a Helm chart into a customer account is setting values properly. Values can be used for everything from accessing infrastructure to setting images or scaling pods.

Variables allow you to set default values for a Nuon app, expose customer configuration options, reference infrastructure from other components or access an image that was synced into the customer account.

Examples of accessing common interpolated variables:

[[components]]
name = "helm_chart"
type = "helm_chart"
chart_name = "<your-app>"
[components.connected_repo]
directory = "components/helm-chart"
repo = "<your-org>/<your-repo>"
branch = "main"
[components.values]
# access a synced image
image_repository = "{{.nuon.components.image.image.repository.uri}}"
image_tag = "{{.nuon.components.image.image.tag}}"
# access outputs from a terraform component
output_value = "{{.nuon.components.terraform.outputs.output_value}}"
# access outputs from the sandbox
aws_region = "{{.nuon.install.sandbox.outputs.aws_region}}"
vpc_id = "{{.nuon.install.sandbox.outputs.vpc.id}}"
# access information about the install domain
public_root_domain = "{{.nuon.install.public_domain}}"
internal_root_domain = "{{.nuon.install.internal_domain}}"

Using a Helm Values File

You can also add a values file to set multiple values at once.

[[components]]
name = "helm_chart"
type = "helm_chart"
chart_name = "<your-app>"
[components.connected_repo]
directory = "components/helm-chart"
repo = "<your-org>/<your-repo>"
branch = "main"
[[components.values_file]]
contents = """
image:
  tag: "{{.nuon.components.docker_build.image.name}}"
"""

Default Sandbox Components and DNS

The aws-eks managed sandbox ships with standard Helm components and DNS zones that solve common use cases, such as exposing a public or private HTTPS service.

The sandboxes are open source and can be customized if these components do not work for your application.

Using Domains

The Nuon managed sandboxes automatically deploy the components required to provision DNS, certificate, and load balancer resources using Helm. There are multiple ways to use these components. Here are examples of the two most common.

AWS NLB with AWS ACM

If you want to leverage AWS services, you can provision an NLB that uses an ACM certificate.

Your chart would contain a Service resource, configured so that the AWS Load Balancer Controller creates an NLB.

---
apiVersion: v1
kind: Service
metadata:
  name: nlb-public
  namespace: {{ .Release.Namespace }}
  labels: {}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.nlbs.public_domain_certificate_arn }}
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.nlbs.public_domain }}
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Local
  internalTrafficPolicy: Local
  selector: {}
  ports:
    - name: https
      port: 443
      targetPort: http

You would parametrize the Service with the following values in your chart’s values.yaml file, to accept public and internal domains and an ACM certificate ARN.

nlbs:
  public_domain: nlb.INSTALL_PUBLIC_DOMAIN
  internal_domain: nlb.internal.INSTALL_INTERNAL_DOMAIN
  public_domain_certificate_arn: PUBLIC_DOMAIN_CERTIFICATE

Then, you could set those values per install with the following component config.

resource "nuon_helm_chart_component" "nlb" {
  name       = "nlb"
  app_id     = nuon_app.<your-app>.id
  chart_name = "nlb"

  connected_repo = {
    directory = "chart"
    repo      = "<your-org>/<your-repo>"
    branch    = "main"
  }

  value {
    name  = "nlbs.public_domain"
    value = "nlb.{{.nuon.install.public_domain}}"
  }

  value {
    name  = "nlbs.public_domain_certificate_arn"
    value = "{{.nuon.components.infra.outputs.public_domain_certificate_arn}}"
  }
}

Nginx Ingress with Cert Manager

If you would prefer to manage resources within Kubernetes instead, you could use the Nginx ingress and have the cert-manager operator provision a certificate.

In your chart, you would define the Ingress and Certificate resources.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-public
  namespace: {{ .Release.Namespace }}
  labels: {}
  annotations:
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.nginx.public_domain }}
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - {{ .Values.nginx.public_domain }}
      secretName: e2e-ingress-public-tls
  rules:
    - host: {{ .Values.nginx.public_domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-nginx
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "common.apiLabels" . | nindent 4 }}
spec:
  secretName: e2e-ingress-public-tls
  dnsNames:
    - {{ .Values.nginx.public_domain }}
  issuerRef:
    name: public-issuer
    kind: ClusterIssuer

Then, allow configuring the domain in the chart’s values.yaml file.

nginx:
  public_domain: nlb.INSTALL_PUBLIC_DOMAIN

You could then set the domain per install in your component config.

[[components]]
name = "nlb"
type = "helm_chart"
chart_name = "nlb"
[components.connected_repo]
directory = "components/nlb"
repo = "<your-org>/<your-repo>"
branch = "main"
[components.values]
"nginx.public_domain" = "nlb.{{.nuon.install.public_domain}}"