# Core Cluster Installation
`vui-core` is the only component of the VUI project not released under the Apache 2.0 license. It is a proprietary, closed-source component, distributed under a custom End-User License Agreement (EULA) and available exclusively to active Seriohub sponsors at the required tier.

Please make sure to review the applicable licenses before continuing: see the VUI License Overview and the VUI-Core License Terms.
Starting from VUI version 0.3.0, the new `vui-core` component is available. It enables centralized management of multiple remote clusters from a single UI instance.

Read the announcement to learn more about the project.

This scenario sets up a centralized VUI environment using VUI-Core, designed for multi-cluster visibility, coordination, and control.
## Evaluating VUI-Core
The VUI-Core component is available for evaluation in demo mode for 14 days. This allows you to explore its centralized management features and multi-cluster capabilities before becoming a sponsor.
For any questions, clarifications, or additional information, feel free to get in touch or open an issue.
Before installing, we recommend reviewing the Multi-Cluster Architecture Blueprint to better understand how VUI-Core fits into a distributed setup.
## Managing NATS Users
In this initial release, user authentication for NATS is handled via static user credentials.
These must be created manually and stored in a Kubernetes secret.
- Create a temporary file called `users.conf` with the following content:

  ```conf
  users = [
    { user = "nats-Core-User", password = "nats-Core-Pwd" },
    { user = "nats-Agent-1-User", password = "nats-Agent-1-pwd" },
    { user = "nats-Agent-2-User", password = "nats-Agent-2-pwd" },
    { user = "nats-Agent-3-User", password = "nats-Agent-3-pwd" },
  ]
  ```
- Create the Kubernetes secret from this file:

  ```shell
  kubectl create secret generic vui-nats-user-auth \
    --from-file=users.conf=./tmp/users.conf \
    -n vui \
    --dry-run=client -o yaml | kubectl apply -f -
  ```
The NATS service includes a sidecar that watches this secret and automatically reloads its configuration. You do not need to restart any pods when updating cluster credentials.
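The passwords in the example above are placeholders; in practice you will likely want random credentials. The sketch below regenerates `users.conf` with a random password per user (the user names are taken from the example above, and `openssl` is assumed to be available):

```shell
# Regenerate ./tmp/users.conf with a random password for each NATS user.
# Assumes openssl is installed; adjust the user list to match your clusters.
mkdir -p ./tmp
{
  echo "users = ["
  for u in nats-Core-User nats-Agent-1-User nats-Agent-2-User nats-Agent-3-User; do
    pw=$(openssl rand -base64 18)   # 18 random bytes, base64-encoded
    echo "  { user = \"$u\", password = \"$pw\" },"
  done
  echo "]"
} > ./tmp/users.conf

cat ./tmp/users.conf
```

After updating the file, re-run the `kubectl create secret ... | kubectl apply` command from the previous step; thanks to the reloader sidecar, no pod restart is required.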
## Requirements
- Ingress or NodePort access to expose the UI and API
- A static IP for `natsService` (recommended for stable multi-cluster communication)
## Configuration
Use the predefined override file `core.yaml`. Minimal required configuration:
```yaml
global:
  veleroNamespace: <your-velero-namespace>
  clusterName: <core-cluster-name>
  core: true

apiService:
  secret:
    defaultAdminUsername: <admin>
    defaultAdminPassword: <password>
    natsUsername: <nats-Agent-1-User>
    natsPassword: <nats-Agent-1-Pwd>

coreService:
  secret:
    # clientKey: <client-key>
    defaultAdminUsername: <admin>
    defaultAdminPassword: <password>
    natsUsername: <nats-Core-User>
    natsPassword: <nats-Core-Pwd>

exposure:
  mode: ingress
  ingress:
    spec:
      tls:
        - hosts:
            - vui-core.yourdomain.com

natsService:
  loadBalancerIP: <ip>
```
Log in to the UI with the credentials defined in:

- Username: `coreService.secret.defaultAdminUsername`
- Password: `coreService.secret.defaultAdminPassword`
## Installation
```shell
helm repo add seriohub https://seriohub.github.io/velero-helm
helm repo update

helm install vui seriohub/vui \
  -n vui \
  --create-namespace \
  -f core.yaml
```
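After installation, it is worth confirming that the release deployed cleanly. A minimal sanity check, assuming `kubectl` and `helm` are available (the checks are skipped rather than failing when they are not):

```shell
# Post-install sanity check for the vui release (sketch; requires cluster access).
verified=skipped
if command -v kubectl >/dev/null 2>&1 && command -v helm >/dev/null 2>&1; then
  helm status vui -n vui        # release status should report "deployed"
  kubectl get pods -n vui       # all pods should reach Running/Ready
  verified=checked
fi
echo "verification: $verified"
```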
## Access
Once deployed, the UI will be accessible at `https://vui-core.yourdomain.com`.
Use the Core dashboard to manage and monitor multiple remote clusters using Agent installations.
## Additional Useful Override Files
The `velero-helm` repository includes other override files for alternative use cases:

- `core-ldap.yaml`: enables LDAP authentication
## Notes
- By default, NATS is configured to use the `nats` protocol (non-TLS).
- NATS supports TLS, but enabling it depends on the network and ingress configuration.
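Concretely, agents reach the Core's NATS service through a client URL whose scheme reflects the notes above (NATS's default client port is 4222; the credentials shown are the example ones from `users.conf`):

```text
# Default (non-TLS) client URL, using the agent credentials defined earlier:
nats://nats-Agent-1-User:nats-Agent-1-pwd@<loadBalancerIP>:4222

# If TLS is enabled on the NATS service, clients use the tls scheme instead:
tls://nats-Agent-1-User:nats-Agent-1-pwd@<loadBalancerIP>:4222
```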
## LoadBalancer IP Configuration
If your environment does not allow reserving a static LoadBalancer IP ahead of time, you can retrieve it after deployment and update the chart:
- Get the assigned IP:

  ```shell
  kubectl get svc -n vui -o jsonpath="{.items[?(@.spec.type=='LoadBalancer')].status.loadBalancer.ingress[0].ip}"
  ```
- Update the deployment with the resolved IP, then restart the affected components:

  ```shell
  helm upgrade vui seriohub/vui -n vui -f core.yaml --set natsService.loadBalancerIP=<resolved-ip>

  kubectl rollout restart deployment -l component=api -n vui
  kubectl rollout restart deployment -l component=core -n vui
  ```