Creating Clusters on Huawei Cloud Stack
This document provides comprehensive instructions for creating Kubernetes clusters on the Huawei Cloud Stack platform using Cluster API.
TOC
- Prerequisites
  - 1. Required Plugin Installation
  - 2. HCS Infrastructure Input Preparation
- Cluster Creation Overview
- Control Plane Configuration
  - Configure HCS Authentication
  - Configure Machine Configuration Pool
  - Configure Machine Template
  - Configure KubeadmControlPlane
  - Configure HCSCluster
  - Configure Cluster
- Cluster Verification
  - Using kubectl
  - Expected Results
- Adding Worker Nodes
- Upgrading Clusters
Prerequisites
Before creating clusters, ensure all of the following prerequisites are met:
1. Required Plugin Installation
Install the following plugins on the platform's global cluster:
- Alauda Container Platform Kubeadm Provider
- Alauda Container Platform HCS Infrastructure Provider
For detailed installation instructions, refer to the Installation Guide.
2. HCS Infrastructure Input Preparation
Prepare all HCS-specific inputs before writing any YAML in this document:
- HCS credential Secret values
- Provider-recognized compute values such as imageName, flavorName, and availabilityZone
- Cluster network inventory, including the subnets and free IP ranges used by the cluster
- Control plane ELB address planning, including vipAddress, vipSubnetName, and optional fixed L4 and L7 IPs
- Static IP pool planning for control plane and worker nodes
See Infrastructure Resources for Huawei Cloud Stack for the complete checklist, source information, and constraints.
Cluster Creation Overview
At a high level, you will create the following Cluster API resources in the platform's global cluster to provision infrastructure and bootstrap a functional Kubernetes cluster.
Before you write any YAML in this page, complete the preparation checklist in Infrastructure Resources for Huawei Cloud Stack. This checklist covers the values that the provider expects, where to get them, and which values must be planned before you fill the manifests.
Important Namespace Requirement
To ensure proper integration with the platform as business clusters, all resources must be deployed in the cpaas-system namespace. Deploying resources in other namespaces may result in integration issues.
The cluster creation process follows this order:
- Configure HCS authentication (Secret)
- Create machine configuration pool (HCSMachineConfigPool)
- Configure machine template (HCSMachineTemplate)
- Configure KubeadmControlPlane
- Configure HCSCluster
- Create the Cluster
Control Plane Configuration
The control plane manages cluster state, scheduling, and the Kubernetes API. This section shows how to configure a highly available control plane.
Configuration Parameter Guidelines
When configuring resources, exercise caution with parameter modifications:
- Replace only values enclosed in <> with your environment-specific values
- Preserve all other parameters as they represent optimized or required configurations
- Modifying non-placeholder parameters may result in cluster instability or integration issues
Configure HCS Authentication
HCS authentication information is stored in a Secret resource.
You can reuse an existing HCS credential Secret. Its name does not need to match the cluster name, but HCSCluster.spec.identityRef.name must reference this Secret.
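As an illustrative sketch only: the exact data keys depend on the credential format your HCS provider version expects, so the key names below (username, password, authURL, projectID) are assumptions to be checked against your provider documentation.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hcs-credentials        # referenced by HCSCluster.spec.identityRef.name
  namespace: cpaas-system      # required namespace for platform integration
type: Opaque
stringData:
  # Placeholder keys; replace with the key names your provider version expects.
  username: <hcs-username>
  password: <hcs-password>
  authURL: <hcs-iam-endpoint>
  projectID: <hcs-project-id>
```

The Secret name does not need to match the cluster name; only the identityRef reference matters.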
Configure Machine Configuration Pool
The HCSMachineConfigPool defines pre-configured hostnames and static IP addresses for VMs.
Pool Size Requirement
The configuration pool must include at least as many entries as the number of control plane nodes you plan to deploy.
Use one subnet selector per networks[] entry. For new manifests, set either subnetName or subnetId, but not both. Existing manifests may keep the deprecated subenetName field; if you also add subnetName while updating that manifest, its value must exactly match subenetName. Do not supply conflicting values across subenetName, subnetName, and subnetId.
If you use subnetName in the machine configuration pool, include the same subnet name in HCSCluster.spec.network.subnets.
Note: The CRD schema lists subnetName, subenetName, and subnetId as optional fields and does not express their allowed combinations. Follow the provider-level rules above when writing manifests.
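A hedged sketch of the pool, assuming an infrastructure.cluster.x-k8s.io group and an entries-style spec layout; verify the actual field names with `kubectl explain hcsmachineconfigpool.spec` against the installed CRD.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # group/version assumed
kind: HCSMachineConfigPool
metadata:
  name: <cluster-name>-control-plane-pool
  namespace: cpaas-system
spec:
  # Include at least as many entries as planned control plane replicas.
  configs:                                  # field name illustrative
    - hostname: <control-plane-node-1>
      networks:
        - subnetName: <subnet-name>         # one subnet selector per networks[] entry
          ip: <static-ip-1>
    - hostname: <control-plane-node-2>
      networks:
        - subnetName: <subnet-name>
          ip: <static-ip-2>
    - hostname: <control-plane-node-3>
      networks:
        - subnetName: <subnet-name>
          ip: <static-ip-3>
```

Note that the subnetName used here must also appear in HCSCluster.spec.network.subnets, and new manifests should not mix subnetName with subnetId or the deprecated subenetName.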
Configure Machine Template
The HCSMachineTemplate defines the VM specifications for control plane nodes.
Storage Requirements
The following data disk mount points are recommended for control plane nodes:
- /var/lib/etcd - etcd data (10GB+)
- /var/lib/kubelet - kubelet data (100GB+)
- /var/lib/containerd - container runtime data (100GB+)
- /var/cpaas - platform data and logs (40GB+)
*Required when dataVolumes is specified.
Note: Do not set runtime identity fields such as providerID or serverId in HCSMachineTemplate manifests. The provider assigns these values when it creates HCS instances.
Note: Tenant administrators cannot retrieve the provider-recognized flavorName and availabilityZone values from the HCS UI. Get the exact values from the HCS administrator before you apply the manifest.
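A minimal sketch of the template, assuming an infrastructure.cluster.x-k8s.io group and illustrative dataVolumes field names; confirm the schema with `kubectl explain hcsmachinetemplate.spec` before applying.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # group/version assumed
kind: HCSMachineTemplate
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system
spec:
  template:
    spec:
      # Provider-recognized values; obtain exact strings from the HCS administrator.
      imageName: <image-name>
      flavorName: <flavor-name>
      availabilityZone: <availability-zone>
      # Do not set providerID or serverId; the provider assigns them at creation.
      dataVolumes:                          # field names illustrative
        - mountPoint: /var/lib/etcd
          sizeGB: 10
        - mountPoint: /var/lib/kubelet
          sizeGB: 100
        - mountPoint: /var/lib/containerd
          sizeGB: 100
        - mountPoint: /var/cpaas
          sizeGB: 40
```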
Configure KubeadmControlPlane
The KubeadmControlPlane defines the Kubernetes control plane configuration.
The HCS controller also injects files while resolving cloud-init data. It writes /etc/kubernetes/pki/kubelet.crt, /etc/kubernetes/pki/kubelet.key, and /etc/kubernetes/encryption-provider.conf for control plane machines. For the first control plane machine, the controller generates the encryption provider configuration. After the control plane is initialized, it tries to reuse the existing kube-apiserver encryption provider configuration. If you include a bootstrap file at /etc/kubernetes/encryption-provider.conf, treat it as a placeholder because the controller-generated or synchronized file takes precedence.
Note: Configure apiServer.extraArgs and apiServer.extraVolumes together. If the volume is not mounted, kube-apiserver cannot read the files written under /etc/kubernetes.
Note: For HCS control planes that use a fixed-size static IP pool, keep rolloutStrategy.rollingUpdate.maxSurge: 0 so replacements happen in a scale-down-then-scale-up order. This default upgrade path usually does not require additional control plane IPs. If you plan to increase control plane replicas or set maxSurge greater than 0, first extend the referenced HCSMachineConfigPool with additional hostname and static IP entries.
For component versions (DNS image tag, etcd image tag), refer to the OS Support Matrix.
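The notes above can be sketched as a KubeadmControlPlane manifest. The resource kind and field paths follow the upstream Cluster API v1beta1 schema; the extraArgs/extraVolumes values shown for the encryption provider configuration are an assumption based on the file paths this document describes.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system
spec:
  replicas: 3
  version: <kubernetes-version>
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # group/version assumed
      kind: HCSMachineTemplate
      name: <cluster-name>-control-plane
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 0        # scale down then up; avoids needing extra static IPs
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        # Configure extraArgs and extraVolumes together: without the mount,
        # kube-apiserver cannot read files under /etc/kubernetes.
        extraArgs:
          encryption-provider-config: /etc/kubernetes/encryption-provider.conf
        extraVolumes:
          - name: encryption-provider
            hostPath: /etc/kubernetes/encryption-provider.conf
            mountPath: /etc/kubernetes/encryption-provider.conf
            readOnly: true
```

Any bootstrap file you place at /etc/kubernetes/encryption-provider.conf is a placeholder; the controller-generated or synchronized file takes precedence.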
Configure HCSCluster
The HCSCluster resource defines the HCS infrastructure configuration.
The HCS provider creates an ELB on the HCS platform for the Kubernetes API server. This ELB must use hybrid (mixed) load balancing so cluster nodes can also reach the API server through the ELB address.
If you want all ELB-related addresses to be fixed, provide vipAddress, elbVirsubnetL4Ips, and elbVirsubnetL7Ips. Each elbVirsubnetL4Ips[].ips and elbVirsubnetL7Ips[].ips entry must contain two IPs. If you omit the L4 or L7 virtual subnet IPs, HCS allocates them randomly.
If you set vipDomainName, configure HCS Cloud DNS Private Zones so that the domain resolves to vipAddress.
List every cluster subnet in spec.network.subnets before you reference it anywhere else. vipSubnetName, elbVirsubnetL4Ips[].subnetName, elbVirsubnetL7Ips[].subnetName, and the subnetName values used by HCSMachineConfigPool must all exist in spec.network.subnets.
Do not disable Hybrid Load Balancing on the provider-created ELB after the cluster is created. The cluster depends on that ELB mode so nodes can reach the API server through the ELB address.
Do not include spec.controlPlaneEndpoint in the create manifest. In the HCS create flow, the controller derives and populates this field from spec.controlPlaneLoadBalancer after the HCSCluster is created. Do not set controlPlaneEndpoint manually, and do not add an empty controlPlaneEndpoint object. If controlPlaneEndpoint is explicitly present in the manifest, it must include both host and port.
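Putting these rules together, a hedged HCSCluster sketch might look like the following; the group/version and the exact shape of network.subnets entries are assumptions, so verify them with `kubectl explain hcscluster.spec` on the installed CRD.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # group/version assumed
kind: HCSCluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  identityRef:
    name: hcs-credentials            # the HCS credential Secret
  network:
    subnets:                         # list every subnet before referencing it
      - name: <subnet-name>          # entry shape illustrative
  controlPlaneLoadBalancer:
    vipAddress: <api-server-vip>
    vipSubnetName: <subnet-name>     # must exist in spec.network.subnets
    elbVirsubnetL4Ips:
      - subnetName: <subnet-name>
        ips: [<l4-ip-1>, <l4-ip-2>]  # each entry must contain exactly two IPs
    elbVirsubnetL7Ips:
      - subnetName: <subnet-name>
        ips: [<l7-ip-1>, <l7-ip-2>]
  # Do not set spec.controlPlaneEndpoint (not even as an empty object);
  # the controller derives it from controlPlaneLoadBalancer after creation.
```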
Configure Cluster
The Cluster resource in Cluster API declares the cluster and references the control plane and infrastructure resources.
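A minimal Cluster manifest tying the pieces together, following the upstream cluster.x-k8s.io/v1beta1 schema; the HCSCluster group/version in infrastructureRef is assumed to match the installed provider.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["<pod-cidr>"]
    services:
      cidrBlocks: ["<service-cidr>"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: <cluster-name>-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # group/version assumed
    kind: HCSCluster
    name: <cluster-name>
```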
Cluster Verification
After deploying all cluster resources, verify that the cluster has been created successfully.
Using kubectl
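The following commands, run against the global cluster, are a typical verification sequence. The `<cluster-name>-kubeconfig` Secret with a base64-encoded `value` key follows the standard Cluster API convention; confirm the Secret name in your environment.

```shell
# Inspect the Cluster API resources in cpaas-system.
kubectl -n cpaas-system get cluster <cluster-name>
kubectl -n cpaas-system get kubeadmcontrolplane
kubectl -n cpaas-system get machines -l cluster.x-k8s.io/cluster-name=<cluster-name>

# Retrieve the workload cluster kubeconfig and check node readiness.
kubectl -n cpaas-system get secret <cluster-name>-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > <cluster-name>.kubeconfig
kubectl --kubeconfig <cluster-name>.kubeconfig get nodes
```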
Expected Results
A successfully created cluster should show:
- Cluster status: Running or Provisioned
- All control plane machines: Running
- Kubernetes nodes: Ready
- Cluster Module Status: Completed
Adding Worker Nodes
For instructions on adding worker nodes to the cluster, refer to Managing Nodes.
Upgrading Clusters
For instructions on upgrading cluster components, refer to Upgrading Clusters.