An operator is a software extension that uses custom resources (extensions of the Kubernetes API) to manage complex applications on behalf of users.
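For a concrete example of what that looks like, once the ECK operator you'll install later in this tutorial is running, its custom resources appear in the API alongside the built-in resource types, and you can list them with kubectl:

# List the custom resources that the ECK operator adds to the API
kubectl api-resources --api-group=elasticsearch.k8s.elastic.co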
Prerequisites

To follow this tutorial, you need the kubectl command-line tool and access to a Kubernetes cluster. If you don't have a cluster, the next section shows how to create one locally with Kind.
Create a Kubernetes Cluster with Kind (Optional)
1. Create a file called kind-es-cluster.yaml with the following content:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
This configuration defines a cluster with one control-plane node and two worker nodes.

2. Create the cluster with the kind create cluster command, followed by the --config flag and the name of the configuration file:

kind create cluster --config kind-es-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.16.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
3. Verify that the cluster is up and running with the kubectl cluster-info command:

kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:53519
KubeDNS is running at https://127.0.0.1:53519/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
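Optionally, you can also confirm that the three nodes defined in kind-es-cluster.yaml are up:

# Expect one control-plane node and two workers
kubectl get nodes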
Install the Operator Lifecycle Manager
Install the Operator Lifecycle Manager (OLM), a component of the Operator Framework that helps you install and manage operators, by downloading and running its install script:

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/install.sh | bash -s 0.13.0
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
namespace/olm created
namespace/operators created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
serviceaccount/olm-operator-serviceaccount created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
Install the ECK Operator
1. Install the ECK (Elastic Cloud on Kubernetes) operator by applying the all-in-one manifest provided by Elastic:

kubectl apply -f https://download.elastic.co/downloads/eck/1.0.0/all-in-one.yaml
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
serviceaccount/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
service/elastic-webhook-server created
secret/elastic-webhook-server-cert created
2. List the Custom Resource Definitions (CRDs) now installed in your cluster:

kubectl get CustomResourceDefinition
NAME                                           CREATED AT
apmservers.apm.k8s.elastic.co                  2020-01-29T07:02:24Z
catalogsources.operators.coreos.com            2020-01-29T06:59:21Z
clusterserviceversions.operators.coreos.com    2020-01-29T06:59:20Z
elasticsearches.elasticsearch.k8s.elastic.co   2020-01-29T07:02:24Z
installplans.operators.coreos.com              2020-01-29T06:59:20Z
kibanas.kibana.k8s.elastic.co                  2020-01-29T07:02:25Z
operatorgroups.operators.coreos.com            2020-01-29T06:59:21Z
subscriptions.operators.coreos.com             2020-01-29T06:59:20Z
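The list contains both the CRDs installed by OLM and those installed by ECK. To narrow it down to the ECK ones, you can filter on the group suffix:

# Show only the CRDs that belong to the ECK operator
kubectl get crd | grep k8s.elastic.co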
3. Inspect one of the CRDs in more detail using the kubectl describe CustomResourceDefinition command, followed by the name of the CRD:

kubectl describe CustomResourceDefinition elasticsearches.elasticsearch.k8s.elastic.co
Name:         elasticsearches.elasticsearch.k8s.elastic.co
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"creationTimestamp":null,"name...
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2020-01-29T07:02:24Z
  Generation:          1
  Resource Version:    1074
  Self Link:           /apis/apiextensions.k8s.io/v1/customresourcedefinitions/elasticsearches.elasticsearch.k8s.elastic.co
  UID:                 2332769c-ead3-4208-b6bd-68b8cfcb3692
Spec:
  Conversion:
    Strategy:  None
  Group:       elasticsearch.k8s.elastic.co
  Names:
    Categories:
      elastic
    Kind:       Elasticsearch
    List Kind:  ElasticsearchList
    Plural:     elasticsearches
    Short Names:
      es
    Singular:  elasticsearch
  Preserve Unknown Fields:  true
  Scope:                    Namespaced
  Versions:
    Additional Printer Columns:
      Json Path:    .status.health
      Name:         health
      Type:         string
      Description:  Available nodes
      Json Path:    .status.availableNodes
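Two details of this output are worth noting: the Additional Printer Columns entries are what later make kubectl get elasticsearch display the HEALTH and NODES columns, and the Short Names entry means you can abbreviate the resource name in commands:

# 'es' is the short name declared in the CRD
kubectl get es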
4. Monitor the operator's logs with the following command:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
{"level":"info","@timestamp":"2020-01-27T14:57:57.656Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"license-controller","worker count":1}
{"level":"info","@timestamp":"2020-01-27T14:57:57.757Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.758Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"channel source: 0xc00003a870"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-6881438d","controller":"elasticsearch-controller"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.760Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","worker count":1}
5. List the pods running in the elastic-system namespace with:

kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          11m
Make sure the pod's status is Running before moving on.
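If you're scripting this installation, you can block until the operator pod is ready instead of polling manually. A minimal sketch using kubectl wait:

# Wait up to two minutes for the operator pod to become Ready
kubectl wait --for=condition=Ready pod/elastic-operator-0 -n elastic-system --timeout=120s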
Deploy an Elasticsearch Cluster
1. Create a file called elastic-search-cluster.yaml with the following content:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
- the version parameter specifies the Elasticsearch version the Operator will deploy
- the count parameter sets the number of Elasticsearch nodes. Make sure it's not greater than the number of nodes in your Kubernetes cluster (a quick way to count them is shown below).
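If you're not sure how many nodes your Kubernetes cluster has, you can count them like this:

# Count all nodes (control plane and workers) registered with the cluster
kubectl get nodes --no-headers | wc -l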
2. Use kubectl apply to create the Elasticsearch cluster:

kubectl apply -f elastic-search-cluster.yaml
elasticsearch.elasticsearch.k8s.elastic.co/quickstart created
3. Check the status of the cluster:

kubectl get elasticsearch
NAME         HEALTH    NODES   VERSION   PHASE   AGE
quickstart   unknown           7.5.2             3m51s
At first, the HEALTH status has not been reported yet. It takes a few minutes for the process to complete. Then, the HEALTH status will show as green:

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   8m47s
4. List the pods that belong to the Elasticsearch cluster:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          9m18s
quickstart-es-default-1   1/1     Running   0          9m18s
Verify Your Elasticsearch Installation
1. The operator automatically created a ClusterIP service for the cluster. Use the kubectl get service command to see it:

kubectl get service quickstart-es-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.103.196.28   <none>        9200/TCP   15m
2. To forward requests from localhost:9200 to port 9200 of the quickstart-es-http service, type the following command in a new terminal window:

kubectl port-forward service/quickstart-es-http 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
3. The password of the elastic user is stored in a Kubernetes secret. Use the following command to retrieve the password and save it into an environment variable called PASSWORD:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
4. Use curl to make a request. The -k flag skips certificate verification, because the cluster uses a self-signed certificate by default:

curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "g0_1Vk9iQoGwFWYdzUqfig",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
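If you'd rather not skip certificate verification with -k, you can extract the CA certificate that the operator generated and verify the connection properly. The sketch below assumes ECK's default self-signed TLS setup, its default secret naming (quickstart-es-http-certs-public), and that the cluster runs in the default namespace:

# Save the operator-generated CA certificate to a local file
kubectl get secret quickstart-es-http-certs-public -o=jsonpath="{.data['ca\.crt']}" | base64 --decode > ca.crt

# Verify the certificate instead of skipping checks; --resolve maps the
# service hostname (which the certificate is issued for) to the local
# port-forward on 127.0.0.1
curl --cacert ca.crt --resolve quickstart-es-http.default.svc:9200:127.0.0.1 \
  -u "elastic:$PASSWORD" "https://quickstart-es-http.default.svc:9200"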
Deploy Kibana
1. Create a file called kibana.yaml with the following content:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.5.1
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    metadata:
      labels:
        foo: kibana
    spec:
      containers:
      - name: kibana
        resources:
          requests:
            memory: 1Gi
            cpu: 0.5
          limits:
            memory: 1Gi
            cpu: 1
The elasticsearchRef field connects this Kibana instance to the quickstart Elasticsearch cluster, and the podTemplate sets resource requests and limits for the Kibana container.

2. Use the kubectl apply command to deploy Kibana:

kubectl apply -f kibana.yaml
kibana.kibana.k8s.elastic.co/quickstart created
3. Check the status of the deployment:

kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart                    7.5.1     3s
At first, the HEALTH status hasn't been reported yet. After a few minutes, the HEALTH status will show as green:

kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.5.1     104s
4. List the Kibana pod with the kubectl get pods command:

kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-7578b8d8fc-ftvbz   1/1     Running   0          70s
Verify Your Kibana Installation
1. The operator automatically created a ClusterIP service for Kibana. You can retrieve it like this:

kubectl get service quickstart-kb-http
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.98.126.75   <none>        5601/TCP   11m
2. Forward requests from localhost:5601 to port 5601 of the quickstart-kb-http service by typing the following command in a new terminal window:

kubectl port-forward service/quickstart-kb-http 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
3. Log in with the password of the elastic user. You've already saved it into an environment variable called PASSWORD in Step 3 of the Verify Your Elasticsearch Installation section. You can now display it with:

echo $PASSWORD
vrfr6b6v4687hnldrc72kb4q
In this example, the password is vrfr6b6v4687hnldrc72kb4q, but yours will be different.

4. Point your browser to https://localhost:5601 and log in with the elastic username and the password you retrieved earlier.

Manage Your ECK Cluster with the Kubernetes Operator
1. To scale the cluster down to a single node, set nodeSets.count: 1 in your elastic-search-cluster.yaml file. Your spec should look like this:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
2. Apply the change:

kubectl apply -f elastic-search-cluster.yaml
elasticsearch.elasticsearch.k8s.elastic.co/quickstart configured
3. Check the status of the cluster. The PHASE column shows ApplyingChanges while the operator scales the cluster down:

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   56m
4. List the pods to confirm that only one Elasticsearch node is left:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          58m
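If you want to follow a transition like this as it happens, you can watch the resource instead of polling it; kubectl's -w flag streams updates:

# Stream status updates while the operator applies changes
kubectl get elasticsearch -w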
5. To scale the cluster back up to two nodes, set nodeSets.count: 2 in your elastic-search-cluster.yaml file:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
6. Apply the manifest again with kubectl apply -f elastic-search-cluster.yaml, then check the status:

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   61m
Once the new node has joined the cluster, the PHASE column will show as Ready:

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   68m
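When you're done experimenting, you can delete everything the same way you created it. Removing the custom resources causes the operator to clean up the associated pods, services, and secrets:

# Delete the Kibana and Elasticsearch custom resources;
# the operator tears down the underlying Kubernetes objects
kubectl delete kibana quickstart
kubectl delete elasticsearch quickstart

# If you created the cluster with Kind, you can remove it entirely
kind delete cluster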