Quickly Deploying a Kubernetes Cluster with kubeadm

zze · 2020-06-30

Environment Preparation

Machines used to deploy a Kubernetes cluster following this article must meet the following requirements:

  • Three machines running CentOS 7.x (x86_64); I use CentOS 7.8 here.
  • Hardware: at least 2 GB of RAM, 2 CPUs, and 30 GB of disk.
  • Full network connectivity between all machines in the cluster.
  • Outbound internet access, needed for pulling images.
  • Swap disabled.

My machines are as follows:

Hostname     IP
k8s-master   10.0.1.61
k8s-node1    10.0.1.62
k8s-node2    10.0.1.63

Perform the following steps on all three machines.
1. Disable the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

2. Disable SELinux:

$ sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
$ setenforce 0  # temporary

3. Disable swap:

$ swapoff -a  # temporary
$ vim /etc/fstab  # permanent: comment out the swap line
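
If you would rather not edit fstab by hand, a sed one-liner can comment out the swap entry; this is a sketch, so check your fstab's swap line first:

$ sed -i '/\sswap\s/s/^/#/' /etc/fstab  # comment out every fstab line whose type is swap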

4. Set the corresponding hostname on each machine:

$ hostnamectl set-hostname <hostname>
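
For example, on the 10.0.1.61 machine:

$ hostnamectl set-hostname k8s-master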

5. Add hosts entries on the master:

$ cat >> /etc/hosts << EOF
10.0.1.61 k8s-master
10.0.1.62 k8s-node1
10.0.1.63 k8s-node2
EOF
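
An optional sanity check that the names now resolve:

$ ping -c 1 k8s-node1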

6. Pass bridged IPv4 traffic to iptables chains:

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sysctl --system  # apply
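
If the two bridge settings report "No such file or directory", the br_netfilter kernel module is probably not loaded yet; loading it and re-running sysctl usually resolves that (a sketch):

$ modprobe br_netfilter
$ sysctl --system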

7. Synchronize the time:

$ yum install ntpdate -y
$ ntpdate time.windows.com

8. Add a domestic (Aliyun) Kubernetes yum repository:

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
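
To confirm the repository is usable and to see which versions it offers, you can list the candidates (a sketch):

$ yum list kubeadm --showduplicates | sort -r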

Install Docker

Install Docker on all nodes.

Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
1. Install Docker:

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a

2. Configure a domestic registry mirror:

$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
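
Changes to daemon.json only take effect after a restart; restart Docker and optionally verify that the mirror is active (a sketch):

$ systemctl restart docker
$ docker info | grep -A 1 'Registry Mirrors'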

Install kubeadm, kubelet, and kubectl

Install kubeadm, kubelet, and kubectl on all nodes.

Because releases update frequently, pin the version explicitly here:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet
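
A quick check that the pinned release is what actually got installed (a sketch):

$ kubeadm version -o short
v1.17.0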

Deploy the Kubernetes Master

Deploy the Kubernetes master on the 10.0.1.61 (k8s-master) node.

Initialize the cluster:

$ kubeadm init \
  --apiserver-advertise-address=10.0.1.61 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.61:6443 --token fswzjs.057jwoyqk6pvkcru \
    --discovery-token-ca-cert-hash sha256:276b5cc71c1a91051a415fb209ceec6c3ac1942e7d03be2d73534f050d01da2a 

Because the default image registry k8s.gcr.io is unreachable from inside China, the --image-repository option above points at the Aliyun mirror instead.

Output like the above means initialization succeeded. Be sure to save the last command printed in that output:

kubeadm join 10.0.1.61:6443 --token fswzjs.057jwoyqk6pvkcru \
    --discovery-token-ca-cert-hash sha256:276b5cc71c1a91051a415fb209ceec6c3ac1942e7d03be2d73534f050d01da2a 

Check the nodes with the kubectl tool:

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   41m   v1.17.0

Install a Pod Network Add-on

Option 1: pull through a proxy

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If this step fails, it is because kube-flannel.yml references image registries outside China; configuring an HTTP proxy for Docker fixes it:

$ mkdir /etc/systemd/system/docker.service.d -p
$ cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.0.0.101:41091/" "HTTPS_PROXY=http://10.0.0.101:41091/" "NO_PROXY=localhost,127.0.0.1"
EOF
# restart
$ systemctl daemon-reload && systemctl restart docker

Replace the HTTP_PROXY value here with your own proxy address.
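
To confirm Docker picked up the proxy, its info output should now list it (a sketch):

$ docker info | grep -i proxy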

Option 2 (recommended)

Needing a proxy for every deployment is inconvenient, so I pushed the flannel image to my own Aliyun registry.

I then modified kube-flannel.yml to use that image, producing the content below. So you only need to save the following content to a file and run kubectl apply -f <文件名>, as shown right after the manifest.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-shenzhen.aliyuncs.com/zze/flannel:v0.13.0-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-shenzhen.aliyuncs.com/zze/flannel:v0.13.0-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
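
For example, assuming you saved the manifest above as kube-flannel.yml (the filename is up to you):

$ kubectl apply -f kube-flannel.yml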

This apply actually pulls a number of images; if docker images looks like the following, all of them were pulled successfully:

$ docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                                            v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.0             7d54289267dc        6 months ago        116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.0             78c190f736b1        6 months ago        94.4MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0             5eb3b7486872        6 months ago        161MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.0             0cae8d5cc64c        6 months ago        171MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        7 months ago        41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        8 months ago        288MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
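
You can also watch the kube-system pods come up; once the flannel and coredns pods are Running, the network is in place (a sketch):

$ kubectl get pods -n kube-system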

Docker will also have started quite a few containers on its own, which you can see with docker ps:

$ docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
a01e55786634        4e9f801d2217                                        "/opt/bin/flanneld -…"   7 minutes ago       Up 7 minutes                            k8s_kube-flannel_kube-flannel-ds-amd64-mkzhh_kube-system_0fb49ecb-695c-48d3-b654-cd1694691538_2
b187faf2291e        78c190f736b1                                        "kube-scheduler --au…"   7 minutes ago       Up 7 minutes                            k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_ef597d905c3006a0826f3e90c95561d5_3
fc71fd419088        0cae8d5cc64c                                        "kube-apiserver --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_e58ee1fe74feae783dabf1c1bc9f5fde_3
7f7244dde0bb        7d54289267dc                                        "/usr/local/bin/kube…"   7 minutes ago       Up 7 minutes                            k8s_kube-proxy_kube-proxy-fnr6c_kube-system_d12921e7-637c-4b78-aa90-a8e0c1280dcd_3
c284ad7ab065        5eb3b7486872                                        "kube-controller-man…"   7 minutes ago       Up 7 minutes                            k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_eb0c62892de0c481c800640b4c18fcd7_3
83f13477b673        303ce5db0e90                                        "etcd --advertise-cl…"   7 minutes ago       Up 7 minutes                            k8s_etcd_etcd-k8s-master_kube-system_c741dae9c998babb414de83baa201b73_3
3e18351c87c1        70f311871ae1                                        "/coredns -conf /etc…"   7 minutes ago       Up 7 minutes                            k8s_coredns_coredns-9d85f5447-tz552_kube-system_584abd00-5250-45e2-8297-7e7a02ade162_1
4c99825fd663        70f311871ae1                                        "/coredns -conf /etc…"   7 minutes ago       Up 7 minutes                            k8s_coredns_coredns-9d85f5447-v9z79_kube-system_3352f400-bac5-432c-9d15-4b3b222447b2_1
2574a5df7e9a        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_coredns-9d85f5447-tz552_kube-system_584abd00-5250-45e2-8297-7e7a02ade162_1
793ca933a684        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-scheduler-k8s-master_kube-system_ef597d905c3006a0826f3e90c95561d5_6
a4628e880aa3        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-controller-manager-k8s-master_kube-system_eb0c62892de0c481c800640b4c18fcd7_4
a50acc0d0eef        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-apiserver-k8s-master_kube-system_e58ee1fe74feae783dabf1c1bc9f5fde_6
2d1a6e939dc0        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-flannel-ds-amd64-mkzhh_kube-system_0fb49ecb-695c-48d3-b654-cd1694691538_2
ab663624336c        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_coredns-9d85f5447-v9z79_kube-system_3352f400-bac5-432c-9d15-4b3b222447b2_2
27de37b4b10b        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_etcd-k8s-master_kube-system_c741dae9c998babb414de83baa201b73_5
c778bd2e8e23        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-fnr6c_kube-system_d12921e7-637c-4b78-aa90-a8e0c1280dcd_3

Checking the nodes with kubectl again shows that k8s-master is now in the Ready state:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   41m   v1.17.0

Join the Kubernetes Nodes

k8s-node1k8s-node2 节点执行。

To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed on k8s-master:

$ kubeadm join 10.0.1.61:6443 --token fswzjs.057jwoyqk6pvkcru \
    --discovery-token-ca-cert-hash sha256:276b5cc71c1a91051a415fb209ceec6c3ac1942e7d03be2d73534f050d01da2a
W0630 16:41:49.544112   12396 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Output like the above means the node joined the cluster successfully.

Checking the cluster's node status on k8s-master now shows:

$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   62m   v1.17.0
k8s-node1    NotReady   <none>   97s   v1.17.0
k8s-node2    NotReady   <none>   83s   v1.17.0

You can see that k8s-node1 and k8s-node2 are NotReady; this is because they, too, need to pull the network plugin (flannel) images from abroad. There are two fixes: export the relevant images from k8s-master and import them offline on k8s-node1 and k8s-node2, or configure a proxy directly on those two nodes. I use the second, as follows:

$ mkdir /etc/systemd/system/docker.service.d -p
$ cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.0.0.101:41091/" "NO_PROXY=localhost,127.0.0.1"
EOF
# restart
$ systemctl daemon-reload && systemctl restart docker

Replace the HTTP_PROXY value here with your own proxy address.

Check whether k8s-node1 and k8s-node2 successfully pulled the flannel image and are running it:

# check the image
$ docker images | grep flannel
quay.io/coreos/flannel                               v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB

# check the container
$ docker ps | grep flannel
8c52e667fffd        4e9f801d2217                                        "/opt/bin/flanneld -…"   2 minutes ago       Up 2 minutes                            k8s_kube-flannel_kube-flannel-ds-amd64-r2l9n_kube-system_a63be588-e136-4906-a1fa-bd5788824176_0
51d4b485a9f4        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-flannel-ds-amd64-r2l9n_kube-system_a63be588-e136-4906-a1fa-bd5788824176_3

Checking the cluster's node status on k8s-master once more:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   77m   v1.17.0
k8s-node1    Ready    <none>   16m   v1.17.0
k8s-node2    Ready    <none>   16m   v1.17.0

All nodes in the cluster are now in a normal state.

Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

$ kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-86c57db685-mxgg9   1/1     Running   0          32s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        85m
service/nginx        NodePort    10.96.67.205   <none>        80:32251/TCP   15s

Test access:

$ curl k8s-node1:32251 -I
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 30 Jun 2020 09:26:32 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
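
Since a NodePort service listens on the same port (32251 here) on every node, any node's address works just as well:

$ curl k8s-node2:32251 -I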

Deploy the Dashboard

Download the deployment YAML, change its Service to expose a NodePort, and set the Dashboard language to Chinese:

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
$ vim recommended.yaml
...
kind: Service
apiVersion: v1
...
spec:
  type: NodePort
  ports:
    - port: 443 
      targetPort: 8443
      nodePort: 30002
      ...
kind: Deployment
apiVersion: apps/v1
...
spec:
...
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP 
          env:
            - name: ACCEPT_LANGUAGE
              value: zh
...

Apply the YAML:

$ kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
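
To confirm the Dashboard pod is Running, and to see which node it landed on (a sketch):

$ kubectl get pods -n kubernetes-dashboard -o wide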

Test by opening https://10.0.1.63:30002 in a browser.

Create a service account and bind it to the built-in cluster-admin cluster role:

$ kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# get the token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-6sk48
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: d831a378-8782-48d5-b133-60d883df4bb3

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Iks2ZkQ3dWtCbGxmNFRSaEFiU2lkSVVFbFQ0V25YWU9GMU5jN3dDRThDX1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNnNrNDgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDgzMWEzNzgtODc4Mi00OGQ1LWIxMzMtNjBkODgzZGY0YmIzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.E6jTCw-P5pk_0eGgpzkGll3uCSw16XU8le2yhpvuUxSA60Mw55bFbF5t-6ilrcvRl1br3TZE5xa-rog-qUqYWMTqmC6sWBhJ8VbA7ya4T7wr_rmUEIQRulOMOSVOdx1jJtfQrLPQcq18NY6Pc8uh_gPZWGzOitbzVEoMhD7IQIUf4sdsAkne3t4yJX4zpHzUW4v0yMY7Gm-Ki0i54wZOLvMZTeGjGeExevQMntfrU5M5NHJP7PPd8ERrnUanJTipftlTdD8_pu6G3o4-WexRDkS7s3CMkxBZlUholUHk1RGB2EdSvMD3hM-fxJGZmO8V6-D52x2Y21SIdlpytSn31g

On the Dashboard login page, select the Token option and enter the token generated above into the input box:

Then click Sign in.

Fixing an Inaccessible Dashboard UI

Because of certificate problems, browsers such as Chrome cannot open the Dashboard UI directly; the fix is to change the certificate the Dashboard uses.

1. Delete the default secret and create a new one from the self-signed certificates:

$ kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
$ kubectl create secret generic kubernetes-dashboard-certs \
--from-file=/etc/kubernetes/pki/apiserver.key --from-file=/etc/kubernetes/pki/apiserver.crt -n kubernetes-dashboard

2. Edit the recommended.yaml file and add the two certificate arguments under args:

$ vim recommended.yaml
...
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --tls-key-file=apiserver.key
            - --tls-cert-file=apiserver.crt

$ kubectl apply -f recommended.yaml

After that, Google Chrome can access the Dashboard UI normally.

Handling kubeadm Token Expiry

Earlier, running kubeadm init on k8s-master printed a kubeadm join command like this:

kubeadm join 10.0.1.61:6443 --token fswzjs.057jwoyqk6pvkcru \
    --discovery-token-ca-cert-hash sha256:276b5cc71c1a91051a415fb209ceec6c3ac1942e7d03be2d73534f050d01da2a 

By now we know what this command is for: joining nodes to the cluster. As you can see, it carries a token and a CA cert hash, and the token expires, after 24 hours by default.

So, here is how to generate a new token once the old one has expired; two approaches follow.

Approach 1

1. Create a token:

$ kubeadm token create
W1003 16:09:20.680476   25639 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1003 16:09:20.680669   25639 validation.go:28] Cannot validate kubelet config - no validator is available
2lztw0.n7mmegtqj8p0saac

2. List the tokens:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
1snf3b.ba9edvc0ynxaz4e0   1h          2020-10-03T17:18:38+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
2lztw0.n7mmegtqj8p0saac   23h         2020-10-04T16:09:20+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

3. Generate the token signature (the discovery-token-ca-cert-hash):

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
9eb0b506a3091c3e152101f50b9748a76c798e664d92daba59fb02ecea859580

4. The final join command is therefore:

$ kubeadm join 192.168.31.61:6443 --token 2lztw0.n7mmegtqj8p0saac --discovery-token-ca-cert-hash sha256:9eb0b506a3091c3e152101f50b9748a76c798e664d92daba59fb02ecea859580

Approach 2

kubeadm itself actually provides an option that prints a complete kubeadm join command, token and signature included:

$ kubeadm token create --print-join-command
W1003 15:58:14.985169   21569 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1003 15:58:14.985323   21569 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join 192.168.0.181:6443 --token gcc3v0.jvliwlavl7swhlm6     --discovery-token-ca-cert-hash sha256:9eb0b506a3091c3e152101f50b9748a76c798e664d92daba59fb02ecea859580 

List the tokens:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
1snf3b.ba9edvc0ynxaz4e0   1h          2020-10-03T17:18:38+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
2lztw0.n7mmegtqj8p0saac   23h         2020-10-04T16:09:20+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
gcc3v0.jvliwlavl7swhlm6   23h         2020-10-04T15:58:15+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

Cleaning Up

To remove a Kubernetes deployment from a host, use the kubeadm reset command.

Run it on the master node to clean up the master environment:

$ kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: 

Run it on a node to clean up the node environment:

$ kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]:
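
kubeadm reset itself warns that it does not clean up CNI configuration or iptables/IPVS rules; for a truly clean slate, something like the following is commonly run afterwards (a sketch, verify the paths on your system):

$ rm -rf /etc/cni/net.d $HOME/.kube/config
$ iptables -F && iptables -t nat -F && iptables -X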