1. Environment planning for the three machines
IP address | Hostname
10.0.0.11  | k8s-master
10.0.0.12  | k8s-node-1
10.0.0.13  | k8s-node-2

All three machines resolve one another through the same three /etc/hosts entries (one line per machine), written as shown below.
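A minimal way to put the entries in place, assuming nothing conflicting is already in /etc/hosts (run on each machine):
cat >> /etc/hosts <<'EOF'
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
EOF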
2. Install etcd on the master node
[root@k8s-master ~]# yum install etcd -y
3. Edit the etcd configuration file
[root@k8s-master ~]# grep -Ev "^$|#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
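To apply the same edits non-interactively, a sed along these lines should work against the stock etcd.conf shipped by the package (the patterns assume the defaults have not already been changed):
[root@k8s-master ~]# sed -i \
    -e 's#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"#' \
    -e 's#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"#' \
    /etc/etcd/etcd.conf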
4. Start the service
[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# systemctl enable etcd.service
5. Check cluster health
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy
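For an extra sanity check, the v2 etcdctl can write and read back a throwaway key (/test here is just an example name):
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 set /test hello
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 get /test
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 rm /test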
6. Install Kubernetes on the master node
[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y
7. Edit the configuration files
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"   # master address and port
8. Start the required services and enable them at boot
[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service
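Before reaching for kubectl, a quick probe of the insecure API port confirms the apiserver is answering; /healthz is a standard endpoint and should return ok:
[root@k8s-master ~]# curl http://10.0.0.11:8080/healthz
ok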
9. Check service status
[root@k8s-master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health":"true"}
scheduler Healthy ok
controller-manager Healthy ok
10. Install Docker on the master node
[root@k8s-master ~]# yum install docker -y
11. Start Docker and enable it at boot
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl enable docker
12. Install Kubernetes on the node machines
[root@k8s-node-1 ~]# yum install kubernetes-node.x86_64 -y
[root@k8s-node-2 ~]# yum install kubernetes-node.x86_64 -y
13. Edit the configuration files
[root@k8s-node-1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
[root@k8s-node-2 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
[root@k8s-node-2 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=10.0.0.13"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
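The two node files differ only in the --hostname-override IP, so the edits can also be scripted; a sketch for node-1 (substitute 10.0.0.13 on node-2, and adjust the patterns if your stock file differs):
[root@k8s-node-1 ~]# sed -i \
    -e 's#^KUBELET_HOSTNAME=.*#KUBELET_HOSTNAME="--hostname-override=10.0.0.12"#' \
    -e 's#^KUBELET_API_SERVER=.*#KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"#' \
    /etc/kubernetes/kubelet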
14. Start the services
[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl restart kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
[root@k8s-node-1 ~]# systemctl restart kube-proxy.service
[root@k8s-node-2 ~]# systemctl enable kubelet.service
[root@k8s-node-2 ~]# systemctl restart kubelet.service
[root@k8s-node-2 ~]# systemctl enable kube-proxy.service
[root@k8s-node-2 ~]# systemctl restart kube-proxy.service
15. Verify from the master node
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 2m
10.0.0.13 Ready 2m
16. Install and configure the flannel network on all nodes
[root@k8s-node-1 ~]# yum install flannel -y
[root@k8s-node-1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@k8s-node-2 ~]# yum install flannel -y
[root@k8s-node-2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@k8s-master ~]# yum install flannel -y
[root@k8s-master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
17. Configure the network segment on the master node
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
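Read the key back to confirm it landed; note that the /atomic.io/network prefix must match FLANNEL_ETCD_PREFIX in /etc/sysconfig/flanneld (it is the package default):
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 get /atomic.io/network/config
{ "Network": "172.18.0.0/16" }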
18. Restart services on the master node
[root@k8s-master ~]# systemctl enable flanneld.service
[root@k8s-master ~]# systemctl restart flanneld.service
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service
19. Restart services on the node machines
[root@k8s-node-1 ~]# systemctl enable flanneld.service
[root@k8s-node-1 ~]# systemctl restart flanneld.service
[root@k8s-node-1 ~]# systemctl restart docker
[root@k8s-node-1 ~]# systemctl restart kubelet.service
[root@k8s-node-1 ~]# systemctl restart kube-proxy.service
[root@k8s-node-2 ~]# systemctl enable flanneld.service
[root@k8s-node-2 ~]# systemctl restart flanneld.service
[root@k8s-node-2 ~]# systemctl restart docker
[root@k8s-node-2 ~]# systemctl restart kubelet.service
[root@k8s-node-2 ~]# systemctl restart kube-proxy.service
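Once flanneld is running with the default udp backend, each node gets a subnet lease written to /run/flannel/subnet.env and a flannel0 device, and docker0 should sit inside the same 172.18.x.0/24 subnet. A quick check on either node:
[root@k8s-node-1 ~]# cat /run/flannel/subnet.env
[root@k8s-node-1 ~]# ip -4 addr show flannel0
[root@k8s-node-1 ~]# ip -4 addr show docker0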
20. The default iptables FORWARD policy drops traffic, so change it to ACCEPT. Because a Docker restart resets the policy back to drop, bake the change into the Docker unit file so it is reapplied on every start.
[root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
# Add the following line at the end of the [Service] section:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service 10.0.0.12:/usr/lib/systemd/system/docker.service
[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service 10.0.0.13:/usr/lib/systemd/system/docker.service
21. Restart the Docker service
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl restart docker
[root@k8s-node-2 ~]# systemctl daemon-reload
[root@k8s-node-2 ~]# systemctl restart docker
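On each machine the FORWARD chain should now report ACCEPT as its default policy:
[root@k8s-master ~]# iptables -nL FORWARD | head -1
Chain FORWARD (policy ACCEPT)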
22. Start an alpine:latest container on each of the three machines
[root@k8s-master ~]# docker run -it alpine:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:64:02
inet addr:172.18.100.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe12:6402/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1312 (1.2 KiB) TX bytes:656 (656.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
[root@k8s-node-1 ~]# docker run -it alpine:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:39:02
inet addr:172.18.57.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe12:3902/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1312 (1.2 KiB) TX bytes:656 (656.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
[root@k8s-node-2 ~]# docker run -it alpine:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:4E:02
inet addr:172.18.78.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe12:4e02/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:656 (656.0 B) TX bytes:656 (656.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
23. Ping test between the containers
/ # ping 172.18.57.2
PING 172.18.57.2 (172.18.57.2): 56 data bytes
64 bytes from 172.18.57.2: seq=0 ttl=60 time=2.433 ms
64 bytes from 172.18.57.2: seq=1 ttl=60 time=1.333 ms
64 bytes from 172.18.57.2: seq=2 ttl=60 time=1.337 ms
^C
--- 172.18.57.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.333/1.701/2.433 ms
/ # ping 172.18.78.2
PING 172.18.78.2 (172.18.78.2): 56 data bytes
64 bytes from 172.18.78.2: seq=0 ttl=60 time=21.858 ms
64 bytes from 172.18.78.2: seq=1 ttl=60 time=1.874 ms
^C
--- 172.18.78.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.874/11.866/21.858 ms
24. Configure the master as the image registry
[root@k8s-master ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["10.0.0.11:5000"]
}
[root@k8s-master ~]# scp /etc/docker/daemon.json 10.0.0.12:/etc/docker/daemon.json
[root@k8s-master ~]# scp /etc/docker/daemon.json 10.0.0.13:/etc/docker/daemon.json
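daemon.json must be valid JSON or Docker will refuse to start, so a syntax check before restarting is cheap (assumes python is present, as it is on CentOS 7; any JSON validator works):
[root@k8s-master ~]# python -m json.tool /etc/docker/daemon.json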
25. Restart Docker on all three machines
[root@k8s-master ~]# systemctl restart docker
[root@k8s-node-1 ~]# systemctl restart docker
[root@k8s-node-2 ~]# systemctl restart docker
26. Upload the image tarballs
[root@k8s-master ~]# ls
anaconda-ks.cfg docker_alpine.tar.gz registry.tar.gz
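These tarballs presumably came from docker save; if so, they load straight into the local image store:
[root@k8s-master ~]# docker load -i registry.tar.gz
[root@k8s-master ~]# docker load -i docker_alpine.tar.gz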
27. Start the registry container
[root@k8s-master ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
28. Tag the image on node2
[root@k8s-node-2 ~]# docker tag alpine:latest 10.0.0.11:5000/alpine:latest
29. Push the image
[root@k8s-node-2 ~]# docker push 10.0.0.11:5000/alpine:latest
The push refers to a repository [10.0.0.11:5000/alpine]
1bfeebd65323: Pushed
latest: digest: sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866 size: 528
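To confirm the push, the Registry v2 HTTP API can list the stored repositories (a standard endpoint on any v2 registry):
[root@k8s-node-2 ~]# curl http://10.0.0.11:5000/v2/_catalog
{"repositories":["alpine"]}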