
Building a k8s cluster with Vagrant + Kubespray

Environment

  • Ubuntu 20.04
  • Python 3.9: required for Ansible 8.5

Installing Python 3.9 on Ubuntu 20.04

https://codechacha.com/ko/ubuntu-install-python39/
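The separate install is needed because Ubuntu 20.04 ships Python 3.8 by default, while Ansible 8.5 (ansible-core 2.15) requires 3.9+ on the control node. A quick check after installing:

```shell
# Confirm a 3.9+ interpreter is available for Ansible 8.5
python3 --version    # Ubuntu 20.04 default: 3.8.x
python3.9 --version 2>/dev/null || echo "python3.9 not installed yet"
```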

Installing VirtualBox on Ubuntu

wget -O- https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg --dearmor
sudo vim /etc/apt/sources.list.d/virtualbox.list
# add the following line:
# deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian focal contrib
sudo apt-get update
sudo apt-get install virtualbox-6.1

# Secure Boot must be disabled in the BIOS settings.
# Check whether Secure Boot is currently enabled:
mokutil --sb-state

Building VMs with Vagrant

Downloading and installing Vagrant

curl -O https://releases.hashicorp.com/vagrant/2.4.1/vagrant_2.4.1-1_amd64.deb
sudo apt install ./vagrant_2.4.1-1_amd64.deb

Writing the Vagrantfile

The IPs below must be adjusted to match the subnet of VirtualBox's host-only network adapter.
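The current host-only subnet can be listed with VBoxManage (guarded here so the snippet is harmless on machines without VirtualBox); by default VirtualBox uses 192.168.56.0/24, which the addresses below assume:

```shell
# Show host-only interfaces and their IPv4 ranges, if VirtualBox is installed
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage list hostonlyifs   # Name / IPAddress / NetworkMask per interface
else
  echo "VBoxManage not found"
fi
```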

Vagrant.configure("2") do |config|
	# Define VM
	config.vm.define "k8s-node1" do |ubuntu|
		ubuntu.vm.box = "ubuntu/focal64"
		ubuntu.vm.hostname = "k8s-node1"
		ubuntu.vm.network "private_network", ip: "192.168.56.101"
		ubuntu.vm.provider "virtualbox" do |vb|
			vb.name = "k8s-node1"
			vb.cpus = 2
			vb.memory = 3000
		end
	end
	config.vm.define "k8s-node2" do |ubuntu|
		ubuntu.vm.box = "ubuntu/focal64"
		ubuntu.vm.hostname = "k8s-node2"
		ubuntu.vm.network "private_network", ip: "192.168.56.102"
		ubuntu.vm.provider "virtualbox" do |vb|
			vb.name = "k8s-node2"
			vb.cpus = 2
			vb.memory = 3000
		end
	end
	config.vm.define "k8s-node3" do |ubuntu|
		ubuntu.vm.box = "ubuntu/focal64"
		ubuntu.vm.hostname = "k8s-node3"
		ubuntu.vm.network "private_network", ip: "192.168.56.103"
		ubuntu.vm.provider "virtualbox" do |vb|
			vb.name = "k8s-node3"
			vb.cpus = 2
			vb.memory = 3000
		end
	end

	config.vm.provision "shell", inline: <<-SHELL
	  sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
	  sed -i 's/archive.ubuntu.com/mirror.kakao.com/g' /etc/apt/sources.list
	  sed -i 's/security.ubuntu.com/mirror.kakao.com/g' /etc/apt/sources.list
	  systemctl restart ssh
	SHELL
end
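The inline provisioner rewrites sshd_config and swaps the APT mirrors with sed. Its sshd_config substitution can be sanity-checked on a scratch file before bringing anything up; a minimal sketch:

```shell
# Dry-run the provisioner's sshd_config substitution on a throwaway copy
tmp=$(mktemp)
echo "PasswordAuthentication no" > "$tmp"
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' "$tmp"
cat "$tmp"    # -> PasswordAuthentication yes
rm -f "$tmp"
```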

Running Vagrant

vagrant up --provider=virtualbox

Copying SSH keys

The initial password for the vagrant user is vagrant; password authentication was enabled earlier via sshd_config and can be disabled there again once the keys are copied.

ssh-copy-id vagrant@192.168.56.101
ssh-copy-id vagrant@192.168.56.102
ssh-copy-id vagrant@192.168.56.103
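ssh-copy-id assumes the control machine already has a key pair; if not, one can be generated first (key type and path below are the common defaults, adjust as needed):

```shell
# Generate an ed25519 key pair non-interactively if one does not exist yet
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_ed25519" ] || ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519" -q
```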

Building the k8s cluster with Kubespray

Git

https://github.com/kubernetes-sigs/kubespray

git clone https://github.com/kubernetes-sigs/kubespray.git

Installing packages

pip3 install -r requirements.txt
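Kubespray pins specific Ansible versions in its requirements, so installing them into a virtualenv keeps the system Python clean; a sketch (the directory name is illustrative):

```shell
# Optional: install Kubespray's requirements into a virtualenv instead of system Python
python3 -m venv kubespray-venv        # directory name is illustrative
. kubespray-venv/bin/activate
# inside the kubespray checkout, the install step then becomes:
# pip3 install -r requirements.txt
```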

Creating the inventory

cp -rpf inventory/sample/ inventory/mycluster

Editing inventory.ini

[all]
node1 ansible_host=192.168.56.101 ip=192.168.56.101 ansible_ssh_user=vagrant
node2 ansible_host=192.168.56.102 ip=192.168.56.102 ansible_ssh_user=vagrant
node3 ansible_host=192.168.56.103 ip=192.168.56.103 ansible_ssh_user=vagrant
# node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
node1
# node1
# node2
# node3

[etcd]
node1
# node1
# node2
# node3

[kube_node]
node1
node2
node3
# node2
# node3
# node4
# node5
# node6

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

Editing group_vars (optional)

https://kubespray.io/#/docs/vars

For MetalLB: kube_proxy_strict_arp

For auditing: kubernetes_audit
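For instance, to prepare for MetalLB's L2 mode and enable the kube-apiserver audit log, something like the following could be set in inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml (the file layout may vary by Kubespray version):

```yaml
# group_vars snippet (illustrative)
kube_proxy_strict_arp: true   # required for MetalLB's L2 mode
kubernetes_audit: true        # enable kube-apiserver audit logging
```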

Running the playbook

ansible all -m ping -i inventory/mycluster/inventory.ini # connectivity test before running

ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml --become --become-user=root -vvv

Setting up a kubectl alias

for zsh

https://kubernetes.io/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh/

source <(kubectl completion zsh)
echo 'alias k=kubectl' >>~/.zshrc
echo 'complete -o default -F __start_kubectl k' >>~/.zshrc

for bash

https://kubernetes.io/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/

sudo apt-get install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc

Final check

SSH into node1 and verify that kubectl works:

vagrant@node1:~$ mkdir -p $HOME/.kube
vagrant@node1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@node1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@node1:~$ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   7m59s   v1.22.8
node2   Ready    <none>                 6m56s   v1.22.8
node3   Ready    <none>                 6m56s   v1.22.8
vagrant@node1:~$

With Krew, the ctx, ns, and konfig plugins can be used to switch contexts, namespaces, and kubeconfigs.
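A sketch of installing those plugins, assuming Krew itself is already set up (https://krew.sigs.k8s.io); the snippet prints a notice instead of failing when kubectl or Krew is missing:

```shell
# Install the context/namespace/kubeconfig helper plugins via Krew
if command -v kubectl >/dev/null 2>&1 && kubectl krew version >/dev/null 2>&1; then
  kubectl krew install ctx ns konfig
  kubectl ctx    # list / switch contexts
  kubectl ns     # list / switch namespaces
else
  echo "kubectl + krew not available"
fi
```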

Link

https://nayoungs.tistory.com/entry/Kubernetes-Kubespray로-쿠버네티스-설치하기

This post is licensed under CC BY 4.0 by the author.