Compare commits
88 Commits
v0.0.5k-lg
...
main
SHA1:
45e4401dcc
dc50059f19
ada657401f
b126e7f9e3
f9f3e8da8e
be4f3b9030
c590edb875
3cd52a230e
8dd65d65eb
aac0f13134
ea60c89bf1
272867304a
b53cd8c848
8ed9ebe6f8
c1fe781ca2
c2c1e8acb7
5176ad216c
e45118eef1
959e056c5f
8d7c5f7cfb
80295cba99
35e816c2eb
b1e4b50982
d65fe53ef8
451c8ba094
88061eb89d
6fbad9d9fa
30c7275ba6
33529f2781
77a7f3c567
a3235af304
17647b17da
78230b7f21
7d90939ea3
7c01d0aa18
ea513e616d
1d7fa35e48
250483501e
4abf4d4950
8ebf476e05
26ae726457
b1bd102d85
6cfe40b998
e48d63a8bc
873b6b6def
3d94e6c050
151c0adf88
745bc05e76
82561d5d0a
df1000e1b5
0824fd9621
3c680769be
8ceaa8791f
5f5aea168c
ef5701c5d1
f74728292b
bfdca163f7
cb1b315819
c086bcdc7f
1134ca261d
b0d81dc69c
331b8b0fb6
4025f996dc
a1ee9c6207
a1442e534d
e78ef5948b
298f105805
d88745e741
fffcb22db8
abb8c15028
73b4560dd9
91d8b57029
37bbbad9dd
84215f502b
2606cd19b0
b27ce2a372
18ce1f65ad
116b84d230
c92a7654d3
02c7f3dffd
5a8558d701
7d6b15844a
2653221559
3100ba51e2
bbe58dbb01
7124d8aaff
0afa2c3596
38602033b3
README.md (16)

@@ -1,6 +1,8 @@
 # gsb2024

-2024-01-19 11h45 ps
+* 2024-05-23 16h07 ps
+* 2024-04-12 8h55 ps
+* 2024-01-19 11h45 ps

 Environnement et playbooks **ansible** pour le projet **GSB 2024**

@@ -11,8 +13,8 @@ Prérequis :
 * VirtualBox
 * git
 * fichier machines virtuelles **ova** :
-  * **debian-bookworm-gsb-2023c.ova**
+  * **debian-bookworm-gsb-2024b.ova**
-  * **debian-bullseye-gsb-2024a.ova**
+  * **debian-bullseye-gsb-2024b.ova**

@@ -49,12 +51,12 @@ Il existe un playbook ansible pour chaque machine à installer, nommé comme la
 ## Installation

 On utilisera les images de machines virtuelle suivantes :
-* **debian-bookworm-gsb-2023c.ova** (2023-12-18)
+* **debian-bookworm-gsb-2024b.ova** (2024-05-23)
-  * Debian Bookworm 12.4 - 2 cartes - 1 Go - Stockage 20 Go
+  * Debian Bookworm 12.5 - 2 cartes - 1 Go - Stockage 20 Go

 et pour **s-fog** :
-* **debian-bullseye-2024a.ova** (2024-01-06)
+* **debian-bullseye-2024b.ova** (2024-04-11)
-  * Debian Bullseye 11.8 - 2 cartes - 1 Go - stockage 20 Go
+  * Debian Bullseye 11.9 - 2 cartes - 1 Go - stockage 20 Go

 Les images **.ova** doivent etre stockées dans le répertoire habituel de téléchargement de l'utilisateur courant.
firewalld.yml (new file, 7)

@@ -0,0 +1,7 @@
+---
+- hosts: localhost
+  connection: local
+  become: yes
+
+  roles:
+    - firewalld
goss.yaml (new file, 25)

@@ -0,0 +1,25 @@
+port:
+  tcp:22:
+    listening: true
+    ip:
+    - 0.0.0.0
+  tcp6:22:
+    listening: true
+    ip:
+    - '::'
+service:
+  sshd:
+    enabled: true
+    running: true
+user:
+  sshd:
+    exists: true
+    uid: 101
+    gid: 65534
+    groups:
+    - nogroup
+    home: /run/sshd
+    shell: /usr/sbin/nologin
+process:
+  sshd:
+    running: true
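Files like goss.yaml above are consumed on the target host with `goss -g goss.yaml validate`. As an illustration of the same check syntax (this entry is hypothetical and not part of the diff), an additional file check could be written like:

```yaml
# Hypothetical extra goss check (not in the repository): verify that the
# sshd binary itself is present on the host.
file:
  /usr/sbin/sshd:
    exists: true
    filetype: file
```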
goss/s-awx.yaml (new file, 6)

@@ -0,0 +1,6 @@
+interface:
+  enp0s8:
+    exists: true
+    addrs:
+    - 172.16.0.22/24
+    mtu: 1500
goss/s-kea1.yaml (173)

@@ -1,90 +1,93 @@
 file:
   /etc/kea/kea-ctrl-agent.conf:
     exists: true
     mode: "0644"
-    size: 2470
     owner: _kea
     group: root
     filetype: file
-    contains: []
+    contents: []
   /etc/kea/kea-dhcp4.conf:
     exists: true
     mode: "0644"
-    size: 11346
     owner: _kea
     group: root
     filetype: file
-    contains: []
+    contents: []
   /tmp/kea4-ctrl-socket:
     exists: true
     mode: "0755"
     size: 0
     owner: _kea
     group: _kea
     filetype: socket
     contains: []
+    contents: null
-  /usr/local/lib/kea:
+  /usr/lib/x86_64-linux-gnu/kea:
     exists: true
     mode: "0755"
-    size: 4096
     owner: root
     group: root
     filetype: directory
-    contains: []
+    contents: []
 package:
   isc-kea-common:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-ctrl-agent:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-dhcp4:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-hooks:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   libmariadb3:
     installed: true
     versions:
     - 1:10.11.4-1~deb12u1
   mariadb-common:
     installed: true
     versions:
     - 1:10.11.4-1~deb12u1
   mysql-common:
     installed: true
     versions:
     - 5.8+1.1.0
+addr:
+  udp://172.16.64.254:67:
+    local-address: 127.0.0.1
+    reachable: true
+    timeout: 500
 port:
   tcp:8000:
     listening: true
     ip:
-    - 172.16.64.20
+    - 172.16.0.20
 service:
   isc-kea-ctrl-agent.service:
     enabled: true
     running: true
   isc-kea-dhcp4-server.service:
     enabled: true
     running: true
 interface:
   enp0s3:
     exists: true
     addrs:
     - 192.168.99.20/24
     mtu: 1500
   enp0s8:
     exists: true
     addrs:
     - 172.16.0.20/24
     mtu: 1500
   enp0s9:
     exists: true
     addrs:
     - 172.16.64.20/24
     mtu: 1500
goss/s-kea2.yaml (173)

@@ -1,90 +1,93 @@
 file:
   /etc/kea/kea-ctrl-agent.conf:
     exists: true
     mode: "0644"
-    size: 2470
     owner: _kea
     group: root
     filetype: file
-    contains: []
+    contents: []
   /etc/kea/kea-dhcp4.conf:
     exists: true
     mode: "0644"
-    size: 11346
     owner: _kea
     group: root
     filetype: file
-    contains: []
+    contents: []
   /tmp/kea4-ctrl-socket:
     exists: true
     mode: "0755"
     size: 0
     owner: _kea
     group: _kea
     filetype: socket
     contains: []
+    contents: null
-  /usr/local/lib/kea:
+  /usr/lib/x86_64-linux-gnu/kea:
     exists: true
     mode: "0755"
-    size: 4096
     owner: root
     group: root
     filetype: directory
-    contains: []
+    contents: []
 package:
   isc-kea-common:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-ctrl-agent:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-dhcp4:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   isc-kea-hooks:
     installed: true
     versions:
     - 2.4.1-isc20231123184533
   libmariadb3:
     installed: true
     versions:
     - 1:10.11.4-1~deb12u1
   mariadb-common:
     installed: true
     versions:
     - 1:10.11.4-1~deb12u1
   mysql-common:
     installed: true
     versions:
     - 5.8+1.1.0
+addr:
+  udp://172.16.64.254:67:
+    local-address: 127.0.0.1
+    reachable: true
+    timeout: 500
 port:
   tcp:8000:
     listening: true
     ip:
-    - 172.16.64.21
+    - 172.16.0.21
 service:
   isc-kea-ctrl-agent.service:
     enabled: true
     running: true
   isc-kea-dhcp4-server.service:
     enabled: true
     running: true
 interface:
   enp0s3:
     exists: true
     addrs:
     - 192.168.99.21/24
     mtu: 1500
   enp0s8:
     exists: true
     addrs:
     - 172.16.0.21/24
     mtu: 1500
   enp0s9:
     exists: true
     addrs:
     - 172.16.64.21/24
     mtu: 1500
@@ -98,10 +98,10 @@ file:
     filetype: file
     contains: []

-addr:
-  tcp://s-nxc.gsb.lan:443:
-    reachable: true
-    timeout: 500
+#addr:
+#  tcp://s-nxc.gsb.lan:443:
+#    reachable: true
+#    timeout: 500

 port:
   tcp:22:
@@ -117,10 +117,10 @@ port:
     listening: true
     ip: []

 #tcp:8081:
 #listening: true
 #ip:
 #- 0.0.0.0

 interface:
   enp0s3:
@@ -11,7 +11,7 @@ GITPRJ=gsb2024
 apt-get update
 apt-get install -y lighttpd git
 STOREREP="/var/www/html/gsbstore"
+SRC="${SRC:-http://depl.sio.lan/gsbstore}"

 GLPIREL=10.0.11
 str="wget -nc -4 https://github.com/glpi-project/glpi/releases/download/${GLPIREL}/glpi-${GLPIREL}.tgz"
@@ -39,7 +39,7 @@ str7="wget -nc -4 https://github.com/goss-org/goss/releases/latest/download/dgos
 str8="wget -nc -4 'https://gestsup.fr/index.php?page=download&channel=stable&version=3.2.30&type=gestsup' -O gestsup_3.2.30.zip"

 #METRICBEAT ET FILEBEAT
-ELKREL=8.11.3
+ELKREL=8.11.4
 str81="wget -nc -4 https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-amd64.deb"
 str82="wget -nc -4 https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-windows-x86_64.zip"
 str83="wget -nc -4 https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${ELKREL}-windows-x86_64.zip"
@@ -50,6 +50,12 @@ str84="wget -nc -4 https://artifacts.elastic.co/downloads/beats/metricbeat/metri

 (cat <<EOT > "${STOREREP}/getall"
 #!/bin/bash
+if [[ -z "${SRC+x}" ]]; then
+  echo "erreur : variable SRC indefinie"
+  echo " SRC : URL serveur deploiement"
+  echo "export SRC=http://depl.sio.adm/gsbstore ; ./$0"
+  exit 1
+fi

 ${str}
 ${str31}
@@ -72,6 +78,7 @@ ${str81}
 ${str82}
 ${str83}
 ${str84}
+wget -nc -4 "${SRC}/zabbix.sql.gz" -O zabbix.sql.gz

 EOT
 )
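The SRC guard added to getall can be exercised on its own. A minimal sketch, reproducing only the guard (the real script also downloads packages; the path here is illustrative):

```shell
# Standalone reproduction of the SRC guard the diff adds to getall.
cat > /tmp/getall-demo <<'EOT'
#!/bin/bash
if [[ -z "${SRC+x}" ]]; then
  echo "erreur : variable SRC indefinie"
  echo " SRC : URL serveur deploiement"
  exit 1
fi
echo "SRC=${SRC}"
EOT
chmod +x /tmp/getall-demo

unset SRC
/tmp/getall-demo || echo "guard fired"             # refuses to run without SRC
SRC=http://depl.sio.adm/gsbstore /tmp/getall-demo  # prints SRC=http://depl.sio.adm/gsbstore
```

The `${SRC+x}` test distinguishes "unset" from "set but empty", which is why the guard survives `set -u` in the surrounding script.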
@@ -1,12 +1,14 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
+

   roles:
     - base
     - goss
     - r-ext
-    - snmp-agent
+    - zabbix-cli
     - ssh-cli
 # - syslog-cli
     - post
@@ -1,6 +1,7 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes

   roles:
     - base
@@ -9,5 +10,5 @@
     - ssh-cli
 # - syslog-cli
     - dhcp
-    - snmp-agent
+    - zabbix-cli
     - post
roles/awx-user-cli/tasks/main.yml (new file, 24)

@@ -0,0 +1,24 @@
+---
+
+- name: Creation user awx
+  ansible.builtin.user:
+    name: awx
+    groups: sudo
+    append: yes
+
+- name: Cration d'un mdp pour user awx
+  user:
+    name: awx
+    password: '$5$1POIEvs/Q.DHI4/6$RT6nl42XkekxTPKA/dktbnCMxL8Rfk8GAK7NxqL9D70'
+
+- name: Get awx key_pub
+  get_url:
+    url: http://s-adm.gsb.adm/gsbstore/id_rsa_awx.pub
+    dest: /tmp
+
+
+- name: Set authorized key taken from file /tmp
+  ansible.posix.authorized_key:
+    user: awx
+    state: present
+    key: "{{ lookup('file', '/tmp/id_rsa_awx.pub') }}"
roles/awx-user/tasks/main.yml (new file, 20)

@@ -0,0 +1,20 @@
+---
+- name: Creation user awx, cle SSH et group sudo
+  ansible.builtin.user:
+    name: awx
+    groups: sudo
+    append: yes
+    shell: /bin/bash
+    generate_ssh_key: yes
+
+#- name: Creation mdp user awx
+# ansible.builtin.user:
+    #name:
+    #user: awx
+# password: '$5$1POIEvs/Q.DHI4/6$RT6nl42XkekxTPKA/dktbnCMxL8Rfk8GAK7NxqL9D70'
+
+- name: Copie cle publique dans gsbstore
+  copy:
+    src: /home/awx/.ssh/id_rsa.pub
+    dest: /var/www/html/gsbstore/id_rsa_awx.pub
+    remote_src: yes
roles/awx/README.md (new file, 26)

@@ -0,0 +1,26 @@
+# Rôle awx
+***
+Rôle awx: Configuration d'un serveur AWX avec k3s.
+
+## Tables des matières
+1. [Que fait le rôle AWX ?]
+2. [Connexion à l'interface WEB du serveur AWX]
+
+**AWX** est l'application développée par **RedHat** permettant de lancer des playbooks **ansible** depuis une interface web évoluée plutôt qu'en ligne de commande. **AWX** utlise kubernetes mise en oeuvre ici avec **k3s**.
+
+## Que fait le rôle AWX ?
+Le rôle **awx** installe et configure un serveur **AWX** avec **k3s** pour cela le role:
+- Installe **k3s** en spécifiant l'adresse IP ainsi que l'interface d'écoute
+- Clone le dépot **Github** **awx-on-k3s**
+- Procéde au déploiement du pod **awx-operator**
+- Génére un certifiacat auto-signé utlisée par le serveur **AWX** en utilisant **OpenSSL**
+- Edite le fichier awx.yaml afin d'y indique le nom d'hote du serveur en accord avec le nom utlisé par les certificats
+- Déploie le serveur **AWX**
+- Test l'accésibilité du serveur **AWX**.
+
+### Connexions à l'interface WEB du serveur AWX ###
+Une fois le role **awx** terminé il est possible de se connecter à l'interface web duserveur depuis un navigateur.
+S'assurer que votre machine puisse résoudre **s-awx.gsb.lan**
+- Se connecter sur : **https://s-awx.gsb.lan**
+- Utlisateur: **admin** / Mot de passe: **Ansible123!**
roles/awx/tasks/main.yml (new file, 79)

@@ -0,0 +1,79 @@
+---
+- name: Installation de k3s ...
+  ansible.builtin.shell: curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -s - --write-kubeconfig-mode 644 --node-ip "{{ awx_ip }}" --flannel-iface "{{ awx_if }}"
+
+- name: Clonage du dépot awx-on-k3s
+  git:
+    repo: https://github.com/kurokobo/awx-on-k3s.git
+    dest: "{{ awx_dir }}"
+    clone: yes
+    force: yes
+
+- name: Git checkout
+  ansible.builtin.shell: "git checkout 2.10.0"
+  args:
+    chdir: "{{ awx_dir }}"
+
+
+- name: Deploiement AWX Operator ...
+  ansible.builtin.shell: "kubectl apply -k operator"
+  args:
+    chdir: "{{ awx_dir }}"
+
+#- name: Git checkout
+#  ansible.builtin.git:
+#    repo: 'https://github.com/kurokobo/awx-on-k3s.git'
+#    dest: "{{ awx_dir }}"
+#    version: release-2.10.0
+
+- name: Generation de certificat auto-signé avec OpenSSL
+  ansible.builtin.shell: 'openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out ./base/tls.crt -keyout ./base/tls.key -subj "/CN={{ awx_host }}/O={{ awx_host }}" -addext "subjectAltName = DNS:{{ awx_host }}"'
+  args:
+    chdir: "{{ awx_dir }}"
+
+- name: Changement de la ligne hostname dans le fichier awx.yaml
+  replace:
+    path: ~/tools/awx-on-k3s/base/awx.yaml
+    regexp: 'awx.example.com'
+    replace: '{{ awx_host }}'
+    backup: yes
+
+- name: creation du repertoire postgres-13
+  ansible.builtin.file:
+    path: /data/postgres-13
+    state: directory
+    mode: '0755'
+
+- name: Creation repertoire projects
+  ansible.builtin.file:
+    path: /data/projects
+    state: directory
+    owner: 1000:0
+
+- name: Deploiement d'AWX ...
+  ansible.builtin.shell: "kubectl apply -k base"
+  args:
+    chdir: "{{ awx_dir }}"
+
+- name: Test d'accésibilité de l'interface web AWX
+  ansible.builtin.uri:
+    url: "https://s-awx.gsb.lan"
+    follow_redirects: none
+    method: GET
+    validate_certs: false
+  register: _result
+  until: _result.status == 200
+  retries: 60 # 90*10 seconds = 15 min
+  delay: 10 # Every 10 seconds
+
+- debug:
+    msg: "L'installation du serveur AWX est terminée."
+
+- debug:
+    msg: "Connectez-vous sur: https://s-awx.gsb.lan"
+
+- debug:
+    msg: "Nom d'utilisateur: admin / mdp: Ansible123!"
@@ -1 +1 @@
-BEATVER: "8.11.5"
+BEATVER: "8.11.4"
@@ -1,17 +1,51 @@
 ---
 - name: Récupération de filebeat
   get_url:
-    url: http://s-adm.gsb.adm/gsbstore/filebeat-${BEATVAR}-amd64.deb
+    url: "http://s-adm.gsb.adm/gsbstore/filebeat-{{ BEATVER }}-amd64.deb"
     dest: /tmp/

 - name: Installation de filebeat
   apt:
-    deb: /tmp/filebeat-${BEATVEAR}-amd64.deb
+    deb: "/tmp/filebeat-{{ BEATVER }}-amd64.deb"

+<<<<<<< HEAD
+- name: Chgt filebeat.yml - localhost:9200 - Elastic
+  replace:
+    path: /etc/filebeat/filebeat.yml
+    regexp: 'localhost:9200'
+    replace: 's-elk.gsb.adm:9200'
+    backup: yes
+
+- name: Chgt filebeat.yml - localhost:5601 - Kibana
+  replace:
+    path: /etc/filebeat/filebeat.yml
+    regexp: 'localhost:5601'
+    replace: 's-elk.gsb.adm:5601'
+    backup: yes
+
+
+- name: Chgt filebeat.yml - user - Kibana
+  replace:
+    path: /etc/filebeat/filebeat.yml
+    regexp: 'user:5601'
+    replace: 's-elk.gsb.adm:5601'
+    backup: yes
+
+#- name: Changement du fichier de conf
+# copy:
+# src: filebeat.yml
+# dest: /etc/filebeat/filebeat.yml
+=======
+- name: sorie pou debug
+  fail:
+    msg: "packet installe"
+
+
 - name: Changement du fichier de conf
   copy:
     src: filebeat.yml
     dest: /etc/filebeat/filebeat.yml
+>>>>>>> d16ccae (maj pour elk-filebeat-cli)

 - name: Configuration de filebeat
   shell: filebeat modules enable system
@@ -1,9 +1,22 @@
-## Principe du rôle elk
+# Le rôle elk
-ELK 8.5.3
+ELK Version 8.5.3

-Ce rôle permet de créer un serveur ELK pour centraliser les logs et de des métriques pour simplifier la gestion du parc informatique GSB.
-Le principe de ce rôle est d'installer docker, les différentes tâches de ce rôle sont de :
+Ce rôle a pour but d'installer un serveur ELK pour centraliser les logs et les métriques pour simplifier la gestion du parc informatique GSB.
+
+Le rôle **elk** installe **docker**, les différentes tâches de ce rôle sont de :
 - Vérifier si ELK est déjà installé,
-- Importation un docker-compose depuis github,
+- clonage du depot **devianthony** depuis github,
-- Changement la configuration pour passer en version 'basic'
+- Changement de la configuration pour passer en version 'basic'
 - Lancement d'ELK avec docker-compose
+
+## Lancement manuel
+- depuis le répertoire **nxc** :
+````shell
+docker compose up setup
+docker compose up -d
+````
@@ -21,7 +21,7 @@
     regexp: 'xpack.license.self_generated.type: trial'
     replace: 'xpack.license.self_generated.type: basic'

-- name: Execution du fichier docker-compose.yml
-  shell: docker compose up -d
-  args:
-    chdir: /root/elk
+# - name: Execution du fichier docker-compose.yml
+#   shell: docker compose pull
+#   args:
+#     chdir: /root/elk
roles/firewalld/README.md (new file, 26)

@@ -0,0 +1,26 @@
+# Rôle awx
+***
+Rôle awx: Configuration d'un serveur AWX avec k3s.
+
+## Tables des matières
+1. [Que fait le rôle AWX ?]
+2. [Connexion à l'interface WEB du serveur AWX]
+
+**AWX** est l'application développée par **RedHat** permettant de lancer des playbooks **ansible** depuis une interface web évoluée plutôt qu'en ligne de commande. **AWX** utlise kubernetes mise en oeuvre ici avec **k3s**.
+
+## Que fait le rôle AWX ?
+Le rôle **awx** installe et configure un serveur **AWX** avec **k3s** pour cela le role:
+- Installe **k3s** en spécifiant l'adresse IP ainsi que l'interface d'écoute
+- Clone le dépot **Github** **awx-on-k3s**
+- Procéde au déploiement du pod **awx-operator**
+- Génére un certifiacat auto-signé utlisée par le serveur **AWX** en utilisant **OpenSSL**
+- Edite le fichier awx.yaml afin d'y indique le nom d'hote du serveur en accord avec le nom utlisé par les certificats
+- Déploie le serveur **AWX**
+- Test l'accésibilité du serveur **AWX**.
+
+### Connexions à l'interface WEB du serveur AWX ###
+Une fois le role **awx** terminé il est possible de se connecter à l'interface web duserveur depuis un navigateur.
+S'assurer que votre machine puisse résoudre **s-awx.gsb.lan**
+- Se connecter sur : **https://s-awx.gsb.lan**
+- Utlisateur: **admin** / Mot de passe: **Ansible123!**
roles/firewalld/tasks/main.yml (new file, 91)

@@ -0,0 +1,91 @@
+---
+- name: Installation de firewalld
+  apt:
+    state: present
+    name:
+      - firewalld
+
+- name: affectation de l'interface enp0s3 a la zone external
+  ansible.posix.firewalld:
+    zone: external
+    interface: enp0s3
+    permanent: true
+    state: enabled
+
+- name: affectation de l'interface enp0s8 a la zone external
+  ansible.posix.firewalld:
+    zone: internal
+    interface: enp0s8
+    permanent: true
+    state: enabled
+
+- name: FirewallD rules pour la zone internal
+  firewalld:
+    zone: internal
+    permanent: yes
+    immediate: yes
+    service: "{{ item }}"
+    state: enabled
+  with_items:
+    - http
+    - https
+    - dns
+    - ssh
+    - rdp
+
+- name: FirewallD rules pour la zone internal
+  firewalld:
+    zone: external
+    permanent: yes
+    immediate: yes
+    service: "{{ item }}"
+    state: enabled
+  with_items:
+    - ssh
+    - rdp
+
+#- ansible.posix.firewalld:
+# zone: internal
+# service: http
+# permanent: true
+# state: enabled
+
+#- ansible.posix.firewalld:
+# zone: internal
+# service: dns
+# permanent: true
+#state: enabled
+
+#- ansible.posix.firewalld:
+# zone: internal
+# service: ssh
+# permanent: true
+# state: enabled
+
+#- ansible.posix.firewalld:
+# zone: internal
+# service: rdp
+#permanent: true
+#state: enabled
+
+
+- ansible.posix.firewalld:
+    zone: internal
+    port: 8080/tcp
+    permanent: true
+    state: enabled
+
+- ansible.posix.firewalld:
+    zone: external
+    port: 3389/tcp
+    permanent: true
+    state: enabled
+
+- ansible.posix.firewalld:
+    port_forward:
+      - port: 3389
+        proto: tcp
+        toaddr: "192.168.99.6"
+        toport: 3389
+    state: enabled
+    immediate: yes
@@ -1,6 +1,76 @@
+Configuration de ferm

 # [Ferm](http://ferm.foo-projects.org/)

-Modifier l'execution d'iptables [plus d'info ici](https://wiki.debian.org/iptables)
+Modifier l'execution d'iptables [plus d'info ici#!/bin/bash
+set -u
+set -e
+# Version Site to Site
+
+AddressAwg=10.0.0.1/32 # Adresse VPN Wireguard cote A
+EndpointA=192.168.0.51 # Adresse extremite A
+PortA=51820 # Port ecoute extremite A
+NetworkA=192.168.1.0/24 # reseau cote A
+NetworkC=192.168.200.0/24 #reseau cote A
+NetworkD=172.16.0.0/24 #reseau cote A
+
+AddressBwg=10.0.0.2/32 # Adresse VPN Wireguard cote B
+EndpointB=192.168.0.52 # Adresse extremite B
+PortB=51820 # Port ecoute extremite B
+NetworkB=172.16.128.0/24 # reseau cote B
+
+umask 077
+wg genkey > endpoint-a.key
+wg pubkey < endpoint-a.key > endpoint-a.pub
+
+wg genkey > endpoint-b.key
+wg pubkey < endpoint-b.key > endpoint-b.pub
+
+
+PKA=$(cat endpoint-a.key)
+pKA=$(cat endpoint-a.pub)
+PKB=$(cat endpoint-b.key)
+pKB=$(cat endpoint-b.pub)
+
+cat <<FINI > wg0-a.conf
+# local settings for Endpoint A
+[Interface]
+PrivateKey = $PKA
+Address = $AddressAwg
+ListenPort = $PortA
+
+# IP forwarding
+PreUp = sysctl -w net.ipv4.ip_forward=1
+
+# remote settings for Endpoint B
+[Peer]
+PublicKey = $pKB
+Endpoint = ${EndpointB}:$PortB
+AllowedIPs = $AddressBwg, $NetworkB
+
+FINI
+
+
+cat <<FINI > wg0-b.conf
+# local settings for Endpoint B
+[Interface]
+PrivateKey = $PKB
+Address = $AddressBwg
+ListenPort = $PortB
+
+# IP forwarding
+PreUp = sysctl -w net.ipv4.ip_forward=1
+
+# remote settings for Endpoint A
+[Peer]
+PublicKey = $pKA
+Endpoint = ${EndpointA}:$PortA
+AllowedIPs = $AddressAwg, $NetworkA, $NetworkC, $NetworkD
+
+FINI
+
+echo "wg0-a.conf et wg0-b.conf sont generes ..."
+echo "copier wg0-b.conf sur la machine b et renommer les fichiers de configuration ..."](https://wiki.debian.org/iptables)
 ```shell
 update-alternatives --set iptables /usr/sbin/iptables-legacy
 ```
16
roles/gotify/README.md
Normal file
@ -0,0 +1,16 @@
# Gotify role
***
Gotify role for Zabbix notifications (among other things).

## What does the gotify role do?

The gotify role installs Gotify as a binary; it is a basic installation without HTTPS.
***
## Credentials

***

Admin
Admin

***
21
roles/kea/README.md
Normal file
@ -0,0 +1,21 @@
# Kea role
***
Kea role: configures two KEA servers in high-availability mode.

## Table of contents
1. [What does the Kea role do?]
2. [Installing and configuring Kea]
3. [Notes]


## What does the Kea role do?
The Kea role configures two Kea servers (s-kea1 and s-kea2) in high-availability mode.
- The **s-kea1** server runs in **primary** mode and delivers the DHCP leases on the n-user network.
- The **s-kea2** server runs in **stand-by** mode; the DHCP service therefore fails over to **s-kea2** if the **s-kea1** server becomes unavailable.

### Installing and configuring Kea

The Kea role installs the **kea dhcp4, hooks, admin** packages. Once the packages are installed, it configures a Kea server so that it hands out IP addresses on the n-user network and runs in high availability.

### Notes
Once the **s-kea** playbook has completed successfully and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service so that the changes made to the network layer by the POST role are taken into account.
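The state of the HA pair can be checked through the control agent. As a sketch (assuming the agent listens on 172.16.0.20:8000 as in the configuration file below; `status-get` is a standard Kea management command), POST this JSON payload to `http://172.16.0.20:8000/`:

```
{ "command": "status-get", "service": [ "dhcp4" ] }
```

The reply includes the server's HA state (role and peer reachability), e.g. with `curl -X POST -H 'Content-Type: application/json' -d '{ "command": "status-get", "service": [ "dhcp4" ] }' http://172.16.0.20:8000/`.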
66
roles/kea/files/kea-ctrl-agent.conf
Normal file
@ -0,0 +1,66 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
    "Control-agent":
    {
        // We need to specify where the agent should listen to incoming HTTP
        // queries.
        "http-host": "172.16.0.20",

        // This specifies the port CA will listen on.
        "http-port": 8000,

        "control-sockets":
        {
            // This is how the Agent can communicate with the DHCPv4 server.
            "dhcp4":
            {
                "comment": "socket to DHCPv4 server",
                "socket-type": "unix",
                "socket-name": "/tmp/kea4-ctrl-socket"
            },

            // Location of the DHCPv6 command channel socket.
            # "dhcp6":
            # {
            #     "socket-type": "unix",
            #     "socket-name": "/tmp/kea6-ctrl-socket"
            # },

            // Location of the D2 command channel socket.
            # "d2":
            # {
            #     "socket-type": "unix",
            #     "socket-name": "/tmp/kea-ddns-ctrl-socket",
            #     "user-context": { "in-use": false }
            # }
        },

        // Similar to other Kea components, CA also uses logging.
        "loggers": [
        {
            "name": "kea-ctrl-agent",
            "output_options": [
                {
                    "output": "stdout",

                    // Several additional parameters are possible in addition
                    // to the typical output. Flush determines whether logger
                    // flushes output to a file. Maxsize determines maximum
                    // filesize before the file is rotated. maxver
                    // specifies the maximum number of rotated files being
                    // kept.
                    "flush": true,
                    "maxsize": 204800,
                    "maxver": 4,
                    // We use pattern to specify custom log message layout
                    "pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
                }
            ],
            "severity": "INFO",
            "debuglevel": 0 // debug level only applies when severity is set to DEBUG.
        }
        ]
    }
}
12
roles/kea/handlers/main.yml
Normal file
@ -0,0 +1,12 @@
---
- name: Restart isc-kea-dhcp4-server
  ansible.builtin.service:
    name: isc-kea-dhcp4-server.service
    state: restarted
    enabled: yes

- name: Restart isc-kea-ctrl-agent
  ansible.builtin.service:
    name: isc-kea-ctrl-agent.service
    state: restarted
    enabled: yes
43
roles/kea/tasks/main.yml
Normal file
@ -0,0 +1,43 @@
---

- name: Preparation
  ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/kea-2-4/setup.deb.sh' | sudo -E bash

- name: Update apt
  ansible.builtin.apt:
    update_cache: yes

#- name: Installation paquet isc-kea-common
#  ansible.builtin.apt:
#    deb: isc-kea-common
#    state: present

- name: Installation isc-kea-dhcp4
  ansible.builtin.apt:
    name: isc-kea-dhcp4-server
    state: present

- name: Installation isc-kea-ctrl-agent
  ansible.builtin.apt:
    name: isc-kea-ctrl-agent
    state: present

- name: Installation isc-kea-hooks
  ansible.builtin.apt:
    name: isc-kea-hooks
    state: present

- name: Generation du fichier de configuration kea-ctrl-agent
  ansible.builtin.template:
    src: kea-ctrl-agent.conf.j2
    dest: /etc/kea/kea-ctrl-agent.conf
  notify:
    - Restart isc-kea-ctrl-agent

- name: Generation du fichier de configuration kea-dhcp4.conf
  ansible.builtin.template:
    src: kea-dhcp4.conf.j2
    dest: /etc/kea/kea-dhcp4.conf
  notify:
    - Restart isc-kea-dhcp4-server
32
roles/kea/templates/kea-ctrl-agent.conf.j2
Normal file
@ -0,0 +1,32 @@
{
    "Control-agent":
    {
        "http-host": "{{ kea_ctrl_address_this }}",
        "http-port": 8000,
        "control-sockets":
        {
            "dhcp4":
            {
                "socket-type": "unix",
                "socket-name": "/tmp/kea4-ctrl-socket"
            }
        },

        "loggers": [
        {
            "name": "kea-ctrl-agent",
            "output_options": [
                {
                    "output": "stdout",
                    "flush": true,
                    "maxsize": 204800,
                    "maxver": 4,
                    {% raw %} "pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n" {% endraw %}
                }
            ],
            "severity": "INFO",
            "debuglevel": 0
        }
        ]
    }
}
241
roles/kea/templates/kea-dhcp4.conf.j2
Normal file
@ -0,0 +1,241 @@
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
//   to enable High Availability function for the DHCP server. This config
//   file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
//   (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.

{

"Dhcp4": {

    // Add names of your network interfaces to listen on.
    "interfaces-config": {
        // The DHCPv4 server listens on this interface. When changing this to
        // the actual name of your interface, make sure to also update the
        // interface parameter in the subnet definition below.
        "interfaces": ["{{ kea_dhcp_int }}"]
    },

    // Control socket is required for communication between the Control
    // Agent and the DHCP server. High Availability requires Control Agent
    // to be running because lease updates are sent over the RESTful
    // API between the HA peers.
    "control-socket": {
        "socket-type": "unix",
        "socket-name": "/tmp/kea4-ctrl-socket"
    },

    // Use Memfile lease database backend to store leases in a CSV file.
    // Depending on how Kea was compiled, it may also support SQL databases
    // (MySQL and/or PostgreSQL). Those database backends require more
    // parameters, like name, host and possibly user and password.
    // There are dedicated examples for each backend. See Section 7.2.2 "Lease
    // Storage" for details.
    "lease-database": {
        // Memfile is the simplest and easiest backend to use. It's an in-memory
        // database with data being written to a CSV file. It is very similar to
        // what ISC DHCP does.
        "type": "memfile"
    },

    // Let's configure some global parameters. The home network is not very dynamic
    // and there's no shortage of addresses, so no need to recycle aggressively.
    "valid-lifetime": 43200, // leases will be valid for 12h
    "renew-timer": 21600, // clients should renew every 6h
    "rebind-timer": 32400, // clients should start looking for other servers after 9h

    // Kea will clean up its database of expired leases once per hour. However, it
    // will keep the leases in expired state for 2 days. This greatly increases the
    // chances for returning devices to get the same address again. To guarantee that,
    // use host reservation.
    // If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
    // not 0, when the client sends a release message the lease is expired
    // instead of being deleted from lease storage.
    "expired-leases-processing": {
        "reclaim-timer-wait-time": 3600,
        "hold-reclaimed-time": 172800,
        "max-reclaim-leases": 0,
        "max-reclaim-time": 0
    },

    // HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
    // libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
    // The latter implements high availability feature for Kea. Note the library name
    // should be the same, but the path is OS specific.
    "hooks-libraries": [
        // The lease_cmds library must be loaded because HA makes use of it to
        // deliver lease updates to the server as well as synchronize the
        // lease database after failure.
        {
            "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so"
        },

        {
            "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so"
        },

        {
            // The HA hook library should be loaded.
            "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_ha.so",
            "parameters": {
                // Each server should have the same HA configuration, except for the
                // "this-server-name" parameter.
                "high-availability": [ {
                    // This parameter points to this server instance. The respective
                    // HA peers must have this parameter set to their own names.
                    "this-server-name": "{{ kea_this_server }}",
                    // The HA mode is set to hot-standby. In this mode, the active server handles
                    // all the traffic. The standby takes over if the primary becomes unavailable.
                    "mode": "hot-standby",
                    // Heartbeat is to be sent every 10 seconds if no other control
                    // commands are transmitted.
                    "heartbeat-delay": 10000,
                    // Maximum time for partner's response to a heartbeat, after which
                    // failure detection is started. This is specified in milliseconds.
                    // If we don't hear from the partner in 60 seconds, it's time to
                    // start worrying.
                    "max-response-delay": 30000,
                    // The following parameters control how the server detects the
                    // partner's failure. The ACK delay sets the threshold for the
                    // 'secs' field of the received discovers. This is specified in
                    // milliseconds.
                    "max-ack-delay": 5000,
                    // This specifies the number of clients which send messages to
                    // the partner but appear to not receive any response.
                    "max-unacked-clients": 0,
                    // This specifies the maximum timeout (in milliseconds) for the server
                    // to complete sync. If you have a large deployment (high tens or
                    // hundreds of thousands of clients), you may need to increase it
                    // further. The default value is 60000ms (60 seconds).
                    "sync-timeout": 60000,
                    "peers": [
                        // This is the configuration of this server instance.
                        {
                            "name": "{{ kea_srv1 }}",
                            // This specifies the URL of this server instance. The
                            // Control Agent must run along with this DHCPv4 server
                            // instance and the "http-host" and "http-port" must be
                            // set to the corresponding values.
                            "url": "http://{{ kea_ctrl_address1 }}:8000/",
                            // This server is primary. The other one must be
                            // secondary.
                            "role": "primary"
                        },
                        // This is the configuration of the secondary server.
                        {
                            "name": "{{ kea_srv2 }}",
                            // Specifies the URL on which the partner's control
                            // channel can be reached. The Control Agent is required
                            // to run on the partner's machine with "http-host" and
                            // "http-port" values set to the corresponding values.
                            "url": "http://{{ kea_ctrl_address2 }}:8000/",
                            // The other server is secondary. This one must be
                            // primary.
                            "role": "standby"
                        }
                    ]
                } ]
            }
        }
    ],

    // This example contains a single subnet declaration.
    "subnet4": [
        {
            // Subnet prefix.
            "subnet": "172.16.64.0/24",

            // There are no relays in this network, so we need to tell Kea that this subnet
            // is reachable directly via the specified interface.
            "interface": "enp0s9",

            // Specify a dynamic address pool.
            "pools": [
                {
                    "pool": "172.16.64.100-172.16.64.150"
                }
            ],

            // These are options that are subnet specific. In most cases, you need to define at
            // least routers option, as without this option your clients will not be able to reach
            // their default gateway and will not have Internet connectivity. If you have many
            // subnets and they share the same options (e.g. DNS servers typically is the same
            // everywhere), you may define options at the global scope, so you don't repeat them
            // for every network.
            "option-data": [
                {
                    // For each IPv4 subnet you typically need to specify at least one router.
                    "name": "routers",
                    "data": "172.16.64.254"
                },
                {
                    // Using cloudflare or Quad9 is a reasonable option. Change this
                    // to your own DNS servers if you have them. Another popular
                    // choice is 8.8.8.8, owned by Google. Using third party DNS
                    // service raises some privacy concerns.
                    "name": "domain-name-servers",
                    "data": "172.16.0.1, 172.16.0.4"
                },

                {
                    "name": "domain-name",
                    "data": "gsb.lan"
                },

                {
                    "name": "domain-search",
                    "data": "gsb.lan"
                }
            ],

            // Some devices should get a static address. Since the .100 - .199 range is dynamic,
            // let's use the lower address space for this. There are many ways how reservation
            // can be defined, but using MAC address (hw-address) is by far the most popular one.
            // You can use client-id, duid and even custom defined flex-id that may use whatever
            // parts of the packet you want to use as identifiers. Also, there are many more things
            // you can specify in addition to just an IP address: extra options, next-server, hostname,
            // assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
            // The reservations are subnet specific.
            #"reservations": [
            #    {
            #        "hw-address": "1a:1b:1c:1d:1e:1f",
            #        "ip-address": "192.168.1.10"
            #    },
            #    {
            #        "client-id": "01:11:22:33:44:55:66",
            #        "ip-address": "192.168.1.11"
            #    }
            #]
        }
    ],
    // log file
    "loggers": [
        {
            // This section affects kea-dhcp4, which is the base logger for DHCPv4 component. It tells
            // DHCPv4 server to write all log messages (on severity INFO or higher) to a file. The file
            // will be rotated once it grows to 2MB and up to 4 files will be kept. The debuglevel
            // (range 0 to 99) is used only when logging on DEBUG level.
            "name": "kea-dhcp4",
            "output_options": [
                {
                    "output": "stdout",
                    "maxsize": 2048000,
                    "maxver": 4
                }
            ],
            "severity": "INFO",
            "debuglevel": 0
        }
    ]
}
}
10
roles/lb-bd/README.md
Normal file
@ -0,0 +1,10 @@
# Role lb-bd
***
lb-bd role: sets up the database for the WordPress server.

## Table of contents
1. What does the lb-bd role do?


## What does the lb-bd role do?
This role installs the `mariadb-server` package, then creates and configures the database named **wordpressdb**, opening port 3306 and creating the MySQL user named **wordpressuser** with the password **wordpresspasswd**.
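The database setup the role performs can be sketched as the following MariaDB statements (names taken from this README; the `'%'` host grant is an assumption — the role may restrict connections to the web servers' subnet):

```sql
-- hypothetical sketch of what the lb-bd role configures
CREATE DATABASE wordpressdb;
CREATE USER 'wordpressuser'@'%' IDENTIFIED BY 'wordpresspasswd';
GRANT ALL PRIVILEGES ON wordpressdb.* TO 'wordpressuser'@'%';
FLUSH PRIVILEGES;
```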
22
roles/lb-front-ssl/README.md
Normal file
@ -0,0 +1,22 @@
# lb-front role
***
lb-front role: load balancing of the WordPress web servers with HAProxy.

## Table of contents
1. What does the lb-front role do?
2. Server installation order.


## What does the lb-front role do?

The lb-front role installs `haproxy` for load balancing and configures the `/etc/haproxy/haproxy.cfg` file.

The configuration uses Round-Robin, an algorithm that balances the number of requests between s-lb-web1 and s-lb-web2.

The website is reachable at <http://s-lb.gsb.adm>.

## Server installation order.
1. The s-lb server with haproxy, which "initializes" the subnets in the DMZ.
2. The s-lb-bd server, which hosts the WordPress database used by the web servers.
3. The s-nas server, which stores the WordPress configuration and shares it with the web servers over NFS. It also uses the database stored on s-lb-bd.
4. The s-web1 and s-web2 servers, which install Apache2 and PHP and serve the WordPress site.
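Round-robin simply cycles through the backend list on each request; a minimal illustrative sketch (not part of the role, just the dispatch idea):

```shell
# Minimal illustration of round-robin dispatch between the two web
# backends named in haproxy.cfg (s-lb-web1 and s-lb-web2).
i=0
next_server() {
  # Even turns go to the first backend, odd turns to the second.
  if [ $((i % 2)) -eq 0 ]; then
    echo "s-lb-web1"
  else
    echo "s-lb-web2"
  fi
  i=$((i + 1))
}

next_server   # prints s-lb-web1
next_server   # prints s-lb-web2
next_server   # prints s-lb-web1 again
```

HAProxy does the same bookkeeping per backend (`balance roundrobin`), additionally skipping servers whose `check` has marked them down.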
55
roles/lb-front-ssl/files/haproxy.cfg
Normal file
@ -0,0 +1,55 @@
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend proxypublic
    bind 192.168.100.10:80
    default_backend fermeweb

backend fermeweb
    balance roundrobin
    option httpclose
    option httpchk HEAD / HTTP/1.0
    server s-lb-web1 192.168.101.1:80 check
    server s-lb-web2 192.168.101.2:80 check


listen stats
    bind *:8080
    stats enable
    stats uri /haproxy
    stats auth admin:admin
3
roles/lb-front-ssl/handlers/main.yml
Normal file
@ -0,0 +1,3 @@
---
- name: restart haproxy
  service: name=haproxy state=restarted
75
roles/lb-front-ssl/tasks/main.yml
Normal file
@ -0,0 +1,75 @@
- name: install haproxy
  apt:
    name: haproxy
    state: present

- name: Creer le repertoire du certificat
  file:
    path: /etc/haproxy/crt
    state: directory
    mode: '0755'

- name: Creer le repertoire de la cle privee
  file:
    path: /etc/haproxy/crt/private
    state: directory
    mode: '0755'

- name: Generer une cle privee avec les valeurs par defaut (4096 bits, RSA)
  openssl_privatekey:
    path: /etc/haproxy/crt/private/haproxy.pem.key
    size: 4096
    type: RSA
    state: present

- name: creer un certificat auto-signé
  openssl_certificate:
    path: /etc/haproxy/crt/private/haproxy.pem
    privatekey_path: /etc/haproxy/crt/private/haproxy.pem.key
    provider: selfsigned
    state: present

- name: s'assurer que le certificat a les bonnes permissions
  file:
    path: /etc/haproxy/crt/private/haproxy.pem
    owner: root
    group: haproxy
    mode: '0640'

- name: parametre global
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    block: |
      global
          log /dev/log local0
          log /dev/log local1 notice
          chroot /var/lib/haproxy
          stats socket /run/haproxy/admin.sock mode 660 level admin
          stats timeout 30s
          user haproxy
          group haproxy
          daemon
          ssl-server-verify none

- name: parametre backend et frontend
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    block: |
      frontend proxypublic
          bind 192.168.100.10:80
          bind 192.168.100.10:443 ssl crt /etc/haproxy/crt/private/haproxy.pem
          http-request redirect scheme https unless { ssl_fc }
          default_backend fermeweb

      backend fermeweb
          balance roundrobin
          option httpclose
          option httpchk HEAD / HTTP/1.0
          server s-lb-web1 192.168.101.1:80 check
          server s-lb-web2 192.168.101.2:80 check

- name: redemarre haproxy
  service:
    name: haproxy
#    state: restarted
    enabled: yes
22
roles/lb-front/README.md
Normal file
@ -0,0 +1,22 @@
# lb-front role
***
lb-front role: load balancing of the WordPress web servers with HAProxy.

## Table of contents
1. What does the lb-front role do?
2. Server installation order.


## What does the lb-front role do?

The lb-front role installs `haproxy` for load balancing and configures the `/etc/haproxy/haproxy.cfg` file.

The configuration uses Round-Robin, an algorithm that balances the number of requests between s-lb-web1 and s-lb-web2.

The website is reachable at <http://s-lb.gsb.adm>.

## Server installation order.
1. The s-lb server with haproxy, which "initializes" the subnets in the DMZ.
2. The s-lb-bd server, which hosts the WordPress database used by the web servers.
3. The s-nas server, which stores the WordPress configuration and shares it with the web servers over NFS. It also uses the database stored on s-lb-bd.
4. The s-web1 and s-web2 servers, which install Apache2 and PHP and serve the WordPress site.
@ -1,23 +0,0 @@
-port:
-  tcp:80:
-    listening: true
-    ip:
-      - 192.168.100.11
-service:
-  haproxy:
-    enabled: true
-    running: true
-  sshd:
-    enabled: true
-    running: true
-interface:
-  enp0s8:
-    exists: true
-    addrs:
-      - 192.168.100.11/24
-    mtu: 1500
-  enp0s9:
-    exists: true
-    addrs:
-      - 192.168.101.254/24
-    mtu: 1500
@ -41,7 +41,7 @@ frontend proxypublic
 backend fermeweb
     balance roundrobin
     option httpclose
-    #option httpchk HEAD / HTTP/1.0
+    option httpchk HEAD / HTTP/1.0
     server s-lb-web1 192.168.101.1:80 check
     server s-lb-web2 192.168.101.2:80 check
@ -14,7 +14,7 @@
 backend fermeweb
     balance roundrobin
     option httpclose
-    #option httpchk HEAD / HTTP/1.0
+    option httpchk HEAD / HTTP/1.0
     server s-lb-web1 192.168.101.1:80 check
     server s-lb-web2 192.168.101.2:80 check
@ -1,3 +1,10 @@
-## NFS share
-This role installs nfs and mounts the /home/wordpress directory of s-nas at /var/www/html/wordpress on the web servers.
+# lb-nfs-client role
+***
+lb-nfs-client role: NFS-server access for the lb-web1 and lb-web2 servers.
+
+## Table of contents
+1. What does the lb-nfs-client role do?
+
+## What does the lb-nfs-client role do?
+This role installs the `nfs-common` package and mounts the /home/wordpress directory of s-nas at /var/www/html/wordpress on the web servers.
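For reference, the mount this role sets up corresponds to an `/etc/fstab` entry of this shape (a sketch — the role may perform the mount through Ansible instead of fstab):

```
s-nas:/home/wordpress  /var/www/html/wordpress  nfs  defaults  0  0
```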
@ -1,10 +1,17 @@
-# Role s-nas-server
-## Installing nfs-server and setting up the /home/wordpress share
+# Role lb-nfs-server
+***
+lb-nfs-server role: shares the WordPress configuration files.
+
+## Table of contents
+1. What does the lb-nfs-server role do?
+
+## What does the lb-nfs-server role do?
 This role:
-* installs **nfs-server**
+* installs the `nfs-server` package
 * copies the **exports** configuration file to export the **/home/wordpress** directory
-* restarts the **nfs-server** service
-* unpacks wordpress
-### Goal
-The **/home/wordpress** directory is exported over **nfs** on the **n-dmz-db** network
+* unpacks WordPress into **/home/wordpress**
+* restarts the `nfs-server` service
+* configures WordPress's database access in the `wp-config.php` file
+
+The **/home/wordpress** directory is exported over NFS on the **n-dmz-db** subnet
@ -1,3 +1,12 @@
|
|||||||
##Téléchargement et configuration de WordPress
|
# Rôle lb-web
|
||||||
|
***
|
||||||
Ce rôle télécharge wordpress depuis s-adm puis configure le fichier wp-config.php pour la situation du gsb.
|
Rôle lb-web pour l'affichage et l'utilisation du site web.
|
||||||
|
|
||||||
|
## Tables des matières
|
||||||
|
1. Que fait le rôle lb-web ?
|
||||||
|
|
||||||
|
|
||||||
|
## Que fait le rôle lb-web ?
|
||||||
|
Ce rôle télécharge les paquets nécessaires au fonctionnement du site web (`apache2`, `php` et `mariadb-client`) qui permetront aux serveurs web d'accerder a la base de données de WordPress.
|
||||||
|
|
||||||
|
Le site web est accessibe à l'adresse http://s-lb.gsb.adm.
|
||||||
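The package installation step described above can be sketched as an Ansible task (a hedged sketch; the role's actual task names and layout may differ):

```yaml
# Hypothetical sketch of the lb-web package installation — not the role's actual task file
- name: install the web stack packages
  ansible.builtin.apt:
    name:
      - apache2
      - php
      - mariadb-client
    state: present
    update_cache: yes
```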
# Installing Nextcloud and the Traefik reverse proxy

## How the Nextcloud installation works

To run Nextcloud and Traefik, docker has to be in place. First, run the **getall** script on **s-adm**. Then, from the **/nxc** directory on **s-nxc**, launch **docker-compose.yaml**. Finally, add LDAP authentication to Nextcloud using the **s-win** AD.

# <p align="center">Installation procedure</p>

***

## 1. Installing docker

See: https://gitea.lyc-lecastel.fr/gsb/gsb2024/src/branch/main/roles/docker

## 2. How the s-nxc playbook works

The playbook creates the **nxc** directory in root's home directory. The "nextcloud.yml" and "traefik.yml" files are copied into it from the role's "files" directory. Finally, the **certs** and **config** directories are created inside nxc.

### 2.1 Copying the files

The playbook copies the files placed in "files" and puts them in the right directories.

### 2.2 Generating the certificate

The playbook creates an **x509** certificate with **mkcert**, a tool for producing self-signed certificates. To do so, it downloads **mkcert** from **s-adm** (use the **getall** script).

To create the certificate, the playbook runs the following commands (from the nxc directory):

```
/usr/local/bin/mkcert -install # installs the local mkcert CA
/usr/local/bin/mkcert -key-file key.pem -cert-file cert.pem "hôte.domaine.local" "*.domaine.local" # creates the certificate for the specified DNS names
```
## 3. Launch

The playbook runs the "docker-compose" files, namely nextcloud.yml and traefik.yml, which start the two **docker** stacks.

ATTENTION: after restarting the VM, run the "nxc-start.sh" script so that …

Once the script has finished, the site is available at: https://s-nxc.gsb.lan

## 4. Adding LDAP authentication

To add LDAP authentication to Nextcloud, from **n-user**:

* Once the Nextcloud installation is finished, click on the profile, then "Applications"
* In your applications, scroll down and enable "LDAP user and group backend"
* Then click on the profile, then "Administration settings", and under "Administration" click "LDAP/AD integration"
* On the LDAP/AD integration page:
* In Host, enter:
> **ldap://s-win.gsb.lan**
* Click "Detect port" (port 389 should appear)
* In User DN, enter:
> **CN=nextcloud,CN=Users,DC=gsb,DC=lan**
* Password:
> **Azerty1+**
* And in "One base DN per line":
> **DC=gsb,DC=lan**
* Click "Detect base DN" (it should normally appear automatically)
* Once the configuration is done, click "Continue", then click continue three more times
* When you reach "Groups", you can log out of the Admin account and log in with an account from the AD.

## Contributors

- LG
- CH
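The nextcloud.yml and traefik.yml compose files themselves are not shown in this diff; a minimal hedged sketch of the kind of service definition they contain (the image, volume path, and Traefik label values below are assumptions, not the repo's actual files):

```yaml
# Hypothetical fragment of a nextcloud.yml compose file — not the repo's actual file
services:
  app:
    image: nextcloud
    restart: always
    volumes:
      - ./nextcloud:/var/www/html
    labels:
      # Traefik discovers the container through labels like these
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`s-nxc.gsb.lan`)"
      - "traefik.http.routers.nextcloud.tls=true"
```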
**roles/nxc-traefik/files/save/README.md** (new file, 19 lines)
This Bash script automates the backup of the NextCloud server, which runs in a Docker environment.

## 1. Enabling maintenance mode:

- The first Docker command puts the NextCloud server into maintenance mode. This precaution guarantees that no changes are made during the backup, keeping the data consistent.

## 2. Copying the backup files:

- The `cd /root/nxc` command changes the working directory to `/root/nxc`.
- Then `rsync -Aavx nextcloud/ nextcloud-dirbkp/` recursively copies the files from `nextcloud/` to `nextcloud-dirbkp/`. This creates a local copy of the NextCloud files for backup purposes.

## 3. Backing up the MySQL/MariaDB database:

- The next line uses `docker compose exec` to run `mysqldump` inside the database container. This produces a dump of the NextCloud database, saved to `nextcloud-sqlbkp.bak`.

## 4. Disabling maintenance mode:

- After the backup, another Docker command takes NextCloud out of maintenance mode, resuming normal operation.

## 5. Creating a compressed archive:

- Finally, the last line creates a compressed archive, `nxc.tgz`, bundling the database dump (`nextcloud-sqlbkp.bak`) and the local copy of the NextCloud files (`nextcloud-dirbkp/`).

This script streamlines and automates the NextCloud backup process: enable maintenance mode, copy the local files, dump the database, disable maintenance mode, and build a single compressed archive containing all the backup pieces.
**roles/nxc-traefik/files/save/savenextcloud.sh** (new file, 22 lines)
```bash
#!/bin/bash

# Put the NextCloud server into maintenance mode
docker compose exec -u www-data app php occ maintenance:mode --on

# Move into the backup directory (abort if it is missing)
cd /root/nxc || exit 1

# Local copy of the NextCloud files
rsync -Aavx nextcloud/ nextcloud-dirbkp/

# MySQL/MariaDB database dump
docker compose exec db mysqldump -u nextcloud -pAzerty1+ nextcloud > nextcloud-sqlbkp.bak

# Leave maintenance mode
docker compose exec -u www-data app php occ maintenance:mode --off

# Create an archive
tar cvfz nxc.tgz nextcloud-sqlbkp.bak nextcloud-dirbkp
```
**roles/old/kea-master/files/kea-dhcp4.conf** (new file, 226 lines)
```
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
//   to enable High Availability function for the DHCP server. This config
//   file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
//   (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.

{

"Dhcp4": {

    // Add names of your network interfaces to listen on.
    "interfaces-config": {
        // The DHCPv4 server listens on this interface. When changing this to
        // the actual name of your interface, make sure to also update the
        // interface parameter in the subnet definition below.
        "interfaces": [ "enp0s9" ]
    },

    // Control socket is required for communication between the Control
    // Agent and the DHCP server. High Availability requires Control Agent
    // to be running because lease updates are sent over the RESTful
    // API between the HA peers.
    "control-socket": {
        "socket-type": "unix",
        "socket-name": "/tmp/kea4-ctrl-socket"
    },

    // Use Memfile lease database backend to store leases in a CSV file.
    // Depending on how Kea was compiled, it may also support SQL databases
    // (MySQL and/or PostgreSQL). Those database backends require more
    // parameters, like name, host and possibly user and password.
    // There are dedicated examples for each backend. See Section 7.2.2 "Lease
    // Storage" for details.
    "lease-database": {
        // Memfile is the simplest and easiest backend to use. It's an in-memory
        // database with data being written to a CSV file. It is very similar to
        // what ISC DHCP does.
        "type": "memfile"
    },

    // Let's configure some global parameters. The home network is not very dynamic
    // and there's no shortage of addresses, so no need to recycle aggressively.
    "valid-lifetime": 43200, // leases will be valid for 12h
    "renew-timer": 21600,    // clients should renew every 6h
    "rebind-timer": 32400,   // clients should start looking for other servers after 9h

    // Kea will clean up its database of expired leases once per hour. However, it
    // will keep the leases in expired state for 2 days. This greatly increases the
    // chances for returning devices to get the same address again. To guarantee that,
    // use host reservation.
    // If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
    // not 0, when the client sends a release message the lease is expired
    // instead of being deleted from lease storage.
    "expired-leases-processing": {
        "reclaim-timer-wait-time": 3600,
        "hold-reclaimed-time": 172800,
        "max-reclaim-leases": 0,
        "max-reclaim-time": 0
    },

    // HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
    // libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
    // The latter implements high availability feature for Kea. Note the library name
    // should be the same, but the path is OS specific.
    "hooks-libraries": [
        // The lease_cmds library must be loaded because HA makes use of it to
        // deliver lease updates to the server as well as synchronize the
        // lease database after failure.
        {
            "library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
        },

        {
            // The HA hook library should be loaded.
            "library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
            "parameters": {
                // Each server should have the same HA configuration, except for the
                // "this-server-name" parameter.
                "high-availability": [ {
                    // This parameter points to this server instance. The respective
                    // HA peers must have this parameter set to their own names.
                    "this-server-name": "s-kea1.gsb.lan",
                    // The HA mode is set to hot-standby. In this mode, the active server handles
                    // all the traffic. The standby takes over if the primary becomes unavailable.
                    "mode": "hot-standby",
                    // Heartbeat is to be sent every 10 seconds if no other control
                    // commands are transmitted.
                    "heartbeat-delay": 10000,
                    // Maximum time for partner's response to a heartbeat, after which
                    // failure detection is started. This is specified in milliseconds.
                    // If we don't hear from the partner in 30 seconds, it's time to
                    // start worrying.
                    "max-response-delay": 30000,
                    // The following parameters control how the server detects the
                    // partner's failure. The ACK delay sets the threshold for the
                    // 'secs' field of the received discovers. This is specified in
                    // milliseconds.
                    "max-ack-delay": 5000,
                    // This specifies the number of clients which send messages to
                    // the partner but appear to not receive any response.
                    "max-unacked-clients": 0,
                    // This specifies the maximum timeout (in milliseconds) for the server
                    // to complete sync. If you have a large deployment (high tens or
                    // hundreds of thousands of clients), you may need to increase it
                    // further. The default value is 60000ms (60 seconds).
                    "sync-timeout": 60000,
                    "peers": [
                        // This is the configuration of this server instance.
                        {
                            "name": "s-kea1.gsb.lan",
                            // This specifies the URL of this server instance. The
                            // Control Agent must run along with this DHCPv4 server
                            // instance and the "http-host" and "http-port" must be
                            // set to the corresponding values.
                            "url": "http://172.16.64.20:8000/",
                            // This server is primary. The other one must be
                            // secondary.
                            "role": "primary"
                        },
                        // This is the configuration of the secondary server.
                        {
                            "name": "s-kea2.gsb.lan",
                            // Specifies the URL on which the partner's control
                            // channel can be reached. The Control Agent is required
                            // to run on the partner's machine with "http-host" and
                            // "http-port" values set to the corresponding values.
                            "url": "http://172.16.64.21:8000/",
                            // This server is the standby. The other one must be
                            // primary.
                            "role": "standby"
                        }
                    ]
                } ]
            }
        }
    ],

    // This example contains a single subnet declaration.
    "subnet4": [
        {
            // Subnet prefix.
            "subnet": "172.16.64.0/24",

            // There are no relays in this network, so we need to tell Kea that this subnet
            // is reachable directly via the specified interface.
            "interface": "enp0s9",

            // Specify a dynamic address pool.
            "pools": [
                {
                    "pool": "172.16.64.100-172.16.64.150"
                }
            ],

            // These are options that are subnet specific. In most cases, you need to define at
            // least routers option, as without this option your clients will not be able to reach
            // their default gateway and will not have Internet connectivity. If you have many
            // subnets and they share the same options (e.g. DNS servers typically is the same
            // everywhere), you may define options at the global scope, so you don't repeat them
            // for every network.
            "option-data": [
                {
                    // For each IPv4 subnet you typically need to specify at least one router.
                    "name": "routers",
                    "data": "172.16.64.254"
                },
                {
                    // Using cloudflare or Quad9 is a reasonable option. Change this
                    // to your own DNS servers if you have them. Another popular
                    // choice is 8.8.8.8, owned by Google. Using third party DNS
                    // service raises some privacy concerns.
                    "name": "domain-name-servers",
                    "data": "172.16.0.1"
                }
            ],

            // Some devices should get a static address. Since the .100 - .199 range is dynamic,
            // let's use the lower address space for this. There are many ways how reservation
            // can be defined, but using MAC address (hw-address) is by far the most popular one.
            // You can use client-id, duid and even custom defined flex-id that may use whatever
            // parts of the packet you want to use as identifiers. Also, there are many more things
            // you can specify in addition to just an IP address: extra options, next-server, hostname,
            // assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
            // The reservations are subnet specific.
            #"reservations": [
            #    {
            #        "hw-address": "1a:1b:1c:1d:1e:1f",
            #        "ip-address": "192.168.1.10"
            #    },
            #    {
            #        "client-id": "01:11:22:33:44:55:66",
            #        "ip-address": "192.168.1.11"
            #    }
            #]
        }
    ],
    // logging configuration
    "loggers": [
        {
            // This section affects kea-dhcp4, which is the base logger for the DHCPv4 component.
            // It tells the DHCPv4 server to write all log messages (on severity INFO or higher)
            // to standard output; the maxsize/maxver rotation parameters only take effect when
            // the output is a file. The debuglevel (range 0 to 99) is used only when logging on
            // DEBUG level.
            "name": "kea-dhcp4",
            "output_options": [
                {
                    "output": "stdout",
                    "maxsize": 2048000,
                    "maxver": 4
                }
            ],
            "severity": "INFO",
            "debuglevel": 0
        }
    ]
}
}
```
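Kea's config format is JSON extended with `//`, `#`, and `/* */` comments, so a structural sanity check from a script has to strip the comments first. A minimal sketch of that idea, using Python's standard library on a trimmed-down sample (the `strip_comments` helper is hypothetical, not part of Kea):

```python
import json
import re

# Kea accepts //, #, and /* */ comments, which strict JSON forbids.
# NOTE: this naive stripper would also mangle "//" inside string values
# (e.g. the http:// peer URLs above), so the sample below avoids them.
def strip_comments(text: str) -> str:
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)
    return "\n".join(re.sub(r"(//|#).*", "", line) for line in text.splitlines())

SAMPLE = """
{
    // trimmed-down fragment in the style of the role's kea-dhcp4.conf
    "Dhcp4": {
        "interfaces-config": { "interfaces": [ "enp0s9" ] },
        # reservations are commented out in the real file
        "subnet4": [ {
            "subnet": "172.16.64.0/24",
            "pools": [ { "pool": "172.16.64.100-172.16.64.150" } ]
        } ]
    }
}
"""

conf = json.loads(strip_comments(SAMPLE))
print(conf["Dhcp4"]["subnet4"][0]["subnet"])  # → 172.16.64.0/24
```

On a real deployment, `kea-dhcp4 -t <config-file>` is the authoritative way to validate the file.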
**roles/old/kea-slave/default/main.yml** (new file, 8 lines)
```yaml
# Kea variables
kea_ver: "2.4.1"
kea_dbname: ""
kea_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"
```
**roles/smb-backup/files/backupnxc.sh** (new file, 10 lines)
```bash
#!/bin/bash

# Runs on s-backup: pulls the Nextcloud archive from s-nxc
BACKUP=/home/backup/s-nxc

# Create the destination directory if it does not exist yet (-e tests for existence)
[[ -e "${BACKUP}" ]] || mkdir -p "${BACKUP}"

# Copy the nxc.tgz archive from the s-nxc machine
scp -i ~/.ssh/id_rsa_sbackup root@s-nxc.gsb.adm:/root/nxc/nxc.tgz "${BACKUP}/"
```
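The `[[ -e … ]] || mkdir -p` line above makes the script safe to re-run: the directory is created on the first run and left alone afterwards. A throwaway demo of the same guard (in a temp location, not the real backup path):

```shell
# Demo of the existence guard, bash syntax, using a throwaway temp directory
BACKUP="$(mktemp -d)/s-nxc"
[[ -e "${BACKUP}" ]] || mkdir -p "${BACKUP}"   # first run creates the directory
[[ -e "${BACKUP}" ]] || mkdir -p "${BACKUP}"   # second run changes nothing
echo "created ${BACKUP}"
```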
@ -14,6 +14,14 @@
    group: root
    mode: '0755'

- name: copy the backupnxc script to /usr/local/bin
  copy:
    src: backupnxc.sh
    dest: /usr/local/bin
    owner: root
    group: root
    mode: '0755'

- name: backupsmb crontab (commented out by default)
  cron:
    name: backupsmb
**roles/ssh-backup-key-gen/README.md** (new file, 17 lines)
This playbook generates and distributes an SSH key pair (private and public key).

## 1. Generating the private key:

- This step creates an RSA private key intended for s-backup-related operations.
- The private key is written to the specified path (`/root/id_rsa_sbackup`).
- The `state: present` attribute guarantees the private key is generated only if it does not already exist.

## 2. Copying the public key into gsbstore:

- This step copies the public key associated with the private key generated above (`/root/id_rsa_sbackup.pub`).
- The public key is placed in the specified directory (`/var/www/html/gsbstore`) on the remote machine.
- Its permissions are set with `mode: 0644`, and `remote_src: yes` indicates the source file already lives on the remote machine.

## 3. Copying the private key into gsbstore:

- This step copies the generated private key into the same directory as the public key on the remote machine (`/var/www/html/gsbstore`).
- Its permissions are likewise set with `mode: 0644`, and `remote_src: yes` indicates the source file is on the remote machine.

This playbook automates the creation of an SSH key pair and publishes both keys to a specific location (`/var/www/html/gsbstore`) on a remote machine. These keys can then be used by the backup process for secure, key-based authentication during s-backup operations.
**roles/ssh-backup-key-gen/tasks/main.yml** (new file, 20 lines)
```yaml
---
- name: generate a private key for s-backup
  openssh_keypair:
    path: /root/id_rsa_sbackup
    type: rsa
    state: present

- name: copy the public key into gsbstore
  copy:
    src: /root/id_rsa_sbackup.pub
    dest: /var/www/html/gsbstore
    mode: 0644
    remote_src: yes

- name: copy the private key into gsbstore
  copy:
    src: /root/id_rsa_sbackup
    dest: /var/www/html/gsbstore
    mode: 0644
    remote_src: yes
```
**roles/ssh-backup-key-private/README.md** (new file, 9 lines)
# Fetching the private key

This playbook retrieves the private key created on the s-adm machine:
1. Creating the .ssh directory:
   - It creates the `~/.ssh` directory with strict permissions (0700) for the user.

2. Fetching the private key:
   - It downloads the private key from the specified URL (`http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup`) and stores it in `~/.ssh` under the name `id_rsa_sbackup`.
   - The private key is given strict permissions (0600) to keep it secure.
**roles/ssh-backup-key-private/tasks/main.yml** (new file, 13 lines)
```yaml
---
- name: create the .ssh directory
  file:
    path: ~/.ssh
    state: directory
    mode: 0700

- name: fetch the private key generated by s-adm
  get_url:
    url: http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup
    dest: /root/.ssh/id_rsa_sbackup
    mode: 0600
```
**roles/ssh-backup-key-pub/README.md** (new file, 9 lines)
# Fetching the public key generated by s-adm

This Ansible task uses the `ansible.posix.authorized_key` module to manage authorized SSH keys on a POSIX-compliant system. Specifically, it ensures the specified public key is present in the authorized_keys file of the `root` user.

- `user: root`: the SSH key is attached to the root user.
- `state: present`: the key must be present in the authorized_keys file; it is added if missing.
- `key: http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup.pub`: the URL from which the public key (`id_rsa_sbackup.pub`) is fetched. Ansible downloads the public key from this URL and appends it to root's authorized_keys file.
**roles/ssh-backup-key-pub/tasks/main.yml** (new file, 6 lines)
```yaml
---
- name: fetch the public key generated by s-adm
  ansible.posix.authorized_key:
    user: root
    state: present
    key: http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup.pub
```
@ -6,9 +6,21 @@
    mode: 0700
    state: directory

- name: copy the public key from s-adm
  ansible.posix.authorized_key:
    user: root
    state: present
    key: http://s-adm.gsb.adm/id_rsa.pub

- name: create the gsbadm user
  ansible.builtin.user:
    name: gsbadm
    groups: sudo
    append: yes
    shell: /bin/bash

- name: copy the public key for gsbadm from s-adm
  ansible.posix.authorized_key:
    user: gsbadm
    state: present
    key: http://s-adm.gsb.adm/id_rsa.pub
**roles/stork-agent/README.md** (new file, 21 lines)
# Kea role
***
Kea role: configuration of two KEA servers in high-availability mode.

## Table of contents
1. [What does the Kea role do?]
2. [Installing and configuring Kea]
3. [Remarks]


## What does the Kea role do?
The KEA role configures two Kea servers (s-kea1 and s-kea2) in high-availability mode.
- The **s-kea1** server runs as **primary**; it hands out DHCP leases on the n-user network.
- The **s-kea2** server runs as **stand-by**; the DHCP service therefore fails over to **s-kea2** if **s-kea1** becomes unavailable.

### Installing and configuring Kea

The Kea role installs the **kea dhcp4, hooks, admin** packages. Once they are installed, it configures a Kea server to distribute IP addresses on the n-user network and run in high availability.

### Remarks ###
Once the **s-kea** playbook has completed and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service so the network-layer changes made by the POST role are taken into account.
**roles/stork-agent/handlers/main.yml** (new file, 7 lines)
```yaml
---
- name: Restart isc-stork-agent
  ansible.builtin.service:
    name: isc-stork-agent.service
    state: restarted
    enabled: yes
```
**roles/stork-agent/tasks/main.yml** (new file, 21 lines)
```yaml
---
- name: add the ISC Stork package repository
  ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash

- name: update apt cache
  ansible.builtin.apt:
    update_cache: yes

- name: install isc-stork-agent
  ansible.builtin.apt:
    name: isc-stork-agent
    state: present

- name: generate the agent.env configuration file
  ansible.builtin.template:
    src: agent.env.j2
    dest: /etc/stork/agent.env
  notify:
    - Restart isc-stork-agent
```
**roles/stork-agent/templates/agent.env.j2** (new file, 45 lines)
```
### the IP or hostname to listen on for incoming Stork server connections
STORK_AGENT_HOST={{ stork_host }}

### the TCP port to listen on for incoming Stork server connections
STORK_AGENT_PORT={{ stork_port }}

### listen for commands from the Stork server only, but not for Prometheus requests
# STORK_AGENT_LISTEN_STORK_ONLY=true

### listen for Prometheus requests only, but not for commands from the Stork server
# STORK_AGENT_LISTEN_PROMETHEUS_ONLY=true

### settings for exporting stats to Prometheus
### the IP or hostname on which the agent exports Kea statistics to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_ADDRESS=
### the port on which the agent exports Kea statistics to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT=
### how often the agent collects stats from Kea, in seconds
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_INTERVAL=
### enable or disable collecting per-subnet stats from Kea
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PER_SUBNET_STATS=true
### the IP or hostname on which the agent exports BIND 9 statistics to Prometheus
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_ADDRESS=
### the port on which the agent exports BIND 9 statistics to Prometheus
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT=
### how often the agent collects stats from BIND 9, in seconds
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_INTERVAL=

### Stork Server URL used by the agent to send REST commands to the server during agent registration
STORK_AGENT_SERVER_URL=http://s-backup.gsb.lan:8080/

### skip TLS certificate verification when the Stork Agent connects
### to Kea over TLS and Kea uses self-signed certificates
# STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true


### Logging parameters

### Set logging level. Supported values are: DEBUG, INFO, WARN, ERROR
# STORK_LOG_LEVEL=DEBUG
### disable output colorization
# CLICOLOR=false

### path to the hook directory
# STORK_AGENT_HOOK_DIRECTORY=
```
roles/stork-server/README.md (new file, 21 lines)
@@ -0,0 +1,21 @@
# Rôle Kea

***

Kea role: configuration of 2 KEA servers in high-availability mode.

## Table of contents

1. [What does the Kea role do?]
2. [Installation and configuration of kea]
3. [Remarks]

## What does the Kea role do?

The Kea role configures 2 kea servers (s-kea1 and s-kea2) in high-availability mode.

- The **s-kea1** server runs in **primary** mode and delivers the DHCP leases on the n-user network.
- The **s-kea2** server runs in **stand-by** mode; the DHCP service therefore fails over to **s-kea2** if the **s-kea1** server becomes unavailable.

### Installation and configuration of kea

The kea role installs the **kea dhcp4, hooks, admin** packages. Once the packages are installed, it configures a kea server so that it distributes IP addresses on the n-user network and runs in high availability.

### Remarks

Once the **s-kea** playbook has finished successfully and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service so that the changes made to the network layer by the POST role are taken into account.
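For a quick liveness check of the HA pair described above, the Kea control agent can be queried over its REST API. A minimal sketch, assuming the control agent listens on port 8000 (Kea's default; that port is an assumption, it is not set anywhere in this repo) at the s-kea1 address used in the playbooks:

```shell
#!/bin/bash
# Sketch: build the ha-heartbeat request for s-kea1's control agent.
# Assumptions: port 8000 (Kea's default control-agent port); the address
# matches kea_ctrl_address1 from the playbooks.
ctrl_url="http://172.16.0.20:8000/"
payload='{ "command": "ha-heartbeat", "service": [ "dhcp4" ] }'
# A live check would POST it:
#   curl -s -X POST "$ctrl_url" -H 'Content-Type: application/json' -d "$payload"
echo "POST $ctrl_url $payload"
```

The same request against 172.16.0.21 would show the stand-by server's view of the HA state.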
roles/stork-server/default/main.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
#variable kea
kea_ver: "2.4.1"
kea_dbname: ""
kea_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"
roles/stork-server/handlers/main.yml (new file, 6 lines)
@@ -0,0 +1,6 @@
---
- name: Restart isc-stork-server.service
  ansible.builtin.service:
    name: isc-stork-server.service
    state: restarted
    enabled: yes
roles/stork-server/tasks/main.yml (new file, 31 lines)
@@ -0,0 +1,31 @@
---

- name: Preparation
  ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash

- name: Update apt
  ansible.builtin.apt:
    update_cache: yes

#- name: Installation paquet isc-kea-common
#  ansible.builtin.apt:
#    deb: isc-kea-common
#    state: present

- name: Installation isc-stork-server postgresql
  ansible.builtin.apt:
    pkg:
      - isc-stork-server
      - postgresql-15

- name: lancer la commande de création de la base de donnees stork
  ansible.builtin.shell: su postgres --command "stork-tool db-create --db-name {{ stork_db_name }} --db-user {{ stork_db_user }} --db-password {{ stork_db_passwd }}"

- name: Generation du fichier de configuration server.env
  ansible.builtin.template:
    src: server.env.j2
    dest: /etc/stork/server.env
  notify:
    - Restart isc-stork-server.service
roles/stork-server/templates/server.env.j2 (new file, 52 lines)
@@ -0,0 +1,52 @@
### database settings
### the address of a PostgreSQL database
STORK_DATABASE_HOST=localhost
### the port of a PostgreSQL database
STORK_DATABASE_PORT=5432
### the name of a database
STORK_DATABASE_NAME={{ stork_db_name }}
### the username for connecting to the database
STORK_DATABASE_USER_NAME={{ stork_db_user }}
### the SSL mode for connecting to the database
### possible values: disable, require, verify-ca, or verify-full
# STORK_DATABASE_SSLMODE=
### the location of the SSL certificate used by the server to connect to the database
# STORK_DATABASE_SSLCERT=
### the location of the SSL key used by the server to connect to the database
# STORK_DATABASE_SSLKEY=
### the location of the root certificate file used to verify the database server's certificate
# STORK_DATABASE_SSLROOTCERT=
### the password for the username connecting to the database
### empty password is set to avoid prompting a user for database password
STORK_DATABASE_PASSWORD={{ stork_db_passwd }}

### REST API settings
### the IP address on which the server listens
# STORK_REST_HOST=
### the port number on which the server listens
# STORK_REST_PORT=
### the file with a certificate to use for secure connections
# STORK_REST_TLS_CERTIFICATE=
### the file with a private key to use for secure connections
# STORK_REST_TLS_PRIVATE_KEY=
### the certificate authority file used for mutual TLS authentication
# STORK_REST_TLS_CA_CERTIFICATE=
### the directory with static files served in the UI
STORK_REST_STATIC_FILES_DIR=/usr/share/stork/www
### the base URL of the UI - to be used only if the UI is served from a subdirectory
# STORK_REST_BASE_URL=

### enable Prometheus /metrics HTTP endpoint for exporting metrics from
### the server to Prometheus. It is recommended to secure this endpoint
### (e.g. using HTTP proxy).
# STORK_SERVER_ENABLE_METRICS=true

### Logging parameters

### Set logging level. Supported values are: DEBUG, INFO, WARN, ERROR
# STORK_LOG_LEVEL=DEBUG
### disable output colorization
# CLICOLOR=false

### path to the hook directory
# STORK_SERVER_HOOK_DIRECTORY=
@@ -17,7 +17,7 @@ Wait for the installation to finish. Then run the r-vp1-post.sh script
 
 ### 🛠️ Run the r-vp1-post.sh script
 ```bash
-cd /tools/ansible/gsb2023/Scripts
+cd tools/ansible/gsb2024/scripts
 ```
 ```bash
 bash r-vp1-post.sh
@@ -30,7 +30,7 @@ Then run the r-vp2-post.sh script to fetch the configuration file
 
 ### 🛠️ Run the script
 ```bash
-cd /tools/ansible/gsb2023/Scripts
+cd tools/ansible/gsb2024/scripts
 ```
 ```bash
 bash r-vp2-post.sh
@@ -44,4 +44,4 @@ reboot
 Now go to the ferm role directory:
 *gsb2024/roles/fw-ferm*
 
 *Modification : jm*
@@ -11,7 +11,7 @@ Zabbix client role for supervising the various machines in active mode
 It configures the zabbix agents in active mode on the server.
 
 ### Installation and configuration of Zabbix-agent
-The Zabbix-cli role installs Zabbix-agent on the Debian servers. You can change the parameters in the 'defaults' file. This is an active-mode configuration, which means that on the server side you only need to define the hosts with their name, the OS type and, in our case, specify that they are virtual machines on the Zabbix server.
+The Zabbix-cli role installs Zabbix-agent on Debian. The parameters can be changed in the 'defaults' file. This is an active-mode configuration (the agent reports to the zabbix server on its own), which means that on the server side you only need to define the hosts with their name, the OS type and, in our case, specify that they are virtual machines on the Zabbix server. (The hostcreate.sh script registers the machines automatically, provided the API key is valid.)
 ### Windows part!
 Zabbix-agent works no differently than on Linux. However, when installing the agent from the Zabbix website, make sure to pick the classic Zabbix-agent rather than version 2, which requires more resources for little extra supervision.
 
@@ -1,3 +1,3 @@
 SERVER: "127.0.0.1"
 SERVERACTIVE: "192.168.99.8"
-TOKENAPI: "f72473b7e5402a5247773e456f3709dcdd5e41792360108fc3451bbfeed8eafe"
+TOKENAPI: "a44e2a4977d61a869437739cb6086ae42f4b9937fbb96aed24bbad028469a1cf"
@@ -1,9 +1,9 @@
-- name: Intallation paquet zabbix agent
+- name: Installation paquet zabbix agent
   get_url:
     url: "https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb"
     dest: "/tmp"
 
-- name: Intallation paquet zabbix agent suite
+- name: Installation paquet zabbix agent suite
   apt:
     deb: "/tmp/zabbix-release_6.4-1+debian12_all.deb"
     state: present
@@ -12,7 +12,7 @@
   apt:
     update_cache: yes
 
-- name: Intallation Zabbix agent
+- name: Installation Zabbix agent
   apt:
     name: zabbix-agent
     state: present
@@ -28,7 +28,7 @@
     state: restarted
     enabled: yes
 
-- name: mise ne place script hostcreate
+- name: mise en place script hostcreate
   template:
     src: hostcreate.sh.j2
     dest: /tmp/hostcreate.sh
@@ -10,6 +10,12 @@ zabbix-srv role for supervising the various machines
 
 The zabbix-srv role installs `apache2` for the web server, `zabbix-server` for supervision and `zabbix-agent` to manage the clients; **Zabbix** will be our supervision tool.
 
-When the playbook runs, the DB credentials are created with the username "zabbix" and the password "password".
+The database is imported from an existing backup on s-adm, which contains the API keys for gotify notification.
 
+When the playbook runs, the DB credentials are created with the username "zabbix" and the password "password" to connect to the imported DB.
+
 For the Zabbix login, use "Admin" with the password "zabbix", at <http://s-mon/zabbix>.
+
+## Zabbix notification with gotify
+
+This role installs the groundwork for notifications with gotify; the imported database is pre-configured, so there is no need to add the gotify media type. The gotify server runs on s-backup and is reachable at s-backup.gsb.adm:8008.
roles/zabbix-srv/files/gotify.sh (new file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/bash

ALERTSENDTO=$1
ALERTSUBJECT=$2
ALERTMESSAGE=$3

curl -X POST "http://s-backup.gsb.adm:8008/message?token=$ALERTSENDTO" -F "title=$ALERTSUBJECT" -F "message=$ALERTMESSAGE" -F "priority=5" > /dev/null 2>&1
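Zabbix calls this script with three positional arguments: the gotify application token, the subject and the message. A hypothetical manual test (the token value below is made up) showing the POST target the script builds from its first argument:

```shell
#!/bin/bash
# Hypothetical manual test of the alert script; "A1b2C3d4" is a made-up token.
token="A1b2C3d4"
# gotify.sh builds this POST target from $1:
url="http://s-backup.gsb.adm:8008/message?token=$token"
echo "$url"
# A real run would be:
#   bash /usr/lib/zabbix/alertscripts/gotify.sh "$token" "subject" "message"
```

The title, message and priority travel as multipart form fields, which is why the script uses `curl -F` rather than a JSON body.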
@@ -29,15 +29,7 @@
     name: mariadb
     state: started
 
-- name: 6. Créer la base de données
-  community.mysql.mysql_db:
-    name: zabbix
-    encoding: utf8mb4
-    collation: utf8mb4_bin
-    state: present
-    login_unix_socket: /var/run/mysqld/mysqld.sock
-
-- name: 7. Creer un utilisateur et lui attribuer tous les droits
+- name: 6. Creer un utilisateur et lui attribuer tous les droits
   community.mysql.mysql_user:
     name: zabbix
     password: password
@@ -45,50 +37,52 @@
     state: present
     login_unix_socket: /var/run/mysqld/mysqld.sock
 
-- name: 8. Modifier une variable pour importer un schema
+- name: 7. Modifier la variable trust function creators pour importer la base données
   community.mysql.mysql_variables:
     variable: log_bin_trust_function_creators
     value: 1
     mode: global
     login_unix_socket: /var/run/mysqld/mysqld.sock
 
-- name: 9. Importer le schema initial
+- name: 8. Récupérer la base de données
+  get_url:
+    url: http://s-adm.gsb.adm/gsbstore/zabbix.sql.gz
+    dest: /tmp
+
+- name: 9. Importer la base de données
   community.mysql.mysql_db:
     state: import
     name: zabbix
     encoding: utf8mb4
-    login_user: zabbix
-    login_password: password
-    target: /usr/share/zabbix-sql-scripts/mysql/server.sql.gz
+    target: /tmp/zabbix.sql.gz
     login_unix_socket: /var/run/mysqld/mysqld.sock
 
-- name: 10. Modifier la variable pour le schema
+- name: 10. Remettre a zero la variable trust function creators
   community.mysql.mysql_variables:
     variable: log_bin_trust_function_creators
     value: 0
     mode: global
     login_unix_socket: /var/run/mysqld/mysqld.sock
 
-- name: 11. Configurer le mdp de la db
-  replace:
-    path: /etc/zabbix/zabbix_server.conf
-    regexp: '^# DBPassword='
-    replace: 'DBPassword=password'
-
-- name: 12. Lancer le service zabbix-server
+- name: 11. Lancer le service zabbix-server
   service:
     name: zabbix-server
     state: restarted
     enabled: yes
 
-- name: 13. Lancer le service zabbix-agent
+- name: 12. Lancer le service zabbix-agent
   service:
     name: zabbix-agent
     state: restarted
     enabled: yes
 
-- name: 14. Lancer le service apache2
+- name: 13. Lancer le service apache2
   service:
     name: apache2
     state: restarted
     enabled: yes
+
+- name: 14. Gotify
+  copy:
+    src: gotify.sh
+    dest: /usr/lib/zabbix/alertscripts
@@ -1,14 +1,18 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
+
 
   roles:
     - base
     - s-ssh
+    #- zabbix-cli
     - dnsmasq
     - squid
+    - ssh-backup-key-gen
+    # awx-user
     # - local-store
-    - zabbix-cli
     ## - syslog-cli
     - post
     # - goss
s-awx-post.yml (new file, 11 lines)
@@ -0,0 +1,11 @@
---
- hosts: localhost
  connection: local
  vars:
    awx_host: "s-awx.gsb.lan"
    awx_dir: "/root/tools/awx-on-k3s"
    awx_ip: "172.16.0.22"
    awx_if: "enp0s8"

  roles:
    - awx
s-awx.yml (new file, 13 lines)
@@ -0,0 +1,13 @@
---
- hosts: localhost
  connection: local
  vars:
  roles:
    - base
    - goss
    - ssh-cli
    - awx-user-cli
    #- awx
    # - zabbix-cli
    - journald-snd
    - post
s-backup.yml (12 changes)
@@ -1,15 +1,21 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
+  vars:
+    stork_db_user: "stork-server"
+    stork_db_passwd: "Azerty1+"
+    stork_db_name: "stork"
+
   roles:
     - base
     - goss
-    # - proxy3
     - zabbix-cli
     - gotify
-    # - ssh-cli
-    # - syslog-cli
+    - stork-server
+    - ssh-cli
+    #- syslog-cli
     - smb-backup
     - dns-slave
     - post
+    - ssh-backup-key-private
@@ -1,14 +1,18 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
 # include: config.yml
 
   roles:
     - base
-    - zabbix-cli
+    #- zabbix-cli
     - goss
     - dns-master
     - webautoconf
+    # - elk-filebeat-cli
     - journald-snd
     - ssh-cli
+    #- awx-user-cli
     - post
 
@@ -1,7 +1,7 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
 #vars:
   #glpi_version: "10.0.11"
   #glpi_dir: "/var/www/html/glpi"
s-kea1.yml (23 changes)
@@ -1,13 +1,24 @@
 ---
 - hosts: localhost
   connection: local
+  vars:
+    kea_this_server: "s-kea1"
+    kea_srv1: "s-kea1"
+    kea_srv2: "s-kea2"
+    kea_ctrl_address_this: "172.16.0.20"
+    kea_ctrl_address1: "172.16.0.20"
+    kea_ctrl_address2: "172.16.0.21"
+    kea_dhcp_int: "enp0s9"
+    stork_host: "s-kea1.gsb.lan"
+    stork_port: "8081"
+
   roles:
     - base
-    #- goss
-    #- ssh-cli
-    - kea-master
-    #- zabbix-cli
-    #- journald-snd
-    #- snmp-agent
+    - goss
+    - ssh-cli
+    - kea
+    - awx-user-cli
+    #- stork-agent
+    # - zabbix-cli
+    - journald-snd
     - post
s-kea2.yml (23 changes)
@@ -1,13 +1,24 @@
 ---
 - hosts: localhost
   connection: local
+  vars:
+    kea_this_server: "s-kea2"
+    kea_srv1: "s-kea1"
+    kea_srv2: "s-kea2"
+    kea_ctrl_address_this: "172.16.0.21"
+    kea_ctrl_address1: "172.16.0.20"
+    kea_ctrl_address2: "172.16.0.21"
+    kea_dhcp_int: "enp0s9"
+    stork_host: "s-kea2.gsb.lan"
+    stork_port: "8081"
+
   roles:
     - base
-    # - goss
-    # - ssh-cli
-    - kea-slave
-    # - zabbix-cli
-    # - journald-snd
-    # - snmp-agent
+    - goss
+    - ssh-cli
+    - kea
+    - stork-agent
+    - zabbix-cli
+    - journald-snd
+    - snmp-agent
     - post
s-lb.yml (3 changes)
@@ -5,7 +5,8 @@
   roles:
     - base
     - goss
-    - lb-front
+    #- lb-front
+    - lb-front-ssl
     #- zabbix-cli
     - ssh-cli
     - post
@@ -10,3 +10,4 @@
     # - syslog-cli
     - snmp-agent
     - post
+    - ssh-backup-key-pub
@@ -1,7 +1,8 @@
 ---
 - hosts: localhost
   connection: local
+  become: yes
 
   roles:
     - base
     - goss
scripts/mkvm (33 changes)
@@ -1,19 +1,32 @@
 #!/bin/bash
 
-mkvmrelease="v1.3.2"
+mkvmrelease="v1.3.3"
 
-ovarelease="2023c"
-ovafogrelease="2024a"
+ovarelease="2024b"
+ovafogrelease="2024b"
 #ovafile="$HOME/Téléchargements/debian-bullseye-gsb-${ovarelease}.ova"
 ovafile="$HOME/Téléchargements/debian-bookworm-gsb-${ovarelease}.ova"
 ovafilefog="$HOME/Téléchargements/debian-bullseye-gsb-${ovafogrelease}.ova"
 startmode=0
 deletemode=0
 
+declare -A vmMem
+vmMem[r-int]=512
+vmMem[r-ext]=512
+vmMem[s-nas]=512
+vmMem[s-infra]=768
+vmMem[s-backup]=768
+vmMem[s-elk]=3072
+vmMem[s-awx]=4096
+
+declare -A vmCpus
+vmCpus[s-peertube]=2
+vmCpus[s-awx]=2
+
 usage () {
   echo "$0 - version ${mkvmrelease} - Ova version ${ovarelease}"
   echo "$0 : creation VM et parametrage interfaces"
-  echo "usage : $0 [-r] [-s] <s-adm|s-infra|r-int|r-ext|s-proxy|s-mon|s-appli|s-backup|s-itil|s-ncx|s-fog>"
+  echo "usage : $0 [-r] [-s] <s-adm|s-infra|r-int|r-ext|s-proxy|s-mon|s-appli|s-backup|s-itil|s-nxc|s-fog>"
   echo " option -r : efface VM existante avant creation nouvelle"
   echo " option -s : start VM apres creation"
   exit 1
@@ -32,7 +45,15 @@ create_vm () {
   if [[ "${deletemode}" = 1 ]] ; then
     VBoxManage unregistervm --delete "${nom}"
   fi
-  vboxmanage import "${nomova}" --vsys 0 --vmname "${nom}"
+  mem=1024
+  cpus=1
+  if [[ -v vmMem[${nom}] ]]; then
+    mem=${vmMem[${nom}]}
+  fi
+  if [[ -v vmCpus[${nom}] ]]; then
+    cpus=${vmCpus[${nom}]}
+  fi
+  vboxmanage import "${nomova}" --vsys 0 --vmname "${nom}" --memory "${mem}" --cpus "${cpus}"
 }
 
 setif () {
@@ -132,6 +153,8 @@ elif [[ "${vm}" == "r-vp2" ]] ; then
   ./addint.r-vp2
 elif [[ "${vm}" == "s-agence" ]] ; then
   create_if "${vm}" "n-adm" "n-agence"
+elif [[ "${vm}" == "s-awx" ]] ; then
+  create_if "${vm}" "n-adm" "n-infra"
 else
   echo "$0 : vm ${vm} non prevue "
   exit 2
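The memory/CPU selection added to create_vm can be read in isolation: a VM gets 1024 MB and 1 CPU unless the associative arrays override it. A small sketch of the same lookup pattern (`resources_for` is a hypothetical helper, not part of mkvm; the arrays are trimmed to two entries):

```shell
#!/bin/bash
# Sketch of mkvm's per-VM resource lookup (bash 4.3+ for the [[ -v ]] test).
declare -A vmMem=( [s-elk]=3072 [s-awx]=4096 )
declare -A vmCpus=( [s-awx]=2 )

resources_for () {                   # hypothetical helper, not in mkvm itself
  local nom="$1" mem=1024 cpus=1     # defaults used when a VM is not listed
  [[ -v vmMem[${nom}] ]] && mem=${vmMem[${nom}]}
  [[ -v vmCpus[${nom}] ]] && cpus=${vmCpus[${nom}]}
  echo "${mem} ${cpus}"
}

resources_for s-awx   # 4096 2
resources_for r-int   # 1024 1 (falls back to the defaults)
```

Using `[[ -v array[key] ]]` distinguishes "key absent" from "key set to an empty value", which is why the script prefers it over a plain `-n` test.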
@@ -4,19 +4,40 @@
 #mkvm pour toutes les vms
 
 $mkvmrelease="v1.3.1"
-$ovarelease="2023c"
-$ovafogrelease="2024a"
+$ovarelease="2024b"
+$ovafogrelease="2024b"
 $ovafile="$HOME\Downloads\debian-bookworm-gsb-${ovarelease}.ova"
 $ovafilefog="$HOME\Downloads\debian-bullseye-gsb-${ovafogrelease}.ova"
 $vboxmanage="C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"
 $deletemode=0
 
+$vmMem = @{
+    "r-int" = "512"
+    "r-ext" = "512"
+    "s-nas" = "512"
+    "s-infra" = "768"
+    "s-backup" = "768"
+    "s-elk" = "3072"
+    "s-awx" = "4096"
+    "s-peertube" = "4096"
+}
+
+$vmCpus = @{
+    "s-awx" = "2"
+    "s-peertube" = "2"
+}
 #FONCTIONS
 
 function create_vm{ param([string]$nomvm)
-    #Importation depuis l'ova
-    & "$vboxmanage" import "$ovafile" --vsys 0 --vmname "$nomvm"
-    Write-Host "Machine $nomvm importée"
+    if (($vmMem.ContainsKey($nomvm)) -and ($vmCpus.ContainsKey($nomvm))) {
+        & "$vboxmanage" import "$ovafile" --vsys 0 --vmname "$nomvm" --memory $vmMem[$nomvm] --cpus $vmCpus[$nomvm]
+        Write-Host "Machine $nomvm importée"
+    } else {
+        #Importation depuis l'ova
+        & "$vboxmanage" import "$ovafile" --vsys 0 --vmname "$nomvm"
+        Write-Host "Machine $nomvm importée"
+    }
 }
 
 function create_if{ param([string]$nomvm, [string]$nic, [int]$rang, [string]$reseau)
@@ -118,6 +139,22 @@ elseif ($args[0] -eq "s-kea2") {
     create_if $args[0] "int" 3 "n-user"
 }
 
+elseif ($args[0] -eq "s-awx") {
+
+    create_vm $args[0]
+    create_if $args[0] "int" 1 "n-adm"
+    create_if $args[0] "int" 2 "n-infra"
+}
+
+elseif ($args[0] -eq "s-peertube") {
+
+    create_vm $args[0]
+    create_if $args[0] "int" 1 "n-adm"
+    create_if $args[0] "int" 2 "n-infra"
+}
+
 elseif ($args[0] -eq "s-agence") {
 
     create_vm $args[0]
@@ -1,4 +1,4 @@
-!/bin/bash
+#!/bin/bash
 
 #Ancien scipt 2023
 #stoper le fw
@@ -9,11 +9,11 @@
 - Once Windows Server 2019 is installed, start the Windows Server VM.
 - Install windows server. You can follow steps 3 to 12 (link: https://www.infonovice.fr/guide-dinstallation-de-windows-server-2019-avec-une-interface-graphique/)
 - Rename the windows server machine name in the "System Information" settings to **s-win**. Then restart the machine.
-- From the control panel, set the first network card of the windows server to "192.168.99.6" with the gateway "192.168.99.99 and the second to "172.16.0.6", and add the default gateway "172.16.0.254".
-- Shut down your VM and add a bridged adapter in the VM's network settings. Start the VM and install git from the official website; you will probably have to enable a few options in the internet explorer settings, such as "JavaScript" or the download option (source links: https://git-scm.com/download/win and https://support.microsoft.com/fr-fr/topic/procédure-d-activation-de-javascript-dans-windows-88d27b37-6484-7fc0-17df-872f65168279).
+- From the control panel, set the first network card of the windows server to "192.168.99.6" with the gateway "192.168.99.99" and the second to "172.16.0.6", and add the default gateway "172.16.0.254".
+- Shut down your VM and add a bridged adapter in the VM's network settings. Start the VM and install git from the official website; you will probably have to enable a few options in the internet explorer settings, such as "JavaScript" or the "Download" option, and if needed add the school's **gitea** site as a trusted site (source links: https://git-scm.com/download/win and https://support.microsoft.com/fr-fr/topic/procédure-d-activation-de-javascript-dans-windows-88d27b37-6484-7fc0-17df-872f65168279).
 - Install DNS Server and AD DS Services. For help, follow the lab from "Installation du service" to "Installation serveur DNS" (source link: https://sio.lyc-lecastel.fr/doku.php?id=promo_2024:serveur_windows_2019-installation_ad)
 - Create a new forest for the **gsb.lan** domain.
-- Configure the DNS reverse zone and populate it with the desired records (A and PTR for **s-win**, **s-itil**, **r-int** and **s-infra*. You can use this tutorial (https://www.it-connect.fr/dns-sous-windows-server-2022-comment-configurer-une-zone-de-recherche-inversee/).
+- Configure the DNS reverse zone and populate it with the desired records (A and PTR for **s-win**, **s-itil**, **r-int** and **s-infra**. You can use this tutorial (https://www.it-connect.fr/dns-sous-windows-server-2022-comment-configurer-une-zone-de-recherche-inversee/).
 
 ## Creating the shared folders and users
 