Compare commits

..

2 Commits

Author SHA1 Message Date
c78e135cb6 modification roles glpi 2024-01-11 11:18:53 +01:00
6902c40779 mise a jour role glpi 2024-01-11 09:44:41 +01:00
91 changed files with 724 additions and 3261 deletions

View File

@ -1,34 +1,34 @@
# gsb2024
2024-01-19 11h45 ps
2024-12-21 ps
Environment and **ansible** playbooks for the **GSB 2024** project
Environment and ansible playbooks for the GSB 2024 project
## Quickstart
Prerequisites:
* a **Linux Debian Bookworm** or **Windows** machine
* a Linux Debian Bookworm or Windows machine
* VirtualBox
* git
* **ova** virtual machine image files:
* **debian-bookworm-gsb-2023c.ova**
* **debian-bullseye-gsb-2024a.ova**
* **debian-buster-gsb-2023a.ova**
## The machines
* **s-adm**: adm router, DHCP + NAT, deployment, squid proxy
* **s-infra**: master DNS, browser autoconfiguration with **wpad**
* **r-int**: routing, DHCP
* **r-ext**: routing, NAT
* **s-proxy**: **squid** proxy
* **s-proxy**: squid
* **s-itil**: GLPI server
* **s-backup**: slave DNS + s-win backup (SMB), Stork and Gotify
* **s-mon**: monitoring with **Nagios4/Zabbix**, notifications and journald
* **s-backup**: slave DNS + s-win backup (SMB)
* **s-mon**: monitoring with **Nagios4**, notifications and syslog
* **s-fog**: workstation deployment with **FOG**
* **s-win**: Windows Server 2019, AD, DNS, DHCP, file sharing
* **s-nxc**: NextCloud with **docker** behind the **traefik** reverse proxy and a self-signed certificate
* **s-elk**: dockerised **ELK** stack
* **s-nxc**: NextCloud with **docker**
* **s-elk**: dockerised ELK stack
* **s-lb**: **HaProxy** load balancer for the Wordpress application (DMZ)
* **r-vp1**: Wireguard VPN router, head-office side
* **r-vp2**: Wireguard VPN router, branch-office side, DHCP
@ -38,8 +38,6 @@ Prérequis :
* **s-lb-web2**: Wordpress server 2 behind the load balancer
* **s-lb-db**: MariaDB server for Wordpress
* **s-nas**: NFS server for the load-balanced Wordpress application
* **s-kea1**: Kea HA DHCP server 1
* **s-kea2**: Kea HA DHCP server 2
## The playbooks
@ -53,8 +51,8 @@ On utilisera les images de machines virtuelle suivantes :
* Debian Bookworm 12.4 - 2 NICs - 1 GB RAM - 20 GB storage
and for **s-fog**:
* **debian-bullseye-2024a.ova** (2024-01-06)
* Debian Bullseye 11.8 - 2 NICs - 1 GB RAM - 20 GB storage
* **debian-buster-2023a.ova** (2023-01-06)
* Debian Buster 10 - 2 NICs - 1 GB RAM - 20 GB storage
The **.ova** images must be stored in the current user's usual download directory.
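For reference, importing one of these images by hand with the standard VirtualBox CLI would look roughly like the sketch below (**mkvm** automates this step; the image file name comes from the list above and the target VM name is only an example):
```shell
# manual equivalent of what mkvm automates (sketch): import the .ova and name the VM
VBoxManage import ~/Downloads/debian-bookworm-gsb-2023c.ova --vsys 0 --vmname s-infra
```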
@ -74,10 +72,6 @@ mkvm -r s-adm
```
### Machine s-adm
The **s-adm** machine is the first machine to install.
* create the **s-adm** virtual machine with **mkvm** as described above.
* start the VM, then open a session
* use the renaming script as follows:
@ -91,49 +85,48 @@ bash chname <nouveau_nom_de_machine>` , puis redémarrer
git clone https://gitea.lyc-lecastel.fr/gsb/gsb2024.git
cd gsb2024/pre
bash inst-depl
cd /root/tools/ansible/gsb2024/pre
DEPL=192.168.99.99 bash gsbboot
cd ../.. ; bash pull-config
cd /var/www/html/gsbstore
bash getall
cd /root/tools/ansible/gsb024/pre
bash gsbboot
cd .. ; bash pull-config
```
- reboot
- the **s-adm** machine should now be operational
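A quick sanity check that **s-adm** really serves the deployment store (addresses and paths taken from the steps above; the exact file list depends on what **getall** downloaded):
```shell
# getall is written into the store by inst-depl, so it should be served over HTTP
curl -sI http://192.168.99.99/gsbstore/getall
# the artifacts fetched by getall should be present locally
ls /var/www/html/gsbstore
```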
### For each machine
#### Step 1 - Machine naming
#### Step 1
- create the machine with **mkvm -r**; the network interfaces are configured by **mkvm** according to the specifications
- open a session on the machine in question
- rename the machine either
* by using the renaming script as follows:
` /root/tools/ansible/gsb2024/scripts/chname <nouveau_nom_de_machine>`
* or (here we rename the machine to **s-infra**) with:
* or with:
```shell
export HOST=s-infra
curl 192.168.99.99/gsbstore/inst1|bash
reboot # on redemarre
NHOST=mavm
sed -i "s/bookworm/${NHOST}/g" /etc/host{s,name}
sudo reboot # on redemarre
```
#### Step 2 - Tool installation, gsb2024 repository and playbook run
#### Step 2
- use the **gsb-start** script: `bash gsb-start`
- or otherwise:
```shell
curl 192.168.99.99/gsbstore/inst2|bash
mkdir -p tools/ansible ; cd tools/ansible
git clone https://gitea.lyc-lecastel.fr/gsb/gsb2024.git
cd gsb2024/pre
DEPL=192.168.99.99 bash gsbboot
cd ../..
bash pull-config
```
- the script fetches the **gsb2024.git** repository
- it then runs the **pull-config** script with the playbook named after the machine
- the machine can then be rebooted
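If the configuration has to be re-applied later, the same playbook can be run by hand; this is essentially what the local mode of **pull-config** does (paths as used above):
```shell
cd /root/tools/ansible/gsb2024
ansible-playbook -i localhost, -c local "$(hostname).yml"
```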
#### Step 3 - Reboot and tests
#### Step 3
- reboot
- **Note**: a machine must have been rebooted to take the new configuration into account, in particular the network layer and addressing.
- depending on the situation, a single playbook may not be enough to fully install a machine. In that case, the second playbook is called **s-machine-post.yml**.
It is to be run from `tools/ansible/gsb2024`:
```shell
ansible-playbook -i localhost, -c local s-machine-post.yml
```
- **Note**: a machine must have been rebooted to take the new configuration into account
## The tests
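A minimal way to run the goss checks shipped with the repository (a sketch: the goss binary is installed by **getall**, and the per-machine goss file path is an assumption based on the files shown in this diff):
```shell
cd /root/tools/ansible/gsb2024
goss -g goss/$(hostname).yaml validate --format tap
```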

View File

@ -1,20 +1,21 @@
file:
/etc/wireguard/wg0.conf:
exists: true
mode: "0600"
mode: "0644"
owner: root
group: root
filetype: file
contains: []
contains:
- AllowedIPs = 10.0.0.2/32, 172.16.128.0/24
package:
wireguard:
installed: true
versions:
- 1.0.20210914-1
- 1.0.20210223-1
wireguard-tools:
installed: true
versions:
- 1.0.20210914-1+b1
- 1.0.20210223-1
service:
wg-quick@wg0:
enabled: true

View File

@ -1,8 +1,7 @@
file:
/etc/wireguard/wg0.conf:
exists: true
mode: "0600"
size: 374
mode: "0644"
owner: root
group: root
filetype: file
@ -11,11 +10,11 @@ package:
wireguard:
installed: true
versions:
- 1.0.20210914-1
- 1.0.20210223-1
wireguard-tools:
installed: true
versions:
- 1.0.20210914-1+b1
- 1.0.20210223-1
service:
isc-dhcp-server:
enabled: true

View File

@ -1,18 +1,6 @@
file:
/var/www/html/gsbstore/getall:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contents: []
package:
dnsmasq:
installed: true
lighttpd:
installed: true
versions:
- 1.4.69-1
squid:
installed: true
addr:
@ -24,18 +12,10 @@ port:
listening: true
ip:
- 0.0.0.0
tcp:80:
listening: true
ip:
- 0.0.0.0
tcp6:53:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
udp:53:
listening: true
ip:
@ -52,9 +32,6 @@ service:
dnsmasq:
enabled: true
running: true
lighttpd:
enabled: true
running: true
squid:
enabled: true
running: true
@ -84,8 +61,6 @@ dns:
process:
dnsmasq:
running: true
lighttpd:
running: true
squid:
running: true
interface:

View File

@ -1,77 +1,28 @@
file:
/tftpboot/default.ipxe:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contains: []
contents: null
package:
apache2:
installed: true
versions:
- 2.4.56-1~deb11u2
isc-dhcp-server:
installed: true
versions:
- 4.4.1-2.3+deb11u2
mariadb-server:
installed: true
versions:
- 1:10.5.21-0+deb11u1
tftpd-hpa:
installed: true
versions:
- 5.2+20150808-1.2
port:
tcp:80:
listening: true
ip:
- 0.0.0.0
tcp:443:
listening: true
ip:
- 0.0.0.0
udp:67:
listening: true
ip:
- 0.0.0.0
udp:69:
listening: true
ip:
- 0.0.0.0
service:
apache2:
enabled: true
running: true
isc-dhcp-server:
enabled: true
running: true
nfs-server:
enabled: true
running: true
tftpd-hpa:
enabled: true
running: true
command:
ping -c 4 192.168.99.99:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 4 google.fr:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
process:
apache2:
running: true
interface:
enp0s9:
exists: true
addrs:
- 172.16.64.16/24
enp0s3:
exists: true
addrs:
- 192.168.99.16/24
interface:
enp0s8:
exists: true
addrs:
- 172.16.0.16/24
interface:
enp0s9:
exists: true
addrs:
- 172.16.64.16/24
command:
ping -c 4 192.168.99.99:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 4 google.fr:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000

View File

@ -1,87 +1,36 @@
file:
/etc/nginx/sites-enabled/default:
exists: false
contents: []
/etc/nginx/sites-enabled/glpi:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contents: []
/var/www/html/glpi:
exists: true
mode: "0755"
owner: www-data
group: www-data
filetype: directory
contents: []
/var/www/html/glpicli:
exists: true
mode: "0775"
owner: www-data
group: www-data
filetype: directory
contents: []
/var/www/html/glpicli/GLPI-Agent-1.7-x64.msi:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contents: []
port:
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp:80:
listening: true
ip:
- 0.0.0.0
tcp:3306:
listening: true
ip:
- 127.0.0.1
tcp:9000:
listening: true
ip:
- 127.0.0.1
tcp:10050:
listening: true
ip:
- 0.0.0.0
/var/www/html/glpi:
exists: true
mode: "0755"
owner: www-data
group: www-data
filetype: directory
/var/www/html/ficlients:
exists: true
mode: "0775"
owner: www-data
group: www-data
filetype: directory
/var/www/html/glpi/plugins:
exists: true
mode: "0777"
filetype: directory
/var/www/html/index.nginx-debian.html:
exists: true
mode: "0775"
owner: www-data
group: www-data
filetype: file
service:
mariadb.service:
enabled: true
running: true
nginx:
enabled: true
running: true
php8.2-fpm.service:
enabled: true
running: true
ssh:
enabled: true
running: true
systemd-journal-upload:
enabled: true
running: true
zabbix-agent:
enabled: true
running: true
http:
http://s-itil.gsb.lan/:
status: 200
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []
username: glpi
password: glpi
http://s-itil.gsb.lan/glpicli:
status: 200
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []
mariadb:
enabled: true
running: true
nginx:
enabled: true
running: true

View File

@ -1,38 +1,21 @@
addr:
tcp://192.168.102.1:80:
reachable: true
timeout: 500
tcp://192.168.102.2:80:
reachable: true
timeout: 500
service:
mariadb:
enabled: true
running: true
mysql:
enabled: true
running: true
user:
mysql:
exists: true
uid: 104
gid: 111
groups:
- mysql
home: /nonexistent
shell: /bin/false
group:
mysql:
exists: true
gid: 111
package:
mysql-server:
installed: true
versions:
- 5.5.54-0+deb8u1
command:
egrep "#bind-address" /etc/mysql/my.cnf:
exit-status: 0
stdout:
- "#bind-address\t\t= 127.0.0.1"
stderr: []
timeout: 10000
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.154/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.102.254/24
mtu: 1500
enp0s3:
exists: true
addrs:
- 192.168.99.13/24
enp0s8:
exists: true
addrs:
- 192.168.102.50/24

View File

@ -1,62 +1,63 @@
package:
apache2:
installed: true
versions:
- 2.4.57-2
nfs-common:
installed: true
versions:
- 1:2.6.2-4
apache2:
installed: true
versions:
- 2.4.10-10+deb8u7
php5:
installed: true
versions:
- 5.6.29+dfsg-0+deb8u1
port:
tcp6:80:
listening: true
ip:
- '::'
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp6:22:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
service:
apache2:
enabled: true
running: true
nfs-common:
enabled: false
running: false
apache2:
enabled: true
running: true
sshd:
enabled: true
running: true
user:
sshd:
exists: true
uid: 105
gid: 65534
groups:
- nogroup
home: /var/run/sshd
shell: /usr/sbin/nologin
command:
egrep 192.168.102.14:/export/www /etc/fstab:
exit-status: 0
stdout:
- 192.168.102.14:/export/www /var/www/html nfs _netdev rw 0 0
stderr: []
timeout: 10000
process:
apache2:
running: true
mount:
/var/www/html:
exists: true
opts:
- rw
- relatime
vfs-opts:
- rw
- vers=4.2
- rsize=131072
- wsize=131072
- namlen=255
- hard
- proto=tcp
- timeo=600
- retrans=2
- sec=sys
- clientaddr=192.168.102.1
- local_lock=none
- addr=192.168.102.253
source: 192.168.102.253:/home/wordpress
filesystem: nfs4
apache2:
running: true
sshd:
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.101/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.101.1/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.102.1/24
mtu: 1500
enp0s3:
exists: true
addrs:
- 192.168.99.11/24
enp0s8:
exists: true
addrs:
- 192.168.101.1/24
enp0s9:
exists: true
addrs:
- 192.168.102.1/24

View File

@ -1,62 +1,63 @@
package:
apache2:
installed: true
versions:
- 2.4.57-2
nfs-common:
installed: true
versions:
- 1:2.6.2-4
apache2:
installed: true
versions:
- 2.4.10-10+deb8u7
php5:
installed: true
versions:
- 5.6.29+dfsg-0+deb8u1
port:
tcp6:80:
listening: true
ip:
- '::'
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp6:22:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
service:
apache2:
enabled: true
running: true
nfs-common:
enabled: false
running: false
apache2:
enabled: true
running: true
sshd:
enabled: true
running: true
user:
sshd:
exists: true
uid: 105
gid: 65534
groups:
- nogroup
home: /var/run/sshd
shell: /usr/sbin/nologin
command:
egrep 192.168.102.14:/export/www /etc/fstab:
exit-status: 0
stdout:
- 192.168.102.14:/export/www /var/www/html nfs _netdev rw 0 0
stderr: []
timeout: 10000
process:
apache2:
running: true
mount:
/var/www/html:
exists: true
opts:
- rw
- relatime
vfs-opts:
- rw
- vers=4.2
- rsize=131072
- wsize=131072
- namlen=255
- hard
- proto=tcp
- timeo=600
- retrans=2
- sec=sys
- clientaddr=192.168.102.2
- local_lock=none
- addr=192.168.102.253
source: 192.168.102.253:/home/wordpress
filesystem: nfs4
apache2:
running: true
sshd:
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.102/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.101.2/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.102.2/24
mtu: 1500
enp0s3:
exists: true
addrs:
- 192.168.99.12/24
enp0s8:
exists: true
addrs:
- 192.168.101.2/24
enp0s9:
exists: true
addrs:
- 192.168.102.2/24

View File

@ -1,55 +1,28 @@
package:
haproxy:
installed: true
versions:
- 2.6.12-1+deb12u1
addr:
tcp://192.168.101.1:80:
reachable: true
timeout: 500
tcp://192.168.101.2:80:
reachable: true
timeout: 500
port:
tcp:80:
listening: true
ip:
- 192.168.100.10
tcp:80:
listening: true
ip:
- 192.168.100.11
service:
haproxy:
enabled: true
running: true
user:
haproxy:
exists: true
uid: 104
gid: 111
groups:
- haproxy
home: /var/lib/haproxy
shell: /usr/sbin/nologin
group:
haproxy:
exists: true
gid: 111
process:
haproxy:
running: true
haproxy:
enabled: true
running: true
sshd:
enabled: true
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.100/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.100.10/24
mtu: 1500
http:
http://192.168.100.10/:
status: 200
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []
enp0s3:
exists: true
addrs:
- 192.168.99.100/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.100.11/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.101.254/24
mtu: 1500

View File

@ -1,63 +1,36 @@
file:
/etc/nagios4/htdigest.users:
exists: true
mode: "0640"
owner: nagios
group: www-data
filetype: file
contains: [nagiosadmin]
package:
apache2:
installed: true
zabbix-server-mysql:
nagios-snmp-plugins:
installed: true
zabbix-frontend-php:
nagios4:
installed: true
zabbix-apache-conf:
snmp:
installed: true
zabbix-sql-scripts:
python3-passlib:
installed: true
zabbix-agent:
installed: true
mariadb-server:
installed: true
python3-pymysql:
installed: true
systemd-journal-remote:
installed: true
file:
/etc/systemd/system/systemd-journal-remote.service:
exists: true
mode: "0777"
filetype: directory
/var/log/journal/remote:
exists: true
mode: "0777"
filetype: directory
port:
tcp:80:
listening: true
ip:
- 0.0.0.0
tcp:3306:
listening: true
ip:
- 127.0.0.1
tcp:10050:
udp:514:
listening: true
ip:
- 0.0.0.0
tcp:10051:
listening: true
ip:
- 0.0.0.0
tcp:19532:
listening: true
ip:
- '*'
service:
apache2:
enabled: true
running: true
zabbix-server:
enabled: true
running: true
zabbix-agent:
enabled: true
running: true
systemd-journal-remote.socket:
nagios4:
enabled: true
running: true
command:
@ -70,9 +43,7 @@ command:
process:
apache2:
running: true
zabbix_server:
running: true
mariadb:
nagios4:
running: true
interface:
enp0s3:
@ -84,7 +55,7 @@ interface:
addrs:
- 172.16.0.8/24
http:
http://localhost/zabbix:
http://localhost/nagios4:
status: 401
allow-insecure: false
no-follow-redirects: false

View File

@ -1,55 +0,0 @@
file:
/home/wordpress:
exists: true
mode: "0755"
owner: www-data
group: www-data
filetype: directory
contents: []
package:
file:
installed: true
versions:
- 1:5.44-3
nfs-common:
installed: true
versions:
- 1:2.6.2-4
nfs-kernel-server:
installed: true
versions:
- 1:2.6.2-4
addr:
tcp://192.168.102.1:80:
reachable: true
timeout: 500
tcp://192.168.102.2:80:
reachable: true
timeout: 500
service:
nfs-common:
enabled: false
running: false
nfs-kernel-server:
enabled: true
running: true
nfs-mountd:
enabled: true
running: true
nfs-server:
enabled: true
running: true
nfs-utils:
enabled: true
running: false
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.153/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.102.253/24
mtu: 1500

83
localhost, Normal file
View File

@ -0,0 +1,83 @@
# Ce fichier viminfo a été généré par Vim 9.0.
# Vous pouvez l'éditer, mais soyez prudent.
# Viminfo version
|1,4
# 'encoding' dans lequel ce fichier a été écrit
*encoding=utf-8
# hlsearch on (H) or off (h):
~h
# Historique ligne de commande (chronologie décroissante) :
:q!
|2,0,1703236388,,"q!"
:x
|2,0,1703236381,,"x"
:x!
|2,0,1703236221,,"x!"
# Historique chaîne de recherche (chronologie décroissante) :
# Historique expression (chronologie décroissante) :
# Historique ligne de saisie (chronologie décroissante) :
# Historique Ligne de débogage (chronologie décroissante) :
# Registres :
""1 LINE 0
connection: local
|3,1,1,1,1,0,1703236374," connection: local"
"2 LINE 0
hosts: localhost
|3,0,2,1,1,0,1703236374," hosts: localhost"
# Marques dans le fichier :
'0 1 2 ~/tools/ansible/gsb2024/s-mon.yml
|4,48,1,2,1703236388,"~/tools/ansible/gsb2024/s-mon.yml"
'1 1 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,49,1,9,1703236339,"~/tools/ansible/gsb2024/s-mon.yml"
'2 9 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,50,9,9,1703236221,"~/tools/ansible/gsb2024/s-mon.yml"
'3 9 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,51,9,9,1703236221,"~/tools/ansible/gsb2024/s-mon.yml"
'4 11 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,52,11,9,1703236221,"~/tools/ansible/gsb2024/s-mon.yml"
'5 11 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,53,11,9,1703236221,"~/tools/ansible/gsb2024/s-mon.yml"
'6 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,54,1,13,1703236013,"~/tools/ansible/gsb2024/s-mon.yml"
'7 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,55,1,13,1703236013,"~/tools/ansible/gsb2024/s-mon.yml"
'8 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,56,1,13,1703236013,"~/tools/ansible/gsb2024/s-mon.yml"
'9 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,57,1,13,1703236013,"~/tools/ansible/gsb2024/s-mon.yml"
# Liste de sauts (le plus récent en premier) :
-' 1 2 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,1,2,1703236388,"~/tools/ansible/gsb2024/s-mon.yml"
-' 1 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,1,9,1703236339,"~/tools/ansible/gsb2024/s-mon.yml"
-' 9 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,9,9,1703236318,"~/tools/ansible/gsb2024/s-mon.yml"
-' 11 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,11,9,1703236318,"~/tools/ansible/gsb2024/s-mon.yml"
-' 11 9 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,11,9,1703236221,"~/tools/ansible/gsb2024/s-mon.yml"
-' 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,1,13,1703236018,"~/tools/ansible/gsb2024/s-mon.yml"
-' 1 13 ~/tools/ansible/gsb2024/s-mon.yml
|4,39,1,13,1703236013,"~/tools/ansible/gsb2024/s-mon.yml"
# Historique des marques dans les fichiers (les plus récentes en premier) :
> ~/tools/ansible/gsb2024/s-mon.yml
* 1703236386 0
" 1 2
^ 9 10
. 2 0
+ 10 0
+ 2 0

77
pre/Vagrantfile-s-adm Normal file
View File

@ -0,0 +1,77 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "debian/buster64"
config.vm.hostname = "s-adm"
config.vm.define "s-adm"
config.vm.provider :virtualbox do |vb|
vb.name = "s-adm"
end
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
config.vm.network "public_network", ip: "192.168.1.91"
config.vm.network "private_network", ip: "192.168.99.99"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Enable provisioning with a shell script. Additional provisioners such as
# Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
apt-get update
apt-get upgrade
apt-get install -y vim wget curl
# apt-get install -y apache2
SHELL
end

0
pre/gsbboot Executable file → Normal file
View File

72
pre/inst-depl Executable file → Normal file
View File

@ -2,51 +2,49 @@
## aa : 2023-01-18 15:25
## ps : 2023-02-01 15:25
## ps : 2023-12-18 15:25
## ps : 2024-01-17 15:25
set -o errexit
set -o pipefail
GITUSR=gitgsb
GITPRJ=gsb2024
apt-get update
apt-get install -y lighttpd git
apt-get install -y apache2 git
STOREREP="/var/www/html/gsbstore"
GLPIREL=10.0.11
str="wget -nc -4 https://github.com/glpi-project/glpi/releases/download/${GLPIREL}/glpi-${GLPIREL}.tgz"
str="wget -nc https://github.com/glpi-project/glpi/releases/download/${GLPIREL}/glpi-${GLPIREL}.tgz"
#GLPI Agent
GLPIAGVER=1.7
str31="wget -nc -4 https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x64.msi"
str31="wget -nc https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x64.msi"
#str32="wget -nc -4 https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x86.msi"
#str32="wget -nc https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x86.msi"
FOGREL=1.5.10
str4="wget -nc -4 https://github.com/FOGProject/fogproject/archive/${FOGREL}.tar.gz -O fogproject-${FOGREL}.tar.gz"
str4="wget -nc https://github.com/FOGProject/fogproject/archive/${FOGREL}.tar.gz -O fogproject-${FOGREL}.tar.gz"
WPREL=6.4.2
#v6.1.1 le 17/01/2023
str5="wget -nc -4 https://fr.wordpress.org/latest-fr_FR.tar.gz -O wordpress-6.4.2-fr_FR.tar.gz"
str5="wget -nc https://fr.wordpress.org/latest-fr_FR.tar.gz -O wordpress-6.4.2-fr_FR.tar.gz"
str6="wget -nc -4 https://github.com/goss-org/goss/releases/latest/download/goss-linux-amd64 -O goss"
str6="curl -L https://github.com/goss-org/goss/releases/latest/download/goss-linux-amd64 -o goss"
str7="wget -nc -4 https://github.com/goss-org/goss/releases/latest/download/dgoss -O dgoss"
str7="curl -L https://github.com/goss-org/goss/releases/latest/download/dgoss -o dgoss"
#GESTSUPREL=3.2.30
#str8="wget -nc -4 'https://gestsup.fr/index.php?page=download&channel=stable&version=${GESTSUPREL}&type=gestsup' -O gestsup_${GESTSUPREL}.zip"
str8="wget -nc -4 'https://gestsup.fr/index.php?page=download&channel=stable&version=3.2.30&type=gestsup' -O gestsup_3.2.30.zip"
#str8="wget -nc 'https://gestsup.fr/index.php?page=download&channel=stable&version=${GESTSUPREL}&type=gestsup' -O gestsup_${GESTSUPREL}.zip"
str8="wget -nc 'https://gestsup.fr/index.php?page=download&channel=stable&version=3.2.30&type=gestsup' -O gestsup_3.2.30.zip"
#METRICBEAT ET FILEBEAT
ELKREL=8.11.3
str81="wget -nc -4 https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-amd64.deb"
str82="wget -nc -4 https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-windows-x86_64.zip"
str83="wget -nc -4 https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${ELKREL}-windows-x86_64.zip"
str84="wget -nc -4 https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${ELKREL}-amd64.deb"
str81="wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-amd64.deb"
str82="wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELKREL}-windows-x86_64.zip"
str83="wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${ELKREL}-windows-x86_64.zip"
str84="wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${ELKREL}-amd64.deb"
[[ -d "${STOREREP}" ]] || mkdir "${STOREREP}"
[[ -d "${STOREREP}" ]]|| mkdir "${STOREREP}"
(cat <<EOT > "${STOREREP}/getall"
#!/bin/bash
@ -60,13 +58,13 @@ ${str7}
chmod +x ./goss ./dgoss
wget -nc -4 https://get.docker.com -O getdocker.sh
curl -L https://get.docker.com -o getdocker.sh
chmod +x ./getdocker.sh
wget -nc -4 https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64 -O mkcert
wget -nc https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64 -O mkcert
chmod +x ./mkcert
#${str8}
${str8}
${str81}
${str82}
@ -77,37 +75,3 @@ EOT
)
cat "${STOREREP}/getall"
cd "${STOREREP}" || exit 2
bash getall
cp goss /usr/local/bin
(cat <<'EOT' > "${STOREREP}/inst1"
#!/bin/bash
if [[ -z "${HOST+x}" ]]; then
echo "erreur : variable HOST indefinie"
echo " HOST : adresse serveur deploiement"
echo "export HOST=s-xyzt ; ./$0"
exit 1
fi
hostname=$(hostname)
echo "${HOST}" > /etc/hostname
hostnamectl set-hostname "${HOST}"
sed -i "s/${hostname}/${HOST}/g" /etc/hosts
echo "vous pouvez redemarrer ..."
EOT
)
(cat <<'EOT' > "${STOREREP}/inst2"
#!/bin/bash
mkdir -p ~/tools/ansible ; cd ~/tools/ansible
git clone https://gitea.lyc-lecastel.fr/gsb/gsb2024.git
cd gsb2024/pre
DEPL=192.168.99.99 bash gsbboot
cd ../.. ; bash pull-config
EOT
)

48
pre/inst-depl.old Normal file
View File

@ -0,0 +1,48 @@
#!/bin/bash
set -o errexit
set -o pipefail
GITUSR=gitgsb
GITPRJ=gsb
apt update && apt upgrade
apt install -y apache2 git
getent passwd "${GITUSR}" >> /dev/null
if [[ $? != 0 ]]; then
echo "creation utilisateur "${GITUSR}" ..."
/sbin/useradd -m -d /home/"${GITUSR}" -s /bin/bash "${GITUSR}"
echo "${GITUSR}:${GITUSR}" | /sbin/chpasswd
else
echo "utilisateur "${GITUSR}" existant..."
fi
su -c "git init --share --bare /home/${GITUSR}/${GITPRJ}.git" "${GITUSR}"
su -c "cd ${GITPRJ}.git/.git/hooks && mv post-update.sample post-update" "${GITUSR}"
[[ -h /var/www/html/"${GITPRJ}".git ]]|| ln -s /home/"${GITUSR}"/"${GITPRJ}".git /var/www/html/"${GITPRJ}".git
[[ -d /var/www/html/gsbstore ]]|| mkdir /var/www/html/gsbstore
(cat <<EOT > /var/www/html/gsbstore/getall
#!/bin/bash
set -o errexit
set -o pipefail
GLPIREL=9.4.5
wget -nc https://github.com/glpi-project/glpi/releases/download/\${GLPIREL}/glpi-\${GLPIREL}.tgz
FIREL=9.4+2.4
wget -nc -O fusioninventory-glpi\${FIREL}.tag.gz https://github.com/fusioninventory/fusioninventory-for-glpi/archive/glpi\${FIREL}.tar.gz
#https://github.com/fusioninventory/fusioninventory-for-glpi/archive/glpi9.4+2.4.tar.g
FIAGREL=2.5.2
wget -nc https://github.com/fusioninventory/fusioninventory-agent/releases/download/\${FIAGREL}/fusioninventory-agent_windows-x64_\${FIAGREL}.exe
wget -nc https://github.com/fusioninventory/fusioninventory-agent/releases/download/\$FIAGREL/fusioninventory-agent_windows-x86_\${FIAGREL}.exe
FOGREL=1.5.7
wget -nc https://github.com/FOGProject/fogproject/archive/\${FOGREL}.tar.gz -O fogproject-\${FOGREL}.tar.gz
wget -nc https://fr.wordpress.org/wordpress-5.3.2-fr_FR.tar.gz
EOT
)
cat /var/www/html/gsbstore/getall

8
pre/pull-config Executable file → Normal file
View File

@ -14,15 +14,15 @@ dir=/root/tools/ansible
cd "${dir}" || exit 1
hostname > hosts
if [[ $# == 1 ]] ; then
opt=$1
fi
if [[ "${opt}" == '-l' ]] ; then
cd "${dir}/${prj}" || exit 2
echo "Execution locale ...."
ansible-playbook -i localhost, -c local "$(hostname).yml"
cd "${dir}/${prj}" || exit 2
ansible-playbook -i localhost, -c local "$(hostname).yml"
else
ansible-pull -i "$(hostname)," -U "${UREP}"
ansible-pull -i "${dir}/hosts" -C main -U "${UREP}"
fi
exit 0

View File

@ -1,11 +1,7 @@
#!/bin/bash
dir=/root/tools/ansible
prj=gsb2024
opt=""
if [ -z ${UREP+x} ]; then
UREP=https://gitea.lyc-lecastel.fr/gsb/gsb2024.git
UREP=https://gitea.lyc-lecastel.fr/gsb/gsb2024.git
fi
dir=/root/tools/ansible
@ -14,15 +10,7 @@ dir=/root/tools/ansible
cd "${dir}" || exit 1
if [[ $# == 1 ]] ; then
opt=$1
fi
if [[ "${opt}" == '-l' ]] ; then
cd "${dir}/${prj}" || exit 2
echo "Execution locale ...."
ansible-playbook -i localhost, -c local "$(hostname).yml"
else
ansible-pull -i "$(hostname)," -U "${UREP}"
fi
hostname > hosts
ansible-pull -i "${dir}/hosts" -C main -U "${UREP}"
exit 0

View File

@ -1,14 +1,8 @@
---
- name: desactive unatentted upgrade
ansible.builtin.service:
name: unattended-upgrades.service
state: stopped
enabled: false
- name: Copie sources.list
copy:
src: sources.list.{{ ansible_distribution_release }}
src: sources.list.{{ ansible_distribution }}
dest: /etc/apt/sources.list
- name: Copie apt.conf pour proxy
@ -81,3 +75,8 @@
- net.ipv6.conf.default.disable_ipv6
- net.ipv6.conf.lo.disable_ipv6
- name: desactive unatentted upgrade
ansible.builtin.service:
name: unattended-upgrades.service
state: stopped
enabled: false

View File

@ -22,9 +22,6 @@
192.168.99.14 s-nas.gsb.adm
192.168.99.15 s-san.gsb.adm
192.168.99.16 s-fog.gsb.adm
192.168.99.20 s-kea1.gsb.adm
192.168.99.21 s-kea2.gsb.adm
192.168.99.22 s-awx.gsb.adm
192.168.99.50 s-lb-bd.gsb.adm
192.168.99.101 s-lb-web1.gsb.adm
192.168.99.102 s-lb-web2.gsb.adm

View File

@ -21,9 +21,6 @@
192.168.99.12 r-int.gsb.adm
192.168.99.13 r-ext.gsb.adm
192.168.99.14 s-nas.gsb.adm
192.168.99.20 s-kea1.gsb.adm
192.168.99.21 s-kea2.gsb.adm
192.168.99.22 s-awx.gsb.adm
192.168.99.50 s-lb-bd.gsb.adm
192.168.99.101 s-lb-web1.gsb.adm
192.168.99.102 s-lb-web2.gsb.adm

View File

@ -120,7 +120,7 @@ subnet 172.16.65.0 netmask 255.255.255.0 {
#DHCP pour le réseau USER
subnet 172.16.64.0 netmask 255.255.255.0 {
range 172.16.64.100 172.16.64.150;
range 172.16.64.20 172.16.64.120;
option domain-name-servers 172.16.0.1 ;
option routers 172.16.64.254;
option broadcast-address 172.16.64.255;

View File

@ -5,7 +5,7 @@
;
$TTL 604800
@ IN SOA s-infra.gsb.lan. root.s-infra.gsb.lan. (
2024011900 ; Serial
2023051000 ; Serial
7200 ; Refresh
86400 ; Retry
8419200 ; Expire
@ -16,11 +16,9 @@ $TTL 604800
@ IN A 127.0.0.1
@ IN AAAA ::1
s-infra IN A 172.16.0.1
s-backup IN A 172.16.0.4
s-proxy IN A 172.16.0.2
s-appli IN A 172.16.0.3
s-backup IN A 172.16.0.4
s-stork IN A 172.16.0.4
s-gotify IN A 172.16.0.4
s-win IN A 172.16.0.6
s-mess IN A 172.16.0.7
s-nxc IN A 172.16.0.7
@ -29,9 +27,6 @@ s-mon IN A 172.16.0.8
s-itil IN A 172.16.0.9
s-elk IN A 172.16.0.11
s-gestsup IN A 172.16.0.17
s-kea1 IN A 172.16.0.20
s-kea2 IN A 172.16.0.21
s-awx IN A 172.16.0.22
r-int IN A 172.16.0.254
r-int-lnk IN A 192.168.200.254
r-ext IN A 192.168.200.253

View File

@ -5,7 +5,7 @@
;
$TTL 604800
@ IN SOA s-infra.gsb.lan. root.s-infra.gsb.lan. (
2024011800 ; Serial
2023040501 ; Serial
7200 ; Refresh
86400 ; Retry
8419200 ; Expire
@ -21,13 +21,10 @@ $TTL 604800
7.0 IN PTR s-nxc.gsb.lan.
8.0 IN PTR s-mon.gsb.lan.
9.0 IN PTR s-itil.gsb.lan.
20.0 IN PTR s-kea1.gsb.lan.
21.0 IN PTR s-kea2.gsb.lan.
22.0 IN PTR s-awx.gsb.lan.
101.1 IN PTR s-web1
101.2 IN PTR s-web2
100.10 IN PTR s-lb
100.10 IN PTR s-lb.gsb.lan
11.0 IN PTR s-elk.gsb.lan.
17.0 IN PTR s-gestsup.lan
254.0 IN PTR r-int.gsb.lan.
254.0 IN PTR r-int.gsb.lan.

View File

@ -1,16 +1,16 @@
---
- name: on recupere getdocker
get_url:
url: http://s-adm.gsb.adm/gsbstore/getdocker.sh
dest: /usr/local/bin
- name: Supprime le fichier getdocker.sh si déjà présent
file:
state: absent
path: /tmp/getdocker.sh
- name: on verifie si docker est installe
stat:
path: /usr/bin/docker
# command: which docker
register: docker_present
- name: Télécharge le script d'installation de docker
uri:
url: 'https://get.docker.com'
method: GET
dest: /tmp/getdocker.sh
mode: a+x
register: result
- name: Execution du script getdocker si docker n'est pas deja installe
shell: bash /usr/local/bin/getdocker.sh
#when: docker_present.stdout.find('/usr/bin/docker') == -1
when: not docker_present.stat.exists
- name: Execution du script getdocker
shell: bash /tmp/getdocker.sh

View File

@ -1,41 +1,16 @@
# Fog
This role handles the **installation** and **configuration** of **Fog**.
This role handles the installation and modification of Fog.
**Fog** is an open-source IT asset management solution. It provides features such as **system image creation**, **image deployment to several machines** and **workstation management** through **PXE**.
**PXE** (Preboot eXecution Environment) is a protocol that lets a host boot over the network rather than from its local hard drive, which makes remote deployment of system images easier.
In the GSB context, Fog together with PXE also provides the **DHCP** service.
## What is Fog?
Fog therefore simplifies image creation and workstation deployment by handling both network boot (PXE) and network configuration (DHCP).
## How to install and configure it?
Fog deploys disk images such as Windows or Linux using PXE (Preboot Execution Environment).
### Prerequisites:
Give the machine at least 4 GB of memory.
## How to install it?
### Step 1:
Run the "pre-installation" Ansible playbook named **s-fog.yml**.
It installs the **Fog** base and the **Goss** tool, and configures **DHCP**, **SSH** and the **SSH** agent.
This playbook also calls the **main.yml** playbook located in **roles/fog/tasks/**, which installs the base packages such as **Apache2** and the **MariaDB client and server** (see the playbook for details).
Finally, this playbook also fetches the Fog installation archive from the **s-adm** server (via the **main.yml** playbook in **roles/default/**), then unpacks the archive and runs it (once the second playbook is launched: see step 2).
Reboot the server so that the interfaces get the right IP addresses.
### Step 2:
Run the second playbook, **install-fog.yml**, which calls the tasks that execute the installation script with the **fogsettings** file, so the installer's questions do not have to be answered manually.
### Step 3:
All that remains is to open the Fog web interface at **http://172.16.64.16/management/** from a workstation on the right network, follow the instructions shown (database installation or upgrade), and you can then log in and start using it.
### Additional step:
You can check that the configuration is correct with Goss (command: **./agoss -f tap**) from the **gsb2024** directory.
Before anything else, run the s-fog goss file (in gsb2023/goss/s-fog.yaml) to check that the network configuration is correct and operational. Once the main installation is done, run the s-fog.yaml ansible playbook.
You will then have to go into the **fog** directory to run the **installfog.sh** script (fog/bin/). The configuration will already be set through the **.fogsettings** file.
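Concretely, the two playbook runs described in steps 1 and 2 amount to roughly the following (playbook names taken from this README; running them locally from the repository root is an assumption consistent with the rest of the project):
```shell
cd /root/tools/ansible/gsb2024
ansible-playbook -i localhost, -c local s-fog.yml          # step 1: base install, DHCP/SSH setup
# reboot so the interfaces pick up their addresses, then:
ansible-playbook -i localhost, -c local install-fog.yml    # step 2: runs the fogsettings-driven installer
```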

View File

@ -1,3 +1,3 @@
depl_url: "http://s-adm.gsb.adm/gsbstore"
depl_fog: "fogproject-1.5.10.tar.gz"
depl_fog: "fogproject-1.5.9.tar.gz"
instructions: "Pour lancer l'installateur Fog, faites : 'bash /root/tools/fog/bin/installfog.sh'. Suivez ensuite les instructions"

View File

@ -1,46 +0,0 @@
## Start of FOG Settings
## Created by the FOG Installer
## Find more information about this file in the FOG Project wiki:
## https://wiki.fogproject.org/wiki/index.php?title=.fogsettings
## Version: 1.5.10
## Install time: mar. 16 janv. 2024 15:27:57
ipaddress='192.168.99.100'
copybackold='0'
interface='enp0s3'
submask='255.255.255.0'
hostname='s-fog.gsb.lan'
routeraddress='192.168.99.99'
plainrouter='192.168.99.99'
dnsaddress='192.168.99.99'
username='fogproject'
password='zbSw#FaGPS7O1bJ5tpfj'
osid='2'
osname='Debian'
dodhcp='Y'
bldhcp='0'
dhcpd='isc-dhcp-server'
blexports='1'
installtype='N'
snmysqluser='fogmaster'
snmysqlpass='cbZjO*gCONbbldV4a6l1'
snmysqlhost='localhost'
mysqldbname='fog'
installlang='0'
storageLocation='/images'
fogupdateloaded=1
docroot='/var/www/html/'
webroot='/fog/'
caCreated='yes'
httpproto='http'
startrange=''
endrange=''
packages='apache2 bc build-essential cpp curl g++ gawk gcc genisoimage git gzip htmldoc isolinux lftp libapache2-mod-php libc6 libcurl4 liblzma-dev m4 mariadb-client mariadb-server net-tools nfs-kernel-server openssh-server php php-bcmath php-cli php-curl php-fpm php-gd php-json php-ldap php-mbstring php-mysql tar tftpd-hpa tftp-hpa unzip vsftpd wget zlib1g'
noTftpBuild=''
tftpAdvOpts=''
sslpath='/opt/fog/snapins/ssl/'
backupPath='/home/'
armsupport=''
php_ver='7.4'
sslprivkey='/opt/fog/snapins/ssl//.srvprivate.key'
sendreports='Y'
## End of FOG Settings

View File

@ -2,18 +2,18 @@
## Created by the FOG Installer
## Find more information about this file in the FOG Project wiki:
## https://wiki.fogproject.org/wiki/index.php?title=.fogsettings
## Version: 1.5.10
## Install time: Mon Jan 15 23:16:31 2024
ipaddress='172.16.0.16'
## Version: 1.5.9
## Install time: jeu. 26 janv. 2023 11:41:05
ipaddress='172.16.64.16'
copybackold='0'
interface='enp0s9'
submask='255.255.255.0'
hostname='s-fog'
routeraddress='172.16.64.254'
plainrouter='172.16.64.254'
hostname='s-fog.gsb.lan'
routeraddress='192.168.99.99'
plainrouter='192.168.99.99'
dnsaddress='172.16.0.1'
username='fogproject'
password='0lEyBKxcrQxseHLB#Cbg'
password='/7ElC1OHrP47EN2w59xl'
osid='2'
osname='Debian'
dodhcp='y'
@ -22,27 +22,25 @@ dhcpd='isc-dhcp-server'
blexports='1'
installtype='N'
snmysqluser='fogmaster'
snmysqlpass='DQG@4PU31F9vOE4bX6V2'
snmysqlpass='HHO5vSGqFiHE_9d2lja3'
snmysqlhost='localhost'
mysqldbname='fog'
installlang='1'
installlang='0'
storageLocation='/images'
fogupdateloaded=1
docroot='/var/www/html/'
webroot='/fog/'
caCreated='yes'
httpproto='https'
startrange='172.16.64.120'
endrange='172.16.64.140'
httpproto='http'
startrange='172.16.64.10'
endrange='172.16.64.254'
bootfilename='undionly.kpxe'
packages='apache2 bc build-essential cpp curl g++ gawk gcc genisoimage gettext git gzip htmldoc isc-dhcp-server isolinux lftp libapache2-mod-php libc6 libcurl4 liblzma-dev m4 mariadb-client mariadb-server net-tools nfs-kernel-server openssh-server php php-bcmath php-cli php-curl php-fpm php-gd php-intl php-json php-ldap php-mbstring php-mysql tar tftp-hpa tftpd-hpa unzip vsftpd wget zlib1g'
packages='apache2 bc build-essential cpp curl g++ gawk gcc genisoimage git gzip htmldoc isc-dhcp-server isolinux lftp libapache2-mod-php7.4 libc6 libcurl4 li>
noTftpBuild=''
tftpAdvOpts=''
sslpath='/opt/fog/snapins/ssl/'
#backupPath='/home/'
backupPath='/home/'
armsupport='0'
php_ver='7.4'
php_verAdds='-7.4'
sslprivkey='/opt/fog/snapins/ssl//.srvprivate.key'
sendreports='N'
## End of FOG Settings

View File

@ -1,49 +0,0 @@
## Start of FOG Settings
## Created by the FOG Installer
## Find more information about this file in the FOG Project wiki:
## https://wiki.fogproject.org/wiki/index.php?title=.fogsettings
## Version: 1.5.10
## Install time: jeu. 11 janv. 2024
## Install time: jeu. 11 janv. 2024 11:41:05
ipaddress='172.16.64.16'
copybackold='0'
interface='enp0s9'
submask='255.255.255.0'
hostname='s-fog.gsb.lan'
routeraddress='172.16.64.254'
plainrouter='172.16.64.254'
dnsaddress='172.16.0.1'
username='fogproject'
password='/7ElC1OHrP47EN2w59xl'
osid='2'
osname='Debian'
dodhcp='y'
bldhcp='1'
dhcpd='isc-dhcp-server'
blexports='1'
installtype='N'
snmysqluser='fogmaster'
snmysqlpass='HHO5vSGqFiHE_9d2lja3'
snmysqlhost='localhost'
mysqldbname='fog'
installlang='1'
storageLocation='/images'
fogupdateloaded=1
docroot='/var/www/'
webroot='/fog/'
caCreated='yes'
httpproto='https'
startrange='172.16.64.10'
endrange='172.16.64.254'
#bootfilename='undionly.kpxe'
packages='apache2 bc build-essential cpp curl g++ gawk gcc genisoimage gettext git gzip htmldoc isc-dhcp-server isolinux lftp libapache2-mod-php libc6 libcurl4 liblzma-dev m4 mariadb-client mariadb-server net-tools nfs-kernel-server openssh-server php php-bcmath php-cli php-curl php-fpm php-gd php-intl php-json php-ldap php-mbstring php-mysql tar tftpd-hpa tftp-hpa unzip vsftpd wget zlib1g'
noTftpBuild=''
tftpAdvOpts=''
sslpath='/opt/fog/snapins/ssl/'
backupPath='/home/'
armsupport='0'
php_ver='7.4'
#php_verAdds='-7.4'
sslprivkey='/opt/fog/snapins/ssl//.srvprivate.key'
sendreports='Y'
## End of FOG Settings

View File

@ -1,51 +0,0 @@
## Start of FOG Settings
## Created by the FOG Installer
## Find more information about this file in the FOG Project wiki:
## https://wiki.fogproject.org/wiki/index.php?title=.fogsettings
## Version: 1.5.10
## Install time: Mon Jan 15 23:16:31 2024
ipaddress='192.168.56.10'
copybackold='0'
interface='eth2'
submask='255.255.255.0'
hostname='fog'
routeraddress='192.168.1.1'
plainrouter='192.168.1.1'
dnsaddress='192.168.1.1'
username='fogproject'
password='0lEyBKxcrQxseHLB#Cbg'
osid='2'
osname='Debian'
dodhcp='y'
bldhcp='1'
dhcpd='isc-dhcp-server'
blexports='1'
installtype='N'
snmysqluser='fogmaster'
snmysqlpass='DQG@4PU31F9vOE4bX6V2'
snmysqlhost='localhost'
mysqldbname='fog'
installlang='1'
storageLocation='/images'
fogupdateloaded=1
docroot='/var/www/html/'
webroot='/fog/'
caCreated='yes'
httpproto='https'
startrange='192.168.56.10'
endrange='192.168.56.254'
packages='apache2 bc build-essential cpp curl g++ gawk gcc genisoimage gettext git gzip htmldoc isc-dhcp-server isolinux lftp libapache2-mod-php libc6 libcurl4 liblzma-dev m4 mariadb-client mariadb-server net-tools nfs-kernel-server openssh-server php php-bcmath php-cli php-curl php-fpm php-gd php-intl php-json php-ldap php-mbstring php-mysql tar tftp-hpa tftpd-hpa unzip vsftpd wget zlib1g '
noTftpBuild=''
tftpAdvOpts=''
sslpath='/opt/fog/snapins/ssl/'
backupPath='/home/'
armsupport='0'
php_ver='7.4'
sslprivkey='/opt/fog/snapins/ssl//.srvprivate.key'
sendreports='N'
## End of FOG Settings

View File

@ -1,54 +1,26 @@
---
- name: Installation des paquets de base
apt:
state: present
name:
- apache2
- curl
- git
- gzip
- isc-dhcp-server
- mariadb-client
- mariadb-server
- net-tools
- openssh-server
- php
- php-cli
- php-curl
- php-fpm
- php-gd
- php-intl
- php-json
- php-ldap
- php-mbstring
- php-mysql
- tar
- unzip
- vsftpd
- wget
- name: creation /root/tmp
- name: creation d'un repertoire fog
file:
path: /root/tmp
path: /root/tools/fog
state: directory
- name: recuperation de l'archive d'installation fog sur git
git:
repo: https://gitea.lyc-lecastel.fr/gadmin/fog.git
dest: /root/tools/fog/
clone: yes
update: yes
- name: Modification fichier bash (desac UDPCast)
ansible.builtin.lineinfile:
path: /root/tools/fog/lib/common/functions.sh
regexp: '^configureUDPCast\(\).*'
line: "configureUDPCast() {\nreturn"
backup: yes
- name: fichier config fogsettings
copy:
src: fogsettings
dest: /root/tmp/
command: "cp /root/tools/ansible/roles/fog/files/fogsettings /opt/fog/"
- name: Récupération archive d'installation Fog
get_url:
url: "{{ depl_url }}/{{ depl_fog }}"
dest: "/root/tmp/"
- name: Décompression de l'archive
ansible.builtin.unarchive:
src: "/root/tmp/{{ depl_fog }}"
dest: "/root/tmp/"
- name: Exécution du script d'installation Fog
ansible.builtin.shell: sudo bash /root/tmp/fogproject-1.5.10/bin/installfog.sh --recreate-keys -f /root/tmp/fogsettings -y
args:
chdir: "/root/tmp/fogproject-1.5.10/"
- name: fichier fogsettings en .fogsettings
command: "mv /opt/fog/fogsettings /opt/fog/.fogsettings"

View File

@ -4,6 +4,7 @@
@def $DEV_PRIVATE = enp0s8;
@def $DEV_WORLD = enp0s9;
@def $DEV_WORLD = enp0s9;
@def $DEV_VPN= wg0;
@def $NET_PRIVATE = 172.16.0.0/24;
@ -31,7 +32,7 @@ table filter {
# well-known internet hosts
saddr ($NET_PRIVATE) proto tcp dport ssh ACCEPT;
# we provide DNS services for the internal net
# we provide DNS and SMTP services for the internal net
interface $DEV_PRIVATE saddr $NET_PRIVATE {
proto (udp tcp) dport domain ACCEPT;
proto udp dport bootps ACCEPT;

View File

@ -29,7 +29,7 @@ table filter {
# well-known internet hosts
saddr ($NET_PRIVATE) proto tcp dport ssh ACCEPT;
# we provide DNS services for the internal net
# we provide DNS and SMTP services for the internal net
interface $DEV_PRIVATE saddr $NET_PRIVATE {
proto (udp tcp) dport domain ACCEPT;
proto udp dport bootps ACCEPT;

View File

@ -1,93 +1,44 @@
## How the role works
The role installs a GLPI server running on php and nginx.
This role also downloads the GLPI agent onto glpi.
The role creates the GLPI database.
This role also installs FusionInventory on glpi.
The role also backs up the glpi database.
## How to use GLPI
After pull-config, from a machine on the n-user network, open the URL: *http://s-itil.gsb.lan*
Then start the installation; the SQL parameters to provide are the following:
* server: **localhost**
* user: **glpi**
* password: **glpi**
* Select the **glpi** database
After pull-config, go to a machine on the n-user network and open http://s-itil/install/install.php
Then start the installation; the SQL parameters to provide are:
server: localhost
user: glpi
password: glpi
Select the glpi database
Do not send usage statistics
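For reference, the role's playbook creates the database non-interactively with the GLPI console (see the task further down in this diff); with the values above this amounts to roughly the following (GLPI directory taken from the goss checks):
```shell
cd /var/www/html/glpi
php bin/console database:install --reconfigure --db-name glpi --db-user glpi --db-password glpi -f -n
```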
## Postfix:
Postfix is used to send messages that keep users informed of a ticket's progress.
Go to **Configuration > Notifications**, enable follow-up and notifications
Go to Configuration > Notifications, enable follow-up and notifications
Go to the e-mail notification configuration
Put the monitoring mail address in: Administrator e-mail, Sender e-mail and as the reply-to address
The mail sending mode is SMTP
The SMTP host is localhost
## Inventorying a Windows machine on the GLPI server with the agent:
The version of the glpi agent currently available directly on the server is version 1.7.
* Download the agent from the GLPI server: *http://s-itil/glpicli*
* Install the agent: select the "Typical" option and enter the server URL under "Remote Targets": *http://s-itil/*
* Go to localhost:62354 to force an inventory upload
*Note: if the machine does not show up after forcing the upload from the web interface, the glpi agent service can be restarted.*
* Refresh GLPI and the machine will be inventoried
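For an unattended install on the Windows client, the agent MSI can also be run from the command line (a sketch; SERVER is a standard GLPI Agent MSI property, and the file name matches the one served under /glpicli in this diff):
```shell
msiexec /i GLPI-Agent-1.7-x64.msi SERVER=http://s-itil/ /quiet
```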
## A and PTR records for S-WIN:
The "A" and "PTR" records are used to resolve the machine names needed for the LDAP directory synchronisation, using the S-WIN server as the DNS server for GLPI without going through the S-INFRA server.
Add the "A" and "PTR" records to the Active Directory DNS:
On the S-WIN server:
**Server Manager** --> **DNS Manager** --> **Forward Lookup Zones** --> **gsb.lan** --> **New Host (A)**:
* s-infra 172.16.0.1
* s-itil 172.16.0.9
* r-int 172.16.0.254
* s-win 172.16.0.6
Tick the **Create associated pointer (PTR) record** box
## LDAP:
Go to Configuration > Authentication > LDAP directories.
Add a server by clicking on the +
Fill in the fields:
* Name: **s-win**
* Default server: **yes**
* Active: **yes**
* Server: **s-win.gsb.lan**
* Connection filter: (&(objectClass=user)(objectCategory=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
* BaseDN: **DC=gsb,DC=lan**
* Account DN: **GSB\Administrateur**
* Password: **Azerty1+**
* Login field: **samaccountname**
Name: s-win
Default server: yes
Active: yes
Server: s-win.gsb.lan
Connection filter: (&(objectClass=user)(objectCategory=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
BaseDN: DC=gsb,DC=lan
Account DN: GSB\Administrateur
Password: Azerty1+
Login field: samaccountname
To import users, go to **Administration > Users > LDAP directory link > Import new users**
To import users, go to Administration > Users > LDAP directory link > Import new users
Press search
Then select the displayed users, go to actions and select import.
## Joining the gsb.lan domain from a Windows client:
To join the **gsb.lan** domain, the Active Directory DNS server **s-win.gsb.lan** must be set as the DNS server on the client machine.
The **gsb.lan** domain can then be joined.
To connect to the Active Directory we use the **Administrateur** user:
* **Login**: **Administrateur@gsb.lan**
* **Password**: **Azerty1+**
## Changes to make for a future GLPI version:
To keep using the playbook with future GLPI versions, here are the changes to make:
* Change the GLPI version
* Change the PHP version
* Change the GLPI Agent version
*Changes made by: jm - ak*

View File

@ -1,15 +1,9 @@
---
- name: restart php-fpm
service:
name: php8.2-fpm
state: restarted
service: name=php8.2-fpm state=restarted
- name: restart nginx
service:
name: nginx
state: restarted
service: name=nginx state=restarted
- name: restart mariadb-server
service:
name: mariadb-server
state: restarted
service: name=mariadb-server state=restarted

View File

@ -139,7 +139,7 @@
- restart nginx
- name: lancer la commande de création de la base de donnees glpi
ansible.builtin.shell: "php bin/console database:install --reconfigure --db-name {{ glpi_dbname }} --db-user {{ glpi_dbuser }} --db-password {{ glpi_dbpasswd }} -f -n"
ansible.builtin.shell: php bin/console database:install -f -n
args:
chdir: "{{ glpi_dir }}"

View File

@ -1,30 +1,16 @@
# Role journald-rcv: installation and configuration of the systemd-journal-remote server (log centralisation)
# Role syslog: installation and configuration of a syslog server (log centralisation)
***
## Role features:
The purpose of this role is to install and edit the configuration files of systemd-journal-remote so that the machines running this role can receive the logs of the other machines in the infrastructure.
The purpose of this role is to enable the UDP module in the /etc/rsyslog.conf file to accept incoming logs from the machines concerned:
we uncomment the following line:
'module(load="imudp"\)'
## Operations performed by the role:
The role performs the following operations:
* installation of the **systemd-journal-remote** package.
* start and enable (at boot) of the **systemd-journal-remote.socket** service.
* creation of the **systemd-journal-remote** configuration files from a copy of the already existing configuration file.
* change of the protocol used by journald, from **HTTPS** to **HTTP**.
* activation of split mode, which gives one log file per supervised machine.
* creation of the directory that will hold the log files.
* restart of the systemd daemon so that the system takes the changes into account.
The role then enables listening of the UDP module on port 514 so that logs can be sent.
we uncomment the following line in the same file as above:
'input\(type="imudp" port="514"\)'
## Testing that the role works
finally the role loads the UDP module so that the **s-infra** machine can receive incoming logs.
To do this we uncomment the following line in the /etc/systemd/journald.conf file:
'ForwardToSyslog=yes'
To check the role, we perform the following test:
**From the machine on which this role is installed:**
* **journalctl -f -D /var/log/journal/remote/**
* make sure that port 19532 (the default port used by the service) is open and reachable on all machines.
in order to view the event files.
**From one of the machines emitting logs:**
* **logger ok**
If the message sent by the emitting machine can be read on the receiving machine, the test is successful.
finally the role automatically restarts the journald and rsyslog services
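The test described above boils down to two commands, taken directly from this README:
```shell
# on the collecting machine (this role): follow the remote journals as they arrive
journalctl -f -D /var/log/journal/remote/
# on any log-emitting machine: send a test message
logger ok
```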

View File

@ -1,14 +0,0 @@
# Kea role
***
Kea role for DHCP high availability
## Table of contents
1. [What does the Kea role do?]
## What does the Kea role do?
It configures the kea servers in high-availability mode.
### Installation and configuration of kea
The kea role installs the kea dhcp4, hooks and admin packages; once the packages are installed, the two kea servers are configured so that they hand out the n-user IP addresses and run in high availability.
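A minimal sketch of the package installation step (the Debian package names below are assumptions; the Kea configuration removed further down in this diff references hook libraries under /usr/local/lib/kea, so the role may actually install Kea differently):
```shell
# assumed Debian packages for the DHCPv4 server, the control agent and the admin tools
apt-get install -y kea-dhcp4-server kea-ctrl-agent kea-admin
```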

View File

@ -1,8 +0,0 @@
#variable kea
kea_ver: "2.4.1"
kea_dbname: ""
kaa_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"

View File

@ -1,66 +0,0 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
"Control-agent":
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.20",
// This specifies the port CA will listen on.
"http-port": 8000,
"control-sockets":
{
// This is how the Agent can communicate with the DHCPv4 server.
"dhcp4":
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.
# "dhcp6":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea6-ctrl-socket"
# },
// Location of the D2 command channel socket.
# "d2":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea-ddns-ctrl-socket",
# "user-context": { "in-use": false }
# }
},
// Similar to other Kea components, CA also uses logging.
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
// Several additional parameters are possible in addition
// to the typical output. Flush determines whether logger
// flushes output to a file. Maxsize determines maximum
// filesize before the file is rotated. maxver
// specifies the maximum number of rotated files being
// kept.
"flush": true,
"maxsize": 204800,
"maxver": 4,
// We use pattern to specify custom log message layout
"pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
}
],
"severity": "INFO",
"debuglevel": 0 // debug level only applies when severity is set to DEBUG.
}
]
}
}

View File

@ -1,226 +0,0 @@
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
// to enable High Availability function for the DHCP server. This config
// file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
// (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.
{
"Dhcp4": {
// Add names of your network interfaces to listen on.
"interfaces-config": {
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s9" ]
},
// Control socket is required for communication between the Control
// Agent and the DHCP server. High Availability requires Control Agent
// to be running because lease updates are sent over the RESTful
// API between the HA peers.
"control-socket": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Use Memfile lease database backend to store leases in a CSV file.
// Depending on how Kea was compiled, it may also support SQL databases
// (MySQL and/or PostgreSQL). Those database backends require more
// parameters, like name, host and possibly user and password.
// There are dedicated examples for each backend. See Section 7.2.2 "Lease
// Storage" for details.
"lease-database": {
// Memfile is the simplest and easiest backend to use. It's an in-memory
// database with data being written to a CSV file. It is very similar to
// what ISC DHCP does.
"type": "memfile"
},
// Let's configure some global parameters. The home network is not very dynamic
// and there's no shortage of addresses, so no need to recycle aggressively.
"valid-lifetime": 43200, // leases will be valid for 12h
"renew-timer": 21600, // clients should renew every 6h
"rebind-timer": 32400, // clients should start looking for other servers after 9h
// Kea will clean up its database of expired leases once per hour. However, it
// will keep the leases in expired state for 2 days. This greatly increases the
// chances for returning devices to get the same address again. To guarantee that,
// use host reservation.
// If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
// not 0, when the client sends a release message the lease is expired
// instead of being deleted from lease storage.
"expired-leases-processing": {
"reclaim-timer-wait-time": 3600,
"hold-reclaimed-time": 172800,
"max-reclaim-leases": 0,
"max-reclaim-time": 0
},
// HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
// libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
// The latter implements high availability feature for Kea. Note the library name
// should be the same, but the path is OS specific.
"hooks-libraries": [
// The lease_cmds library must be loaded because HA makes use of it to
// deliver lease updates to the server as well as synchronize the
// lease database after failure.
{
"library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
},
{
// The HA hook library should be loaded.
"library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
// Each server should have the same HA configuration, except for the
// "this-server-name" parameter.
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "s-kea1.gsb.lan",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
// Heartbeat is to be sent every 10 seconds if no other control
// commands are transmitted.
"heartbeat-delay": 10000,
// Maximum time for partner's response to a heartbeat, after which
// failure detection is started. This is specified in milliseconds.
// If we don't hear from the partner in 60 seconds, it's time to
// start worrying.
"max-response-delay": 30000,
// The following parameters control how the server detects the
// partner's failure. The ACK delay sets the threshold for the
// 'secs' field of the received discovers. This is specified in
// milliseconds.
"max-ack-delay": 5000,
// This specifies the number of clients which send messages to
// the partner but appear to not receive any response.
"max-unacked-clients": 0,
// This specifies the maximum timeout (in milliseconds) for the server
// to complete sync. If you have a large deployment (high tens or
// hundreds of thousands of clients), you may need to increase it
// further. The default value is 60000ms (60 seconds).
"sync-timeout": 60000,
"peers": [
// This is the configuration of this server instance.
{
"name": "s-kea1.gsb.lan",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.20:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "s-kea2.gsb.lan",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.21:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
}
]
} ]
}
}
],
// This example contains a single subnet declaration.
"subnet4": [
{
// Subnet prefix.
"subnet": "172.16.64.0/24",
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s9",
// Specify a dynamic address pool.
"pools": [
{
"pool": "172.16.64.100-172.16.64.150"
}
],
// These are options that are subnet specific. In most cases, you need to define at
// least routers option, as without this option your clients will not be able to reach
// their default gateway and will not have Internet connectivity. If you have many
// subnets and they share the same options (e.g. DNS servers typically is the same
// everywhere), you may define options at the global scope, so you don't repeat them
// for every network.
"option-data": [
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.254"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
// to your own DNS servers if you have them. Another popular
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.0.1"
}
],
// Some devices should get a static address. Since the .100 - .199 range is dynamic,
// let's use the lower address space for this. There are many ways how reservation
// can be defined, but using MAC address (hw-address) is by far the most popular one.
// You can use client-id, duid and even custom defined flex-id that may use whatever
// parts of the packet you want to use as identifiers. Also, there are many more things
// you can specify in addition to just an IP address: extra options, next-server, hostname,
// assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
// The reservations are subnet specific.
#"reservations": [
# {
# "hw-address": "1a:1b:1c:1d:1e:1f",
# "ip-address": "192.168.1.10"
# },
# {
# "client-id": "01:11:22:33:44:55:66",
# "ip-address": "192.168.1.11"
# }
#]
}
],
// fichier de logs
"loggers": [
{
// This section affects kea-dhcp4, which is the base logger for DHCPv4 component. It tells
// DHCPv4 server to write all log messages (on severity INFO or higher) to a file. The file
// will be rotated once it grows to 2MB and up to 4 files will be kept. The debuglevel
// (range 0 to 99) is used only when logging on DEBUG level.
"name": "kea-dhcp4",
"output_options": [
{
"output": "stdout",
"maxsize": 2048000,
"maxver": 4
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}

View File

@ -1,18 +0,0 @@
---
- name: restart isc-kea-dhcp4-server
service:
name: isc-kea-dhcp4-server.service
state: restarted
enabled: yes
- name: restart isc-kea-ctrl-agent
service:
name: isc-kea-ctrl-agent.service
state: restarted
enabled: yes
- name: restart mariadb-server
service:
name: mariadb
state: restarted
enabled: yes

View File

@ -1,75 +0,0 @@
---
- name: installation des dépendances
apt:
name:
- liblog4cplus-2.0.5
- libmariadb3
- libpq5
- mariadb-common
- mysql-common
state: present
- name: telechargement du paquet isc-kea-common
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-common_2.4.1-isc20231123184533/isc-kea-common_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-dhcp4
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-dhcp4_2.4.1-isc20231123184533/isc-kea-dhcp4_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-ctrl-agent
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-ctrl-agent_2.4.1-isc20231123184533/isc-kea-ctrl-agent_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-hooks
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-hooks_2.4.1-isc20231123184533/isc-kea-hooks_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: Update apt
apt:
update_cache: yes
- name: Installation paquet isc-kea-common
apt:
deb: "/tmp/isc-kea-common_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-dhcp4
apt:
deb: "/tmp/isc-kea-dhcp4_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-ctrl-agent
apt:
deb: "/tmp/isc-kea-ctrl-agent_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-hooks
apt:
deb: "/tmp/isc-kea-hooks_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Copie du repertoire des hooks dans le repertoire /usr/local/lib/kea
copy:
src: /usr/lib/x86_64-linux-gnu/kea/
dest: /usr/local/lib/kea/
- name: Copie du fichier de configuration kea-dhcp4.conf
copy:
src: kea-dhcp4.conf
dest: /etc/kea/kea-dhcp4.conf
notify:
- restart isc-kea-dhcp4-server
- name: Copie du fichier de configuration kea-ctrl-agent
copy:
src: kea-ctrl-agent.conf
dest: /etc/kea/kea-ctrl-agent.conf
notify:
- restart isc-kea-ctrl-agent

View File

@ -1,66 +0,0 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
"Control-agent":
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.1",
// This specifies the port CA will listen on.
"http-port": 8000,
"control-sockets":
{
// This is how the Agent can communicate with the DHCPv4 server.
"dhcp4":
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tm/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.
# "dhcp6":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea6-ctrl-socket"
# },
// Location of the D2 command channel socket.
# "d2":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea-ddns-ctrl-socket",
# "user-context": { "in-use": false }
# }
},
// Similar to other Kea components, CA also uses logging.
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
// Several additional parameters are possible in addition
// to the typical output. Flush determines whether logger
// flushes output to a file. Maxsize determines maximum
// filesize before the file is rotated. maxver
// specifies the maximum number of rotated files being
// kept.
"flush": true,
"maxsize": 204800,
"maxver": 4,
// We use pattern to specify custom log message layout
"pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
}
],
"severity": "INFO",
"debuglevel": 0 // debug level only applies when severity is set to DEBUG.
}
]
}
}

View File

@ -1,226 +0,0 @@
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
// to enable High Availability function for the DHCP server. This config
// file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
// (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.
{
"Dhcp4": {
// Add names of your network interfaces to listen on.
"interfaces-config": {
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s8" ]
},
// Control socket is required for communication between the Control
// Agent and the DHCP server. High Availability requires Control Agent
// to be running because lease updates are sent over the RESTful
// API between the HA peers.
"control-socket": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Use Memfile lease database backend to store leases in a CSV file.
// Depending on how Kea was compiled, it may also support SQL databases
// (MySQL and/or PostgreSQL). Those database backends require more
// parameters, like name, host and possibly user and password.
// There are dedicated examples for each backend. See Section 7.2.2 "Lease
// Storage" for details.
"lease-database": {
// Memfile is the simplest and easiest backend to use. It's an in-memory
// database with data being written to a CSV file. It is very similar to
// what ISC DHCP does.
"type": "memfile"
},
// Let's configure some global parameters. The home network is not very dynamic
// and there's no shortage of addresses, so no need to recycle aggressively.
"valid-lifetime": 43200, // leases will be valid for 12h
"renew-timer": 21600, // clients should renew every 6h
"rebind-timer": 32400, // clients should start looking for other servers after 9h
// Kea will clean up its database of expired leases once per hour. However, it
// will keep the leases in expired state for 2 days. This greatly increases the
// chances for returning devices to get the same address again. To guarantee that,
// use host reservation.
// If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
// not 0, when the client sends a release message the lease is expired
// instead of being deleted from lease storage.
"expired-leases-processing": {
"reclaim-timer-wait-time": 3600,
"hold-reclaimed-time": 172800,
"max-reclaim-leases": 0,
"max-reclaim-time": 0
},
// HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
// libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
// The latter implements high availability feature for Kea. Note the library name
// should be the same, but the path is OS specific.
"hooks-libraries": [
// The lease_cmds library must be loaded because HA makes use of it to
// deliver lease updates to the server as well as synchronize the
// lease database after failure.
{
"library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
},
{
// The HA hook library should be loaded.
"library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
// Each server should have the same HA configuration, except for the
// "this-server-name" parameter.
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "kea1",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
// Heartbeat is to be sent every 10 seconds if no other control
// commands are transmitted.
"heartbeat-delay": 10000,
// Maximum time for partner's response to a heartbeat, after which
// failure detection is started. This is specified in milliseconds.
// If we don't hear from the partner in 60 seconds, it's time to
// start worrying.
"max-response-delay": 30000,
// The following parameters control how the server detects the
// partner's failure. The ACK delay sets the threshold for the
// 'secs' field of the received discovers. This is specified in
// milliseconds.
"max-ack-delay": 5000,
// This specifies the number of clients which send messages to
// the partner but appear to not receive any response.
"max-unacked-clients": 0,
// This specifies the maximum timeout (in milliseconds) for the server
// to complete sync. If you have a large deployment (high tens or
// hundreds of thousands of clients), you may need to increase it
// further. The default value is 60000ms (60 seconds).
"sync-timeout": 60000,
"peers": [
// This is the configuration of this server instance.
{
"name": "kea1",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.1:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "kea2",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.2:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
}
]
} ]
}
}
],
// This example contains a single subnet declaration.
"subnet4": [
{
// Subnet prefix.
"subnet": "172.16.64.0/24",
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s8",
// Specify a dynamic address pool.
"pools": [
{
"pool": "172.16.64.100-172.16.64.150"
}
],
// These are options that are subnet specific. In most cases, you need to define at
// least routers option, as without this option your clients will not be able to reach
// their default gateway and will not have Internet connectivity. If you have many
// subnets and they share the same options (e.g. DNS servers typically is the same
// everywhere), you may define options at the global scope, so you don't repeat them
// for every network.
"option-data": [
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.1"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
// to your own DNS servers if you have them. Another popular
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.64.1"
}
],
// Some devices should get a static address. Since the .100 - .199 range is dynamic,
// let's use the lower address space for this. There are many ways how reservation
// can be defined, but using MAC address (hw-address) is by far the most popular one.
// You can use client-id, duid and even custom defined flex-id that may use whatever
// parts of the packet you want to use as identifiers. Also, there are many more things
// you can specify in addition to just an IP address: extra options, next-server, hostname,
// assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
// The reservations are subnet specific.
#"reservations": [
# {
# "hw-address": "1a:1b:1c:1d:1e:1f",
# "ip-address": "192.168.1.10"
# },
# {
# "client-id": "01:11:22:33:44:55:66",
# "ip-address": "192.168.1.11"
# }
#]
}
],
// fichier de logs
"loggers": [
{
// This section affects kea-dhcp4, which is the base logger for DHCPv4 component. It tells
// DHCPv4 server to write all log messages (on severity INFO or higher) to a file. The file
// will be rotated once it grows to 2MB and up to 4 files will be kept. The debuglevel
// (range 0 to 99) is used only when logging on DEBUG level.
"name": "kea-dhcp4",
"output_options": [
{
"output": "stdout",
"maxsize": 2048000,
"maxver": 4
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}

View File

@ -1,14 +0,0 @@
# Kea role
***
Kea role for DHCP high availability
## Table of contents
1. [What does the Kea role do?]
## What does the Kea role do?
It configures the Kea servers in high-availability mode.
### Installation and configuration of Kea
The Kea role installs the kea-dhcp4, hooks and admin packages. Once the packages are installed, the two Kea servers are configured to hand out the n-user addresses and to run in high availability.

View File

@ -1,8 +0,0 @@
#variable kea
kea_ver: "2.4.1"
kea_dbname: ""
kea_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"

View File

@ -1,66 +0,0 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
"Control-agent":
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.21",
// This specifies the port CA will listen on.
"http-port": 8000,
"control-sockets":
{
// This is how the Agent can communicate with the DHCPv4 server.
"dhcp4":
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.
# "dhcp6":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea6-ctrl-socket"
# },
// Location of the D2 command channel socket.
# "d2":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea-ddns-ctrl-socket",
# "user-context": { "in-use": false }
# }
},
// Similar to other Kea components, CA also uses logging.
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
// Several additional parameters are possible in addition
// to the typical output. Flush determines whether logger
// flushes output to a file. Maxsize determines maximum
// filesize before the file is rotated. maxver
// specifies the maximum number of rotated files being
// kept.
"flush": true,
"maxsize": 204800,
"maxver": 4,
// We use pattern to specify custom log message layout
"pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
}
],
"severity": "INFO",
"debuglevel": 0 // debug level only applies when severity is set to DEBUG.
}
]
}
}

View File

@ -1,226 +0,0 @@
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
// to enable High Availability function for the DHCP server. This config
// file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
// (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.
{
"Dhcp4": {
// Add names of your network interfaces to listen on.
"interfaces-config": {
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s9" ]
},
// Control socket is required for communication between the Control
// Agent and the DHCP server. High Availability requires Control Agent
// to be running because lease updates are sent over the RESTful
// API between the HA peers.
"control-socket": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Use Memfile lease database backend to store leases in a CSV file.
// Depending on how Kea was compiled, it may also support SQL databases
// (MySQL and/or PostgreSQL). Those database backends require more
// parameters, like name, host and possibly user and password.
// There are dedicated examples for each backend. See Section 7.2.2 "Lease
// Storage" for details.
"lease-database": {
// Memfile is the simplest and easiest backend to use. It's an in-memory
// database with data being written to a CSV file. It is very similar to
// what ISC DHCP does.
"type": "memfile"
},
// Let's configure some global parameters. The home network is not very dynamic
// and there's no shortage of addresses, so no need to recycle aggressively.
"valid-lifetime": 43200, // leases will be valid for 12h
"renew-timer": 21600, // clients should renew every 6h
"rebind-timer": 32400, // clients should start looking for other servers after 9h
// Kea will clean up its database of expired leases once per hour. However, it
// will keep the leases in expired state for 2 days. This greatly increases the
// chances for returning devices to get the same address again. To guarantee that,
// use host reservation.
// If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
// not 0, when the client sends a release message the lease is expired
// instead of being deleted from lease storage.
"expired-leases-processing": {
"reclaim-timer-wait-time": 3600,
"hold-reclaimed-time": 172800,
"max-reclaim-leases": 0,
"max-reclaim-time": 0
},
// HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
// libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
// The latter implements high availability feature for Kea. Note the library name
// should be the same, but the path is OS specific.
"hooks-libraries": [
// The lease_cmds library must be loaded because HA makes use of it to
// deliver lease updates to the server as well as synchronize the
// lease database after failure.
{
"library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
},
{
// The HA hook library should be loaded.
"library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
// Each server should have the same HA configuration, except for the
// "this-server-name" parameter.
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "s-kea2.gsb.lan",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
// Heartbeat is to be sent every 10 seconds if no other control
// commands are transmitted.
"heartbeat-delay": 10000,
// Maximum time for partner's response to a heartbeat, after which
// failure detection is started. This is specified in milliseconds.
// If we don't hear from the partner in 60 seconds, it's time to
// start worrying.
"max-response-delay": 30000,
// The following parameters control how the server detects the
// partner's failure. The ACK delay sets the threshold for the
// 'secs' field of the received discovers. This is specified in
// milliseconds.
"max-ack-delay": 5000,
// This specifies the number of clients which send messages to
// the partner but appear to not receive any response.
"max-unacked-clients": 0,
// This specifies the maximum timeout (in milliseconds) for the server
// to complete sync. If you have a large deployment (high tens or
// hundreds of thousands of clients), you may need to increase it
// further. The default value is 60000ms (60 seconds).
"sync-timeout": 60000,
"peers": [
// This is the configuration of this server instance.
{
"name": "s-kea1.gsb.lan",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.20:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "s-kea2.gsb.lan",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.21:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
}
]
} ]
}
}
],
// This example contains a single subnet declaration.
"subnet4": [
{
// Subnet prefix.
"subnet": "172.16.64.0/24",
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s9",
// Specify a dynamic address pool.
"pools": [
{
"pool": "172.16.64.100-172.16.64.150"
}
],
// These are options that are subnet specific. In most cases, you need to define at
// least routers option, as without this option your clients will not be able to reach
// their default gateway and will not have Internet connectivity. If you have many
// subnets and they share the same options (e.g. DNS servers typically is the same
// everywhere), you may define options at the global scope, so you don't repeat them
// for every network.
"option-data": [
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.254"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
// to your own DNS servers if you have them. Another popular
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.0.1"
}
],
// Some devices should get a static address. Since the .100 - .199 range is dynamic,
// let's use the lower address space for this. There are many ways how reservation
// can be defined, but using MAC address (hw-address) is by far the most popular one.
// You can use client-id, duid and even custom defined flex-id that may use whatever
// parts of the packet you want to use as identifiers. Also, there are many more things
// you can specify in addition to just an IP address: extra options, next-server, hostname,
// assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
// The reservations are subnet specific.
#"reservations": [
# {
# "hw-address": "1a:1b:1c:1d:1e:1f",
# "ip-address": "192.168.1.10"
# },
# {
# "client-id": "01:11:22:33:44:55:66",
# "ip-address": "192.168.1.11"
# }
#]
}
],
// fichier de logs
"loggers": [
{
// This section affects kea-dhcp4, which is the base logger for DHCPv4 component. It tells
// DHCPv4 server to write all log messages (on severity INFO or higher) to a file. The file
// will be rotated once it grows to 2MB and up to 4 files will be kept. The debuglevel
// (range 0 to 99) is used only when logging on DEBUG level.
"name": "kea-dhcp4",
"output_options": [
{
"output": "stdout",
"maxsize": 2048000,
"maxver": 4
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}

View File

@ -1,18 +0,0 @@
---
- name: restart isc-kea-dhcp4-server
service:
name: isc-kea-dhcp4-server.service
state: restarted
enabled: yes
- name: restart isc-kea-ctrl-agent
service:
name: isc-kea-ctrl-agent.service
state: restarted
enabled: yes
- name: restart mariadb-server
service:
name: mariadb
state: restarted
enabled: yes

View File

@ -1,75 +0,0 @@
---
- name: installation des dépendances
apt:
name:
- liblog4cplus-2.0.5
- libmariadb3
- libpq5
- mariadb-common
- mysql-common
state: present
- name: telechargement du paquet isc-kea-common
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-common_2.4.1-isc20231123184533/isc-kea-common_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-dhcp4
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-dhcp4_2.4.1-isc20231123184533/isc-kea-dhcp4_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-ctrl-agent
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-ctrl-agent_2.4.1-isc20231123184533/isc-kea-ctrl-agent_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: telechargement du paquet isc-kea-hooks
get_url:
url: "https://dl.cloudsmith.io/public/isc/kea-2-4/deb/debian/pool/bookworm/main/i/is/isc-kea-hooks_2.4.1-isc20231123184533/isc-kea-hooks_2.4.1-isc20231123184533_amd64.deb"
dest: "/tmp"
- name: Update apt
apt:
update_cache: yes
- name: Installation paquet isc-kea-common
apt:
deb: "/tmp/isc-kea-common_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-dhcp4
apt:
deb: "/tmp/isc-kea-dhcp4_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-ctrl-agent
apt:
deb: "/tmp/isc-kea-ctrl-agent_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Installation isc-kea-hooks
apt:
deb: "/tmp/isc-kea-hooks_2.4.1-isc20231123184533_amd64.deb"
state: present
- name: Copie du repertoire des hooks dans le repertoire /usr/local/lib/kea
copy:
src: /usr/lib/x86_64-linux-gnu/kea/
dest: /usr/local/lib/kea/
- name: Copie du fichier de configuration kea-dhcp4.conf
copy:
src: kea-dhcp4.conf
dest: /etc/kea/kea-dhcp4.conf
notify:
- restart isc-kea-dhcp4-server
- name: Copie du fichier de configuration kea-ctrl-agent
copy:
src: kea-ctrl-agent.conf
dest: /etc/kea/kea-ctrl-agent.conf
notify:
- restart isc-kea-ctrl-agent

View File

@ -1,66 +0,0 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
"Control-agent":
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.1",
// This specifies the port CA will listen on.
"http-port": 8000,
"control-sockets":
{
// This is how the Agent can communicate with the DHCPv4 server.
"dhcp4":
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tm/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.
# "dhcp6":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea6-ctrl-socket"
# },
// Location of the D2 command channel socket.
# "d2":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea-ddns-ctrl-socket",
# "user-context": { "in-use": false }
# }
},
// Similar to other Kea components, CA also uses logging.
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
// Several additional parameters are possible in addition
// to the typical output. Flush determines whether logger
// flushes output to a file. Maxsize determines maximum
// filesize before the file is rotated. maxver
// specifies the maximum number of rotated files being
// kept.
"flush": true,
"maxsize": 204800,
"maxver": 4,
// We use pattern to specify custom log message layout
"pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
}
],
"severity": "INFO",
"debuglevel": 0 // debug level only applies when severity is set to DEBUG.
}
]
}
}

View File

@ -1,226 +0,0 @@
// This is an example configuration of the Kea DHCPv4 server 1:
//
// - uses High Availability hook library and Lease Commands hook library
// to enable High Availability function for the DHCP server. This config
// file is for the primary (the active) server.
// - uses memfile, which stores lease data in a local CSV file
// - it assumes a single /24 addressing over a link that is directly reachable
// (no DHCP relays)
// - there is a handful of IP reservations
//
// It is expected to run with a standby (the passive) server, which has a very similar
// configuration. The only difference is that "this-server-name" must be set to "server2" on the
// other server. Also, the interface configuration depends on the network settings of the
// particular machine.
{
"Dhcp4": {
// Add names of your network interfaces to listen on.
"interfaces-config": {
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s8" ]
},
// Control socket is required for communication between the Control
// Agent and the DHCP server. High Availability requires Control Agent
// to be running because lease updates are sent over the RESTful
// API between the HA peers.
"control-socket": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Use Memfile lease database backend to store leases in a CSV file.
// Depending on how Kea was compiled, it may also support SQL databases
// (MySQL and/or PostgreSQL). Those database backends require more
// parameters, like name, host and possibly user and password.
// There are dedicated examples for each backend. See Section 7.2.2 "Lease
// Storage" for details.
"lease-database": {
// Memfile is the simplest and easiest backend to use. It's an in-memory
// database with data being written to a CSV file. It is very similar to
// what ISC DHCP does.
"type": "memfile"
},
// Let's configure some global parameters. The home network is not very dynamic
// and there's no shortage of addresses, so no need to recycle aggressively.
"valid-lifetime": 43200, // leases will be valid for 12h
"renew-timer": 21600, // clients should renew every 6h
"rebind-timer": 32400, // clients should start looking for other servers after 9h
// Kea will clean up its database of expired leases once per hour. However, it
// will keep the leases in expired state for 2 days. This greatly increases the
// chances for returning devices to get the same address again. To guarantee that,
// use host reservation.
// If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are
// not 0, when the client sends a release message the lease is expired
// instead of being deleted from lease storage.
"expired-leases-processing": {
"reclaim-timer-wait-time": 3600,
"hold-reclaimed-time": 172800,
"max-reclaim-leases": 0,
"max-reclaim-time": 0
},
// HA requires two hook libraries to be loaded: libdhcp_lease_cmds.so and
// libdhcp_ha.so. The former handles incoming lease updates from the HA peers.
// The latter implements high availability feature for Kea. Note the library name
// should be the same, but the path is OS specific.
"hooks-libraries": [
// The lease_cmds library must be loaded because HA makes use of it to
// deliver lease updates to the server as well as synchronize the
// lease database after failure.
{
"library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
},
{
// The HA hook library should be loaded.
"library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
// Each server should have the same HA configuration, except for the
// "this-server-name" parameter.
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "kea1",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
// Heartbeat is to be sent every 10 seconds if no other control
// commands are transmitted.
"heartbeat-delay": 10000,
// Maximum time for partner's response to a heartbeat, after which
// failure detection is started. This is specified in milliseconds.
// If we don't hear from the partner in 60 seconds, it's time to
// start worrying.
"max-response-delay": 30000,
// The following parameters control how the server detects the
// partner's failure. The ACK delay sets the threshold for the
// 'secs' field of the received discovers. This is specified in
// milliseconds.
"max-ack-delay": 5000,
// This specifies the number of clients which send messages to
// the partner but appear to not receive any response.
"max-unacked-clients": 0,
// This specifies the maximum timeout (in milliseconds) for the server
// to complete sync. If you have a large deployment (high tens or
// hundreds of thousands of clients), you may need to increase it
// further. The default value is 60000ms (60 seconds).
"sync-timeout": 60000,
"peers": [
// This is the configuration of this server instance.
{
"name": "kea1",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.1:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "kea2",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.2:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
}
]
} ]
}
}
],
// This example contains a single subnet declaration.
"subnet4": [
{
// Subnet prefix.
"subnet": "172.16.64.0/24",
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s8",
// Specify a dynamic address pool.
"pools": [
{
"pool": "172.16.64.100-172.16.64.150"
}
],
// These are options that are subnet specific. In most cases, you need to define at
// least routers option, as without this option your clients will not be able to reach
// their default gateway and will not have Internet connectivity. If you have many
// subnets and they share the same options (e.g. DNS servers typically is the same
// everywhere), you may define options at the global scope, so you don't repeat them
// for every network.
"option-data": [
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.1"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
// to your own DNS servers if you have them. Another popular
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.64.1"
}
],
// Some devices should get a static address. Since the .100 - .199 range is dynamic,
// let's use the lower address space for this. There are many ways how reservation
// can be defined, but using MAC address (hw-address) is by far the most popular one.
// You can use client-id, duid and even custom defined flex-id that may use whatever
// parts of the packet you want to use as identifiers. Also, there are many more things
// you can specify in addition to just an IP address: extra options, next-server, hostname,
// assign device to client classes etc. See the Kea ARM, Section 8.3 for details.
// The reservations are subnet specific.
#"reservations": [
# {
# "hw-address": "1a:1b:1c:1d:1e:1f",
# "ip-address": "192.168.1.10"
# },
# {
# "client-id": "01:11:22:33:44:55:66",
# "ip-address": "192.168.1.11"
# }
#]
}
],
// fichier de logs
"loggers": [
{
// This section affects kea-dhcp4, which is the base logger for DHCPv4 component. It tells
// DHCPv4 server to write all log messages (on severity INFO or higher) to a file. The file
// will be rotated once it grows to 2MB and up to 4 files will be kept. The debuglevel
// (range 0 to 99) is used only when logging on DEBUG level.
"name": "kea-dhcp4",
"output_options": [
{
"output": "stdout",
"maxsize": 2048000,
"maxver": 4
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}

View File

@ -16,7 +16,7 @@
- name: 20 - decompresse wordpress
unarchive:
src: http://s-adm.gsb.adm/gsbstore/wordpress-6.4.2-fr_FR.tar.gz
src: https://fr.wordpress.org/latest-fr_FR.tar.gz
dest: /home/
remote_src: yes

View File

@ -1,2 +1,2 @@
depl_url: "http://s-adm.gsb.adm/gsbstore/"
depl_wordpress: "wordpress-6.4.2-fr_FR.tar.gz"
depl_wordpress: "wordpress-6.1.1-fr_FR.tar.gz"

View File

@ -21,11 +21,11 @@
- name: Copie de dynamic.yml
copy:
src: dynamic.yml
src: dynamic.yml
dest: /root/nxc/config
- name: Copie de docker-compose.yml
copy:
copy:
src: docker-compose.yml
dest: /root/nxc
@ -69,14 +69,8 @@
args:
chdir: /root/nxc
- name: vérification si le réseau proxy existe
command: docker network ls --filter name=proxy
register: net_proxy
- name: création du réseau proxy
- name: Creation reseau docker proxy
command: docker network create proxy
# when: net_proxy.stdout.find('proxy') == -1
when: "'proxy' not in net_proxy.stdout"
#- name: Démarrage du docker-compose...
#command: /bin/bash docker-compose up -d

View File

@ -7,15 +7,15 @@ iface lo inet loopback
# carte n-adm
allow-hotplug enp0s3
iface enp0s3 inet static
address 192.168.99.102/24
address 192.168.99.101/24
# Réseau n-dmz-lb
allow-hotplug enp0s8
iface enp0s8 inet static
address 192.168.101.2/24
address 192.168.101.1/24
# réseau n-dmz-db
allow-hotplug enp0s9
iface enp0s9 inet static
address 192.168.102.2/24
address 192.168.102.1/24
post-up mount -o rw 192.168.102.253:/home/wordpress /var/www/html

View File

@ -1,23 +0,0 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
#source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# cote n-adm
allow-hotplug enp0s3
iface enp0s3 inet static
address 192.168.99.22/24
gateway 192.168.99.99
# Cote n-infra
allow-hotplug enp0s8
iface enp0s8 inet static
address 172.16.0.22/24
up ip route add 172.16.64.0/24 via 172.16.0.254
up ip route add 172.16.128.0/24 via 172.16.0.254
up ip route add 192.168.0.0/16 via 172.16.0.254
up ip route add 192.168.200.0/24 via 172.16.0.254

View File

@ -1,26 +0,0 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# cote N-adm
allow-hotplug enp0s3
iface enp0s3 inet static
address 192.168.99.20
netmask 255.255.255.0
gateway 192.168.99.99
# cote N-infra
allow-hotplug enp0s8
iface enp0s8 inet static
address 172.16.0.20
netmask 255.255.255.0
#cote N-user
allow-hotplug enp0s9
iface enp0s9 inet static
address 172.16.64.20
netmask 255.255.255.0

View File

@ -1,26 +0,0 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# cote N-adm
allow-hotplug enp0s3
iface enp0s3 inet static
address 192.168.99.21
netmask 255.255.255.0
gateway 192.168.99.99
# cote N-infra
allow-hotplug enp0s8
iface enp0s8 inet static
address 172.16.0.21
netmask 255.255.255.0
#cote N-user
allow-hotplug enp0s9
iface enp0s9 inet static
address 172.16.64.21
netmask 255.255.255.0

View File

@ -1,21 +1,13 @@
## **VPN installation overview:**
The installation process has three distinct phases. First, the **r-vp1** playbook is run. Then, in a second step, the r-vp2 playbook is deployed. Finally, the last phase sets up our filtering with **ferm**.
## **Wireguard directory layout:**
The wireguard-r directory corresponds to r-vp1,
wireguard-l to r-vp2.
# <p align="center">Installation procedure</p>
for **r-vp1**, including the copy of the wg0-b.conf file.
***
## On **r-vp1**:
Wait for the installation to finish, then start an HTTP server with python3 so that **r-vp2** can fetch the wg0-b.conf file (see the sketch below).
Run the playbook *ansible-playbook -i localhost, -c local r-vp1.yml* on **r-vp1**.
Wait for the installation to finish, then run the r-vp1-post.sh script.
### 🛠️ Run the r-vp1-post.sh script
### 🛠️ Run the script
```bash
cd /tools/ansible/gsb2023/Scripts
```
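
A minimal sketch of the transfer step mentioned above; the directory holding wg0-b.conf and the address of r-vp1 are assumptions to adapt:

```bash
# On r-vp1: serve the generated configuration over HTTP (directory is an assumption)
cd /etc/wireguard && python3 -m http.server 8000

# On r-vp2: fetch wg0-b.conf from r-vp1 (replace <r-vp1_address> with its real IP)
wget http://<r-vp1_address>:8000/wg0-b.conf -O /etc/wireguard/wg0-b.conf
```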

View File

@ -1,24 +1,14 @@
# Zabbix client role
# Nagios role
***
Zabbix client role for supervising the various machines in active mode
Nagios role for supervising the various machines
## Table of contents
1. [What does the Zabbix role do?]
2. [Installation and configuration of zabbix-agent]
3. [Windows part]
## What does the Zabbix role do?
It configures the Zabbix agents in active mode towards the server.
## What does the Nagios role do?
It configures the zabbix agents in active mode.
### Installation and configuration of zabbix-agent
The Zabbix-cli role installs zabbix-agent on the Debian servers. The parameters can be changed in the 'defaults' file. This is an active-mode configuration, which means that on the server side you only have to declare the hosts with their name, their OS type and, in our case, the fact that they are virtual machines on the Zabbix server.
### Windows part
zabbix-agent does not work differently from Linux. However, when you are on the Zabbix website to install the agent, make sure to pick the classic zabbix-agent rather than version 2, which requires more resources for little extra supervision.
Regarding the configuration during the installation of the Zabbix agent, it asks for information such as the server IP. You do not have to provide it at that point, because everything can be changed afterwards.
The configuration file is the same as the one used on Linux. If you performed a default installation of the Zabbix agent, the configuration files are in C:\Program Files\Zabbix Agent and the configuration file is named "zabbix_agentd.conf".
Before any configuration after installing the Zabbix agent, remember to open the Task Manager, then Services; at the very bottom you will find 'Zabbix Agent' running. Stop it, and you will then be able to edit the configuration without any problem.
To enable the Zabbix agent in active mode, simply set 'Server' to 127.0.0.1 and 'ServerActive' to the IP address of your Zabbix server, 172.16.0.8 in our case. Do not forget to change the 'Hostname' value, since it is the one you will have to enter in the hosts on the Zabbix server for the supervision data to come up. Also remember to restart the service once the Zabbix agent is configured.
The Zabbix-cli role installs zabbix-agent on the servers; zabbix-agent will be our supervision tool on the server side.
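
A minimal check of the resulting active-mode configuration on a Debian host, using the example values given above (adapt ServerActive to your own Zabbix server):

```bash
# Display the active-mode values of the agent, then restart it
grep -E '^(Server|ServerActive|Hostname)=' /etc/zabbix/zabbix_agentd.conf
# Expected, with the example values from this README:
#   Server=127.0.0.1
#   ServerActive=172.16.0.8
#   Hostname=<name declared on the Zabbix server>
systemctl restart zabbix-agent
```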

View File

@ -1,2 +1,6 @@
SERVER: "127.0.0.1"
SERVERACTIVE: "172.16.0.8"
PidFile: "/run/zabbix/zabbix_agentd.pid"
LogFile: "/var/log/zabbix/zabbix_agentd.log"
LogFileSize: "0"
Server: "127.0.0.1"
ServerActive: "192.168.99.106"
Include: "/etc/zabbix/zabbix_agentd.d/*.conf"

View File

@ -1,5 +1,12 @@
- name: restart zabbix agent
service:
name: zabbix-agent
state: restarted
enabled: yes
- name: config
template:
src: zabbix_agentd.conf.temp
dest: /etc/zabbix/zabbix_agentd.conf
vars:
PidFile: "{{ PidFile }}"
LogFile: "{{ LogFile }}"
LogFileSize: "{{ LogFileSize }}"
Server: "{{ Server }}"
ServerActive: "{{ ServerActive }}"
Hostname: "{{ ansible_hostname }}"
Include: "{{ Include }}"

View File

@ -1,7 +1,7 @@
- name: Installation paquet zabbix agent
get_url:
url: "https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb"
dest: "/tmp"
url: "https://repo.zabbix.com/zabbix/6.4/debian/pool/main/z/zabbix-release/zabbix-release_6.4-1+debian12_all.deb"
dest: "/tmp/zabbix-release_6.4-1+debian12_all.deb"
- name: Installation paquet zabbix agent suite
apt:
@ -17,14 +17,13 @@
name: zabbix-agent
state: present
- name: Mise en place du fichier conf zabbix agent (active)
template:
src: zabbix_agentd.conf.j2
dest: /etc/zabbix/zabbix_agentd.conf
- name: Enable Zabbix agent service
service:
systemd:
name: zabbix-agent
state: restarted
enabled: yes
- name: Rm package
file:
path: "/tmp/zabbix-release_6.4-1+debian12_all.deb"
state: absent

View File

@ -1,554 +0,0 @@
# This is a configuration file for Zabbix agent daemon (Unix)
# To get more information about Zabbix, visit http://www.zabbix.com
############ GENERAL PARAMETERS #################
### Option: PidFile
# Name of PID file.
#
# Mandatory: no
# Default:
# PidFile=/tmp/zabbix_agentd.pid
PidFile=/run/zabbix/zabbix_agentd.pid
### Option: LogType
# Specifies where log messages are written to:
# system - syslog
# file - file specified with LogFile parameter
# console - standard output
#
# Mandatory: no
# Default:
# LogType=file
### Option: LogFile
# Log file name for LogType 'file' parameter.
#
# Mandatory: yes, if LogType is set to file, otherwise no
# Default:
# LogFile=
LogFile=/var/log/zabbix/zabbix_agentd.log
### Option: LogFileSize
# Maximum size of log file in MB.
# 0 - disable automatic log rotation.
#
# Mandatory: no
# Range: 0-1024
# Default:
# LogFileSize=1
LogFileSize=0
### Option: DebugLevel
# Specifies debug level:
# 0 - basic information about starting and stopping of Zabbix processes
# 1 - critical information
# 2 - error information
# 3 - warnings
# 4 - for debugging (produces lots of information)
# 5 - extended debugging (produces even more information)
#
# Mandatory: no
# Range: 0-5
# Default:
# DebugLevel=3
### Option: SourceIP
# Source IP address for outgoing connections.
#
# Mandatory: no
# Default:
# SourceIP=
### Option: AllowKey
# Allow execution of item keys matching pattern.
# Multiple keys matching rules may be defined in combination with DenyKey.
# Key pattern is wildcard expression, which support "*" character to match any number of any characters in certain position. It might be used in both key name and key arguments.
# Parameters are processed one by one according their appearance order.
# If no AllowKey or DenyKey rules defined, all keys are allowed.
#
# Mandatory: no
### Option: DenyKey
# Deny execution of items keys matching pattern.
# Multiple keys matching rules may be defined in combination with AllowKey.
# Key pattern is wildcard expression, which support "*" character to match any number of any characters in certain position. It might be used in both key name and key arguments.
# Parameters are processed one by one according their appearance order.
# If no AllowKey or DenyKey rules defined, all keys are allowed.
# Unless another system.run[*] rule is specified DenyKey=system.run[*] is added by default.
#
# Mandatory: no
# Default:
# DenyKey=system.run[*]
### Option: EnableRemoteCommands - Deprecated, use AllowKey=system.run[*] or DenyKey=system.run[*] instead
# Internal alias for AllowKey/DenyKey parameters depending on value:
# 0 - DenyKey=system.run[*]
# 1 - AllowKey=system.run[*]
#
# Mandatory: no
### Option: LogRemoteCommands
# Enable logging of executed shell commands as warnings.
# 0 - disabled
# 1 - enabled
#
# Mandatory: no
# Default:
# LogRemoteCommands=0
##### Passive checks related
### Option: Server
# List of comma delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies.
# Incoming connections will be accepted only from the hosts listed here.
# If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally
# and '::/0' will allow any IPv4 or IPv6 address.
# '0.0.0.0/0' can be used to allow any IPv4 address.
# Example: Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
#
# Mandatory: yes, if StartAgents is not explicitly set to 0
# Default:
# Server=
Server = {{ SERVER }}
### Option: ListenPort
# Agent will listen on this port for connections from the server.
#
# Mandatory: no
# Range: 1024-32767
# Default:
# ListenPort=10050
### Option: ListenIP
# List of comma delimited IP addresses that the agent should listen on.
# First IP address is sent to Zabbix server if connecting to it to retrieve list of active checks.
#
# Mandatory: no
# Default:
# ListenIP=0.0.0.0
### Option: StartAgents
# Number of pre-forked instances of zabbix_agentd that process passive checks.
# If set to 0, disables passive checks and the agent will not listen on any TCP port.
#
# Mandatory: no
# Range: 0-100
# Default:
# StartAgents=3
##### Active checks related
### Option: ServerActive
# Zabbix server/proxy address or cluster configuration to get active checks from.
# Server/proxy address is IP address or DNS name and optional port separated by colon.
# Cluster configuration is one or more server addresses separated by semicolon.
# Multiple Zabbix servers/clusters and Zabbix proxies can be specified, separated by comma.
# More than one Zabbix proxy should not be specified from each Zabbix server/cluster.
# If Zabbix proxy is specified then Zabbix server/cluster for that proxy should not be specified.
# Multiple comma-delimited addresses can be provided to use several independent Zabbix servers in parallel. Spaces are allowed.
# If port is not specified, default port is used.
# IPv6 addresses must be enclosed in square brackets if port for that host is specified.
# If port is not specified, square brackets for IPv6 addresses are optional.
# If this parameter is not specified, active checks are disabled.
# Example for Zabbix proxy:
# ServerActive=127.0.0.1:10051
# Example for multiple servers:
# ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
# Example for high availability:
# ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051;zabbix.cluster.node3
# Example for high availability with two clusters and one server:
# ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051,zabbix.cluster2.node1;zabbix.cluster2.node2,zabbix.domain
#
# Mandatory: no
# Default:
# ServerActive=
ServerActive = {{ SERVERACTIVE }}
### Option: Hostname
# List of comma delimited unique, case sensitive hostnames.
# Required for active checks and must match hostnames as configured on the server.
# Value is acquired from HostnameItem if undefined.
#
# Mandatory: no
# Default:
# Hostname=
Hostname = {{ ansible_hostname }}
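# Note: Server, ServerActive and Hostname above are Jinja2 placeholders rendered by
# Ansible when this template is deployed on the supervised hosts.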
### Option: HostnameItem
# Item used for generating Hostname if it is undefined. Ignored if Hostname is defined.
# Does not support UserParameters or aliases.
#
# Mandatory: no
# Default:
# HostnameItem=system.hostname
### Option: HostMetadata
# Optional parameter that defines host metadata.
# Host metadata is used at host auto-registration process.
# An agent will issue an error and not start if the value is over limit of 2034 bytes.
# If not defined, value will be acquired from HostMetadataItem.
#
# Mandatory: no
# Range: 0-2034 bytes
# Default:
# HostMetadata=
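# Hypothetical example (not part of this template): tag Linux agents so that a
# server-side auto-registration action can match them; the value is an assumption.
# HostMetadata=Linux gsb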
### Option: HostMetadataItem
# Optional parameter that defines an item used for getting host metadata.
# Host metadata is used at host auto-registration process.
# During an auto-registration request an agent will log a warning message if
# the value returned by specified item is over limit of 65535 characters.
# This option is only used when HostMetadata is not defined.
#
# Mandatory: no
# Default:
# HostMetadataItem=
### Option: HostInterface
# Optional parameter that defines host interface.
# Host interface is used at host auto-registration process.
# An agent will issue an error and not start if the value is over limit of 255 characters.
# If not defined, value will be acquired from HostInterfaceItem.
#
# Mandatory: no
# Range: 0-255 characters
# Default:
# HostInterface=
### Option: HostInterfaceItem
# Optional parameter that defines an item used for getting host interface.
# Host interface is used at host auto-registration process.
# During an auto-registration request an agent will log a warning message if
# the value returned by specified item is over limit of 255 characters.
# This option is only used when HostInterface is not defined.
#
# Mandatory: no
# Default:
# HostInterfaceItem=
### Option: RefreshActiveChecks
# How often list of active checks is refreshed, in seconds.
#
# Mandatory: no
# Range: 1-86400
# Default:
# RefreshActiveChecks=5
### Option: BufferSend
# Do not keep data longer than N seconds in buffer.
#
# Mandatory: no
# Range: 1-3600
# Default:
# BufferSend=5
### Option: BufferSize
# Maximum number of values in a memory buffer. The agent will send
# all collected data to Zabbix Server or Proxy if the buffer is full.
#
# Mandatory: no
# Range: 2-65535
# Default:
# BufferSize=100
### Option: MaxLinesPerSecond
# Maximum number of new lines the agent will send per second to Zabbix Server
# or Proxy processing 'log' and 'logrt' active checks.
# The provided value will be overridden by the parameter 'maxlines',
# provided in 'log' or 'logrt' item keys.
#
# Mandatory: no
# Range: 1-1000
# Default:
# MaxLinesPerSecond=20
### Option: HeartbeatFrequency
# Frequency of heartbeat messages in seconds.
# Used for monitoring availability of active checks.
# 0 - heartbeat messages disabled.
#
# Mandatory: no
# Range: 0-3600
# Default: 60
# HeartbeatFrequency=
############ ADVANCED PARAMETERS #################
### Option: Alias
# Sets an alias for an item key. It can be used to substitute long and complex item key with a smaller and simpler one.
# Multiple Alias parameters may be present. Multiple parameters with the same Alias key are not allowed.
# Different Alias keys may reference the same item key.
# For example, to retrieve the ID of user 'zabbix':
# Alias=zabbix.userid:vfs.file.regexp[/etc/passwd,^zabbix:.:([0-9]+),,,,\1]
# Now shorthand key zabbix.userid may be used to retrieve data.
# Aliases can be used in HostMetadataItem but not in HostnameItem parameters.
#
# Mandatory: no
# Range:
# Default:
### Option: Timeout
# Spend no more than Timeout seconds on processing
#
# Mandatory: no
# Range: 1-30
# Default:
# Timeout=3
### Option: AllowRoot
# Allow the agent to run as 'root'. If disabled and the agent is started by 'root', the agent
# will try to switch to the user specified by the User configuration option instead.
# Has no effect if started under a regular user.
# 0 - do not allow
# 1 - allow
#
# Mandatory: no
# Default:
# AllowRoot=0
### Option: User
# Drop privileges to a specific, existing user on the system.
# Only has effect if run as 'root' and AllowRoot is disabled.
#
# Mandatory: no
# Default:
# User=zabbix
# NOTE: This option is overridden by settings in the systemd service file!
### Option: Include
# You may include individual files or all files in a directory in the configuration file.
# Installing Zabbix will create include directory in /usr/local/etc, unless modified during the compile time.
#
# Mandatory: no
# Default:
# Include=
Include = /etc/zabbix/zabbix_agentd.d/*.conf
# Include=/usr/local/etc/zabbix_agentd.userparams.conf
# Include=/usr/local/etc/zabbix_agentd.conf.d/
# Include=/usr/local/etc/zabbix_agentd.conf.d/*.conf
####### USER-DEFINED MONITORED PARAMETERS #######
### Option: UnsafeUserParameters
# Allow all characters to be passed in arguments to user-defined parameters.
# The following characters are not allowed:
# \ ' " ` * ? [ ] { } ~ $ ! & ; ( ) < > | # @
# Additionally, newline characters are not allowed.
# 0 - do not allow
# 1 - allow
#
# Mandatory: no
# Range: 0-1
# Default:
# UnsafeUserParameters=0
### Option: UserParameter
# User-defined parameter to monitor. There can be several user-defined parameters.
# Format: UserParameter=<key>,<shell command>
# See 'zabbix_agentd' directory for examples.
#
# Mandatory: no
# Default:
# UserParameter=
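# Hypothetical example (not part of this template): expose the number of open
# sessions under a custom key; the key name and command are assumptions.
# UserParameter=custom.session.count,who | wc -l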
### Option: UserParameterDir
# Directory to execute UserParameter commands from. Only one entry is allowed.
# When executing UserParameter commands the agent will change the working directory to the one
# specified in the UserParameterDir option.
# This way UserParameter commands can be specified using the relative ./ prefix.
#
# Mandatory: no
# Default:
# UserParameterDir=
####### LOADABLE MODULES #######
### Option: LoadModulePath
# Full path to location of agent modules.
# Default depends on compilation options.
# To see the default path run command "zabbix_agentd --help".
#
# Mandatory: no
# Default:
# LoadModulePath=${libdir}/modules
### Option: LoadModule
# Module to load at agent startup. Modules are used to extend functionality of the agent.
# Formats:
# LoadModule=<module.so>
# LoadModule=<path/module.so>
# LoadModule=</abs_path/module.so>
# Either the module must be located in directory specified by LoadModulePath or the path must precede the module name.
# If the preceding path is absolute (starts with '/') then LoadModulePath is ignored.
# It is allowed to include multiple LoadModule parameters.
#
# Mandatory: no
# Default:
# LoadModule=
####### TLS-RELATED PARAMETERS #######
### Option: TLSConnect
# How the agent should connect to server or proxy. Used for active checks.
# Only one value can be specified:
# unencrypted - connect without encryption
# psk - connect using TLS and a pre-shared key
# cert - connect using TLS and a certificate
#
# Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
# Default:
# TLSConnect=unencrypted
### Option: TLSAccept
# What incoming connections to accept.
# Multiple values can be specified, separated by comma:
# unencrypted - accept connections without encryption
# psk - accept connections secured with TLS and a pre-shared key
# cert - accept connections secured with TLS and a certificate
#
# Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
# Default:
# TLSAccept=unencrypted
### Option: TLSCAFile
# Full pathname of a file containing the top-level CA(s) certificates for
# peer certificate verification.
#
# Mandatory: no
# Default:
# TLSCAFile=
### Option: TLSCRLFile
# Full pathname of a file containing revoked certificates.
#
# Mandatory: no
# Default:
# TLSCRLFile=
### Option: TLSServerCertIssuer
# Allowed server certificate issuer.
#
# Mandatory: no
# Default:
# TLSServerCertIssuer=
### Option: TLSServerCertSubject
# Allowed server certificate subject.
#
# Mandatory: no
# Default:
# TLSServerCertSubject=
### Option: TLSCertFile
# Full pathname of a file containing the agent certificate or certificate chain.
#
# Mandatory: no
# Default:
# TLSCertFile=
### Option: TLSKeyFile
# Full pathname of a file containing the agent private key.
#
# Mandatory: no
# Default:
# TLSKeyFile=
### Option: TLSPSKIdentity
# Unique, case sensitive string used to identify the pre-shared key.
#
# Mandatory: no
# Default:
# TLSPSKIdentity=
### Option: TLSPSKFile
# Full pathname of a file containing the pre-shared key.
#
# Mandatory: no
# Default:
# TLSPSKFile=
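# Hypothetical PSK setup (not part of this template): every value below is a
# placeholder and would have to match the PSK declared on the Zabbix server.
# TLSConnect=psk
# TLSAccept=psk
# TLSPSKIdentity=PSK-agent-gsb
# TLSPSKFile=/etc/zabbix/zabbix_agentd.psk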
####### For advanced users - TLS ciphersuite selection criteria #######
### Option: TLSCipherCert13
# Cipher string for OpenSSL 1.1.1 or newer in TLS 1.3.
# Override the default ciphersuite selection criteria for certificate-based encryption.
#
# Mandatory: no
# Default:
# TLSCipherCert13=
### Option: TLSCipherCert
# GnuTLS priority string or OpenSSL (TLS 1.2) cipher string.
# Override the default ciphersuite selection criteria for certificate-based encryption.
# Example for GnuTLS:
# NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL:+CTYPE-X.509
# Example for OpenSSL:
# EECDH+aRSA+AES128:RSA+aRSA+AES128
#
# Mandatory: no
# Default:
# TLSCipherCert=
### Option: TLSCipherPSK13
# Cipher string for OpenSSL 1.1.1 or newer in TLS 1.3.
# Override the default ciphersuite selection criteria for PSK-based encryption.
# Example:
# TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
#
# Mandatory: no
# Default:
# TLSCipherPSK13=
### Option: TLSCipherPSK
# GnuTLS priority string or OpenSSL (TLS 1.2) cipher string.
# Override the default ciphersuite selection criteria for PSK-based encryption.
# Example for GnuTLS:
# NONE:+VERS-TLS1.2:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL
# Example for OpenSSL:
# kECDHEPSK+AES128:kPSK+AES128
#
# Mandatory: no
# Default:
# TLSCipherPSK=
### Option: TLSCipherAll13
# Cipher string for OpenSSL 1.1.1 or newer in TLS 1.3.
# Override the default ciphersuite selection criteria for certificate- and PSK-based encryption.
# Example:
# TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
#
# Mandatory: no
# Default:
# TLSCipherAll13=
### Option: TLSCipherAll
# GnuTLS priority string or OpenSSL (TLS 1.2) cipher string.
# Override the default ciphersuite selection criteria for certificate- and PSK-based encryption.
# Example for GnuTLS:
# NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL:+CTYPE-X.509
# Example for OpenSSL:
# EECDH+aRSA+AES128:RSA+aRSA+AES128:kECDHEPSK+AES128:kPSK+AES128
#
# Mandatory: no
# Default:
# TLSCipherAll=
####### For advanced users - TCP-related fine-tuning parameters #######
## Option: ListenBacklog
# The maximum number of pending connections in the queue. This parameter is passed to
# listen() function as argument 'backlog' (see "man listen").
#
# Mandatory: no
# Range: 0 - INT_MAX (depends on system, too large values may be silently truncated to implementation-specified maximum)
# Default: SOMAXCONN (hard-coded constant, depends on system)
# ListenBacklog=

View File

@ -0,0 +1,7 @@
PidFile={{ PidFile }}
LogFile={{ LogFile }}
LogFileSize={{ LogFileSize }}
Server={{ Server }}
ServerActive={{ ServerActive }}
Hostname={{ Hostname }}
Include={{ Include }}
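Ce gabarit minimal est rempli par Ansible (variables Jinja2 PidFile, LogFile, LogFileSize, Server, ServerActive, Hostname, Include). À titre purement indicatif, voici une ébauche de tâche de déploiement ; le nom du fichier source, le handler et le mode sont des hypothèses, pas le contenu exact du rôle :

```
- name: Generer /etc/zabbix/zabbix_agentd.conf depuis le gabarit
  ansible.builtin.template:
    src: zabbix_agentd.conf.j2        # nom de fichier supposé
    dest: /etc/zabbix/zabbix_agentd.conf
    owner: root
    group: root
    mode: "0644"
  notify: restart zabbix-agent        # handler supposé dans le rôle
```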

View File

@ -1,15 +1,18 @@
# Rôle zabbix-srv
# Rôle nagios
***
Rôle zabbix-srv pour la supervision des différentes machines
Rôle Nagios pour la supervision des différentes machines
## Table des matières
1. Que fait le rôle zabbix-srv ?
1. [Que fait le rôle Zabbix ?]
## Que fait le rôle zabbix-srv ?
## Que fait le rôle Nagios ?
Le rôle zabbix-srv installe `apache2` pour le serveur web, `zabbix-server` pour la supervision et `zabbix-agent` pour gérer les clients ; **Zabbix** sera notre outil de supervision.
Lors de l'exécution du playbook, les identifiants de la BDD sont créés avec le nom d'utilisateur "zabbix" et le mot de passe "password".
### Installation et configuration de Zabbix
L'interface web de Zabbix est accessible à l'adresse <http://s-mon/zabbix> avec l'identifiant "Admin" et le mot de passe "zabbix".
Le rôle Zabbix installe apache2 pour le serveur web et zabbix-server pour la supervision ; Zabbix sera notre outil de supervision.
Lors de la première connexion, on indique les identifiants de la BDD : "zabbix" et "password".
Pour l'identifiant Zabbix, c'est "Admin" et le mot de passe "zabbix", à l'adresse "https://s-mon/zabbix".
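À titre indicatif, une ébauche des tâches qui créent la base et l'utilisateur évoqués ci-dessus (les tâches réelles du rôle peuvent différer ; le socket et l'encodage reprennent ceux utilisés plus loin dans le rôle) :

```
- name: Creer la base de donnees zabbix
  community.mysql.mysql_db:
    name: zabbix
    encoding: utf8mb4
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock

- name: Creer l'utilisateur zabbix avec le mot de passe password
  community.mysql.mysql_user:
    name: zabbix
    password: password
    priv: "zabbix.*:ALL"
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock
```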

View File

@ -53,14 +53,7 @@
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 9. Importer le schema initial
community.mysql.mysql_db:
state: import
name: zabbix
encoding: utf8mb4
login_user: zabbix
login_password: password
target: /usr/share/zabbix-sql-scripts/mysql/server.sql.gz
login_unix_socket: /var/run/mysqld/mysqld.sock
shell: zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz | mysql --default-character-set=utf8mb4 -uzabbix -ppassword zabbix
- name: 10. Modifier la variable pour le schema
community.mysql.mysql_variables:
@ -75,20 +68,11 @@
regexp: '^# DBPassword='
replace: 'DBPassword=password'
- name: 12. Lancer le service zabbix-server
- name: 12. Lancer le service zabbix
service:
name: zabbix-server
state: restarted
enabled: yes
- name: 13. Lancer le service zabbix-agent
service:
name: zabbix-agent
state: restarted
enabled: yes
- name: 14. Lancer le service apache2
service:
name: apache2
name:
- zabbix-server
- zabbix-agent
- apache2
state: restarted
enabled: yes
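L'import du schéma par `shell` ci-dessus est relancé à chaque exécution du playbook ; à titre indicatif, une variante idempotente (le fichier témoin `/root/.zabbix_schema_imported` est une hypothèse) :

```
- name: Importer le schema initial (variante idempotente, a titre indicatif)
  ansible.builtin.shell: >
    zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz
    | mysql --default-character-set=utf8mb4 -uzabbix -ppassword zabbix
    && touch /root/.zabbix_schema_imported
  args:
    creates: /root/.zabbix_schema_imported
```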

View File

@ -1,13 +0,0 @@
---
- hosts: localhost
connection: local
roles:
# - base
# - goss
# - dhcp-fog
#- ssh-cli
#- snmp-agent
# - syslog-cli
- fog
#- post

View File

@ -5,8 +5,9 @@
roles:
- base
- goss
#- dhcp-fog
# - ssh-cli
#- fog
#- - journald-snd
- dhcp-fog
- ssh-cli
- snmp-agent
# - syslog-cli
- fog
- post

View File

@ -4,10 +4,10 @@
# include: config.yml
roles:
- base
- zabbix-cli
- goss
- goss
- dns-master
- webautoconf
- zabbix-cli
- journald-snd
- ssh-cli
- post

View File

@ -17,5 +17,4 @@
- glpi
- ssh-cli
# - syslog-cli
- journald-snd
- post

View File

@ -1,13 +0,0 @@
---
- hosts: localhost
connection: local
roles:
- base
#- goss
#- ssh-cli
- kea-master
#- zabbix-cli
#- journald-snd
#- snmp-agent
- post

View File

@ -1,13 +0,0 @@
---
- hosts: localhost
connection: local
roles:
- base
# - goss
# - ssh-cli
- kea-slave
# - zabbix-cli
# - journald-snd
# - snmp-agent
- post

View File

@ -9,5 +9,5 @@
- goss
- lb-bd
- post
#- zabbix-cli
- snmp-agent
- ssh-cli

View File

@ -4,9 +4,8 @@
roles:
- base
- goss
- post-lb
- lb-web
# - zabbix-cli
- snmp-agent
- ssh-cli

View File

@ -4,9 +4,8 @@
roles:
- base
- goss
- post-lb
- lb-web
# - zabbix-cli
- snmp-agent
- ssh-cli

View File

@ -6,7 +6,7 @@
- base
- goss
- lb-front
#- zabbix-cli
- snmp-agent
- ssh-cli
- post

View File

@ -7,5 +7,5 @@
- docker-nextcloud
- ssh-cli
# - syslog-cli
- zabbix-cli
- snmp-agent
- post

View File

@ -1,8 +1,8 @@
---
- name: Zabbix
hosts: localhost
become: yes
become_method: sudo
hosts: all
# become: yes
# become_method: sudo
# become_user: root
# vars:
# access: "Restricted Nagios4 Access"

View File

@ -9,8 +9,7 @@
roles:
- base
- goss
#- zabbix-cli
- snmp-agent
- lb-nfs-server
- ssh-cli
# - syslog-cli

View File

@ -1,21 +1,19 @@
#!/bin/bash
mkvmrelease="v1.3.2"
mkvmrelease="v1.3.1"
ovarelease="2023c"
ovafogrelease="2024a"
#ovafile="$HOME/Téléchargements/debian-bullseye-gsb-${ovarelease}.ova"
ovafile="$HOME/Téléchargements/debian-bookworm-gsb-${ovarelease}.ova"
ovafilefog="$HOME/Téléchargements/debian-bullseye-gsb-${ovafogrelease}.ova"
startmode=0
deletemode=0
usage () {
echo "$0 - version ${mkvmrelease} - Ova version ${ovarelease}"
echo "$0 : creation VM et parametrage interfaces"
echo "usage : $0 [-r] [-s] <s-adm|s-infra|r-int|r-ext|s-proxy|s-mon|s-appli|s-backup|s-itil|s-ncx|s-fog>"
echo " option -r : efface VM existante avant creation nouvelle"
echo " option -s : start VM apres creation"
echo "usage : $0 [-r] <s-adm|s-infra|r-int|r-ext|s-proxy|s-mon|s-appli|s-backup|s-itil|s-ncx|s-fog>"
echo " option -r : efface vm existante avant creation nouvelle"
exit 1
}
@ -61,19 +59,12 @@ fi
if [[ $1 == "--help" ]] || [[ $1 == "-h" ]] || [[ $1 == "-V" ]] ; then
usage
fi
while [[ -n "$1" ]] ; do
if [[ "$1" == "-s" ]] ; then
startmode=1
shift
elif [[ "$1" == "-r" ]] ; then
deletemode=1
shift
else
parm=$1
shift
fi
done
vm="${parm}"
if [[ $1 == "-r" ]] ; then
deletemode=1
shift
fi
vm="$1"
create_vm "${vm}"
if [[ "${vm}" == "s-adm" ]] ; then
bash addint.s-adm
@ -100,10 +91,6 @@ elif [[ "${vm}" == "s-nxc" ]] ; then
create_if "${vm}" "n-adm" "n-infra"
elif [[ "${vm}" == "s-fog" ]] ; then
create_if "${vm}" "n-adm" "n-infra" "n-user"
elif [[ "${vm}" == "s-kea1" ]] ; then
create_if "${vm}" "n-adm" "n-infra" "n-user"
elif [[ "${vm}" == "s-kea2" ]] ; then
create_if "${vm}" "n-adm" "n-infra" "n-user"
elif [[ "${vm}" == "s-dns-ext" ]] ; then
create_if "${vm}" "n-adm" "n-dmz"
elif [[ "${vm}" == "s-web-ext" ]] ; then
@ -136,6 +123,3 @@ else
echo "$0 : vm ${vm} non prevue "
exit 2
fi
if [[ $startmode == 1 ]] ; then
vboxmanage startvm "${vm}"
fi

View File

@ -102,22 +102,6 @@ elseif ($args[0] -eq "s-fog") {
create_if $args[0] "int" 3 "n-user"
}
elseif ($args[0] -eq "s-kea1") {
create_vm $args[0]
create_if $args[0] "int" 1 "n-adm"
create_if $args[0] "int" 2 "n-infra"
create_if $args[0] "int" 3 "n-user"
}
elseif ($args[0] -eq "s-kea2") {
create_vm $args[0]
create_if $args[0] "int" 1 "n-adm"
create_if $args[0] "int" 2 "n-infra"
create_if $args[0] "int" 3 "n-user"
}
elseif ($args[0] -eq "s-agence") {
create_vm $args[0]

View File

@ -1,32 +1,5 @@
#!/bin/bash
# Ancien script 2023
#!/bin/bash
# stopper le fw
#systemctl stop ferm
#ouverture du service web pour copie distante
#cd /root/confwg/ && python3 -m http.server 8000 &
#Script 2024
# Fonction pour arrêter le serveur web
stop_server() {
echo "Arrêt du serveur et démarrage de ferm..."
pkill -f "python3 -m http.server"
}
# Stopper le ferm
systemctl stop ferm
# Ouverture du service web pour copie distante
#ouverture du service web pour copie distante
cd /root/confwg/ && python3 -m http.server 8000 &
echo "Ouverture du serveur"
# Timer pour récupérer le fichier avant de fermer le serveur python
sleep 120
# Appel de la fonction stop_server
stop_server

View File

@ -1,29 +1,22 @@
# Tuto Installation Windows Serveur
## Création de la nouvelle VM:
- Créer une nouvelle machine virtuelle avec l'image ISO "Windows Server 2019 fr-Fr x64 (Janvier 2020).iso" et cocher la case "Skip Unattended Installation". (lien source: http://store2.sio.lan/sionas/Public/iso/windows/?sort=namedirfirst&order=asc)
- Mémoire vive : 4096 Mo
- Processeur : 1
- Disque dur virtuel : 30 Go
# Tutoriel d'Installation Windows Serveur
## Démarrage de la VM Windows Server 2019:
- Une fois l'installation du serveur Windows 2019 terminée, démarrer la VM Windows Server.
- Faire l'installation de Windows Server. Vous pouvez suivre les étapes 3 à 12 (lien : https://www.infonovice.fr/guide-dinstallation-de-windows-server-2019-avec-une-interface-graphique/)
- Renommer la machine du serveur Windows en **s-win** depuis les paramètres "Informations système", puis redémarrer la machine.
- Modifier la première carte réseau du serveur Windows depuis le panneau de configuration : adresse "192.168.99.6" et passerelle "192.168.99.99" ; configurer la seconde en "172.16.0.6" et ajouter la passerelle par défaut "172.16.0.254".
- Éteindre votre VM et ajouter une carte en pont dans les paramètres réseau de la VM. Allumer votre VM et installer git depuis le site officiel ; il faudra sans doute activer certaines options dans les paramètres d'Internet Explorer comme "JavaScript" ou l'option de téléchargement (liens sources : https://git-scm.com/download/win et https://support.microsoft.com/fr-fr/topic/procédure-d-activation-de-javascript-dans-windows-88d27b37-6484-7fc0-17df-872f65168279).
- Installer les rôles Serveur DNS et Services AD DS. Pour vous aider, suivre le TP de "Installation du service" à "Installation serveur DNS" (lien source : https://sio.lyc-lecastel.fr/doku.php?id=promo_2024:serveur_windows_2019-installation_ad)
## Installation Windows Server 2019
- Une fois l'installation terminée, renommer le serveur en **s-win** et installer git.
- Ajouter les deux cartes réseau adressées conformément aux spécifications.
- Installer les services Windows suivants :
- Serveur DNS
- Service AD DS.
- Créer une nouvelle forêt pour le domaine **gsb.lan**.
- Configurer la zone inverse du DNS et l'alimenter avec les enregistrements souhaités (A et PTR pour **s-win**, **s-itil**, **r-int** et **s-infra**) ; par exemple, pour **s-win** (172.16.0.6) : un enregistrement A `s-win.gsb.lan -> 172.16.0.6` et le PTR inverse correspondant. Vous pouvez vous aider de ce tutoriel : https://www.it-connect.fr/dns-sous-windows-server-2022-comment-configurer-une-zone-de-recherche-inversee/
- Configurer la zone inverse du DNS et l'alimenter avec les enregistrements souhaités (NS, A pour **s-win**, **s-itil**, ..., PTR pour les précités).
## Création des dossiers partagés et des utilisateurs
## Création des répertoires partagés et des utilisateurs
- Lancer `mkusr-compta.cmd` et `mkusr-ventes.cmd` sur s-win pour créer les utilisateurs avec leurs droits.
- Lancer `gsb-dossiers.cmd`. Il permet de créer des dossiers partagés.
- Le fichier `mkusr.cmd` permet de créer un autre utilisateur avec les mêmes droits que les autres.
- Lancer les script `mkusr-compta.cmd` et `mkusr-ventes.cmd` sur **s-win** pour créer les utilisateurs avec leurs droits.
- Le script `gsb-dossiers.cmd` permet de créer des dossiers partagés.
- Le script `mkusr.cmd` permet de créer un autre utilisateur avec les mêmes droits que les autres.
## Utilisation des comptes utilisateurs avec AD
- Pour utiliser l'AD du serveur **s-win**, installer un poste de travail Windows 10 dans le réseau `n-user`.
- Une fois la machine installée et démarrée, mettre à jour la carte réseau en DHCP et la faire adhérer au domaine **gsb.lan** (Poste de travail / Nom).
- Une fois la machine installée et démarrée, la faire adhérer au domaine **gsb.lan** (Poste de travail / Nom).
- Redémarrer puis se connecter avec les identifiants de domaine choisis.

View File

echo ping r-vp2 interface interne
ping -c3 172.16.128.254
echo ping s-agence
ping -c3 172.16.128.10
ping -c3 172.16.128.11