Compare commits

..

47 Commits

Author SHA1 Message Date
298f105805 maj zabbix-srv 2024-01-25 11:22:18 +01:00
d88745e741 modif s-backup.yml 2024-01-25 11:11:56 +01:00
fffcb22db8 modif cle priv 2024-01-25 11:09:25 +01:00
abb8c15028 maj zabbix-srv 2024-01-25 11:01:02 +01:00
73b4560dd9 modif cle privee 2024-01-25 10:49:53 +01:00
91d8b57029 modif role 2024-01-25 10:10:50 +01:00
37bbbad9dd script recup cle pub 2024-01-25 10:03:20 +01:00
84215f502b generate cle publique et privee 2024-01-25 09:53:45 +01:00
flo 2606cd19b0 maj zabbix-srv 2024-01-25 09:51:35 +01:00
b27ce2a372 maj goss s-nxc 2024-01-25 08:31:06 +01:00
18ce1f65ad maj goss s-nxc 2024-01-25 08:19:51 +01:00
116b84d230 ajout role stork-agent 2024-01-24 23:56:15 +01:00
c92a7654d3 ajout role stork-server 2024-01-24 19:22:14 +01:00
02c7f3dffd script saveNextcloud 2024-01-23 11:22:40 +01:00
5a8558d701 modif script save 2024-01-22 17:54:29 +01:00
7d6b15844a script de sauvegarde de nextcloud 2024-01-22 17:07:05 +01:00
2653221559 MAJ role KEA MAJ test goss KEA 2024-01-22 16:49:58 +01:00
3100ba51e2 maj role lb-front 2024-01-22 16:26:02 +01:00
gsb bbe58dbb01 Actualiser roles/fw-ferm/README.md 2024-01-22 15:27:26 +01:00
7124d8aaff type kea-ctrl-agent.conf.j2 2024-01-21 01:03:02 +01:00
0afa2c3596 corrections diverses template kea-ctrl pb avec jinja2 2024-01-21 00:54:39 +01:00
38602033b3 ajout role kea + s-kea1-ps.yml 2024-01-20 22:42:27 +01:00
1c1993021b ajout du role gotify 2024-01-20 20:22:03 +01:00
b146170467 maj docker 2024-01-20 17:11:15 +01:00
df9d3c6c1c maj goss s-nxc 2024-01-20 17:02:33 +01:00
d75f4ffb3f zabbix ajout auto 2024-01-20 17:19:38 +01:00
eaf75de89e maj role kea 2024-01-19 20:21:12 +01:00
02fc23d224 modif test goss 2024-01-19 15:47:43 +01:00
gsb bdc71bbb3c Actualiser roles/wireguard-r/README.md 2024-01-19 15:05:39 +01:00
gsb 308504062e Actualiser roles/wireguard-r/README.md 2024-01-19 15:04:18 +01:00
c3ad470fd1 re teste zabbix api 2024-01-19 15:00:07 +01:00
gsb 2d3067d67b Actualiser roles/wireguard-r/README.md 2024-01-19 14:59:11 +01:00
7d885b08b8 modif port docker-compose 2024-01-19 14:42:50 +01:00
d88044350a test api zabbix 2024-01-19 14:41:21 +01:00
gsb ca6d1d2e09 Actualiser wireguard/README.md 2024-01-19 14:35:17 +01:00
1a2c349969 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gsb/gsb2024 2024-01-19 14:30:22 +01:00
3a18a3bd9a maj goss fichier 2024-01-19 14:28:41 +01:00
239480a12b maj goss fichier 2024-01-19 14:28:41 +01:00
gsb f66774efe1 Actualiser wireguard/README.md 2024-01-19 14:28:37 +01:00
b57b0763e9 test goss machine s-mon 2024-01-19 14:25:34 +01:00
gsb 79279fc3a1 Actualiser wireguard/README.md 2024-01-19 14:22:50 +01:00
gsb 54ef5103ca Actualiser wireguard/README.md 2024-01-19 14:22:36 +01:00
gsb a87853372c Actualiser wireguard/README.md 2024-01-19 14:17:49 +01:00
gsb 378a20f02a Ajouter wireguard/README.md 2024-01-19 14:13:55 +01:00
21ee40ab59 Maj README.md 2024-01-19 11:53:02 +01:00
d393b1eebe ajout entrees DNS s-stork et s-gotify 2024-01-19 11:48:33 +01:00
bff32cd191 maj goss lb 2024-01-19 10:47:31 +01:00
74 changed files with 1394 additions and 420 deletions


@ -1,6 +1,6 @@
# gsb2024
2024-01-17 18h04 ps
2024-01-19 11h45 ps
Environment and **ansible** playbooks for the **GSB 2024** project
@ -23,8 +23,8 @@ Prerequisites:
* **r-ext**: routing, NAT
* **s-proxy**: **squid** proxy
* **s-itil**: GLPI server
* **s-backup**: slave DNS + s-win backup (SMB)
* **s-mon**: monitoring with **Nagios4**, notifications and syslog
* **s-backup**: slave DNS + s-win backup (SMB), Stork and Gotify
* **s-mon**: monitoring with **Nagios4/Zabbix**, notifications and journald
* **s-fog**: workstation deployment with **FOG**
* **s-win**: Windows Server 2019, AD, DNS, DHCP, file shares
* **s-nxc**: NextCloud with **docker** behind the **traefik** reverse proxy and a self-signed certificate
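
Each machine is provisioned by the playbook of the same name, run locally on the target, as in the wireguard notes further down (s-proxy.yml here is just one example):
```bash
# on the target machine, from the repo checkout
ansible-playbook -i localhost, -c local s-proxy.yml
```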

goss.yaml Normal file

@ -0,0 +1,25 @@
port:
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp6:22:
listening: true
ip:
- '::'
service:
sshd:
enabled: true
running: true
user:
sshd:
exists: true
uid: 101
gid: 65534
groups:
- nogroup
home: /run/sshd
shell: /usr/sbin/nologin
process:
sshd:
running: true
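
For reference, a sketch of how such a spec is exercised (assuming the goss binary is installed; the repo also ships an agoss wrapper):
```bash
# run every check in goss.yaml against the local machine
goss -g goss.yaml validate --format documentation
```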

goss/s-kea1.yaml Normal file

@ -0,0 +1,93 @@
file:
/etc/kea/kea-ctrl-agent.conf:
exists: true
mode: "0644"
owner: _kea
group: root
filetype: file
contents: []
/etc/kea/kea-dhcp4.conf:
exists: true
mode: "0644"
owner: _kea
group: root
filetype: file
contents: []
/tmp/kea4-ctrl-socket:
exists: true
mode: "0755"
size: 0
owner: _kea
group: _kea
filetype: socket
contains: []
contents: null
/usr/lib/x86_64-linux-gnu/kea:
exists: true
mode: "0755"
owner: root
group: root
filetype: directory
contents: []
package:
isc-kea-common:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-ctrl-agent:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-dhcp4:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-hooks:
installed: true
versions:
- 2.4.1-isc20231123184533
libmariadb3:
installed: true
versions:
- 1:10.11.4-1~deb12u1
mariadb-common:
installed: true
versions:
- 1:10.11.4-1~deb12u1
mysql-common:
installed: true
versions:
- 5.8+1.1.0
addr:
udp://172.16.64.254:67:
local-address: 127.0.0.1
reachable: true
timeout: 500
port:
tcp:8000:
listening: true
ip:
- 172.16.0.20
service:
isc-kea-ctrl-agent.service:
enabled: true
running: true
isc-kea-dhcp4-server.service:
enabled: true
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.20/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.0.20/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 172.16.64.20/24
mtu: 1500

goss/s-kea2.yaml Normal file

@ -0,0 +1,93 @@
file:
/etc/kea/kea-ctrl-agent.conf:
exists: true
mode: "0644"
owner: _kea
group: root
filetype: file
contents: []
/etc/kea/kea-dhcp4.conf:
exists: true
mode: "0644"
owner: _kea
group: root
filetype: file
contents: []
/tmp/kea4-ctrl-socket:
exists: true
mode: "0755"
size: 0
owner: _kea
group: _kea
filetype: socket
contains: []
contents: null
/usr/lib/x86_64-linux-gnu/kea:
exists: true
mode: "0755"
owner: root
group: root
filetype: directory
contents: []
package:
isc-kea-common:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-ctrl-agent:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-dhcp4:
installed: true
versions:
- 2.4.1-isc20231123184533
isc-kea-hooks:
installed: true
versions:
- 2.4.1-isc20231123184533
libmariadb3:
installed: true
versions:
- 1:10.11.4-1~deb12u1
mariadb-common:
installed: true
versions:
- 1:10.11.4-1~deb12u1
mysql-common:
installed: true
versions:
- 5.8+1.1.0
addr:
udp://172.16.64.254:67:
local-address: 127.0.0.1
reachable: true
timeout: 500
port:
tcp:8000:
listening: true
ip:
- 172.16.0.21
service:
isc-kea-ctrl-agent.service:
enabled: true
running: true
isc-kea-dhcp4-server.service:
enabled: true
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.21/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.0.21/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 172.16.64.21/24
mtu: 1500


@ -1,21 +1,38 @@
package:
mysql-server:
installed: true
versions:
- 5.5.54-0+deb8u1
command:
egrep "#bind-address" /etc/mysql/my.cnf:
exit-status: 0
stdout:
- "#bind-address\t\t= 127.0.0.1"
stderr: []
timeout: 10000
addr:
tcp://192.168.102.1:80:
reachable: true
timeout: 500
tcp://192.168.102.2:80:
reachable: true
timeout: 500
service:
mariadb:
enabled: true
running: true
mysql:
enabled: true
running: true
user:
mysql:
exists: true
uid: 104
gid: 111
groups:
- mysql
home: /nonexistent
shell: /bin/false
group:
mysql:
exists: true
gid: 111
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.13/24
enp0s8:
exists: true
addrs:
- 192.168.102.50/24
enp0s3:
exists: true
addrs:
- 192.168.99.154/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.102.254/24
mtu: 1500


@ -1,63 +1,62 @@
package:
apache2:
installed: true
versions:
- 2.4.10-10+deb8u7
php5:
installed: true
versions:
- 5.6.29+dfsg-0+deb8u1
apache2:
installed: true
versions:
- 2.4.57-2
nfs-common:
installed: true
versions:
- 1:2.6.2-4
port:
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp6:22:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
service:
apache2:
enabled: true
running: true
sshd:
enabled: true
running: true
user:
sshd:
exists: true
uid: 105
gid: 65534
groups:
- nogroup
home: /var/run/sshd
shell: /usr/sbin/nologin
command:
egrep 192.168.102.14:/export/www /etc/fstab:
exit-status: 0
stdout:
- 192.168.102.14:/export/www /var/www/html nfs _netdev rw 0 0
stderr: []
timeout: 10000
apache2:
enabled: true
running: true
nfs-common:
enabled: false
running: false
process:
apache2:
running: true
sshd:
running: true
apache2:
running: true
mount:
/var/www/html:
exists: true
opts:
- rw
- relatime
vfs-opts:
- rw
- vers=4.2
- rsize=131072
- wsize=131072
- namlen=255
- hard
- proto=tcp
- timeo=600
- retrans=2
- sec=sys
- clientaddr=192.168.102.1
- local_lock=none
- addr=192.168.102.253
source: 192.168.102.253:/home/wordpress
filesystem: nfs4
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.11/24
enp0s8:
exists: true
addrs:
- 192.168.101.1/24
enp0s9:
exists: true
addrs:
- 192.168.102.1/24
enp0s3:
exists: true
addrs:
- 192.168.99.101/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.101.1/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.102.1/24
mtu: 1500


@ -1,63 +1,62 @@
package:
apache2:
installed: true
versions:
- 2.4.10-10+deb8u7
php5:
installed: true
versions:
- 5.6.29+dfsg-0+deb8u1
apache2:
installed: true
versions:
- 2.4.57-2
nfs-common:
installed: true
versions:
- 1:2.6.2-4
port:
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp6:22:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
tcp6:80:
listening: true
ip:
- '::'
service:
apache2:
enabled: true
running: true
sshd:
enabled: true
running: true
user:
sshd:
exists: true
uid: 105
gid: 65534
groups:
- nogroup
home: /var/run/sshd
shell: /usr/sbin/nologin
command:
egrep 192.168.102.14:/export/www /etc/fstab:
exit-status: 0
stdout:
- 192.168.102.14:/export/www /var/www/html nfs _netdev rw 0 0
stderr: []
timeout: 10000
apache2:
enabled: true
running: true
nfs-common:
enabled: false
running: false
process:
apache2:
running: true
sshd:
running: true
apache2:
running: true
mount:
/var/www/html:
exists: true
opts:
- rw
- relatime
vfs-opts:
- rw
- vers=4.2
- rsize=131072
- wsize=131072
- namlen=255
- hard
- proto=tcp
- timeo=600
- retrans=2
- sec=sys
- clientaddr=192.168.102.2
- local_lock=none
- addr=192.168.102.253
source: 192.168.102.253:/home/wordpress
filesystem: nfs4
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.12/24
enp0s8:
exists: true
addrs:
- 192.168.101.2/24
enp0s9:
exists: true
addrs:
- 192.168.102.2/24
enp0s3:
exists: true
addrs:
- 192.168.99.102/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.101.2/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.102.2/24
mtu: 1500


@ -1,28 +1,55 @@
package:
haproxy:
installed: true
versions:
- 2.6.12-1+deb12u1
addr:
tcp://192.168.101.1:80:
reachable: true
timeout: 500
tcp://192.168.101.2:80:
reachable: true
timeout: 500
port:
tcp:80:
listening: true
ip:
- 192.168.100.11
tcp:80:
listening: true
ip:
- 192.168.100.10
service:
haproxy:
enabled: true
running: true
sshd:
enabled: true
running: true
haproxy:
enabled: true
running: true
user:
haproxy:
exists: true
uid: 104
gid: 111
groups:
- haproxy
home: /var/lib/haproxy
shell: /usr/sbin/nologin
group:
haproxy:
exists: true
gid: 111
process:
haproxy:
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.100/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.100.11/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.101.254/24
mtu: 1500
enp0s3:
exists: true
addrs:
- 192.168.99.100/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.100.10/24
mtu: 1500
http:
http://192.168.100.10/:
status: 200
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []


@ -1,92 +1,62 @@
package:
apache2:
installed: true
zabbix-server-mysql:
installed: true
zabbix-frontend-php:
installed: true
zabbix-apache-conf:
installed: true
zabbix-sql-scripts:
installed: true
zabbix-agent:
installed: true
mariadb-server:
installed: true
python3-pymysql:
installed: true
systemd-journal-remote:
installed: true
file:
/etc/systemd/system/systemd-journal-remote.service:
exist: true
mode: "0777"
filetype: directory
/var/log/journal/remote:
exist: true
mode: "0777"
filetype: directory
port:
tcp:80:
listening: true
ip:
- 0.0.0.0
tcp:3306:
listening: true
ip:
- 127.0.0.1
tcp:10050:
listening: true
ip:
- 0.0.0.0
tcp:10051:
listening: true
ip:
- 0.0.0.0
tcp:19532:
listening: true
ip:
- '*'
/etc/systemd/system/systemd-journal-remote.service:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contents: []
/var/log/journal/remote:
exists: true
mode: "0755"
owner: systemd-journal-remote
group: systemd-journal-remote
filetype: directory
contents: []
package:
apache2:
installed: true
versions:
- 2.4.57-2
mariadb-server:
installed: true
versions:
- 1:10.11.4-1~deb12u1
systemd-journal-remote:
installed: true
versions:
- 252.19-1~deb12u1
service:
apache2:
enabled: true
running: true
zabbix-server:
enabled: true
running: true
zabbix-agent:
enabled: true
running: true
systemd-journal-remote.socket:
enabled: true
running: true
command:
sysctl net.ipv4.ip_forward:
exit-status: 0
stdout:
- net.ipv4.ip_forward = 0
stderr: []
timeout: 10000
process:
apache2:
running: true
zabbix_server:
running: true
mariadb:
running: true
apache2:
enabled: true
running: true
mariadb.service:
enabled: true
running: true
systemd-journal-remote.socket:
enabled: true
running: true
zabbix-agent:
enabled: true
running: true
zabbix-server:
enabled: true
running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.8/24
enp0s8:
exists: true
addrs:
- 172.16.0.8/24
enp0s3:
exists: true
addrs:
- 192.168.99.8/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.0.8/24
mtu: 1500
http:
http://localhost/zabbix:
status: 401
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []
http://s-mon.gsb.lan/zabbix:
status: 200
allow-insecure: false
no-follow-redirects: false
timeout: 5000
body: []

goss/s-nas.yaml Normal file

@ -0,0 +1,55 @@
file:
/home/wordpress:
exists: true
mode: "0755"
owner: www-data
group: www-data
filetype: directory
contents: []
package:
file:
installed: true
versions:
- 1:5.44-3
nfs-common:
installed: true
versions:
- 1:2.6.2-4
nfs-kernel-server:
installed: true
versions:
- 1:2.6.2-4
addr:
tcp://192.168.102.1:80:
reachable: true
timeout: 500
tcp://192.168.102.2:80:
reachable: true
timeout: 500
service:
nfs-common:
enabled: false
running: false
nfs-kernel-server:
enabled: true
running: true
nfs-mountd:
enabled: true
running: true
nfs-server:
enabled: true
running: true
nfs-utils:
enabled: true
running: false
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.153/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.102.253/24
mtu: 1500

goss/s-nxc.yaml Normal file

@ -0,0 +1,145 @@
file:
/root/nxc:
exists: true
mode: "0755"
#size: 4096
#owner: root
#group: root
filetype: directory
contains: []
/root/nxc/certs:
exists: true
mode: "0755"
#size: 4096
#owner: root
#group: root
filetype: directory
contains: []
/root/nxc/config:
exists: true
mode: "0755"
#size: 4096
#owner: root
#group: root
filetype: directory
contains: []
/root/nxc/config/dynamic.yml:
exists: true
mode: "0644"
#size: 415
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/config/static.yml:
exists: true
mode: "0644"
#size: 452
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/docker-compose.yml:
exists: true
mode: "0644"
#size: 2135
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/nxc-debug.sh:
exists: true
mode: "0755"
#size: 64
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/nxc-prune.sh:
exists: true
mode: "0755"
#size: 110
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/nxc-start.sh:
exists: true
mode: "0755"
#size: 34
#owner: root
#group: root
filetype: file
contains: []
/root/nxc/nxc-stop.sh:
exists: true
mode: "0755"
#size: 32
#owner: root
#group: root
filetype: file
contains: []
/usr/local/bin/mkcert:
exists: true
mode: "0755"
#size: 4788866
#owner: root
#group: root
filetype: file
contains: []
#addr:
#tcp://s-nxc.gsb.lan:443:
#reachable: true
#timeout: 500
port:
tcp:22:
listening: true
ip:
- 0.0.0.0
tcp:80:
listening: true
ip: []
tcp:443:
listening: true
ip: []
#tcp:8081:
#listening: true
#ip:
#- 0.0.0.0
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.7/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.0.7/24
mtu: 1500
http:
https://s-nxc.gsb.lan:
status: 200
allow-insecure: true
no-follow-redirects: false
timeout: 5000
body:
- Nextcloud


@ -5,7 +5,7 @@
;
$TTL 604800
@ IN SOA s-infra.gsb.lan. root.s-infra.gsb.lan. (
2024011800 ; Serial
2024011900 ; Serial
7200 ; Refresh
86400 ; Retry
8419200 ; Expire
@ -16,9 +16,11 @@ $TTL 604800
@ IN A 127.0.0.1
@ IN AAAA ::1
s-infra IN A 172.16.0.1
s-backup IN A 172.16.0.4
s-proxy IN A 172.16.0.2
s-appli IN A 172.16.0.3
s-backup IN A 172.16.0.4
s-stork IN A 172.16.0.4
s-gotify IN A 172.16.0.4
s-win IN A 172.16.0.6
s-mess IN A 172.16.0.7
s-nxc IN A 172.16.0.7

Binary file not shown.


@ -7,7 +7,7 @@
- name: on verifie si docker est installe
stat:
path: /usr/bin/docker
# command: which docker
#command: which docker
register: docker_present
- name: Execution du script getdocker si docker n'est pas deja installe


@ -1,6 +1,76 @@
Ferm configuration
# [Ferm](http://ferm.foo-projects.org/)
Script generating the site-to-site WireGuard keys and configuration files:
```bash
#!/bin/bash
set -u
set -e
# Site-to-site version
AddressAwg=10.0.0.1/32 # WireGuard VPN address, side A
EndpointA=192.168.0.51 # Endpoint address, side A
PortA=51820 # Listening port, side A
NetworkA=192.168.1.0/24 # Network, side A
NetworkC=192.168.200.0/24 # Network, side A
NetworkD=172.16.0.0/24 # Network, side A
AddressBwg=10.0.0.2/32 # WireGuard VPN address, side B
EndpointB=192.168.0.52 # Endpoint address, side B
PortB=51820 # Listening port, side B
NetworkB=172.16.128.0/24 # Network, side B
umask 077
wg genkey > endpoint-a.key
wg pubkey < endpoint-a.key > endpoint-a.pub
wg genkey > endpoint-b.key
wg pubkey < endpoint-b.key > endpoint-b.pub
PKA=$(cat endpoint-a.key)
pKA=$(cat endpoint-a.pub)
PKB=$(cat endpoint-b.key)
pKB=$(cat endpoint-b.pub)
cat <<FINI > wg0-a.conf
# local settings for Endpoint A
[Interface]
PrivateKey = $PKA
Address = $AddressAwg
ListenPort = $PortA
# IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# remote settings for Endpoint B
[Peer]
PublicKey = $pKB
Endpoint = ${EndpointB}:$PortB
AllowedIPs = $AddressBwg, $NetworkB
FINI
cat <<FINI > wg0-b.conf
# local settings for Endpoint B
[Interface]
PrivateKey = $PKB
Address = $AddressBwg
ListenPort = $PortB
# IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# remote settings for Endpoint A
[Peer]
PublicKey = $pKA
Endpoint = ${EndpointA}:$PortA
AllowedIPs = $AddressAwg, $NetworkA, $NetworkC, $NetworkD
FINI
echo "wg0-a.conf and wg0-b.conf have been generated ..."
echo "copy wg0-b.conf to machine b and rename the configuration files ..."
```
Change how iptables is executed ([more info here](https://wiki.debian.org/iptables)):
```shell
update-alternatives --set iptables /usr/sbin/iptables-legacy
```
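
With the legacy backend selected, ferm's ruleset can be dry-run before it is applied; a sketch assuming the default rules path and ferm's `--noexec`/`--lines` flags:
```bash
# parse /etc/ferm/ferm.conf and print the resulting iptables calls without executing them
ferm --noexec --lines /etc/ferm/ferm.conf
```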


@ -0,0 +1,50 @@
---
- name: Mise a jour apt cache
apt:
update_cache: yes
- name: Creation /etc/gotify
ansible.builtin.file:
path: /etc/gotify
state: directory
mode: '0755'
- name: Creation /opt/gotify
ansible.builtin.file:
path: /opt/gotify
state: directory
mode: '0755'
- name: installation de gotify
get_url:
url: "https://github.com/gotify/server/releases/latest/download/gotify-linux-amd64.zip"
dest: "/tmp/gotify.zip"
- name: Extraction de Gotify
ansible.builtin.unarchive:
src: "/tmp/gotify.zip"
dest: "/opt/gotify"
become: yes
- name: Creation du fichier systemd
template:
src: "gotify.service.j2"
dest: "/etc/systemd/system/gotify.service"
become: yes
- name: Reload systemd
systemd:
daemon_reload: yes
- name: Creation du fichier conf gotify
template:
src: "config.yml.j2"
dest: "/etc/gotify/config.yml"
become: yes
- name: Demarrage de gotify
systemd:
name: gotify
state: started
enabled: yes


@ -0,0 +1,4 @@
server:
keepaliveperiodseconds: 0
listenaddr: "" # the address to bind on, leave empty to bind on all addresses
port: 8008


@ -0,0 +1,13 @@
[Unit]
Description=Gotify Server
After=network.target
[Service]
Type=simple
User=root
ExecStart=/opt/gotify/gotify-linux-amd64
Restart=on-failure
[Install]
WantedBy=multi-user.target
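
A quick way to exercise the service once the role has run; a sketch where the application token must first be created in the gotify web UI, and host/port follow the DNS entry and config.yml.j2 above:
```bash
# push a test notification to the gotify server
curl -s -X POST "http://s-gotify.gsb.lan:8008/message?token=<app-token>" \
  -F "title=test" -F "message=hello from ansible" -F "priority=5"
```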


@ -1,66 +0,0 @@
// This is an example of a configuration for Control-Agent (CA) listening
// for incoming HTTP traffic. This is necessary for handling API commands,
// in particular lease update commands needed for HA setup.
{
"Control-agent":
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.1",
// This specifies the port CA will listen on.
"http-port": 8000,
"control-sockets":
{
// This is how the Agent can communicate with the DHCPv4 server.
"dhcp4":
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tm/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.
# "dhcp6":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea6-ctrl-socket"
# },
// Location of the D2 command channel socket.
# "d2":
# {
# "socket-type": "unix",
# "socket-name": "/tmp/kea-ddns-ctrl-socket",
# "user-context": { "in-use": false }
# }
},
// Similar to other Kea components, CA also uses logging.
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
// Several additional parameters are possible in addition
// to the typical output. Flush determines whether logger
// flushes output to a file. Maxsize determines maximum
// filesize before the file is rotated. maxver
// specifies the maximum number of rotated files being
// kept.
"flush": true,
"maxsize": 204800,
"maxver": 4,
// We use pattern to specify custom log message layout
"pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n"
}
],
"severity": "INFO",
"debuglevel": 0 // debug level only applies when severity is set to DEBUG.
}
]
}
}

roles/kea/README.md Normal file

@ -0,0 +1,21 @@
# Kea role
***
Kea role: configuration of two KEA servers in high-availability mode.
## Table of contents
1. [What does the Kea role do?]
2. [Installation and configuration of kea]
3. [Remarks]
## What does the Kea role do?
The Kea role configures two kea servers (s-kea1 and s-kea2) in high-availability mode.
- The **s-kea1** server runs in **primary** mode; it delivers the DHCP leases on the n-user network.
- The **s-kea2** server runs in **stand-by** mode; the DHCP service therefore fails over to **s-kea2** when **s-kea1** becomes unavailable.
### Installation and configuration of kea
The kea role installs the **kea dhcp4, hooks, admin** packages; once they are installed, it configures a kea server so that it hands out IPs on the n-user network and runs in high availability.
### Remarks ###
Once the **s-kea** playbook has finished correctly and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service to take into account the changes made to the network layer by the POST role.
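
Once both servers are up, the HA state can be verified through the control agent; a sketch using the addresses set in the s-kea1.yml/s-kea2.yml playbooks:
```bash
# query the DHCPv4 server behind the control agent on s-kea1;
# the reply should include the hot-standby state of the pair
curl -s -X POST http://172.16.0.20:8000/ \
  -H 'Content-Type: application/json' \
  -d '{ "command": "status-get", "service": [ "dhcp4" ] }'
```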


@ -6,7 +6,7 @@
{
// We need to specify where the agent should listen to incoming HTTP
// queries.
"http-host": "172.16.64.1",
"http-host": "172.16.0.20",
// This specifies the port CA will listen on.
"http-port": 8000,
@ -18,7 +18,7 @@
{
"comment": "socket to DHCPv4 server",
"socket-type": "unix",
"socket-name": "/tm/kea4-ctrl-socket"
"socket-name": "/tmp/kea4-ctrl-socket"
},
// Location of the DHCPv6 command channel socket.


@ -0,0 +1,12 @@
---
- name: Restart isc-kea-dhcp4-server
ansible.builtin.service:
name: isc-kea-dhcp4-server.service
state: restarted
enabled: yes
- name: Restart isc-kea-ctrl-agent
ansible.builtin.service:
name: isc-kea-ctrl-agent.service
state: restarted
enabled: yes

roles/kea/tasks/main.yml Normal file

@ -0,0 +1,43 @@
---
- name: Preparation
ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/kea-2-4/setup.deb.sh' | sudo -E bash
- name: Update apt
ansible.builtin.apt:
update_cache: yes
#- name: Installation paquet isc-kea-common
# ansible.builtin.apt:
# deb: isc-kea-common
# state: present
- name: Installation isc-kea-dhcp4
ansible.builtin.apt:
name: isc-kea-dhcp4-server
state: present
- name: Installation isc-kea-ctrl-agent
ansible.builtin.apt:
name: isc-kea-ctrl-agent
state: present
- name: Installation isc-kea-hooks
ansible.builtin.apt:
name: isc-kea-hooks
state: present
- name: Generation du fichier de configuration kea-ctrl-agent
ansible.builtin.template:
src: kea-ctrl-agent.conf.j2
dest: /etc/kea/kea-ctrl-agent.conf
notify:
- Restart isc-kea-ctrl-agent
- name: Generation du fichier de configuration kea-dhcp4.conf
ansible.builtin.template:
src: kea-dhcp4.conf.j2
dest: /etc/kea/kea-dhcp4.conf
notify:
- Restart isc-kea-dhcp4-server


@ -0,0 +1,32 @@
{
"Control-agent":
{
"http-host": "{{ kea_ctrl_address_this }}",
"http-port": 8000,
"control-sockets":
{
"dhcp4":
{
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
}
},
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "stdout",
"flush": true,
"maxsize": 204800,
"maxver": 4,
{% raw %} "pattern": "%d{%y.%m.%d %H:%M:%S.%q} %-5p [%c/%i] %m\n" {% endraw %}
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}


@ -22,7 +22,7 @@
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s8" ]
"interfaces": ["{{ kea_dhcp_int }}"]
},
// Control socket is required for communication between the Control
@ -76,19 +76,19 @@
// deliver lease updates to the server as well as synchronize the
// lease database after failure.
{
"library": "/usr/local/lib/kea/hooks/libdhcp_lease_cmds.so"
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so"
},
{
// The HA hook library should be loaded.
"library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_ha.so",
"parameters": {
// Each server should have the same HA configuration, except for the
// "this-server-name" parameter.
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "kea1",
"this-server-name": "{{ kea_this_server }}",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
@ -116,24 +116,24 @@
"peers": [
// This is the configuration of this server instance.
{
"name": "kea1",
"name": "{{ kea_srv1 }}",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.1:8000/",
"url": "http://{{ kea_ctrl_address1 }}:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "kea2",
"name": "{{ kea_srv2 }}",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.2:8000/",
"url": "http://{{ kea_ctrl_address2 }}:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
@ -152,7 +152,7 @@
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s8",
"interface": "enp0s9",
// Specify a dynamic address pool.
"pools": [
@ -171,7 +171,7 @@
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.1"
"data": "172.16.64.254"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
@ -179,7 +179,7 @@
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.64.1"
"data": "172.16.0.1"
}
],


@ -1,23 +0,0 @@
port:
tcp:80:
listening: true
ip:
- 192.168.100.11
service:
haproxy:
enabled: true
running: true
sshd:
enabled: true
running: true
interface:
enp0s8:
exists: true
addrs:
- 192.168.100.11/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.101.254/24
mtu: 1500


@ -41,7 +41,7 @@ frontend proxypublic
backend fermeweb
balance roundrobin
option httpclose
#option httpchk HEAD / HTTP/1.0
option httpchk HEAD / HTTP/1.0
server s-lb-web1 192.168.101.1:80 check
server s-lb-web2 192.168.101.2:80 check


@ -14,7 +14,7 @@
backend fermeweb
balance roundrobin
option httpclose
#option httpchk HEAD / HTTP/1.0
option httpchk HEAD / HTTP/1.0
server s-lb-web1 192.168.101.1:80 check
server s-lb-web2 192.168.101.2:80 check


@ -53,8 +53,8 @@ services:
image: nextcloud
container_name: app
restart: always
ports:
- 8081:80
#ports:
#- 8081:80
#links:
depends_on:
- db


@ -0,0 +1,29 @@
#!/bin/bash
# Put the NextCloud server into maintenance mode
docker compose exec -u www-data app php occ maintenance:mode --on
# Move into the folder holding the data to back up
cd /root/nxc
# Local copy of the backup
rsync -Aavx nextcloud/ nextcloud-dirbkp/
# MySQL/MariaDB database dump
docker compose exec db mysqldump -u nextcloud -pAzerty1+ nextcloud > nextcloud-sqlbkp.bak
# Leave maintenance mode
docker compose exec -u www-data app php occ maintenance:mode --off
# Create an archive
tar cvfz nxc.tgz nextcloud-sqlbkp.bak nextcloud-dirbkp
# Send it to s-backup
BACKUP=/home/backup/s-nxc
# Prepare the destination folder (create it if it does not already exist)
[[ -e "${BACKUP}" ]] || mkdir -p "${BACKUP}"
# Copy the nxc.tgz file from the s-nxc machine
scp root@s-nxc:/root/nxc/nxc.tgz "${BACKUP}/"


@ -22,7 +22,7 @@
// The DHCPv4 server listens on this interface. When changing this to
// the actual name of your interface, make sure to also update the
// interface parameter in the subnet definition below.
"interfaces": [ "enp0s8" ]
"interfaces": [ "enp0s9" ]
},
// Control socket is required for communication between the Control
@ -88,7 +88,7 @@
"high-availability": [ {
// This parameter points to this server instance. The respective
// HA peers must have this parameter set to their own names.
"this-server-name": "kea1",
"this-server-name": "s-kea1.gsb.lan",
// The HA mode is set to hot-standby. In this mode, the active server handles
// all the traffic. The standby takes over if the primary becomes unavailable.
"mode": "hot-standby",
@ -116,24 +116,24 @@
"peers": [
// This is the configuration of this server instance.
{
"name": "kea1",
"name": "s-kea1.gsb.lan",
// This specifies the URL of this server instance. The
// Control Agent must run along with this DHCPv4 server
// instance and the "http-host" and "http-port" must be
// set to the corresponding values.
"url": "http://172.16.64.1:8000/",
"url": "http://172.16.64.20:8000/",
// This server is primary. The other one must be
// secondary.
"role": "primary"
},
// This is the configuration of the secondary server.
{
"name": "kea2",
"name": "s-kea2.gsb.lan",
// Specifies the URL on which the partner's control
// channel can be reached. The Control Agent is required
// to run on the partner's machine with "http-host" and
// "http-port" values set to the corresponding values.
"url": "http://172.16.64.2:8000/",
"url": "http://172.16.64.21:8000/",
// The other server is secondary. This one must be
// primary.
"role": "standby"
@ -152,7 +152,7 @@
// There are no relays in this network, so we need to tell Kea that this subnet
// is reachable directly via the specified interface.
"interface": "enp0s8",
"interface": "enp0s9",
// Specify a dynamic address pool.
"pools": [
@ -171,7 +171,7 @@
{
// For each IPv4 subnet you typically need to specify at least one router.
"name": "routers",
"data": "172.16.64.1"
"data": "172.16.64.254"
},
{
// Using cloudflare or Quad9 is a reasonable option. Change this
@ -179,7 +179,7 @@
// choice is 8.8.8.8, owned by Google. Using third party DNS
// service raises some privacy concerns.
"name": "domain-name-servers",
"data": "172.16.64.1"
"data": "172.16.0.1"
}
],


@ -0,0 +1,8 @@
#variable kea
kea_ver: "2.4.1"
kea_dbname: ""
kea_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"


@ -0,0 +1 @@
### Public and private key generation ###


@ -0,0 +1,20 @@
---
- name: on genere une cle privee pour s-backup
openssh_keypair:
path: /root/id_rsa_sbackup
type: rsa
state: present
- name: copie cle publique dans gsbstore
copy:
src: /root/id_rsa_sbackup.pub
dest: /var/www/html/gsbstore
mode: 0644
remote_src: yes
- name: copie cle privee dans gsbstore
copy:
src: /root/id_rsa_sbackup
dest: /var/www/html/gsbstore
mode: 0600
remote_src: yes


@ -0,0 +1,13 @@
---
- name: creation .ssh
file:
path: ~/.ssh
state: directory
mode: 0700
- name: recuperation de la cle privee generee par s-adm
get_url:
url: http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup
dest: /root/.ssh/id_rsa_sbackup
mode: 0600


@ -0,0 +1,6 @@
---
- name: recuperation de la cle publique generee par s-adm
ansible.posix.authorized_key:
user: root
state: present
key: http://s-adm.gsb.adm/gsbstore/id_rsa_sbackup.pub
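
A manual check of the whole key chain once the three ssh-backup-key roles have run (the target host here is illustrative; any machine that applied ssh-backup-key-pub works):
```bash
# from s-backup: log in with the private key fetched from s-adm
ssh -i /root/.ssh/id_rsa_sbackup root@s-nxc.gsb.adm hostname
```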


@ -0,0 +1,21 @@
# Kea role
***
Kea role: configuration of two KEA servers in high-availability mode.
## Table of contents
1. [What does the Kea role do?]
2. [Installation and configuration of kea]
3. [Remarks]
## What does the Kea role do?
The Kea role configures two kea servers (s-kea1 and s-kea2) in high-availability mode.
- The **s-kea1** server runs in **primary** mode; it delivers the DHCP leases on the n-user network.
- The **s-kea2** server runs in **stand-by** mode; the DHCP service therefore fails over to **s-kea2** when **s-kea1** becomes unavailable.
### Installation and configuration of kea
The kea role installs the **kea dhcp4, hooks, admin** packages; once they are installed, it configures a kea server so that it hands out IPs on the n-user network and runs in high availability.
### Remarks ###
Once the **s-kea** playbook has finished correctly and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service to take into account the changes made to the network layer by the POST role.


@ -0,0 +1,7 @@
---
- name: Restart isc-stork-agent
ansible.builtin.service:
name: isc-stork-agent.service
state: restarted
enabled: yes


@ -0,0 +1,21 @@
---
- name: Preparation
ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash
- name: Update apt
ansible.builtin.apt:
update_cache: yes
- name: Installation isc-stork-agent
ansible.builtin.apt:
name: isc-stork-agent
state: present
- name: Generation du fichier de configuration agent.env
ansible.builtin.template:
src: agent.env.j2
dest: /etc/stork/agent.env
notify:
- Restart isc-stork-agent


@ -0,0 +1,45 @@
### the IP or hostname to listen on for incoming Stork server connections
STORK_AGENT_HOST={{ stork_host }}
### the TCP port to listen on for incoming Stork server connections
STORK_AGENT_PORT={{ stork_port }}
### listen for commands from the Stork server only, but not for Prometheus requests
# STORK_AGENT_LISTEN_STORK_ONLY=true
### listen for Prometheus requests only, but not for commands from the Stork server
# STORK_AGENT_LISTEN_PROMETHEUS_ONLY=true
### settings for exporting stats to Prometheus
### the IP or hostname on which the agent exports Kea statistics to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_ADDRESS=
### the port on which the agent exports Kea statistics to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT=
### how often the agent collects stats from Kea, in seconds
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_INTERVAL=
## enable or disable collecting per-subnet stats from Kea
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PER_SUBNET_STATS=true
### the IP or hostname on which the agent exports BIND 9 statistics to Prometheus
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_ADDRESS=
### the port on which the agent exports BIND 9 statistics to Prometheus
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT=
### how often the agent collects stats from BIND 9, in seconds
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_INTERVAL=
### Stork Server URL used by the agent to send REST commands to the server during agent registration
STORK_AGENT_SERVER_URL=http://s-backup.gsb.lan:8080/
### skip TLS certificate verification when the Stork Agent connects
### to Kea over TLS and Kea uses self-signed certificates
# STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
### Logging parameters
### Set logging level. Supported values are: DEBUG, INFO, WARN, ERROR
# STORK_LOG_LEVEL=DEBUG
### disable output colorization
# CLICOLOR=false
### path to the hook directory
# STORK_AGENT_HOOK_DIRECTORY=
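
The agent must then be registered against the server; a sketch of the manual variant (stork-agent's register subcommand prompts for the server token shown in the Stork web UI):
```bash
# interactive registration with the Stork server configured above
stork-agent register --server-url http://s-backup.gsb.lan:8080
```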


@ -0,0 +1,21 @@
# Kea role
***
Kea role: configuration of two KEA servers in high-availability mode.
## Table of contents
1. [What does the Kea role do?]
2. [Installation and configuration of kea]
3. [Remarks]
## What does the Kea role do?
The Kea role configures two kea servers (s-kea1 and s-kea2) in high-availability mode.
- The **s-kea1** server runs in **primary** mode; it delivers the DHCP leases on the n-user network.
- The **s-kea2** server runs in **stand-by** mode; the DHCP service therefore fails over to **s-kea2** when **s-kea1** becomes unavailable.
### Installation and configuration of kea
The kea role installs the **kea dhcp4, hooks, admin** packages; once they are installed, it configures a kea server so that it hands out IPs on the n-user network and runs in high availability.
### Remarks ###
Once the **s-kea** playbook has finished correctly and the **s-kea** machine has been rebooted, restart the **isc-kea-dhcp4.service** service to take into account the changes made to the network layer by the POST role.


@ -0,0 +1,8 @@
#variable kea
kea_ver: "2.4.1"
kea_dbname: ""
kea_dbuser: ""
kea_dbpasswd: ""
kea_dhcp4_dir: "/etc/kea/kea-dhcp4.conf"
kea_ctrl_dir: "/etc/kea/kea-ctrl-agent.conf"


@ -0,0 +1,6 @@
---
- name: Restart isc-stork-server.service
ansible.builtin.service:
name: isc-stork-server.service
state: restarted
enabled: yes


@ -0,0 +1,31 @@
---
- name: Preparation
ansible.builtin.shell: curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash
- name: Update apt
ansible.builtin.apt:
update_cache: yes
#- name: Installation paquet isc-kea-common
# ansible.builtin.apt:
# deb: isc-kea-common
# state: present
- name: Installation isc-stork-server postgresql
ansible.builtin.apt:
pkg:
- isc-stork-server
- postgresql-15
- name: lancer la commande de création de la base de donnees stork
ansible.builtin.shell: su postgres --command "stork-tool db-create --db-name {{ stork_db_name }} --db-user {{ stork_db_user }} --db-password {{ stork_db_passwd }}"
- name: Generation du fichier de configuration server.env
ansible.builtin.template:
src: server.env.j2
dest: /etc/stork/server.env
notify:
- Restart isc-stork-server.service


@ -0,0 +1,52 @@
### database settings
### the address of a PostgreSQL database
STORK_DATABASE_HOST=localhost
### the port of a PostgreSQL database
STORK_DATABASE_PORT=5432
### the name of a database
STORK_DATABASE_NAME={{ stork_db_name }}
### the username for connecting to the database
STORK_DATABASE_USER_NAME={{ stork_db_user }}
### the SSL mode for connecting to the database
### possible values: disable, require, verify-ca, or verify-full
# STORK_DATABASE_SSLMODE=
### the location of the SSL certificate used by the server to connect to the database
# STORK_DATABASE_SSLCERT=
### the location of the SSL key used by the server to connect to the database
# STORK_DATABASE_SSLKEY=
### the location of the root certificate file used to verify the database server's certificate
# STORK_DATABASE_SSLROOTCERT=
### the password for the username connecting to the database
### empty password is set to avoid prompting a user for database password
STORK_DATABASE_PASSWORD={{ stork_db_passwd }}
### REST API settings
### the IP address on which the server listens
# STORK_REST_HOST=
### the port number on which the server listens
# STORK_REST_PORT=
### the file with a certificate to use for secure connections
# STORK_REST_TLS_CERTIFICATE=
### the file with a private key to use for secure connections
# STORK_REST_TLS_PRIVATE_KEY=
### the certificate authority file used for mutual TLS authentication
# STORK_REST_TLS_CA_CERTIFICATE=
### the directory with static files served in the UI
STORK_REST_STATIC_FILES_DIR=/usr/share/stork/www
### the base URL of the UI - to be used only if the UI is served from a subdirectory
# STORK_REST_BASE_URL=
### enable Prometheus /metrics HTTP endpoint for exporting metrics from
### the server to Prometheus. It is recommended to secure this endpoint
### (e.g. using HTTP proxy).
# STORK_SERVER_ENABLE_METRICS=true
### Logging parameters
### Set logging level. Supported values are: DEBUG, INFO, WARN, ERROR
# STORK_LOG_LEVEL=DEBUG
### disable output colorization
# CLICOLOR=false
### path to the hook directory
# STORK_SERVER_HOOK_DIRECTORY=
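
After the handler restarts the service, a quick liveness check; a sketch assuming Stork's default REST port 8080, the one the agent role points at:
```bash
# the REST/UI endpoint should answer once isc-stork-server is up
curl -sI http://s-backup.gsb.lan:8080/ | head -n 1
```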


@ -24,7 +24,10 @@ bash r-vp1-post.sh
```
## On **r-vp2**:
Run the r-vp2-post.sh script to fetch the configuration file and bring up the wg0 interface.
Run the playbook on **r-vp2**: *ansible-playbook -i localhost, -c local r-vp2.yml*
Then run the r-vp2-post.sh script to fetch the configuration file and bring up the wg0 interface.
### 🛠️ Run the script
```bash
cd /tools/ansible/gsb2023/Scripts
@ -34,7 +37,11 @@ bash r-vp2-post.sh
```
## End
restart the machines
Finally, restart the machines.
```bash
reboot
```
Now go to the ferm role folder:
*gsb2024/roles/fw-ferm*
*Modification: jm*


@ -1,2 +1,3 @@
SERVER: "127.0.0.1"
SERVERACTIVE: "172.16.0.8"
SERVERACTIVE: "192.168.99.8"
TOKENAPI: "f72473b7e5402a5247773e456f3709dcdd5e41792360108fc3451bbfeed8eafe"


@ -28,3 +28,11 @@
state: restarted
enabled: yes
- name: mise en place script hostcreate
template:
src: hostcreate.sh.j2
dest: /tmp/hostcreate.sh
#- name: lancement script hostcreate
#command: bash /tmp/hostcreate.sh


@ -0,0 +1 @@
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{ "jsonrpc": "2.0", "method": "host.create", "params": { "host": "{{ ansible_hostname }}", "groups": [ { "groupid": "6" } ], "templates": [ { "templateid": "10343" } ], "inventory_mode": 0, "inventory": { "type": 0 } }, "auth": "{{ TOKENAPI }}", "id": 1 }' \
  http://{{ SERVERACTIVE }}/zabbix/api_jsonrpc.php


@ -29,65 +29,46 @@
name: mariadb
state: started
- name: 6. Créer la base de données
community.mysql.mysql_db:
name: zabbix
encoding: utf8mb4
collation: utf8mb4_bin
state: present
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 7. Creer un utilisateur et lui attribuer tous les droits
community.mysql.mysql_user:
name: zabbix
password: password
priv: '*.*:ALL,GRANT'
state: present
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 8. Modifier une variable pour importer un schema
- name: 6. Modifier la variable trust function creators pour importer la base données
community.mysql.mysql_variables:
variable: log_bin_trust_function_creators
value: 1
mode: global
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 9. Importer le schema initial
- name: 7. Récupérer la base de données
get_url:
url: http://s-adm.gsb.adm/gsbstore/zabbix.sql.gz
dest: /tmp
- name: 8. Importer la base de données
community.mysql.mysql_db:
state: import
name: zabbix
encoding: utf8mb4
login_user: zabbix
login_password: password
target: /usr/share/zabbix-sql-scripts/mysql/server.sql.gz
target: /tmp/zabbix.sql.gz
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 10. Modifier la variable pour le schema
- name: 9. Remettre a zero la variable trust function creators
community.mysql.mysql_variables:
variable: log_bin_trust_function_creators
value: 0
mode: global
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: 11. Configurer le mdp de la db
replace:
path: /etc/zabbix/zabbix_server.conf
regexp: '^# DBPassword='
replace: 'DBPassword=password'
- name: 12. Lancer le service zabbix-server
- name: 10. Lancer le service zabbix-server
service:
name: zabbix-server
state: restarted
enabled: yes
- name: 13. Lancer le service zabbix-agent
- name: 11. Lancer le service zabbix-agent
service:
name: zabbix-agent
state: restarted
enabled: yes
- name: 14. Lancer le service apache2
- name: 12. Lancer le service apache2
service:
name: apache2
state: restarted


@ -7,6 +7,7 @@
- s-ssh
- dnsmasq
- squid
- ssh-backup-key-gen
# - local-store
- zabbix-cli
## - syslog-cli


@ -1,14 +1,20 @@
---
- hosts: localhost
connection: local
vars:
stork_db_user: "stork-server"
stork_db_passwd: "Azerty1+"
stork_db_name: "stork"
roles:
- base
- goss
# - proxy3
- zabbix-cli
# - ssh-cli
# - syslog-cli
- gotify
- stork-server
- ssh-cli
#- syslog-cli
- smb-backup
- dns-slave
- post
- ssh-backup-key-private


@ -1,13 +1,24 @@
---
- hosts: localhost
connection: local
vars:
kea_this_server: "s-kea1"
kea_srv1: "s-kea1"
kea_srv2: "s-kea2"
kea_ctrl_address_this: "172.16.0.20"
kea_ctrl_address1: "172.16.0.20"
kea_ctrl_address2: "172.16.0.21"
kea_dhcp_int: "enp0s9"
stork_host: "s-kea1.gsb.lan"
stork_port: "8081"
roles:
- base
#- goss
#- ssh-cli
- kea-master
#- zabbix-cli
#- journald-snd
#- snmp-agent
- goss
- ssh-cli
- kea
- stork-agent
- zabbix-cli
- journald-snd
- snmp-agent
- post


@ -1,13 +1,24 @@
---
- hosts: localhost
connection: local
vars:
kea_this_server: "s-kea2"
kea_srv1: "s-kea1"
kea_srv2: "s-kea2"
kea_ctrl_address_this: "172.16.0.21"
kea_ctrl_address1: "172.16.0.20"
kea_ctrl_address2: "172.16.0.21"
kea_dhcp_int: "enp0s9"
stork_host: "s-kea2.gsb.lan"
stork_port: "8081"
roles:
- base
# - goss
# - ssh-cli
- kea-slave
# - zabbix-cli
# - journald-snd
# - snmp-agent
- goss
- ssh-cli
- kea
- stork-agent
- zabbix-cli
- journald-snd
- snmp-agent
- post


@ -4,6 +4,7 @@
roles:
- base
- goss
- post-lb
- lb-web
# - zabbix-cli


@ -4,6 +4,7 @@
roles:
- base
- goss
- post-lb
- lb-web
# - zabbix-cli


@ -9,6 +9,7 @@
roles:
- base
- goss
#- zabbix-cli
- lb-nfs-server
- ssh-cli

s-nxc.yaml Normal file

@ -0,0 +1,55 @@
command:
ls -l .:
exit-status: 0
stdout:
- total 200
- -rwxr-xr-x 1 root root 232 15 janv. 17:38 agoss
- -rw-r--r-- 1 root root 212 15 janv. 17:38 changelog
- drwxr-xr-x 3 root root 4096 15 janv. 17:38 doc
- drwxr-xr-x 2 root root 4096 19 janv. 10:50 goss
- -rwxr-xr-x 1 root root 209 15 janv. 17:38 gsbchk
- -rwxr-xr-x 1 root root 7174 15 janv. 17:38 gsbstart
- -rwxr-xr-x 1 root root 728 15 janv. 17:38 gsbstartl
- -rw-r--r-- 1 root root 289 15 janv. 17:38 lisezmoi.txt
- drwxr-xr-x 2 root root 4096 15 janv. 17:38 old
- drwxr-xr-x 2 root root 4096 19 janv. 09:16 pre
- -rw-r--r-- 1 root root 477 19 janv. 09:16 pull-config
- -rw-r--r-- 1 root root 5070 19 janv. 09:16 README.md
- -rw-r--r-- 1 root root 141 15 janv. 17:38 r-ext.yml
- -rw-r--r-- 1 root root 151 15 janv. 17:38 r-int.yml
- drwxr-xr-x 55 root root 4096 19 janv. 09:16 roles
- -rw-r--r-- 1 root root 177 15 janv. 17:38 r-vp1-fw.yml
- -rw-r--r-- 1 root root 259 15 janv. 17:38 r-vp1.yml
- -rw-r--r-- 1 root root 173 15 janv. 17:38 r-vp2-fw.yml
- -rw-r--r-- 1 root root 305 15 janv. 17:38 r-vp2.yml
- -rw-r--r-- 1 root root 181 19 janv. 09:16 s-adm.yml
- -rw-r--r-- 1 root root 119 15 janv. 17:38 s-agence.yml
- -rw-r--r-- 1 root root 166 19 janv. 09:16 s-appli.yml
- -rw-r--r-- 1 root root 182 19 janv. 09:16 s-backup.yml
- drwxr-xr-x 3 root root 4096 19 janv. 09:16 scripts
- -rw-r--r-- 1 root root 213 15 janv. 17:38 s-docker.yml
- -rw-r--r-- 1 root root 144 15 janv. 17:38 s-elk.yml
- -rw-r--r-- 1 root root 178 19 janv. 09:16 s-fog-post.yml
- -rw-r--r-- 1 root root 162 19 janv. 09:16 s-fog.yml
- -rw-r--r-- 1 root root 199 19 janv. 09:16 s-infra.yml
- -rw-r--r-- 1 root root 351 15 janv. 17:38 s-itil.yml
- -rw-r--r-- 1 root root 185 19 janv. 09:16 s-kea1.yml
- -rw-r--r-- 1 root root 174 19 janv. 09:16 s-kea2.yml
- -rw-r--r-- 1 root root 131 19 janv. 09:16 s-lb-bd.yml
- -rw-r--r-- 1 root root 127 19 janv. 09:16 s-lb-web1.yml
- -rw-r--r-- 1 root root 127 19 janv. 09:16 s-lb-web2.yml
- -rw-r--r-- 1 root root 145 19 janv. 09:16 s-lb.yml
- -rw-r--r-- 1 root root 148 19 janv. 09:16 s-mess.yml
- -rw-r--r-- 1 root root 241 19 janv. 09:16 s-mon.yml
- -rw-r--r-- 1 root root 290 19 janv. 09:16 s-nas.yml
- -rw-r--r-- 1 root root 156 15 janv. 17:38 s-nxc.yml
- -rw-r--r-- 1 root root 140 15 janv. 17:38 s-peertube.yml
- -rw-r--r-- 1 root root 148 19 janv. 09:16 s-proxy.yml
- -rw-r--r-- 1 root root 161 15 janv. 17:38 s-test.yml
- drwxr-xr-x 3 root root 4096 15 janv. 17:38 sv
- drwxr-xr-x 2 root root 4096 15 janv. 17:38 tests
- drwxr-xr-x 2 root root 4096 15 janv. 17:38 vagrant
- drwxr-xr-x 2 root root 4096 15 janv. 17:38 windows
- drwxr-xr-x 7 root root 4096 19 janv. 09:16 wireguard
stderr: []
timeout: 10000


@ -10,3 +10,4 @@
# - syslog-cli
- snmp-agent
- post
- ssh-backup-key-pub

wireguard/README.md Normal file

@ -0,0 +1,18 @@
# **Explanation:**
The Wireguard folder contains all the ping tests to run once the wireguard installation is fully complete.
The folders it contains hold the routes that must be present on the various machines. You can compare the interfaces with an "ip a" in case of malfunction.
# **Steps to run the tests:**
To check that the VPN works and run the test phase, go to the machine on which you want to run the ping tests (we will take ping-sinfra.sh as an example).
* Go to the tools/ansible/gsb2024/wireguard folder
* Run the s-infra script: bash ping-sinfra.sh
Once launched, a series of pings runs automatically; if all is well the script should reach its end.
If a ping does not get through, the script will block on the ping currently being executed!
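
A minimal version of such a test script (a sketch; the host list is illustrative, and it stops at the first unreachable host as described above):
```bash
#!/bin/bash
# ping each expected peer once; abort on the first failure
set -e
for h in 172.16.0.1 172.16.0.2 172.16.0.4; do
    echo "ping ${h}"
    ping -c 1 -W 2 "${h}" > /dev/null
done
echo "all pings OK"
```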
*Modification : jm*