Compare commits


42 Commits

SHA1 Message Date
6200de2cda correc role nfs-server 2023-01-26 11:23:29 +01:00
0074367972 wp tentative 2 2023-01-25 17:35:28 +01:00
3aa4a58252 modification README.md 2023-01-25 17:08:49 +01:00
8fd183998e Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 17:05:42 +01:00
f4b736847e Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 17:02:52 +01:00
5c8efd5e62 modification README.md 2023-01-25 17:02:49 +01:00
ab2cc8da96 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 17:02:42 +01:00
44c8fc32a5 tentative de faire marcher wp 1 2023-01-25 17:02:39 +01:00
385563b4f2 Mise à jour du playbook pour l'installation de GLPI 2023-01-25 16:54:29 +01:00
fff62c5507 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 16:34:11 +01:00
6139095296 MAJ role lb-web 2023-01-25 16:33:56 +01:00
9b609e6418 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 16:26:08 +01:00
332c8a2167 mise a jour goss s-agence 2023-01-25 16:25:40 +01:00
a3c2d85952 erreur dans lb-web 2023-01-25 16:09:44 +01:00
f8e3eabb9d Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 15:59:38 +01:00
043a273589 nouveau role lb-web 2023-01-25 15:59:35 +01:00
5981b67dd9 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 15:33:10 +01:00
36336384e6 haproxy FINAL correc 2023-01-25 15:31:26 +01:00
0da9fc0d5a mise a jour goss r-vp2 2023-01-25 15:25:07 +01:00
62f9591c62 goss s-backup 2023-01-25 15:24:53 +01:00
c32cf92cf5 correction role lb-front 2023-01-25 15:17:18 +01:00
d0ba31e795 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 11:29:48 +01:00
69aa1ac739 update test goss 2023-01-25 11:29:45 +01:00
90222678ce correction haproxy 2023-01-25 11:26:54 +01:00
1fc84c8f19 goss s-mon correction 2023-01-25 11:21:09 +01:00
b17d0fbac1 correction ip s-elk en 99.11 dns-master et compagnie 2023-01-25 11:07:20 +01:00
edbce48966 correc2 2023-01-25 11:02:49 +01:00
56f3780480 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 10:45:47 +01:00
5eae26a67c correction roles lb 2023-01-25 10:45:36 +01:00
7711d023e8 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-25 10:43:19 +01:00
1777bec595 mise a jour 2023-01-25 10:43:14 +01:00
12621bb60a ajout readme 2023-01-25 10:28:22 +01:00
592843932c modif doc README 2023-01-25 00:23:46 +01:00
abfe277180 script s-backup backup.sh trap 2023-01-24 10:49:32 +01:00
c2eb2b85a4 correction script gsb partage 2023-01-24 10:13:40 +01:00
c20f44ec6e mkusr-backup windows 2023-01-24 09:34:23 +01:00
0c7d48caf3 Merge branch 'main' of https://gitea.lyc-lecastel.fr/gadmin/gsb2023 2023-01-24 09:23:40 +01:00
12de1c8891 commenter erreur 2023-01-24 09:23:21 +01:00
5fffbc77e2 ajout echo pour ping 2023-01-24 08:50:27 +01:00
b1e87cdd1e modification ping infra 2023-01-23 11:32:54 +01:00
7f7207cf46 ortho 2023-01-21 17:37:36 +01:00
1187a5e28d doc... 2023-01-21 17:36:02 +01:00
44 changed files with 337 additions and 294 deletions


@ -1,35 +1,40 @@
# gsb2023
2023-01-18 ps
2023-01-25 ps
Environnement et playbooks ansible pour le projet GSB 2023
## Quickstart
prérequis :
Prérequis :
* une machine Debian Bullseye
* VirtualBox
* fichier machines viruelles ova :
* debian-bullseye-gsb-2023a.ova
* debian-buster-gsb-2023a.ova
* fichier machines viruelles **ova** :
* **debian-bullseye-gsb-2023a.ova**
* **debian-buster-gsb-2023a.ova**
## Les machines
* s-adm : routeur adm, DHCP + NAT, deploiement, proxy squid
* s-infra : DNS maitre
* r-int : routaage, DHCP
* r-ext : routage, NAT
* s-proxy : squid
* s-itil : serveur GLPI
* s-backup : DNS esclave + sauvegarde s-win
* s-mon : supervision avec **Nagios4** et syslog
* s-fog : deploiement postes de travail avec **FOG**
* s-win : Windows Server 2019, AD, DNS, DHCP, partage fichiers
* s-nxc : NextCloud avec **docker**
* s-elk : pile ELK dockerisée
* s-lb : Load Balancer **HaProxy** pour application Wordpress
* r-vp1 : Routeur VPN Wireguard coté siège
* r-vp2 : Routeur VPN Wireguard coté agence, DHCP
* **s-adm** : routeur adm, DHCP + NAT, deploiement, proxy squid
* **s-infra** : DNS maitre, autoconfiguration navigateurs avec **wpad**
* **r-int** : routage, DHCP
* **r-ext** : routage, NAT
* **s-proxy** : squid
* **s-itil** : serveur GLPI
* **s-backup** : DNS esclave + sauvegarde s-win (SMB)
* **s-mon** : supervision avec **Nagios4**, notifications et syslog
* **s-fog** : deploiement postes de travail avec **FOG**
* **s-win** : Windows Server 2019, AD, DNS, DHCP, partage fichiers
* **s-nxc** : NextCloud avec **docker**
* **s-elk** : pile ELK dockerisée
* **s-lb** : Load Balancer **HaProxy** pour application Wordpress (DMZ)
* **r-vp1** : Routeur VPN Wireguard coté siège
* **r-vp2** : Routeur VPN Wireguard coté agence, DHCP
* **s-agence** : Serveur agence
* **s-lb** : Load Balancer **HaProxy** pour application Wordpress
* **s-lb-web1** : Serveur Wordpress 1 Load Balancer
* **s-lb-web2** : Serveur Wordpress 2 Load Balancer
* **s-lb-db** : Serveur Mariadb pour Wordpress
* **s-lb-nfs** : Serveur NFS pour application Wordpress
## Les playbooks
@ -39,7 +44,7 @@ prérequis :
On utilisera l'image de machine virtuelle suivante :
* **debian-bullseye-2023a.ova** (2023-01-06)
* Debian Bullseye 11 - 2 cartes - 1 Go - stockage 20 Go
* Debian Bullseye 11.6 - 2 cartes - 1 Go - stockage 20 Go
### Machine s-adm
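
As a complement to the Quickstart above, importing the **ova** images can be scripted with the VirtualBox CLI. A minimal sketch, assuming the two .ova files are in the current directory and that the derived VM names are free; this is not part of the repository's tooling.

```bash
# Import the two GSB base images into VirtualBox (sketch)
for ova in debian-bullseye-gsb-2023a.ova debian-buster-gsb-2023a.ova ; do
  [ -f "${ova}" ] || { echo "missing ${ova}" >&2 ; exit 1 ; }
  # --vsys 0 targets the single appliance described in the OVA
  VBoxManage import "${ova}" --vsys 0 --vmname "${ova%.ova}"
done
VBoxManage list vms   # the imported VMs should now be listed
```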


@ -1,67 +1,56 @@
file:
/etc/wireguard/wg0.conf:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contains:
- AllowedIPs = 10.0.0.2/32, 172.16.128.0/24
package:
# ferm:
# installed: true
strongswan:
wireguard:
installed: true
port:
udp:68:
listening: true
versions:
- 1.0.20210223-1
wireguard-tools:
installed: true
versions:
- 1.0.20210223-1
service:
# dnsmasq:
# enabled: true
# running: true
strongswan:
enabled: true
running: true
ssh:
wg-quick@wg0:
enabled: true
running: true
command:
sysctl net.ipv4.ip_forward:
host 192.168.99.99:
exit-status: 0
stdout:
- net.ipv4.ip_forward = 1
- 99.99.168.192.in-addr.arpa domain name pointer s-adm.gsb.adm.
stderr: []
timeout: 10000
command:
ping -c 4 192.168.0.52:
ping -c4 10.0.0.2:
exit-status: 0
stdout:
- 4 received = 1
- 0% packet loss
stderr: []
timeout: 10000
command:
ping -c 4 192.168.1.1:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
command:
ping -c 4 192.168.200.254:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
command:
ping -c 4 172.16.0.1:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
#process:
# dnsmasq:
# running: true
# squid:
# running: true
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.112/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 192.168.0.51/24
- 192.168.1.2/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.1.2/24
- 192.168.0.51/24
mtu: 1500
wg0:
exists: true
addrs:
- 10.0.0.1/32
mtu: 1420
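
These goss files are meant to be run on the machine they describe. A minimal sketch of a validation run on r-vp1, assuming goss is installed and the repository is checked out under /root/tools/ansible/gsb2023 (path assumed from the r-vp1 document further down):

```bash
# Run the r-vp1 checks with goss (sketch)
cd /root/tools/ansible/gsb2023 || exit 1
# -g selects the gossfile, "validate" executes every declared test
goss -g goss/r-vp1.yaml validate --format documentation
```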

goss/r-vp2.yaml (new file, 52 lines)

@ -0,0 +1,52 @@
file:
/etc/wireguard/wg0.conf:
exists: true
mode: "0644"
owner: root
group: root
filetype: file
contains: []
package:
wireguard:
installed: true
versions:
- 1.0.20210223-1
wireguard-tools:
installed: true
versions:
- 1.0.20210223-1
service:
isc-dhcp-server:
enabled: true
running: true
wg-quick@wg0:
enabled: true
running: true
command:
ping -c4 10.0.0.1:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.102/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.128.254/24
mtu: 1500
enp0s9:
exists: true
addrs:
- 192.168.0.52/24
mtu: 1500
wg0:
exists: true
addrs:
- 10.0.0.2/32
mtu: 1420
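
When a goss run on r-vp2 fails, the same points can be checked by hand. A short sketch using the addresses declared above (wg0 at 10.0.0.2/32, peer r-vp1 at 10.0.0.1):

```bash
# Manual check of the Wireguard tunnel on r-vp2 (sketch)
systemctl is-active wg-quick@wg0   # service goss expects to be running
wg show wg0                        # latest handshake and transfer counters
ip -br addr show wg0               # expect 10.0.0.2/32
ping -c4 10.0.0.1                  # r-vp1 through the tunnel
```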


@ -1,67 +0,0 @@
package:
ferm:
installed: true
ipsec:
installed: true
port:
tcp:53:
listening: true
udp:67:
listening: true
udp:68:
listening: true
service:
dnsmasq:
enabled: true
running: true
ferm:
enabled: true
running: true
ssh:
enabled: true
running: true
command:
sysctl net.ipv4.ip_forward:
exit-status: 0
stdout:
- net.ipv4.ip_forward = 1
stderr: []
timeout: 10000
sysctl ping -c 4 192.168.0.51:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
sysctl ping -c 4 192.168.1.1:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
sysctl ping -c 4 192.168.200.254:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
sysctl ping -c 4 172.16.0.1:
exit-status: 0
stdout:
- 4 received = 1
stderr: []
timeout: 10000
process:
dnsmasq:
running: true
squid3:
running: true
interface:
enp0s8:
exists: true
addrs:
- 172.16.128.254/24
enp0s9:
exists: true
addrs:
- 192.168.0.52/24


@ -1,39 +1,19 @@
command:
ip r:
ip route |grep default:
exit-status: 0
stdout:
- default via 172.16.128.254 dev enp0s8
- 172.16.128.0/24
- 192.168.99.0/24
stderr: []
timeout: 10000
ping -c 2 172.16.128.254:
ping -c4 172.16.0.1:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 2 192.168.1.2:
ping -c4 172.16.128.254:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 2 192.168.1.1:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 2 192.168.200.254:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
ping -c 2 172.16.0.1:
exit-status: 0
stdout:
- 0% packet loss
- 0% packet loss
stderr: []
timeout: 10000

goss/s-backup.yaml (new file, 41 lines)

@ -0,0 +1,41 @@
package:
bind9:
installed: true
cifs-utils:
installed: true
rsync:
installed: true
smbclient:
installed: true
service:
bind9:
enabled: true
running: true
rsync:
enabled: true
running: false
command:
ping -c4 ns.gsb.lan:
exit-status: 0
stdout:
- 0% packet loss
stderr: []
timeout: 10000
#check si partage windows accesible
smbclient -L //s-win --user=uBackup%Azerty1+ | grep 'public':
exit-status: 0
stdout:
- public
stderr: []
timeout: 10000
interface:
enp0s3:
exists: true
addrs:
- 192.168.99.4/24
mtu: 1500
enp0s8:
exists: true
addrs:
- 172.16.0.4/24
mtu: 1500
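
The two checks specific to s-backup (slave DNS and access to the s-win share) can be reproduced manually on the server, reusing the credentials that already appear in this gossfile; a sketch:

```bash
# Manual counterpart of the s-backup goss checks (sketch)
dig @127.0.0.1 ns.gsb.lan +short                              # the bind9 slave answers for gsb.lan
smbclient -L //s-win --user=uBackup%Azerty1+ | grep public    # the public share is visible on s-win
```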


@ -49,7 +49,7 @@ interface:
enp0s3:
exists: true
addrs:
- 192.168.99.104/24
- 192.168.99.8/24
enp0s8:
exists: true
addrs:


@ -9,16 +9,23 @@ apt update && apt upgrade
apt install -y apache2 git
STOREREP="/var/www/html/gsbstore"
GLPIREL=10.0.5
GLPIREL=10.0.6
str="wget -nc https://github.com/glpi-project/glpi/releases/download/${GLPIREL}/glpi-${GLPIREL}.tgz"
FIREL=10.0.3+1.0
str2="https://github.com/fusioninventory/fusioninventory-for-glpi/releases/download/glpi${FIREL}/fusioninventory-${FIREL}.tar.bz2"
FIAGREL=2.6
str31="wget -nc https://github.com/fusioninventory/fusioninventory-agent/releases/download/${FIAGREL}/fusioninventory-agent_windows-x64_${FIAGREL}.exe"
#Fusion Inventory
#FIREL=10.0.3+1.0
#str2="https://github.com/fusioninventory/fusioninventory-for-glpi/releases/download/glpi${FIREL}/fusioninventory-${FIREL}.tar.bz2"
#GLPI Agent
GLPIAGVER=1.4
str31="wget -nc https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x64.msi"
str32="wget -nc https://github.com/glpi-project/glpi-agent/releases/download/${GLPIAGVER}/GLPI-Agent-${GLPIAGVER}-x86.msi"
str32="wget -nc https://github.com/fusioninventory/fusioninventory-agent/releases/download/${FIAGREL}/fusioninventory-agent_windows-x86_${FIAGREL}.exe"
FOGREL=1.5.9
str4="wget -nc https://github.com/FOGProject/fogproject/archive/${FOGREL}.tar.gz -O fogproject-${FOGREL}.tar.gz"
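
After getall has run on s-adm, the store can be checked quickly. A sketch, assuming the downloads end up in STOREREP with the versions set above (GLPIREL=10.0.6, GLPIAGVER=1.4):

```bash
# Check that the expected archives are present in the gsbstore (sketch)
STOREREP="/var/www/html/gsbstore"
GLPIREL=10.0.6
GLPIAGVER=1.4
for f in "glpi-${GLPIREL}.tgz" \
         "GLPI-Agent-${GLPIAGVER}-x64.msi" \
         "GLPI-Agent-${GLPIAGVER}-x86.msi" ; do
  ls -lh "${STOREREP}/${f}" 2>/dev/null || echo "missing: ${f}"
done
```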


@ -10,18 +10,23 @@
192.168.99.3 s-appli.gsb.adm
192.168.99.4 s-backup.gsb.adm
192.168.99.5 s-puppet.gsb.adm
192.168.99.6 s-win.gsb.adm
192.168.99.6 s-win.gsb.adm
192.168.99.7 s-nxc.gsb.adm
192.168.99.8 s-mon.gsb.adm
192.168.99.9 s-itil.gsb.adm
192.168.99.10 s-sspec.gsb.adm
192.168.99.11 s-web-ext.gsb.adm
192.168.99.10 s-lb.gsb.adm
192.168.99.11 s-elk.gsb.adm
192.168.99.10 s-dns.gsb.adm
192.168.99.12 r-int.gsb.adm
192.168.99.13 r-ext.gsb.adm
192.168.99.14 s-nas.gsb.adm
192.168.99.15 s-san.gsb.adm
192.168.99.16 s-fog.gsb.adm
192.168.99.50 s-lb-bd.gsb.adm
192.168.99.101 s-lb-web1.gsb.adm
192.168.99.102 s-lb-web2.gsb.adm
192.168.99.103 s-lb-web3.gsb.adm
192.168.99.8 syslog.gsb.adm


@ -11,16 +11,20 @@
192.168.99.3 s-appli.gsb.adm
192.168.99.4 s-backup.gsb.adm
192.168.99.5 s-puppet.gsb.adm
192.168.99.6 s-win.gsb.adm
192.168.99.6 s-win.gsb.adm
192.168.99.7 s-nxc.gsb.adm
192.168.99.8 s-mon.gsb.adm
192.168.99.9 s-itil.gsb.adm
192.168.99.10 s-sspec.gsb.adm
192.168.99.11 s-web-ext.gsb.adm
192.168.99.10 s-lb.gsb.adm
192.168.99.11 s-elk.gsb.adm
192.168.99.10 s-dns.gsb.adm
192.168.99.12 r-int.gsb.adm
192.168.99.13 r-ext.gsb.adm
192.168.99.14 s-nas.gsb.adm
192.168.99.50 s-lb-bd.gsb.adm
192.168.99.101 s-lb-web1.gsb.adm
192.168.99.102 s-lb-web2.gsb.adm
192.168.99.103 s-lb-web3.gsb.adm
192.168.99.8 syslog.gsb.adm
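
Several addresses above intentionally carry more than one name (for example 192.168.99.8 for s-mon and syslog). A small sketch that lists every /etc/hosts address used by several lines, which makes accidental duplicates easy to spot:

```bash
# List /etc/hosts addresses declared on more than one line (sketch)
awk '$1 ~ /^[0-9]/ {print $1}' /etc/hosts | sort | uniq -d |
while read -r ip ; do
  grep "^${ip}[[:space:]]" /etc/hosts
done
```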


@ -5,7 +5,7 @@
;
$TTL 604800
@ IN SOA s-infra.gsb.lan. root.s-infra.gsb.lan. (
2022041200 ; Serial
2023012500 ; Serial
7200 ; Refresh
86400 ; Retry
8419200 ; Expire
@ -25,7 +25,7 @@ s-nxc IN A 172.16.0.7
s-docker IN A 172.16.0.7
s-mon IN A 172.16.0.8
s-itil IN A 172.16.0.9
s-elk IN A 172.16.0.10
s-elk IN A 172.16.0.11
s-gestsup IN A 172.16.0.17
r-int IN A 172.16.0.254
r-int-lnk IN A 192.168.200.254


@ -5,7 +5,7 @@
;
$TTL 604800
@ IN SOA s-infra.gsb.lan. root.s-infra.gsb.lan. (
2022041200 ; Serial
2023012500 ; Serial
7200 ; Refresh
86400 ; Retry
8419200 ; Expire
@ -20,12 +20,12 @@ $TTL 604800
6.0 IN PTR s-win.gsb.lan.
7.0 IN PTR s-nxc.gsb.lan.
8.0 IN PTR s-mon.gsb.lan.
9.0 IN PTR s-itil.gsb.lan.
9.0 IN PTR s-itil.gsb.lan.
101.1 IN PTR s-web1
101.2 IN PTR s-web2
100.10 IN PTR s-lb
100.10 IN PTR s-lb.gsb.lan
10.0 IN PTR s-elk.gsb.lan.
11.0 IN PTR s-elk.gsb.lan.
17.0 IN PTR s-gestsup.lan
254.0 IN PTR r-int.gsb.lan.
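
After bumping the serial and moving s-elk to .11, the zones can be verified before reloading bind9. A sketch, with assumed zone file names (they are not shown in this diff) and the s-infra resolver at 172.16.0.1:

```bash
# Check the forward and reverse zones on s-infra (sketch, file names assumed)
named-checkzone gsb.lan /etc/bind/db.gsb.lan
named-checkzone 16.172.in-addr.arpa /etc/bind/db.gsb.lan.rev
# after "rndc reload", both directions should agree on s-elk
dig @172.16.0.1 s-elk.gsb.lan +short     # expected: 172.16.0.11
dig @172.16.0.1 -x 172.16.0.11 +short    # expected: s-elk.gsb.lan.
```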


@ -0,0 +1,6 @@
depl_url: "http://s-adm.gsb.adm/gsbstore"
#depl_glpi: "glpi-9.5.6.tgz"
depl_glpi: "glpi-10.0.6.tgz"
#depl_fusioninventory: "fusioninventory-9.5+3.0.tar.bz2"
depl_glpi_agentx64: "GLPI-Agent-1.4-x64.msi"
depl_glpi_agentx86: "GLPI-Agent-1.4-x86.msi"


@ -105,12 +105,12 @@
# - name: copy .my.cnf file with root password credentials
# copy: src=.my.cnf dest=/root/tools/ansible/.my.cnf owner=root mode=0600
- name: Installation de Fusioninventory pour Linux
unarchive:
src: "{{ depl_url }}/{{ depl_fusioninventory }}"
#src: http://depl/gsbstore/fusioninventory-{{ fd_version }}.tar.bz2
dest: /var/www/html/glpi/plugins
remote_src: yes
# - name: Installation de Fusioninventory pour Linux
# unarchive:
# src: "{{ depl_url }}/{{ depl_fusioninventory }}"
#src: http://depl/gsbstore/fusioninventory-{{ fd_version }}.tar.bz2
# dest: /var/www/html/glpi/plugins
# remote_src: yes
- name: Creation de ficlient
file:
@ -127,15 +127,15 @@
group: www-data
mode: 0775
- name: Installation de FusionInventory windows x64
- name: Installation de GLPI Agent windows x64
get_url:
url: "{{ depl_url }}/{{ depl_fusioninventory_agentx64 }}"
url: "{{ depl_url }}/{{ depl_glpi_agentx64 }}"
dest: "/var/www/html/ficlients"
- name: Installation de FusionInventory windows x86
get_url:
url: "{{ depl_url }}/{{ depl_fusioninventory_agentx86 }}"
dest: "/var/www/html/ficlients"
# - name: Installation de GLPI Agent windows x86
# get_url:
# url: "{{ depl_url }}/{{ depl_glpi_agentx86 }}"
# dest: "/var/www/html/ficlients"
- name: Attribution des permissions sur repertoire /plugins/fusioninventory
file:


@ -1,6 +0,0 @@
depl_url: "http://s-adm.gsb.adm/gsbstore"
#depl_glpi: "glpi-9.5.6.tgz"
depl_glpi: "glpi-10.0.5.tgz"
depl_fusioninventory: "fusioninventory-9.5+3.0.tar.bz2"
depl_fusioninventory_agentx64: "fusioninventory-agent_windows-x64_2.6.exe"
depl_fusioninventory_agentx86: "fusioninventory-agent_windows-x86_2.6.exe"


@ -44,7 +44,6 @@ backend fermeweb
#option httpchk HEAD / HTTP/1.0
server s-lb-web1 192.168.101.1:80 check
server s-lb-web2 192.168.101.2:80 check
#server s-lb-web3 192.168.101.3:80 check
listen stats


@ -8,18 +8,18 @@
path: /etc/haproxy/haproxy.cfg
block: |
frontend proxypublic
bind 192.168.56.2:80
bind 192.168.100.10:80
default_backend fermeweb
backend fermeweb
balance roundrobin
option httpclose
#option httpchk HEAD / HTTP/1.0
server web1.test 192.168.56.3:80 check
#server web2.test 192.168.56.4:80 check
server s-lb-web1 192.168.101.1:80 check
server s-lb-web2 192.168.101.2:80 check
- name: redemarre haproxy
service:
name: haproxy
state: restarted
# state: restarted
enabled: yes
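
Once the block above is in place on s-lb, the configuration and the round robin can be checked by hand. A sketch, using the frontend address 192.168.100.10 declared in the block:

```bash
# Validate the HAProxy configuration, then watch the round robin (sketch)
haproxy -c -f /etc/haproxy/haproxy.cfg   # configuration check only, nothing is restarted
systemctl restart haproxy
# two successive requests should alternate between s-lb-web1 and s-lb-web2
curl -sI http://192.168.100.10/ | head -n1
curl -sI http://192.168.100.10/ | head -n1
```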


@ -7,4 +7,4 @@
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
/home/wordpress 192.168.102.0/255.255.255.0 (rw,no_root_squash,subtree_check)
/home/wordpress 192.168.102.0/255.255.255.0(rw,no_root_squash,subtree_check)
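
The whitespace fix above matters: in /etc/exports, a space before the parenthesis attaches the options to every host (the wildcard) while the listed network falls back to the defaults. A sketch of how to re-apply and verify the export; the NFS server address used from a client is a placeholder:

```bash
# Re-export and verify the wordpress share on the NFS server (sketch)
exportfs -ra           # re-read /etc/exports
exportfs -v            # the options must follow 192.168.102.0/24, not <world>
showmount -e localhost
# from a web server (NFS server address is a placeholder):
# mount -t nfs 192.168.102.253:/home/wordpress /home/wordpress
```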


@ -1,38 +1,10 @@
---
- name: creation repertoir
file:
path: /home/
state: directory
- name: download and extract wordpress
unarchive:
src: "{{ depl_url }}/{{ depl_wordpress }}"
dest: /home/
remote_src: yes
owner: www-data
group: www-data
- name: Copy sample config file
command: mv /home/wordpress/wp-config-sample.php /home/wordpress/wp-config.php creates=/home/wordpress/wp-config.php
- name: Changement du fichier de conf
copy:
src: wp-config.php
dest: /home/wordpress/wp-config.php
- name: Attributions des permissions
file:
path: /home/wordpress
recurse: yes
owner: 33
group: 33
# - name: Fix permissions
# shell: chown -R www-data /var/www/wordpress/*
#
# - name: Update default Apache site
# lineinfile:
# dest=/etc/apache2/sites-enabled/000-default.conf
# regexp="(.)+DocumentRoot /var/www/html"
# line="DocumentRoot /var/www/wordpress"
# notify:
# - restart apache2
---
- name: installation php et apache ...
apt:
name:
- apache2
- php
- php-mbstring
- php-mysql
- mariadb-client
state: present


@ -2,19 +2,19 @@
Nextcloud et Traefik fonctionnent grâce à docker. Pour pouvoir faire fonctionner ce playbook, docker doit être installé.
## 1.
## 1.
Le playbook crée le dossier **nxc** à la racine de root.
Les fichiers "nextcloud.yml" et "traefik.yml" y seront copiés depuis le répertoire "files" du playbook.
Enfin, dans le répertoire nxc, seront créés les répertoires **certs** et **config**.
Enfin, dans le répertoire nxc, sont créés les répertoires **certs** et **config**.
## 2. Copie des fichiers
Le playbook copie les fichiers placés dans "files" et les placer dans les bons répertoires.
## 3. Generation du certificat
## 3. Génération du certificat
Le playbook crée un certificat **x509** grâce à **mkcert**, il s'agit d'une solution permettant de créer des certificats auto-signés. Pour cela, il télécharge **mkcert** sur **s-adm** (utiliser le script **getall**).
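
For reference, the certificate step can be reproduced by hand with **mkcert**. A short sketch, assuming the nxc directory described above and a hypothetical hostname s-nxc.gsb.lan:

```bash
# Generate a locally trusted certificate for Nextcloud/Traefik (sketch)
cd /root/nxc || exit 1
mkcert -install                                  # create and install the local CA
mkcert -cert-file certs/nxc.crt \
       -key-file  certs/nxc.key s-nxc.gsb.lan    # hostname is an assumption
```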


@ -8,13 +8,13 @@ iface lo inet loopback
# cote N-adm
allow-hotplug enp0s3
iface enp0s3 inet static
address 192.168.99.10
address 192.168.99.11
netmask 255.255.255.0
gateway 192.168.99.99
# cote N-infra
allow-hotplug enp0s8
iface enp0s8 inet static
address 172.16.0.10
address 172.16.0.11
netmask 255.255.255.0
post-up route add -net 172.16.64.0/24 gw 172.16.0.254


@ -1,27 +1,51 @@
#!/bin/bash
BDIR=/home/backup
SWIN=/tmp/s-win
LOCK=/tmp/s-backup.lock
#Fonction cleanup pour sortir propre dans tout les cas
cleanup()
{
rm "${LOCK}"
umount "${SWIN}"
echo "nettoyage effectue, sortie tout propre ..."
exit 3
}
#check si pas deja en cours d execution > sortie si fichier de lock existe
if [ -e "${LOCK}" ] ; then
echo "$0 : Verrouillage, deja en cours d execution"
trap cleanup 1 2 3 6
fi
#prepartion des dossiers qui vont accueillir les donnees à sauvegarder
[ -d "${BDIR}" ] || mkdir "${BDIR}"
[ -d "${BDIR}" ] || mkdir "${BDIR}/s-win"
[ -d "${BDIR}/s-win" ] || mkdir "${BDIR}/s-win"
[ -d "${SWIN}" ] || mkdir "${SWIN}"
mount -t cifs -o ro,vers=3.0,username=u-backup,password=Azerty1+ //s-win/commun "${SWIN}"
#etablissement du lock
touch "${LOCK}"
mount -t cifs -o ro,vers=3.0,username=uBackup,password=Azerty1+ //s-win/commun "${SWIN}"
if [ $? != 0 ] ; then
echo "$0 : erreur montage ${SWIN}"
exit 1
rm "${LOCK}"
trap cleanup 1 2 3 6
fi
rsync -av "${SWIN}/" "${BDIR}/s-win/commun"
umount "${SWIN}"
mount -t cifs -o ro,vers=3.0,username=u-backup,password=Azerty1+ //s-win/public "${SWIN}"
mount -t cifs -o ro,vers=3.0,username=uBackup,password=Azerty1+ //s-win/public "${SWIN}"
if [ $? != 0 ] ; then
echo "$0 : erreur montage"
exit 2
echo "$0 : erreur montage ${SWIN}"
trap cleanup 1 2 3 6
fi
rsync -av "${SWIN}/" "${BDIR}/s-win/public"
umount "${SWIN}"
#libere le verrou
rm "${LOCK}"
exit 0
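
The lock file and the trap are the heart of this change. As a complement, a minimal self-contained sketch of the same lock-plus-trap pattern, independent of the s-win shares, that cleans up on normal exit as well as on interruption:

```bash
# Minimal lock + trap pattern (sketch, not the project script)
LOCK=/tmp/s-backup.lock
cleanup() {
  rm -f "${LOCK}"
  echo "cleanup done" >&2
}
if [ -e "${LOCK}" ] ; then
  echo "$0: already running (${LOCK} present)" >&2
  exit 3
fi
touch "${LOCK}"
trap cleanup EXIT HUP INT TERM   # cleanup runs on exit and on signals
# ... mount, rsync, umount would go here ...
exit 0
```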


@ -0,0 +1,5 @@
#ajout du sleep 5
éditer "/etc/init.d/isc-dhcp-server"
aller au "case \"$1\" in" et rajouter "sleep 5" avant le "if"
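
The note above patches the init script directly. On a systemd machine an alternative (not what the note describes) is a drop-in that delays the unit start; a sketch:

```bash
# Delay isc-dhcp-server startup with a systemd drop-in instead of editing the init script (sketch)
mkdir -p /etc/systemd/system/isc-dhcp-server.service.d
cat > /etc/systemd/system/isc-dhcp-server.service.d/wait.conf <<'EOF'
[Service]
ExecStartPre=/bin/sleep 5
EOF
systemctl daemon-reload
systemctl restart isc-dhcp-server
```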


@ -17,5 +17,5 @@
#- name: copie du fichier de configuration depuis r-vp1
# command: "sshpass -p 'root' scp -r root@192.168.99.112:/root/confwg/wg0-b.conf /etc/wireguard/"
- name: renommage du fichier de configuration
command: "mv /etc/wireguard/wg0-b.conf /etc/wireguard/wg0.conf"
#- name: renommage du fichier de configuration
# command: "mv /etc/wireguard/wg0-b.conf /etc/wireguard/wg0.conf"


@ -1,14 +1,13 @@
#Installation de r-vp1 (Wireguard)
Procédure d'installation de r-vp1 et de copie du fichier wg0-b.conf.
***
***
Ce fichier à pour but de présenter l'installation de r-vp1
***
Depuis r-vp1 se deplacer dans le repertoire **/tools/ansible/gsb2023** pour executer le playbook:
**"ansible-playbook -i localhost, -c local r-vp1.yml"** puis reboot r-vp1.
Attendre la fin de l'installation. Ensuite faire une copie distante du fichier
wg0-b.conf sur r-vp2 **"scp /confwg/wg0-b.conf root@'ip r-vp2':/etc/wireguard/"**.
Se rendre dans le dossier gsb2022 et éxécuter la commande suivante :
_"ansible-playbook -i localhost, -c local r-vp1.yml"_
Attendre la fin de l'installation, puis se rendre dans le dossier confwg
Faites une copie à distance du fichier wg0-b.conf sur r-vp2 et déplacer le fichier wg0-a.conf localement dans /etc/wireguard
Renommer les deux fichiers en wg0.conf
Executer _"systemctl enable wg-quick@wg0"_ puis _"systemctl start wg-quick@wg0"_ sur r-vp1 et r-vp2
Entrer la commande _"wg"_ si des paquets sont envoyés et reçus votre VPN fonctionne.
Lorsque votre infrastructure est prête rendez vous dans gsb2022 et éxécuter le **fichier ping-sagence** afin vérifier le bon fonctionnement.
Renommer les fichiers en **wg0.conf**
Executer **"systemctl enable wg-quick@wg0"** puis **"systemctl start wg-quick@wg0"** sur r-vp1 et r-vp2.
Entrer la commande **"wg"** pour voir si l'interface wg0 est correctement montée.
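
Condensed into commands, the procedure reads roughly as the sketch below; the repository path, the confwg location and the r-vp2 address (taken from goss/r-vp2.yaml) are assumptions:

```bash
# Command-level summary of the r-vp1 / r-vp2 procedure (sketch)
# on r-vp1:
cd /root/tools/ansible/gsb2023
ansible-playbook -i localhost, -c local r-vp1.yml
scp /root/confwg/wg0-b.conf root@192.168.99.102:/etc/wireguard/wg0.conf   # r-vp2 side
mv  /root/confwg/wg0-a.conf /etc/wireguard/wg0.conf                       # local side
# on both routers:
systemctl enable --now wg-quick@wg0
wg   # handshakes and transfer counters confirm the tunnel
```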


@ -4,6 +4,7 @@
roles:
- base
- goss
# - proxy3
- snmp-agent
# - ssh-cli


@ -2,23 +2,20 @@
- hosts: localhost
connection: local
vars:
#vars:
glpi_version: "9.4.5"
fd_version: "9.4+1.1"
fd_version64: "x64_2.5.2"
fd_version86: "x86_2.5.2"
glpi_dir: "/var/www/html/glpi"
glpi_dbhost: "127.0.0.1"
glpi_dbname: "glpi"
glpi_dbuser: "glpi"
glpi_dbpasswd: "glpi"
#glpi_version: "9.4.5"
#glpi_dir: "/var/www/html/glpi"
#glpi_dbhost: "127.0.0.1"
#glpi_dbname: "glpi"
#glpi_dbuser: "glpi"
#glpi_dbpasswd: "glpi"
roles:
- base
- goss
- snmp-agent
- itil
- glpi
- ssh-cli
- syslog-cli
- post


@ -4,8 +4,7 @@
roles:
- base
- s-lb-web-ab
- lb-web
- snmp-agent
- s-nas-client
- lb-nfs-client
- post


@ -4,8 +4,7 @@
roles:
- base
- s-lb-web-ab
- lb-web
- snmp-agent
- s-nas-client
- lb-nfs-client
- post


@ -5,7 +5,7 @@
roles:
- base
- goss
- s-lb-ab
- lb-front
- snmp-agent
- post


@ -10,8 +10,7 @@
roles:
- base
- snmp-agent
- s-lb-wordpress
- s-nas-server
- lb-nfs-server
- ssh-cli
- syslog-cli
- post


@ -97,11 +97,11 @@ elif [[ "${vm}" == "s-nxc" ]] ; then
create_if "${vm}" "n-adm" "n-infra"
elif [[ "${vm}" == "s-lb" ]] ; then
create_if "${vm}" "n-adm" "n-dmz" "n-dmz-lb"
elif [[ "${vm}" == "s-web1" ]] ; then
elif [[ "${vm}" == "s-lb-web1" ]] ; then
create_if "${vm}" "n-adm" "n-dmz-lb" "n-dmz-db"
elif [[ "${vm}" == "s-web2" ]] ; then
elif [[ "${vm}" == "s-lb-web2" ]] ; then
create_if "${vm}" "n-adm" "n-dmz-lb" "n-dmz-db"
elif [[ "${vm}" == "s-web3" ]] ; then
elif [[ "${vm}" == "s-lb-web3" ]] ; then
create_if "${vm}" "n-adm" "n-dmz-lb" "n-dmz-db"
elif [[ "${vm}" == "s-lb-bd" ]] ; then
create_if "${vm}" "n-adm" "n-dmz-db"


@ -2,14 +2,14 @@ mkdir C:\gsb\partages
cd C:\gsb\partages
mkdir compta
mkdir compta
mkdir ventes
mkdir public
mkdir commun
mkdir users
cd C:\gsb
mkdir users

windows/mkusr-backup.cmd (new file, 3 lines)

@ -0,0 +1,3 @@
net group gg-backup /ADD
call mkusr uBackup "u-backup" gg-backup
icacls "C:\gsb\partages\public" /Grant:r uBackup:M /T


@ -1,14 +1,22 @@
#!/bin/bash
echo ping interface paserelle r-vp2
ping -c3 172.16.128.254
echo ping r-vp1 interface n-linkv
ping -c3 192.168.1.2
echo ping r-ext interface n-linkv
ping -c3 192.168.1.1
echo ping r-ext interface n-link
ping -c3 192.168.200.253
echo ping r-int interface n-link
ping -c3 192.168.200.254
echo ping r-int interface s-infra
ping -c3 172.16.0.254
echo ping s-infra
ping -c3 172.16.0.1


@ -1,14 +1,22 @@
#!/bin/bash
echo ping s-infra
ping -c3 172.16.0.1
echo ping r-int interface n-infra
ping -c3 172.16.0.254
echo ping r-int interface n-link
ping -c3 192.168.200.254
echo ping r-ext interface n-linkv
ping -c3 192.168.1.1
echo ping r-vp1 interface n-linkv
ping -c3 192.168.1.2
echo ping r-vp2 interface n-ag
ping -c3 172.16.128.254
echo ping s-agence
ping -c3 172.16.128.10


@ -1,12 +1,19 @@
#!/bin/bash
echo ping s-infra
ping -c3 172.16.0.1
echo ping r-ext interface n-link
ping -c3 192.168.200.253
echo ping r-ext interface n-linkv
ping -c3 192.168.1.1
echo ping r-vp1 interface n-link
ping -c3 192.168.1.2
echo ping r-vp2 interface n-ag
ping -c3 172.16.128.254
echo ping s-agence
ping -c3 172.16.128.10


@ -1,14 +1,21 @@
#!/bin/bash
echo ping vers r-int
ping -c3 172.16.0.254
echo ping r-int interface externe
ping -c3 192.168.200.254
echo ping r-ext interface interne
ping -c3 192.168.200.253
echo ping r-ext interface liaison
ping -c3 192.168.1.1
echo ping r-vp1 interface liaison n-linkv
ping -c3 192.168.1.2
ping -c3 172.16.125.254
echo ping r-vp2 interface interface interne
ping -c3 172.16.128.254
ping -c3 172.16.128.10
echo ping s-agence
ping -c3 172.16.128.11