Compare commits: v0.0.7r-ak ... main (21 commits)
Commits: 45e4401dcc, dc50059f19, ada657401f, b126e7f9e3, f9f3e8da8e, be4f3b9030, c590edb875, 3cd52a230e, 8dd65d65eb, aac0f13134, ea60c89bf1, 272867304a, b53cd8c848, 8ed9ebe6f8, c1fe781ca2, c2c1e8acb7, 5176ad216c, e45118eef1, 959e056c5f, 8d7c5f7cfb, 80295cba99
README.md — 16 changed lines

```diff
@@ -1,6 +1,8 @@
 # gsb2024
 
-2024-01-19 11h45 ps
+* 2024-05-23 16h07 ps
+* 2024-04-12 8h55 ps
+* 2024-01-19 11h45 ps
 
 Environnement et playbooks **ansible** pour le projet **GSB 2024**
@@ -11,8 +13,8 @@ Prérequis :
 * VirtualBox
 * git
 * fichier machines virtuelles **ova** :
-  * **debian-bookworm-gsb-2023c.ova**
-  * **debian-bullseye-gsb-2024a.ova**
+  * **debian-bookworm-gsb-2024b.ova**
+  * **debian-bullseye-gsb-2024b.ova**
 
 ## Les machines
@@ -49,12 +51,12 @@ Il existe un playbook ansible pour chaque machine à installer, nommé comme la
 ## Installation
 
 On utilisera les images de machines virtuelle suivantes :
-* **debian-bookworm-gsb-2023c.ova** (2023-12-18)
-  * Debian Bookworm 12.4 - 2 cartes - 1 Go - Stockage 20 Go
+* **debian-bookworm-gsb-2024b.ova** (2024-05-23)
+  * Debian Bookworm 12.5 - 2 cartes - 1 Go - Stockage 20 Go
 
 et pour **s-fog** :
-* **debian-bullseye-2024a.ova** (2024-01-06)
-  * Debian Bullseye 11.8 - 2 cartes - 1 Go - stockage 20 Go
+* **debian-bullseye-2024b.ova** (2024-04-11)
+  * Debian Bullseye 11.9 - 2 cartes - 1 Go - stockage 20 Go
 
 Les images **.ova** doivent etre stockées dans le répertoire habituel de téléchargement de l'utilisateur courant.
```
firewalld.yml — new file (+7)

```yaml
---
- hosts: localhost
  connection: local
  become: yes

  roles:
    - firewalld
```
```diff
@@ -5,7 +5,11 @@
   name: awx
   groups: sudo
   append: yes
-  shell: /bin/bash
+
+- name: Cration d'un mdp pour user awx
+  user:
+    name: awx
+    password: '$5$1POIEvs/Q.DHI4/6$RT6nl42XkekxTPKA/dktbnCMxL8Rfk8GAK7NxqL9D70'
 
 - name: Get awx key_pub
   get_url:
```
```diff
@@ -7,6 +7,12 @@
   shell: /bin/bash
   generate_ssh_key: yes
+
+#- name: Creation mdp user awx
+#  ansible.builtin.user:
+#    name:
+#    user: awx
+#    password: '$5$1POIEvs/Q.DHI4/6$RT6nl42XkekxTPKA/dktbnCMxL8Rfk8GAK7NxqL9D70'
 
 - name: Copie cle publique dans gsbstore
   copy:
     src: /home/awx/.ssh/id_rsa.pub
```
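The password value in the hunks above is a crypt(3)-style string of the form `$<id>$<salt>$<digest>`, where id `5` means SHA-256 crypt. A minimal sketch of pulling the fields apart; the `mkpasswd -m sha-256` and `openssl passwd -5` generators mentioned in the comment are common ways to produce such a hash, not something the repo itself uses:

```shell
# Split a crypt(3) SHA-256 hash into its fields.
# Such a hash can be generated with e.g. `mkpasswd -m sha-256`
# or `openssl passwd -5` (both are suggestions, not from the repo).
hash='$5$1POIEvs/Q.DHI4/6$RT6nl42XkekxTPKA/dktbnCMxL8Rfk8GAK7NxqL9D70'
id=$(printf '%s' "$hash" | cut -d'$' -f2)      # scheme id: 5 = SHA-256 crypt
salt=$(printf '%s' "$hash" | cut -d'$' -f3)    # per-hash salt
echo "scheme=$id salt=$salt"
```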
roles/firewalld/README.md — new file (+26)

```markdown
# awx role
***
awx role: configures an AWX server with k3s.

## Table of contents
1. [What does the AWX role do?]
2. [Connecting to the AWX server web interface]

**AWX** is the application developed by **RedHat** for running **ansible** playbooks from a full-featured web interface rather than the command line. **AWX** relies on Kubernetes, implemented here with **k3s**.

## What does the AWX role do?
The **awx** role installs and configures an **AWX** server with **k3s**. To do so, the role:
- Installs **k3s**, specifying the listen IP address and interface
- Clones the **awx-on-k3s** **Github** repository
- Deploys the **awx-operator** pod
- Generates a self-signed certificate for the **AWX** server using **OpenSSL**
- Edits the awx.yaml file to set the server hostname consistent with the name used by the certificates
- Deploys the **AWX** server
- Tests that the **AWX** server is reachable.

### Connecting to the AWX server web interface ###
Once the **awx** role has finished, the server's web interface can be reached from a browser.
Make sure your machine can resolve **s-awx.gsb.lan**
- Connect to: **https://s-awx.gsb.lan**
- User: **admin** / Password: **Ansible123!**
```
roles/firewalld/tasks/main.yml — new file (+91)

```yaml
---
- name: Installation de firewalld
  apt:
    state: present
    name:
      - firewalld

- name: affectation de l'interface enp0s3 a la zone external
  ansible.posix.firewalld:
    zone: external
    interface: enp0s3
    permanent: true
    state: enabled

- name: affectation de l'interface enp0s8 a la zone internal
  ansible.posix.firewalld:
    zone: internal
    interface: enp0s8
    permanent: true
    state: enabled

- name: FirewallD rules pour la zone internal
  firewalld:
    zone: internal
    permanent: yes
    immediate: yes
    service: "{{ item }}"
    state: enabled
  with_items:
    - http
    - https
    - dns
    - ssh
    - rdp

- name: FirewallD rules pour la zone external
  firewalld:
    zone: external
    permanent: yes
    immediate: yes
    service: "{{ item }}"
    state: enabled
  with_items:
    - ssh
    - rdp

#- ansible.posix.firewalld:
#    zone: internal
#    service: http
#    permanent: true
#    state: enabled

#- ansible.posix.firewalld:
#    zone: internal
#    service: dns
#    permanent: true
#    state: enabled

#- ansible.posix.firewalld:
#    zone: internal
#    service: ssh
#    permanent: true
#    state: enabled

#- ansible.posix.firewalld:
#    zone: internal
#    service: rdp
#    permanent: true
#    state: enabled

- ansible.posix.firewalld:
    zone: internal
    port: 8080/tcp
    permanent: true
    state: enabled

- ansible.posix.firewalld:
    zone: external
    port: 3389/tcp
    permanent: true
    state: enabled

- ansible.posix.firewalld:
    port_forward:
      - port: 3389
        proto: tcp
        toaddr: "192.168.99.6"
        toport: 3389
    state: enabled
    immediate: yes
```
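The two looped `service` tasks above expand to one firewall rule per zone/service pair. As a sketch, the loop below only prints the roughly equivalent `firewall-cmd` invocations instead of touching a live firewall:

```shell
# Expand the two with_items loops into their firewall-cmd equivalents.
# Sketch only: the commands are printed, not executed.
expand_rules() {
  for zone_svcs in "internal:http https dns ssh rdp" "external:ssh rdp"; do
    zone=${zone_svcs%%:*}
    for svc in ${zone_svcs#*:}; do
      echo "firewall-cmd --zone=$zone --permanent --add-service=$svc"
    done
  done
}
expand_rules
```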
roles/lb-front-ssl/README.md — new file (+22)

```markdown
# lb-front role
***
lb-front role: load balancing of the WordPress web servers with HAProxy

## Table of contents
1. What does the lb-front role do?
2. Server installation order.

## What does the lb-front role do?

The lb-front role installs `haproxy` for load balancing and configures the `/etc/haproxy/haproxy.cfg` file.

The file uses Round-Robin, an algorithm that balances the number of requests between s-lb-web1 and s-lb-web2.

The website is reachable at <http://s-lb.gsb.adm>.

## Server installation order.
1. The s-lb server with haproxy, which "initializes" the subnets in the DMZ.
2. The s-lb-bd server, which holds the WordPress database used by the web servers.
3. The s-nas server, which stores the WordPress configuration and shares it with the web servers over NFS. It also uses the database stored on s-lb-bd.
4. The s-web1 and s-web2 servers, which install Apache2 and PHP and serve the WordPress site.
```
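The Round-Robin behaviour described above can be sketched in a few lines of shell: successive requests are handed to each backend in turn. The server names come from the role; the dispatch loop itself is purely illustrative:

```shell
# Illustrative round-robin dispatch over the two backends.
rr_pick() {  # $1 = request number, counted from 0
  if [ $(( $1 % 2 )) -eq 0 ]; then
    echo s-lb-web1
  else
    echo s-lb-web2
  fi
}
for req in 0 1 2 3; do
  echo "request $req -> $(rr_pick "$req")"
done
```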
roles/lb-front-ssl/files/haproxy.cfg — new file (+55)

```
global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# Default ciphers to use on SSL-enabled listening sockets.
	# For more information, see ciphers(1SSL). This list is from:
	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
	timeout connect 5000
	timeout client  50000
	timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

frontend proxypublic
	bind 192.168.100.10:80
	default_backend fermeweb

backend fermeweb
	balance roundrobin
	option httpclose
	option httpchk HEAD / HTTP/1.0
	server s-lb-web1 192.168.101.1:80 check
	server s-lb-web2 192.168.101.2:80 check

listen stats
	bind *:8080
	stats enable
	stats uri /haproxy
	stats auth admin:admin
```
roles/lb-front-ssl/handlers/main.yml — new file (+3)

```yaml
---
- name: restart haproxy
  service: name=haproxy state=restarted
```
roles/lb-front-ssl/tasks/main.yml — new file (+75)

```yaml
- name: install haproxy
  apt:
    name: haproxy
    state: present

- name: Creer le repertoire du certificat
  file:
    path: /etc/haproxy/crt
    state: directory
    mode: '0755'

- name: Creer le repertoire de la cle privee
  file:
    path: /etc/haproxy/crt/private
    state: directory
    mode: '0755'

- name: Generer une cle privee avec les valeurs par defaut (4096 bits, RSA)
  openssl_privatekey:
    path: /etc/haproxy/crt/private/haproxy.pem.key
    size: 4096
    type: RSA
    state: present

- name: creer un certificat auto-signé
  openssl_certificate:
    path: /etc/haproxy/crt/private/haproxy.pem
    privatekey_path: /etc/haproxy/crt/private/haproxy.pem.key
    provider: selfsigned
    state: present

- name: s'assurer que le certificat a les bonnes permissions
  file:
    path: /etc/haproxy/crt/private/haproxy.pem
    owner: root
    group: haproxy
    mode: '0640'

- name: parametre global
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    block: |
      global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        ssl-server-verify none

- name: parametre backend et frontend
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    block: |
      frontend proxypublic
        bind 192.168.100.10:80
        bind 192.168.100.10:443 ssl crt /etc/haproxy/crt/private/haproxy.pem
        http-request redirect scheme https unless { ssl_fc }
        default_backend fermeweb

      backend fermeweb
        balance roundrobin
        option httpclose
        option httpchk HEAD / HTTP/1.0
        server s-lb-web1 192.168.101.1:80 check
        server s-lb-web2 192.168.101.2:80 check

- name: redemarre haproxy
  service:
    name: haproxy
    # state: restarted
    enabled: yes
```
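For reference, the key and certificate produced by the `openssl_privatekey` and `openssl_certificate` tasks can be approximated with a single `openssl` command. The `/tmp` paths, the 365-day validity and the CN below are assumptions for illustration; the role itself relies on the modules' defaults:

```shell
# Hypothetical one-shot equivalent of the two tasks:
# a 4096-bit RSA key plus a self-signed certificate.
# Paths, validity and CN are assumptions, not the role's values.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout /tmp/haproxy.pem.key -out /tmp/haproxy.pem \
  -days 365 -subj "/CN=s-lb.gsb.adm" 2>/dev/null
openssl x509 -in /tmp/haproxy.pem -noout -subject
```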
```diff
@@ -7,11 +7,12 @@
   roles:
     - base
    - s-ssh
+    #- zabbix-cli
    - dnsmasq
    - squid
    - ssh-backup-key-gen
+    # awx-user
#    - local-store
-    - zabbix-cli
##    - syslog-cli
    - post
#    - goss
```
```diff
@@ -6,6 +6,7 @@
    - base
    - goss
    - ssh-cli
+    - awx-user-cli
#    - awx
#    - zabbix-cli
    - journald-snd
```
```diff
@@ -6,12 +6,13 @@
 
   roles:
    - base
-    - zabbix-cli
+    #- zabbix-cli
    - goss
    - dns-master
    - webautoconf
#    - elk-filebeat-cli
-#    - journald-snd
+    - journald-snd
    - ssh-cli
+    #- awx-user-cli
    - post
```
s-lb.yml — 3 changed lines

```diff
@@ -5,7 +5,8 @@
   roles:
    - base
    - goss
-    - lb-front
+    #- lb-front
+    - lb-front-ssl
    #- zabbix-cli
    - ssh-cli
    - post
```
scripts/mkvm — 20 changed lines

```diff
@@ -2,8 +2,8 @@
 
 mkvmrelease="v1.3.3"
 
-ovarelease="2023c"
+ovarelease="2024b"
-ovafogrelease="2024a"
+ovafogrelease="2024b"
 #ovafile="$HOME/Téléchargements/debian-bullseye-gsb-${ovarelease}.ova"
 ovafile="$HOME/Téléchargements/debian-bookworm-gsb-${ovarelease}.ova"
 ovafilefog="$HOME/Téléchargements/debian-bullseye-gsb-${ovafogrelease}.ova"
@@ -17,6 +17,11 @@ vmMem[s-nas]=512
 vmMem[s-infra]=768
 vmMem[s-backup]=768
 vmMem[s-elk]=3072
+vmMem[s-awx]=4096
+
+declare -A vmCpus
+vmCpus[s-peertube]=2
+vmCpus[s-awx]=2
 
 usage () {
 echo "$0 - version ${mkvmrelease} - Ova version ${ovarelease}"
@@ -40,12 +45,15 @@ create_vm () {
   if [[ "${deletemode}" = 1 ]] ; then
     VBoxManage unregistervm --delete "${nom}"
   fi
-  vboxmanage import "${nomova}" --vsys 0 --vmname "${nom}"
+  mem=1024
+  cpus=1
   if [[ -v vmMem[${nom}] ]]; then
     mem=${vmMem[${nom}]}
-    echo "machine ${nom}: ${mem} ..."
-    VBoxManage modifyvm "${nom}" --memory "${mem}"
   fi
+  if [[ -v vmCpus[${nom}] ]]; then
+    cpus=${vmCpus[${nom}]}
+  fi
+  vboxmanage import "${nomova}" --vsys 0 --vmname "${nom}" --memory "${mem}" --cpus "${cpus}"
 }
 
 setif () {
@@ -145,6 +153,8 @@ elif [[ "${vm}" == "r-vp2" ]] ; then
   ./addint.r-vp2
 elif [[ "${vm}" == "s-agence" ]] ; then
   create_if "${vm}" "n-adm" "n-agence"
+elif [[ "${vm}" == "s-awx" ]] ; then
+  create_if "${vm}" "n-adm" "n-infra"
 else
   echo "$0 : vm ${vm} non prevue "
   exit 2
```
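The rewritten `create_vm` above now resolves memory and cpu count before a single `vboxmanage import`: 1024 MB and 1 cpu by default, overridden when the VM has an entry in `vmMem`/`vmCpus`. The lookup logic can be sketched POSIX-style (only a few table entries are reproduced here for illustration):

```shell
# Per-VM resources with fallbacks; subset of the real vmMem/vmCpus tables.
vm_mem() {
  case "$1" in
    s-elk) echo 3072 ;;
    s-awx) echo 4096 ;;
    *)     echo 1024 ;;   # default used by create_vm
  esac
}
vm_cpus() {
  case "$1" in
    s-awx|s-peertube) echo 2 ;;
    *)                echo 1 ;;
  esac
}
mem=$(vm_mem s-awx)
cpus=$(vm_cpus s-awx)
# The import then receives both values in one call:
echo "vboxmanage import ... --memory $mem --cpus $cpus"
```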
```diff
@@ -4,8 +4,8 @@
 #mkvm pour toutes les vms
 
 $mkvmrelease="v1.3.1"
-$ovarelease="2023c"
+$ovarelease="2024b"
-$ovafogrelease="2024a"
+$ovafogrelease="2024b"
 $ovafile="$HOME\Downloads\debian-bookworm-gsb-${ovarelease}.ova"
 $ovafilefog="$HOME\Downloads\debian-bullseye-gsb-${ovafogrelease}.ova"
 $vboxmanage="C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"
@@ -18,14 +18,20 @@ $vmMem = @{
     "s-infra" = "768"
     "s-backup" = "768"
     "s-elk" = "3072"
+    "s-awx" = "4096"
+    "s-peertube" = "4096"
 }
+
+$vmCpus = @{
+    "s-awx" = "2"
+    "s-peertube" = "2"
+}
 #FONCTIONS
 
 function create_vm{ param([string]$nomvm)
 
-    if ($vmMem.ContainsKey($nomvm)) {
+    if (($vmMem.ContainsKey($nomvm)) -and ($vmCpus.ContainsKey($nomvm))) {
-        & "$vboxmanage" import "$ovafile" --vsys 0 --vmname "$nomvm" --memory $vmMem[$nomvm]
+        & "$vboxmanage" import "$ovafile" --vsys 0 --vmname "$nomvm" --memory $vmMem[$nomvm] --cpus $vmCpus[$nomvm]
         Write-Host "Machine $nomvm importée"
     } else {
         #Importation depuis l'ova
@@ -133,6 +139,22 @@ elseif ($args[0] -eq "s-kea2") {
     create_if $args[0] "int" 3 "n-user"
 }
 
+elseif ($args[0] -eq "s-awx") {
+
+    create_vm $args[0]
+    create_if $args[0] "int" 1 "n-adm"
+    create_if $args[0] "int" 2 "n-infra"
+}
+
+elseif ($args[0] -eq "s-peertube") {
+
+    create_vm $args[0]
+    create_if $args[0] "int" 1 "n-adm"
+    create_if $args[0] "int" 2 "n-infra"
+}
+
 elseif ($args[0] -eq "s-agence") {
 
     create_vm $args[0]
```