Infrastructure

This section describes our servers, on which the pages are running.

Hardware

The servers are root servers hosted at and sponsored by our hosting sponsor manitu.de.

Our servers are:

| Server | Usage                | Stats |
| ------ | -------------------- | ----- |
| onion  | websites             | munin |
| garlic | cloud, gitlab-runner | munin |

Base System

We are running Debian on our servers. As the filesystem we use ZFS with different subvolumes, each configured for the needs of its data. The system is managed with Ansible and documented in the foodsharing-ansible repository.
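For illustration, the datasets and their per-dataset settings can be inspected with the standard ZFS tooling; the dataset name below is a placeholder, the actual layout is defined in the foodsharing-ansible repository:

```sh
# List all ZFS datasets with their mountpoints
zfs list -o name,mountpoint

# Show the tuning of a single dataset (hypothetical dataset name)
zfs get recordsize,compression,atime tank/var-lib-mysql
```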

Components

Running foodsharing requires several components, which are described in the following sections.

beta and production run on the same database. Both are prod PHP applications, i.e. FS_ENV=prod is set for both beta and production.
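As a minimal sketch, assuming FS_ENV is provided as an environment variable to the PHP processes and console commands (the actual wiring is done in the ansible configuration):

```sh
# Both beta and production run with the prod application environment
FS_ENV=prod php bin/console foodsharing:cronjob
```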

[UML diagram of the components]

Webserver

As webserver we run NGINX. The TLS certificates are managed by acme.sh. The TLS settings (ciphers etc.) are configured server-wide.
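As a rough sketch, certificate issuance and installation with acme.sh typically look like this; the domain and file paths are placeholders, the real setup is defined in the ansible repository:

```sh
# Issue a certificate via webroot validation (hypothetical domain and webroot)
acme.sh --issue -d example.foodsharing.network -w /var/www/acme

# Install it and reload NGINX whenever it is renewed (hypothetical paths)
acme.sh --install-cert -d example.foodsharing.network \
  --key-file /etc/nginx/ssl/example.key \
  --fullchain-file /etc/nginx/ssl/example.pem \
  --reloadcmd "systemctl reload nginx"
```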

There are two site configurations (beta and production). If changes are needed here, check whether they can be applied before a deployment and talk to an admin about it. If this is not possible, create an MR in the ansible repository and let an admin merge both MRs during a short maintenance window.

PHP

Both sites (beta and production) have their own pool running with php-fpm. The PHP version and other settings are defined in Ansible variables.

For the application there are also two different configuration files for beta and production. If changes are needed here, check whether they can be applied before a deployment and talk to an admin about it. If this is not possible, create an MR in the ansible repository and let an admin merge both MRs during a short maintenance window.

The fpm processes run as a restricted user on the server. NGINX communicates with the processes through a Unix socket.
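On a running system this can be verified by listing the php-fpm Unix sockets and the users owning the pool processes; socket paths and pool names depend on the actual configuration:

```sh
# List listening Unix sockets of the php-fpm pools
ss -lx | grep -i php

# Show which user each php-fpm pool process runs as
ps -eo user,cmd | grep 'php-fpm: pool'
```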

Database

We are using MariaDB as the database server. Some settings are defined in foodsharing-ansible. php-fpm communicates with MariaDB via IP (a Unix socket is prepared).
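A quick sketch of checking the TCP connection path; user and database name are placeholders, the real credentials come from the application configuration:

```sh
# Check that MariaDB is listening on the default TCP port
ss -ltn | grep 3306

# Connect the same way php-fpm does, via IP instead of the Unix socket
mysql -h 127.0.0.1 -u foodsharing -p foodsharing
```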

Redis

There is one dedicated Redis instance running for both applications, with a single database. Both the PHP processes and the websocket server communicate with it over the network stack.
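For illustration, the shared instance can be inspected with redis-cli; the address it listens on is defined in the ansible configuration:

```sh
# Ping the shared Redis instance over the network stack
redis-cli -h 127.0.0.1 ping

# Show connected clients (PHP processes and the websocket server)
redis-cli -h 127.0.0.1 info clients
```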

Websocket Server

The websocket server is a Node.js application. It runs on the production code as a systemd service under a separate user. The Node.js version is defined in Ansible variables. If changes are needed here, check whether they can be applied before a deployment.
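A sketch of checking the service on the server; the unit name is hypothetical, the real one is defined in the ansible repository:

```sh
# Status and recent logs of the websocket service (hypothetical unit name)
systemctl status foodsharing-websocket
journalctl -u foodsharing-websocket --since "1 hour ago"

# Node.js version installed on the server
node --version
```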

fs-mailqueuerunner

This service helps us deliver our mails. It runs as a systemd service.
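A hedged sketch, assuming the systemd unit is named after the service:

```sh
# Check the mail queue runner (assuming the unit is named like the service)
systemctl status fs-mailqueuerunner
journalctl -u fs-mailqueuerunner -n 50
```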

cronjob

The command bin/console foodsharing:cronjob runs every 5 minutes. This job is mainly used for fetching mails.
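The actual schedule is managed by ansible; a crontab entry for this interval would look roughly like this (path hypothetical):

```sh
# Every 5 minutes, mainly to fetch mails (hypothetical release path)
*/5 * * * * cd /var/www/production/current && php bin/console foodsharing:cronjob
```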

process-bounce-emails

The command bin/console foodsharing:process-bounce-emails runs every 30 minutes. Bounce mails are fetched and used to mark the affected addresses in the database.
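Again managed by ansible; sketched as a crontab entry with a hypothetical path:

```sh
# Every 30 minutes, fetch bounce mails and flag the addresses
*/30 * * * * cd /var/www/production/current && php bin/console foodsharing:process-bounce-emails
```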

daily tasks

The commands bin/console foodsharing:daily-cronjob and bin/console foodsharing:stats run every night. The cronjob renews the sleeping hats and sends notification mails for empty pickup slots. The stats command recalculates the pickup statistics. In addition, files older than 2 days are deleted from the tmp folder.
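Sketched as crontab entries; the exact times and paths are defined by ansible, the values below are placeholders:

```sh
# Nightly maintenance: renew sleeping hats, mail about empty pickup slots, clean tmp
0 3 * * * cd /var/www/production/current && php bin/console foodsharing:daily-cronjob

# Nightly recalculation of the pickup statistics
30 3 * * * cd /var/www/production/current && php bin/console foodsharing:stats
```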

Deployment

The application is deployed with Deployer from the GitLab CI. The process is described in deploy.php. If you change something here, be very sure about it and talk to an admin beforehand. Whenever a new commit is pushed to the master or production branch, the CI runs and that version gets deployed. If maintenance mode was active, it is no longer active in the newly deployed version.
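As a rough sketch, the CI invokes Deployer roughly like this; the exact stage/host names and tasks are defined in deploy.php and the CI configuration, so treat these as placeholders:

```sh
# Run the deployment defined in deploy.php (hypothetical stage name)
php vendor/bin/dep deploy production

# List the tasks and commands Deployer knows about
php vendor/bin/dep list
```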