Self-Hosted Cloud Environment (Build)

There are many ‘cloud’ services floating around these days: everything from groupware (email, contacts, calendaring, tasks) to file sharing, pastebins to search engines. A large majority of these services really want to know who you are: email address, real name, government-issued ID. Do you really need to submit to all of these rules, regulations and privacy intrusions?

The answer of course is “no”, and if you found this page then you might be looking for a ‘cloud’ service replacement. Below I will explain the steps needed to replace some or all of these providers with your own setup, using free (and usually open-source) software.

Let’s begin!


The first step might be the hardest one. Let’s consider the following list and what will be possible with the options available to you.

  1. A computer that can remain on all of the time. In your home, on your internet connection.
    • Requires stable power (a UPS/battery backup is required).
    • Runs twenty-four hours a day; the more powerful the machine, the more power it will use.
    • Requires a stable internet connection.
    • Future steps might still work if this is a NAS device, say a Synology or QNAP, as long as it can run Docker images.
    • Running an email server will likely not be possible on a home internet connection, as ISPs (rightfully) block incoming/outgoing mail on SMTP ports.
    • Some ISPs also block low incoming ports such as 21, 22, 80 and 443 (FTP, SSH, HTTP, HTTPS).
  2. Renting a small VPS (Virtual Private Server) from a hosting company.
    • Most come with a static IP address (Best for running email servers).
    • Fewer, but still a good number of, VPS providers will allow you to control the Reverse DNS (PTR) record. This is a must for reliable, spam-free mail delivery.
    • Very cheap.
    • You should be able to choose a modern, up-to-date Linux distribution to run your base Docker install.
  3. Renting a hardware (physical) server from a hosting company.
    • Pricing can range from the low end to much higher depending on your hardware requirements. Usually not as cheap as a VPS.
    • Choose your own Operating System to install on the hardware.
    • All hardware hosting services I have seen offer multiple IP addresses cheaply and allow control of all Reverse DNS entries per IP address.
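If you are weighing the home-server option, you can get a rough idea of whether your ISP filters a port with a quick outbound TCP test. This is only a sketch: the hosts and ports below are examples, and a “blocked-or-closed” result can also mean the remote service simply isn’t listening.

```shell
#!/bin/bash
# Rough outbound-port test. "blocked-or-closed" can mean your ISP filters
# the port OR the remote host simply is not listening on it.
check_port() {
    # /dev/tcp is a bash feature; give the connection 5 seconds.
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "blocked-or-closed"
    fi
}

# Example: many residential ISPs filter outbound SMTP (port 25).
check_port smtp.gmail.com 25
check_port example.com 443
```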

If you choose to host a small server at home, that is a good place to start if you already have the hardware. You can easily do all your testing locally, then move to a VPS or hardware hosted server in the future.

I won’t list or go into which provider is the best, which allows what content and so on. I will leave the searching to you so you can make your best informed decision. I will say, however, that you need a provider with which you can do the following.

  1. Choose your Operating System
    • You are going to want a stable foundation OS to run Docker on. At this time I usually pick a plain Debian 11 install as my base OS.
    • You will require root access to the Operating System. This should not be an issue for most VPS providers or hardware hosting.
  2. Public IP Addresses
    • With a VPS, or an Operating System installed directly on hardware, and using Docker, you can get away with a single usable IP address on the system. If you choose a more advanced setup with something like VMware ESXi (vSphere) loaded, you might need more than one IP on the host, as the Hypervisor will need one as well.
  3. Ability to set Reverse Hostnames on the IP address(es)
    • The ability to set the Reverse Hostname (PTR record) on your IP address(es) gives you a much more reliable, spam-free mail delivery setup. For example (the hostnames and IP here are placeholders): if your incoming MX record points to mail.example.com, and that record resolves to 203.0.113.10, but a reverse lookup of 203.0.113.10 returns something like vps-8734.provider.example, that’s not a good sign for a low spam score. The forward and reverse records should match.
  4. Able to install Docker.
    • This should not be an issue a majority of the time. If a provider will not allow you to install software on your VPS, then you need a different provider.
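Point 3 above can be checked from any Linux shell with a quick forward-versus-reverse lookup. The hostname mail.example.com is a placeholder for your real MX hostname, and ‘dig’ comes from Debian’s dnsutils package.

```shell
#!/bin/bash
# Compare a mail host's forward (A) record with the PTR record of its IP.
# mail.example.com is a placeholder -- substitute your real MX hostname.
MXHOST="mail.example.com"

# First IPv4 address the hostname resolves to.
IP=$(getent ahostsv4 "$MXHOST" | awk 'NR==1 {print $1}')

if [ -n "$IP" ]; then
    PTR=""
    # Reverse (PTR) lookup, if dig is available.
    command -v dig >/dev/null && PTR=$(dig +short -x "$IP")
    echo "forward: $MXHOST -> $IP"
    echo "reverse: $IP -> ${PTR:-<no PTR record>}"
else
    echo "could not resolve $MXHOST"
fi
```

For clean mail delivery you want the reverse lookup to hand back the same hostname your MX record points at.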

As far as requirements go, you probably want at least 2GB of memory, 2 CPU cores (vCPUs), and around 20GB of storage space. These requirements may be lower or higher based on what you can afford and plan to run.

(In the following I am going to follow the basic setup on a VPS, where I have root access and a single IP address as well as having Debian 11 installed for me by the provider.)

So you found your provider, you get a single IP address for free with the VPS, you’re paying an amount you’re comfortable with, and you have read through the reviews to see what people are generally saying about the service. You have signed up, and were emailed or somehow given a few details.

  1. Public IP address of the server.
  2. SSH access to the server.
    • Your username was chosen or happens to be just ‘root’.
    • A password and/or a certificate file.
    • Some instructions on how to SSH into your new server.

Go ahead and SSH into your new VPS, and log in as the user you were provided. If the user is not ‘root’ you will need to ‘sudo’ into the root user now.


username@vps-8734:~$ sudo -i
[sudo] password for username: ...

Let’s update the system, in case the provider is using a slightly older install image. Note that ‘apt-get update’ only refreshes the package lists; ‘apt-get upgrade’ actually installs the updates.

root@vps-8734:~# apt-get update && apt-get upgrade -y

You should not have to reboot, but if you pulled a good amount of packages you can do so if you wish. If you do, be sure to reconnect to the SSH session and ‘sudo’ back to root using the above commands.

Let’s now install some dependencies for Docker CE.

root@vps-8734:~# apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common

We now pull the GPG public keys for the Docker CE repositories.

root@vps-8734:~# curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

We can now add the Docker CE repository to the local system repository list file.

root@vps-8734:~# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

Next, update the repository and install Docker CE with the following two commands.

root@vps-8734:~# apt-get update -y
root@vps-8734:~# apt-get install docker-ce docker-ce-cli -y

Once Docker CE is installed, verify the Docker installation using the following command.

root@vps-8734:~# docker version

You should see something similar to the following:

Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:48 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:56 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Now let’s install another piece of software associated with Docker: Docker Compose.

root@vps-8734:~# wget https://github.com/docker/compose/releases/download/v2.2.2/docker-compose-linux-x86_64

Copy the downloaded binary to the system path, set execute permissions on the binary and then test the command.

root@vps-8734:~# cp docker-compose-linux-x86_64 /usr/local/bin/docker-compose
root@vps-8734:~# chmod +x /usr/local/bin/docker-compose
root@vps-8734:~# docker-compose --version

You should have output again, similar to the following.

Docker Compose version v2.2.2

Now you have a base, bare minimum Docker server running on your VPS. You really can’t do much yet but the foundations have been laid.
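If you want to confirm Compose works end to end, a throwaway compose file is enough. The file below is a minimal sketch (the service name ‘hello’ is arbitrary); save it as docker-compose.yml and run ‘docker-compose up’ in the same directory.

```yaml
# docker-compose.yml -- minimal smoke test for the new install.
version: "3.8"
services:
  hello:
    image: hello-world   # tiny official image that prints a message and exits
```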


Now, because I do enjoy a good web interface to manage my devices, I am going to continue this post with the steps to install a piece of software that will help manage the Docker containers this server will host in future articles.

Let’s install Portainer, which will assist us in managing, adding, and generally administering the Docker containers this server will be running.

root@vps-8734:~# docker volume create portainer_data

This next command will pull (download) the Portainer image from Docker Hub (a site that hosts many of the Docker images you will use later) and map a network port (9000) on your host (the VPS operating system) to the port used by the program running inside the Portainer container.

root@vps-8734:~# docker run -d -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

If everything went smoothly you should be able to access your Portainer installation by visiting the IP address of your VPS in your browser and providing the port Portainer is running on (9000). Because this is an HTTP connection, you might need to use a Private Browser window or accept a message in your browser that this connection is not secure.
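You can also check from the command line that something is answering on port 9000. Here 203.0.113.10 is a placeholder for your VPS address; the command prints the HTTP status code, or 000 if nothing responded.

```shell
# Probe Portainer's port; expect a 2xx/3xx code once it is running.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 http://203.0.113.10:9000/
```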


Once you’re connected in the browser, you should be able to set up your username and password. Be sure to pick a very strong password, as at the moment your Portainer install is available to the entire world. If your VPS provider offers a basic firewall, you should use it to secure port 9000 from everyone but a handful of addresses for you to manage it from.

Once the username and password are chosen, you can choose the type of Docker environment to manage; you will want to choose “Local” here.


Great, so we have Portainer running on port 9000 and it has a password, but I suggest we secure it with a little more than just a password. Let’s get a firewall installed and allow some ports.

Note that if your VPS host or service provider offers a firewall managed through a web interface of sorts, then you do not need to configure a firewall on the machine itself. You should use their administrative panels to secure your SSH and Portainer ports.

Please be extremely careful and follow the commands exactly, as you could lock yourself out of your SSH session and any future sessions, possibly resulting in having to reinstall the operating system on your VPS.

Run the following commands, being very careful to allow SSH before starting the firewall for the first time, as UFW will block all incoming connections by default.

root@vps-8734:~# apt install ufw
root@vps-8734:~# ufw status

The first command installed UFW, the second command will show the current status, and it should read as inactive.

We need to allow at minimum SSH through the firewall.

root@vps-8734:~# ufw app list

You should get a list; we are looking for SSH or OpenSSH. Once you have found one of these services, we need to enable it. Be sure to replace “SSH” with “OpenSSH” if that is in your list and “SSH” is not.

root@vps-8734:~# ufw allow "SSH"

If you want to lock down SSH even more, you can specify which IP address is allowed to connect to your SSH server. Be very careful with this: if you do not have a static IP you could lock yourself out of your own server. This is not necessary, and if you do not understand it, just ignore it for now. (203.0.113.50 below is a placeholder; replace it with your own static IP address.)

root@vps-8734:~# ufw allow from 203.0.113.50 proto tcp to any port 22

We can now allow Portainer access through our firewall. If you do not have a static IP address at home, just allow port 9000 through the firewall.

root@vps-8734:~# ufw allow 9000

Again, if you are more advanced and wish to lock down access to Portainer to a specific IP address, you can do so just as with SSH above (again, replace 203.0.113.50 with your own static IP).

root@vps-8734:~# ufw allow from 203.0.113.50 proto tcp to any port 9000

We can now enable the firewall by issuing the following command.

root@vps-8734:~# ufw enable
root@vps-8734:~# ufw status verbose

The first command will enable the firewall, the second will give us another status. This time it should say Active and give us a list of services that are allowed through the firewall.
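For reference, on this setup the verbose status looks roughly like the following. Treat this as an illustration, not exact output: your rule list will mirror whatever you allowed, and IPv6 entries may appear as well.

```
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
SSH                        ALLOW IN    Anywhere
9000                       ALLOW IN    Anywhere
```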

Now that we have a functioning Docker server with easy web management, I will be adding future posts about which software you can start loading on your docker server to replace the ‘cloud’ providers mentioned at the top of this article.

In our next post we will set up a reverse proxy to handle all of our incoming web traffic and direct it to the correct Docker instances.
