Some VPS providers will manage the kernel for you and use an ‘optimized’ kernel. You can often change it to what you want, but sometimes you have to go into the control panel to do so.
My primary ‘backup’, or easy recovery method, is ZFS: I take snapshots frequently via sanoid. I have a mydumper job making backups of my mariadb server, and I use syncoid to send the snapshots to external storage. So most things can be fixed just by copying the files from an older snapshot.
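If it helps, here is a rough sketch of what that kind of setup can look like (the pool/dataset names, hosts and paths are made-up examples, not my actual layout):

```
# /etc/sanoid/sanoid.conf -- snapshot and retention policy for an example dataset
[tank/appdata]
        use_template = production

[template_production]
        frequently = 4
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

```sh
# dump mariadb onto the dataset, then replicate the snapshots to external storage
mydumper --outputdir /tank/appdata/db-dumps --compress
syncoid -r tank/appdata backupuser@backuphost:backuppool/appdata
```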
I also have completely separate backups of my system made with borg to storage I have at borgbase.com, but those only run a couple of times a week and only cover my ‘important’ data, not large things like downloaded video/music/etc. I am thinking about switching borg out for restic though, since restic is also compatible with borgbase.
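The borg side is basically just a scheduled job along these lines (the repo URL is a placeholder, borgbase gives you the real one, and the paths are examples):

```sh
# back up the 'important' stuff to an offsite borg repo and prune old archives
export BORG_REPO='ssh://xxxxxxxx@xxxxxxxx.repo.borgbase.com/./repo'
borg create --compression zstd --stats ::'{hostname}-{now:%Y-%m-%d}' /home /etc /srv/docker
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```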
There is a risk that a newer version of an image will accidentally break things, apply breaking changes, and so on.
Good, frequent, tested backups can be a mitigation for this. If an image breaks, you just restore your data from the backup and pull the older image.
I use klausmeyer/docker-registry-browser, and that recently broke, but it just needed me to provide an additional configuration variable.
I use advplyr/audiobookshelf, which upgraded to a different database engine and schema a couple of months ago. For a small subset of people (including me) the migration to the new database didn’t go well. But I had a backup from 6 hours before the update, so restoring and then running the older image until the fixes were released was easy.
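Rolling back is basically just pinning the image tag in the compose file until the fix lands. Something like this (the tag and volume paths are examples, not a recommendation):

```yaml
services:
  audiobookshelf:
    # pin the last known-good tag instead of :latest while waiting on a fix
    image: ghcr.io/advplyr/audiobookshelf:2.2.23
    volumes:
      - ./config:/config
      - ./metadata:/metadata
```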
Even with the occasional issues I prefer letting watchtower automatically update most of my images for my home. I don’t really want to spend my time manually applying updates when 98% of the time it will be fine. But again, having a reliable and tested backup system is an essential part of why I am comfortable doing this.
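For reference, the watchtower setup itself is tiny. A minimal sketch, assuming you run it from compose (the schedule is just an example):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # clean up old images and check for updates once a night (example schedule)
    command: --cleanup --schedule "0 0 4 * * *"
```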
Do I need to attach a few external HDDs to my Pi,
It would probably be a really good idea to attach some kind of additional storage. The write endurance on microSD cards tends to be relatively low, so depending on your usage, the card might die pretty quickly.
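If you do add a drive, the basic idea is just to mount it and point anything write-heavy at it. A rough sketch, assuming the drive shows up as /dev/sda1 and you want it at /mnt/storage:

```sh
# find the drive and its UUID first
lsblk -f

sudo mkdir -p /mnt/storage
sudo mount /dev/sda1 /mnt/storage

# make it permanent via fstab, by UUID (fill in the real UUID from lsblk)
echo 'UUID=xxxx-xxxx  /mnt/storage  ext4  defaults,noatime  0 2' | sudo tee -a /etc/fstab
```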
Well, enterprise content filters are able to do something like this.
I don’t know of any set of tools that would give you a full setup out of the box, but you would probably need to start with the same kind of setup as a squid+squidGuard content filter. You have to run it in interception mode, which means generating and distributing your own CA so it can decrypt the traffic.
After you get all of that configured, you would swap out squidGuard for some magical tool that does the content editing.
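To give an idea of the shape of it, a very rough and incomplete squid.conf sketch for the interception part (ports, paths and the CA file are assumptions, and the exact directive names vary between squid versions):

```
# intercept HTTPS and re-sign it with your own CA
https_port 3130 intercept ssl-bump \
    cert=/etc/squid/certs/myCA.pem \
    generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

# squidGuard (or whatever does the rewriting) hooks in as a URL rewriter
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
```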
From the perspective of the stuff you are transporting over the VPN tunnel or SSH proxy, there is not much difference. The payload is encrypted and can’t be seen in transit, assuming the underlying VPN or SSH tunnel is secure.
There are functionality differences. With an SSH proxy, you either have to forward specific ports and connect to those, or use an application, such as a browser, that can be pointed at a SOCKS proxy.
There are potential performance differences. The last time I tried using them for large transfers, I found that performance over SOCKS via SSH tended to kind of suck, at least compared with the same transfer over a VPN. It is possible this had nothing to do with SSH and was a problem with the browser I was using at the time, but it might not matter much for your usage.
What could potentially leak is also different with a proxy. Depending on how you configure SSH and on the client implementation, your DNS might be resolved by your original DNS servers or via the tunnel.
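For completeness, the SOCKS-over-SSH setup is just one command, and the DNS behaviour comes down to how the client is configured (host names here are examples):

```sh
# open a SOCKS proxy on local port 1080, tunnelled through the remote host
ssh -D 1080 -N user@remote.example.com

# socks5h:// resolves DNS on the far side of the tunnel;
# plain socks5:// resolves locally, which is where a DNS leak can come from
curl --proxy socks5h://127.0.0.1:1080 https://example.com
```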
What I meant, and perhaps I have a misunderstanding, i
Yes, I understand what you mean, and you don’t seem to be misunderstanding how TLS client certificates function.
But my point was that it is usually the web server that accepts and validates the client certificate. A web server is externally visible, so it is something that can potentially be attacked even if the attacker doesn’t have a valid client certificate.
Why don’t people use client certificates
The difference is that client certificates are usually implemented as part of the web server. If there is an issue with either the configuration or a bug in the web server, the certificate requirement can potentially be bypassed immediately. A VPN, on the other hand, is often a completely separate piece of software operating at the network layer.
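For example, with nginx the whole check lives in the server block, so a config mistake or an nginx bug is all that sits between the Internet and your app (paths and the upstream are placeholders):

```
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/certs/server.crt;
    ssl_certificate_key     /etc/nginx/certs/server.key;

    # client certificate check, enforced by the web server itself
    ssl_client_certificate  /etc/nginx/certs/client-ca.crt;
    ssl_verify_client       on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```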
Another thing: if you run a simple port scan against the Internet, it is easy to find http/https servers. Some VPN protocols, when configured well, are more or less invisible to any kind of port scan. This eliminates a lot of the scanning and probing you get against basically anything that is visible on the Internet.
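WireGuard is a common example of this: it doesn’t answer packets that aren’t from a known peer, so to a scanner the port looks the same as one with nothing behind it (the IP and port below are placeholders):

```sh
# UDP scan of the default WireGuard port on a host that is actually running it
nmap -sU -p 51820 203.0.113.10
# typically comes back as open|filtered, i.e. indistinguishable from no service at all
```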
Not saying client certs don’t have their place. Just not sure I would choose them when a VPN provides stronger protection and is potentially pretty easy to implement in a selfhosted environment.
Is there anything else I can do to harden the server?
Depending on how sensitive your docs are, you could configure the server to only be usable when accessed via a VPN connection.
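As a rough sketch, if the docs are served by nginx you can refuse anything that doesn’t come in over the VPN subnet (nginx and the 10.8.0.0/24 range are assumptions, use your own):

```
server {
    listen 443 ssl;

    # only answer clients on the VPN; everything else gets a 403
    allow 10.8.0.0/24;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```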
I think they are asking how to configure port forwarding on their border router/firewall for incoming SSH connections from elsewhere on the Internet.
Not how to transport other protocols over a working SSH connection.
Do you have any access logs on the server, or can you enable them? Examine your logs, see what the bots are accessing, then block that.
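If it is a typical nginx/apache style access log, something like this gives a quick view of what is being hit the most (the log path is an assumption):

```sh
# most-requested paths, assuming the default combined log format
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
```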
I expect you already have it installed, but it probably would have been better to use PostgreSQL, mariadb or mysql if you wanted to use some tool to run queries.
From a quick look through Google, it can be done using the CLI.
Are you running nmap from the system itself? Depending on the network type you use, docker does some pretty complicated things with NAT, so scanning a machine from itself isn’t always very useful.
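It usually tells you more to run the scan from another machine on the same network, pointed at the docker host’s LAN address (the IP here is a placeholder):

```sh
# full TCP scan of the docker host, run from a separate machine
nmap -p- 192.168.1.50
```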