Seconding this answer, I started typing up more or less the exact same response before scrolling down. Network will not be your bottleneck here. You are far more likely to have a storage or CPU bottleneck, but the only way to know is to try it.
There are plenty of rules I apply myself, but these are the five major things I emphasize to everyone.
Don’t overcomplicate things. You don’t need Proxmox on every machine “just in case”. Sometimes a system can be single-purpose. Plain Debian is often good enough; if you need a single VM later, you can do that on any distro. This goes for adding services, too. Docker makes it very easy to spin things up to play with, but you should also know when to put things down. Don’t get carried away; you’ll just make more work for yourself and end up slacking off or giving up.
Don’t put all your eggs in one basket if you can avoid it. For instance, something like Home Assistant should run on its own system. If you rely heavily on your NAS, your NAS should be a discrete system. You will eventually break something and not have the time or energy to fix it immediately. Anything you truly rely on should be resilient so that your tinkering doesn’t leave you high and dry.
Be careful who you let in. First, anybody with access to your systems is a potential liability to your security, so choose your tenants carefully. Second, if others come to rely on your systems, that drastically reduces your window to tinker unless you have a dedicated testbench. Sharing your projects with others is fun and good experience, but it must be done cautiously and with properly set expectations. You don’t want to be on the receiving end of an angry phone call because you took Nextcloud down while playing around.
Document when it’s fresh in your mind, not later. In fact, most of the time you should write the docs before you do the work, then make minor adjustments if things don’t go according to plan. And update the docs when things change. What you think is redundant info today might save your ass tomorrow.
Don’t rely on anything you don’t understand. If it works, and you don’t know how or why it works on at least a basic level, don’t simply accept it and move on. Figure it out. Don’t just copy and paste, don’t just buy a solution. If you don’t know it, you don’t control it.
You have things set up correctly. Normally, Docker creates a virtual network interface for every container. When you use `network_mode: "service:container"`, you are making those two containers share a single network stack instead. From a network perspective, it is effectively the same as running two pieces of software on one computer, just virtualized. All the other software in that stack set up this way will piggyback on Gluetun’s network interface instead of creating its own, and the containers all see each other as being on the same `localhost`, as if you ran multiple programs directly on your computer.
You also opened ports correctly. Since everything is using Gluetun’s network connection, you have to open ports on Gluetun itself. Opening ports in Docker carries no risk of leaking your connection or breaking your VPN, as long as you don’t forward those ports from your router (which you said you can’t do anyway). Ports are for letting traffic in, not out: by opening one, you are telling your computer to “listen” for anybody trying to talk to it on that port; otherwise, it would just ignore them.
By default, Gluetun does something VPN software calls “split tunneling”: traffic goes out through the VPN only if it’s bound for an external address. Any traffic bound for an address in the private ranges (192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8) will not use the VPN. This is by design, specifically so you can do what you just did: access stuff that’s intended to be reached locally while still sending all external traffic through the VPN.
The only thing you don’t really need is the `depends_on`, because `network_mode: "service:container"` already acts like `depends_on`, so it’s redundant, but it’s not hurting anything to explicitly call it out either.
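Putting it all together, here’s a minimal sketch of that layout. I’m assuming the `qmcgaw/gluetun` and `lscr.io/linuxserver/qbittorrent` images and qBittorrent’s default web UI port; your provider settings will differ, so treat the environment section as a placeholder and check Gluetun’s wiki:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                      # Gluetun needs this to manage the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # hypothetical provider; yours will differ
    ports:
      - "8080:8080"                    # qBittorrent's web UI, opened on Gluetun since it owns the network stack

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # piggyback on Gluetun's network instead of getting our own
    # no depends_on needed: network_mode already makes Gluetun start first
```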
As /u/SpacezCowboy said, for peace of mind you can test your torrent client with https://ipleak.net/. You can also test a regular connection from inside the container by running `docker exec -t qbittorrent curl icanhazip.com`, which should print the VPN endpoint’s IP rather than your ISP’s.
There’s nothing different about it. Just expose the ports using Docker, then point nginx at them like any other application. If it’s running on the same server, you can point nginx at `localhost`.
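If you’re using plain nginx, a minimal proxy block looks something like this; the hostname and port are hypothetical, so swap in whatever you exposed with Docker:

```nginx
server {
    listen 80;
    server_name app.example.com;   # hypothetical hostname

    location / {
        # The port Docker publishes on this same server
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```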
For Nginx Proxy Manager, you can define custom configs easily, but for most usage you should be able to set up each proxy host with a couple of clicks in the GUI. Let it generate its own certs with the built-in Let’s Encrypt integration, though if you really want, you can import certs manually through the GUI as well.
Alright, I guess until a better FOSS solution comes along, I’m getting a Kobo. Thanks for sharing this; I had no idea it supported that.