In the old days, we'd run multiple services on our biggest (and sometimes only) server (NIS, NFS, FTP, web, BIND, Sendmail, printing...). We would constantly have to install multiple versions of libraries for different packages. It was a major pain to manage library paths and update services. One of the solutions was to put many of the services in their own chroot jails (this was also good for security).

Fast forward to the PC revolution and Linux/BSD distros. It was now possible to separate services onto separate servers that could be managed independently. Now we had dozens or hundreds of servers to manage. There were fewer single points of failure, but there was too much hardware to manage.

Fast forward to the virtual machine revolution. Now we could virtualize most of those servers onto a few pieces of hardware. The VMs could be easily monitored, snapshotted, backed up, and migrated. Then we realized we were running 50+ separate Linux kernels on a single machine, wasting CPU cycles, disk space, electricity, air conditioning, and backup software licenses.

Fast forward to the container revolution. Now we are back where we started, with a few big servers running several services, but improvements to the Linux kernel (cgroups, namespaces, etc.) make it possible to isolate not only the filesystem, as chroot did, but also the process table, CPU, memory, and user space. We can now share hardware and a single kernel among many services far more efficiently, and the services are as modular as VMs: they can be snapshotted and migrated with ease. That's why I use docker: I'm too old and tired to do everything manually if I don't have to.

The main reason I think most redditors use docker, though, is that they can easily install large, complex services like Nextcloud without understanding or caring how they work. That's also why they seem to have so much trouble maintaining and upgrading them.

For me, the server I use has really only one app installed natively: docker. On this server, though, I'm running a bunch of apps (Nextcloud, Joplin, PhotoPrism, etc.). Why do I do this?

First, I'm not tied to my specific OS or hardware. If I get new hardware, I just plug in my HDD, install docker, run "docker compose up", and I'm back where I started. Right now I'm on OS X, but when I switch to Linux, I'll use exactly the same scripts.

Secondly, different software sometimes changes the host OS during a native install, or installs conflicting versions of packages. Running everything in its own container means that whatever is on the "machine" for each container is catered specifically to that one app, and the app can do whatever it wants in that space without affecting my other apps. It can even crash and burn and destroy everything, and my host OS is fine, and so are my other apps.

I can also update each app independently whenever I want, and if I'm not happy, I can easily undo it and run the locally cached version of the previous image. The rollback runs exactly the same image as before, with all its dependencies. It's basically running a snapshot in time, so I feel much more comfortable exploring and running multiple versions, "trying before I commit". In other words, I'm more comfortable with docker containers than running anything natively. I've been using docker for years, so if something is not working, it's much easier for me to figure out why.

To your point of a server being used only for Nextcloud: where does that come from? If I had a super beefy desktop with crazy specs and I wanted to run NC, I shouldn't run anything else on it? That seems like a waste of hardware. Modern computers can and should be able to do multiple things. I use a Raspberry Pi, run 12 services on it, and it handles it all fine.
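The "plug in the HDD, install docker, `docker compose up`" workflow described above implies one compose file with all state bind-mounted onto the portable disk. A minimal sketch, assuming hypothetical paths, ports, and tags (the `nextcloud` and `photoprism/photoprism` images are real Docker Hub images, but the exact tags and mount points here are made up):

```yaml
# docker-compose.yml -- the host needs nothing installed besides Docker.
services:
  nextcloud:
    # Pinning an exact tag (instead of 'latest') keeps the previous image
    # cached locally, so a rollback is just editing this line back and
    # re-running 'docker compose up -d'.
    image: nextcloud:28-apache              # hypothetical tag
    ports:
      - "8080:80"
    volumes:
      - /mnt/hdd/nextcloud:/var/www/html    # all state lives on the HDD
    restart: unless-stopped

  photoprism:
    image: photoprism/photoprism:231128     # hypothetical tag
    ports:
      - "2342:2342"
    volumes:
      - /mnt/hdd/photos:/photoprism/originals
    restart: unless-stopped
```

On new hardware, mounting the HDD at the same path, installing Docker, and running `docker compose up -d` recreates every service with identical dependencies; each service can also be updated or rolled back independently with `docker compose up -d <name>` after changing only its tag.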
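The cgroups point above is what makes sharing one kernel among many services safe: each container can be given a hard CPU and memory ceiling, so a runaway app cannot starve its neighbors. A sketch using Compose's resource-limit keys, with purely illustrative numbers and an assumed image tag:

```yaml
# Compose fragment: per-service limits, enforced by the kernel via cgroups.
services:
  nextcloud:
    image: nextcloud:28-apache   # hypothetical tag
    deploy:
      resources:
        limits:
          cpus: "1.5"            # at most 1.5 CPU cores
          memory: 512M           # exceeding this OOM-kills only this container
```

The limits apply to the container alone; the host and the other containers keep running, which is the "crash and burn without affecting my other apps" property described above.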