Linux
This is more a story of my history, and how I ended up using what I use.
My early formative computing came from BBC Micros and other Acorn equipment, including their wonderful RISC machines (birthplace of the ARM chip series). I would also experience the later MS-DOS era alongside Windows 2 and, more commonly, Windows 3 and 3.11.
Eventually I would meet Slackware during one of its very earliest releases. I was very fond of the RISC OS platform, in its time ahead of or at least equivalent to basic Windows systems; it was a delightfully expandable OS in which I honed my assembly and wrote network remote desktop / control tools. Linux, however, was different: the first platform I'd really met that had its own enforcement of logons and users; a far cry from the world of even Windows For Workgroups, with its network-only authentication and no local security.
Early Bumblings
Slackware would eventually give way to RedHat. I obtained a 4-CD version of RedHat containing all the binaries and sources, which in an era of limited internet access was a useful resource. Updates were available online for some years.
Eventually RedHat would make what I thought was a poor choice. The early Linux era was marked by everything being charged for: compilers and the usual software development tools all came at a cost. This would change over the following decade, presumably as platform manufacturers realised it was more profitable in the long term if development was free, and that they'd see better returns from a well fleshed-out product suite made by third parties. Against this backdrop RedHat did the opposite and decided to start charging for software updates; something no other distribution did at the time, and at worst future platforms would tend to split supported / community releases rather than outright limiting their entire lineup.
This naturally led to the end of RedHat for my personal use, and I'd never go near their products again, or recommend them. I'm not that familiar with how RedHat turned out in the end or what their offerings are; I simply never consider their stuff when looking for platforms.
The Ubuntu period
After abandoning RedHat I moved through a few things, including OpenSUSE and Debian, which never had the software support I needed, before finally settling on the common and familiar Ubuntu. I would end up using Ubuntu for several years before I finally changed course once more…
Ultimately what put me off was the 6 month release cycle. After one update set I started to get some odd behaviour in one of my applications, and while this was ultimately my fault, it would have been much easier to diagnose if I had been patching things regularly rather than taking a 600+ package refresh when the 6 month cycle came around.
I guess I understand the philosophy: for most business needs you'd pick a Long Term Support version, use it for 4 years or whatever its lifespan is, and then make a project of doing the upgrade. But I wanted systems I could commission with an indeterminate future, probably many years, without massive change sets to deploy and worry about regularly, so that my change logs would have smaller things to review if something happened…
The first Gentoo
I found Gentoo: basically “build it yourself”. This meant new updates were available as they came out; you'd get updates, do builds and be done with it. Though Gentoo assumes that every node builds its own updates, it's quite possible to build binary packages on one host and install them on others with local compilation disabled (though removing the compiler itself is difficult).
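A minimal sketch of how Portage supports this (the binhost URL is a placeholder for wherever the packages get served from):

    # build host, /etc/portage/make.conf: emit a binary package for everything built
    FEATURES="buildpkg"

    # client hosts, /etc/portage/make.conf: fetch prebuilt packages, never compile locally
    PORTAGE_BINHOST="https://forge.example.lan/packages"
    FEATURES="getbinpkg"
    EMERGE_DEFAULT_OPTS="--usepkgonly"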
Gentoo is where I created “forges”; these went through many variations over the years:
Virtual machines or Containers
Build sets can be created inside virtual machines; later I switched to using systemd-nspawn to create containers in which I could build updates, even for the local machine, in safety, knowing the whole set would either complete or it wouldn't, with no partially completed runs.
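A rough sketch of that pattern (the build-root path and binpkg cache location are assumptions):

    # run a full update inside a disposable Gentoo build root,
    # sharing the binary package cache with the host
    systemd-nspawn --directory=/srv/forge \
        --bind=/var/cache/binpkgs \
        emerge --update --deep --newuse --buildpkg @world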
Host or group
This one is a more complicated decision. Since various machines benefit from different USE flags, multiple builds must be done; doing this per host is rather wasteful, as many machines set the same flags in most cases. On the other hand, compiling for groupings like “server” or “web app server” results in fewer overall builds, but less fine tuning of each individual build.
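To illustrate the group approach, two make.conf fragments might differ only slightly (the flag choices here are purely hypothetical):

    # "server" group: headless, no GUI toolkits
    USE="-X -gtk -qt5 acl pam ssl"

    # "web app server" group: same base, plus database support
    USE="-X -gtk -qt5 acl pam ssl postgres"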
Arch! No Gentoo! No something else!
Eventually I started to get restless with Gentoo; in particular, how long it took just to run emerge, even in binary-only mode on low power hardware, bugged me. I looked for alternative binary distributions with rolling releases, and I met Arch.
I can't remember why I first moved away from Arch; I'd return to Gentoo for about a year before feeling like giving Arch one more stab. And this would really be the end of my binary distribution days.
Part of my network uses Samba to provide both AD and LDAP services, and AD of course loves DNS, which is handled through bind with DLZ support. Within a month of my returning to Arch, however, there was /some/ issue in the compilation of Samba or bind in the Arch repository; even a brand new install would give some kind of “Bad pointer” error in bind which would crash it when using a DLZ. Bringing down all my nameservers is a bit of an issue. Ultimately I had to pin bind to an older version to keep things working, but this wasn't a great solution, and I lamented my Gentoo days, where I'd just have compiled it all myself and the pieces would interact properly.
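For reference, the pinning itself is simple enough on Arch: once the older package is installed, pacman's ignore list holds it back:

    # /etc/pacman.conf: keep the installed (older) bind, skip it during upgrades
    IgnorePkg = bind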
And so I switched back to Gentoo.
Enter NixOS
This revert to Gentoo would last slightly less than a year. Someone introduced me to NixOS; it took me some time to understand how it would all work, but I was initially attracted to the configuration language.
I'd been messing around with ansible to manage my Gentoo installations, but it was /slow/ to push things out, making ssh connections for every single step; running “everything” took many minutes, so things got split into many playbooks to be run on deploy. But inevitably the tedium of testing changes this way would result in changes being made locally on the servers, and then you'd have to dry-run ansible and work out what needed reintegrating into the main config.
NixOS does this much faster and much better. It also completely solves the issue of disparate installation sets: you can customise whatever packages with whatever build flags you want, and all builds with the same inputs share the same outputs. This completely resolves the question of how “granular” to make each host; under Gentoo you'd typically end up with a whole different set of binaries for each build set, but under NixOS only the packages with different inputs have distinct stored outputs. Anything that is literally shared exists only once, no matter how many individual hosts it goes to.
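A minimal sketch of what that looks like in a host's configuration (the package and flag are only illustrative):

    { pkgs, ... }: {
      environment.systemPackages = [
        # hosts using the stock htop keep sharing one store path;
        # only this overridden build produces a second, distinct output
        (pkgs.htop.overrideAttrs (old: {
          configureFlags = (old.configureFlags or []) ++ [ "--enable-hypothetical" ];
        }))
      ];
    }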
It's more than just an ansible replacement, too: the configuration language is literally code, and while I'm not done tweaking my configuration, the fact that I can package my settings up however I like and then functionally translate them into configuration structures has left me with the freedom to simplify the common parts of my configuration while still having a high degree of fine tuning.
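For instance, a plain attribute set of data can be mapped into full option structures (the site names and paths here are made up):

    { ... }: let
      sites = {
        "wiki.example.lan"  = "/srv/wiki";
        "forge.example.lan" = "/srv/forge";
      };
    in {
      # one piece of real data per site, expanded into nginx vhost options
      services.nginx.virtualHosts =
        builtins.mapAttrs (name: root: { inherit root; }) sites;
    }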
NixOS hits all the things I want: source builds where needed, individual build flags without excess duplicate binaries or repeated, unnecessary rebuilds. It's fast, efficient, customisable.
And that's to say nothing of the fundamentals of NixOS, where the separation of binaries and their libraries allows configurations that might conflict on a normal system.
On top of this, 'impermanence' gave me a level of control over my data that I've come to love. With other Linux systems, backups require you to figure out where all the data goes: all those bits in /var, /etc, maybe other custom folders or locations. Impermanence instead makes NOTHING persist by default. One of the first scripts to write is one that finds and filters everything in the temporary root file system that will be “lost” on reboot; from this, when a new service is installed, it's quite straightforward to work out where it wants to store stuff. Most packages will allow you to specify /where/ to store data, and for those that don't, clever mounting tricks allow you to keep parts of the root filesystem elsewhere. This is all covered in more detail in the Backups section.
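The core of such a script can be tiny; a sketch, assuming the root is a tmpfs with all persistent data on separate mounts:

    #!/usr/bin/env bash
    # list files that live only on the ephemeral root filesystem;
    # -xdev stops find descending into the persistent mounts
    find / -xdev -type f | sort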
Build updates and deployments are all managed by GitLab.
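The CI side of that can stay fairly small; a sketch, assuming a flake-based configuration and made-up host names:

    # .gitlab-ci.yml
    stages: [build, deploy]

    build:
      stage: build
      script:
        - nix build .#nixosConfigurations.myhost.config.system.build.toplevel

    deploy:
      stage: deploy
      when: manual
      script:
        - nixos-rebuild switch --flake .#myhost --target-host root@myhost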
I've been with NixOS for about a year now and see no imminent reason to move; it seems to provide the best of all the worlds I'm interested in, and I only wish I'd found it many years ago.