Nostr Archives
Floppy PNG · 4h ago
I'm working on my build system this morning, making minor tweaks to simplify it and building the system up through LaTeX. Perhaps I'll even get to #e16. It reminded me that there is never a truly predictable complicated system IRL. This reminds me of when I was monitoring thousands of servers: even if the exact same model was purchased, the data coming back was always slightly different. The NIC or SPU or MB had different characteristics that made automating the deployment tricky at a certain level of detail.

I know we put layers upon layers of things to try to make this not so, with containers and declarative OSs like NixOS. *Perhaps*, if you have control of the full supply chain, some things are controllable if it is a priority, but that isn't practical with any system I've worked with. True, I used to drive my vendors crazy by asking for exact models of SCSI drives and controllers. (I ran SCSI in the 90s vs. IDE as a general rule.)

Anyhoo... I could make the build system more automated, run through all of the steps in sequence, or look at dependency graphs to determine the next compile, but the most useful form of my stuff is likely just the shell script and the LFS/BLFS entry.

This is a similar issue to "resilience". Resilience happens at time of crisis. An abstract of one of my favorite articles:

"In domains concerned with global change, achieving resilience in socio-ecological systems is highly desired; however, making this concept operational in reality has been a struggle, partly due to the conflation of the term by these domains. Although resilience is vastly researched in sustainability science, climate change, and disaster management, for some reason this concept is not dealt with from an ontological perspective. In this paper, the foundation for a formal theory of resilience is laid out. 
I propose that the common view of resilience as 'the ability of a system to cope with a disturbance' is a disposition that is realized through processes, since resilience cannot exist without its bearer, i.e. a system, and can only be discerned over a period of time when a potential disturbance is identified. To this end, the constructs of the Basic Formal Ontology are applied to ground the proposed categorization of resilience. In so doing, I adhere to the notion of semantic reference frames by employing a top-level ontology to anchor the notion of resilience."

Daniel Desiree, "Resilience as a Disposition," in Frontiers in Artificial Intelligence and Applications, IOS Press, 2014.

We like to have control. We want to see a map from the present to a future state, despite disruption events. We also want frictionless wins, a custom-built OS. But there is always some form of trade-off.

Back to the topic: if I run top in a window to watch the build go, display the script I'm running, kick it off in the background, and refer the reader to the LFS/BLFS documentation (if it exists) for the build step, running one step at a time, overall that system will prove more resilient for the reader than a tested and refined sequence. This is *particularly* true for LFS builds, as upgrading the CPU might break the whole system. Put another way: for anybody who has attempted an LFS build, my guess is something *always* goes wrong at some point.

In the end? Remember that every framework or abstraction exposes you to risk as far as resiliency goes. Do I need to fully understand Glibc? No. But having a slightly different version of basic OS libraries in every container, and expecting to maintain code that is increasingly created by scraping/matching, no matter how excellent that match is, is not particularly resilient. Now, here is the vector we are on, and the exception.
If we are constantly fed as consumers and just toss out whatever we don't like, and accept the new food/gruel of the day as part of our subscription, with massive engines grinding out supposedly better and better gruel, then we don't have to analyze. We don't need to be resilient. We depend on corporations (or crowdsourcing/the hive mind) to always make it appear like we are moving forward. Perhaps we are.

Finally... it is never all or nothing. There is always some kind of middle ground. This is particularly true in the open source world. Every one of these packages represents a huge effort by many people with skills far above my own. The LFS/BLFS books are created by people who truly understand what is going on underneath. For that matter, Debian, which is package-based, not source-based (from the user perspective), is often a much saner alternative.

The main reason I'm going back to NoNIC for #e16 is that I realized how complicated the window manager/display manager/systemd stack had become. And, like the above thread, that complication threatens resilience, depending on the system.
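The one-step-at-a-time workflow described earlier (display the script, kick it off in the background, watch it with top, wait for it before moving on) can be sketched as a small shell wrapper. This is a minimal sketch under my own assumptions: the `run_step` function name and the step-script layout are hypothetical illustrations, not part of any actual LFS/BLFS tooling.

```shell
#!/bin/sh
# Minimal sketch of a one-step-at-a-time build wrapper.
# run_step and the step-script naming are hypothetical; adapt them
# to your own LFS/BLFS build scripts.

run_step() {
    step="$1"
    log="${step##*/}.log"       # e.g. ./demo-step.sh -> demo-step.sh.log

    cat "$step"                 # show the reader exactly what will run
    sh "$step" > "$log" 2>&1 &  # kick the step off in the background
    pid=$!
    # from another window the reader can watch it, e.g.:  top -p <pid>
    wait "$pid"                 # one step at a time: block until it is done
    echo "step ${step##*/} finished, log in $log"
}
```

For example, `run_step ./demo-step.sh` (a hypothetical step script) would print the script, run it in the background, and leave `demo-step.sh.log` behind so the reader can inspect what happened when something inevitably goes wrong.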