• atzanteol@sh.itjust.works

    I remember partitioned systems being a big thing in like the ’90s/’00s, since those were the days you would pour $$$$ into large systems. But I thought the “cattle not pets” movement did away with that? Are we back to the days of “big iron”?

    • Lydia_K@lemmy.world

      What do you think all those cattle run on?

      Just big-ass servers with tons of cores and RAM.

      • atzanteol@sh.itjust.works

        I figured it was cattle all the way down. Even if they’re big. Especially when you have thousands of them.

        Though maybe these setups can be scripted/automated to be easy to replicate and reproduce?
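
        Something like staging an instance from code doesn’t seem far-fetched. Here’s a rough sketch using the stock kexec_file_load(2) syscall to stage a kernel image plus command line; the multikernel patches extend kexec to start the image on spare CPUs rather than replacing the running kernel, and the paths and parameter values below are made up:

            /* Sketch: stage a kernel image + command line programmatically.
             * Stock kexec_file_load(2) replaces the running kernel on reboot;
             * the multikernel work extends kexec to start the image on spare
             * CPUs instead. Needs CAP_SYS_BOOT. Paths/values are made up. */
            #define _GNU_SOURCE
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/syscall.h>
            #include <unistd.h>

            #ifndef KEXEC_FILE_NO_INITRAMFS
            #define KEXEC_FILE_NO_INITRAMFS 0x00000004
            #endif

            int main(void)
            {
                int kernel_fd = open("/boot/vmlinuz-spare", O_RDONLY);
                if (kernel_fd < 0) { perror("open kernel"); return 1; }

                /* Confine the instance with standard boot parameters:
                 * 4 CPUs and a reserved 4G region starting at 8G. */
                const char *cmdline = "console=ttyS0 maxcpus=4 memmap=4G$8G";

                if (syscall(SYS_kexec_file_load, kernel_fd, /*initrd*/ -1,
                            strlen(cmdline) + 1, cmdline,
                            KEXEC_FILE_NO_INITRAMFS) < 0) {
                    perror("kexec_file_load");
                    return 1;
                }
                puts("kernel staged");
                return 0;
            }

        Repeat with a different cmdline per instance and the whole fleet of kernels is just config: the same cattle idea, one level down.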

    • fruitycoder@sh.itjust.works

      Constant back and forth. Moving things closer increases efficiency; moving them apart increases resiliency.

      So we are constantly shuffling between the two for different workloads to optimize for the given thing.

      That said, I see this as an extension to the cattle idea, making even the kernel a thing to be raised and culled on demand. This matters a lot more with heavy workloads like HPC and AI stuff, where a process can be measured in days or weeks and stable uptime is paramount, vs the stateless work k8s was intended for (I say intended because you can k8s all the things now, but it needs extensions to handle the new lifecycles).

  • HiddenLayer555@lemmy.ml

    If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel plus servers that provide the same APIs for Linux apps? Maybe even seL4, which has formally verified behaviour. That way the microkernel can spin up arbitrary instances of whatever services are needed most.
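
    For a taste of what “servers” means there: on seL4 every service is just IPC on an endpoint capability, roughly like this (sketch only; assumes the root task already handed the client an endpoint cap, here a made-up slot SERVICE_EP, and a made-up wire protocol):

        /* Sketch of calling a "server" on seL4: the whole API is message
         * passing on an endpoint capability. SERVICE_EP is an assumed
         * capability slot set up by the root task; the protocol (two
         * operands in, one result out) is made up. */
        #include <sel4/sel4.h>

        #define SERVICE_EP ((seL4_CPtr)0x10) /* assumed cap slot */

        static seL4_Word service_add(seL4_Word a, seL4_Word b)
        {
            /* label 0, no caps transferred, 2 message registers */
            seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 2);
            seL4_SetMR(0, a);
            seL4_SetMR(1, b);
            seL4_Call(SERVICE_EP, info); /* blocks until the server replies */
            return seL4_GetMR(0);
        }

    A Linux “personality” would then just be a set of such servers speaking the syscall ABI.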

    • Avid Amoeba@lemmy.ca

      I imagine there are some overhead savings, but I don’t know how much. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without special passthrough features?

    • friend_of_satan@lemmy.world

      I recently heard this great phrase:

      “A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”

      This would be somewhere between that, where each container could believe it has the OS to itself, but with different kernels.
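
      The container half of that quote is literally just kernel namespaces: give a process a private view of one OS resource at a time. Tiny sketch (needs root or CAP_SYS_ADMIN):

          /* A process gets its own view of the hostname via a UTS
           * namespace; the rest of the system is unaffected.
           * Needs root or CAP_SYS_ADMIN. */
          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          int main(void)
          {
              if (unshare(CLONE_NEWUTS) < 0) { perror("unshare"); return 1; }
              sethostname("my-own-os", strlen("my-own-os"));

              char name[64];
              gethostname(name, sizeof(name));
              printf("hostname in here: %s\n", name); /* changed only here */
              return 0;
          }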

  • geneva_convenience@lemmy.ml

    Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?

    Also dynamically allocating the RAM seems more efficient than having to assign each kernel a portion at boot.
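
    Each kernel would only ever see the RAM it was handed at boot; e.g. sysinfo(2) reports what the running kernel thinks it owns (plain Linux sketch):

        /* Prints the RAM this kernel believes it owns. Under static
         * partitioning each kernel instance would report only its
         * boot-time slice, not the whole machine. */
        #include <stdio.h>
        #include <sys/sysinfo.h>

        int main(void)
        {
            struct sysinfo si;
            if (sysinfo(&si) < 0) { perror("sysinfo"); return 1; }
            printf("RAM visible to this kernel: %lu MiB\n",
                   (si.totalram * si.mem_unit) / (1024UL * 1024UL));
            return 0;
        }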

    • LeFantome@programming.dev

      Xen is running full virtual machines. You run full operating systems on simulated hardware. The real “host” operating system is the hypervisor (Xen). Inside a VM, you have the concept of one or more CPUs but you do not know which actual CPU cores that maps to. The load can be distributed to any of them by the real host.

      In something like Docker, you only run a single host kernel. On top of that you run sandbox environments that run on the kernel that “think” they have an environment to themselves but are actually sharing a single host kernel. The single host kernel directly manages the real hardware. Processes can run on any of the CPUs managed by the single host kernel.

      In both of the above, updating the host means shutting the system down.

      With this new approach, you have multiple kernels, all running natively on real hardware. Any given CPU is being managed by only one of the kernels. No hypervisor.
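
      You can see the “which CPUs does my kernel give me” view from userspace in all three setups (sketch, plain Linux):

          /* Counts the CPUs the running kernel will schedule us on.
           * Under one shared host kernel (Docker's default) that's every
           * core; in a Xen guest it's the virtual CPUs; under a
           * multikernel each kernel would only show its own slice. */
          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>

          int main(void)
          {
              cpu_set_t set;
              CPU_ZERO(&set);
              if (sched_getaffinity(0, sizeof(set), &set) < 0) {
                  perror("sched_getaffinity");
                  return 1;
              }
              printf("schedulable CPUs: %d\n", CPU_COUNT(&set));
              return 0;
          }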