A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar


🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.


🍺 Docker: *walks in and takes a seat at the bar* Bartender, give me something light and easy to use—just like my platform.

🍸 Bartender: Sure thing, Docker. One “Microservice Mojito” coming up.


🥃 Podman: *strides in, surveying the scene* Ah, Docker, there you are. I heard you’ve been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What’ll it be?

🥃 Podman: I’ll have what he’s having but make it daemonless.


🍹 LXC: *joins the party, looking slightly overworked* You two and your high-level functionalities! I’ve been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you’ve got. I need all the CPU shares I can get.


🍺 Docker: *sips his mojito* So, Podman, still trying to “replace” me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that’s all. *winks*

🍹 LXC: *laughs* While you two bicker, I’ve got entire Linux distributions depending on me. No time for small talk.


🍺 Docker: Ah, but that’s the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we’re like siblings. Siblings where one doesn’t need superuser permissions all the time.

🍹 LXC: *downs his strong drink* Well, enjoy your easy lives. Some of us have more… weight to carry.


🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I’m good. Just gonna commit this state.

🥃 Podman: I’ll `podman checkpoint` this moment; it’s been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow’s workloads will be?


And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development.

And they all knew they’d be back at it, spinning up containers, after a well-deserved system reboot.

🌙 The End.

I was a bit bored after working with Podman, Docker, and LXC, so I asked ChatGPT to generate a fun story about these technologies. I think it’s really funny and way better than these things usually turn out. I did a quick search to see if I could find something similar, but I couldn’t find anything. I do suspect it was repurposed from somewhere, though.

I hope you can enjoy it despite it being AI generated.

    • raldone01@lemmy.worldOP · 1 year ago

      You mean OpenAI? I agree.

      I am currently looking for OSS self-hosted alternatives. If you know of good ones, let me know.

      I have a beefy server that needs something to do. I already downloaded WizardCoder but have yet to set up a UI.

      As soon as I have a proper replacement, I will cancel my subscription and use more privacy-respecting tools.

      • TechieDamien@lemmy.ml · 1 year ago

        You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.

          • TechieDamien@lemmy.ml · 1 year ago

            If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option, but that is the price to pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.

            • raldone01@lemmy.worldOP · edited · 1 year ago

              I don’t have a cluster, and the only GPU in my server is busy with image generation. I hope CPU inference is somewhat usable (74 cores), but I will have to try. If it isn’t usable, I can still rent GPU time from cloud providers.