You Are Most Likely Incapable of Reading The Fucking Manual

From You Are Most Likely Misusing Docker, I quote:

Docker started as a tool to enable easier scaling of your applications….

Today docker is usually used to distribute applications instead of just for easier scaling. If you’re using docker for your development environment you’re probably not using docker for scaling, you’re using it to get a reproducible development environment.

But that’s just the tip of the iceberg of dumb.

Facts

First, let’s look at exactly what Docker is: Operating-system-level virtualization. It’s gained a lot of traction, but it is just a new spin on an old technology.

Now, it’s important to note a few facts here:

Operating-system-level virtualization usually imposes little to no overhead, because programs in virtual partitions use the operating system’s normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine….

Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel.

Another fact for which I have no Wikipedia quote is that a container contains changes from its base image. For instance, if you use an Ubuntu image to create a container, and the image is about 200 megs, the container is not 200 megs. It is virtually empty at first. If you go into that container and change some files, the container’s size is only as big as those changed files, still typically nowhere near 200 megs. When you destroy the container, you free up the space used by the changes to the image, but the image itself remains untouched. Containers are meant to be disposable, and in fact if they contain important state that’s not in an image, they lose their reproducibility, since it is the image and not the container that is shipped.
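You can watch this copy-on-write behavior yourself. A quick sketch, assuming a working Docker install with network access (`scratchpad` is just a throwaway container name I made up):

```shell
# The image is the ~200 meg part; you download it once.
docker pull ubuntu:16.04

# Run a container from it and change exactly one file.
docker run --name scratchpad ubuntu:16.04 bash -c 'echo hi > /hello.txt'

# 'docker diff' lists only what changed on top of the image:
# one tiny file, not 200 megs of operating system.
docker diff scratchpad

# Destroying the container frees those changes; the image is untouched.
docker rm scratchpad
docker images ubuntu:16.04
```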

You suck at life

So let’s start randomly picking apart this shitty, shitty, hopelessly shitty rant.

This is why containers are awesome: You can play around and later just restart the container. You avoid the side effects of package managers by just shipping the final cake instead of the recipe to make it.

First, no. If you “play around” in the container, you’re not changing the image, so whatever you play around with goes nowhere.

Second, it’s fairly common to have the Dockerfile in your source code. That file is EXACTLY a recipe to make your “final cake”. Without that, the developer could never make the cake again. So … shut up and RTFM. It’s true that you don’t have to provide a Dockerfile to others, but it’s pretty much a must for devs who are using Docker for reproducible environments – which is precisely the situation about which this douche is whining.
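And for reference, the recipe is usually a handful of lines. A minimal sketch of a Dockerfile for a PHP app (the package name and paths are made up for illustration):

```dockerfile
# The base layer: a known, fixed starting point.
FROM ubuntu:16.04

# Install dependencies (hypothetical package list).
RUN apt-get update && apt-get install -y php7.0-cli \
    && rm -rf /var/lib/apt/lists/*

# Copy the application into the image.
COPY . /app
WORKDIR /app

CMD ["php", "index.php"]
```

Any dev on the team runs `docker build -t myapp .` against that file and gets the same cake, every time.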

The down side is that the container will be huge in size

No, a container won’t be huge. The base Ubuntu 16.04 image is just a 50-meg download, which expands to under 200 megs. But the container will only be whatever has changed on top of that base image.

This means slow deployments and slower tools

Refer again to the wikipedia quote about how containers have little or no overhead.

Containerization doesn’t mean slow deployments unless you’re really shitty at using Docker (my gut says the author is shitty at everything, though, so maybe he’s just writing himself a reminder to not use technology).

The image (again, image vs. container — LEARN the difference) uses a layered filesystem that caches components. So if your recipe is:

  • Ubuntu 16.04
  • Install 7 packages
  • Download some random thing from the interwebs
  • Copy a pile of PHP

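Those four steps map one-to-one onto Dockerfile instructions, each producing its own cached layer (the package names and download URL below are placeholders, not real ones):

```dockerfile
FROM ubuntu:16.04
# ^ layer 1: cached after the first build

RUN apt-get update && apt-get install -y curl pkg1 pkg2 pkg3 pkg4 pkg5 pkg6 pkg7
# ^ layer 2: the 7 packages (plus curl for the next step)

RUN curl -fsSL https://example.com/random-thing.tar.gz | tar xz -C /opt
# ^ layer 3: the random thing from the interwebs

COPY src/ /var/www/html/
# ^ layer 4: the pile of PHP -- the only layer rebuilt on a typical code change
```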
Once a deploy happens (which may be slow the first time), all four layers are cached. If, from that point forward, the developer typically only changes the PHP bit, the rest of the cake stays the same. Docker throws away the old “Copy a pile of PHP” and puts in the new one. Nothing else is downloaded or updated.

The other layers change only when the dev(s) decide they change. For instance, if “random thing from the interwebs” needed an update. In that case, the Ubuntu 16.04 bit is still cached, as are the 7 packages.

When an application is deployed with Docker and its packages don’t need to update frequently, new code pushes are exactly as fast as they used to be. Possibly faster, since Docker is automating that piece.

In cases where the packages or OS have to change, shitty package managers, including Nix, don’t help much: the devs still have to do a ton of per-developer work. With Docker, you’re downloading very compact, pre-built images with exactly what you need already installed. With Nix, each developer still has to download and install whatever packages the software uses.

The containers also usually contain an entire operating system. Think about it: You’re running an operating system just to run containers with their own operating systems. This kind of feels dirty.

WHAT?!? Again, containers are super small. Images are the things that take up space. Even so, if you s/container/image/, you’re still wrong as fuck.

Refer to Wikipedia again, you ignorant little shitbunny: the containerization software relies on your OS. Not just Docker, ALL container tech is like this, and has been for many years! How the hell do you get something this wrong and still manage to get enough food into your face for basic survival?

And … even if what you said were true, which, dear christ, it so, so isn’t even close to being true, but if it were: how does it feel dirty? Virtual machines have been doing that forever!

Maybe it’s just me, but for me…

For development, it’s more than just about packages. Sometimes developers have to install stuff from source and it gets really fucking hairy. Sometimes we have to run the exact right version of some crazy proprietary database, and we’d rather keep that separate from our codebase. Sometimes we want to get a toolsuite on our system in a way that is extremely easy to remove. Say, for instance, I grab nixos/nix. Once I realize it’s a shitty horrific tool only loved by pedophiles, I can just docker rmi nixos/nix and never worry that it could have shat all over my dev environment.
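That removal really is the whole cleanup. A sketch, assuming a running Docker daemon:

```shell
# Try the toolsuite in a throwaway container; --rm deletes the
# container itself the moment you exit.
docker run --rm -it nixos/nix sh

# Done evaluating? One command removes the image; nothing was ever
# installed on the host.
docker rmi nixos/nix
```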

Nix can’t do for developers what Docker does. If you think it can, it’s because you haven’t actually done very serious dev on teams of more than you and your hand. Also, masturbating while reading the latest Nix fanfic isn’t commonly referred to as “development”, so… yeah.

Sorry

UPDATE

It’s come to my attention that Nix isn’t just loved by pedos, so I guess I should apologize. I know shit about Nix. Maybe it’s amazing. I mocked it because it was loved by the imbecile who wrote the article I mocked. I assumed that his inability to comprehend basic technology must mean Nix was terrible. Plus I really enjoy being an asshole.

But if Nix is decent, and there are people who like it but don’t like “having relations” with young children, well, heck… color me embarrassed.
