Ask Slashdot: Is Dockerization a Fad?
Long-time Slashdot reader Qbertino is your typical Linux/Apache/MySQL/PHP (LAMP) developer, and writes that "in recent years Docker has been the hottest thing since sliced bread."
You are expected to "dockerize" your setups and be able to launch a whole string of processes to boot up various containers with databases and your primary PHP monolith with the launch of a single script. All fine and dandy thus far.
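For readers who haven't seen this workflow, the "single script" boot-up usually amounts to something like the following minimal sketch; the image tags, names, and credentials here are illustrative, not from the submitter's setup:

#!/bin/sh
# Hypothetical one-shot bring-up of a LAMP-ish stack.
docker network create lamp 2>/dev/null || true
# Database container with its own persistent volume.
docker run -d --name db --network lamp \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=app \
  -v dbdata:/var/lib/mysql mariadb:10
# The PHP "monolith", with the working directory mounted as the docroot.
docker run -d --name web --network lamp -p 8080:80 \
  -v "$PWD":/var/www/html php:8-apache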
However, I can't shake the notion that much of this -- especially in the context of LAMP -- seems overkill. If Apache, MariaDB/MySQL and PHP are running, getting your project or multiple projects to run is trivial. The benefits of having Docker seem negligible, especially having each project lug its own setup along. Yes, you can have your entire compiler and Continuous Integration stack with SASS, Gulp, Babel, Webpack and whatnot in one neat bundle, but that doesn't seem to diminish the usual problems with the recent bloat in frontend tooling, to the contrary....
But shouldn't tooling be standardised anyway? And shouldn't Docker then just be an option for those who couldn't be bothered to have (L)AMP on their bare metal? I'm still skeptical of this Dockerization fad. I get that it makes sense if you need to scale microservices easily and quickly in production, but for 'traditional' development and traditional setups, it just doesn't seem to fit all that well.
What are your experiences with using Docker in a development environment? Is Dockerization a fad or something really useful? And should I put up with the effort to make Docker a standard for my development and deployment setups?
The original submission ends with "Educated Slashdot opinions requested." So leave your best answers in the comments.
Is Dockerization a fad?
Docker is really about portability and scalability (Score:5, Insightful)
With Docker you can scale out, and you can (usually) take your Docker containers to other sites much more easily than you can transport that LAMP stack running directly on an OS.
If you don't need these things then Docker is probably overkill.
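As a concrete (hypothetical) illustration of that portability, an image can be moved to another site without even a registry; names are illustrative:

# Stream an image to another host and start it there.
docker save myapp:1.0 | gzip | ssh user@othersite 'gunzip | docker load'
ssh user@othersite 'docker run -d -p 80:80 myapp:1.0'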
Re: (Score:2)
That isn't really true; Puppet, Chef, etc. all did basically the same thing for applications/environments. It's more about isolation, which allows people to share resources more effectively.
Scalability is still entirely the obligation of the application developer.
Re:Docker is really about portability and scalabil (Score:5, Insightful)
Docker is a new hammer, and everything is a nail. Running unprivileged workloads has a security benefit, save for the bugs found so far, and the fact that Docker runs as root.
Docker is seductively simple, and such things always get misused. That said, fast orchestration and teardown for compliance's sake is pretty simple -- if you pull clean workloads and keep them clean.
Learn Libvirt, or do it manually with cgroups, LXC, and various secret filesystem and networking sauces. Among that diversity of choices, Docker is the one that succeeded, as almost everyone can do Docker, so it has a gravitational pull when you're fishing for fast stuff for the scrum master, so he won't chain you to the whipping post.
Re: Docker is really about portability and scalabi (Score:2, Interesting)
I was initially skeptical of docker when I first was told about it for many of the same reasons you outline. But, I have become a believer. The portability and repeatability is well worth it. We recently took a series of micro services originally deployed as docker containers across 1800+ sites and moved it to a central k8s cluster in less than four weeks, changing the code a bit for performance reasons. Prior to containers this feat would've been more daunting. I would've had to write
Re:Docker is really about portability and scalabil (Score:5, Insightful)
Re: Docker is really about portability and scalabi (Score:3)
Ok, but that's super easy to get working without Docker. Linux is Linux, whether it's running on a dead simple platform like Linode, or a huge, complicated mass of services and features like AWS.
Launching a server into any of these environments is a matter of selecting a base image and running a single command.
After the server is launched, it's all Linux. The scripts to provision AWS or Google Cloud or Azure or Linode are the same.
Re: (Score:3, Interesting)
This is the idea behind it, but it's built on sand.
You're basically asking to move around virtual machines, and the end result is that you end up with a lot more code rot than you would had you just installed the OS fresh and then kept a separate "disk image" for your software and a "disk image" for your data. If you manage to screw up a transplant, you can roll back to earlier images, and you can maintain them offline.
Docker doesn't do that. Docker basically spins up an "environment" image, and you work out
Re: (Score:2)
Those good reasons are likely why a certain large social media platform is not interested in virtualizing or containerizing its massive production environment.
Re: (Score:3)
No but you could tar up your system, copy it across and chroot() into the resulting location long before docker even existed.
Or you could install into a specific --prefix, and tar that up instead.
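A rough sketch of that pre-Docker workflow, with illustrative paths:

# Tar up a tree, copy it across, and chroot into it.
tar -C / -czf sysimage.tar.gz bin lib etc usr var
scp sysimage.tar.gz user@target:/srv/
ssh user@target 'mkdir -p /srv/jail \
  && tar -C /srv/jail -xzf /srv/sysimage.tar.gz \
  && chroot /srv/jail /bin/sh'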
Re: (Score:2)
Resource limiting (cgroups, etc.) was only introduced into the Linux kernel in 2007, and a similar concept for FreeBSD came about in 2000. Traditional chroot is risky without limiting compute and memory usage.
Then there's the networking stack for each container that allows conflict-free access to all ports on a network interface.
Then there's the implementation of overlay filesystems in containers to reduce redundancy while maintaining the ability to tailor each container to its needs, and an image reposito
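Both of those points are visible straight from the CLI; a sketch with illustrative names and limits:

# cgroup-backed caps, plus a private network namespace per container,
# so both containers can bind port 80 internally without conflict.
docker run -d --name web1 --memory 512m --cpus 1.5 -p 8081:80 nginx:alpine
docker run -d --name web2 --memory 512m --cpus 1.5 -p 8082:80 nginx:alpine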
Re:Docker is really about portability and scalabil (Score:5, Informative)
Scalability is gained because Docker has a lower overhead than virtual machines. In theory, you can run tons of Docker images on the same physical machine, and they'll share all the elements they have in common, so for storage in particular, you eliminate a lot of redundancy. Also, it makes it much easier to run a very minimal production image without any development tools, instead of a full Linux distribution which is what most people would end up with on a VM solution. That also makes it easier to avoid running any daemons that are normally part of your distribution but not relevant for your application.
Of course, that's all in theory. I haven't needed to put it to practice myself, so I can't comment on how well it works out in the real world.
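The "minimal production image" point is usually achieved with a multi-stage build. A self-contained sketch, fed to docker build on stdin (so there is no build context, which is why the toy Go program is generated in-image; Go is used purely for illustration):

docker build -t minimal-demo - <<'EOF'
# Stage 1: full toolchain.
FROM golang:1.21 AS build
RUN printf 'package main\nimport "fmt"\nfunc main(){fmt.Println("hi")}\n' > /main.go \
 && CGO_ENABLED=0 go build -o /app /main.go
# Stage 2: tiny runtime image with no compilers or dev tools.
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF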
Re: Docker is really about portability and scalabi (Score:2)
Amazon has servers for $5/mo. How much money can you really save with Docker? If you want to run a bunch of containers, you're going to eat up RAM. On AWS, server costs are almost directly correlated to available RAM. So, if you run a bunch of containers, you are going to need to spend more on the instance to run them.
Re: (Score:2)
Thanks, but I'm happy at Dell EMC. We had a mandatory set of training that included Docker, even though it doesn't apply directly to our workgroup. It was interesting, though; probably the most interesting of the mandatory training sessions in that batch. I'm glad I had a chance to learn about it.
Re: No it isn't (Score:4, Informative)
Re: No it isn't (Score:2)
How does a system dependent on ever-changing packaged binaries help solve the issue of monolithic binaries one doesn't know how to generate?
Re: No it isn't (Score:3)
Actually, Docker is one of those technologies that makes experienced sysadmins shake their heads and go, "this isn't solving any problem I have." You just start wondering about the world where web developers live and how it got so ignorant of the underlying platform.
Docker is for Mac web developers too afraid to learn Linux.
Re: No it isn't (Score:2)
"When I'm trying to push out a feature the last thing I want is to be bogged down dealing with build systems, minutia, and gotchas for some auxiliary service"
I can appreciate that, but at the end of the day that isn't too hard to deliver without Docker by following good practices. Keeping things loosely coupled and independently verifiable seems much more scalable.
Docker seems to push towards an application monolith. If you find yourself developing across multiple micro-services at the same time, it might b
Re: Docker is really about portability and scalabi (Score:5, Informative)
A sort of virtual machine system that works in a pretty hands-off way, with a very Unix-style command-line interface and good compatibility, so you can run the same VM image on different platforms with zero fuss.
Say, you have 10 different projects - completely independent things, some web apps, some server apps, whatever. You want to run them on one machine.
You can install them all, install all their common dependencies -- databases, web server, etc. -- and while the CPU and RAM overhead will be reduced as the common parts serve multiple clients, getting the configuration to work is a pain: one may require the DB server to be configured one way and another a different way; one needs legacy SSL versions on the web server for backwards compatibility, while another requires those be blocked and only the newest enabled, for security... Never mind the migration pains if you want to move *just one* of the services to a different machine.
With Docker, you create 10 virtual machines, each running its own service, each configured strictly to fit that service's needs, and if you find one takes too much resources, moving it to a different machine is trivial.
Unlike the classic VMs, where you create e.g. one VM per customer, or one VM per OS variant, here you use virtualization to encapsulate services. Not that Docker would stop you from doing the former, or you couldn't do the latter with other virtualization systems, but Docker is made with this sort of encapsulation in mind.
Containers ; Mess (Score:5, Interesting)
Unlike the classic VMs, where you create e.g. one VM per customer, or one VM per OS variant, here you use virtualization to encapsulate services. Not that Docker would stop you from doing the former, or you couldn't do the latter with other virtualization systems, but Docker is made with this sort of encapsulation in mind.
Specifically, Docker leverages the container and namespace isolation infrastructure that the Linux kernel has (the same as LXC does).
So instead of running a full-fledged VM from the ground up (including its own kernel, its own virtualized hardware, etc.), you just set up a "partition" inside your kernel and run only *the service, isolated from the rest*.
Think of it as "chroot on steroids" (not only for path and trees, but for everything else in the kernel) or "jails" if you're used to other flavors of UNIX which have those.
So it's much more lightweight than an actual VM, and thus with Docker (and LXC, etc.) it's much easier to have 10 different containers running concurrently, each with its own service, than, say, 10 fully fledged VirtualBox VMs.
Or at least that's what the advertisement says.
Also, the building of Docker containers themselves is scripted by design (Dockerfile), with each layer chained to the previous one (a bit in a git-like DAG fashion).
So it should be possible to relatively easily swap a layer and build the same thing on top of a different base (use a different base distribution and make a container with the same software, or upgrade the "Go/Ruby/Python/Node.JS/WTFBBQ" framework and rebuild on top), just the way a competent git herder could rebase commits onto a different base (say, porting WIP drivers to the newest kernel release).
(The key word, sadly, being "competent". Git rebasing in the wrong hands tends to be a horrible mess.)
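A sketch of that "rebase": the same build steps replayed on two different bases via a build argument (image names and the trivial RUN step are illustrative):

for base in debian:12 alpine:3.19; do
  docker build -t "myapp:${base%%:*}" --build-arg BASE="$base" - <<'EOF'
ARG BASE
FROM ${BASE}
RUN echo "same steps, different base" > /etc/motd
EOF
done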
In practice, it seems to me that it is used as an excuse by some of the worst developers -- those who require you to download a giant mess of dependencies, half of GitHub's worth of repositories (all at various random commits, none at an actual release), most of them used for one single function ("align string with space-fill") -- and it's a miracle that the software somehow works without breaking apart in a DLL-hell explosion.
(See all the new popular languages like Node.JS, Go, Rust, etc. Some of them don't even have a standard library to begin with.)
You know the kind, who'll say "but it works on my machine!" -- the machine itself having a dozen mutually incompatible Python environments (obligatory xkcd ref) [xkcd.com].
Now, thanks to Docker, they can ship their machine around -- without other people being able to throw at them the obvious argument about VMs being too heavy/resource-hungry.
Re: Docker is really about portability and scalab (Score:2)
If your micro-services architecture benefits from such technology as Docker, that seems to be a sign that your architecture is monolithic. Sure, you split it up into 10 different projects, but if they are so inter-dependent that you feel the need to launch 10 things to test one, then you have just created a very complicated monolithic app.
Re: Docker is really about portability and scalabi (Score:4)
Docker is a lightweight virtual machine. It shares a kernel with the host but has its own filesystem, network and process space. It generally has just enough of the OS installed to run its embedded application.
Re: (Score:2)
The problem is that shared kernel, in my view. Running a RHEL8 docker image on a RHEL7 docker host is not an actual representation of a real RHEL8, because you are using a 3.10 kernel and not a 4.18 one, so you are missing a bunch of features. Further, rebooting for a kernel update of the host requires a full reboot of all the docker images too. There is no equivalent of vmotion (well, you can checkpoint and migrate, but there is downtime).
Sure it is less overhead than a full VM per app/service approach, but
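The shared kernel is easy to demonstrate from the CLI (kernel version shown is illustrative):

uname -r                           # e.g. 3.10.0-957.el7 on a RHEL7 host
docker run --rm centos:7 uname -r  # a different userland, same host kernel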
Re: (Score:3)
running a RHEL8 docker image on a RHEL7 docker host is not an actual representation of a real RHEL8
You use it to run a single application, not an operating system environment.
Further rebooting for a kernel update of the host requires a full reboot of all the docker images too.
If you need to update Windows or Linux, you update all of your VMs and have to reboot them. VMotion only helps when you're updating the underlying host.
Re: (Score:2)
Disk space isn't really an issue on a properly designed host and/or image repository. Containers usually share common files using an overlay filesystem like aufs [wikipedia.org] or OverlayFS [wikipedia.org].
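The same mechanism can be tried by hand; a minimal sketch of an overlay mount (run as root; paths are illustrative):

mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
echo shared > /tmp/lower/file
mount -t overlay overlay \
  -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
echo private > /tmp/merged/file   # the write lands in upper; lower is untouched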
Re: (Score:2)
Re: (Score:2)
That's what templated VMs are for. One base image, lots of diff files for each test case...minimal storage overhead.
Useful for systems without the normal ecosystem (Score:3, Interesting)
I've personally only used docker a little bit, but where I have found it useful is on my QNAP. The QNAP OS doesn't have the full range of tools available, and not everything is available via entware. Docker is able to fill the gaps.
For example, I have a tool I've written that is several times faster using pypy than python (the BeautifulSoup library is so much faster using pypy). Unfortunately, pypy isn't available in entware, and I've failed in getting it compiled for my QNAP. However, there's a docker container with pypy that I was able to install into Container Station, and I'm able to run it using that.
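For the curious, that kind of "borrow a runtime" usage is a one-liner (file names are illustrative; Container Station wraps the same docker CLI):

docker run --rm -v "$PWD":/work -w /work pypy:3 pypy3 mytool.py input.html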
Re:Useful for systems without the normal ecosystem (Score:4, Insightful)
Of course, you can also use LXC on the QNAP, and for some things it's more appropriate (e.g. I have LXC containers acting as VPN clients and SOCKS5 proxies).
Basically, I use Docker where I just want to be able to run a tool, and LXC for when I want a complete environment (including cron, etc).
Re:Useful for systems without the normal ecosystem (Score:4)
This. I have an LXC container running a complicated backup script that uses Git to download changes. And it maps to the QNAP filesystem so that the data is in turn sent to a tape/backup disk/offsite. I could do this with Docker, but LXC makes it much easier.
That said, after my QNAP's last software upgrade my LXC container was just...gone :(
So now I've moved to Synology, which does not support LXC, and I have to dockerize the script.
Helpful for people who can't write shell scripts (Score:2)
There are some other advantages, for example, it makes it easier to transfer between AWS and Google cloud, but that's mostly not why people use it.
Docker will lose popularity if someone figures out an easier way to deploy.
Re: Helpful for people who can't write shell scrip (Score:2, Insightful)
You mean like chef, puppet, ansible or a bunch of semi-well written bad scripts?
Docker exists so dev shops can do a half-assed job of deploy and release that any mid-level sysadmin could do 25 years ago with nothing but Unix tools like bash, awk, sed, scp, rsync, rdist, etc.
When you are clueless about the perfectly good tools that already existed because I Am Not A Sysadmin! you write all new tools to do the same thing but poorly, with extra bloat, and new security holes never imagined before. And you get new
Re: Helpful for people who can't write shell scr (Score:5, Funny)
You must be new here
Separation of concerns. (Score:5, Insightful)
That, to me, is what containerisation is all about.
Having a database engine that's effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security.
Then there's the ability to have either ephemeral or persistent containers (or any combination of those) on a single virtual.
And swarm clustering for high availability (or even powering up more nodes on underused hardware when one of your apps needs extra grunt, and scaling back when it doesn't).
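That scaling step really is small in swarm mode; a sketch with illustrative names and counts:

docker swarm init              # once, on the first node
docker service create --name web --replicas 2 -p 80:80 nginx:alpine
docker service scale web=6    # spread across whatever nodes have capacity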
I think there's a lot to be said for it, personally. But it's an option I'd only bring in on very mature infrastructures. There's absolutely no point in bringing in containers if you're not capable of maintaining the hardware that comprises your infrastructure. Or if the virtuals that sit on your hardware aren't second nature, and backing up and recovering them is child's play.
Way too many people think that it's simplicity to run 'disposable' containers, and they will definitely work great, until you discover that something really isn't right.
That's when you absolutely have to have a full, and complete understanding of everything you've done, and how it _really_ works.
As long as you have the mature infrastructure and skill base, then containers are damnably useful. But they're definitely 'cherry on the cake'.
Re:Separation of concerns. (Score:4, Insightful)
Having a database engine that's effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security
How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there. At the very least they could simply modify the existing code that accesses the DB to do other things.
Now, in addition to the web server being an attack vector, the DB itself is also an attack vector that can potentially be accessed directly, since it resides elsewhere and allows (at least some) remote connections.
I just don't see how that makes anything more secure. More scalable, yes (along with worse latency though).
Re: (Score:2)
Indeed. It does _not_ increase security. It can make security someone else's responsibility (great for the standard developer that has no clue about security), but it always is _your_ problem when it fails.
Re: (Score:3, Informative)
How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there.
You don't get full access to the DB, you get the same access to the DB that was granted to the web server.
The web server is going to be restricted to only read and maybe write to one context in the DB.
Other processes elsewhere (not in the webserver) will have access to that data plus other data to make additional links to other software, but none of that will be accessible to the webserver.
At the very least they could simply modify the existing code that accesses the DB to do other things.
OK, so you can do what the webserver can do. You can read a table to get some data, and that's it.
How does this aid yo
Re: (Score:3)
How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there
php-fpm is running in a separate container and communicating over a Unix socket in a shared file system. nginx doesn't have read access to config.php, but php-fpm does. nginx doesn't have access to the database socket, but php-fpm does. You have to crack open php-fpm to gain access to the database.
At the very least they could simply modify the existing code that accesses the DB to do other things.
The only mount points shared between the two are mounted read-only for nginx, so you can't just add a new php script.
the DB itself is also an attack vector that can potentially be accessed directly, since it resides elsewhere and allows (at least some) remote connections.
Only accessible from php via a socket file. Everything's often on one system.
When on an
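A sketch of that wiring under stated assumptions (the fpm pool must be configured to listen on a socket under /run/php; volume and container names are illustrative):

# php-fpm gets the code read-write and owns the socket directory.
docker run -d --name fpm -v appcode:/var/www/html -v phpsock:/run/php php:8-fpm
# nginx sees the code read-only: it can serve static files and forward
# to the socket, but it cannot add or alter a single PHP script.
docker run -d --name web -p 8080:80 \
  -v appcode:/var/www/html:ro -v phpsock:/run/php nginx:alpine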
Re: (Score:3)
You pay for that with potential security problems within each container and in the containerization layer. It actually decreases security in general. It may increase the skills an attacker needs slightly. What you want instead if you do security is something like SELinux or AppArmor, configured restrictively and, of course, good use of the standard UNIX isolation model. But that needs real skill, hence this ElCheapo approach and the fantasy that this is good for security.
It's just a tool for deployment, not a panacea (Score:5, Interesting)
Re: (Score:2)
This is exactly it. Everyone who seems to think that docker is about security or something else has no clue what they're talking about.
A Docker container is a unit of deployment, no more and no less. Everyone who has used Docker can think of ways that it could be done better, and when something better comes along, Docker will adapt or decline. Unkind people might call that "a fad", but most technology has a churn rate.
No (Score:5, Interesting)
Re: No (Score:2)
That sounds very much along my thoughts. Some workloads are supremely better at being containerized (such as things you would otherwise consider using a chroot jail for), and some aren't. But Docker allows you to easily replicate the same kind of environment for local development as production, which avoids the common "it works on my machine" problems even Vagrant VMs can have.
And then there are CaaS offerings like Amazon's Lambda/API Gateway and CodeBuild services that use containers even though you're not
Comment removed (Score:5, Informative)
Re: (Score:2)
If you're deploying multiple applications, or deploying apps with convoluted dependencies,
"Convoluted dependencies" sounds like a problem you did to yourself. Use cases where you need convoluted dependencies are rare, and docker is just a bandaid.
Re: As permanent as bitcoin (Score:2)
Docker is command line based, even on Windows.
It is practically mandatory for modern development (Score:3, Interesting)
I'm looking for a job. I've been on maybe 7-8 interviews and if the company doesn't currently use docker, it's on their roadmap. I've only used it a little, not for a full tooling, and it's always frowned upon when I mention this. Every interview I go to either has, or is working towards, continuous deployment and the fact that I don't have this is heavily frowned upon too. I don't know what the next fad will be, but this one is pretty strong at this point.
Re:It is practically mandatory for modern developm (Score:5, Funny)
Every interview I go to either has, or is working towards, continuous deployment and the fact that I don't have this is heavily frowned upon too.
In the interview, just say how much you like it, how great you think it is.
Re:It is practically mandatory for modern developm (Score:4, Insightful)
Proper interview skill: "Fake it till you make it"
That being said, the OP's actual point lines up: "Docker" the brand may be a fad... "Docker" the concept, a la "Kleenex" or "Band-Aid", is a really damn good one and not likely to go away anytime soon. If anything, there's work to improve upon the idea, not get rid of it.
Containers are easy to manage and reproduce. They give you process isolation that is invaluable when deployed on systems shared with other services you don't trust. Bring in the CI/CD you mentioned: deploys are reliable enough with containers that CI/CD is even practical, especially since rollbacks are more consistent. The industry has learned that "more deployments" is generally better... true CD all the way to prod isn't necessarily feasible (or even desirable sometimes), but the smoother/more frictionless/effortless you can make your deploys, the better your life will be, and once they are truly effortless, flipping the switch to make them automatic is more of a political step than a technical one.
I'll state a preamble about the current state of the world: it still has software that runs directly on client machines. That software doesn't need Docker so much, although much like the JVM allows you to run Java nearly anywhere without changing the code, Docker allows this at a higher, less language-dependent level. You can also do things like run Linux containers on a Windows host for other cross-platform perks. Still, let's call this the low bar.
The world also still has people/companies who run their own DCs and have their own hardware. There are definitely applications, especially in research, where you might even need to have your software running directly on that bare metal, but for the most part, if you are a commercial firm and you're not managing your hardware pool with at least one layer of virtualization, you are at the least making your life harder, at worst wasting resources. This may or may not extend to a containerization landscape, but it's a pretty good idea for best packing.
The real clarity occurs in the cloud. You don't own the bare metal. You can "own" your own VM, but most clouds make that expensive, and they are still not as friendly as they should be to deploy, especially if you have a lot of post-launch setup to do. These days you are really heading in one of two directions: a fleet of containers OR serverless (which, surprise surprise, is running on containers behind the scenes). Any cloud-focused ecosystem is made possible by containers, whether by direct usage or by one of the many services you are interacting with, so... get to know them.
Re: (Score:2)
You've noticed a trend, you've noticed the skills that are in demand. Spend a weekend doing it on a small project, learn a bunch, and then when you go to your next interview the line "My current job doesn't use it, but I've implemented it in a personal project so I'm familiar with the subject" will go over WAY better than "no".
Perhaps (Score:2)
Love the concept.. (Score:3)
Docker's a mixed bag - we've had a lot of problems with its network stack when using Swarm, to the point that I would not recommend it to anyone for production use.
I need to try Kubernetes-ing (gerund?) these things and see if life is smoother.
No, it's not a fad (Score:2)
No, it's not a fad.
It's an efficient and lightweight way to get stuff working while maintaining isolation of the various pieces. This makes managing upgrades and changes in the various pieces simpler, because they are isolated.
Having apache and mysql in different docker containers, for example, has some real niceties.
Or if you want to run more than one LAMP-stack application on the same physical PC, containers can make that easy, because you don't have to worry about interactions and conflicts between the two
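A sketch of two such stacks coexisting (names, ports, and paths are illustrative): both databases listen on 3306 inside their own networks, and only the published web ports need coordinating.

docker network create shopA && docker network create shopB
docker run -d --name shopA-db --network shopA -e MYSQL_ROOT_PASSWORD=secret mariadb:10
docker run -d --name shopA-web --network shopA -p 8081:80 -v /srv/shopA:/var/www/html php:8-apache
docker run -d --name shopB-db --network shopB -e MYSQL_ROOT_PASSWORD=secret mariadb:10
docker run -d --name shopB-web --network shopB -p 8082:80 -v /srv/shopB:/var/www/html php:8-apache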
Re: (Score:2)
It is actually heavy-weight, even if that is hidden. Because now, instead of administrating one system, you have to administrate one system and n containers. Sure, deployment gets easier, but everything else gets harder. And container security with Docker just sucks.
Re: (Score:2)
Docker really isn't that heavy compared to bare metal. And it's lightweight compared to virtualization.
And I'm not sure what you think is 'harder'. I find administrating containers to be a lot simpler because of the isolation. I only have to concern myself with the one service in the container. I don't have to worry about dependency conflicts between different services, I don't have to worry about much of anything.
I can upgrade services, restore them, swap them around, migrate them to other physical hosts...
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:3)
Uhm, yeah, sure.
Re: No, it's been a god-send for developers (Score:2)
Re: (Score:2)
Indeed.
Re: (Score:2)
We used numpy in our project, which required a bunch of dependencies which cannot be installed via pip... So you have to install those on your host manually.
Pip will install numpy. In those cases where you have dependencies that cannot be installed by pip (which of course happens), it's easiest to just commit them to your git repository. That's what Google does.
Re: (Score:2)
Re: (Score:2)
It is a fad (Score:5, Funny)
Docker is not, its derivatives are (Score:2)
Docker, and especially Swarm, is a great system; it has basically fixed a lot of issues and combined ideas from LXC, BSD jails, and Solaris containers. The problem is that everyone then continues building a stack on a stack, which is what Kubernetes and co. are.
As long as you know what it's for and follow some basic guidelines, it works well. The problem is that you now have a hammer and everything becomes a nail.
More work, but awesome benefits! (Score:5, Informative)
PHP and Rails style development patterns came before Docker was popularized. So they don't really fit naturally into that paradigm. Drupal expects you to copy and modify auto-generated templates, and have something like NFS shared storage for HA setups. A more modern app would probably use an S3 compatible object store, and (in all honesty) be written in Go.
You know what's great about docker?
* Immutable artifacts. If you build your container correctly, it will be the exact same package on your laptop, staging and production. This helps eliminate the "well it works on my machine" problem. (See the sketch after this list.)
* Reproducible builds.
* Bundled dependencies. Give your app exactly what it needs. Upgrade libraries without needing to upgrade all of your sites at once on a shared VM.
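A sketch of the "immutable artifact" habit, with an illustrative registry name: tag the image by commit, push once, and promote the very same bytes everywhere by digest.

TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/app:"$TAG" .
docker push registry.example.com/app:"$TAG"
# RepoDigests is populated after the push; deploy staging and prod
# from the printed sha256 digest, never from a mutable tag.
docker inspect --format '{{index .RepoDigests 0}}' registry.example.com/app:"$TAG"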
https://jamstack.org/ [jamstack.org]
JAMstack alternative to classic PHP CMS:
https://www.netlifycms.org/ [netlifycms.org]
Templating config files inside a Docker container.
https://github.com/kelseyhight... [github.com]
Supports pongo2 "mustache" templating, which may be more familiar for Drupal, Django, etc users.
https://github.com/HeavyHorst/... [github.com]
Now I've learned something today (Score:3)
You outsource system administration (Score:5, Interesting)
Basically, the containers are not administrated by you. If you are a competent sysadmin, that is a disadvantage, potentially a huge one. If you are a typical modern developer that knows nothing about system administration, this can seem like an advantage though. It is possibly why this non-idea is so successful.
Re: (Score:2)
What you said applies to Linux in general. Docker provides the same level of administration as a Linux system. You can even open up a shell on them and modify to your heart's content. What you get in both cases is a curated and pre-prepared application stack. The difference is you can spin up two Docker containers with different versions of libraries, and even different versions of software, without hosing your entire system or firing up a VM to do it.
The fact you think containers are not administered by you ju
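That side-by-side point is a one-liner to verify:

# Two interpreter versions coexisting without touching the host install.
docker run --rm python:3.8-slim  python -c 'import sys; print(sys.version)'
docker run --rm python:3.12-slim python -c 'import sys; print(sys.version)'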
Re: (Score:3)
You did not get my point. Which, incidentally, shows you have no clue about system administration. Place two containers on your system and suddenly you have to administrate three (!) different (!) systems. Have everything running on the base system and you administrate and maintain one system.
it's another tool is all (Score:2)
Re:it's another tool is all (Score:5, Insightful)
A lot of things are just used in the wrong place at the wrong time. The "fad" is in the misuse, not necessarily the mere existence. As they say, use the right tool for the job.
Too many developers and architects use the workplace as a "resume lab" to pile up buzzwords. Unless your org is intentionally into R&D, limit your experiments to a mild level.
Re: (Score:2)
Not sure it's a fad (Score:4, Funny)
Business casual has been going on for a while now, even at relatively conservative organizations, but I've never heard of employees being required to wear Dockers. Usually any sort of khaki pant is acceptable.
Maybe, maybe not (Score:2)
It's about security (Score:2)
Virtualization, and now Docker containers, are about providing the security that Linux, Windows, etc. fail to provide. They are a hack to provide the security capabilities that should have been a part of operating systems since the 1960s. Multics almost got there, and the Vietnam air war provided the motivation to figure out how to achieve multi-level secure systems... which have been done. However, the rise of the minicomputer, unix, and finally the desktop machine had us prematurely optimize on
Not a fad, but for security it is just bad. (Score:2)
Where it really fails is with security. Sysadmins, or whatever their title, download or build a container, and then it is just ignored until there is an update for the main application. So you have these containers with old versions of software and a lack of patching.
Re: (Score:2)
a few weeks (Score:4, Interesting)
A few weeks into using Docker, when we were struggling with it: shame on us. A year into it and we're still struggling: shame on Docker.
There were problems with file systems, with remembering what command was run to launch the image, with the container running out of disk space, with different containers requiring different kernels, with figuring out why the container had died.
Overall, moving to Ansible to build everything from scratch was a great step forward, and I wish the detour into Docker had never happened.
Docker helps a lot in development (Score:5, Interesting)
But that's not the sole reason why I write what I write now (if you believe my word, which you shouldn't, so judge on your own).
Ever since I was shown how BSD jails and chroot work, I have used them for personal development (that's been 20 years now). Then came the VM ecosystem, and even at work I could get the same benefits (more on that below) as those personal setups. Then came LXC and subsequently Docker... in my view it's the same thing, with the addition of the Docker hub/repo and, most importantly, versioning, which is a huge differentiator.
What are those benefits?
And plenty more, at the one-time cost of creating a Docker environment. That's a one-time cost across your entire dev team, not just you.
And these are all just from the dev point of view. I'm not going into the efficiency of bare metal vs. VMs vs. containers, etc., and production is a different beast.
And as an added bonus, you can integrate your tests easily with any CI/CD: just docker pull and run the tests on any worker machine. Even dev environments can be sent to build farms in a jiffy (my startup works in this space), though from the author's description it doesn't seem like build and test are too costly for him. But still.
Ease of use (Score:2)
Dockerised systems provide access to people with little or no system architecture knowledge. So obviously there is a "need" for such solutions in large segments of the market. I will not get into details of the security and efficiency issues, not with the assumption that they would not be understood, but with the assumption that they are already o
Nothing is for ever... (Score:2)
Containers will be here for a while now.
Docker is by far and away the most popular container tech, but there are others; and there's certainly room for improvement.
You see very many examples of poor use of containers, e.g. treating a container as a kind of lightweight virtual machine. But this is a kind of category error. A container should (usually) be a lightweight environment for a single process. Orchestration of the interactions between processes and the outside world should be done with other tools - e.g.
Wow. In which universe does this article exist? (Score:2)
In short, the article suggests that when you have an environment capable of running multiple services seamlessly, you do not need to dockerize the stack and deploy your service there. This is obviously true. It is so obvious that you really do not need an article about it. Docker is for making (micro-)services and letting them run together, being able to scale them (though that needs other software too), and being able to move them quickly from one Docker stack to another.
If you have a Wordpress,
There is a xkcd for it (Score:4, Insightful)
https://xkcd.com/2044/ [xkcd.com]
Anyway, Docker is a great thing when you use it for services which need more complex stacks and integrate across multiple technologies aka languages, frameworks etc.
Depends on your scale (Score:2)
A lot of these 'modern' DevOps things might seem like overkill, but that only depends on your scale of operations.
If you only have to worry about one server, why would you invest in config management, containers, CI/CD, etc.?
The benefits of these things become clearer as your environment grows.
Serverless is the New Black (Score:2)
As many have noted, Docker doesn't fit in everywhere. But if your existing application needs dynamic scaling, or you need to deliver pre-built containers to your customers, it can be a big help.
That said, anyone doing new development might want to skip over the Docker generation. Why wrestle with containers and dynamic scaling when you can drop your functions into AWS Lambda, Azure Functions, or any of the other serverless offerings? These things auto-scale for you. Serverless is the New Black.
Granted, s
No, but yes (Score:2)
In the sense that it's a "passing fad", no. Most technologies at this level don't go away, they just lower in popularity as they begin to be used only for their best purposes as bigger problems get solved by even better solutions -- so the stretching that they do up-front goes away.
But like most of these cure-all technical designs, it came about to solve a very specific problem, almost always a very small problem, within a company that had the small problem in huge proportions. Think google's power train,
Layers of abstraction (Score:3)
Docker is a layer of abstraction to make it easier for the end-user to use the exposed functionality. That's it. It will last until someone makes another layer of abstraction or when the industry as a whole decides to move away from containers to something else entirely.
Difficult, expensive and necessary (Score:2)
I started working with docker containers about 5 years ago and have followed the trends of docker compose and then kubernetes. As someone who previously spun up either bare metal machines or raw VMs, installed everything needed, and tended to vertically scale machines to match demand-- working with containers, microservices and horizontal scaling has taken a lot of time and effort to learn and be able to "do right".
I would never go back to Chef/Puppet/Vanguard installations at this point. I feel comfortab
Docker isn't magic. Containers aren't a fad (Score:2)
1) A set of linux namespaces (a pid namespace, a mount namespace, etc)
2) A set of cgroups
3) a chroot to a virtual file system
It's not a virtualization technology, so it doesn't have run-time overhead. If your OS is Linux, the "dockerized" application is a native process that you can see when you run ps. You can do the exact same thing that Docker does if you know a few system calls to create the namespace, cgroup
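A sketch of "the exact same thing" from a shell instead of raw syscalls (run as root; the rootfs path is illustrative):

# New PID, mount, UTS, and network namespaces around a plain chroot
# (a real setup would also mount /proc inside the new root).
unshare --fork --pid --mount --uts --net chroot /srv/rootfs /bin/sh
# And a cgroup-v2 CPU cap for whatever you place in it (half a core here).
mkdir /sys/fs/cgroup/demo
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max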
Great tool but... (Score:4, Insightful)
Docker is a powerful tool, but it's basically the infrastructure equivalent of Visual Basic. It enables people who don't know jack about infrastructure, and the landmines contained therein, to slap together some half-baked thing and think they're god's gift to IT.
The biggest benefit for developers is that they can package their stuff up into a nice little bundle without having a sysadmin breathing down their neck about how they did this or that wrong.
The biggest drawback is that developers can package their stuff up into a nice little bundle without having a sysadmin breathing down their neck about how they did this or that wrong, which basically guarantees that they *will* do it wrong, and open up all sorts of security holes in the process. (There is a reason why sysadmins become curmudgeonly. We have to slap down know-it-all developers on a very regular basis for doing things like disabling firewalls cause that extra security is too inconvenient to their development process, or setting database admin passwords to 'password'. I wish I was joking.)
Re: (Score:2)
"We have to slap down know-it-all developers on a very regular basis for doing things like disabling firewalls cause that extra security is too inconvenient to their development process, or setting database admin passwords to 'password'. I wish I was joking.)"
Heathen! Bow before the mighty Full Stack Developer and repent. :-)
Seriously, even reasoned arguments go out the window when developers are involved these days. They could be 3 days out of coder bootcamp and most companies think they're gods. It sucks
Docker is cheaper AWS rent (Score:2)
Hey, so I'm an embedded guy and have never needed to be exposed to Docker. Help me out. Having read a bit about it, the idea behind dockerizing is that instead of setting up a VM and going out of your way to get SW running, you're making software builds go out of their way to run a little nicer on the sorta-VM that is Docker, which runs on whatever hardware. It runs a little faster and with less space than a full VM, so it's cheaper to run on a rented server farm. It's a different build target, just like how you could h
Re: (Score:2)
If used properly it can mean severely cheaper AWS rent, but no, that's not the primary benefit. (From a purely AWS standpoint, we recently converted an EC2 deployment to a container running in ECS, and our spend went from a grand a month to some low number of dollars with no loss of performance, so there is that.)
To your points: the "tiny" bit faster (basically the difference of the VM process overhead) is only meaningful for the most processor-crushing applications (and then still only tiny percentages)
"Educated Slashdot opinions requested." (Score:2)
Definitely came to the wrong place...
Monocultures need not apply (Score:2)
If you've got a simple website that lives on one server and isn't using anything that isn't in your "monoculture" of LAMP, then yeah, Docker doesn't bring a lot of value to the table.
But if you've got 2-3 load-balanced PHP front-end servers in front of a half dozen Java Workflow Engine instances that run against data pumped out by a cluster of 8 Python data-science workers, and they're all exchanging data through a RabbitMQ messaging layer and storing data in Redis backed by a MySQL data warehouse, nex
Re: (Score:2)
Maybe it just needs to be a proper kernel feature.
Re: (Score:2)
Isn't that what LXC is, which is what Docker uses under the hood on Linux to provide the basic containerization? On other platforms Docker uses heavier "containers" using full virtual machines.
Re: (Score:3)
It increases complexity, just like virtualization. That is always bad for security, even if often not readily obvious.
Re: (Score:2)
If I could be so bold, I believe the next step (after serverless) will involve conquering data flow using micro-schemas.
That's tough. People have been trying to solve that problem for decades.
The main problem is that if you are not writing something custom, then you don't really have a reason to write software. So you're never going to be able to come up with a solution that works everywhere.
Re: (Score:2)
Waiting for IO.