
Ask Slashdot: Is Dockerization a Fad?

Long-time Slashdot reader Qbertino is your typical Linux/Apache/MySQL/PHP (LAMP) developer, and writes that "in recent years Docker has been the hottest thing since sliced bread." You are expected to "dockerize" your setups and be able to launch a whole string of processes to boot up various containers with databases and your primary PHP monolith with the launch of a single script. All fine and dandy thus far.

However, I can't shake the notion that much of this -- especially in the context of LAMP -- seems overkill. If Apache, MariaDB/MySQL and PHP are running, getting your project or multiple projects to run is trivial. The benefits of having Docker seem negligible, especially having each project lug its own setup along. Yes, you can have your entire compiler and Continuous Integration stack with SASS, Gulp, Babel, Webpack and whatnot in one neat bundle, but that doesn't seem to diminish the usual problems with the recent bloat in frontend tooling, to the contrary....

But shouldn't tooling be standardised anyway? And shouldn't Docker then just be an option for those who couldn't be bothered to have (L)AMP on their bare metal? I'm still skeptical of this Dockerization fad. I get that it makes sense if you need to scale microservices easily and quickly in production, but for 'traditional' development and traditional setups, it just doesn't seem to fit all that well.

What are your experiences with using Docker in a development environment? Is Dockerization a fad or something really useful? And should I put in the effort to make Docker a standard for my development and deployment setups?

The original submission ends with "Educated Slashdot opinions requested." So leave your best answers in the comments.

  • by alphad0g ( 1172971 ) on Sunday June 02, 2019 @07:30PM (#58697284)

    With Docker you can scale out, and you can take your Docker containers (usually) to other sites much more easily than you can transport that LAMP stack running on an OS.

    If you don't need these things then docker is probably overkill.

    • by Luthair ( 847766 )

      That isn't really true; Puppet, Chef, etc. all did basically the same thing for applications/environments. It's more about isolation, which allows people to share resources more effectively.

      Scalability is still entirely the obligation of the application developer.

      • by postbigbang ( 761081 ) on Sunday June 02, 2019 @07:44PM (#58697324)

        Docker is a new hammer, and everything is a nail. Running unprivileged workloads has a security benefit, save for the bugs found so far, and the fact that Docker runs as root.

        Docker is seductively simple, and such things always get misused. This said, fast orchestration and teardown for compliance's sake is pretty simple -- if you pull clean workloads and keep them clean.

        Learn Libvirt, or do it manually with cgroups, lxc, and various secret file system and networking sauces. The diversity in choices is why docker is successful, as almost everyone can do docker and so it has a gravitational pull when you're fishing for fast stuff for the scrum master, so he won't chain you to the whipping post.

    • by Anonymous Coward

      I was initially skeptical of docker when I first was told about it for many of the same reasons you outline. But I have become a believer. The portability and repeatability are well worth it. We recently took a series of micro services originally deployed as docker containers across 1800+ sites and moved it to a central k8s cluster in less than four weeks, changing the code a bit for performance reasons. Prior to containers this feat would've been more daunting. I would've had to write

    • by jrumney ( 197329 ) on Sunday June 02, 2019 @10:22PM (#58697942)
      The intention of Docker or other containerized solutions is not that you start with a server with a LAMP stack preinstalled, it is that you start with a cloud, any cloud with a bare minimum base system on any VM you spin up. The rest of getting your full system deployed is described in the container config and initialization script files, so whether you are switching to another cloud provider, self-hosting, starting a test server on your local machine before going live with the new version, or expanding capacity by spinning up new instances, you have one command to get everything up and running, and it gives you a consistent full stack every time, so no surprises because cloud provider A installs an older version of MySQL than cloud provider G, but all your location data is encoded with spatial SQL that is only supported in the new version.
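
      A minimal docker-compose sketch of that "one command" idea (image tags, service names and paths here are illustrative, not taken from the poster's setup):

      ```yaml
      # docker-compose.yml -- hypothetical LAMP-ish stack, pinned so every host gets the same versions
      services:
        db:
          image: mysql:8.0              # same MySQL everywhere, regardless of provider defaults
          environment:
            MYSQL_ROOT_PASSWORD: example
          volumes:
            - db-data:/var/lib/mysql    # data survives container rebuilds
        web:
          image: php:8.2-apache         # Apache + PHP baked into one image
          depends_on:
            - db
          ports:
            - "8080:80"
          volumes:
            - ./src:/var/www/html
      volumes:
        db-data:
      ```

      `docker compose up -d` is then the single command that brings the whole stack up, on a laptop or on any cloud VM with Docker installed.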
      • Ok, but that's super easy to get working without Docker. Linux is Linux, whether it's running on a dead simple platform like Linode, or huge, complicated mass of services and features like AWS.

        Launching a server into any of these environments is a matter of selecting a base image and running a single command.

        After the server is launched, it's all Linux. The scripts to provision AWS or Google Cloud or Azure or Linode are the same.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      This is the idea behind it, but it's built on sand.

      You're basically asking to move around virtual machines, and the end result is that you end up with a lot more code rot than you would had you just installed the OS fresh and then kept a separate "disk image" for your software and a "disk image" for your data. If you manage to screw up a transplant, you can rollback to earlier images, and you can maintain them offline.

      Docker doesn't do that. Docker basically spins up an "environment" image, and you work out

      • by kriston ( 7886 )

        Those good reasons are likely why a certain large social media platform is not interested in virtualizing or containerizing its massive production environment.

    • by Bert64 ( 520050 )

      No but you could tar up your system, copy it across and chroot() into the resulting location long before docker even existed.
      Or you could install into a specific --prefix, and tar that up instead.

      • by kriston ( 7886 )

        Resource limiting (cgroups, etc.) was only introduced into the Linux kernel in 2007, and a similar concept for FreeBSD came about in 2000. Traditional chroot is risky without limiting compute and memory usage.

        Then there's the networking stack for each container that allows conflict-free access to all ports on a network interface.

        Then there's the implementation of overlay filesystems in containers to reduce redundancy while maintaining the ability to tailor each container to its needs, and an image reposito
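
        As a rough sketch of the first two points (resource limits and per-container networking), using plain docker run flags with arbitrary values:

        ```sh
        # Memory/CPU caps are enforced through cgroups; the container also gets its own
        # network namespace, so nginx binds "its" port 80 without conflicting with anything
        # else on the host -- only the published port 8080 is visible outside.
        docker run -d --name web \
          --memory=256m --cpus=0.5 \
          -p 8080:80 \
          nginx:1.25
        ```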

  • by tdelaney ( 458893 ) on Sunday June 02, 2019 @07:33PM (#58697288)

    I've personally only used docker a little bit, but where I have found it useful is on my QNAP. The QNAP OS doesn't have the full range of tools available, and not everything is available via entware. Docker is able to fill the gaps.

    For example, I have a tool I've written that is several times faster using pypy than python (the BeautifulSoup library is so much faster using pypy). Unfortunately, pypy isn't available in entware, and I've failed in getting it compiled for my QNAP. However, there's a docker container with pypy that I was able to install into Container Station, and I'm able to run it using that.

  • If you don't know how to write an automated deploy script, then it's helpful. A lot of people don't know how to do that, for some reason, and that's not going to change.

    There are some other advantages, for example, it makes it easier to transfer between AWS and Google cloud, but that's mostly not why people use it.

    Docker will lose popularity if someone figures out an easier way to deploy.
  • by malkavian ( 9512 ) on Sunday June 02, 2019 @07:51PM (#58697356)

    That, to me, is what containerisation is all about.
    Having a database engine that's effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security.
    Then there's the ability to have either ephemeral or persistent containers (or any combination of those) on a single virtual.
    And swarm clustering for high availability (or even powering up more nodes on underused hardware when one of your apps needs extra grunt, and scaling back when it doesn't).
    I think there's a lot to be said for it, personally. But it's an option I'd only bring in on very mature infrastructures. There's absolutely no point in bringing in containers if you're not capable of maintaining the hardware that comprises your infrastructure. Or if the virtuals that sit on your hardware aren't second nature and backing up and recovering them isn't child's play.
    Way too many people think it's simple to run 'disposable' containers, and they will definitely work great, until you discover that something really isn't right.
    That's when you absolutely have to have a full, and complete understanding of everything you've done, and how it _really_ works.

    As long as you have the mature infrastructure and skill base, then containers are damnably useful. But they're definitely 'cherry on the cake'.

    • by Dan East ( 318230 ) on Sunday June 02, 2019 @09:01PM (#58697672) Journal

      Having a database engine that's effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security

      How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there. At the very least they could simply modify the existing code that accesses the DB to do other things.

      Now, in addition to the web server being an attack vector, the DB itself is also an attack vector that can potentially be accessed directly, since it resides elsewhere and allows (at least some) remote connections.

      I just don't see how that makes anything more secure. More scalable, yes (along with worse latency though).

      • by gweihir ( 88907 )

        Indeed. It does _not_ increase security. It can make security someone else's responsibility (great for the standard developer that has no clue about security), but it always is _your_ problem when it fails.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there.

        You don't get full access to the DB, you get the same access to the DB that was granted to the web server.

        The web server is going to be restricted to only read and maybe write to one context in the DB.
        Other processes elsewhere (not in the webserver) will have access to that data plus other data to make additional links to other software, but none of that will be accessible to the webserver.

        At the very least they could simply modify the existing code that accesses the DB to do other things.

        OK, so you can do what the webserver can do. You can read a table to get some data, and that's it.
        How does this aid yo

      • How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there

        php-fpm is running in a separate container and communicating over a Unix socket in a shared file system. nginx doesn't have read access to config.php, but php-fpm does. nginx doesn't have access to the database socket, but php-fpm does. You have to crack open php-fpm to gain access to the database.

        At the very least they could simply modify the existing code that accesses the DB to do other things.

        The only mount points shared between the two are mounted read-only for nginx, so you can't just add a new php script.

        the DB itself is also an attack vector that can potentially be accessed directly, since it resides elsewhere and allows (at least some) remote connections.

        Only accessible from php via a socket file. Everything's often on one system.

        When on an
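
        A rough compose sketch of that nginx/php-fpm split (service names, paths and the socket volume are assumptions; nginx and php-fpm would still need to be configured to talk over the shared socket):

        ```yaml
        services:
          php:
            image: php:8.2-fpm
            volumes:
              - ./app:/var/www/html            # php-fpm can read config.php
              - phpsock:/var/run/php           # shared directory holding the Unix socket
          web:
            image: nginx:1.25
            volumes:
              - ./app/public:/var/www/html/public:ro   # read-only for nginx
              - phpsock:/var/run/php
            ports:
              - "8080:80"
        volumes:
          phpsock:
        ```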

    • by gweihir ( 88907 )

      You pay for that with potential security problems within each container and in the containerization layer. It actually decreases security in general. It may increase the skills an attacker needs slightly. What you want instead, if you do security, is something like SELinux or AppArmor, configured restrictively, and, of course, good use of the standard UNIX isolation model. But that needs real skill, hence this ElCheapo approach and the fantasy that this is good for security.

  • by technomom ( 444378 ) on Sunday June 02, 2019 @07:53PM (#58697364)
    I mean technically, you don't need jar or war files to deploy a Java project either, but I wouldn't describe those as "fads". They are just a convenient way to pull together a complete package in one file. Docker is sorta like that: it takes the entire system deployment (the OS library level, the runtimes that you need, and hooks for configuration for different systems) and makes them into a convenient image that can be versioned and served from repositories. You could say the same thing about VMWare files and Vagrant before that. The advantage is that Docker is smaller and a bit easier to build. It's easy enough that it's become somewhat of a standard for Kubernetes, AWS Fargate, and lots of other platforms. Will something come along to take its place? Maybe? Probably? We're already seeing the next level of deployment artifact in Helm charts for Kubernetes, though they tend to include Docker.
    • This is exactly it. Everyone who seems to think that docker is about security or something else has no clue what they're talking about.

      A Docker container is a unit of deployment, no more and no less. Everyone who has used Docker can think of ways that it could be done better, and when something better comes along, Docker will adapt or decline. Unkind people might call that "a fad", but most technology has a churn rate.

  • No (Score:5, Interesting)

    by jemmyw ( 624065 ) on Sunday June 02, 2019 @07:54PM (#58697366)
    Containers are not a fad. They've been around longer than docker. Docker itself might not be the long term solution, but containers will remain and evolve. Docker is just some tooling that made containers easier, and I think there was pent up demand for being able to create an immutable image for a single exe.
    • by etrnl ( 65328 )

      That sounds very much along my thoughts. Some workloads are supremely better at being containerized (such as things you would otherwise consider using a chroot jail for), and some aren't. But Docker allows you to easily replicate the same kind of environment for local development as production, which avoids the common "it works on my machine" problems even Vagrant VMs can have.

      And then there are CaaS offerings like Amazon's Lambda/API Gateway and CodeBuild services that use containers even though you're not

  • by Anonymous Coward on Sunday June 02, 2019 @07:57PM (#58697388)

    I'm looking for a job. I've been on maybe 7-8 interviews and if the company doesn't currently use docker, it's on their roadmap. I've only used it a little, not for a full tooling, and it's always frowned upon when I mention this. Every interview I go to either has, or is working towards, continuous deployment and the fact that I don't have this is heavily frowned upon too. I don't know what the next fad will be, but this one is pretty strong at this point.

    • Every interview I go to either has, or is working towards, continuous deployment and the fact that I don't have this is heavily frowned upon too.

      In the interview, just say how much you like it, how great you think it is.

      • by Matheus ( 586080 ) on Monday June 03, 2019 @01:34PM (#58701560) Homepage

        Proper interview skill: "Fake it till you make it"

        That being said the OP's actual point lines up: "Docker" the brand may be a fad.. "Docker" the concept ala "Kleenex" or "Band-Aid" is a really damn good one and not likely to go away anytime soon. If anything there's work to improve upon the idea not get rid of it.

        Containers are easy to manage and reproduce. They give you process isolation that is invaluable when deployed on systems shared with other services you don't trust. Bring in the CI/CD you mentioned: Deploys are reliable enough with containers that CI/CD is even practical especially since rollbacks are more consistent. The industry has learned that "more deployments' is generally better.. true CD all the way to prod isn't necessarily feasible (or even desirable sometimes) but the smoother/frictionless/effortless you can make your deploys the better your life will be and once they are truly effortless then flipping the switch to make them automatic is more of a political step than a technical one.

        I'll state a preamble on the current state of the world right now: The world still has software that runs directly on client machines. These don't need Docker so much, although much like the JVM allows you to run Java nearly anywhere without changing the code, Docker allows this in a higher-level, less language-dependent way. You can also do things like run Linux containers on a Windows host for other cross-platform perks. Still, let's call this the low bar.

        The world also still has people/companies who run their own DCs and have their own hardware. There are definitely applications, especially in research, where you might even need to have your software running directly on that bare metal, but for the most part, if you are a commercial firm and you're not managing your hardware pool with at least one layer of virtualization, you are at best making your life harder, at worst wasting resources. This may or may not extend to a containerization landscape, but it's a pretty good idea for best packing.

        The real clarity occurs in the cloud. You don't own the bare metal. You can "own" your own VM, but most clouds make that expensive, and they are still not as friendly as they should be to deploy, especially if you have a lot of post-launch setup to do. These days you are really heading in one of two directions: a fleet of containers OR serverless (which, surprise surprise, is running on containers behind the scenes). Any cloud-focused ecosystem is made possible by containers, whether by direct usage or by one of the many services you are interacting with, so... get to know them.

    • You've noticed a trend, you've noticed the skills that are in demand. Spend a weekend to do it on a small project, learn a bunch, and then when you go to your next interview the line "My current job doesn't use it, but I've implemented in a personal project so that I'm familiar with the subject" will go over WAY better than "no".

  • It is, of course, not the silver bullet some are trying to sell. But, then again, this happens often in the industry - witness things like object-oriented design or Agile.
  • by steveb3210 ( 962811 ) on Sunday June 02, 2019 @08:02PM (#58697410)

    Docker's a mixed bag - we've had a lot of problems with their network stack using swarm, to the point that I would not recommend it to anyone for production use.

    I need to try Kubernetes-ing (gerund?) these things and see if life is smoother.

  • No, it's not a fad.

    It's an efficient and lightweight way to get stuff working, while maintaining isolation of the various pieces. This makes managing upgrades and changes in the various pieces simpler, because they are isolated.

    Having apache and mysql in different docker containers, for example, has some real niceties.

    Or if you want to run more than one lamp stack application on the same physical PC; containers can make that easy. Because you don't have to worry about interactions and conflicts between the two
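
    For instance, two PHP apps that want different PHP versions can sit side by side on one box (image tags, names and ports here are arbitrary, just to show the shape of it):

    ```sh
    # No shared interpreter, no conflicting Apache modules; each app gets its own host port.
    docker run -d --name legacy-app -p 8081:80 -v "$PWD/legacy:/var/www/html" php:7.4-apache
    docker run -d --name new-app    -p 8082:80 -v "$PWD/new:/var/www/html"    php:8.2-apache
    ```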

    • by gweihir ( 88907 )

      It is actually heavy-weight, even if that is hidden. Because now, instead of administrating one system, you have to administrate one system and n containers. Sure, deployment gets easier, but everything else gets harder. And container security with Docker just sucks.

      • by vux984 ( 928602 )

        Docker really isn't that heavy compared to bare metal. And it's lightweight compared to virtualization.

        And I'm not sure what you think is 'harder'. I find administrating containers to be a lot simpler because of the isolation. I only have to concern myself with the one service in the container. I don't have to worry about dependency conflicts between different services, I don't have to worry about much of anything.

        I can upgrade services, restore them, swap them around, migrate them to other physical hosts...

  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • Note that there is no reason it can't be that simple without Docker. If the installation problem is complex, the problem is the team, not the lack of Docker.
    • "DevOps can customize docker images as needed, create a Docker Compose configuration and hand it back to the developers." If you have a department called "DevOps" you're doing it wrong.
  • It is a fad (Score:5, Funny)

    by LynnwoodRooster ( 966895 ) on Sunday June 02, 2019 @08:16PM (#58697484) Journal
    Dockers - in fact, chinos in general - went out in the mid 2000s. It's all jeans now, ideally non-denim based jeans, in black or a bright color (light brown, red, green, etc). Give it a while, bellbottoms are coming back in now. Dockers will be back in-style sometime around 2032-2035.
  • Docker and especially Swarm itself is a great system, it has basically fixed a lot of issues and combined ideas from LXC, BSD jails and Solaris containers. The problem is that everyone then continues building a stack-on-a-stack, which is what Kubernetes and co are.

    As long as you know what it's for and follow some basic guidelines, it works well. The problem is that you now have a hammer and everything becomes a nail.

  • by skinlayers ( 621258 ) on Sunday June 02, 2019 @09:01PM (#58697670)

    PHP and Rails style development patterns came before Docker was popularized. So they don't really fit naturally into that paradigm. Drupal expects you to copy and modify auto-generated templates, and have something like NFS shared storage for HA setups. A more modern app would probably use an S3 compatible object store, and (in all honesty) be written in Go.

    You know what's great about docker?

    * Immutable artifacts. If you build your container correctly, it will be the exact same package on your laptop, staging and production. This helps eliminate the "well it works on my machine" problem.

    * Reproducible builds.

    * Bundled dependencies. Give your app exactly what it needs. Upgrade libraries without needing to upgrade all of your sites at once on a shared VM.
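
    A minimal Dockerfile along those lines (the base tag, extension and paths are placeholders; pin whatever your app actually needs):

    ```dockerfile
    # Everything the app depends on is baked into one versioned, immutable image.
    FROM php:8.2-apache
    RUN docker-php-ext-install pdo_mysql    # bundle the exact extensions this site needs
    COPY src/ /var/www/html/
    # Build once, tag it, and run the same bytes on a laptop, staging and production:
    #   docker build -t myapp:1.4.2 .
    #   docker run -p 8080:80 myapp:1.4.2
    ```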

    https://jamstack.org/ [jamstack.org]
    JAMstack alternative to classic PHP CMS:
    https://www.netlifycms.org/ [netlifycms.org]

    Templating config files inside a Docker container.
    https://github.com/kelseyhight... [github.com]

    Supports pongo2 "mustache" templating, which may be more familiar for Drupal, Django, etc users.
    https://github.com/HeavyHorst/... [github.com]

  • by AndyKron ( 937105 ) on Sunday June 02, 2019 @09:03PM (#58697680)
    OK. This has nothing to do with pushing a laptop into a chunk of plastic to make it a real computer. Good. Now I've learned something today
  • by gweihir ( 88907 ) on Sunday June 02, 2019 @09:14PM (#58697712)

    Basically, the containers are not administrated by you. If you are a competent sysadmin, that is a disadvantage, potentially a huge one. If you are a typical modern developer that knows nothing about system administration, this can seem like an advantage though. It is possibly why this non-idea is so successful.

    • What you said applies to Linux in general. Docker provides the same level of administration as a Linux system. You can even open up a shell on them and modify to your heart's content. What you get in both cases is a curated and pre-prepared application stack. The difference is you can spin up two docker containers with different versions of libraries and even different versions of software without hosing your entire system or firing up a VM to do it.

      The fact you think containers are not administered by you ju

      • by gweihir ( 88907 )

        You did not get my point. Which, incidentally, shows you have no clue about system administration. Place two containers on your system and suddenly you have to administrate three (!) different (!) systems. Have everything running on the base system and you administrate and maintain one system.

  • It has its use cases. Everything does not need to be "dockerized". Some things make sense - like Minecraft. I'm not running each of my LAMP apps in its own Docker container, though. Just the wrong use case.
  • by CmdrPorno ( 115048 ) on Sunday June 02, 2019 @10:00PM (#58697892)

    Business casual has been going on for a while now, even at relatively conservative organizations, but I've never heard of employees being required to wear Dockers. Usually any sort of khaki pant is acceptable.

  • My experience with Docker is that it's a great concept, but it fails hard when you try to do basic stuff like networking of any even moderate complexity or within an SELinux environment basically at all. Kind of reminds me of node.js in the early days though so it might stick around.
  • Virtualization, and now Docker containers, are about fixing the lack of security that Linux, Windows, etc. fail to provide. They are a hack to provide the security capabilities that should have been a part of operating systems since the 1960s. Multics almost got there, and the Viet Nam air war provided the motivation to figure out how to achieve multi-level secure systems... which have been done. However, the rise of the minicomputer, unix, and finally the desktop machine had us prematurely optimize on

  • It is a great technology, with many reasons to use it. Microsoft has even gone into it for running under Windows.
    Where it really fails is with security. Sysadmins, or whatever their title, download or build a container, and then it is just ignored until there is an update for the main application. So you have these containers with old versions of software and a lack of patching.
  • a few weeks (Score:4, Interesting)

    by bugs2squash ( 1132591 ) on Monday June 03, 2019 @12:24AM (#58698244)

    A few weeks into using Docker and struggling with it: shame on us. A year into it and we're still struggling: shame on Docker.

    There were problems with file systems, with remembering what command was run to launch the image, with the container running out of disk space, with different containers requiring different kernels, with figuring out why the container had died.

    Overall, moving to ansible to build everything from scratch was a great step forward and I wish the detour into docker had never happened.

  • by bain_online ( 580036 ) on Monday June 03, 2019 @01:31AM (#58698390) Homepage Journal
    First and foremost, a disclaimer: I work for a startup that is making products using docker/containers, and our future is hugely tied to the fortunes of docker; make of that what you will.

    But that's not the sole reason why I write what I write now (if you believe my word, which you shouldn't, so judge on your own).

    Ever since I was shown how BSD jails and chroot work, I have used them for personal development (that's for the last 20 years now). Then came the VM ecosystem, and even at work I could get the same benefits (more on that below) as those personal setups. Then came LXC and subsequently docker... in my view it's the same, with the addition of the docker hub/repo and, most importantly, versioning, which is a huge differentiator.

    What are those benefits?

    • 1. Consistent environment: the OS needs maintenance, and a typical upgrade breaks my dev setups. I need a quick way to reproduce what went into dev on a newly patched system.
    • 2. Elimination of "works on my machine". Well, to the extent it's caused by changes in environment and not incompetence. Need I say more?
    • 3. I can go back to a released product and figure out the exact reasons the one big paying customer is having issues with it: replicating the build that was released to the customer and testing it in the exact environment released to the customer, even years backwards. Customers upgrading regularly is a myth, and we can't force them to upgrade just because one of the bugs is biting them.
    • 4. Elimination of dependency on IT for provisioning different environments. Need three setups to test three different JS / Python libraries? Need to make your code compatible with the latest Python? No need to wait for IT to provide a new machine/VM; just create new versions of the Dockerfile and run them on your dev machine simultaneously without interfering with your daily workflow (see the sketch at the end of this comment).
    • 5. Broken dev setup? Packages all screwed up? Just go back to a known working setup in a jiffy.
    • 6. Testing becomes much more reproducible (probably a duplication of a combination of the above), but QA loves it.

    And plenty more, at the one-time cost of creating a docker environment. That's a one-time cost across your entire dev team, not just you.

    And these are all just from the dev point of view. Not going into efficiency of baremetal vs vm vs containers etc. and production is a different beast.

    And as an added bonus, you can integrate your tests easily with any CI/CD: just docker pull and run the tests on any worker machine. Or even dev environments can be sent to build farms in a jiffy (my startup works in this space), but from the author's description it doesn't seem like build and test are too costly for him. But still.
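
    Benefit 4 above, for example, comes down to something like this (illustrative only; the stdlib test runner is used so the stock images work unmodified):

    ```sh
    # Run the same test suite against two Python versions at once on one dev box.
    docker run --rm -v "$PWD":/app -w /app python:3.11 python -m unittest discover &
    docker run --rm -v "$PWD":/app -w /app python:3.12 python -m unittest discover &
    wait
    ```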

  • I am an outlier (hopefully, or nostalgically, not on /.) in comparison to the general user base, having been a Unix system manager for the last 25 years. That said:
    Dockerised systems provide access to people with little or no system architecture knowledge whatsoever. So obviously there is a "need" for such solutions in large segments of the market. I will not get into details of the security and efficiency issues, not with the assumption that they would not be understood, but with the assumption that they are already o
  • Containers will be here for a while now.

    Docker is by far and away the most popular container tech, but there are others; and there's certainly room for improvement.

    You see very many examples of poor use of containers, e.g. treating a container as a kind of lightweight virtual machine. But this is a kind of category error. A container should (usually) be a lightweight environment for a single process. Orchestration of the interactions between the process and the outside world should be done with other tools - e.g.

  • In short, the article suggests that when you have an environment which is capable of running multiple services seamlessly, then you do not need to dockerize the stack and deploy your service in there. This is obviously true. It is so obvious that you really do not need to have an article about it. Docker is for making (micro-)services and letting them run together, being able to scale them (but that needs other software too) and being able to move them quickly from one docker stack to another docker stack.
    If you have a Wordpress,

  • by prefec2 ( 875483 ) on Monday June 03, 2019 @06:46AM (#58699138)

    https://xkcd.com/2044/ [xkcd.com]

    Anyway, Docker is a great thing when you use it for services which need more complex stacks and integrate across multiple technologies aka languages, frameworks etc.

  • A lot of these 'modern' devops things might seem like overkill, but that only depends on your scale of operations.
    If you only have to worry about one server, why would you invest in config management, containers, CI/CD, etc.?
    The benefits of these things become clearer as your environment grows.

  • As many have noted, Docker doesn't fit in everywhere. But if your existing application needs dynamic scaling, or you need to deliver pre-built containers to your customers, it can be a big help.

    That said, anyone doing new development might want to skip over the Docker generation. Why wrestle with containers and dynamic scaling when you can drop your functions into AWS Lambda, Azure Functions, or any of the other serverless offerings? These things auto-scale for you. Serverless is the New Black.

    Granted, s

  • In the sense that it's a "passing fad", no. Most technologies at this level don't go away, they just lower in popularity as they begin to be used only for their best purposes as bigger problems get solved by even better solutions -- so the stretching that they do up-front goes away.

    But like most of these cure-all technical designs, it came about to solve a very specific problem, almost always a very small problem, within a company that had the small problem in huge proportions. Think google's power train,

  • by skovnymfe ( 1671822 ) on Monday June 03, 2019 @09:14AM (#58699778)

    Docker is a layer of abstraction to make it easier for the end-user to use the exposed functionality. That's it. It will last until someone makes another layer of abstraction or when the industry as a whole decides to move away from containers to something else entirely.

  • I started working with docker containers about 5 years ago and have followed the trends of docker compose and then kubernetes. As someone who previously spun up either bare metal machines or raw VMs, installed everything needed, and tended to vertically scale machines to match demand-- working with containers, microservices and horizontal scaling has taken a lot of time and effort to learn and be able to "do right".

    I would never go back to Chef/Puppet/Vanguard installations at this point. I feel comfortab

  • Docker may be a fad, but containers are not. Containers are essentially just 3 things:
    1) A set of linux namespaces (a pid namespace, a mount namespace, etc)
    2) A set of cgroups
    3) a chroot to a virtual file system

    It's not a virtualization technology, so it doesn't have run-time overhead. If your OS is Linux, the "dockerized" application is a native process that you can see when you run ps. You can do the exact same thing that Docker does if you know a few system calls to create the namespace, cgroup
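
    A rough illustration of "doing what Docker does by hand" with stock tools (util-linux unshare plus the cgroup v2 filesystem; paths and limits are assumptions):

    ```sh
    # New PID, mount, UTS and network namespaces; /proc is remounted so the shell
    # inside sees only its own process tree.
    sudo unshare --fork --pid --mount-proc --uts --net bash

    # Separately, resource limits are just files under the cgroup v2 hierarchy:
    sudo mkdir /sys/fs/cgroup/demo
    echo 268435456 | sudo tee /sys/fs/cgroup/demo/memory.max    # ~256 MB cap
    echo $$        | sudo tee /sys/fs/cgroup/demo/cgroup.procs  # move the current shell into the group
    ```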
  • Great tool but... (Score:4, Insightful)

    by ilsaloving ( 1534307 ) on Monday June 03, 2019 @12:57PM (#58701306)

    Docker is a powerful tool, but it's basically the infrastructure equivalent of Visual Basic. It enables people that don't know jack about infrastructure, and the landmines contained therein, to slap together some half-baked thing and think they're god's gift to IT.

    The biggest benefit for developers is that they can package their stuff up into a nice little bundle without having a sysadmin breathing down their neck about how they did this or that wrong.

    The biggest drawback is that developers can package their stuff up into a nice little bundle without having a sysadmin breathing down their neck about how they did this or that wrong, which basically guarantees that they *will* do it wrong, and open up all sorts of security holes in the process. (There is a reason why sysadmins become curmudgeonly. We have to slap down know-it-all developers on a very regular basis for doing things like disabling firewalls cause that extra security is too inconvenient to their development process, or setting database admin passwords to 'password'. I wish I was joking.)

    • "We have to slap down know-it-all developers on a very regular basis for doing things like disabling firewalls cause that extra security is too inconvenient to their development process, or setting database admin passwords to 'password'. I wish I was joking.)"

      Heathen! Bow before the mighty Full Stack Developer and repent. :-)

      Seriously, even reasoned arguments go out the window when developers are involved these days. They could be 3 days out of coder bootcamp and most companies think they're gods. It sucks

  • hey, so I'm an embedded guy and have never needed to be exposed to Docker. Help me out. Having read a bit about it, the idea behind dockerizing is that instead of setting up a VM to go out of your way to get SW running, you're making software builds go out of their way to run a little nicer on the sorta-VM-Docker which runs on whatever hardware. It runs a little faster and with less space than a full VM, so it's cheaper to run on a rented server farm. It's a different build target, just like how you could h

    • by Matheus ( 586080 )

      If used properly it can mean severely cheaper AWS rent, but no, that's not the primary benefit. (I mean, from a purely AWS standpoint, we recently converted an EC2 deployment -> a container running in ECS and our spend went from a grand a month to some low number of dollars with no loss of performance, so there is that..)

      To your points the "tiny" bit faster (basically the difference of the VM process overhead) is only meaningful for the most processor crushing applications (and then still only tiny percentages)

  • Definitely came to the wrong place...

  • If you've got a simple website that lives on one server and isn't using anything that isn't in your "monoculture" of LAMP, then yeah, Docker doesn't bring a lot of value to the table.

    But if you've got 2-3 load-balanced PHP front end servers in front of a half dozen Java Workflow Engine instances that run against data pumped out by a cluster of 8 Python Data-Science workers, and they're all exchanging data through a RabbitMQ messaging layer and storing data in Redis backed by a MySQL data warehouse, nex
