Software

Open Source Batch Management?

Asgard asks: "My employer is currently running a commercial batch management platform. Unfortunately the licensing model makes it unfeasible to run it in the development / testing environments, leading to poor usage of the tool and unexpected failures in production. I'm looking for an equivalent Open Source tool and am wondering how others have approached the problem. Does Slashdot have any suggestions?" Imagine a system like cron, but with job dependencies. Are there any batch systems out there like this?
"The tools I've found through web searches mostly treat 'batch management' from the cluster perspective -- a user submits an ad-hoc job and the tool figures out where and when to run it based on load and architecture requirements. Instead I am looking for something that manages daily schedules of jobs based on their dependencies with other jobs and external events, such as files arriving or time.

An example might be that every day jobs a, b, c, and d must run. Job a must not run before 9pm and requires file X to be present. Jobs b and c depend on a completing successfully. Job d must run after 2am and after b and c have completed successfully. If job c fails then an operator must fix the issue and rerun it, after which the tool will move on to job d. "
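A schedule like that maps naturally onto checkpoint files polled from cron. Below is a minimal sketch of one polling pass, not anything from the original post beyond its constraints: job names a-d, the 9pm/2am windows, and file X come from the example, while the paths and the echo commands standing in for real jobs are placeholders.

```shell
#!/bin/sh
# One polling pass over the example schedule, driven by checkpoint
# files. Run from cron every few minutes; each job fires at most once
# per state directory, and a failed job (no .done file) is simply
# retried on the next pass once an operator fixes it.
set -u
state=./state
mkdir -p "$state"
hour=$(date +%H)

# Job a: not before 21:00, and only once file X has arrived.
if [ ! -f "$state/a.done" ] && [ "$hour" -ge 21 ] && [ -f ./X ]; then
    echo "running job a" && touch "$state/a.done"
fi

# Jobs b and c: only after a completed successfully.
for j in b c; do
    if [ -f "$state/a.done" ] && [ ! -f "$state/$j.done" ]; then
        echo "running job $j" && touch "$state/$j.done"
    fi
done

# Job d: after 02:00, and only after both b and c succeeded.
if [ ! -f "$state/d.done" ] && [ "$hour" -ge 2 ] \
   && [ -f "$state/b.done" ] && [ -f "$state/c.done" ]; then
    echo "running job d" && touch "$state/d.done"
fi
```

The operator-rerun case falls out for free: if c fails, its checkpoint is never written, so the next pass attempts c again and d keeps waiting.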
This discussion has been archived. No new comments can be posted.
  • Right here on slashdot. Maybe a search through the archives will find it...
  • by one9nine ( 526521 ) on Friday February 11, 2005 @03:02PM (#11644820) Journal

    I thought it said Open Source Bath Management.

    Maybe I speak for myself, but some things are better off left proprietary.

  • That is, would "normal" people be setting them up? If not, you could simply use make. Or Ant.
  • by blincoln ( 592401 ) on Friday February 11, 2005 @03:10PM (#11644923) Homepage Journal
    I don't know of any OSS systems like this, but they are *very* useful for larger companies.

    A few years ago I was working in change control, and updates to software stored on network shares across the company were handled using a decrepit old VB app that generated linear xcopy scripts that updated each server (of which there were about 160 spread across the US) one by one. Most of the servers were on slow links, so distributing a 10MB file could take twelve hours or more.

    I hadn't learned to code properly at that time, but we used an enterprise batch scheduler called Control-M* that worked like the original post describes. What I did was write a batch script that read a config file and then executed a single robocopy command targeted at the server in the Control-M job definition.

    I had a whole array of these jobs, one for every target server, and they all depended on another job that would run at - for example - 11PM. So when that time rolled around, all of the dependent jobs could run. As-is, that would have overloaded the WAN and source server bandwidth. So I assigned what Control-M called a "resource" to all of the jobs. It was just an integer counter that I capped at 16. So at any given time, there were 16 "threads" of robocopy running. It ended up being between 20 and 30 times more efficient than the crappy xcopy scripts.
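That capped integer "resource" is easy to approximate on a single box with GNU xargs, which limits concurrency the same way. A sketch only; the server names and the echo standing in for robocopy are placeholders, not the poster's actual setup:

```shell
#!/bin/sh
# Approximate Control-M's capped "resource" counter with xargs -P:
# at most 16 copy jobs run at once and the rest queue up, like the
# 16-thread robocopy fan-out described above.
printf '%s\n' server01 server02 server03 server04 |
    xargs -P 16 -n 1 echo "copying payload to"
```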

    Anyway, they're really handy, and if there isn't an OSS project like this, it would be a great idea.

    * This is not an endorsement of Control-M. In my new(er) job, I'm working as an engineer, and I discovered that the encryption system that it uses for storing account passwords in the registry is so poor that I was able to write a universal decoder for it using only vbscript and Excel. There are certainly other downsides to the app as well, although one cool thing is it runs on just about any platform - Unix, AS/400, OS/390, Windows, etc.
  • by hankaholic ( 32239 ) on Friday February 11, 2005 @03:19PM (#11645022)
    Ummmm... cron+make?

    Build systems aren't just for running compilers. :)
    • by MarkusQ ( 450076 )

      It works great for me. Just have to do a caffeine check before making major changes (and remember to stop the cron job plus test in a sandbox).

      Some handy tips:

      • Use pid files to keep new instances from starting up if a job goes long.
      • "-j" can be your friend, but (like a real human friend) it can also get you into a heap of trouble if you aren't careful.
      • Running the make in a permanent loop and just touching things with cron can be a handy trick, especially if you need to let users (or external processes)
    • For example: (Score:5, Informative)

      by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Saturday February 12, 2005 @01:01AM (#11649740) Journal
      in cron.daily...

          make -j $NCPUs -C /working/dir -f /working/dir/Makefile all

      with a Makefile along these lines:

          all: tasks/1 tasks/2 tasks/3

          tasks/1:
          	foo bar baz
          	frob fritz
          	touch tasks/1

          tasks/2: tasks/2.1 tasks/2.2   # plus any custom test for tasks/2.3
          	bar baz qyzzy
          	touch tasks/2

          etc. etc. etc.
      • Clarifications (Score:4, Informative)

        by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Saturday February 12, 2005 @08:16PM (#11655276) Journal
        1) one thing make can't do is run tests that generate dependencies at runtime... it does them in one pass at the beginning. Since you're running it iteratively this isn't a big deal.

        2) For a batch automation system you'll need to use make -k, and declare the .DELETE_ON_ERROR special target (so half-finished outputs of failed recipes are removed) unless you do something like manually touching a status file at the end of each command.

        3) If you have a dependency chain of targets and you don't want to clean up explicitly (or you want your job to run entirely in phases), you can label intermediate targets with .INTERMEDIATE; if make finishes processing them in one invocation, it will delete the outputs/status files once all the dependent jobs have run. If it doesn't make it that far, it will be forced to restart from the preconditions.

        4) Make sure to fully outline dependencies. If you need to prevent two things from running in parallel, you unfortunately have to create an artificial barrier in the script itself. The easiest way would be Perl and IPC::SysV, I should think. You might know of other shell tricks, or could open a device that blocks, like a FIFO... but it's a shame GNU make doesn't have this. (HP-UX and SCO's make have a .MUTEX pseudo-target that prevents two things from being run in parallel.)
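GNU make indeed has no .MUTEX, but on Linux a flock(1) wrapper around the conflicting recipes gives the same serialization under -j without reaching for Perl and SysV IPC. A sketch with placeholder target names, commands, and lock path:

```make
# Two recipes that must never overlap, even under "make -j": both
# take the same file lock, so whichever starts second blocks until
# the first finishes. flock(1) is from util-linux.
.PHONY: all
all: tasks/job1 tasks/job2

tasks/job1: | tasks
	flock /tmp/batch.mutex sh -c 'echo job1 done; touch $@'

tasks/job2: | tasks
	flock /tmp/batch.mutex sh -c 'echo job2 done; touch $@'

tasks:
	mkdir -p tasks
```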
    • How does that fit the requirements? Sure it can be hacked up to work, but consider that what he wants is not "run this at time X and follow these dependencies", it's "must run at any time after X and requires Y and Z to have happened." You can set a cron job to fire at X, but if Y and/or Z hasn't finished/happened yet what are you to do? Just sit there and wait? What if other things depend on this task? Will they all pile up into a heap of waiting processes? Can the waiting process really block on two
      • Back off, man -- he didn't mention having thought about using cron with a build system, so I suggested it. There was nothing else in the comments regarding an actual solution at the time, so I suggested an actual solution.

        You attack my suggestion as being wasteful or suboptimal, but at the time I posted there were no solutions.

        I hope it feels good to have pointed out the inadequacies of my attempt to provide some initial direction, especially given the fact that at the time I write this, 22 hours after th
        • I'm not trying to be disagreeable at all, and I don't really care about the timing of the posting of the article and the various replies. It doesn't matter to me if replies take days or weeks. I'm just trying to demonstrate how "cron+make" perhaps isn't the end-all solution to the question posed. A technical solution should stand on its own merits and I'm sad to say that "cron+make" works for a lot of cases but leaves the case of the original article poster in the dark.
      • Cron + make is actually a pretty good solution. Why?

        make can be told to proceed as far as possible despite missing results. If you keep running it every so often, it will eventually get all its dependencies as soon as possible and produce the "final" result. (These results are intermediate files that just checkpoint progress... unless you are using a custom make test)

        What's interesting is that you can ask make to treat "dependencies" as either an all-present or a do-in-order type of thing (or both). Even cooler
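That iterate-until-done behaviour is easy to demonstrate with a throwaway makefile (all file names below are placeholders): the first make -k pass fails because an input hasn't arrived yet, and a later pass completes once it has.

```shell
#!/bin/sh
# Show how rerunning "make -k" converges once preconditions appear.
dir=$(mktemp -d)
printf 'result: input\n\tcp input result\n' > "$dir/batch.mk"
cd "$dir"

# First pass: "input" doesn't exist and has no rule, so make fails.
make -k -f batch.mk 2>/dev/null || echo "input not there yet; will retry"

touch input     # the external event arrives (e.g. a file delivery)

# Second pass: the dependency is satisfied, so the job completes.
make -k -f batch.mk >/dev/null && echo "result produced"
```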
  • The Condor project [freshmeat.net] looks promising. I've been looking for something similar as an alternative to LSF.
  • DOS (Score:1, Funny)

    by Anonymous Coward
    Wouldn't FreeDOS work? I have a bunch of batch files that work in MSDOS.
  • This sounds very much like a workflow system to me. There are many out there. I am currently working with jbpm [jbpm.org]. Many have all sorts of plugins and can be programmed to do more. They also come with process definitions ... and on another note, to some extent build tools like ant can do things like that too...
  • I've never heard of a vendor that isn't flexible when it comes to development and test environment licenses. I work in the financial sector and every system (EVERY SINGLE ONE) has at least a development environment and a pre-prod/UAT/Test environment. For more critical applications that go through a lot of regular change (i.e. a website) there are actually SEVEN environments it goes through, the last being production.

    We use an enterprise scheduling system called AutoSys which is supposed to be the industry st
  • PBS and Sun's SGE do this kind of job management, but for clusters of machines. There's nothing that says you can't have a cluster of 1 machine though.
  • Use Gridengine (Score:1, Interesting)

    by Anonymous Coward
    http://gridengine.sunsource.net/

    It handles batch jobs, dependencies, etc.

  • by Bryan_Casto ( 68979 ) on Friday February 11, 2005 @05:00PM (#11646265)
    I think TORQUE Resource Manager [clusterresources.com] will do what you're looking for. From their page:
    TORQUE (Tera-scale Open-source Resource and QUEue manager) is a resource manager providing control over batch jobs and distributed compute nodes. It is a community effort based on the original *PBS project and has incorporated significant advances in the areas of scalability, fault tolerance, and feature extensions contributed by NCSA, OSC, USC, the U.S. Dept of Energy, Sandia, PNNL, U of Buffalo, TeraGrid, and many other leading edge HPC organizations.
  • I'm currently designing such a system for work. My basic spec is to have a system that can run an arbitrary batch when another job has run, before or after a specified time and with regard to the file system information of a specified file.
    e.g. run Job A when job X has successfully completed and file P has been updated, run job B if job X hasn't run by 3am, run job C if job Y fails.
    I've got most of the rough design done; the main problem is specifying date/time information - I would like to say "every 2n
    • by Anonymous Coward
      Complications: running on Windows 2000 platform, zero budget and a lot of the jobs will run 16 bit apps in a NTVDM.

      What, the project manager couldn't squeeze "devs must hit themselves in the balls with a hammer each hour" into the requirements list?
      • Actually if it had an associated cost I wouldn't be allowed to. On the other hand if it saved money I'm sure it would be compulsory.

        Actually there's no project manager for this yet, just me. I'm, er, doing a feasibility study at present, of course sometimes the only way to determine if something is possible is to do it...

  • Suggestions (Score:2, Informative)

    by RyanGWU82 ( 19872 )
    I'm working with systems like this right now. You might have better luck if you search for "workflow" instead of "batch." Googling for "open source" workflow management [google.com] also brings back a bunch of promising hits. And if you're Java-centric, there's a great page which summarizes all the open source workflow engines available for Java [manageability.org].
  • Apache's Ant [apache.org] may be worth a look. It handles dependencies very well. It may not be so great with timing of jobs (cron + ant?) or handling jobs running in parallel (ant plus a custom 'run task in the background'?).

    --
    Linux Server + Persistence => Solution [rimuhosting.com]

  • Really. It doesn't sound like it would be too difficult to write this yourself with some good Unix scripting (Perl, bash, etc.)

    You said it's to serve as a test system for a commercial application. I assume you already have a "schedule" in mind, so maybe you could simplify things a bit by writing a system that only runs your specific schedule, rather than writing something more general. I don't know if that would provide a valid test case for your purpose.

  • It's pretty flexible... With a bit of shell scripting around it, I imagine you could do this.
    http://fcron.free.fr/ [fcron.free.fr]
  • I've had the same need - a previous poster mentioned AutoSys, which for all of its ugly faults gave my last employer a very robust job scheduling platform that I found very reliable.

    I've been looking (waiting?) for an open source equivalent. What we really need is something like Condor and Globus, à la the NSF Cluster Toolkit, with a cron interface (cluster-centric solutions have great features like redirection of STDOUT and STDERR, but don't have the ability to schedule a job for later execution.) Java W
