Unix Operating Systems Software

How Well Do Most OSes Handle Resource Management?

schlika asks: "After running into trouble with a heavily loaded Web server running Linux 2.2.12, I read some information about its maximum thread/process limits and 'not-so-great' virtual memory management. Could some of you comment on how these issues compare on other OSes such as FreeBSD (which everyone says is better for that task, without ever explaining why), OpenBSD, NetBSD, Solaris, Linux 2.4 and others? Please give real-world examples/comparisons if possible."
  • Linus has said the 2.4 kernel will be vastly better under high loads. He reported recently that he did some testing to see how responsive it was under heavy load, and it did much better than the 2.2 series. This was a perception test of how it felt, so no hard numbers are available.

    The BSDs do their thing well.

    But I'm happy with Linux.
  • Linux has the same setup as well: specifying the maximum amount of RAM a user can use, and so on (a quick sketch of the usual knobs follows). It works, but I don't think it's what the original poster needs.
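    (For reference, a minimal sketch of those per-user limits, assuming a Linux box with pam_limits and a bash shell; the user name and values are made up:)

        # /etc/security/limits.conf -- per-user limits enforced at login by pam_limits
        # <domain>   <type>  <item>   <value>
        webuser      hard    data     262144    # max data segment size, in KB
        webuser      hard    nproc    256       # max number of processes
        webuser      hard    nofile   1024      # max open files

        # the same limits can be set per-shell with bash's ulimit builtin:
        ulimit -d 262144    # data segment size (KB)
        ulimit -u 256       # max user processes
        ulimit -n 1024      # max open files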
  • FWIW, Solaris has an (extra-purchase) workload manager that'll do exactly that. You divide the system up into shares, and say which groups get what share of the resources (memory, CPU, I/O). Then you can drill down further to individual users. And because you're using shares and not percentages, idle resources in one group can be used by other groups. (A rough sketch of the share setup follows.)
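    (A minimal sketch, assuming the projects/Fair Share Scheduler route of Solaris 9 and later rather than the older add-on Solaris Resource Manager product; the project name and share count are made up:)

        # create a project and give it 20 CPU shares
        projadd -K "project.cpu-shares=(priv,20,none)" webgroup

        # make a user's processes land in that project by default
        usermod -K project=webgroup www

        # make the Fair Share Scheduler the default scheduling class
        dispadmin -d FSS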
  • Troll eh? I hope that gets hit in metamod. It's not like I tried to hide the fact that I was linking to a sex site -- if you can't figure out that "sextracker" is porn related..

    Ah well, such is life.
  • At this time you still have to do some tuning if you are really going to sock the crap out of a Linux webserver.

    With 2.2 you have to do some (minor) kernel hacking and some OS tweaking to increase the number of processes and open files.

    It is also important to note that each process consumes a file(handle?) so that can become a limitation.

    Start by doing ulimit -a, and if you can only have 256 processes (or your webserver can only have 256) you aren't tuned.

    From there, search Google for "nproc linux kernel apache" or something for more info (a sketch of the usual 2.2-era knobs follows below).

    -Peter
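
    (For reference, a minimal sketch of the 2.2-era tuning being described; paths and values are illustrative and assume a stock 2.2 kernel source tree and a bash shell:)

        # per-shell/per-user limits (what ulimit -a reports)
        ulimit -a               # show current limits
        ulimit -n 4096          # raise max open files for this shell
        ulimit -u 1024          # raise max user processes

        # system-wide file handle limits, tunable at runtime on 2.2
        echo 16384 > /proc/sys/fs/file-max
        echo 49152 > /proc/sys/fs/inode-max

        # the hard process limit on 2.2 is a compile-time constant:
        # raise NR_TASKS in include/linux/tasks.h, then rebuild the kernel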

  • You are right on most points, but slightly offtopic. The problems you mention pertain to the (cheap) PC architecture, which you compare to (expensive) mainframes. Also, the tasks most workstations and servers face are "online", so there is not much opportunity to schedule an entire night's work ahead of time.

    I think unix does a more than decent job for the given hardware and the given problems. If you knew all the jobs and their requirements ahead of time, there are a hundred algorithms in any OS book for attaining optimality - it is a solved problem, really. Compare this with "online" problems, where the OS gets a request that has to be answered ASAP, and the best you can do is multiplex everybody.

    Modern unices do not spend more than 0.1% of their time on task switching. I also do not believe your figure of a long-term average of 30% machine utilization under unix. If you mean that the rest is spent thrashing the swap, then that is part of the problem to be solved on the existing hardware. It is physically impossible to serve, say, random 20M requests from a 1000M disk using 8M of RAM without thrashing the disk (if you want the illusion of responsiveness, which means multiplexing). The mainframe is not the silver bullet; you are just applying today's comparative power to problems from 20 years ago. The mainframes are still out there, in the number-crunching community, overnight database jobs, etc., but your average office, home, university, or company lab relies on unix/windows.

  • Well, FreeBSD (and others) have login.conf for specifying resource limits (a short example follows).

    Admittedly, these only operate per-user, not per-process. A lot of people choose to run important processes under a specific user anyway (news, bind).

    Cheers,
    Si
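
    (A minimal sketch of a login class, assuming FreeBSD's /etc/login.conf; the class name and limits are made up, and users are assigned to the class with something like "pw usermod <user> -L web":)

        # excerpt from /etc/login.conf
        web:\
                :maxproc=512:\
                :openfiles=4096:\
                :datasize=256M:\
                :tc=default:

        # rebuild the login capabilities database after editing
        cap_mkdb /etc/login.conf
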
  • Remember how things used to be not many years ago?

    Disk caches were statically set at boot time, so changing the cache size needed a reboot. You calculated how much memory your processes would need, and used the rest for cache. Finding the optimum value was not easy, particularly if process sizes varied over the day.

    We also had to cope with resource restrictions such as concurrent socket connections, concurrently open files, inode tables and so on. These could only be changed by tweaking a kernel parameter, and some could not even be monitored easily. It wasn't always easy to guess the best value to set them to, given that increasing them increased the memory used by the kernel, that memory could not be used elsewhere, and memory was not as cheap and plentiful as it is now.

    They don't know they're born today! And yes, I am a Yorkshireman (born on Yorkshire day too!).

  • by Pygmy Marmoset ( 65910 ) on Tuesday January 09, 2001 @10:25PM (#517546)
    I started working at a FreeBSD shop (flyingcroc.com, parent company of sextracker.com), coming from a Linux background.

    We do some pretty amazing things with FreeBSD. We have tons of servers doing 100-300 requests/second with Apache, others that have done 40-50Mbit sustained (on a 100Mbit NIC, Intel hardware), and all kinds of other crazy stuff.

    I've ssh'd into various machines that were getting hammered and the load was in the 700s with disk/swap/ram/cpu all taking a beating, and I was still able to do what I needed. When I've dealt with linux machines under conditions nowhere near as bad as that, it was a total nightmare to even get logged in, let alone do anything.

    I still use linux for my workstation, because I love the desktop goodies, games, and debian, but for high performance servers it's hard to beat freebsd.
  • by sql*kitten ( 1359 ) on Tuesday January 09, 2001 @11:54PM (#517547)
    The answer is (and this is a bit of a rant) that almost all Unix implementations handle resources terribly.

    Unix allocates CPU time clumsily; nice and pbind (sketched below) are about as much control as a sysadmin has over a running process, other than stopping it altogether and restarting it. Contrast this with OS/390 or VMS, where the sysadmin can control exactly how much CPU a process gets and the size of its working set, and can migrate processes between nodes in a cluster. IBM have a tool called the Workload Manager (WLM). It is able to configure your system based on what you want to do, not how you want to do it. For example, you say that this batch job must complete by this time in the morning, this class of transaction must complete within this time, and this group of users gets no more than 10% of the CPU in the morning and 30% in the afternoon, and WLM will configure your cluster to do it, if it is physically possible.

    You can run a mainframe-class OS at 90% of the machine's capacity consistently; a Unix system rarely exceeds 30% of its capacity when averaged over a period of time, because it simply spends too much time either waiting for things or trying to manage its own workload. And what's worse, the CPU gets involved in every I/O in a Unix system, because of the way buffers work. Every disk block gets transferred by a CPU through an operating system buffer. When you edit on a UNIX box, every single character goes to the CPU and gets echoed back. And on the network, every character gets a packet sent back and forth. VMS deals with records - whole lines of text - even at the network protocol level.

    And don't even get me started on the lost+found directory. You don't get that on an industrial-grade file system, because it's journalled to ensure consistency.

    Thank you for listening.
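
    (For reference, a minimal sketch of the coarse Unix-side knobs mentioned above; the PID and values are made up, and pbind is Solaris-specific:)

        nice -n 10 ./batch_job    # start a job at lower priority
        renice +5 -p 12345        # lower the priority of a running process
        pbind -b 2 12345          # Solaris: bind PID 12345 to processor 2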
