Unix Operating Systems Software

On Leading vs. Following In The NOS World

This Anonymous Coward wishes to put this question before you all: "All of us know how well the Linux community can follow other technologies; case in point: Samba. I have to wonder when Linux will reach the point where it begins to lead the way vs. follow. A technology such as Linux Directory Services could be such an opportunity. Could Linux developers create a client/server based NOS that does not have to be bent, twisted, patched, or hacked to work with the leading OS's? Could we develop a new set of server processes which communicate with workstations through a custom-built client?"

"Novell has done this, I log in with the Novell client for Windows every morning. As a result, certain network services are performed natively on both sides. If this were done, I'm sure most of us would readily use the extended abilities of a native client/server system. A system where servers are more than glorified disk controllers, able to execute remote applications as well as supply standard network services.

I would dread to think such an application would not be developed because it would not fit well into the current corporate wish-list. Let the suits follow for a change, it's their turn."

  • Following other standard protocols seems to be how Microsoft manages to be so damned great.

    The only differences I get from the Linux daemons are stability, configurability and security.
  • How are you going to do this without the cooperation of Microsoft? SMB support is integrated into Windows. I'm guessing that Windows supports Novell because of customer demand and a desire to infiltrate Novell based networks.
  • Unless I misunderstood the question, I think the poster may want to look at the OpenLDAP project. It's been around quite a while and offers some of the services requested. Check out www.openldap.org [openldap.org].
  • It could be said that "linux" excels at "tying it all together". ie, if there are two obscure systems or devices that need to talk to each other, Linux is the best way to tie them all together.

    Someday I'll have to tell y'all the funny story of how my workstation accidentally started routing between our ethernet and token ring networks for our entire corporate WAN.

    And it didn't do too crummy a job, either.
  • by Amphigory ( 2375 ) on Tuesday May 09, 2000 @06:22AM (#1083108) Homepage
    This could certainly be done, and it is a good and worthwhile undertaking. If I were designing it, I would probably build the system around Coda for file sharing, LDAP for a directory service, and CUPS for printing. In other words, most of the work has already been done; what is needed is integration. In order to work well, there needs to be a standard, well-defined way to find resources. When a new print server comes online, it should automagically be added to the directory. Likewise with file services.

    What is a little more ticklish is that you will probably need to develop your own security paradigm that can cross the gap between the Windoze and UNIX security models. This will probably require modifications to both the filesystem and the print server software to be complete. (I guess you could do something based on ACLs pretty simply.)

    Now here's the BUT: is there really enough market for this to justify it? Maintaining the client for Windows is going to require a tremendous amount of work, especially since there are at least 10 different variants in common use now. The advantage of Samba and friends is that they push that work onto Microsoft. Unfortunately, there are not a whole lot of open source types who want to develop for Microsoft platforms. This is the kind of thing that screams for a commercial open source approach (a la Redhat): you develop the product as an integrated whole, then make money selling it. In any case, it's probably going to need some $$'s to make it happen.
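    To make the "automagically added to the directory" idea concrete, here is a minimal sketch of a print server publishing itself into an LDAP directory via the python-ldap bindings. The base DN, the printerService object class, and the printer* attribute names are all invented for the example; a real system would define a proper schema:

        import ldap

        # Hypothetical directory location and admin credentials.
        DIRECTORY_URI = "ldap://directory.example.org"
        ADMIN_DN = "cn=admin,dc=example,dc=org"
        ADMIN_PW = "secret"

        def register_print_server(name, host, queue):
            """Publish a print server into the directory so clients can find it."""
            conn = ldap.initialize(DIRECTORY_URI)
            conn.simple_bind_s(ADMIN_DN, ADMIN_PW)
            dn = "cn=%s,ou=Printers,ou=Services,dc=example,dc=org" % name
            entry = [
                ("objectClass", [b"top", b"printerService"]),  # invented class
                ("cn", [name.encode()]),
                ("printerHost", [host.encode()]),              # invented attributes
                ("printerQueue", [queue.encode()]),
            ]
            conn.add_s(dn, entry)  # raises ldap.ALREADY_EXISTS on re-registration
            conn.unbind_s()

        # A print daemon wrapper would call this once at startup:
        register_print_server("laser1", "print1.example.org", "lp")

    The hard part is not the publish call but the policy around it: who may add entries, and how stale ones get purged when a server disappears.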

    --

  • AFAIK, samba isn't restricted to Linux. Neither are GNOME, KDE, Mozilla or other open source projects.

    If we rephrase the question as "when will open source start leading the way" - well, I don't think it needs an answer, unless it is asked by a clueless ZDNet "journalist".

  • by JamesSharman ( 91225 ) on Tuesday May 09, 2000 @06:23AM (#1083110)
    It's generally been a rule with technology that the first practical solution to a problem becomes the de facto standard, usually regardless of better solutions further down the road. There have been instances where standards have developed under Unix and then been railroaded over by Microsoft, and the reason for this is simple: the Wintel architecture dominates the desktop, and to a lesser extent the corporate server market. If you put together a solution under Linux you MUST get someone to write the Windows/NT drivers to go with it. It is only when you have the Windows drivers that can talk to your new protocol (or the Windows server and Linux client) that it truly provides a solution as far as the Wintel-dominated corporate sector is concerned. Once you provide a solution people will use it, and once they actually start using it there is very little anyone, including MS, can do to change that.
  • So, what we may be looking at is a three-tier architecture: develop middleware that accepts requests from heterogeneous clients and processes them using a set of heterogeneous servers, possibly more than one to fulfill a single request, with perhaps automatic failover to other servers when the primary is down.
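    As a rough sketch of that middle tier (the host names and the raw byte-oriented protocol are invented for illustration), a dispatcher that tries a pool of back ends in order and fails over when the primary is down might look like:

        import socket

        # Hypothetical back-end pool; the primary is tried first.
        BACKENDS = [("primary.example.org", 9000), ("fallback.example.org", 9000)]

        def dispatch(request_bytes, timeout=2.0):
            """Forward a client request to the first reachable back end."""
            last_error = None
            for host, port in BACKENDS:
                try:
                    with socket.create_connection((host, port), timeout=timeout) as s:
                        s.sendall(request_bytes)
                        return s.recv(65536)
                except OSError as err:   # refused, unreachable, timed out...
                    last_error = err     # fall over to the next server
            raise RuntimeError("all back ends are down (last error: %s)" % last_error)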
  • I am all for the NOSs, opensource...sorta, but Linux, a bunch of 14 year olds, leading the world makes me want to call my mommy. MS either makes standards, or follows them. They have been great in both, I think that when you try and create standards.....such as MS/TCPIP ...that can cause huge problems. When OSWWW [whatever the new may be], as much as I like Unix, I don't want it to be the driving factor in this world. I want to use what works!!! If a company has dumb users and lots of cash, NT all the way....keep me employed forever. But, when Linux starts to make standards, a global ...uneducated [for the most part] group of hackers....something does not fit. Then we have FreeStandard, RedHatStandards, and SuSEStandards.......... plz dont mod me down.....let my voice be heard :)
  • The problem (IMHO) is that while we have tools for doing this sort of thing (NFS and NIS), tools which work wonderfully in the UNIX world, no one who fully understands them is willing to spend enough time in a Window$ Programming Environment to write strong, flexible, and stable clients for Window$, and as a result, we have this proliferation.

    If there were a concerted project to write Window$ network file services drivers that used the full potential of NFS and NIS, then Window$ boxen could finally join the rest of the world in *real* networks.

    But I'm not gonna write it (out of ignorance, not elitism. I don't know that much about how Window$ does its networking and file mounting, and I don't want to take the time to learn.)

    ---
    "Elegant, Commented, On Time; Pick any Two"
  • i think the ability to seamlessly interact with other systems would be grand. I work for a school system whose users demand many different things. the elementary teachers want macs to run many of their programs, the high school teachers want pc's to teach everyone how to use microsoft orfice (i shudder at this but i dont design the curriculum). The middle school teachers dont seem to know what they want. We pay astronomical amounts of money to microsoft for their damned back office licences for our nt systems... i have been begging my boss to let me turn the alpha systems we have into linux boxes instead of trading them in to compaq for intel machines... but he doesnt want to deal with that. 'too many issues could arise' he said... cant really question him there... they could, dont know if they would but they could. i think a client could be made for mac boxes, win boxes, and whatever else you want to dream of... open source would make this implementation a lot easier than having to reverse engineer an entire network protocol and then have to implement it only to have the company sue you because you reverse engineered their hard work... (DMCA rant in another post). I say go for it... would be real nice not to have to spend a month of my time this summer swapping out alpha boxes for intel machines.
  • by Anonymous Coward

    When Linux was created, it was proudly dubbed a "Unix-like system", and that's why people were so excited about it: it gave them the impression of using a real, high-powered Unix on their PCs. Because of this desire to mimic Unix (Minix, really), not much new was added to the system, and it clung to the Unix world by supporting industry "standards" such as X, TCP/IP, gcc, etc.

    Fast forward 10 or so years. Linux is still primarily considered "Unix-like", and the developers still look to the established Unix companies (Sun, IBM, SCO) for ideas. Linux is sort of like Microsoft in that respect: its "research department" is every other operating system. But by now, non-free Unix is dying after everyone has realized what a bad idea it was, and Linux is picking up the pieces. Unfortunately, while companies like SCO are dropping their flagship products in favor of Linux, the Linux developers are losing their ability to assimilate new ideas because no new ideas are forthcoming! This leaves them in a bad position: rather than playing catch-up, they are now leading (the dying remains of) the pack. Sure, XFree86, Samba, Wine, and the rest are great, but can they bring Linux into the next century? I think not.

    What we need are truly original ideas, something which has been sorely lacking in Linux ever since Linus "borrowed" some Minix code to create his own little kernel. Something bold, which will announce to the world that Linux is big, that it demands to be followed rather than follow. Something marketable for high-end e-commerce, something for students interested in hacking the OS, something for everyone else.

    What we need is Open Source Natalie Portman in the kernel. And we need it now.
  • by voidzero ( 85458 ) on Tuesday May 09, 2000 @06:30AM (#1083116) Homepage
    I strongly suggest that this site [zdnet.co.uk] be visited.

    Regret for the past is a waste of spirit

  • I fail to see why this could not be the case...

    The way new technologies become 'standard' be that as approved by ISO or similar or de-facto is for big businesses and other large organisations to adopt them. Corporate (America|UK|Europe) is already adopting Linux at server level for web serving, mail serving... It's a short step mentally from that to a directory service.

    Let's say you're responsible at a management level within a company for web content. I don't mean you're the web server admin, I mean you're where the buck stops before the CEO. You want people across the company to be able to contribute relevant information to the website, which has been running happily on Linux for the past three years. Your server admin informs you that he has no intention of giving every Tom, Dick, and Harry in the company shell access to the server, so what are you going to do? What you need is some method of maintaining information on people, and allowing them access to the server solely for this purpose - an opening...

    Mail again is a natural opening to directory services. If people are already getting their mail from a Linux box, why not extend it to serve any information on them as may be required internally, subject to all the usual security disclaimers of course...

    All that is required really is for someone to start work on it - get a team of top notch hackers on board and away you go. Consult managers from the sort of corporation this could be targeted at to find out what they'd want/expect out of such a system as a starting point. Believe it or not you can apply commercial software development ideas to open source development :)

    --
  • by richnut ( 15117 ) on Tuesday May 09, 2000 @06:32AM (#1083118)
    Linux doesn't even do a good job of fixing what's wrong with UNIX, let alone leading the way in anything. It's run by committee, and the people in the committees like UNIX and will defend UNIX regardless of whether it's the best solution or not. Here we are in the year 2000 and our OS doesn't have a central, consistent, configuration database, for apps and system resources alike. They are just now getting a journaling filesystem. The security model of all or nothing is a joke. There isn't even mandatory file locking for crying out loud. This is not an OS that leads.

    It's not that people don't want to fix this sort of thing, it's just that they'll never get the voice or support to do something like this. Go ahead. Mention the word 'registry' to a Linux zealot and see how it goes. You'll see what I mean. Anyone here remember how it went for Linus when he tried to allow some C++ inside of the kernel around 0.99pl13? It was a disaster. No one wanted to wait out development time for proper C++ code, they just wanted UNIX.

    Don't get me wrong, I like Linux, and I use Linux, as I have for 7 years now. I won't stop using Linux. It just bothers me that there is no organized group of users who are actually trying to make it the perfect OS instead of the perfect UNIX.

    -Rich
  • If Linux excels at a particular server-oriented task, use it. If another NOS (you pick) is better at a particular task, then I'd recommend you use it instead.

    You needn't be ashamed to admit that Windows, Novell, BSD, etc. are particularly good at certain aspects of being a NOS.
  • Then we have FreeStandard, RedHatStandards, and SuSEStandards

    Apache does a good job of following standards, as do all of the system daemons. What makes you think we wouldn't stick with one standard? Slap it in an RFC and say 'Ye shall use this' is all it takes!
  • NFS is not a 'good' tool for network file sharing - just the accepted one on Unices. From a performance perspective, it's just not there.

    Further, the point of the question isn't file-and-print services, it's file-and-print-and-DCOM-and-remote-database-and-*searches for a good client/server hot button* directory-enabled pushed-to-desktop application deployment (i.e. Novell's ZENworks for Desktops, but with support for multiple OSes/architectures and a (sigh) better UI)
  • If you say "Linux", then it's unlikely that you'll see much leading. "Linux" is just the kernel. But, if you say "Open Source Software" or "Free Software", you'll see that this has been the case for a long time. OSS made the Internet. Sendmail, BIND, Apache (well, I guess NCSA HTTPD at that point) all led the way. Other OS vendors have followed this lead.
  • Well, amongst other issues, some of these technologies are very UNIX specific - i.e., NFS is VERY UNIX specific: designed basically as a block device. It's been done, but is much more painful than first imagined.

    Starting technologies on the UNIX side will always have the problem that UNIX will always be the one with an advantage on the feature set. The Windows client will always be the hacked one.

    Writing UNIX clients for Windows servers has the advantage that UNIX is so much more flexible that it's probably easier to write than the original Windows clients!

    Switching it around is going to make the already broken-hearted Open Source Windows programmer (what kind of sad self-deprecating soul would do that stuff voluntarily?) suicidal. Let's not do that to them. Please.

    --

  • I agree with part of your statement.

    I'd go a step further and say that Linux really has never "led the way" (of course there are certain projects, but as a whole, no). Linux itself is a clone of unix functionality, nothing really innovative technically (no gee-whiz stuff, just re-implementation of other tech).

    I disagree with "when will open source start leading the way"... and rephrase that to: when did open source STOP leading the way? Think of the old projects before open source was really called open source. Sendmail, bind, innd... all of these were produced BEFORE the open source craze; they were the pioneers, they led the way, they were the ones who made the rules... now for some reason people seem to be cool with just copying commercial software functionality.

    Anymore, the way of the world is this: make a cool product, see lots of dollar signs, decide to keep it closed source for the additional income instead of sharing it; then the open source people see how cool it is, say "we need that", and start making copies.

    To answer my own statement about when it lost its way... I guess my opinion is: once they found how easily they could make money off the same products they normally would give away. To head off the typers: open source can/does/will make money, but let's be honest, closed source tends to make larger amounts of money faster.

    Spelling & grammar checker off because I don't care
  • by wesman ( 6993 )
    Linux Directory Access Protocol, right? :)

  • Yeah - standards are great! Everybody should have one!
  • Someday I'll have to tell y'all the funny story of how my workstation accidentally started routing between our ethernet and token ring networks for our entire corporate WAN.

    I've been told that an old version of NetWare did IPX/IP routing out of the box by default, and many sites had problems.

    Cheers

    --fred

  • by Pike ( 52876 ) on Tuesday May 09, 2000 @06:52AM (#1083128) Journal
    ...but if you do, make sure to use Unicode and give it i18n functionality, so the rest of the world can translate it into their own languages. I've found that there's nothing that annoys them more than not being able to use their own character sets in an otherwise good piece of software.

    -JD
  • I would suspect that it has less to do with customer demand, and more to do with infiltration.
  • So can you tell me where your configuration management API/libraries are?

    Maybe you'd like to help with one of the existing systems?

    Like:

    Libcfg
    http://www.yelm.freeserve.co.uk/libcfg/
    Gconf
    http://cvs.gnome.org/lxr/source/gconf/
    Libproplist
    http://cvs.gnome.org/lxr/source/libPropList/

    There are more. Part of the problem is that Linux software must be able to run/build on existing commercial Unix systems so the configuration management system must also be available on commercial systems with commercial applications, not just GPL'd applications.
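    For a taste of what these look like from application code, here is a sketch using GConf's Python bindings (the key name is made up; the bindings ship alongside the GNOME/pygtk stack):

        import gconf  # GNOME's python-gconf bindings

        client = gconf.client_get_default()

        # Keys live in a filesystem-like namespace; this one is hypothetical.
        client.set_string("/apps/myapp/greeting", "hello")
        print(client.get_string("/apps/myapp/greeting"))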

  • one of linux's strengths has always been that it more or less universally aims for standards compliance. while whiz-bang 'extra functionality' may seem like an attractive target, it is usually less valuable than a system that works well, and works well with the rest of your systems.

    squeezing an extra 10% of performance out of commodity hardware seems less valuable to me than knowing that your linux box will work with whatever sort of network you need to put it into.

    all IMHO, of course.

    --
    blue
  • The security model of all or nothing is a joke.

    I don't get this criticism. Isn't security innately an `all or nothing' affair?

  • by ethereal ( 13958 ) on Tuesday May 09, 2000 @06:59AM (#1083133) Journal

    I think some of these are straw men:

    Here we are in the year 2000 and our OS doesn't have a central, consistent, configuration database, for apps and system resources alike.

    I admit that I had a knee-jerk reaction against a "registry" - sorry, it's a conditioned fear and pain response :) A central configuration system would be neat, but on the other hand you would break compatibility with a lot of existing Unix applications which expect /etc, /proc, and so forth. I guess you could set up this database in a different directory and only new apps would know about it. Better make it flat text, though - I don't think a binary registry will fly very far.

    They are just now getting a journaling filesystem.

    Does Windows NT ship with a JFS? I was under the impression that it didn't, although I'm sure to be corrected if I'm wrong. Linux isn't the first system to get a JFS, but it's not going to be the last either. And it may end up with two or three :)

    The security model of all or nothing is a joke.

    Sounds like someone's been reading the Microsoft Myths about Linux page :) Have you ever heard of groups?

    There isn't even mandatory file locking for crying out loud.

    Well, it isn't necessary for every file, so why should it be necessary? That sounds like overhead that an application should handle if it needs it.

    I'll be the first person to admit that Linux has problems, but I don't think that they're necessarily the ones that you pointed out.

  • Why should someone rewrite applications that already exist in usable form? Granted, it is nicer for an application to be Open Source, but the meat and potatoes of Directory Services (DS) are standardized and extendable: the X.500 standard, and the schema is extendable.

    Case in point: MS's implementation of DS in W2K. They extended the schema halfway to hell and closely tied their OS to their implementation of DS. Nevertheless, any X.500-compliant client can access any of the info in MS Active Directory (especially now, since you can authenticate using Kerberos.)

    If we are just looking for a port, then check out openLDAP.

  • our OS doesn't have a central, consistent, configuration database, for apps and system resources alike

    Thankfully.

    No central configuration database for apps and system == no single point of failure

    You say you have been using Linux for seven years. Perhaps you have had the luxury of not running MS Windows for seven years? I have been using Linux on my own machine for four years, but at work I have to use MS Windows. The only advantage of the Windows registry is convenience for programmers. The disadvantage is that if its structure gets corrupted, your system is fscked. It's a brain-dead idea, full stop.

    Or is my judgement clouded by experience with this particular implementation? Are there other OSes that implement such a configuration database without getting it so badly wrong?


  • Maybe you'd like to help with one of the existing systems?

    Like:

    Libcfg
    http://www.yelm.freeserve.co.uk/libcfg/


    Actually I would. I'd not heard of this. Looks Cool. I'll build it tomorrow. Thanks for the pointer.

    There are more. Part of the problem is that Linux software must be able to run/build on existing commercial Unix
    systems so the configuration management system must also be available on commercial systems with commercial
    applications, not just GPL'd applications.


    If the config system is GPL'ed isn't that done already?

    -Rich
  • Windows supports Novell because:
    • Novell is a registered Microsoft Developer.
    • Windows is open enough that Novell can plug in the stuff it takes to support Novell networking without active cooperation from Microsoft.
    • Microsoft is in enough trouble already - if they took active steps to prevent Novell from interoperating, the anti-trust sh*tstorm would be orders of magnitude worse than it is now.

    Yes, Microsoft would like to take away business from Novell, so MS does *just* enough to barely interoperate.


    ...phil

  • My reading of the question is this: "When can we have Microsoft making changes to suit our code instead of having Microsoft force us to make changes in our code to suit theirs?"

    The answer, of course, is "eventually". Look, an ISV's development effort is about making changes for change's sake, so that customers can justify paying again for what they just bought last year. Free software is about achieving a solution and then using that solution for as long as it's appropriate. So, it is only natural for a company like Microsoft to propose change after change after change, hardly any of which is useful. The sheer volume of changes makes it necessary for competitors to follow along.

    The free software community, on the other hand, figures out in advance what is needed to accomplish whatever tasks are at hand. The focus on the solution means that while the free software community proposes fewer changes, in the long run those changes are more likely to be useful and, therefore, to be adopted.

    So, Microsoft and Novell will lead the dance for a while, but don't worry. There is a time for everything and the time for free software to call the tune is coming. Just keep running what works for you and the rest will just happen.

  • Following other standard protocols seems to be how Microsoft manages to be so damned great.

    No, I think it is more like following, embracing, changing, taking over, and buying/licensing other standard protocols seems to be how Microsoft manages to be so damned great.
    Or perhaps that is just how I perceive it.
  • The headline "Linux Directory Services surpasses WinNT & Novell" could be a reality in just a few years!

    Just last night I thought about this. I've been thinking about it for a long time. Sure, samba and NFS are good. However, samba will ALWAYS be following the lead of Microsoft -- this cannot be helped. I could ramble on incessantly about how wonderful Linux Directory Services (TM GPL'd) could be and all the things it could do, but talk is cheap.

    I've thought about this a long time and would like to find open source or GPL'd projects that are working on such a thing.

    By the way.. I have looked into LDAP, and "Linux Directory Services" should probably not be based on it.

    If anyone has any links or such sites or constructive suggestions please - post away.

    Immediately after posting this message I am getting to work on this. A friend quoted a phrase recently that feels very appropriate "There are people who talk about things, and there are people who get things done."
  • Actually, starting from the assumption that NFS is great, in spite of its many security and portability issues, is part of the problem.

    It should have been junked in the /bin years ago, along with X and a few other culprits. However, development of a lot of the fundamentals seems to have stagnated.

    Maybe, it's the much reduced margins that workstation vendors have that is to blame. If so, things are not likely to improve. How do we fund blue-skies R&D in the free software world?

  • My feeling is that server-side network standards emerge from a need on the client side. Where do those requirements come from? End users, of course.

    I don't think that Corel 8, StarOffice or even the general interface is very mature yet. It certainly isn't broadly adopted.

    Should that happen, the self-help aspect of Open Source would kick in, and you would start seeing people develop apps for their needs. For instance, multi-user spreadsheets and word processors. These exist, but aren't very good right now.

    But network standards don't come from the top down. They go from bottom level user requirements, up the line to the standards you need to satisfy the users. Or put another way, plumbing development follows kitchen and bathroom requirements more closely than it does pump requirements. Both have to be satisfied, but only one will give you complaints from homeowners.

  • by Anonymous Coward
    that there is no organized group of users who are actually trying to make it the perfect OS instead of the perfect UNIX

    I think this is an important point. Maintaining the 'emulation' of Unix and its features is good for Linux to a point. Linux as a 'movement' should also adopt new ideas and techs and try and implement the BEST technologies and features conceivable (IMHO). BeOS has an 'advantage' in that it was designed 'anew' from the ground up; what's stopping Linux from incorporating the best techs from BeOS, MacOS, Windows, Multics etc etc etc? I want the BEST OS possible - not necessarily just 'Unix'. Please don't flame - I believe you can see my 'point' in the above statement as not a Unix bash.
  • I thought that when you 'slapped it in an RFC' it was because you were issuing a Request For Comments, and then a while later it solidified into something firm, still paradoxically called an RFC.

    I wasn't aware that you just slapped it into an RFC and it transformed into an edict.

    Silly me!
  • ...so why are we expecting it to follow an implementation?

    Their record on conforming strictly to simple RFCs is abysmal. When they try to talk SMTP or some network standard like that, you end up with something that is almost, but not entirely, unlike what the standard requires. So every other vendor then has to add hacks and work-arounds for Microsoft's deficiencies.

    Given that they can't get things like de jure standards right, what makes you think they are going to follow an innovation from the open source world well enough to make it a de facto standard?

    More likely they will look at the idea and implement something quite different that does the same thing in a totally proprietary manner.

    --
    A "freaking free-loading Canadian" stealing jobs from good honest hard working Americans since 1997.
  • by Anonymous Coward
    in order to work well, there needs to be a standard, well-defined way to find resources.

    It's called SLP: RFC 2608.

  • by Black Parrot ( 19622 ) on Tuesday May 09, 2000 @07:36AM (#1083147)
    > while whiz-bang 'extra functionality' may seem like an attractive target, it is usually less valuable than a system that works well

    Exhibit A: VBScripting.

    --
  • Well, NFS is just one big crisis, lurking in the shadows, on a Linux system. Take down the server and watch all the clients freak out and seize up. It seems to hold up better on the BSDs, but it's definitely a crisis on Linux.

    I've even seen the editors at Linux Journal openly dis NFS on Linux, recommending that someone who wrote in asking about it instead adopt Samba. At the time it struck me as ironic, using a Microsoft protocol for Linux to Unix connectivity. But I guess on Linux one should never be surprised.
  • by klund ( 53347 ) on Tuesday May 09, 2000 @07:40AM (#1083149)
    That's not to say that Microsoft isn't trying really hard to break NetWare... Remember the good old days of "DOS isn't done until Lotus doesn't run"? It looks like Microsoft has a new mantra: "Windows isn't done until NetWare doesn't run".

    One of the really cool features of the Novell NetWare Client for Windows 95 is "Automatic Client Update" (ACU). By just putting

    #sys:\public\client\win95\setup.exe /acu
    in the appropriate login script (the leading # in a NetWare login script executes an external program), the Novell Client version is checked at login time, and upgraded automagically if necessary.

    This trick is especially useful when installing new machines, because it will even upgrade from the Microsoft Client for NetWare Networks. All you have to do is install Windows 95 from CD, and after logging into a NetWare server once, you're automatically running the latest and greatest client from Novell.

    However, Microsoft broke this feature in Windows 98. Trying to install Novell Client 3.x from a network drive causes the installation to fail with the errors

    "Install could not find the class type for device id NWWSMGR"

    "Install could not find the class type for device id NWNDPS"
    Copying the install files locally (or using a Novell Clients CD-ROM) works fine, but that is time-consuming to do at every workstation. These errors are caused by a bug in the Windows 98 netdi.dll file. See Novell's Technical Information Document TID 2946390 [novell.com]. Microsoft knows about this problem. They even have a fix for it. You need a specific version of the netdi.dll file (version 4.10.2029, size 317,840 bytes). This hotfix is referenced in Microsoft Knowledge Base article Q190656 [microsoft.com]. But you can't have it. If you want it, you have to call Tech Support, and pay them $150 for an "incident". If you can convince them that all you needed was the hotfix, you might be able to get your money back, but don't count on it...

    There is a nice description of the problem of trying to get your money back at Trent University. [trentu.ca] Also, despite what the above Knowledge Base article says, this problem was not corrected in Windows 98 Second Edition!

    Now, according to Infoworld, [infoworld.com] the next version of Windows, Windows Millennium Edition (ME), won't have any NetWare connectivity built in. Microsoft is going to remove it from the box. That will fix it! You can't use ACU to upgrade Microsoft Client for NetWare Networks, because you can't have Microsoft Client for NetWare Networks at all!

    Okay, so I'm back to my conspiracy theories... Windows isn't done until NetWare doesn't run.

    • http://support.novell.com/cgi-bin/search/tidfinder.cgi?2946390
    • http://support.microsoft.com/support/kb/articles/q190/6/56.asp
    • http://www.trentu.ca/csd/software/netdi.shtml
    • http://www2.infoworld.com/articles/en/xml/00/03/13/000313enwinupgrade.xml?Template=/storypages/printarticle.html

    --
  • > There is a time for everything and the time for free software to call the tune is coming.

    Truly, it warms my heart to sign up for a mailing list or call up a newsgroup, and see people asking again and again: "Can I run Freeciv on Windows?" "Can I run LyX on Windows?" "Can I run the Bubble Load Monitor on Windows?"

    Apparently, free software is not quite so bad as its critics claim.

    --
  • > That is why all the innovation comes from the closed source side.

    I'm not convinced that this trend must continue into the future. OSS is now very well placed in the server world, so it should be proportionately easy to provide a standard/protocol/service on OSS platforms that is truly useful, and which the CSS platforms would need to be able to support in order to be sold.

    Of course, the easy CSS solution would be to just port the OSS service to the CSS platform, but that's not such a bad thing either (at least in the context of this discussion).

    --
  • Yeah, but standards evolve through a combination of luck, fitness and practicality. See Stephen Jay Gould for the biological equivalent. It is the sort of quasi-biological chaos that is appealing to many of us: what works is adapted and adopted. You may imagine it could be better, but we have to have some concordance to function. If something better than an old standard comes along, we will drift toward that one.
  • by jilles ( 20976 ) on Tuesday May 09, 2000 @07:47AM (#1083153) Homepage
    "A central configuration system would be neat, but on the other hand you would break compatibility with a lot of existing Unix applications which expect /etc, /proc, and so forth. I guess you could set up this database in a different directory and only new apps would know about it. Better make it flat text, though - I don't think a binary registry will fly very far."

    It is this conservatism which makes it difficult to configure linux. Because of that, managing a Linux platform is more expensive than necessary.

    I'm in favor of moving away from shellscript-based config files towards a central LDAP-based config system. Mixing code and configuration as is common today is a bad thing.

    I'm against using text files because textfiles can be fucked up with typos and duplicate data. A good db-like system protects you from making those errors. Using XML would be an improvement over the current situation but also a big mistake in my eyes since XML is just as unsuitable for permanent storage of data as a normal text file.

    I think current linux distributions with all their environment variables, init scripts, shell scripts and ancient tools are far more complex than necessary to accomplish the flexibility and security they offer. In my opinion an OS is nothing more than a kernel + application packages + configuration + user data. A good principle in software engineering is separation of concerns; it is not practiced enough in linux, because configuration is mixed into applications and partially stored as user data. Not to mention that the kernel's functioning depends on a legion of scripts.
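    A sketch of what reading configuration out of such an LDAP store could look like, assuming an invented ou=Config subtree and attribute name, again with the python-ldap bindings:

        import ldap

        conn = ldap.initialize("ldap://config.example.org")
        conn.simple_bind_s()  # anonymous, read-only bind

        # Fetch the (hypothetical) configuration entry for the mail service.
        dn, attrs = conn.search_s(
            "cn=mail,ou=Config,dc=example,dc=org",
            ldap.SCOPE_BASE,
            "(objectClass=*)",
        )[0]
        relay_host = attrs["relayHost"][0].decode()  # attribute name is invented
        conn.unbind_s()

    Whether that beats grepping a text file is exactly the argument above, but it does give every application one consistent access path.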
  • OK, I'll have it in beta by the end of the week, then! :)

    Strong data typing is for those with weak minds.

  • There have been commercial versions of NFS for Win32 platforms for many years. The problem is, there are significant differences between how UNIX works and how Win32 platforms work. These differences have caused no end of problems getting NFS to work well with Win32 - Microsoft has been very good at making their platforms non-conformant to pretty much every standard you might mention - so much the better to protect their monopoly. Do it our way or go away...

    From what I've seen, it is easier to get a UNIX platform to accept the idiosyncrasies of SMB than to get the Win32 platform to accept the idiosyncrasies of UNIX file systems. And so, the commercial versions of NFS for Win32 have slowly drifted to the side, replaced by SMB on UNIX/Linux.

    This might be different if Microsoft were open source (as might happen with the DOJ case). Perhaps then, when all is known about Win32, NFS and other network service support will be simpler.

    And while we are at it, can someone replace the dog that is NFS??? Please!?!?!

  • A central configuration system would be neat, but on the other hand you would break compatibility with a lot of existing Unix applications which expect /etc, /proc, and so forth.

    I was wondering... suppose you had a filesystem front end available for the configuration database (I refuse to call it a registry). You mount the filesystem on /etc and when you read, say, /etc/hosts it appears as a text file but is actually read from the database. (Sort of what /proc does for some kernel data.) Application configuration data would be manipulated via the database front end (whatever that might be -- SQL perhaps) and would be readable that way, but it would also be readable as a text file in the format desired by the application.

    This seems to me like an approach that would allow migration of configuration data to a managed system without modifying the applications at all! The only kicker would come from applications that not only read system configuration files but modify them as well. That, I think, is a relatively small number of applications. Most that manage configuration files do so on a per-user basis in files under the user's home directory. There's no particular reason to try to bring those files under central management, so leave well enough alone.

    You would also want the filesystem to allow "normal" files for those applications whose configuration wasn't yet merged into the database or that themselves update their configuration file.
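    A toy version of this front end is buildable with a userspace filesystem; the sketch below uses the fusepy bindings, with an in-memory dict standing in for the real configuration database, and synthesizes read-only text files the way /etc would expect:

        import errno
        import stat
        from fuse import FUSE, Operations, FuseOSError  # fusepy bindings

        # Stand-in for the configuration database.
        DB = {"hosts": "127.0.0.1 localhost\n10.0.0.5 fileserver\n"}

        class ConfigFS(Operations):
            """Expose database records as read-only flat text files."""

            def getattr(self, path, fh=None):
                if path == "/":
                    return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
                name = path.lstrip("/")
                if name in DB:
                    return {"st_mode": stat.S_IFREG | 0o444,
                            "st_size": len(DB[name]), "st_nlink": 1}
                raise FuseOSError(errno.ENOENT)

            def readdir(self, path, fh):
                return [".", ".."] + list(DB)

            def read(self, path, size, offset, fh):
                data = DB[path.lstrip("/")].encode()
                return data[offset:offset + size]

        if __name__ == "__main__":
            FUSE(ConfigFS(), "/mnt/config", foreground=True)  # mount point is arbitrary

    Write support (so the applications that modify their own config files keep working) is the hard part, as the comment above suggests.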

  • I've thought of this too.. but figured it was too brain-damaged to implement. Would be great for backwards-compatibility - the old apps read (and write?) the old files, while newer apps get to use the LDAP interface for configuration. Sounds like a big itch to me..
  • I've been working on a directory services management system for over four years now. It works on Linux, BSD, Solaris, AIX, HP/UX.. a fellow here has even got the server running on OS/2. The system's GUI client works on all of the above plus Macintosh and all flavors of Win32.

    It's called Ganymede [utexas.edu], and it is a metadirectory system, which is to say that it is an object database with a sophisticated permissions system that accepts changes and turns around and updates NIS, DNS, Samba, our NT PDC, our routers, Sendmail, etc.

    Ganymede is designed to be a smart server, where the adopter can define their own network schema and write plug-ins that customize how various kinds of objects in the server behave and how they connect to each other. It's all written in Java, so it is quite robust and portable.

    It's not designed to replace something like OpenLDAP or DNS or NIS, it's designed to provide sophisticated management for all of the above. At our lab, we have a dozen technical groups that have their own resources, but we share directory services, and Ganymede is what manages the whole show.

    It has been a few months since I've made a release of Ganymede, but development hasn't stopped, by any means. Lots of performance and stability improvements on the server have been achieved, and this week I'm writing a Ganymede client that can take XML from external sources (Perl generated, etc.) and load that data into Ganymede. I expect a 1.0pre1 release will come out by the end of the month.

  • "MS either makes standards, or follows them"

    I don't know that this is entirely accurate. Microsoft's policy as I see it has been to break standards, and use its position to force acceptance of the new version.

    The latest fiasco with Kerberos is an excellent example of this, and to a lesser extent WINS as a form of DNS. There are many others.

    Martin Burke
    My Linux Articles [themestream.com]

  • whiz-bang 'extra functionality' may seem like an attractive target... squeezing an extra 10% of performance out of commodity hardware seems less valuable to me...

    I agree.

    The main Linux kernel tree should remain general and standards compliant. People who require more specialized features such as real-time response or extra security can always build the features on the existing kernel. I mean, that's one of the great strengths of open source, isn't it? Not everyone requires a real-time kernel, and those who do should know how to patch a kernel. At least I don't see any need to include every niche feature and bloat the main tree (anyone else annoyed at the current size of the kernel source tarballs? 20 MB?!). The "Keep It Simple, Stupid" principle is a good one.

  • Please tell me you're joking....

    It took MS more than 10 years to create DOS and turn it into Windows, so don't think that linux/BSD have to follow for eternity.

    The only disadvantage to Open Source is that MOST of the software for it has to be written from scratch, reverse engineered, etc...

    I think that as time goes on and more people and companies contribute you will see Open Source catch up and eventually surpass the rest. INCLUDING MS. After all, MS went from DOS to where they are now. It won't be easy for Open Source to infiltrate both the desktop and server markets and it won't happen overnight, but MS didn't have an easy time either.
  • The security model of all or nothing is a joke.

    Users/groups is far from a joke, although it does have problems and limitations. Capabilities are coming. Some people are pushing for them to be in 2.4 (at least as experimental), but definitely in 2.6.

    It just bothers me that there is no organized group of users who are actually trying to make it the perfect OS instead of the perfect UNIX.

    There are plenty of groups trying to make "the perfect OS" (of course, all with different opinions of what 'perfect' means...), but Linux is derived from the concepts in UNIX; asking it to become something else means that it is no longer Linux.

    And some of us think that the fundamental concepts of Unix are pretty close to perfect as is. ;)

    Here we are in the year 2000 and our OS doesn't have a central, consistent, configuration database, for apps and system resources alike.

    Why is this a "perfect" criterion? As other /.ers point out, a "registry" leads to a single point of failure, reduces maintainability, breaks lots of standards, etc... There has been lots of talk on lkml regarding this topic, and generally people seem to like the idea of a central, text-based repository, but much of it is a userspace issue, and a HUGE undertaking at that.

    The reason so called "Linux Zealots" go off the handle when people bring up registries ala win32, is because it's been talked to death, and the majority of people that know this stuff think it's a bad idea.

    This is not an OS that leads.

    This was a choice. They weren't out to build a completely new OS; they were out to build a free Unix-like OS. I would assume that once clonable features run dry, Linux will continue on at its present pace, developing new features along the way.

    I am absolutely certain that there are plenty of new features already in development or already built that HAVE led. I don't know what they are off hand, but I'm certain that other /.ers can give examples.

  • >> No one who fully understands them is willing to spend enough time in a Window$ Programming Environment to write strong, flexible, and stable clients for Window$

    Microsoft charges $150 for its Services for Unix. It includes support for shell commands, perl and NFS.

    I don't see how they did much more than convert existing open source code and slap on the $150 price tag. Why not do the same thing as a way to fund an open source project?

    FYI, I have hunted far and wide for a low cost NFS client for Windows. AFAIK, there isn't one. So right there would be something that could be an open source project funded by selling the same thing on the Windows platform.

  • I guess I would make the counterargument that the operation of a Unix system based on small, flexible text-based tools is a strength, and you don't necessarily have to have complexity as well. Granted, the current structure of /etc, /proc, and wherever the apps decide to toss their config files is all over the place, but it doesn't have to be that way.

    If you're going to retool the system from the ground up to be DB/directory oriented, wouldn't it just be simpler to update the apps to use specific directories under /config for example, mount /proc under /config, and move /etc into there as well? If you don't like all of the shell scripts, you can combine and/or replace them and put them all in /config/scripts or whatever (thus separating code and configuration). Then you can still use text files (for when your fancy DB tools don't work right or don't give you the whole story) but you would have a directory of configuration information, organized more cleanly than it is today. Of course, this would require major application retooling and might make some applications non-portable to other Unices. That's probably why such an effort hasn't occurred yet - portability and familiarity between Unix-like systems adds more usability for the administrator than any amount of clean-up which breaks that familiarity. But if a cleaner configuration style is enough of an improvement, people might switch anyway.

  • OSS made the Internet.

    That's an interesting point for debate. If you mean the hoary old Internet, you're talking about a long, long time ago. If by 'made' the Internet you mean what popularized the 'net and made it what it is today... ummm... then it could be said that what 'made' the Internet was when Marc Andreessen ran off and closed the Mosaic source to start up Netscape. That doesn't seem very "open source" to me. Then again, maybe we should say that the 'net was a thriving operation back in 1985 when Unix hackers all had dumb terminals.
  • (I'm writing this as a Novell CNA book is on my shelf, gathering dust...)

    Novell gained popularity because of the lack of networking capabilities within the Win 3.x OS. If I remember the story correctly, MS didn't expect such a demand for networking capabilities when Win 3.x first came out. If you think about it, in a way Novell was originally a 3rd-party hack to get Windows users to network in the early 90s.

    Nowadays, how many people use Novell for their networking needs? Not as many as a few years ago because of the "improved" networking capabilities found in the MS OS and Server packages.

    An NOS for linux? How about making the client OS more robust.

  • I like this; this is a message that is more to spur people, I imagine, than to be taken literally.

    I think the main important thing is that UNIX is capable of being extended to become the perfect OS. Windows 9x is an OS on top of DOS; though it's hard to include it in a discussion of the 'perfect' OS, it shows how a higher layer can become the OS. There is a saying that something need not be fully-featured or perfect, just simple to implement and maintainable, so others can build upon it. That's what an OS is all about.

    I am not a systems programmer, though I can understand the code. There are a number of engineering and philosophical issues that go into it.
  • Unfortunately, with the market share Windows has, we have no choice but to try to be interoperable with the garbage they throw out.

    The reason linux must be tweaked/twisted/etc is because of the protocol hoops microsoft makes us jump through in order to do exactly what you're talking about (make a Linux Directory Services that interoperates with other vendors).

    --Twivel
  • You can, indeed, set up all the appropriate daemons to provide all sorts of "standard" services, of which LDAP is certainly one.

    The problem is that starting up the LDAP daemon does not intrinsically provide you with any useful functionality. You have to have some separate setup done to put some useful data into the LDAP database.

    Thus, it's not terribly useful to have the LDAP server there unless it is usable for (say) user authentication, which would mean that you need some code that pushes data from (say) /etc/passwd into the LDAP database (a sketch of such a push follows this comment).

    Likewise, an LPD server is devoid of functionality until there is some information pushed into /etc/printcap to configure some printers. And from there, for this to be of use to SAMBA users, some configuration has to be pushed into the SAMBA configuration to "publish" the print queues from /etc/printcap there.

    There in effect need to be some "self-discovery" mechanisms that search the system for capabilities, and "publish" them as "public" network services.

    The big problem with this is that it is likely to defy standardization due either to:

    • Local customizations ( e.g. - where I set up some "internal" print queues that are not for public consumption) or
    • Local security considerations ( e.g. - where certain user IDs shouldn't get "published" as they are private to the server.)
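    A minimal sketch of the passwd-to-LDAP push mentioned above, using the standard posixAccount object class (the base DN and server details are invented), with the python-ldap bindings:

        import ldap

        BASE_DN = "ou=People,dc=example,dc=org"  # invented for the example

        def push_passwd(conn, passwd_path="/etc/passwd"):
            """Publish local accounts into the directory as posixAccount entries."""
            for line in open(passwd_path):
                name, _pw, uid, gid, gecos, home, shell = line.rstrip("\n").split(":")
                entry = [
                    ("objectClass", [b"account", b"posixAccount"]),
                    ("uid", [name.encode()]),
                    ("cn", [(gecos or name).encode()]),
                    ("uidNumber", [uid.encode()]),
                    ("gidNumber", [gid.encode()]),
                    ("homeDirectory", [home.encode()]),
                    ("loginShell", [shell.encode()]),
                ]
                conn.add_s("uid=%s,%s" % (name, BASE_DN), entry)

        conn = ldap.initialize("ldap://directory.example.org")
        conn.simple_bind_s("cn=admin,dc=example,dc=org", "secret")
        push_passwd(conn)
        conn.unbind_s()

    Keeping the two in sync afterwards (and deciding which side is authoritative) is where the local-customization problems above come back in.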
  • by dicey ( 16889 )
    This should be moderated upwards. More detail?
    It's called Service Location Protocol, and as far as I know there is an implementation for Linux. I don't have the URL to hand - just do a search for it. I think it is linked off the SLP working group homepage at the IETF (www.ietf.org).
  • Good thing I have my trusty, well-worn NT 4 Workstation Resource Kit nearby.
    I admit that I had a knee-jerk reaction against a "registry" - sorry, it's a conditioned fear and pain response :) A central configuration system would be neat, but....
    Remember one of the fundamentals of good UI design: segregation of data and presentation. An ideal situation would be a Microsoft Management Console-like app for Linux, with a plugin architecture for wrapping various text config files. Power users can still edit text files to their hearts' content without ever leaving an xterm (or launching X, for that matter), while NT admins will be more easily converted by the comfort of an easy-to-use GUI.
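    As a sketch of one such wrapper plugin (the plugin interface here is made up; the point is only that the GUI layer and the flat file stay decoupled), consider /etc/hosts:

        class HostsPlugin:
            """Wraps /etc/hosts so a management GUI can edit it structurally,
            while power users keep editing the same flat text file in an xterm."""

            PATH = "/etc/hosts"

            def load(self):
                entries = []
                for line in open(self.PATH):
                    line = line.split("#", 1)[0].strip()  # drop comments
                    if line:
                        addr, *names = line.split()
                        entries.append((addr, names))
                return entries

            def save(self, entries):
                with open(self.PATH, "w") as f:
                    for addr, names in entries:
                        f.write("%s\t%s\n" % (addr, " ".join(names)))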
    Does Windows NT ship with a JFS?
    Microsoft describes NTFS as a "recoverable" FS, using transaction logging with cached lazy-commits. Checkpoints in the transaction log determine what is committed or rolled back in the event of a crash. Which, of course, never happens. :-)

    Every day we're standing in a wind tunnel
    Facing down the future coming fast
    - Rush
  • I think this is a laudable goal. Coming from a NetWare shop, I've been slightly frustrated that we're *this close* to having a viable Linux client system so that we can use Linux as a base desktop OS.

    The issue is making something that won't break in the enterprise environment. You need to be able to have seamless access to Novell and NT servers. Theoretically, both Novell and Microsoft are making it easy by supporting LDAP for directory information, and with some careful work with both samba and ncpfs, you could tie it all together pretty well. This is the issue--I could make it work but don't have the time to write the glue code necessary.

    No matter what, for Linux to make it in the enterprise you'll need the ability to make single sign-on a reality, and have the "logon to the desktop" paradigm that the Microsoft desktop OSes support (at least with the Novell client.) To be honest, Novell is working harder at making this work than Microsoft is. Novell's already got the NDS solution on Linux--where's the Microsoft Active Directory implementation?

    --Mike

  • A central configuration system would be neat, but on the other hand you would break compatibility with a lot of existing Unix applications which expect /etc, /proc, and so forth. I guess you could set up this database in a different directory and only new apps would know about it. Better make it flat text, though - I don't think a binary registry will fly very far.

    Proc is well on its way to being a registry, except one that doesn't suck. All it needs is persistent storage.
    --
  • by mosch ( 204 )
    I was going to e-mail but it appeared to be a spamtrap. What do you mean, exactly, by 'there is no mandatory locking'? I'm not trying to be stupid here, I'm just not sure what exactly is lacking with regards to available file locking.
    ----------------------------
  • Microsoft knows about this problem. They even have a fix for it. You need a specific version of the netdi.dll file (version 4.10.2029, size 317,840 bytes). This hotfix is referenced in Microsoft Knowledge Base article Q190656. But you can't have it. If you want it, you have to call Tech Support, and pay them $150 for an "incident". If you can convince them that all you needed was the hotfix, you might be able to get your money back, but don't count on it...

    That's funny...they had a link on the page for article Q190656 [microsoft.com] that took me to another page from which the updated file could be downloaded. Here's a direct link for netdi.dll [microsoft.com]. No phone call needed, no $150 spent.

  • It is an interesting article for its summary of the things that NOS customers are looking for. However, it seems to substitute overgeneralization for real information in the case of Linux. Here's one of the things I tripped over:

    In most cases, the choice of CPU is determined by the operating system. For example, Unix implementations optimally run on RISC-based systems, whereas NetWare and NT servers are nearly always Intel based.


    That would have been okay, except that it didn't go on to explain that Linux, while widely ported, is native to the i386 family and most widely used on Intel processors.
  • "one of linux's stregnths has always been that it more or less universally aims for the lowest common denominator."

    standards bodies vary in their speed of change and the degree to which they are on the 'leading edge.' in some cases, sticking to a standard can mean that you miss a lot of good functionality, but i think those cases are rare.

    more than aiming for the least common denominator, linux, or anything based on standards compliance, should aim for the greatest common factor. i understand where you could make the LCD/GCF mistake tho, as i speak a peculiar dialect of geek, and those two phrases do sound a lot alike. :)

    --
    blue
    • Linux joined the UNIX fold by becoming like UNIX which helped Linux with the porting of many applications and gaining developers from the UNIX world to join in the cause..
    • Linux has thus unified UNIX and continues to do so as Solaris, AIX, and others are moving quickly toward greater Linux compatibility.
    • The next phase, which is only beginning now is to bring UNIX into the desktop world. KDE/GNOME are gaining serious momentum on Windows (just now passing it with XFree86 4.0, KDE2 & KOffice in my opinion) and now we are potentially leading the development of X12 for the whole of Unixdom.
    • AND--We've gained a large number of Windows developers with GNOME and KDE.. My project (gbasic.sourceforge.com) will hopefully bring over some of the ***MANY*** Visual Basic programmers longing for a tool they can use on Linux.
    • Also, Cygwin/tk I hear might be heading towards a potential KDE for Windows...I've only heard talk...but I do believe Linux will gradually eat up Windows one way or another.
    • Don't dispair...Standards are improving.. Personally I don't think a centralized binary registry is good at all...but perhaps some mechanism can interface with /etc, /proc, etc... If there's a real need...don't worry, it'll be there--agreeable to everybody, eventually.
  • "...sorta, but Linux, a bunch of 14 year olds, leading the world makes me want to call my mommy."

    Guy wants to call his mommy and Linux is developed and advocated by 14 year olds.... OK.

    "MS either makes standards, or follows them."

    MS usually adopts standards and then implements them with a twist that makes everyone who follows the 'standard' incompatible.

    "a global ...uneducated [for the most part] group of hackers...."

    I'm trying to decide which attribute makes this statement an insult (reading contextually).

    Please, don't moderate me up (hey, it worked so far for this guy).

    carlos

  • The reason Linux is always playing "catchup" is that the commercial vendors are always going to be "innovating" new products to stay ahead rather than fixing the broken ones they already have.

    Open source development isn't commerce-driven. We don't invent things nobody needs and then try to find ways of making people need them; we tend to innovate at a lower level, in implementations of things we need (or would like) now.

    Opensource gave us useful innovations like the apache "ProxyPass" directive. It was a great idea and solved all sorts of problems at ISPs. Closed source gave us ASP. We already had a thousand and one ways of producing dynamic web content, but after the ASP marketing hype, ISPs are now scrambling to catch up with a unix-based implementation of this "innovation" to try to avoid using bloody NT.

    The moral here is that something doesn't have to look pretty or invoke a new protocol just to be innovative. The GNU OS desktop is far more advanced than anything M$ ever produced.
  • "I'm against using text files because textfiles can be fucked up with typos and duplicate data. A good db like system protects you from making those errors. Using XML would be an improvement over the current situation but also a big misstake in my eyes since XML is just as unsuitable for permanent storage of data as a normal text file."

    In that case, are you considering a binary file, or some kind of registry system? If so, check out the rant Linus went into over proc & devfs issues:

    "Guys, remember what made UNIX successful, and Plan-9 seductive? The "everything is a file" notion is a powerful notion, and should NOT be dismissed because of some petty issues with people being too lazy to parse a full name.

    The same is true of ASCII contents. Binary files for configuration data are BAD. This is true for kernel interfaces the same way it is true of interfaces outside the kernel. I tell you, you don't want the mess of having things like the Windows registry - we want to have dot-files that are ASCII, and readable with a regular editor, that you can do grep's on, and that can be manipulated easily with perl. Think of /etc/password. And think of the STUPIDITIES that a lot of UNIX vendors made with their user management databases - it happened not once, but multiple times. All in the name of unified tools (never mind the fact that none of the standard tools worked any more), and in the name of efficiency (the "parsing ASCII wastes CPU cycles"). Do people think that .bashrc would be better in a binary format that uses special tools to edit it? I don't think so. Don't make the kernel interface files fall into that classic _stupid_ black hole. Plain-text ASCII is a goodness. Readable naming is a goodness. Yes, it takes more care, but the end result is simply _better_."

    The rant continues in Kernel Traffic #64: http://kt.linuxcare.com/kernel-traffic/kt20000424_64.epl#1 [linuxcare.com]

    On a serious note, just because Linus said it doesn't make it universally correct... though he does have a point.

    I remember working on an old DOS program where line endings and file endings caused us all sorts of headaches in ASCII files. Until we handled them consistently, we often ended up with odd problems parsing text configuration files. Once that was done, the headaches went away -- thanks to consistent handling, not the creation of some obscure binary file format only our program could touch.
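
    Here's a minimal sketch of that fix in Python rather than the original DOS code -- the '#' comment convention and command-line usage are invented for illustration. The whole trick is normalizing every line-ending style to '\n' before the parser ever sees the text:

    import sys

    def read_config_lines(path):
        """Read a text config file, tolerating DOS (CR LF), old Mac (CR),
        and Unix (LF) line endings alike."""
        with open(path, "rb") as f:
            raw = f.read()
        # Normalize all three conventions to '\n' up front, so the parsing
        # below never has to care which editor last touched the file.
        text = raw.replace(b"\r\n", b"\n").replace(b"\r", b"\n").decode("ascii")
        # Skip blank lines and '#' comments.
        return [line.strip() for line in text.split("\n")
                if line.strip() and not line.lstrip().startswith("#")]

    if __name__ == "__main__":
        for entry in read_config_lines(sys.argv[1]):
            print(entry)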

  • It really is annoying that every single app has its own config file format. It also has to be time-consuming for programmers to reinvent a config parser for every app, and it makes it difficult to graft on GUI configuration tools.

    Wouldn't it be cool if everyone used XML? I'm sure someone will point out what's wrong with my idea (this is Slashdot after all), but some of the benefits I see are:

    1) Is both machine and human-readable.

    2) Many XML parsers already exist, so there's no need to write one for your app. Maybe someday a single XML lib will become the de facto standard on all distros.

    3) Makes it easy to write a GUI configurator.

    4) Makes it easy for apps to pull config data from other apps.

    Anyway, I could be entirely wrong. I've been wrong before.
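
    For what it's worth, here's a rough sketch of the idea in Python using the standard library's XML parser. The file name "daemon.xml" and the <option name="..." value="..."/> layout are made up purely for illustration, not any real app's format:

    import xml.etree.ElementTree as ET

    def load_config(path):
        """Return the app's options as a plain dict -- no hand-rolled parser."""
        tree = ET.parse(path)
        options = {}
        for opt in tree.getroot().findall("option"):
            options[opt.get("name")] = opt.get("value")
        return options

    # Benefits 3) and 4) above fall out naturally: a GUI configurator or
    # another app can walk the very same tree with the very same library.
    if __name__ == "__main__":
        print(load_config("daemon.xml"))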

    I don't necessarily think a registry is such a bad idea, but I agree that it should be text-based instead of binary.

    Also, NT does ship with a journaling filesystem (NTFS). I don't know any details, but I have heard claims that it is lacking in several areas as compared to XFS or JFS.

  • Users/groups is far from a joke, although it does have problems and limitations. Capabilities are coming. Some people are pushing for them to be in 2.4 (at least as experimental), but definitely in 2.6. Till then, there's LIDS and other tools used to control capabilities:

    www.lids.org [lids.org]

  • What you failed to mention is that most "innovations" come from academia, not corporations.
  • but Linux, a bunch of 14 year olds, leading the world makes me want to call my mommy.

    Maybe you should converse with some of the people doing this work. There aren't too many "uneducated ... hackers" working on it so far as I can tell. Virtually every Linux hacker I know has formal education and works with Linux as a hobby. (And since it's a hobby they're more inclined to do it right than to ship something by June 1st, if you get my drift.)

    In the end, though, who cares who wrote it so long as it gets the job done? I mean, you're assuming that the guys who put NT together were well educated, that any education they had was actually useful, and also that even if both are true they'll do a good job. That's a lot of very questionable assumptions, particularly about employees who are known to have built MFC and that brain-dead FIFO page replacement algorithm used in NT. (Ok, the latter made a certain amount of sense on the VAX. But couldn't they have done something better now that they have page reference bits?)

    Now, bunches of NT people are sitting there thinking that I'm one of those Linux loonies, and there's certainly some bias on my part towards UNIX, but I was writing articles for UNIX people about the viability of NT back when NT was considered a joke by pretty much everyone in the business. NT is a damn fine workstation OS, particularly in a world where the majority of software is written for Windows, and I still use and recommend it in a lot of situations. Back in 1994 I figured that it'd decimate the UNIX workstation market -- and it did.

    So NT isn't poison to me, but I have serious reservations about it as a server, most particularly because its stability isn't so hot, but also because I haven't been all that fond of how much I've had to spend to get extra software to do things that UNIX has done out of the box for years.

    Yea, yea, I hear you saying "Win2K fixes the stability problem." So Microsoft claims, and maybe it's even true, but given what it's doing its hardware requirements are a little out of hand and the cost ... well, I think Microsoft is taking a lot of people for a ride. For a lot of server functions you can buy the OS and hardware for less than Win2K alone.

    Then again, I'm educated enough to realize that I have a choice in the matter, and perhaps that's the real threat of Linux. It's not the 14 year olds, it's the guys who are smart enough to be comfortable with not using Windows. There are a lot of those guys, both old-timers and recent graduates, in IT shops and software development houses. Those are the guys who made Linux grow so much in the server space last year.

    Maybe Linux really is made largely by 14 year olds and I've just not run into them. So what if it is? It's cheap, it's stable, and it has a hell of a lot of functionality. It's not always the best choice for the job, but you're stupid not to at least consider it.

    Similarly, sometimes you have to bend over backwards to get Linux to do the job. This is particularly the case for a lot of specialized applications. So look around you and see what works best for what you have to do.

    I suspect that for a lot of people that'll be a mix of OSs. It certainly is for me.


    jim frost

  • An ideal situation would be a Microsoft Management Console-like app for Linux, with a plugin architecture for wrapping various text config files.

    Can this be done with Linuxconf? I'm not too familiar with how it works, but I've used it for setting up networking, etc. rather than investigating Mandrake's rc files to find out where to add the necessary info. It appears to have a plugin-oriented architecture, although I've never dealt with it much.

    Microsoft describes NTFS as a "recoverable" FS...

    I figured that I would get corrections about that :) I stand corrected.

  • I'm not so sure about that.
    On the enlightenment mailing list we were getting these questions all the time 3 years ago! :)
  • One of the weakest points of GNU/Linux is that there is really no innovation in any part of the system. It can be argued that devfs and a few other kernel features are new and unique to Linux, but you can probably count those features on one hand.

    GNU/Linux (I am also including the applications bundled with the distributions) does not demonstrate any innovation. Every application is an effort to mimic something already developed for Microsoft Windows or another operating system. There are a few exceptions to this rule (scientific and server software), but for the most part it holds true. Another weak point is the XFree86 project. Instead of developing video drivers for the system, drivers are built specifically for the one application, X itself. This means that we are going to be stuck with XFree86 forever. The Berlin Consortium has set out to solve problems in X and to add new features (alpha transparency), but without drivers the project is destined to fail. And if you say that X is "good enough" -- well, "good enough" never succeeds unless it is the only option.

    So, what then is the solution? I wish I knew. Most professional organizations devote a lot of their resources to research & development. As far as I know, there are no research & development groups for GNU/Linux. This is beginning to change with the aid of corporate interests, but it will take years. I mean, we still do not have a journaling file system!

    One other thing to notice is that Bell Labs recognized in the 1980s that UNIX was riddled with problems, and so they began work on Plan 9, which later spawned Inferno. The only thing they took from UNIX was treating devices as a file system. So when will the rest of the community realize that we are trying to repair something that needs to be redesigned?

  • Actually, yes. The "Capabilities" referred to are an SGI spinoff that will allow a piece of software to temporarily take on additional capabilities and then shed them. This will allow finer-grained access control to files and also go a long way towards removing the necessity of having suid-root proggies.
  • See:

    http://www.srvloc.org/index.html

    and

    http://playground.sun.com/srvloc/slp_white_paper.html

    for the SLP home page, and an informative white paper.

    t_t_b
    --

  • And if you look beyond Linux, most networking protocols started life as open standards with open source implementations. Consider SMTP, HTTP, TCP/IP, POP, IMAP, DNS, etc.
  • Implicit in your comment is the assumption that NT is always the tool that works. This is a false assumption. Sometimes there is more than one tool that can get the same job done equally well, and then it comes down to personal preference and efficiency. MS says that "our way is the only way" regardless of whether there may be individuals who are more efficient working in a non-MS-sanctioned fashion. I personally would like to see enough interoperability standards set up so that people can use whatever they like most and whatever is best tailored to the way they like to work. The idea is to give everyone a choice, not reduce their choices for the sake of dysfunctional conformity.

    As for the rest of your comment, it sounds like you're blowing a lot of hot air. You either don't really understand the problem at hand or you just wanted to make a self-serving post. Either way, your name-calling and unsupported arguments don't add much to the discussion.

  • WINS is NOT a form of DNS. It is a hack to get around the non-routability of NetBIOS. It does match names to IPs, but it matches NetBIOS names, not DNS names. It is intended specifically to assist the master browser/browsing network in building the master browse list (what you see when you click "network neighborhood"). It runs in parallel with DNS -- a similar task in a different environment, with some of the same structures (because name resolution is pretty standard no matter how you do it).

    Aetius
  • It seems absolutely typical of Unix zealots that they should lie about the "capabilities" of their operating system in this way.

    Very unfair. The term "lie" indicates that I was deliberately misinforming people. That is certainly not true. I was using the term that the people I've seen talk about this Linux feature use. I will admit I have not spent the time to really understand "capabilities" or "privileges".

    You are welcome to cite references to your distinction between "privileges" and "capabilities".

    A few links:

    Pavel's capabilities page [mff.cuni.cz]

    Linux Weekly News listing [lwn.net] of Linux capabilities as of 2.2.13.

    The Secure Programs HOWTO [unc.edu] contains a lot of security-related information, including references to the POSIX standards. The POSIX information looks a little dated, though.

    This link [linuxcare.com] from Kernel Traffic indicates that there are several different concepts of what "capabilities" are, and gives some details about what each style consists of.

    Let me be clear: I don't know much about capabilities, but I know that they are discussed a LOT on lkml. Simply calling me a liar and saying that it's "privileges" not "caps" doesn't really help educate anyone.

  • First off, leadership does not necessarily mean establishing new proprietary ways of communicating with clients. Linux could become a leader in clustering and high-availability solutions. It could become a leader in web application development/deployment platforms and tools.

    There are many ways Linux could innovate and jump ahead of the pack. But that's not necessarily a good thing.

    Right now, Open Source has to play catch-up because there are serious areas in which it is deficient. It is tempting to postpone development in those areas, or to begin cool new development in other areas, but that isn't what we need.

    Let other companies take the risks and fight the big battles. I'm more than content to have Linux take the winning protocol/standard/whatever and implement it better than the commercial OS that championed it.

    But I don't object to anyone doing what Open Source is about: Scratching an itch. If someone needs a revolutionary new way of sharing data between clients, or a revolutionary new web application platform, be my guest! Innovate to your heart's content. Do it because you need it, but don't do it just because you want to be ahead of Microsoft.

    -JF
  • Did you know that Plan 9 2.0 is coming soon from Bell Labs, and the main architect Rob Pike?

    Plan 9 is the next research version of Unix from the real programmers. Way superior to tired, old Unix clones.

    Plan 9 is a distributed, multiprocessor system from the start. It has the most elegant threading model (processes have the freedom to share resources like memory space selectively). Its distribution mechanism, with procedural file systems and union directories, provides language-independent, persistent network objects with inheritance.

    The new version is more Unix-compatible than the old one, which was maybe a little too much for an average non-educated hacker to grasp.

    Plan 9 has application programmer transparent cryptographic authentication and security at networked object / file access level. Any set of resources can be set up as a per process file name space to guarantee security of any binary.

    Plan 9 also integrates tightly with Inferno, which is a virtual networked OS and VM which is everything Java should have been, and available for a wide range of platforms, including Windowzes and Linux.

    http://www.cs.bell-labs.com/plan9/

    http://plan9.bell-labs.com/cm/cs/who/rob/

    http://inferno.bell-labs.com/inferno/
  • The Samba team likes Linus and is all in favor of Linux, but they PRE-DATE the Linux community and are compatible with many other systems. Although I cannot officially speak for them.
    In fact, there was briefly a Samba for NetWare downloadable from Netware.com - for people wanting to convert from NT to Novell - but it has been removed from their site because people were using it to convert from Novell to Linux/Samba!
    LDAP with an NDS back end is becoming the industry standard these days - all its competitors are in fact imitators - but there's no reason the Linux community couldn't make an LDAP/mySQL bastard that would serve the same purpose without the annoying per-seat licensing costs.
    Bob Hart stated (at Brainshare in Utah) that Red Hat would be very interested in funding development of open-source directory software, preferably with broad compatibility via LDAP.
    Jitsu (author of Pandora's encryption logic) could probably clone NDS if sufficiently motivated/funded. Not that I speak for him or NMRC either.
    --Charlie
    I am the Lorax, and I speak for the trees.
  • The current situation with .dot-files scattered all around the place works somewhat well when only a single person uses a non-networked computer.

    In any bigger networked system, with several servers, clients, networked printers, etc., you want one single unified system for configuring everything. You need to store the information in some kind of distributed database, for example with LDAP. Text files aren't up to the task because:

    • Insufficiently flexible permissions for modifying the configuration, either because the filesystem lacks ACLs, or whole files are not granular enough.
    • Difficult to inherit/replicate configurations, for say 20 identical clients.
    • Text file configurations easily end up with typos and inconsistent or duplicated data. A configuration database could be more strongly typed and could check referential integrity.
    • Allows for a flexible permissions system: let a user remove print jobs from the printer on his desk, or a teacher add user accounts for her classes on a certain server or user group.
    • Administrate everything without needing to log on to a dozen computers editing files all over.
    • Move around configurations and configured items in the tree easily. For example, imagine dragging the Apache object from server A to B and voila, you've moved your webserver to run on the other computer instead.

    Novell's NDS does pretty much all of this correctly, but it needs some "fixes". The free software community (and everyone else) needs something that's just that: free, as in both speech and beer, and not based on proprietary standards. That way all software can gradually move from the good old text files to a new, better system for the long run.

    Linus's idea of plain text files as an interface for configuring the kernel is still great; it's an easier way to interface with the kernel than binary files in /proc or ioctls. We just need a user-space "configd" that reads configurations from the global database and writes them to the various /proc files whenever the configuration database is changed, or maybe even reads /proc files when dynamic parts of the configuration database are read.
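
    A minimal sketch of what such a configd could look like, in Python. Everything here -- the /etc/configdb path, the dotted-key file format, the polling loop -- is invented for illustration; a real daemon would sit on an LDAP-style database and get change notifications instead of polling a flat file:

    import os
    import time

    STORE = "/etc/configdb"   # made-up stand-in for the global config database

    def load_store(path):
        """Parse 'net.ipv4.ip_forward = 1' style lines into a dict."""
        entries = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                key, _, value = line.partition("=")
                entries[key.strip()] = value.strip()
        return entries

    def apply_to_proc(entries):
        """Mirror each dotted key into the matching plain-text /proc/sys file."""
        for key, value in entries.items():
            proc_path = os.path.join("/proc/sys", *key.split("."))
            try:
                with open(proc_path, "w") as f:
                    f.write(value + "\n")
            except OSError as err:
                print("configd: skipping %s: %s" % (key, err))

    if __name__ == "__main__":
        last = None
        while True:
            entries = load_store(STORE)
            if entries != last:          # crude change detection; a real
                apply_to_proc(entries)   # configd would be notified instead
                last = entries
            time.sleep(5)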

  • The Windows registry uses a tree structure to store data. The individual nodes of that tree do indeed contain ASCII. The point, however, is that because it is a tree, it is easy to find information. A text file has no structure.

    The Windows registry is actually not the smart approach either. Better is to use a directory server (Novell does this). By using a remote server you can keep your configuration remote (Netscape uses this to implement roaming profiles).

    So, no, I don't have my head up my ass, and I fully realize that it is going to be impossible to convince the entire Unix community that their way of working with configuration info is far from optimal (to put it mildly). Using an editor to edit configuration files is a very primitive way of doing configuration. It requires that you know the file format (and as discussed before, file formats usually don't adhere to any standard at the moment) and makes it the user's responsibility to keep the files consistent.

    The reason for my rant is that I once spent a few days figuring out how to get my DeskJet working under Slackware. The HOWTO at the time was not very helpful, and it occurred to me that this was the most user-unfriendly way of configuring a printer I had encountered so far (mind you, this was 1996). Unfortunately the whole Linux system is constructed in a similar way. During boot, the system wrestles its way through an enormous spaghetti of init files. As a newbie, you can easily lose an afternoon figuring out which file to edit to set a stupid environment variable.
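
    As a rough illustration of the directory-server approach, here's how a client might pull its configuration over LDAP. This sketch assumes the third-party python-ldap module, and the server name and ou=config subtree are invented, not Novell's or Netscape's actual schema:

    import ldap

    def fetch_config(host, base_dn):
        """Pull every entry under base_dn and return it as a dict of dicts."""
        conn = ldap.initialize("ldap://%s" % host)
        conn.simple_bind_s("", "")   # anonymous bind; read-only config data
        results = conn.search_s(base_dn, ldap.SCOPE_SUBTREE, "(objectClass=*)")
        config = {}
        for dn, attrs in results:
            # python-ldap returns attribute values as lists of bytes.
            config[dn] = dict((k, [v.decode() for v in vals])
                              for k, vals in attrs.items())
        return config

    if __name__ == "__main__":
        # A login client could fetch a roaming profile this way.
        print(fetch_config("ldap.example.com", "ou=config,dc=example,dc=com"))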

  • by ralphclark ( 11346 ) on Tuesday May 09, 2000 @01:21PM (#1083213) Journal
    Unix is Unix. If you don't like it, then feel free to write your own operating system. Just don't suppose for a minute that anyone's going to let you fuck ours up.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • I don't know NT, but aren't administrators able to change people's
    security levels, add users, delete users, etc.? Once you have such
    administrator powers, you effectively have root. If not, then how are
    user permissions handled?
  • There seem to be a lot of people out there who really dislike the idea of a text-based, human-readable (and editable) configuration database. I'm going to address some of their points here.

    First, the easy one: Text files are bad because they can get messed up by typos.

    Um, right. And exactly how well does a binary file deal with typos?

    You're trying to solve the wrong problem. If I make a mistake editing my system configuration files directly, I am going to be in trouble regardless.

    The solution is to use an intelligent, front-end program which does sanity checking on the data entered. The difference is, a human-unreadable format cannot be fixed when the front-end program goes wrong. When the MS-Windows registry is corrupted, you reinstall the OS. Period. But when linuxconf screws up my /etc/fstab file, I can fire up emacs and fix it.

    That is the biggest reason why human-readable configuration files are vital: Because computers screw up, and I want to be able to fix them when they do.
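
    To make that concrete, here's a toy sketch of such a sanity-checking front-end in Python. The rules below are grossly simplified and the sample entry is invented, but the point stands: the validation lives in the tool, while /etc/fstab itself stays plain text that an editor can still fix by hand:

    def check_fstab_line(line):
        """Return a list of problems with one fstab entry (empty list = OK)."""
        problems = []
        fields = line.split()
        if len(fields) != 6:
            return ["expected 6 fields, got %d" % len(fields)]
        device, mountpoint, fstype, options, dump, passno = fields
        if not (mountpoint.startswith("/") or mountpoint in ("none", "swap")):
            problems.append("mount point %r is not an absolute path" % mountpoint)
        if not (dump.isdigit() and passno.isdigit()):
            problems.append("dump/pass fields must be numeric")
        return problems

    if __name__ == "__main__":
        entry = "/dev/hda1 /home ext2 defaults 1 2"
        print(check_fstab_line(entry) or "looks sane")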

    Now, let's move on to some of the other points: Text-based configuration data results in a performance penalty.

    Well, I guess this is technically true. But let's think about this. Parsing the configuration file is something that generally only needs to be done once, when the program initializes (or the file is changed). Most configuration files are small enough that this is really not a significant performance hit. Computers process data, often text data. They do it very well. Let's not get all worked up about asking them to do more of the same.

    Next: There is no standard format.

    Now, here the detractors have something. Unix evolved rather than being designed. The result is a hodge-podge of configuration formats. I am sure a great many of us would really prefer it if things were a bit more standardized, but they're not. And here that most evil demon of systems design, backwards compatibility, rears its ugly head once again. We can't change things without breaking everything -- programs and people alike.

    Unfortunately, there is no good answer to this problem, on any system. It would be easy enough to start rewriting things to use a more standardized format, but nobody does, because frankly, it isn't worth it. If it were, somebody would have done it by now. What we have works quite well, and the effort involved in changing everything is more than the effort needed to figure things out.

    It is worth pointing out that simply moving to a standardized format isn't going to alleviate the need to understand what you're editing before you edit it. I've seen enough misconfigured Macs and NT boxes to know that a pretty GUI or a rigid file format doesn't make a system fool-proof.

    The text-based nature of Unix's configuration database is actually a strength, here. You cannot comment the Windows registry. But I can (and do) add comments to all of my Unix configuration files. You can also use RCS, SCCS, or any other revision control system to keep track of what was changed, and why. Try doing that with NT.

    Now, let me address a few points by particular people:

    jilles writes: I think current linux distributions with all their environment variables, init scripts, shell scripts and ancient tools are far more complex than necessary to accomplish the flexibility and security they offer.

    I disagree. One of the reasons Unix has survived so long and adapted so well is that it is built on flexible tools, and easily modified and extended for new situations. Those "ancient tools" are still in use today because they work damn well.

    In my opinion an OS is nothing more than a kernel + application packages + configuration user data.

    You just described the entire computer software system for most cases, so I don't know what your point is.

    A good principle in software engineering is separation of concern. It is not practiced enough in linux because configuration files are applications which are partially stored as user data.

    Separation of concern is a design principle that states, roughly, that components should not concern themselves with duties that are not theirs. I fail to see how storing configuration data in shell scripts violates this principle.

    Not too mention that the kernel's functioning depends on a legion of scripts.

    Incorrect. The kernel does not require a single script to boot a running system. Issue "linux init=/bin/sh" at a LILO prompt sometime and you'll see what I mean.

    Now, overall service activity is controlled by a series of portable shell scripts because that is what shell scripts are for: Automating repetitive tasks. If they weren't controlled by scripts, you would have to write, maintain, and port a compiled program instead. Just because something is compiled doesn't mean it is better.

    Stefan writes: The current situation with .dot-files scattered all around the place works somewhat well when only a single person uses a non-networked computer.

    Um, ever hear of a networked home directory?

    Insufficiently flexible permissions for modifying the configuration, either because the filesystem lacks ACLs ...

    A lack of filesystem ACLs is a deficiency, and one that should be fixed. And it has been, on several commercial Unixes, and is coming Real Soon Now to the free ones too, or so I'm told.

    ... or whole files are not granular enough.

    Then you use more than one file.

    Difficult to inherit/replicate configurations, for say 20 identical clients.

    See cp(1) for details on that.

    Allows for a flexible permissions system: let a user remove print jobs from the printer on his desk,

    Um, this can be done now.

    ... or a teacher add user accounts for her classes on a certain server or user group.

    Same here. Granted, you'll need the right front-end tools, but that is a universal condition.

    Administrate everything without needing to log on to a dozen computers editing files all over.

    Look at rsync(1) and rdist(1), as well as network filesystems and NIS. (Granted, NIS has a number of design and implementation flaws, but they are not inherent in the design of Unix.)

    Move around configurations and configured items in the tree easily. For example, imagine dragging the Apache object from server A to B and voila, you've moved your webserver to run on the other computer instead.

    Here you should look at the mv(1) command.

    IN SUMMARY

    Under Unix, everything is a file. Filesystem access controls enforce security. File editors change things. File revision control tracks changes. And file management commands move things around. Why design separate interfaces for everything if you already have them there in the filesystem?
  • $150 is the humor value for them naming it 'msfux.exe' :>
