On Leading vs. Following In The NOS World 123
"Novell has done this, I log in with the Novell client for Windows every morning. As a result, certain network services are performed natively on both sides. If this were done, I'm sure most of us would readily use the extended abilities of a native client/server system. A system where servers are more than glorified disk controllers, able to execute remote applications as well as supply standard network services.
I would dread to think such an application would not be developed because it would not fit well into the current corporate wish-list. Let the suits follow for a change, it's their turn."
Following.. Worked for Microsoft (Score:1)
The only differences I get from the Linux daemons are stability, configurability, and security.
Client Integration (Score:2)
Look at openldap (Score:2)
Excellent criticism. (Score:1)
Someday I'll have to tell y'all the funny story of how my workstation accidentally started routing between our ethernet and token ring networks for our entire corporate WAN.
And it didn't do too crummy a job, either.
It can be done... BUT (Score:5)
--
Ill-posed question (Score:1)
If we rephrase the question as "when will open source start leading the way" - well, I don't think it needs an answer, unless it is asked by a clueless ZDNet "journalist".
The way it works.. (Score:4)
Possibly... (Score:1)
I pray that Linux does not lead the way......... (Score:1)
NFS, NIS, and Window$ programmers (Score:1)
If there were a concerted project to write Window$ network file services drivers that used the full potential of NFS and NIS, then Window$ boxen could finally join the rest of the world in *real* networks.
But I'm not gonna write it (out of ignorance, not elitism; I don't know that much about how Window$ does its networking and file mounting, and I don't want to take the time to learn).
---
"Elegant, Commented, On Time; Pick any Two"
not a bad idea (Score:1)
Linux is derivative, not original (Score:1)
When Linux was created, it was proudly dubbed a "Unix-like system", and that's why people were so excited about it: it gave them the impression of using a real, high-powered Unix on their PCs. Because of this desire to mimic Unix (Minix, really), not much new was added to the system, and it clung to the Unix world by supporting industry "standards" such as X, TCP/IP, gcc, etc.
Fast forward 10 or so years. Linux is still primarily considered "Unix-like", and the developers still look to the established Unix companies (Sun, IBM, SCO) for ideas. Linux is sort of like Microsoft in that respect: its "research department" is every other operating system. But by now, non-free Unix is dying after everyone has realized what a bad idea it was, and Linux is picking up the pieces. Unfortunately, while companies like SCO are dropping their flagship products in favor of Linux, the Linux developers are losing their ability to assimilate new ideas because no new ideas are forthcoming! This leaves them in a bad position: rather than playing catch-up, they are now leading (the dying remains of) the pack. Sure, XFree86, Samba, Wine, and the rest are great, but can they bring Linux into the next century? I think not.
What we need are truly original ideas, something which has been sorely lacking in Linux ever since Linus "borrowed" some Minix code to create his own little kernel. Something bold, which will announce to the world that Linux is big, that it demands to be followed rather than follow. Something marketable for high-end e-commerce, something for students interested in hacking the OS, something for everyone else.
What we need is Open Source Natalie Portman in the kernel. And we need it now.
Using Linux as a NOS (Score:5)
Regret for the past is a waste of spirit
why not? (Score:2)
The way new technologies become 'standard', whether formally approved by ISO or similar bodies or simply de facto, is for big businesses and other large organisations to adopt them. Corporate (America|UK|Europe) is already adopting Linux at server level for web serving, mail serving... It's a short step mentally from that to a directory service.
Let's say you're responsible at a management level within a company for web content. I don't mean you're the web server admin; I mean you're where the buck stops before the CEO. You want people across the company to be able to contribute relevant information to the website, which has been running happily on Linux for the past three years. Your server admin informs you that he has no intention of giving every Tom, Dick, and Harry in the company shell access to the server, so what are you going to do? What you need is some method of maintaining information on people, and allowing them access to the server solely for this purpose - an opening...
Mail again is a natural opening to directory services. If people are already getting their mail from a Linux box, why not extend it to serve any information on them as may be required internally, subject to all the usual security disclaimers of course...
All that is really required is for someone to start work on it - get a team of top-notch hackers on board and away you go. As a starting point, consult managers from the sort of corporation this could be targeted at to find out what they'd want/expect out of such a system. Believe it or not, you can apply commercial software development ideas to open source development.
--
Simply, No. (Score:4)
It's not that people don't want to fix this sort of thing; it's just that they'll never get the voice or support to do something like this. Go ahead. Mention the word 'registry' to a Linux zealot and see how it goes. You'll see what I mean. Anyone here remember how it went for Linus when he tried to allow some C++ inside the kernel around 0.99pl13? It was a disaster. No one wanted to wait out development time for proper C++ code; they just wanted UNIX.
Don't get me wrong, I like Linux, and I use Linux, as I have for 7 years now. I won't stop using Linux. It just bothers me that there is no organized group of users who are actually trying to make it the perfect OS instead of the perfect UNIX.
-Rich
Use the right tool for the job... (Score:1)
You needn't be ashamed to admit that Windows, Novell, BSD, etc. are particularly good at certain aspects of being a NOS.
Re:I pray that Linux does not lead the way........ (Score:2)
Apache does a good job of following standards, as do all of the system daemons. What makes you think we wouldn't stick with one standard? Slap it in an RFC and say 'Ye shall use this' is all it takes!
Re:NFS, NIS, and Window$ programmers (Score:1)
Further, the point of the question isn't file-and-print-services, it's file-and-print-and-DCOM-and-remote-Database-and-*
Been doing it for 20 years (Score:2)
Re:NFS, NIS, and Window$ programmers (Score:1)
technologies are very UNIX specific - e.g., NFS is VERY UNIX specific: designed basically as a block device. It's been done, but it is much more painful than first imagined.
Starting technologies on the UNIX side will always have the problem that UNIX will always be the one with an advantage on the feature set. The Windows client will always be the hacked one. Writing UNIX clients for Windows servers has the advantage that UNIX is so much more flexible that it's probably easier to write than the original Windows clients!
Switching it around is going to make the already broken-hearted Open Source Windows programmer (what kind of sad self-deprecating soul would do that stuff voluntarily?) suicidal. Let's not do that to them, please.
--
Re:Ill-posed question (Score:1)
I'd go a step further and say that Linux really has never "led the way" (of course there are certain projects, but as a whole, no). Linux itself is a clone of Unix functionality, nothing really innovative technically (no gee-whiz stuff, just re-implementation of other tech).
I disagree with "when will open source start leading the way"... and rephrase that to: when did open source STOP leading the way? Think of the old projects before open source was really called open source. Sendmail, bind, innd... all of these were produced BEFORE the open source craze; they were the pioneers, they led the way, they were the ones who made the rules. Now, for some reason, people seem to be cool with just copying commercial software functionality.
These days the way of the world is this: make a cool product, see lots of dollar signs, decide to keep it closed source for the additional income instead of sharing it; then the open source people see how cool it is, say "we need that", and start making copies.
To answer my own statement about when it lost its way... I guess my opinion is: once they found how easily they could make money off the same products they normally would give away. To head off the replies: open source can/does/will make money, but let's be honest, closed source tends to make larger amounts of money faster.
Spelling & grammar checker off because I don't care
LDAP (Score:1)
Re:Following.. Worked for Microsoft (Score:1)
Yeah - standards are great! Everybody should have one!
Re:Excellent criticism. (Score:1)
I've been told that an old version of NetWare did IPX/IP routing out of the box by default, and many sites had problems.
Cheers
--fred
Go ahead... (Score:3)
-JD
Re:Client Integration (Score:1)
So you got off your butt and wrote the software? (Score:1)
Maybe you'd like to help with one of the existing systems?
Like:
Libcfg
http://www.yelm.freeserve.co.uk/libcfg/
Gconf
http://cvs.gnome.org/lxr/source/gconf/
Libproplist
http://cvs.gnome.org/lxr/source/libPropList/
There are more. Part of the problem is that Linux software must be able to run/build on existing commercial Unix systems so the configuration management system must also be available on commercial systems with commercial applications, not just GPL'd applications.
standards compliance vs embrace + extend (Score:2)
squeezing an extra 10% of performance out of commodity hardware seems less valuable to me than knowing that your linux box will work with whatever sort of network you need to put it into.
all IMHO, of course.
--
blue
Re:Simply, No. (Score:1)
I don't get this criticism. Isn't security innately an `all or nothing' affair?
Re:Simply, No. (Score:4)
I think some of these are straw men:
I admit that I had a knee-jerk reaction against a "registry" - sorry, it's a conditioned fear and pain response :) A central configuration system would be neat, but on the other hand you would break compatibility with a lot of existing Unix applications which expect /etc, /proc, and so forth. I guess you could set up this database in a different directory and only new apps would know about it. Better make it flat text, though - I don't think a binary registry will fly very far.
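For what it's worth, the flat-text approach suggested above could be as simple as one key=value file per application under a new directory, left readable by ordinary Unix tools. A minimal sketch (the directory layout and file format here are hypothetical, not an existing standard):

```python
# Sketch of a flat-text configuration store: one key=value file per app
# under a dedicated directory. The directory name and syntax are invented
# for illustration; nothing here is an existing Linux convention.
import os

def read_config(base, app):
    """Read key=value pairs from a per-application flat text file."""
    settings = {}
    path = os.path.join(base, app)
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments, Unix-style
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```

Because the files stay plain text, existing apps could keep their own formats while new apps adopt the central directory, which is exactly the gradual migration the post has in mind.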
Does Windows NT ship with a JFS? I was under the impression that it didn't, although I'm sure to be corrected if I'm wrong. Linux isn't the first system to get a JFS, but it's not going to be the last either. And it may end up with two or three :)
Sounds like someone's been reading the Microsoft Myths about Linux page :) Have you ever heard of groups?
Well, it isn't necessary for every file, so why should it be necessary? That sounds like overhead that an application should handle if it needs it.
I'll be the first person to admit that Linux has problems, but I don't think that they're necessarily the ones that you pointed out.
To re-write or not to re-write??? (Score:1)
Case in point: MS's implementation of DS in W2K. They extended the schema halfway to hell and closely tied their OS to their implementation of DS. Nevertheless, any X.500-compliant client can access any of the info in MS Active Directory (especially now, since you can authenticate using Kerberos).
If we are just looking for a port, then check out OpenLDAP.
Re:Simply, No. (Score:1)
our OS doesn't have a central, consistent, configuration database, for apps and system resources alike
Thankfully.
No central configuration database for apps and system == no single point of failure
You say you have been using Linux for seven years. Perhaps you have had the luxury of not running MS Windows for seven years? I have been using Linux on my own machine for four years, but at work I have to use MS Windows. The only advantage of the Windows registry is convenience for programmers. The disadvantage is that if its structure gets corrupted, your system is fscked. It's a brain-dead idea, full stop.
Or is my judgement clouded by experience with this particular implementation? Are there other OSes that implement such a configuration database without getting it so badly wrong?
Re:So you got off your butt and wrote the software (Score:2)
Maybe you'd like to help with one of the existing systems?
Like:
Libcfg
http://www.yelm.freeserve.co.uk/libcfg/
Actually I would. I'd not heard of this. Looks Cool. I'll build it tomorrow. Thanks for the pointer.
There are more. Part of the problem is that Linux software must be able to run/build on existing commercial Unix
systems so the configuration management system must also be available on commercial systems with commercial
applications, not just GPL'd applications.
If the config system is GPL'ed isn't that done already?
-Rich
Re:Client Integration (Score:2)
Yes, Microsoft would like to take away business from Novell, so MS does *just* enough to barely operate.
...phil
Be patient, all will come with time. (Score:1)
The answer, of course, is "eventually". Look, an ISV's development effort is about making changes for change's sake, so that their customers can justify paying them again for what they just bought last year. Free software is about achieving a solution and then using that solution for as long as it's appropriate. So it is only natural for a company like Microsoft to propose change after change after change, hardly any of which is useful. The sheer volume of changes makes it necessary for competitors to follow along.
The free software community, on the other hand, figures out in advance what is needed to accomplish whatever tasks are at hand. The focus on the solution means that while the free software community proposes fewer changes, in the long run those changes are more likely to be useful and, therefore, to be adopted.
So, Microsoft and Novell will lead the dance for a while, but don't worry. There is a time for everything and the time for free software to call the tune is coming. Just keep running what works for you and the rest will just happen.
Following? No. (Score:1)
No, I think it is more like this: following, embracing, changing, taking over, and buying/licensing other standard protocols seems to be how Microsoft manages to be so damned great.
Or perhaps that is just how I perceive it.
Linux Directory Services surpasses WinNT & Nov (Score:1)
Just last night I thought about this. I've been thinking about it for a long time. Sure, Samba and NFS are good. However, Samba will ALWAYS be following the lead of Microsoft -- this cannot be helped. I could ramble on incessantly about how wonderful Linux Directory Services (TM GPL'd) could be and all the things it could do, but talk is cheap.
I've thought about this a long time and would like to find open source or GPL'd projects that are working on such a thing.
By the way, I have looked into LDAP, and "Linux Directory Services" probably should not be based on it.
If anyone has any links or such sites or constructive suggestions please - post away.
Immediately after posting this message I am getting to work on this. A friend quoted a phrase recently that feels very appropriate "There are people who talk about things, and there are people who get things done."
Re:NFS, NIS, and Window$ programmers (Score:1)
It should have been junked in the /bin years ago, along with X and a few other culprits. However, development of a lot of the fundamentals seems to have stagnated.
Maybe, it's the much reduced margins that workstation vendors have that is to blame. If so, things are not likely to improve. How do we fund blue-skies R&D in the free software world?
Not until Linux drives client-side business apps (Score:2)
My feeling is that server-side network standards emerge from a need on the client side. Where do those requirements come from? End users, of course.
I don't think that Corel 8, StarOffice or even the general interface is very mature yet. It certainly isn't broadly adopted.
Should that happen, the self-help aspect of Open Source would kick in, and you would start seeing people develop apps for their needs. For instance, multi-user spreadsheets and word processors. These exist, but aren't very good right now.
But network standards don't come from the top down. They go from bottom level user requirements, up the line to the standards you need to satisfy the users. Or put another way, plumbing development follows kitchen and bathroom requirements more closely than it does pump requirements. Both have to be satisfied, but only one will give you complaints from homeowners.
Re:Simply, No. (Score:1)
Re:I pray that Linux does not lead the way........ (Score:1)
I wasn't aware that you just slapped it into an RFC and it transformed into an edict.
Silly me!
Microsoft can't even follow standards... (Score:2)
Their record on conforming strictly to simple RFCs is abysmal. When they try to talk SMTP or some network standard like that, you end up with something that is almost, but not entirely, unlike what the standard requires. So every other vendor then has to add hacks and work-arounds for Microsoft's deficiencies.
Given that they can't get things like de jure standards right, what makes you think they are going to follow an innovation from the open source world well enough to make it a de facto standard?
More likely they will look at the idea and implement something quite different that does the same thing in a totally proprietary manner.
--
A "freaking free-loading Canadian" stealing jobs from good honest hard working Americans since 1997.
SLP (Score:1)
It's called slp. RFC 2608.
Re:standards compliance vs embrace + extend (Score:3)
Exhibit A: VBScripting.
--
Re:NFS, NIS, and Window$ programmers (Score:1)
I've even seen the editors at Linux Journal openly dis NFS on Linux, recommending that someone who wrote in asking about it instead adopt Samba. At the time it struck me as ironic, using a Microsoft protocol for Linux to Unix connectivity. But I guess on Linux one should never be surprised.
Re:Novell Client Integration (off topic). (Score:3)
One of the really cool features of the Novell NetWare Client for Windows 95 is "Automatic Client Update" (ACU). By just putting
in the appropriate login script, the Novell Client version is checked at login time and upgraded automagically if necessary. This trick is especially useful when installing new machines, because it will even upgrade from the Microsoft Client for NetWare Networks. All you have to do is install Windows 95 from CD, and after logging into a NetWare server once, you're automatically running the latest and greatest client from Novell.
However, Microsoft broke this feature in Windows 98. Trying to install Novell Client 3.x from a network drive causes the installation to fail with the errors
Copying the install files locally (or using a Novell Clients CD-ROM) works fine, but that is time-consuming to do at every workstation. These errors are caused by a bug in the Windows 98 netdi.dll file. See Novell's Technical Information Document TID 2946390 [novell.com]. Microsoft knows about this problem. They even have a fix for it. You need a specific version of the netdi.dll file (version 4.10.2029, size 317,840 bytes). This hotfix is referenced in Microsoft Knowledge Base article Q190656 [microsoft.com]. But you can't have it. If you want it, you have to call Tech Support and pay them $150 for an "incident". If you can convince them that all you needed was the hotfix, you might be able to get your money back, but don't count on it... There is a nice description of the problem of trying to get your money back at Trent University. [trentu.ca] Also, despite what the above Knowledge Base article says, this problem was not corrected in Windows 98 Second Edition!
Now, according to Infoworld, [infoworld.com] the next version of Windows, Windows Millennium Edition (ME), won't have any NetWare connectivity built in. Microsoft is going to remove it from the box. That will fix it! You can't use ACU to upgrade Microsoft Client for NetWare Networks, because you can't have Microsoft Client for NetWare Networks at all!
Okay, so I'm back to my conspiracy theories... Windows isn't done until NetWare doesn't run.
--
Re:Be patient, all will come with time. (Score:2)
Truly, it warms my heart to sign up for a mailing list or call up a newsgroup, and see people asking again and again: "Can I run Freeciv on Windows?" "Can I run LyX on Windows?" "Can I run the Bubble Load Monitor on Windows?"
Apparently, free software is not quite so bad as its critics claim.
--
Re:The very nature... (Score:2)
I'm not convinced that this trend must continue into the future. OSS is now very well placed in the server world, so it should be proportionately easy to provide a standard/protocol/service on OSS platforms that is truly useful, and which the CSS platforms would need to be able to support in order to be sold.
Of course, the easy CSS solution would be to just port the OSS service to the CSS platform, but that's not such a bad thing either (at least in the context of this discussion).
--
Re:Following.. Worked for Microsoft (Score:1)
Re:Simply, No. (Score:3)
It is this conservatism that makes Linux difficult to configure. Because of that, managing a Linux platform is more expensive than necessary.
I'm in favor of moving away from shellscript based config files towards a central LDAP based config system. Mixing code and configuration as is common today is a bad thing.
I'm against using text files because text files can be fucked up with typos and duplicate data. A good db-like system protects you from making those errors. Using XML would be an improvement over the current situation, but also a big mistake in my eyes, since XML is just as unsuitable for permanent storage of data as a normal text file.
I think current Linux distributions, with all their environment variables, init scripts, shell scripts, and ancient tools, are far more complex than necessary to accomplish the flexibility and security they offer. In my opinion an OS is nothing more than a kernel + application packages + configuration + user data. A good principle in software engineering is separation of concerns. It is not practiced enough in Linux, because configuration is mixed into applications and partially stored as user data. Not to mention that the kernel's functioning depends on a legion of scripts.
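As a rough illustration of the "db-like system" idea above (a sketch only; the table layout and option names are invented, not taken from any real distribution), a store with a schema and a primary key rejects the duplicate keys that a flat text file would silently accept:

```python
# Sketch: a schema-enforced configuration store using SQLite.
# The table and column names are illustrative; no real distribution
# stores its configuration this way.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE config (
        app   TEXT NOT NULL,
        key   TEXT NOT NULL,
        value TEXT NOT NULL,
        PRIMARY KEY (app, key)  -- one authoritative value per (app, key)
    )
""")

def set_option(app, key, value):
    # INSERT OR REPLACE means a duplicate key updates the existing row
    # instead of creating the second, conflicting entry a text file allows.
    conn.execute("INSERT OR REPLACE INTO config VALUES (?, ?, ?)",
                 (app, key, value))

def get_option(app, key):
    row = conn.execute("SELECT value FROM config WHERE app=? AND key=?",
                       (app, key)).fetchone()
    return row[0] if row else None
```

The point is not SQLite specifically; it is that any structured store can enforce uniqueness and basic validity at write time, which plain text cannot.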
Re:Possibly... (Score:1)
Strong data typing is for those with weak minds.
Re:I pray that Linux does not lead the way........ (Score:1)
Win32 platforms for many years. The problem is, there are significant differences between how UNIX works and how Win32 platforms work. These differences have caused no end of problems getting NFS to work well with Win32 - Microsoft has been very good at making their platforms non-conformant to pretty much every standard you might mention, so much the better to protect their monopoly. Do it our way or go away...
From what I've seen, it is easier to get a UNIX platform to accept the idiosyncrasies of SMB than to get the Win32 platform to accept the idiosyncrasies of UNIX file systems. And so, the commercial versions of NFS for Win32 have slowly drifted to the side, replaced by SMB on UNIX/Linux.
This might be different if Microsoft had open source (as might happen with the DOJ case). Perhaps then, when all is known about Win32, NFS and other network service support will be simpler.
And while we are at it, can someone replace the dog that is NFS??? Please!?!?!
Re:Simply, No. (Score:1)
I was wondering... suppose you had a filesystem front end available for the configuration database (I refuse to call it a registry). You mount the filesystem on /etc and when you read, say, /etc/hosts it appears as a text file but is actually read from the database. (Sort of what /proc does for some kernel data.) Application configuration data would be manipulated via the database front end (whatever that might be -- SQL perhaps) and would be readable that way, but it would also be readable as a text file in the format desired by the application.
This seems to me like an approach that would allow migration of configuration data to a managed system without modifying the applications at all! The only kicker would come from applications that not only read system configuration files but modify them as well. That, I think, is a relatively small number of applications. Most that manage configuration files do so on a per-user basis in files under the user's home directory. There's no particular reason to try to bring those files under central management, so leave well enough alone.
You would also want the filesystem to allow "normal" files for those applications whose configuration wasn't yet merged into the database or that themselves update their configuration file.
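The "text file view over a database" step the posts above describe might look something like this sketch. A real implementation would need a filesystem layer (something FUSE-like) to expose it under /etc; this only shows the rendering from structured records into /etc/hosts format, with a record layout invented for illustration:

```python
# Sketch: render an /etc/hosts-style text view from database-like records.
# The record layout (ip / name / aliases) is invented for this example;
# a real system would pull rows from the configuration database and a
# filesystem layer would serve the rendered text at /etc/hosts.

hosts_db = [
    {"ip": "127.0.0.1",    "name": "localhost",  "aliases": []},
    {"ip": "192.168.1.10", "name": "fileserver", "aliases": ["nfs"]},
]

def render_hosts(records):
    """Produce hosts(5)-format text: IP, canonical name, then aliases."""
    lines = []
    for rec in records:
        fields = [rec["ip"], rec["name"]] + rec["aliases"]
        lines.append("\t".join(fields))
    return "\n".join(lines) + "\n"
```

Applications that only read /etc/hosts would see an ordinary text file, while management tools query and update the underlying records, which is exactly the migration path the post describes.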
Re:Simply, No. (Score:1)
Been working on this for over four years (Score:2)
I've been working on a directory services management system for over four years now. It works on Linux, BSD, Solaris, AIX, HP-UX... a fellow here has even got the server running on OS/2. The system's GUI client works on all of the above, plus Macintosh and all flavors of Win32.
It's called Ganymede [utexas.edu], and it is a metadirectory system, which is to say that it is an object database with a sophisticated permissions system that accepts changes and turns around and updates NIS, DNS, Samba, our NT PDC, our routers, Sendmail, etc.
Ganymede is designed to be a smart server, where the adopter can define their own network schema and write plug-ins that customize how various kinds of objects in the server behave and how they connect to each other. It's all written in Java, so it is quite robust and portable.
It's not designed to replace something like OpenLDAP or DNS or NIS, it's designed to provide sophisticated management for all of the above. At our lab, we have a dozen technical groups that have their own resources, but we share directory services, and Ganymede is what manages the whole show.
It has been a few months since I've made a release of Ganymede, but development hasn't stopped, by any means. Lots of performance and stability improvements on the server have been achieved, and this week I'm writing a Ganymede client that can take XML from external sources (Perl generated, etc.) and load that data into Ganymede. I expect a 1.0pre1 release will come out by the end of the month.
Re:I pray that Linux does not lead the way........ (Score:2)
I don't know that this is entirely accurate. Microsoft's policy, as I see it, has been to break standards and use its position to force acceptance of the new version.
The latest fiasco with Kerberos is an excellent example of this, and to a lesser extent WINS as a form of DNS. There are many others.
Martin Burke
My Linux Articles [themestream.com]
Re:standards compliance vs embrace + extend (Score:1)
I agree.
The main Linux kernel tree should remain general and standards compliant. People who require more specialized features, such as real-time response or extra security, can always build those features on the existing kernel. I mean, that's one of the great strengths of open source, isn't it? Not everyone requires a real-time kernel, and those who do should know how to patch a kernel. At least I don't see any need to include every niche feature and bloat the main tree (anyone else annoyed at the current size of the kernel source tarballs? 20 MB?!). The "Keep It Simple, Stupid" principle is a good one.
Re:Linux will always follow (Score:1)
It took MS more than 10 years to create DOS and turn it into Windows, so don't think that linux/BSD have to follow for eternity.
The only disadvantage to Open Source is that MOST of the software for it has to be written from scratch, reverse engineered, etc...
I think that as time goes on and more people and companies contribute you will see Open Source catch up and eventually surpass the rest. INCLUDING MS. After all, MS went from DOS to where they are now. It won't be easy for Open Source to infiltrate both the desktop and server markets and it won't happen overnight, but MS didn't have an easy time either.
Re:Simply, No. (Score:2)
Users/groups is far from a joke, although it does have problems and limitations. Capabilities are coming. Some people are pushing for them to be in 2.4 (at least as experimental), but definitely in 2.6.
It just bothers me that there is no organized group of users who are actually trying to make it the perfect OS instead of the perfect UNIX.
There are plenty of groups trying to make "the perfect OS", (of course, all with different opinions of what 'perfect' means...) but Linux is derived from the concepts in UNIX, asking it to become something else means that it is no longer Linux.
And some of us think that the fundamental concepts of Unix are pretty close to perfect as is. ;)
Here we are in the year 2000 and our OS doesn't have a central, consistent, configuration database, for apps and system resources alike.
Why is this a criterion for "perfect"? As other /.ers point out, a "registry" leads to a single point of failure, reduces maintainability, breaks lots of standards, etc... There has been lots of talk on lkml regarding this topic, and generally people seem to like the idea of a central, text-based repository, but much of it is a userspace issue, and a HUGE undertaking at that.
The reason so-called "Linux Zealots" fly off the handle when people bring up registries a la win32 is that it's been talked to death, and the majority of people who know this stuff think it's a bad idea.
This is not an OS that leads.
This was a choice. They weren't out to build a completely new OS; they were out to build a free Unix-like OS. I would assume that once clonable features run dry, Linux will continue on at its present pace, developing new features along the way.
I am absolutely certain that there are plenty of new features already in development or already built that HAVE led. I don't know what they are off hand, but I'm certain that other /.ers can give examples.
Why not? (Score:1)
Microsoft charges $150 for its Services for Unix. It includes support for shell commands, Perl, and NFS.
I don't see how they did much more than convert existing open source code and slap on a $150 price tag. Why not do the same thing as a way to fund an open source project?
FYI, I have hunted far and wide for a low cost NFS client for Windows. AFAIK, there isn't one. So right there would be something that could be an open source project funded by selling the same thing on the Windows platform.
Re:Simply, No. (Score:2)
I guess I would make the counterargument that the operation of a Unix system based on small, flexible text-based tools is a strength, and you don't necessarily have to have complexity as well. Granted, the current structure of /etc, /proc, and wherever the apps decide to toss their config files is all over the place, but it doesn't have to be that way.
If you're going to retool the system from the ground up to be DB/directory oriented, wouldn't it just be simpler to update the apps to use specific directories under /config for example, mount /proc under /config, and move /etc into there as well? If you don't like all of the shell scripts, you can combine and/or replace them and put them all in /config/scripts or whatever (thus separating code and configuration). Then you can still use text files (for when your fancy DB tools don't work right or don't give you the whole story), but you would have a directory of configuration information, organized more cleanly than it is today.
Of course, this would require major application retooling and might make some applications non-portable to other Unices. That's probably why such an effort hasn't occurred yet - portability and familiarity between Unix-like systems adds more useability for the administrator than any amount of clean-up which breaks that familiarity. But if a cleaner configuration style is enough of an improvement, people might switch anyway.
Re:Been doing it for 20 years (Score:1)
A lesson about the NOS that is NOVELL (Score:1)
Novell gained popularity because of the lack of networking capabilities within the Win 3.x OS. If I remember the story correctly, MS didn't expect such a demand for networking when Win 3.x first came out. Thought of that way, Novell was originally a third-party hack to get Windows users networking in the early 90s.
Nowadays, how many people use Novell for their networking needs? Not as many as a few years ago because of the "improved" networking capabilities found in the MS OS and Server packages.
An NOS for linux? How about making the client OS more robust.
Re:Simply, No. (Score:1)
I think the most important thing is that UNIX is capable of being extended to become the perfect OS. Windows 9x is an OS on top of DOS; though it's hard to include it in a discussion of the 'perfect' OS, it shows how a higher layer can become the OS. There is a saying that something need not be fully featured or perfect, just simple to implement and maintainable, so others can build upon it. That's what an OS is all about.
I am not a systems programmer, though I can understand the code. There are a number of engineering, and philosophical issues that go into it.
We do lead, where we can (Score:1)
We have no choice but to try to be interoperable with the garbage they throw out. The reason Linux must be tweaked/twisted/etc. is the protocol hoops Microsoft makes us jump through in order to do exactly what you're talking about (make a Linux Directory Services that interoperates with other vendors).
--Twivel
Integration Is The Issue (Score:2)
The problem is that starting up the LDAP daemon does not intrinsically provide you with any useful functionality. You have to have some separate setup done to put some useful data into the LDAP database.
Thus, it's not terribly useful to have the LDAP server there unless it is usable for (say) user authentication, which would mean that you need some code that pushes data from (say) /etc/passwd into the LDAP database.
Likewise, an LPD server is devoid of functionality until there is some information pushed into /etc/printcap to configure some printers. And from there, for this to be of use to SAMBA users, some configuration has to be pushed into the SAMBA configuration to "publish" the print queues from /etc/printcap there.
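To make the /etc/passwd-to-LDAP push concrete, here is a rough awk sketch; the base DN and object class are assumptions for illustration, and a real migration would use purpose-built tools rather than this one-liner. A sample file stands in for /etc/passwd:

```shell
# Hedged sketch: convert passwd-format lines into LDIF entries that an
# LDAP server could import for user authentication.
# "ou=People,dc=example,dc=com" is a placeholder base DN.
cat > /tmp/passwd.sample <<'EOF'
alice:x:1000:1000:Alice A.:/home/alice:/bin/sh
EOF
awk -F: '{
    printf "dn: uid=%s,ou=People,dc=example,dc=com\n", $1
    printf "objectClass: posixAccount\ncn: %s\n", ($5 ? $5 : $1)
    printf "uid: %s\nuidNumber: %s\ngidNumber: %s\n", $1, $3, $4
    printf "homeDirectory: %s\nloginShell: %s\n\n", $6, $7
}' /tmp/passwd.sample > /tmp/users.ldif
cat /tmp/users.ldif
```

The resulting LDIF could then be loaded into the directory, which is exactly the kind of glue step that no daemon does for you out of the box.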
There in effect need to be some "self-discovery" mechanisms that search the system for capabilities, and "publish" them as "public" network services.
The big problem with this is that it is likely to defy standardization.
Re:SLP (Score:2)
It's called Service Location Protocol, and as far as I know there is an implementation for Linux. I don't have the URL to hand - just do a search for it. I think it is linked from the SLP working group homepage at the IETF (www.ietf.org).
Re:Simply, No. (Score:1)
Microsoft describes NTFS as a "recoverable" FS, using transaction logging with cached lazy-commits. Checkpoints in the transaction log determine what is committed or rolled back in the event of a crash. Which, of course, never happens.
Every day we're standing in a wind tunnel
Facing down the future coming fast - Rush
Integration is the key (Score:2)
The issue is making something that won't break in the enterprise environment. You need to be able to have seamless access to Novell and NT servers. Theoretically, both Novell and Microsoft are making it easy by supporting LDAP for directory information, and with some careful work with both samba and ncpfs, you could tie it all together pretty well. This is the issue--I could make it work but don't have the time to write the glue code necessary.
No matter what, for Linux to make it in the enterprise you'll need the ability to make single sign-on a reality, and to have the "logon to the desktop" paradigm that the Microsoft desktop OSes support (at least with the Novell client). To be honest, Novell is working harder at making this work than Microsoft is. Novell's already got the NDS solution on Linux--where's the Microsoft Active Directory implementation?
--Mike
Proc can be a registry (Score:2)
Proc is well on its way to being a registry, except one that doesn't suck. All it needs is persistent storage.
--
Question (Score:2)
----------------------------
Re:Novell Client Integration (off topic). (Score:2)
That's funny...they had a link on the page for article Q190656 [microsoft.com] that took me to another page from which the updated file could be downloaded. Here's a direct link for netdi.dll [microsoft.com]. No phone call needed, no $150 spent.
Re:Using Linux as a NOS (Score:2)
That would have been okay, except that it didn't go on to explain that Linux, while widely ported, is native to the i386 family and most widely used on Intel processors.
Re:standards compliance vs embrace + extend (Score:1)
standards bodies vary in their speed of change and the degree to which they are on the 'leading edge.' in some cases, sticking to a standard can mean that you miss a lot of good functionality, but i think those cases are rare.
more than aiming for the least common denominator, linux, or anything based on standards compliance, should aim for the greatest common factor. i understand where you could make the LCD/GCF mistake tho, as i speak a peculiar dialect of geek, and those two phrases do sound a lot alike.
--
blue
Linux is Gearing to Lead.. (Score:1)
Re:I pray that Linux does not lead the way........ (Score:1)
Guy wants to call his mommy and Linux is developed and advocated by 14 year olds.... OK.
"MS either makes standards, or follows them."
MS usually adopts standards and then implements them with a twist that makes everyone who follows the 'standard' incompatible.
"a global ...uneducated [for the most part] group of hackers...."
I'm trying to decide which attribute makes this statement an insult (reading contextually).
Please, don't moderate me up (hey, it worked so far for this guy).
carlos
This could never happen (Score:1)
Opensource development isn't commerce-driven. We don't invent things we don't need and then try to find ways of making ourselves need them; we tend to innovate at a lower level, in implementations of things we need (or would like) now.
Opensource gave us useful innovations like the apache "ProxyPass" directive. It was a great idea and solved all sorts of problems at ISPs. Closed source gave us ASP. We already had a thousand and one ways of producing dynamic web content, but after the ASP marketing hype, ISPs are now scrambling to catch up with a unix-based implementation of this "innovation" to try to avoid using bloody NT.
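For anyone who hasn't met it, ProxyPass maps a URL path on the front-end server onto a back-end server, which is what made it so handy at ISPs. A minimal fragment, with hostnames invented for the example:

```apache
# Minimal reverse-proxy illustration; backend.internal is a placeholder
ProxyPass        /app/ http://backend.internal:8080/app/
ProxyPassReverse /app/ http://backend.internal:8080/app/
```

ProxyPassReverse rewrites redirect headers coming back from the back end so clients never see the internal hostname.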
The moral here is that something doesn't have to look pretty or invoke a new protocol just to be innovative. The GNU OS desktop is far more advanced than anything M$ ever produced.
And Verily, Linus did spake unto the crowds... (Score:4)
"I'm against using text files because textfiles can be fucked up with typos and duplicate data. A good db like system protects you from making those errors. Using XML would be an improvement over the current situation but also a big misstake in my eyes since XML is just as unsuitable for permanent storage of data as a normal text file."
In that case, are you considering a binary file, or some kind of registry system? If so, check out the rant Linus went into over proc & devfs issues;
"Guys, remember what made UNIX successful, and Plan-9 seductive? The "everything is a file" notion is a powerful notion, and should NOT be dismissed because of some petty issues with people being too lazy to parse a full name.
The same is true of ASCII contents. Binary files for configuration data are BAD. This is true for kernel interfaces the same way it is true of interfaces outside the kernel. I tell you, you don't want the mess of having things like the Windows registry - we want to have dot-files that are ASCII, and readable with a regular editor, that you can do grep's on, and that can be manipulated easily with perl. Think of..."

On a serious note, just because Linus said it doesn't make it universally correct...though he does have a point.
I remember working on an old DOS program where line endings and file endings caused us all sorts of headaches in ASCII files. Until we handled them consistently, we often ended up with odd problems parsing text configuration files. Once that was done, the headaches went away -- without resorting to some obscure binary file format only our program could touch.
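The usual cure, then and now, is to normalize line endings before parsing rather than invent a new format. A minimal sketch:

```shell
# The classic fix: strip carriage returns before parsing, so a config
# file edited under DOS behaves exactly like a Unix one.
printf 'color=blue\r\nsize=10\r\n' > /tmp/dos.conf
grep -c 'blue$' /tmp/dos.conf || true   # 0 matches: the stray CR gets in the way
tr -d '\r' < /tmp/dos.conf > /tmp/unix.conf
grep -c 'blue$' /tmp/unix.conf          # 1 match after normalizing
```

One pass of tr at load time, and every tool downstream sees clean text.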
Re:Simply, No. (Score:1)
Wouldn't it be cool if everyone used XML? I'm sure someone will point out what's wrong with my idea (this is Slashdot after all), but some of the benefits I see are:
1) Is both machine and human-readable.
2) Many XML parsers already exist, so there's no need to write one for your app. Maybe someday a single XML lib will become the de facto standard on all distros.
3) Makes it easy to write a GUI configurator.
4) Makes it easy for apps to pull config data from other apps.
Anyway, I could be entirely wrong. I've been wrong before.
I don't necessarily think a registry is such a bad idea, but I agree that it should be text-based instead of binary.
Also, NT does ship with a journaling filesystem (NTFS). I don't know any details, but I have heard claims that it is lacking in several areas as compared to XFS or JFS.
Re:Simply, No. (Score:1)
www.lids.org [lids.org]
Re:Linux is derivative, not original (Score:2)
Re:I pray that Linux does not lead the way........ (Score:2)
Maybe you should converse with some of the people doing this work. There aren't too many "uneducated ... hackers" working on it so far as I can tell. Virtually every Linux hacker I know has formal education and works with Linux as a hobby. (And since it's a hobby they're more inclined to do it right than to ship something by June 1st, if you get my drift.)
In the end, though, who cares who wrote it so long as it gets the job done? I mean, you're assuming that the guys who put NT together were well educated, that any education they had was actually useful, and also that even if both are true they'll do a good job. That's a lot of very questionable assumptions, particularly about employees who are known to have built MFC and that brain-dead FIFO page replacement algorithm used in NT. (Ok, the latter made a certain amount of sense on the VAX. But couldn't they have done something better now that they have page reference bits?)
Now, bunches of NT people are sitting there thinking that I'm one of those Linux loonies, and there's certainly some bias on my part towards UNIX, but I was writing articles for UNIX people about the viability of NT back when NT was considered a joke by pretty much everyone in the business. NT is a damn fine workstation OS, particularly in a world where the majority of software is written for Windows, and I still use and recommend it in a lot of situations. Back in 1994 I figured that it'd decimate the UNIX workstation market -- and it did.
So NT isn't poison to me, but I have serious reservations about it as a server, most particularly because its stability isn't so hot, but also because I haven't been all that fond of how much I've had to spend to get extra software to do things that UNIX has done out of the box for years.
Yea, yea, I hear you saying "Win2K fixes the stability problem." So Microsoft claims, and maybe it's even true, but given what it's doing its hardware requirements are a little out of hand and the cost ... well, I think Microsoft is taking a lot of people for a ride. For a lot of server functions you can buy the OS and hardware for less than Win2K alone.
Then again, I'm educated enough to realize that I have a choice in the matter, and perhaps that's the real threat of Linux. It's not the 14 year olds, it's the guys who are smart enough to be comfortable with not using Windows. There are a lot of those guys, both old-timers and recent graduates, in IT shops and software development houses. Those are the guys who made Linux grow so much in the server space last year.
Maybe Linux really is made largely by 14 year olds and I've just not run into them. So what if it is? It's cheap, it's stable, and it has a hell of a lot of functionality. It's not always the best choice for the job, but you're stupid not to at least consider it.
Similarly, sometimes you have to bend over backwards to get Linux to do the job. This is particularly the case for a lot of specialized applications. So look around you and see what works best for what you have to do.
I suspect that for a lot of people that'll be a mix of OSs. It certainly is for me.
jim frost
Re:Simply, No. (Score:1)
Can this be done with Linuxconf? I'm not too familiar with how it works, but I've used it for setting up networking, etc. rather than investigating Mandrake's rc files to find out where to add the necessary info. It appears to have a plugin-oriented architecture, although I've never dealt with it much.
I figured that I would get corrections about that :) I stand corrected.
Re:Be patient, all will come with time. (Score:1)
On the enlightenment mailing list we were getting these questions all the time 3 years ago!
Innovation (Score:1)
One of the weakest points of GNU/Linux is that there is really no innovation in any part of the system. It can be argued that devfs and a few other kernel features are new and unique to Linux, but you can probably count these features on one hand.
GNU/Linux (and I am also including the applications bundled with the distributions) does not demonstrate any innovation. Every application is an effort to mimic something already developed in Microsoft Windows or another operating system. There are a few exceptions to this rule (scientific and server software), but for the most part this holds true. Another weak point is the XFree86 project. Instead of developing video drivers for the system, drivers are built specifically for the application. This means that we are going to be stuck with XFree86 forever. The Berlin Consortium has set out to solve problems in X and to add new features (alpha transparency), but without drivers the project is destined to fail. And if you say that X is "good enough," well, "good enough" never succeeds unless it is the only option.
So, what then is the solution? I wish I knew. Most professional organizations devote a lot of their resources to research & development. As far as I know, there are no research & development groups for GNU/Linux. This is beginning to change with the aid of corporate interests, but it will take years. I mean, we still do not have a journaling file system!
One other thing to notice is that Bell Labs recognized in the 1980s that UNIX was riddled with problems, and so they began work on Plan 9, which later became Inferno. The only thing they took from UNIX was treating devices as a file system. So when will the rest of the community realize that we are trying to repair something that needs to be redesigned?
Re:Simply, No. (Score:1)
Re:SLP (Score:2)
http://www.srvloc.org/index.html
and
http://playground.sun.com/srvloc/slp_white_paper.h tml
for the SLP home page, and an informative white paper.
t_t_b
--
Linux is not all of opensource (Score:2)
Nice try but your assumptions are showing (Score:1)
Implicit in your comment is the assumption that NT is always the tool that works. This is a false assumption. Sometimes there is more than one tool that can get the same job done equally well, and then it comes down to personal preference and efficiency. MS says that "our way is the only way" regardless of whether or not there may be individuals who may be more efficient working in a non-MS sanctioned fashion. I personally would like to see enough interoperability standards set up so that people can use whatever they like the most and is best tailored to the way they like to work. The idea is to give everyone a choice, not reduce their choices for the sake of dysfunctional conformity.
As for the rest of your comment, it sounds like you're blowing a lot of hot air. You either don't really understand the problem at hand or you just wanted to make a self-serving post. Either way, your name-calling and unsupported arguments don't add much to the discussion.
Re:I pray that Linux does not lead the way........ (Score:1)
WINS is NOT a form of DNS. It is a hack to get around the non-routability of NetBIOS. It does match names to IPs but it matches NetBIOS names, not DNS names. It is intended specifically to assist the master browser/browsing network in building the master browse list (what you see when you click "network neighborhood"). It is in a parallel pipe to DNS -- similar task in a different environment, with some of the same structures (because name resolution is pretty standard no matter how you do it).
Aetius
Re:Simply, No. (Score:1)
Very unfair. The term "lie" indicates that I was deliberately misinforming people. That is certainly not true. I was using the term that the people I've seen talk about this Linux feature use. I will admit I have not spent the time to really understand "capabilities" or "privileges".
You are welcome to cite references to your distinction between "privileges" and "capabilities".
A few links:
Pavel's capabilities page [mff.cuni.cz]
Linux Weekly News listing [lwn.net] of Linux capabilities as of 2.2.13.
Secure-programs-how to [unc.edu] contains a lot of security related information, including references to the POSIX standards. The POSIX information looks a little dated though.
This link [linuxcare.com] from kernel-traffic indicates that there are several different concepts of what "capabilities" are, and gives some details about what each style consists of.
Let me be clear, I don't know much about capabilities, but I know that they are talked a LOT about on lkml. Simply calling me a liar and saying that it's "privileges" not "caps" doesn't really help educate anyone.
Leadership: Not necessarily a good thing. (Score:2)
There are many ways Linux could innovate and jump ahead of the pack. But that's not necessarily a good thing.
Right now, Open Source has to play catch up because there are serious areas which it is deficient in. It is tempting to postpone development in those areas, or to begin cool new development in other areas but that isn't what we need.
Let other companies take the risks and fight the big battles. I'm more than content to have Linux take the winning protocol/standard/whatever and implement it better than the commercial OS that championed it.
But I don't object to anyone doing what Open Source is about: Scratching an itch. If someone needs a revolutionary new way of sharing data between clients, or a revolutionary new web application platform, be my guest! Innovate to your heart's content. Do it because you need it, but don't do it just because you want to be ahead of Microsoft.
-JF
Plan 9 2.0 is coming (Score:2)
Plan 9 is the next research version of Unix from the real programmers. Way superior to tired, old Unix clones.
Plan 9 is a distributed, multiprocessor system from the start. It has the most elegant threading model (processes have the freedom to share resources like memory space selectively). Its distribution mechanism, with procedural file systems and union directories, provides language-independent, persistent network objects with inheritance.
The new version is more Unix compatible than the old one, which was maybe a little too much for an average non-educated hacker to grasp.
Plan 9 has application programmer transparent cryptographic authentication and security at networked object / file access level. Any set of resources can be set up as a per process file name space to guarantee security of any binary.
Plan 9 also integrates tightly with Inferno, which is a virtual networked OS and VM which is everything Java should have been, and available for a wide range of platforms, including Windowzes and Linux.
http://www.cs.bell-labs.com/plan9/
http://plan9.bell-labs.com/cm/cs/who/rob/
http://inferno.bell-labs.com/inferno/
Samba is not a "linux community" project (Score:2)
In fact, there was briefly a Samba for NetWare downloadable from Netware.com - for people wanting to convert from NT to Novell - but it has been removed from their site because people were using it to convert from Novell to Linux/Samba!
LDAP with a NDS back end is becoming the industry standard these days - all its competitors are in fact imitators - but there's no reason the linux community couldn't make an LDAP/mySQL bastard that would serve the same purpose without the annoying per-seat licensing costs.
Bob Hart stated (at Brainshare in Utah) that Red Hat would be very interested in funding development of open-source directory software, preferably with broad compatibility via LDAP.
Jitsu (author of Pandora's encryption logic) could probably clone NDS if sufficiently motivated/funded. Not that I speak for him or NMRC either.
--Charlie
I am the Lorax, and I speak for the trees.
Why textfiles for configurations is not an option (Score:2)
In any bigger networked system, with several servers, clients, networked printers, and so on, you want one single unified system for configuring everything. You need to store the information in some kind of distributed database, for example with LDAP. Textfiles simply aren't up to the task.
Novell's NDS does pretty much all of this correctly, but it needs some "fixes". The free software community needs (and the rest too) something that's just that, free as in both speech and beer, and not based on proprietary standards. That way all software can gradually move over from using the good old textfiles to a new better system for the long run.
Linus's idea with plain text files as an interface for configuring the kernel is still great, it's an easy way to interface with the kernel, easier than binary files in /proc or ioctls. We just need a user-space "configd" that reads configurations from the global database and then writes that to the various /proc-files whenever the configuration database is changed, or maybe even reads /proc-files when dynamic parts of the configuration database is read.
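A skeleton of that "configd" idea fits in a few lines of shell; the key names and the flat-file store are invented for illustration, and a scratch directory stands in for /proc so the sketch runs unprivileged:

```shell
# Skeleton "configd": copy values from a central store (here a flat
# file standing in for a distributed database) into /proc-style files.
# Key names are invented; a scratch directory stands in for /proc.
proc=$(mktemp -d)
store=$(mktemp)
echo "net.ipv4.ip_forward=1" > "$store"
while IFS='=' read -r key value; do
    path="$proc/sys/$(echo "$key" | tr . /)"   # net.ipv4.x -> net/ipv4/x
    mkdir -p "$(dirname "$path")"
    echo "$value" > "$path"
done < "$store"
cat "$proc/sys/net/ipv4/ip_forward"   # 1
```

A real daemon would watch the database for changes and re-run this translation, and could read dynamic /proc files back the other way.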
Re:Simply, No. (Score:2)
The Windows registry is actually not a smart thing either. Better is to use a directory server (Novell does this). By using a remote server you can have your configuration remote (Netscape uses this to implement roaming profiles).
So, no, I don't have my head up my ass, and I fully realize that it is going to be impossible to convince the entire unix community that their way of working with configuration info is far from optimal (to put it mildly). Using an editor to edit configuration files is a very primitive way of doing configuration. It requires that you know the file format (and, as discussed before, file formats usually don't adhere to any standard at the moment) and it makes it the user's responsibility to keep the files consistent.
The reason for my rant is that I once spent a few days figuring out how to get my DeskJet working under Slackware. The HOWTO at the time was not very helpful, and it occurred to me that this was the most user-unfriendly way of configuring a printer I had encountered so far (mind you, this was 1996). Unfortunately the whole Linux system is constructed in a similar way. During boot, the system wrestles itself through an enormous spaghetti of init files. As a newbie, you can easily lose an afternoon figuring out which file to edit to set a stupid environment variable.
Re:Simply, No. (Score:3)
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Re:Simply, No. (Score:2)
people's levels of security, add users, delete users, etc.? Once you have such administrator powers then effectively you have root exploits. If not, then how are user permissions handled?
The case for text and files for config data (Score:3)
First, the easy one: Text files are bad because they can get messed up by typos.
Um, right. And exactly how well does a binary file deal with typos?
You're trying to solve the wrong problem. If I make a mistake editing my system configuration files directly, I am going to be in trouble regardless.
The solution is to use an intelligent front-end program which does sanity checking on the data entered. The difference is, a human-unreadable format cannot be fixed when the front-end program goes wrong. When the MS-Windows registry is corrupted, you reinstall the OS. Period. But when linuxconf screws up my
That is the biggest reason why human-readable configuration files are vital: Because computers screw up, and I want to be able to fix them when they do.
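To make the front-end argument concrete, here is a tiny validating wrapper; the field name and limits are invented for the example, and the point is that the stored file stays plain text, so a human can still repair it by hand:

```shell
# A tiny "front end" that sanity-checks input before writing, while
# keeping the stored format plain, hand-repairable text.
# The field name and range here are invented for illustration.
set_port() {
    file=$1 port=$2
    case "$port" in
        ''|*[!0-9]*) echo "error: not a number" >&2; return 1 ;;
    esac
    if [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
        echo "error: port out of range" >&2; return 1
    fi
    printf '# written by the front end; safe to edit by hand\nport = %s\n' \
        "$port" > "$file"
}
set_port /tmp/demo.conf 8080 && cat /tmp/demo.conf
```

If the front end ever misbehaves, the file it manages is still just text under revision control away from recovery.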
Now, let's move on to some of the other points: Text-based configuration data results in a performance penalty.
Well, I guess this is technically true. But let's think about this. Parsing the configuration file is something that generally only needs to be done once, when the program initializes (or the file is changed). Most configuration files are small enough that this is really not a significant performance hit. Computers process data, often text data. They do it very well. Let's not get all worked up about asking them to do more of the same.
Next: There is no standard format.
Now, here the detractors have something. Unix evolved rather than being designed. The result is a hodge-podge of configuration formats. I am sure a great many of us would really prefer it if things were a bit more standardized, but they're not. And here that most evil demon of systems design, backwards compatibility, rears its ugly head once again. We can't change things without breaking everything -- programs and people alike.
Unfortunately, there is no good answer to this problem, on any system. It would be easy enough to start rewriting things to use a more standardized format, but nobody does, because frankly, it isn't worth it. If it were, somebody would have done it by now. What we have works quite well, and the effort involved in changing everything is more than the effort needed to figure things out.
It is worth pointing out that simply moving to a standardized format isn't going to alleviate the need to understand what you're editing before you edit it. I've seen enough misconfigured Macs and NT boxes to know that a pretty GUI or a rigid file format doesn't make a system fool-proof.
The text-based nature of Unix's configuration database is actually a strength, here. You cannot comment the Windows registry. But I can (and do) add comments to all of my Unix configuration files. You can also use RCS, SCCS, or any other revision control system to keep track of what was changed, and why. Try doing that with NT.
Now, let me address a few points by particular people:
jilles writes: I think current linux distributions with all their environment variables, init scripts, shell scripts and ancient tools are far more complex than necessary to accomplish the flexibility and security they offer.
I disagree. One of the reasons Unix has survived so long and adapted so well is that it is built on flexible tools, and easily modified and extended for new situations. Those "ancient tools" are still in use today because they work damn well.
In my opinion an OS is nothing more than a kernel + application packages + configuration user data.
You just described the entire computer software system for most cases, so I don't know what your point is.
A good principle in software engineering is separation of concern. It is not practiced enough in linux because configuration files are applications which are partially stored as user data.
Separation of concern is a design principle that states, roughly, that components should not concern themselves with duties that are not theirs. I fail to see how storing configuration data in shell scripts violates this principle.
Not to mention that the kernel's functioning depends on a legion of scripts.
Incorrect. The kernel does not require a single script to boot a running system. Issue "linux init=/bin/sh" at a LILO prompt sometime and you'll see what I mean.
Now, overall service activity is controlled by a series of portable shell scripts because that is what shell scripts are for: Automating repetitive tasks. If they weren't controlled by scripts, you would have to write, maintain, and port a compiled program instead. Just because something is compiled doesn't mean it is better.
Stefan writes: The current situation with
Um, ever hear of a networked home directory?
Insufficiently flexible permissions for modifying the configuration, either because filesystem lacks acls
A lack of filesystem ACLs is a deficiency, and one that should be fixed. And it has been, on several commercial Unixes, and is coming Real Soon Now to the free ones too, or so I'm told.
Then you use more than one file.
Difficult to inherit/replicate configurations, for say 20 identical clients.
See cp(1) for details on that.
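Replicating one configuration to 20 identical clients really is about this short. Local directories stand in for the real clients here; in practice you would substitute rsync or rdist targets:

```shell
# Sketch: stamp one master config out to 20 identical clients.
# Local directories stand in for the clients so this is safe to run;
# real deployments would use rsync/rdist destinations instead.
master_dir=$(mktemp -d)
echo "role=client" > "$master_dir/master.conf"
base=$(mktemp -d)
for i in $(seq 1 20); do
    mkdir -p "$base/client$i/etc"
    cp "$master_dir/master.conf" "$base/client$i/etc/"
done
ls "$base" | wc -l   # 20
```

Because the configuration is just files, the replication tool needs no knowledge of what the files mean.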
Allows for a flexible permissions system; let a user remove print jobs from the printer on his desk,
Um, this can be done now.
Same here. Granted, you'll need the right front-end tools, but that is a universal condition.
Administrate everything without needing to log on to a dozen computers editing files all over.
Look at rsync(1) and rdist(1), as well as network filesystems and NIS. (Granted, NIS has a number of design and implementation flaws, but they are not inherent in the design of Unix.)
Move around configurations and configured items in the tree easily. For example, imagine dragging the apache object from server A to B and voila, you've moved your webserver to run on the other computer instead.
Here you should look at the mv(1) command.
IN SUMMARY
Under Unix, everything is a file. Filesystem access controls enforce security. File editors change things. File revision control tracks changes. And file management commands move things around. Why design separate interfaces for everything if you already have them there in the filesystem?
Re:Why not? (Score:2)