RSI, WIMPs and Pipes; What Next?

Tetard asks: "Long live the pipe! Since the `|' was invented by Doug McIlroy in 1973, has there ever been a more effective way of reusing tools and connecting data? The mouse is a device of the Beatles era; rather than try and provoke nostalgia in the older ones among us, I'm asking myself, as are others: when we don't try to reinvent the wheel, or at least improve it, why must we try and copy it every time? Xerox PARC exposed us to WIMPs and we haven't done better: some innovation, some plastic surgery -- but no "paradigm shift" -- where's the creative destruction that will take us further? Graphical component programming is turning us into click-happy bonobos^H^H^Hchimpanzees, as we fail to find new ways to manage and connect richer data streams. My web designer friends are damaged for life because of mice, and yet we persist... Where do we go from here? If we ever invent the graphical pipe, let it have keyboard shortcuts." Yes, you've probably seen a similar question to this run by Ask Slashdot before, but this time I'm wondering if maybe we need new input devices before the WIMP paradigm is replaced with something better. Might any of you have ideas on what form these input devices might take?

For those interested, previous Ask Slashdot stories have handled this type of question before.

So what will it take to break us out of the WIMP box (or prison, depending on your bias)? Maybe new input devices would do it, but quite frankly, I wouldn't be surprised if a 3D interface were another route (it might spark interest in designing a new input device that works better with 3D interfaces, or maybe data gloves could serve this purpose?). Going out on a limb, maybe this guy might just be the ticket.

  • Face Recognition (Score:4, Interesting)

    by Sludge ( 1234 ) on Monday October 08, 2001 @05:33PM (#2403706) Homepage

    With a sub-$100 webcam watching you, look at the point of the screen where you would click, and blink.

    Are there lots of problems to doing this? Yes. Should that stop me from throwing out the idea? No.

    • Re:Face Recognition (Score:2, Interesting)

      by dwlemon ( 11672 )
      You know, that could work...

      Suppose all the software had to do was find your eyes in relation to your nose, mouth, and ears. Then moving your head would cause those parameters to change, and the cursor would move.
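
      A minimal sketch of that idea, purely illustrative and assuming the OpenCV and PyAutoGUI libraries (neither is mentioned above): detect the face each frame and map its centre to a cursor position. The blink-to-click part would need eye detection on top of this.

        import cv2          # face detection
        import pyautogui    # cursor control

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        screen_w, screen_h = pyautogui.size()
        cam = cv2.VideoCapture(0)

        while True:
            ok, frame = cam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, 1.3, 5)
            if len(faces):
                x, y, w, h = faces[0]
                # map the face centre (camera coordinates) to screen coordinates
                cx = (x + w / 2) / frame.shape[1]
                cy = (y + h / 2) / frame.shape[0]
                pyautogui.moveTo(cx * screen_w, cy * screen_h)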
    • Re:Face Recognition (Score:3, Interesting)

      by DarkFyre ( 23233 )
      This can be done, and is done for physically disabled people who cannot use a mouse. The problem is that a sub-$100 'webcam' doesn't cut it. You need high-quality video in reasonably real-time in order for this to work. The cameras are more like $400-$500 (CDN, so that's $250-$350 USD).

      The main problem with the systems right now is that they cannot track only eye movement. You need to use your whole head for large-distance traversals. If you think RSI sucks in your wrist, wait until you start getting neck cramps from your favorite RTS game.
      • The main problem with the systems right now is that they cannot track only eye movement.

        Maybe not all systems, but I'm almost sure there are systems that can track eye movement. I've seen a short documentary about people with ALS (the same disease that Stephen Hawking has, if I'm not mistaken), and they showed Jason Becker [jasonbecker.com], a former guitar player who has this disease and can only move some muscles in his face.

        His site does not say much about the equipment he has, but he uses gadgets that track his eye movements (one of the only parts of his body that he can still move) and translate them into commands for his computer. He has actually produced a lot of music this way in recent years (and you guys should check out his material, it *is* awesome).

        It's a shame I can't give any more specific links, but maybe by searching through his pages you can find something.

    • by megaduck ( 250895 )

      Great idea, but it doesn't get us out of the WIMP paradigm. You're just replacing your mouse with a more efficient type of pointer.

      Really, that's the problem in a nutshell. We are so used to the WIMP interface that the best we can visualize is an improvement of the WIMP system. Until we can come up with a totally different metaphor for interacting with our computers, we won't see WIMP go away. Personally, I think it will take a "mad genius" type to break out of the WIMP paradigm and move us forward.

    • by slamb ( 119285 )

      There's a time and a place for diplomacy, and this isn't it.

      <div diplomacy="off"> That's a dumb idea. </div>

      My attention wanders and consequently my eyes do as well. I blink subconsciously. You can't change these things, and it's stupid to try. Don't make an input system that fails when they happen.

      I think Douglas Adams does a good job of describing the failings of this type of input system:

      A loud clatter of gunk music flooded through the Heart of Gold cabin as Zaphod searched the sub-etha radio wavebands for news of himself. The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive - you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same programme.

      Douglas Adams, The Hitchhiker's Guide to the Galaxy

      That's not even taking into account the way the eye inherently jitters, according to the other replies. Even without that, this wouldn't work out well except for extremely disabled people who have little other choice and aren't likely to complain about something that makes it at all possible for them to use a computer.

    • People would get horrible damage to their eyes, perhaps drying their eyeballs out trying to control their blinking ;-)
  • by cperciva ( 102828 ) on Monday October 08, 2001 @05:37PM (#2403721) Homepage
    What do Weakly Interacting Massive Particles have to do with pipes?
  • NLP (Score:3, Informative)

    by Zen Mastuh ( 456254 ) on Monday October 08, 2001 @05:37PM (#2403722)

    Natural Language Processing has my vote. Some of these folks [mit.edu] are working on it already. Wouldn't it be nice to say "move this thing over here", or some other combination of speech and gesturing, rather than all these inane menus and clicks? Someone still needs to develop the pipe infrastructure, tho. Just *don't* make it so narrow as to become worthless.

    • Are you trying to say that speaking "move this thing over here" or waving your arms around is more efficient than a small twitch of my wrist on the mouse? Not to mention that speech input is annoying to everyone around you.

      • Yes. That is what is happening in the brain, right? Read up on some Cog. Psych. & Human Factors lit and you will see documentation of both (1) increased reaction time and (2) increased probability of erroneous commands in translating a thought into a series of manual commands.

        Many people (you & I included) are very adept with a mouse. A little NLP will go a long way. I anticipate a steep learning curve, though. Yea innovation!

    • Re:NLP (Score:5, Insightful)

      by JanneM ( 7445 ) on Monday October 08, 2001 @05:49PM (#2403781) Homepage
      Nope. It will have a role in niche applications, but I don't see it ever being the dominant method of interaction. A couple of reasons:

      Imagine a large room full of office workers. Now, imagine the same room with every worker talking to his word processor or spreadsheet, trying to make him- or herself heard over all the others, getting irritated and fatigued because of the constant noise of everybody else talking to their computers.

      Imagine trying to do some work in an airport or on an airplane. Now, imagine trying to do the work using your voice _without_ other people hearing the budget details for your company or hearing the steamy endearments you will be mailing to your spouse.

      Imagine talking. Now, imagine constantly talking all day, every day. Some actors and singers get permanent damage to their vocal cords - and they've had professional coaching and access to medical services. It could become RSI for your throat.

      /Janne
      • I agree. But is our chosen modality of business--centralized--the reason why this technology can't work, or does this proposal point out another flaw in the centralized office scheme? It sounds like another tick in the "Pros" column for the Pros/Cons debate over telecommuting & home offices to me.

      • Trying to order a drink by voice command in a bar. Everybody is talking loudly around you, it's not your first drink and your voice is slurred, the waitress is trying to elbow you out of her area...


        No, I think we should do it the traditional way, by clicking the mouse.

        • Now try ordering a drink once every minute in the same bar. Imagine spending eight hours a day yelling your order at the top of your lungs. How many drinks could you order in a day? How many of them would be the right drink?

          Sure, you can order a drink after waiting five or ten minutes to get the bartender's attention and repeating yourself three times, but I would not want to do that just to type an email. It would never work in a business setting.
        • Have you ever seen a stock exchange?
      • Imagine a large room full of office workers. Now, imagine the same room with every worker talking to his word processor or spreadsheet, trying to make him- or herself heard over all the others, getting irritated and fatigued because of the constant noise of everybody else talking to their computers.

        Yeah, but imagine a typical telemarketing firm. Dozens, even hundreds of cubicles. Everyone talking all at the same time. No problems communicating, because the microphone and speaker are mounted on the worker's head.

        It's a solved problem.

    • by Greyfox ( 87712 ) on Monday October 08, 2001 @06:24PM (#2403910) Homepage Journal
      A lot of people think NLP implies voice recognition, and to a large extent that's true, but it also means being able to communicate with your computer using plain English sentences. That's difficult because English is so damn ambiguous (note to self: perhaps Klingon would be a better place to start...). Still, if I want to tell my computer, "Book the least expensive flight from Denver to Miami on the 23rd of this month," I should be able to say that to the computer or type it and have the computer understand what I'm telling it to do. I should also be able to modify my command to specify times or airline or both, and my computer should be smart enough to say "There's a flight for $100 less, 5 minutes earlier than the timeframe you specified. Would that be acceptable?"

      That sort of thing will be the wave of the future, and it will mean that apps will have to be smarter and communicate a lot more than they do today. My personal agent should reside on my local machine, not the network, and should watch out for my personal privacy. It should divulge only what is necessary to others in order to perform the commands that I give it. It should be flexible and configurable, but I should never have to configure it; it should learn what I like by how I interact with it.

      Several large companies have been working toward this holy grail for years, but thus far not even common voice recognition much less NLP has emerged from their research. Sure there are some voice recognition packages out there, but there's very little integration, and AFAIK nothing at all in the NLP arena. We could start working toward the level of integration that would be a necessary foundation for a lot of this stuff, but I don't know that you could get the necessary level of cooperation in ANY software development community.
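
      For contrast, here is how shallow today's "understanding" typically is: a toy sketch (not real NLP) that matches one fixed English pattern and turns it into a structured request. The field names are invented purely for illustration.

        import re

        COMMAND = re.compile(
            r"book the (?:least expensive|cheapest) flight "
            r"from (?P<origin>\w+) to (?P<dest>\w+) on the (?P<day>\d+)(?:st|nd|rd|th)",
            re.IGNORECASE)

        def parse(sentence):
            m = COMMAND.search(sentence)
            if not m:
                return None   # anything outside the canned pattern is not "understood"
            return {"sort": "price", "from": m.group("origin"),
                    "to": m.group("dest"), "day": int(m.group("day"))}

        print(parse("Book the least expensive flight from Denver to Miami on the 23rd"))
        # {'sort': 'price', 'from': 'Denver', 'to': 'Miami', 'day': 23}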

      • And NLP == Strong AI, as you describe it. Doing any /specific/ example falls just barely within the realm of the possible; for a specific domain, just beyond (say, a decade or so); but for any example, any domain, what you describe isn't an interface so much as an enslaved sentience.

        I know you're going to say "but I only mean within the specific domain of the computer;" but what isn't within the domain of the computer these days? Finally, as I've mentioned earlier -- and others here -- speech doesn't handle many kinds of data, or a lot of a single kind, very well at all. The key to better interfaces is to make them more specific, not less -- ubiquity. If the whiteboard can duplicate itself to another whiteboard, and vice-versa, you hardly need to dick around with a window manager to do remote collaboration. If you've got smart paper, you don't need to worry about how to send an e-mail; and so on.

        -_Quinn
      • Perhaps you know Doug Lenat's work. [cyc.com] I would go even further and state that the computer shouldn't have just what Lenat calls "common sense"; a true Artificial Intelligence should have a built-in model of the Universe. It should get input from the external world and update that model accordingly. In other words, if a piece of software needs to understand you, it should have a working model of your point of view, of your world, inside itself.


        In that context, voice recognition is just one more way of getting input. And I think that what's needed is not just one more way of getting input. What we need is for computers to have an increased level of understanding of the Real World.

      • Well, of course using a different language to talk to the computer from what you use to talk to other people would help prevent crosstalk. ;-)

    • So we finally can tell the animated office assistant, and whatever else MS tries to stick up our asses, to FUCK OFF!! Without having to click and select..

      And I can tell my project - GO, FIX YOURSELF!!

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday October 08, 2001 @05:39PM (#2403729)
    Comment removed based on user account deletion
    • We are still using wheels, but we don't use them exclusively, and they have been *massively* improved over those 5,000 years.

      If I want to make a trans-continental journey, I use wings. To ride across the harbor, a hull, and hydrodynamics. The wheels on your nearly brand new car are a far cry from the round stone of yester-century. They are even more advanced than the wheels you would find on a car just 20 years ago.

      Change for the sake of change is one thing, but change for the sake of better usability, safety and economy is completely a different matter.

      I for one appreciate the ride (and style) I get from my air-filled, steel-belted radials wrapping polished aluminum rims.
      • by mangu ( 126918 )
        they have been *massively* improved over those 5,000 years


        Can you name a single improvement to the concept of the wheel in those years? AFAIK, they are still a round thing that revolves around an axis. Sure, the machining precision of that roundness and that axis are many levels of magnitude better than 5000 years ago, but the concept is still the same.

        • Can you name a single improvement to the concept of the wheel in those years? AFAIK, they are still a round thing that revolves around an axis. Sure, the machining precision of that roundness and that axis are many levels of magnitude better than 5000 years ago, but the concept is still the same.


          Talking about wheels... Has it ever occurred to you that one day, a few thousand years ago, somebody enhanced a square "wheel" predecessor into a triangular one? I am sure he must have considered it a huge improvement, because that was one bump eliminated with every revolution...
    • That's a great idea, assuming you have something that works. The WIMP interface is terrible for many tasks, because it requires frequent switches between the keyboard and mouse, requires careful aim with the mouse for many tasks, requires aiming the mouse while holding buttons, and so forth. None of these tasks is efficient. GUI is a great way to present information, but is not great for most input.

      Of course, keyboard input, to a great extent, worked. But people switched to using the mouse, probably because it seemed to go well with graphics, and was the next new thing.
  • by Nooface ( 526234 ) on Monday October 08, 2001 @05:39PM (#2403731) Homepage
    This is exactly the topic of the new Slash site that I just set up at http://nooface.net [nooface.net]. The goal of the site is to promote out-of-the-box thinking about truly next-generation user interfaces that are designed for new types of users and computing devices, and go beyond the WIMP (Windows, Icons, Menus, Pointing Device) method that most current interfaces are based on.

    Nooface [nooface.net]
    In Search of the Post-PC Interface
  • Eye Tracking (Score:2, Informative)

    by Da J Rob ( 469571 )

    IBM has been working on eye-movement tracking. Supposedly it can tell what part of the screen your eyes are focused on. It would be cool for first-person shooters, but for an OS I think moving a mouse is just as simple.

    I saw this on TechTV, I think it was Fresh Gear, but I'm not sure. Anyone have a link?
  • So far... (Score:2, Interesting)

    by Syre ( 234917 )
    So far, we're pointing at things on a screen, moving them around, and typing messages. Datagloves and other visual manipulations will be important for all sorts of specialized tasks, but the way we tend to communicate is through speech and body language.

    Speech recognition is only useful for very limited functionality, mainly because computers haven't been fast enough or with large enough databases to really make use of syntax and context. Continuous speech recognition today typically uses waveform profiles with no contextual or grammatical analysis.

    But with faster processors and larger memories, I expect speech recognition to go to the next quantum level within 5-10 years. Once we add contextual and grammatical constructs to speech recognition, computers will start to be able to really understand what we're saying. To go from that to understanding what we *mean* is another step, but that's coming too.

    I also expect computers to have video cameras and to be responsive to our body language and facial expressions. They will be able to judge whether what they're doing is interesting or useful, and will ask for guidance or attempt to correct based on that feedback.

    In other words, I expect interaction with computers to become more like interaction with people!

    • Personally, I've never bought into the voice recognition thing, because how would it help me write code with less pain? Am I going to recite a C program to a voice interpreter?

      However, I would definitely like to search the web in my kitchen when up to my elbows in bread dough or something... you know, shout out "search for thin-crust bread recipes, knead time" and then have a voice read the recipe back to me, or something like that...

      And it's really not all that Trekkian: it's all keywords and T2S. But the big hassle is connecting all the peripherals to the kitchen. Now if I had FireWire connecting every room, so all I have to do is plug a speaker/mic combo into one outlet and start surfing, that would be cool.

      Maybe the next big leap isn't interface, but infrastructure? Replace the '|' character with a USB/FireWire line throughout the house; replace the shell scripts with small devices.

      But back to speech-to-text: I'd hate to be in an office with 300 people using voice recognition software. It's bad enough how much noise I have to block out in my cube already.

  • Reading this story was like reading a story on IE-XP: do you think you could have made some more words into links?

    Furthermore, it wasn't PARC that introduced us to GUIs, it was Douglas C. Engelbart.

    However, this IS a good question - is the "Desktop" metaphor the height of the GUI? I've read about some folks playing around with REAL uses for 3-D on the desktop: modeling files as a sort of "billboard" shaped like a U, with your point of view being at the bottom of the U. The part you are focusing on is at the bend of the U, closest to you and in the highest res; the other parts are on the sides of the U and are shown receding into the distance in decreasing res. You scroll data along the U, bringing interesting bits close but still having some awareness of the other parts.

    Now, why doesn't somebody make THAT into a UI?

    Simple: the same reason KDE and (to a lesser extent) Gnome look like Windows - if you make a radically different desktop interface, Joe Bloggs and his family will have their heads explode when they see it. Their tiny little minds are burned like a PROM to only accept the Windows(tm) way, and anything else will cause catastrophic cranial overpressure failure.

  • Emacs, naturally (Score:3, Insightful)

    by tmoertel ( 38456 ) on Monday October 08, 2001 @05:46PM (#2403762) Homepage Journal
    Might any of you have ideas on what form these input devices might take?
    They might just take the form of Emacs.

    I'm serious.

    No, really.

    Nothing gives me hand pain as quickly as using mice, especially ones with that wheely thing. Keeping my hands in good form over the "home row" of my keyboard -- and away from the mouse -- has virtually eliminated pain from my computing life. I spend half of my waking hours in Emacs, and I have come to love (and depend on) its there's-a-key-for-everything nature.

    Dump the mice. Keep Emacs.

    • I'm genuinely happy for you that using the keyboard eases your hand pain.

      But don't dare suggest taking away my mouse. I'll fight you.

      This is very short sighted.

      I like both CLI and GUI tools. I use both. I would not let anyone take away either one from me.
    • IMHO it's not mice that cause hand funkiness, it's switching back and forth between keyboard and mouse. Mostly-mouse user interfaces are just as pleasant as mostly-keyboard, at least as long as you're not using those screwy Apple hockey puck mice.


      I do have a serious ergonomic bone to pick wrt emacs: anything that makes me use CTRL, ALT, and/or ESC frequently is going to give me hand cramps real quick because of the distance those keys are from the alphanumeric ones (I mean finger distance, not whole-arm-movement distance). Vim with appropriate settings and nedit get my vote for Things That Just Let You Type. But to each his own; the whole point of ergonomics is, after all, that "one size fits all" is a steaming load of livestock byproduct.

  • Raskin? (Score:2, Informative)

    by stew77 ( 412272 )
    Maybe you just want to read Jef Raskin's "The Humane Interface" [jefraskin.com]?
  • by kurisuto ( 165784 ) on Monday October 08, 2001 @05:48PM (#2403778) Homepage
    I'm wondering if maybe we need new input devices before the WIMP paradigm is replaced with something better



    This seems a bit like asking what it would take to replace the current way of driving a car (steering wheel, gas and brake pedals, etc.) with something better. But the interface between humans and automobiles is pretty much a solved problem, and nobody seems to spend much time speculating on what a paradigm change in automobile control would be like.



    There's a curious assumption which I've seen repeatedly-- namely, that a paradigm shift in human/computer interaction would be a good thing. Why, exactly? I see no reason to pursue a paradigm change for its own sake; I view it as a problem which has basically been solved for now, much as the problem of steering cars is a solved problem.

    • As a carpal-tunnel sufferer, I would be ecstatic to see WIMP (or at least the P) go away. Face it, our current input devices are less than ergonomic by their very nature. A fundamental shift in computer interaction would probably be towards an interface more suited to the human than the machine. Our current system of sitting motionless, staring at a screen, twitching a mouse, and banging on a keyboard is as archaic (and potentially painful) as the lawn sickle.

      I firmly believe that my grandkids won't be using a keyboard and mouse like I do. They also will probably never know the term "RSI", and they'll wonder why Grandpa's wrists make those funny noises.

    • by tim_maroney ( 239442 ) on Monday October 08, 2001 @06:45PM (#2403996) Homepage
      There's a curious assumption which I've seen repeatedly-- namely, that a paradigm shift in human/computer interaction would be a good thing. Why, exactly?

      That's an excellent question. By Kuhn's model of paradigm shifts, the shift must be preceded by a number of anomalies in the current paradigm. In command-line interfaces, the anomalies were numerous -- the need for constant relearning of old habits, the need for memorization, the ease of making errors, the computer being in control of the human rather than the other way around, etc. Eventually social factors caused the anomalies to be recognized as such, so that when a new paradigm was created, its values were widely recognized. Perfect recipe for a Kuhnian shift.

      What are the anomalies today which would force a change in the paradigm? Serious question, not rhetorical. For starters, I'd say Gelernter's new project is an attempt to rectify some anomalies which have not yet attained social recognition as anomalies.

      Tim
      • Most of the "anomalies" you cite for command-line interfaces just aren't that. I will go through them one by one:

        * constant relearning of old habits: Just as command-line languages can differ from one another, so can GUIs. Granted, the differences cannot be as dramatic as with languages (the reason probably being that languages are far more expressive), but they are there.

        * the need for memorization: Again, this is not a question of quality, but of quantity. Granted, languages require much more learning effort than GUIs, but then again they are more expressive.

        * the ease of making errors: This is not a shortcoming of command-lines per se. A command-line can easily be configured to alert the user of any unpleasant side-effects a command to be executed might have. The fact that it usually isn't is quite probably due to its user feeling comfortable and secure enough.

        * the computer being in control of the human: What is that supposed to mean? I use several command-line languages every day and I do not feel myself being controlled by the computer. On the contrary: being fairly competent in using those languages I can command the computer to do things automatically which a GUI user would have to do by hand repeatedly.

        All this "command-line is a thing of the past - the future belongs to GUIs" is nonsense. Command lines give you a language which is usually Turing-complete, meaning you can express the automation of arbitrary tasks. This is something a GUI just cannot do. GUIs provide ways for performing an array of functions, but only very limited means, if at all, of tying these functions together and doing something automatically. And the automation of tasks is what a computer is ultimately for, is it not?

        bye
        schani
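
        To make the automation point above concrete, here is a rough sketch of driving a pipeline from a script rather than by hand. The ifconfig/grep pair is just an example and assumes a Unix-like system; any chain of small programs would do.

          import subprocess

          # roughly equivalent to:  ifconfig | grep inet
          src = subprocess.Popen(["ifconfig"], stdout=subprocess.PIPE)
          filt = subprocess.Popen(["grep", "inet"], stdin=src.stdout,
                                  stdout=subprocess.PIPE, text=True)
          src.stdout.close()   # let ifconfig receive SIGPIPE if grep exits first
          for line in filt.stdout:
              print(line.rstrip())
          filt.wait()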
    • This seems a bit like asking what it would take to replace the current way of driving a car (steering wheel, gas and pedal brakes, etc.) with something better. But the interface between humans and automobiles is pretty much a solved problem, and nobody seems to spend much time speculating on what a paradigm change in automobile control would be like.

      Oh yeah? Two words: cruise control. It completely redefined the "car interface". How about two more: intermittent wipers. True, the inventor got shafted by Detroit [engineerguy.com] and had to fight tooth and nail for years to get his due, but he too changed the "car interface" dramatically.

      There's a curious assumption which I've seen repeatedly-- namely, that a paradigm shift in human/computer interaction would be a good thing. Why, exactly?

      Simple: because the quantum increase in computer access that was engendered by the WIMP interface isn't by any stretch of the imagination the endpoint of interface evolution. Want an example? Don Hopkins has been pushing his concept of Pie Menus [piemenu.com] for about 15 years now, and has implemented them everywhere he can find an amenable display system (starting with (*shudder*) X10 and including MS-Windows!). If you think you know how user interfaces should work and you haven't read any of Don's exhortations on the human-factors improvements inherent in non-linear menus, you need to get with the program.
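
      The core trick behind pie menus is simple enough to sketch: the selection depends only on the drag angle from the point where the menu opened, so every item is the same short gesture away. A toy version, with the menu items invented for illustration:

        import math

        def pie_select(items, dx, dy):
            """Pick the item whose wedge contains the drag direction (dx, dy)."""
            if dx == 0 and dy == 0:
                return None                                   # no movement yet
            angle = math.atan2(-dy, dx) % (2 * math.pi)       # screen y grows downward
            wedge = 2 * math.pi / len(items)
            return items[int((angle + wedge / 2) % (2 * math.pi) // wedge)]

        menu = ["open", "save", "close", "print"]   # wedges at 0, 90, 180, 270 degrees
        print(pie_select(menu, dx=0, dy=-30))       # straight up -> "save"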

      • Not just the Sims (Score:2, Insightful)

        by Pope ( 17780 )
        Alias|Wavefront has been using their version, called "marking menus," in PowerAnimator and Maya for a couple of years. They friggin' rock!
        By customizing the marking menus a little bit, you can drive PowerAnimator with a 3-button mouse and the Control, Shift and Alt keys. Way cool.
  • by www.sorehands.com ( 142825 ) on Monday October 08, 2001 @05:50PM (#2403786) Homepage
    Part of the problem is with fine motion control v. gross motion control.


    Add a touch screen so that for coarse tasks like window selection, you can use the large muscles of the arm and shoulder, then use the mouse for the finer manipulation that you cannot get with a touch screen.


    The more important aspect is the comfort and the breaks. No matter what mechanism is used, trauma can accumulate over time. You need time to allow the body to recover from that trauma.

  • by SClitheroe ( 132403 ) on Monday October 08, 2001 @05:53PM (#2403800) Homepage
    Use the right tool for the right job. That's the UNIX/Linux philosophy. If you think that the WIMP interface is cramping your style with respect to whatever task you are trying to accomplish, then you are using the wrong tool.

    With regards to the command line or WIMP interfaces being old, and not particularly forward-looking, you are also missing a fundamental point: a graphical "pipe" isn't innovative either. You're simply shoehorning two paradigms together, and even worse, two totally incompatible paradigms at that. The pipe is a useful metaphor and operator for stream-oriented I/O. The WIMP is useful for (obviously) visually oriented information, and it's designed for a completely different purpose than the pipe. The WIMP is designed to allow humans to manipulate data and abstract objects in a visual manner. The pipe is designed to allow users to let the computer do the same, without intervention.

    If you want an innovative computing interface, worrying about streams, or visual representations of data is a waste of time. You're going to have to come up with something totally new. One good example is the use of sound to communicate the health and performance of networks or systems.
  • by Bonker ( 243350 ) on Monday October 08, 2001 @05:54PM (#2403806)
    The real problem with computer software advancement is that it's all very firmly based in 2D land. WIMP environments are about as efficient as you can get. Even assuming that you can wrap your work habits around something as 'next-gen' as 'The Brain', you're still stuck in 2D land.

    The real advance that will open innovation (real innovation and not some corporation's twisted idea of it) is the beginning of a 3D workstation environment.

    We already have the primitives for this kind of environment in games like Unreal, Q3A, and Black and White. Assuming that we implement a 'graphical pipe' that will work for a truly 3D application system, i.e., allow 3D applications to pass information back and forth between each other semi-effortlessly, this will ignite a new 'interface revolution' similar to what we experienced as a result of Xerox's early WIMP system and the first versions of Apple's MacOS.

    Once programs and applications can truly be represented as 'objects' in a 3D environment, we'll end up with something like the 'God' interface in Black and White, where processes are represented by animated people and files are represented as other objects. Tasks best handled in 2-D such as composition, coding, or painting will still be easy to handle, but tasks best performed in 3-D such as file management, database management, and even some advanced programming tasks like linking and compiling files, will take place in a representational environment. Imagine opening up your HDD and pouring objects into it, then sorting them into containers based on type, as you would sort files into directories.

    Eventually, I see us moving into something like Stephenson's 'Street' metaphor for shared environments.

    Along with these advances will come new interfaces. I think that eye-tracking cameras have the biggest potential, but we keep coming back to the data glove in one form or another. I know CAD designers who still have an old Nintendo Power Glove hacked for basic 3D manipulation tasks. We're also probably going to see a renaissance of 'body tracker' devices that will track human motion via sonar or laser. Any one of these has the potential to vastly reduce RSI injuries.

    The real trick in jumping from 2D to 3D is reverse compatibility. All the shells I've seen that attempt 3D interaction do it badly. Even then, they fail completely when faced with most of the tasks we do on a daily basis, like writing or painting. I think we're going to have to use 'easels' or something similar. In Cowboy Bebop, Radical Edward's computing environment is shown as a multitude of 2D windows hovering around her in 3D space. This wouldn't be that hard to do, really.

    Navigation will be the true challenge for any 3D application designer. It will be that itch to scratch that will spawn new, inventive input and coding ideas.

    • The trouble with most GUI desktops is that they are designed for manipulating items on a GUI desktop and customizing the GUI desktop, rather than making the interface transparent. As a counterexample consider the Palm, where the idea is to make the UI as lightweight and unobtrusive as possible, because people want to just take notes and view their schedules. A WIMP desktop is overkill, so they went with something an order of magnitude simpler.

      A 3D desktop is a step in the opposite direction, placing more emphasis on the desktop itself than what people want to do.
    • by Mr. Sketch ( 111112 ) on Monday October 08, 2001 @07:00PM (#2404066)
      This would be good, but it would require a 3D interface. I think the only way to truly do a 3D environment is for it to exist in the physical world. Any 3D interface on a 2D screen will become kludgy pretty quick. The best way to do a fully 3D interface is to put it in the real world. Imagine if your desk *WAS* the computer. The desktop *WAS* your actual desktop. You open your drawer to see 'real' manila folders with names on the tabs for your documents, thumb through them to find the financial report you were working on, 'grab' it and pull it out, and it appears on your desktop for you to work with. You open up another drawer and see pens, pencils, markers, highlighters, etc. that you then 'grab' to select what you want to start writing with. You could just slide your hand across the desktop to move documents out of the way, and tap or 'grab' a document that was 'under' the one you were working on, and it comes to the top so you can begin working on that one.

      This would require a lot of holography, motion tracking, touch sensors, etc., but it would be the ultimate in 3D interfaces. It would avoid clunky HUDs and gloves that just detract from the actual work. You could even bring up a keyboard on the desktop and use that instead of a virtual pen or pencil.

      3D interfaces would be nice, but on a 2D display I think it's best to stick with a 2D interface.
  • by sulli ( 195030 ) on Monday October 08, 2001 @05:54PM (#2403807) Journal
    WIMPs are such old hat. [harvard.edu] The new metaphor is MACHOs [harvard.edu].

    (WIMP = Weakly interacting massive particle; MACHO = Massive compact halo object)

  • I love when I have to visit 15 Goddamned links just to understand what the fuck this guy's question is!!

    How about next time, you make individual letters & punctuation into hyperlinks?! THAT WOULD RULE!!11!!
    • S [gnu.org] o [oreilly.com] r [adobe.com] t [belmont.ma.us] o [globo.com] f [f-secure.com] l [slashdot.org] i [yahoo.com] k [kde.org] e [eonline.com] t [att.com] h [hotbot.com] i [rsac.org] s [rambler.ru] ? [surrey.ac.uk]

      [All links pulled from google's first page for each letter - sick, ain't it?]

  • Graphical Pipes (Score:4, Informative)

    by EnglishTim ( 9662 ) on Monday October 08, 2001 @05:57PM (#2403822)
    There are several programs out there that make use of graphical pipes - Maya, Eddie, Shake, GraphEdit (the DirectShow graph editor), to name but a few.

    The nice thing about graphical pipes is the ability to easily and transparently connect several forms of data to one node. With command line pipes, you've effectively only got one input and output.

    What would be needed are graphical terminal programs (built into the OS, or at least the window manager) for connecting these things together. Oh, and a standardized way of defining input and output types - I dunno - would MIME work there?

    I expect somebody can point us at a project that has already done this?
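
    A minimal sketch of what such a node graph could look like, using MIME strings as the port types suggested above; all of the names here are invented purely for illustration.

      class Port:
          def __init__(self, name, mime):
              self.name, self.mime = name, mime

      class Node:
          def __init__(self, name, inputs=(), outputs=()):
              self.name = name
              self.inputs = {p.name: p for p in inputs}
              self.outputs = {p.name: p for p in outputs}

      class Graph:
          def __init__(self):
              self.edges = []
          def connect(self, src, out_name, dst, in_name):
              a, b = src.outputs[out_name], dst.inputs[in_name]
              if a.mime != b.mime:    # unlike a shell pipe, connections are typed
                  raise TypeError(f"cannot connect {a.mime} to {b.mime}")
              self.edges.append((src, out_name, dst, in_name))

      decoder = Node("decoder", outputs=[Port("video", "video/raw"),
                                         Port("audio", "audio/pcm")])
      mixer = Node("mixer", inputs=[Port("audio", "audio/pcm")])
      g = Graph()
      g.connect(decoder, "audio", mixer, "audio")     # fine: both ports are audio/pcm
      # g.connect(decoder, "video", mixer, "audio")   # would raise TypeError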
  • I have seen (and maybe somebody else can provide links) some innovations in research on data representation. The basic approach is to display data in more than 3 dimensions, using colors, forms, and relative position to enrich the data and support better decision making (first used in risk management for financial institutions, analysis of brokerage data, etc.). One very impressive example was done by SGI [sgi.com]; it consisted of a wheel with different rings that you could turn and move freely that would directly indicate the performance of certain stocks. It could be used for long-term analysis or day trading. I believe, if I am not mistaken, that it is (was?) in use by Morgan Stanley.
    The point is, once you have a different representation of data, a different GUI approach (using the data) has to follow. I see data representation of large streams paving the way for true "Cyberspace" GUIs, allowing the user to walk through the data, adding movement, position, etc. to the user experience.
    Just my 0000010 cents ...
  • by jht ( 5006 ) on Monday October 08, 2001 @06:05PM (#2403846) Homepage Journal
    To improve on the classic WIMP interfaces, I immediately would exclude anything that had to be physically connected to the user. That eliminates data gloves as a significant mainstream input device.

    Gloves work well technically (at least from what I've seen of them), but are fairly inconvenient to use. I suspect that some basic speech recognition will be mainstream in the not-too-distant future, because processing power is cheap enough to handle simple speech without impacting performance. Maybe eye tracking will be used some as well, but eyes tend to wander.

    So I think the biggest trend in interface design over the next few years is going to be a return to simplicity. Fewer clicks, fewer mouse movements, and greater use of predictive interfaces - where the interface guesses what you'll do next based on experience and learning, and has it ready for you just in case. The Mac-style version of the UI (but not necessarily the Mac itself) will probably be the dominant strain overall - Microsoft is converging in that direction now as well as evidenced by the Luna interface in XP. I think mainstream mice return to two buttons (from the currently popular multi-buttoned models), or maybe even one. And mouse gestures will be used more instead of clicking in some cases.
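
    A toy sketch of the "predictive interface" idea above: record which action usually follows which, and pre-select the most likely next one. It is nothing more than a frequency table; the action names are invented for illustration.

      from collections import Counter, defaultdict

      class Predictor:
          def __init__(self):
              self.following = defaultdict(Counter)   # action -> counts of what came next
              self.last = None

          def record(self, action):
              if self.last is not None:
                  self.following[self.last][action] += 1
              self.last = action

          def guess_next(self):
              if self.last and self.following[self.last]:
                  return self.following[self.last].most_common(1)[0][0]
              return None

      p = Predictor()
      for action in ["open", "spell-check", "print", "open", "spell-check"]:
          p.record(action)
      print(p.guess_next())   # "print" -- that is what followed spell-check last time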

    Mice themselves I can see working by gyro rather than by ball or light sensor, which would allow a mouse to be held and moved within 3 dimensions (even if it only tracks in two), rather than skated over a desk in two dimensions. It's potentially a more natural hand motion. Keyboards probably won't change much due to inertia. They haven't really changed in basic layout for over a hundred years.

    Of course, I may just be blowing smoke, since my intuition is no better than anyone else's. But the Slashdot crowd is very different from the "mainstream" - we are typically more forgiving of a complex interface and would trade off in favor of power over simplicity. So we may not be the best people to forecast.
  • In the introduction to "Learning Perl", Larry Wall observes:
    "What nobody noticed in all the excitement was that the computer reductionists were still busily trying to smush your minds flat, albeit on a slightly higher plane of existance. The decree, therefore, went out (I'm sure you've heard of it) that computer incantations were only allowed to perform one miracle apiece, "Do one thing and do it well" was the rallying cry. and with one stroke, shell programmers were condemned to a life of muttering and counting beads on strings, (which in these latter days have come to be known as pipelines)."

    And while Perl has not made many contributions to user interface design -- it is half line noise, after all -- it does share an "everything but the kitchen sink" approach to product design.

    From a programming point of view, the small program does have numerous advantages. The code base is small, it is easy to test and debug, and the "do one thing" edict tends to focus the design.

    With large monolithic applications, the APIs and coding peculiarities differ from application to application-- so instead of writing a spell checker pipe based app to work with dozens of other apps, one has to write additional application specific glue code to work with each monolithic application.

    Or take multimedia frameworks. Xine, AlsaPlayer, OMS, VideoLAN, and Ogle each might have a different plugin architecture. The creator of an audio or video codec would have to write hundreds of lines of extra code to support each multimedia framework...
    • The Unix pipes-and-scripts system won't help today's average computer user get his/her work done faster, since s/he doesn't know programming. But the cool thing about pipes and scripts was that they were hackable: in other words, you could take other people's software and change its behavior. (Forgive the past tense, but it really is an obsolescing technology.)


      I think there are a lot of ways that today's GUI software could be made more hackable. For instance, every window and dialog box that an application ever pops up could be a fill-in-the-blanks XML file, sort of like the way CGI and JavaScript fill in the blanks in HTML. If the user wants to change the way a window looks, s/he should just be able to edit the XML file, and the software's behavior should change appropriately. A cool way to implement this in a GUI system would be to have an Edit Source command that you could do on any window, sort of like the way web browsers have View Source.


      For instance, suppose you have a program that always pops up an annoying dialog box saying "Warning: By erasing this file, you will be eliminating all its data, and you won't be able to use it any more. Do you really want to do this? [No](default)[Yes]" It would be nice to be able to change the default to Yes, and maybe to go further and make it not pop up at all. And you should be able to do this with /any/ dialog box, not just the ones that the original programmer wants you to be able to do it with.


      Of course, if it's open source C code or something, you could recompile it with your changes. But there are a lot of practical problems with that: (1) it's too hard for non-programmers, (2) it's time-consuming, (3) you have to maintain your own fork, and reconcile it with new releases of the official fork.
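
      A rough sketch of the "every dialog is a fill-in-the-blanks XML file" idea, assuming a hypothetical dialog.xml; editing the default attribute in the file would change the program's behaviour without recompiling anything.

        # dialog.xml (hypothetical):
        #   <dialog id="confirm-delete">
        #     <message>Really erase this file?</message>
        #     <button action="no" default="true"/>
        #     <button action="yes"/>
        #   </dialog>
        import xml.etree.ElementTree as ET

        def load_dialog(path):
            root = ET.parse(path).getroot()
            message = root.findtext("message")
            default = next((b.get("action") for b in root.iter("button")
                            if b.get("default") == "true"), None)
            return message, default

        message, default = load_dialog("dialog.xml")
        print(f"{message} [default: {default}]")   # edit the XML to flip the default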

      • Forgive the past tense, but it really is an obsolescing technology.

        Why do you think that pipes and scripts are obsolescing? And when you say scripts, do you mean all programs written in interpreted languages? It seems to me that scripting languages are gaining, not declining in importance. And I think there are more people using pipes and scripts than ever before. I use pipes and other unix features heavily at work. I'm able to solve arbitrary problems much faster than the Windows programmers I work with because I have a better toolbox. Their only tool for bashing data is writing a custom program in C++.

        I agree with your idea about GUIs. I think that the GUI (X application) should be a generic program unto itself, like a web browser. The GUI application would make a socket connection to the GUI and send it XML commands like "Pop a dialog box asking $question". or "Create a scrolling buffer called EVENTS". Then the GUI could be anything - running on the same host or different, on X11 or Windows or curses, written in any language, customized by the user or the system vendor in any way. And all the apps run through that GUI would look visually consistent.
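
        A toy sketch of that protocol from the application's side; the port number, the tag names, and the generic GUI server itself are all invented for illustration.

          import socket

          def ask_question(question, host="localhost", port=6000):
              command = f'<dialog type="question" text="{question}"/>\n'
              with socket.create_connection((host, port)) as s:
                  s.sendall(command.encode("utf-8"))
                  # the generic GUI would reply with something like <answer>yes</answer>
                  return s.recv(1024).decode("utf-8")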
  • Since the `|' was invented by Doug McIlroy in 1973, has there ever been a more effective way of reusing tools and connecting data ?

    Yes, there has. Pipes only offer a very limited flow of information between applications and have a horrible tendency not to separate content from presentation, so you end up with nasty hacks like... well, text processing in general. Sed, awk, grep, and cut are all hackish workarounds for a problem that should have been solved a while ago. If I need to find out my IP address, there's no way I should have to cut text out of ifconfig's output. I should simply ask ifconfig to output the first entry in the field `IP Address' for the first record in the set. This won't change because the command-line aficionados (who seem to hate anything but unstructured document formats) are so familiar with, and have invested so much wasted time in, text processing that they cannot see it for the nasty hack it is. Thank god the rest of us do.
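
    A sketch of what asking for "the IP Address field of the first record" could look like if tools emitted structured records; ifconfig does no such thing, so the records below are hard-coded purely to show the idea.

      records = [
          {"interface": "eth0", "IP Address": "192.168.1.10", "MTU": 1500},
          {"interface": "lo",   "IP Address": "127.0.0.1",    "MTU": 16436},
      ]

      def field(recs, name, index=0):
          """Return the named field of the index-th record -- no grep, cut, or awk."""
          return recs[index][name]

      print(field(records, "IP Address"))   # 192.168.1.10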

    Most Unix users think all graphical programs are limited in UI because all they've been exposed to are poor graphical apps (notice the same people who complain about GUIs are fine using ncurses - perhaps it's not GUIs, but slow and unresponsive apps, that are the problem). So much of the Unix world remains stuck in the mistaken idea that the Unix philosophy (small apps that work together) is somehow magically limited to command-line apps. It isn't - the way ifconfig communicates with grep via pipes is much more limited and hackish than the way khtml communicates with konq via kparts.
    • You will notice, however, that no one has ever written a script to connect khtml to, say, wget. Is this a limitation of scripting? Is it because there's no graphical way (a la the java beanbox) to tie components together? Without being a coder? While it's true that text processing is an ugly hack to handle formatted output, that doesn't mean that the pipe and command-line idea is the wrong one; something like XML-formatted output would help the problem nicely.

      I think the posted question more-or-less assumes the component model, because otherwise it doesn't make any sense; the question is, how do we make stitching components together as common as stitching command-line programs together?

      -_Quinn
      • You will notice, however, that no one has ever written a script to connect khtml to, say, wget.

        These connections aren't limited to scripts; they'll work in most interpreted or compiled languages with KDE bindings. But yes, one can embed, for example, a Netscape plugin viewer kpart inside khtml.
    • Firstly.. I don't think that 'all graphical programs are limited in UI'.
      In general, they are.

      If I take a windows PC, and a Linux PC, and I want to do something 'innovative'.. like have my IP address automatically posted to a web page, and have my incoming ICQ messages automatically logged to a file, as well as copied & zipped to another file, then ftp'd to a remote host...

      These sort of tasks are very difficult to automate in Windows, and very straightforward to automate in unix. That's why people think this way.
      And on your point about the unix philosophy being 'mistaken'. You seem to think ifconfig should output exactly what you want... like a single ip address, for instance. The problem is.. that philosophy requires the designer of ifconfig to determine exactly what kinds of output every potential user might want, and forsake the rest.
      That's not the unix way; the unix way is to make the program output as much information as you could reasonably want, and let OTHER tools sort it out, so the user can always get what they want.

      • Firstly.. I don't think that 'all graphical programs are limited in UI'. In general, they are.

        It's good to hear you have an open mind.

        If I take a windows PC, and a Linux PC, and I want to do something 'innovative'.. like have my IP address automatically posted to a web page, and have my incoming ICQ messages automatically logged to a file, as well as copied & zipped to another file, then ftp'd to a remote host...

        That's a matter of scripting, which can be achieved in a reasonable manner with scripting tools on either platform, though it seems you haven't kept current on the Windows side of things.

        These sort of tasks are very difficult to automate in Windows, and very straightforward to automate in unix

        That isn't correct. You evidently have more Unix experience than Windows experience. With the exception of the ICQ parts (substitute MSN Messenger), it's entirely possible.

        And on your point about the unix philosophy being 'mistaken'. You seem to think ifconfig should output exactly what you want

        No, I do not. You have fundamentally misunderstood the solution presented. Rather than filtering via arbitrary text strings which may change and have no standard format, filter via fields. Ifconfig outputs structured data; I filter it by telling it the fields and records I'm looking for.

    • ...the way ifconfig communicates with grep via pipes is much more limited and hackish than the way khtml communicates with konq via kparts.

      Perhaps, but there's a reason why this "hackish" communication is popular and effective. Ifconfig's text output is its API (the output half, anyway). Since I already type ifconfig to learn about interfaces on the machine, I don't have to learn a new API to stick data into a shell pipeline. Khtml may have this wonderful relationship with konq, but I feel left out. I read and write ASCII, not binary. Suppose I want to try using khtml for something - can I invoke it with different arguments in a few seconds and see what it does? And if I invest enough time to understand and use the interface, how do I troubleshoot it when it breaks? If a complex shell pipeline starting with ifconfig is malfunctioning, I could start by trying a plain 'ifconfig'. How will I isolate some K-component from its friends and see what it's putting out?

      I think a lot of the power of Unix is the overlap between the machine-readable and the human-readable. When you can read and write a language yourself, it's easier to write code that reads and writes that language. And it's easier to debug.
  • What we need is an interface that operates like a little joystick that you put in your mouth, perhaps in the shape of one of those baby sucker things. Click by biting down, maybe have a few extra "mouse-buttons" added to the keyboard, which could be operated by two hands.

    Telephone conversations may be impeded, but think of all the geek-girls you could get when they find out how much exercise your tongue gets.
  • Of Mice And Music (Score:3, Insightful)

    by istartedi ( 132515 ) on Monday October 08, 2001 @06:23PM (#2403907) Journal

    The mouse is a device of the Beatles era

    And the steering wheel is a relic of the Jazz age. I don't plan on giving up either of them any time soon.

  • No, I'm serious. Computers should be working for us, not against us. Today, we don't give a computer a task and ask it to complete it, we instead worry about interfacing with it. The interface should be natural and change depending on what you are doing. Let's go through this using Star Trek (TNG) as our baseline:

    Computers should have voice recognition for general tasks such as environment management and collaboration. The crew uses this to dim the lights, for example.

    Computers should have keypad interfaces for general reading and writing. You'll notice the workstation on Picard's desk for exactly this purpose.

    Computers should have remote/networked interfaces for passing sneaker-net info. This means datapads and remote voice uplinks. (Hold on captain, I'll email the info to your chair! Yeah, right.)

    Computers should be able to present information in the most efficient way possible. This doesn't mean the prettiest; it means dynamic graphical/audio interfaces. These interfaces must be easy to create and use. This means that the readouts might one moment be tracking the flow of warp plasma and the next moment display star charts of the ship's course.

    All of this makes the Star Trek interface very useful and is quite possible with current technology. However, there is one thing that really makes it easy to use:

    Computers must be able to automatically analyze, process, and display any information. Filters should be applied automatically. Artificial intelligence should be able to translate speech from any language to any language and add new patterns without reprogramming. Theoretical questions should be answerable given a current set of information on a subject. Information should be retrievable and storable no matter what the format, and new formats should be analyzed and added to the database automatically.

    This is much more difficult due to the fact that computers today have a very monolithic nature. Pipes are a good example of dynamic linkage between programs, yet at the same time they are very primitive. They require a great deal of operator understanding of the information in order to use them.

    What if we could build layers of software on top of layers of software, which are then again used to build greater layers of software? This would require that we accept standard functionality at each level of the layering process and then allow people to write ever simpler code due to this great deal of layering. Why should anyone be required to rewrite a quicksort once it has been written?

    The first and best key we have today is the Java class file format. Class files are allowed to layer on top of others by nature. The computer can pull apart the structure and investigate the usefulness of a class for a specific function. With enough layers, the computer can make extremely intelligent decisions automatically. Witness JINI [sun.com].

    So here is my challenge to every programmer/newbie/manager/hardware designer in existence. Stop trying to focus on how we can optimize or tweak each individual computer function so that you gain temporary market share, outperform your competitor, or push your political agenda. Instead, begin pushing for hard standards in the industry. Don't worry about whether they have the "highest performance" or the "coolest widgets". Instead, worry about whether they have the best design, whether they are technically "right" for the task at hand. Stop worrying about making languages that allow you to produce specific functionality in fewer lines of code, and worry about producing the highest level of quality.

    Many people will find this a hard pill to swallow, but the end result will most certainly be worth it.
    • Yeah!
      And the intelligent chair can read my ass for authentication!
      Should help with my diet/training program as well...
      "OOF! Based on your cheek patterns, Captain, I am reprogramming your workout for an extra 15 minutes, and I have revoked your Krispy Kreme access on the replicator. Have a nice day." ;)

    • Besides, the Star Trek systems aren't very secure. They may even be less secure than Windows!

      Think about it. Any alien can beam onto Voyager, and know how to use the computers and take over the ship. Among some groups of fans [nitcentral.com], this is called an "Invader Friendly OS".
      • Oh fine, I'll refute this. Voyager is a piss-poor choice for holding up any sort of technology. In their world you could retake the ship just by rerouting the EPS conduits through the deflector dish in order to create a Tachyon pulse. Besides this being just STUPID (tachyons travel backwards through time and thus aren't very useful for the types of things Voyager uses them for), it is also not very realistic.

        In TNG (which was why I mentioned that series in specific), the computer was locked out on several occasions. "Rascals" and "Brothers" come to mind. I'd say those are pretty secure instances. Oh, and there was an episode of TNG where they saw a tachyon anomaly and Picard remarked something to the effect of, "I thought tachyons couldn't exist in normal space time", denoting that they had no control of tachyons, much less pulses and beams. So please crawl back into your hole. Thank you.
    • by slamb ( 119285 ) on Monday October 08, 2001 @10:59PM (#2404713) Homepage

      Artificial intelligence should be able to translate speech from any language to any language and add new patterns without reprogramming.

      Oh, okay. I'll have that for you tomorrow.

      I'm really tired of people making these dramatic statements that are all but impossible to realize. There are people who have spent their careers trying to do what you mentioned in that paragraph, and they've gotten nowhere, because the task is so enormously complex. Translating from one language to another involves taking one set of nonsensical rules and ambiguities and replacing them with another. It cannot be done without a complete understanding of what is being translated. That's...hard, to make the understatement of the year.

      What if we could build layers of software on top of layers of software, which are then again used to build greater layers of software? This would require that we accept standard functionality at each level of the layering process and then allow people to write ever simpler code due to this great deal of layering. Why should anyone be required to rewrite a quicksort once it has been written?

      There are already so many levels of complexity in any piece of software that it's completely insane. Your idea is nothing new. Here's a partial list of the layers involved in a Java Virtual Machine. I'm sure I'm leaving many things out, putting them in a bad order, etc.:

      1. Gates. Operations on ones and zeros.
      2. Hardware components. I'm sure there are layers I am leaving out here and much more descriptive ways to say this...but I have only a token knowledge of these layers.
      3. The instruction set architecture of the machine. Already a huge step up, this is a layer at which I can (slowly and painfully) produce useful software.
      4. Assemblers. They take somewhat human-readable code and generate machine code from it. (Ever had to do this by hand? Not fun.)
      5. High-level programming languages. They take relatively abstract concepts like loops, functions, etc. and create assembler from them.
      6. Operating systems. (At this point, the "layers" metaphor shows its imperfections...high-level programming languages and operating system functionality are orthogonal, meaning they abstract different things. My order is arbitrary.) They make it possible to access devices in abstract ways, plus a million other useful things for software.
      7. The core library. (libc on C-based machines.) Makes system calls to the operating system easier. Implements stuff common to lots and lots of programs.
      8. Other libraries. There's actually many layers hidden here. For example, on UNIX, libxpm depends on libXt which depends on libX11 which depends on libm which depends on libc.
      9. Various other processes, like the X11 server. These interact with client-side libraries through communication primitives provided by the operating system layer and made more accessible by the libc layer.
      10. The Boehm GC. (For Kaffe. I don't know what GC Sun's VM uses or where exactly it fits in.) This makes memory allocation easier.
      11. C-side Java support stuff. Part of the String class implementation, for example.
      12. The virtual machine. Implements a completely different instruction set.
      13. Java-side support libraries. A lot of the java.lang stuff, for example.
      14. More Java-side support libraries. GUI stuff. This depends on stuff below. Extend to arbitrary depth.
      15. Your code.

      My point? Layering software is nothing new. If you want to add a layer, say so, but don't pretend it's a new concept.

      So here is my challenge to every programmer/newbie/manager/hardware designer in existance. Stop trying to focus on how we can optimize or tweak each individual computer function so that you gain temporary marketshare, outperform your competitor, or push your political agenda. Instead, begin pushing for hard standards in the industry.

      You know, I really hate it when people push for other people to make dramatic changes I suspect they don't understand (evidence: above) and use the word "we". It's entirely inappropriate.

      Stop worrying about making languages that allow you to produce specific functionality in fewer lines of code, and worry about producing the highest level of quality

      The language is one of those layers you advocated. The language itself is a piece of software to be reused. You suggested people add layers to make things possible with simpler code, and this is one way to do it. Don't knock it.

      • That's...hard, to make the understatement of the year.


        I don't believe I stated that it would be easy; I stated it should be the end goal. Automatic computer analysis of data is both possible and desirable. However, as you state, it will take a great deal of research and design to get to this point.

        There are already so many levels of complexity in any piece of software that it's completely insane. Your idea is nothing new.


        I don't believe that I stated that it was a new idea. In fact, it is a very old idea, one that pipes themselves were based on. The only difference is that right now most people are contemplating their navels instead of truly trying to build on top of the existing layers. I mean, how many iterations of the GUI do we have now? The GUI is just ONE tool!

        My point? Layering software is nothing new. If you want to add a layer, say so, but don't pretend it's a new concept.


        No, it is not. What I am encouraging is to standardize each layer. How many people do we hear from even here on /. decrying "standards" because they think they can do better, or because Java is too slow, or because C++ & Linux rulez and you aren't going to take that away!? I am suggesting that very dynamic code such as Java can be used to create the more complex layers that I seek.

        Please keep in mind that this is entirely a theoretical discussion and as such, cannot really be expected as the interface of the near future. It's a great goal, but I do realize that we have to get to point B before we get to point C.
      • You make good points, but I think the layering you describe is not what the OP wants. All these layers of crud supporting Java on Unix are just to give it the abilities of BASIC on a VIC20 - input, output, RAM access. We have a good degree of code reuse for GENERIC purposes, as you illustrate. But we are not yet good at reusing task-oriented code.

        I'll try to explain. How do you find out the temperature in a city? To start with, we still don't have a standard way of coding cities. I encountered this when trying to (automatically) draw a world map showing the hosts in a certain network. The location information for each host was free form. It took a large amount of effort and special casing to get a program that could locate 90% of the hosts on the map.

        On every project I've seen, we reinvent the wheel. Not the OS or GUI, a higher-level wheel. What kind of contact info do you ask from a user? How do you validate it? I think MS is aiming at that problem with Passport. And SOAP may be the first step towards getting computers to talk on a high level without the explicit intervention of programmers. In fact SOAP may be the real answer to the OP's request. The only place I've seen real reusability (in the high-level sense) is CPAN.
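
        For what it's worth, here's roughly what the "temperature in a city" question could look like as a SOAP call from Java. The endpoint, the envelope fields, and the service itself are entirely made up; the point is only that the request format, not the programmer, carries the meaning.

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class WeatherQuery {
            public static void main(String[] args) throws Exception {
                // Hypothetical SOAP request; no such standard weather service is assumed to exist.
                String envelope =
                    "<?xml version=\"1.0\"?>" +
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "<soap:Body><getTemperature><city>Pittsburgh</city></getTemperature></soap:Body>" +
                    "</soap:Envelope>";

                URL url = new URL("http://weather.example.com/soap");   // placeholder URL
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                conn.setDoOutput(true);
                OutputStream out = conn.getOutputStream();
                out.write(envelope.getBytes("UTF-8"));
                out.close();
                System.out.println("HTTP status: " + conn.getResponseCode());
            }
        }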
    • The problem with the Star Trek paradigm is that it is fake! As in Hollywood fake. Dramatic licence is used all over to make the show flow smoothly.

      Many people have tried to build systems like Star Trek, and in fact you can get a lot closer than you may be aware. The problem is that in real life, that interface *sucks*!

      In real life it is painfully annoying and insanely slow to tell a room/computer to do something.
      We just don't realize how good a user interface a lightswitch is.
      The Star Trek data pads are touch sensitive; we have had touch sensitive technology as long as the mouse; but the mouse is what we use because of its acceleration property.

      I have built and analyzed [at CMU, MIT, Microsoft, et al.] many systems designed to solve the exact problem you are talking about.
      And although the technology isn't up to the task, you can evaluate it as if it were by using humans to "Wizard of Oz" the test. Turns out [in my and others' opinions] that the whole idea is dysfunctional and annoying.
      • The problem with the Star Trek paradigm is that it is fake! As in Hollywood fake. Dramatic licence is used all over to make the show flow smoothly.

        I won't refute that. It is merely a paradigm that people can identify with, that's why I used it.

        Many people have tried to build systems like Star Trek, and in fact you can get a lot closer than you may be aware.

        Lemme see, flip phones (communicators), horseshoe bridge design (used on some aircraft carriers), the biobed design was studied by the military, phasers (tazers and other non-lethal weapons), etc. Star Trek mythology has had a large impact on civilization and technology as a whole. Only a fool would deny that.

        In real life it is painfully annoying and insanely slow to tell a room/computer to do something.
        We just don't realize how good a user interface a lightswitch is.


        The idea behind what I am proposing (and what you actually see on Trek) is that the interface is multifaceted, so you can use whatever is convenient. Which is more convenient when you walk into a dark room carrying groceries: saying "computer: lights!" or fumbling for a light switch? Some of the current attempts are admittedly not good interfaces, but given enough technology and design it could probably work quite naturally.

        The Star Trek data pads are touch sensitive; we have had touch sensitive technology as long as the mouse; but the mouse is what we use because of its acceleration property.

        Most laptops use touchpads, I once had a remote control that had a touch LCD, most fast food restaurants use touch screen registers, etc. Having actually used touchscreen cash registers, I can personally say that not only are they a good interface, but they are FAST and generally have a lower rate of error than key or touch-key (these things are a joke) registers. Part of what makes them so nice is that the interface is specialized and reconfigures on the fly. Now that's not to say that touch screens are optimal for all situations, but they definitely are useful.

        To be perfectly honest, you make some good arguments, but they are based on current, emerging technologies. Natural voice recognition (much less speech) and touch screens are in use in many industrial areas, but as of yet are extremely specialized and need time to mature. One day the technology will be there, and then we will be able to have a much more realistic conversation on these pieces.
  • I'll avoid the theoretical for a moment and just speak to this:

    My web designer friends are damaged for life because of mice, and yet we persist... Where do we go from here ?


    Just thought I'd mention that when I started showing symptoms of RSI I went out and bought a couple of trackballs and a couple of Wacom [wacom.com] Stylus tablets.

    For design work, the Wacom [wacom.com] products spoil me rotten, and though it hurts me to say so, I've had nothing but luck with the Microsoft thumb-controlled trackballs [microsoft.com].
    Though if you have political problems with them, try the Kensington [kensington.com] (which are excellent) or Logitech [logitech.com] versions. I might try the new Logitech units myself, actually.

    It really changed the way I work; any desktop I lose to the tablets is mitigated by not having to mouse around. So anyway, no more pain for me.

    • Good points.
      Like any job, you have to be aware of how it can harm you.
      If you lift things all day, you know to lift with your legs, not your back, or else you end up with long-term back problems.

      Same with a computer.. If you rely on your wrists/hands for your income... please, learn to sit properly, type properly, use a mouse properly, and get some exercise on those wrists/hands (and I don't mean from frequent computer use).
      That's all it takes to keep your hands healthy and strong.
  • let if have keyboard shortcuts

    At least in MS Office apps. Just record a macro that types the word "if", and assign a keyboard shortcut to it.
  • by Mr. Sketch ( 111112 ) <<moc.liamg> <ta> <hcteks.retsim>> on Monday October 08, 2001 @06:42PM (#2403988)
    A speech interface would be nice, but only if it was supplemented with a standard mouse and keyboard (and maybe a touch screen) and would accept natural language commands. As far as the user interface goes, it should provide a complete abstraction from applications and the file system, leaving the user concerned only with documents.

    The reason they should also have mice and keyboards is security, so passwords etc. wouldn't have to be spoken (see the recent User Friendly strip series for a humorous take on that), and so things you're doing could be kept somewhat private. Imagine starting up a long build or whatever on your machine, figuring you'd take a short break while everything compiles, and telling your computer 'open mozilla. go to hot asian chicks dot com. click hot and horney'. You might get more than a few head turns from local cube dwellers, unless you bookmarked it and renamed it to something like 'intranet', but the renaming process would also have to be vocalized.

    It should also accept natural language commands for text that is complicated to speak. The main example of this is programming. If I wanted to do:
    for (int i = 1; i <= 10; i++)
        cout << i << endl;
    I would like to just say 'for loop. local integer i from one to ten step one begin. print i and end line. end loop' instead of having to articulate each punctuation symbol as 'for open parenthesis int i equals 1 semicolon i less than equal ten semicolon i plus plus close parenthesis. enter. c out less than less than i less than less than end l', not to mention if I had to put spaces in there too.
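
    Just to illustrate the kind of translation that implies (and nothing more: every phrase below is invented, and a real dictation system would need far more than a lookup table), a toy version might be:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy phrase-to-code lookup table; real natural-language dictation is far harder.
    public class SpokenCode {
        private static final Map PHRASES = new LinkedHashMap();
        static {
            PHRASES.put("for loop. local integer i from one to ten step one begin.",
                        "for (int i = 1; i <= 10; i++) {");
            PHRASES.put("print i and end line.", "    cout << i << endl;");
            PHRASES.put("end loop.", "}");
        }

        public static String translate(String utterance) {
            Object code = PHRASES.get(utterance.trim().toLowerCase());
            return (code != null) ? (String) code : "/* unrecognized: " + utterance + " */";
        }

        public static void main(String[] args) {
            System.out.println(translate("for loop. local integer i from one to ten step one begin."));
            System.out.println(translate("print i and end line."));
            System.out.println(translate("end loop."));
        }
    }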

    The next thing we would need is an abstraction from the use of applications and the file system, which would go in very well with a speech interface. The user would only be concerned with documents and data. The user would just ask the computer to start a new report on photosynthesis, the computer could ask the user what to call it, and they could just respond with a natural name like 'biology 101 mid-term'. Later the user would just ask the computer to open the biology 101 mid-term without having to care whether it was opened with Word or StarWriter or KWord, etc.; it would just be there and they could work on it.

    The abstraction from the file system would be a natural extension of this, because the user doesn't need to know where anything is; the computer takes care of it for them. The user just needs to remember documents/files as he would anything else: 'I was writing that letter to Bob', 'I was working on the bio mid-term', etc. This also furthers the use of the computer as a tool, because it would actually help you get things done and would be easy for anyone to use, since speech is a natural interface for us but keyboards and mice are not.
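
    A bare-bones sketch of that "documents, not files" idea might look like the interface below. The names and the in-memory store are made up; a real system would obviously persist things and pick the right editor behind the scenes.

    import java.util.HashMap;
    import java.util.Map;

    // The user deals only in natural document names; paths and applications
    // are the computer's problem, not the user's.
    interface DocumentStore {
        void save(String naturalName, String contents);
        String open(String naturalName);
    }

    class InMemoryDocumentStore implements DocumentStore {
        private final Map docs = new HashMap();
        public void save(String naturalName, String contents) { docs.put(naturalName, contents); }
        public String open(String naturalName) { return (String) docs.get(naturalName); }
    }

    public class DocumentDemo {
        public static void main(String[] args) {
            DocumentStore store = new InMemoryDocumentStore();
            store.save("biology 101 mid-term", "Photosynthesis is ...");
            System.out.println(store.open("biology 101 mid-term"));
        }
    }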

    The best example I can think of for having something like a touch screen is web browsing or editing documents/preparing presentations, drawing (but maybe a graphics tablet would be better for that), etc., so instead of telling the computer to open the 'Read more' link, I could point and it would open whatever I pointed to.

    Microsoft is trying to do this with things like the My Documents folder and automatically naming documents with the first line of the document, but it's still somewhat kludgy because it relies on keyboard and mouse interaction. They are kind of on the right track in terms of abstraction from applications and the file system, but they still have a ways to go. This is why they have the Documents folder in the Start menu and New Office Document and Open Office Document on the Start menu instead of the Programs menu. This is also why they have extension associations with applications, so the user can just click on a document and it will spawn the right application (or maybe they just stole it from Macs).

    These ideas are nothing new; I've seen them all somewhere else before, but I just thought I'd post them here for discussion because I think they're good ideas. It should also be noted that this type of interface is for the 'average' user, not the average Slashdot reader, since we all like our keyboards and CLIs.
  • I think our whole UI is restricted by the fact that there is only one pointer on the screen at one time. Almost every sci-fi (film|TV show) I've seen has a touch-screen where you can manipulate multiple things on the screen at once. We have yet to be able to do this with X.
  • Mouse? What mouse? (Score:2, Interesting)

    by SiriusBlack ( 313236 )
    All six of my computers at home are using touch pads. Even track balls are marginally more effective than mice. Alternatively, a device I dreamed up years ago (wear a reflective dot somewhere on your person and have an infra-red light emitter/detector track it in 2D) has actually been patented and is being sold by some company -- after everybody laughed at me for suggesting it. The "straw" pointing device (a cylinder that could turn for the y-axis or slide side to side for the x-axis) has never caught on.


    But I think you're right; what I really want to see is a 3D device, not everybody trying to improve on the 2D paradigm. Of course, that means existing drivers and existing operating systems would need to be abandoned.

  • Is Aqua a new paradigm for GUIs? No, answers the writer:
    To see tomorrow's computer systems, go to the video game parlors! Go to the military flight simulators!

    I see where he's going with this...he wants an interface where stuff blows up! Oh, wait, it's already been done [unm.edu] :-)

  • http://www.acm.org/uist [acm.org]

    I attended UIST 94 (in Marina del Rey), and a lot of the work was cutting edge (for that time, and some even for now!). Somebody did some presentation there that was similar to the visual pipe concept. I'll have to drag the proceedings out of storage, though...
  • Since this mentions RSI, I'd like to give a public service announcement I wish I'd gotten 10 years ago: if you are a teenager or in your early 20's and are the typical marathon-computer-session geek, realize that if you don't take small precautions, you *will* get RSI. It's just like smoking - you're OK for 10-20 years and then you start coughing.

    Like many of my friends, I started getting RSI a few years ago, and it got worse and worse. I found out what to do though. You can reduce the RSI (like carpal tunnel syndrome) you're getting by some very simple procedures. The R in RSI stands for repetitive and that's what you get it from - having your hand on the mouse for hours and clicking it. I have a ball mouse at home and a Microsoft one at work so I use different muscles at home and work. I also switch-hit, switching the mouse from left hand to right every half hour. This way, you can stay at the computer like normal, except you're not hurting yourself as much. Switch-hitting every half hour gives your hand half an hour of rest while you keep working. Of course, resting, hand exercises and other things are good too.

    Programming guru Jamie Zawinski, the guy who wrote the original Netscape for UNIX, has a great page on RSI. Check it out, along with other pages on RSI. I really think there should be OSHA regulations at least *informing* young guys that prolonged use of mice and keyboards can damage their wrists and leave them unable to type.

    JWZ's RSI page is:

    http://www.jwz.org/gruntle/wrists.html [jwz.org]

  • So, I have to say, I was surprised [surpriseaz.com] to see [seaweb.org] a story on Slashdot [notslashdot.org] with so many damn hyperlinks [kuro5hin.org] in it. Not to mention that some of them were rather trollish [adequacy.org].

    But what really sucks [hoover.com] is that Slashcode's [kuro5hin.org] inane . [goatse.cx] link exposer for people who are too stupid [aol.com] to look at the bottom of their browser [mozilla.org]'s window [windows.com] to see the URL that they're clicking [clearlandmines.com] on has basically ruined [cmdrtaco.net] this joke [slashdot.org].

  • Until my computer scans my brain and knows what to do, nothing is going to matter. Speech interfaces will change things for certain applications, but because computers are still oriented around words, it will be cumbersome to work with a computer in applications where a mouth just is not fast or versatile enough (I can speak one word at a time, but I can independently operate at least four keyboard+mouse buttons simultaneously). Speaking to a computer is also useless for much of the computer-using community, as many people become hoarse from talking over extended periods. It would also make offices intolerable, as the noise would become a terrible racket, and phone calls impossible.

    Others have mentioned eye motion tracking. A cute concept, but worthless for anyone who regularly moves his head while using a computer (as I do endlessly, moving back and forth between books, a phone, and several PCs at once, typing nonstop).

    Touchscreens have been around for decades, anyone familiar with one already knows why we don't use them.

    To advance at a higher level, computers must become able to interpret thought. It sounds mad, but it is an imperative.
  • Either that or the engineers who build life imitate Star Trek, but anyhow, let's see what they use...

    Very good NLP, and buttons.

    Physical buttons in TOS; touch panels subsequently. They have pointers but seem to only use them for signatures.

    Hmmm... Of course they don't seem to have taken account of the fact that you can't use an upright touch panel for more than a few minutes before your arms feel like they're going to drop off.

Say "twenty-three-skiddoo" to logout.

Working...