
GUI Research - Is it Still Being Done?

Davor Buvinic asks: "In my spare time I like to study GUIs. Recently, I was amazed by the new design that I saw in the previews of the future MacOS X, until I discovered on the Web that things like file dialogs attached to windows date from the earliest prototypes of the Apple Lisa (July 1981). My question is: is there anything new in GUI design? The newest design I've explored was Rob Pike's ACME user interface for programmers. Is anybody (individual or research center) working on a new GUI design? I mean a GUI for the mainstream, not immersive 3D virtual environments that probably need a powerful Silicon Graphics machine to run." Have we done as much with the GUI as we possibly can, or are there other reasons behind the lack of technical innovation in most desktops?
GUI Research Is Dead?

  • Perhaps instead of reinventing the 2D GUI interface (how we interpret the screen), we should focus more on how we interface with the machine itself...
  • by ryanr ( 30917 ) <ryan@thievco.com> on Wednesday July 05, 2000 @10:35AM (#955628) Homepage Journal
    You see them all the time in little Internet toys, and media programs. Look at the KAI graphics stuff for example. Look at any of the "skinnable" applets. I've seen some MP3 players that look downright weird.

    The problem is, I hate most of them.

    I'm afraid that GUIs (as they exist in the mainstream now) have been hard-coded into our brains. New GUIs have a backwards compatibility problem like you wouldn't believe; they have to be backwards compatible with people.

    Unfortunately, we've learned the current GUIs so well, that any major departure is just "wrong."

  • In consumer desktops, if you change things radically, you are apt to lose the average users. I personally hate vast changes in design, as visual cues are a lot of the way I navigate. You make someone learn a brand new way, without careful incremental change, and you have the common person fighting their machine....

    Then again, I'm not sure there is much difference, since most people use Windows and fight their machines all the time anyway. :P

  • by damaged ( 60781 ) on Wednesday July 05, 2000 @10:36AM (#955630)
    Check out this link [cmu.edu] to see what's going on at CMU's HCII. All sorts of wacky stuff...
  • by linuxonceleron ( 87032 ) on Wednesday July 05, 2000 @10:38AM (#955632) Homepage
    People like stability when it comes to how their work is done. If I were to sit down at a Mac, a Win9x box, and Linux running GNOME or KDE, they would share many of the same ideas when it comes to operating them (icons, menus, windows...). I think you could put any reasonably computer-aware person in front of any of these GUIs and they would be able to sit down and work. For a new style of GUI to become popular it would have to make the work of the user easier without having a high learning curve. Think of how long it took for people to change away from DOS to Windows, and most people didn't use Windows until at least version 3.0 because the original versions had many flaws. If there are new GUI paradigms (god I hate that word) being developed, it will be a long time before they are accepted. Also, the new single-purpose 'internet appliance' market may start to change interfaces into something more simplistic that is instantly obvious how to use (think TV/stereo here), but I doubt people would want such a simple interface for their computer.

  • by BoLean ( 41374 ) on Wednesday July 05, 2000 @10:39AM (#955633) Homepage
    The flip side of this is that there have been essentially no breakthroughs in monitor development either. Every couple of months some new story about reasonably priced flat panel monitors or 3D monitors appears, only to fade into obscurity. Maybe when desktop real estate gets bigger than 17" diagonal and more than 2D we'll see some novel approaches. In the end though, "form follows function", just as the shortest distance between two points is a straight line.
  • by 11223 ( 201561 ) on Wednesday July 05, 2000 @10:39AM (#955635)
    We've reached a level of a few comfortable paradigms. First of all, the web is a very powerful GUI idea for delivery of applications, and in its modern form has only been around for a few years. It's exploded faster than the original WIMP GUI concept did.

    Secondly, there's much refining being done in the area of the GUI. Just look at some of the enlightenment [enlightenment.org] screenshots to see what I'm talking about. Different, but very powerful. (Those screenshots have sucked more than a few new users into Linux!)

    Everything else has been a "refinement" process in the area of GUI research. So, here's my idea for a new GUI:

    One of the best features of the newest refined GUI's is customizability - the ability to choose what the OS looks like. Let's take that to the maximum - a generic plugin-based system that lets skin authors completely change the feel of the OS. The User Interface would load plugin modules (swappable at will) that perform the following functions:

    • Task management - switching between windows on the screen.
    • File management - browsing the files on the hard disk
    • Program launching - starting up programs from some sort of menu
    • Menu management - if one is loaded, the active program's menus are displayed in this widget, a la MacOS X or NeXT.
    • Others I can't think of right now...
    This would allow users to completely change the look and feel of their desktop interface. The UI could switch from a convincing Mac clone to a Windows clone to a BeOS clone to a Palm clone to something completely new and uncharted in a matter of seconds! Of course, it would still be based on the same ideas of dialog, widgets, etc. as current interfaces, but it would be a step towards complete user-control of man-machine interaction.
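    A minimal sketch of the swappable-module shell described above, in Python. All names here (DesktopShell, TaskManager, and so on) are hypothetical; this illustrates the idea, not an existing project:

    ```python
    from abc import ABC, abstractmethod


    class TaskManager(ABC):
        """Switches between windows on the screen."""
        @abstractmethod
        def switch_to(self, window_id): ...


    class FileManager(ABC):
        """Browses the files on the hard disk."""
        @abstractmethod
        def browse(self, path): ...


    class Launcher(ABC):
        """Starts programs from some sort of menu."""
        @abstractmethod
        def launch(self, program): ...


    class DesktopShell:
        """Holds the currently loaded modules; any of them can be swapped at will."""
        def __init__(self):
            self.modules = {}

        def load(self, slot, module):
            # e.g. shell.load("tasks", MacLikeTaskManager())
            self.modules[slot] = module

        def swap(self, slot, module):
            # Swapping a module changes that part of the look and feel on the fly,
            # e.g. from a Windows-style taskbar to a Mac-style one.
            old = self.modules.get(slot)
            self.modules[slot] = module
            return old
    ```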
  • A short article [newscientist.com] in today's New Scientist mentions a few efforts at 3D graphical interfaces.

    Check out bottomquark [bottomquark.com] to discuss the latest science news.
    GrnArrow

  • Pie menus are pretty old as far as GUI ideas go, and they work. Yet no one uses them. Why? I downloaded the GTKPieMenu widget [mff.cuni.cz] and played with the test. It's amazing how much easier those things are to use than regular menus (once you get over the disorientation, of course). Are there any mainstream programs, in any widget set, that actually use pie menus? I'm sure there's plenty being done with new GUIs, but if nothing uses them it's not likely to be obvious unless they're all making press releases.
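    A rough sketch of why pie menus are quick to use: picking an item is just a matter of the angle from the menu's center to the pointer, so every option sits at the same short distance. This is only an illustration of the idea, not the GTKPieMenu code:

    ```python
    import math

    def pie_menu_pick(items, center, pointer):
        """Return the item whose wedge the pointer falls in."""
        dx = pointer[0] - center[0]
        dy = pointer[1] - center[1]
        angle = math.atan2(dy, dx) % (2 * math.pi)   # normalize to 0..2*pi
        wedge = (2 * math.pi) / len(items)           # equal-sized wedges
        return items[int(angle // wedge)]

    # A short drag from the center is enough to land in a wedge.
    print(pie_menu_pick(["Open", "Save", "Close", "Quit"], (0, 0), (3, 4)))
    ```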
  • Have you ever thought about the task of reteaching computer interfaces to everybody? It'd be nothing more than an annoyance to have to relearn how to use a computer when all you want to do is type up a document. If an interface works, you should use it. Oh my god, I've met people who didn't know how to use a mouse, and when you think about that... radical new interface designs don't sound too appealing, especially for tech support people. However, there is going to be a time when idiot-proofing the interface and making it 'friendly and familiar' is going to get too much in the way of progress. Then we'll see how the morons ^H^H^H^H^H^H end-users take to new interfaces.
  • by joshv ( 13017 ) on Wednesday July 05, 2000 @10:42AM (#955639)
    I think with the limitation of a 2D display at less than 100 dpi, we have approached the limit of what can reasonably be done with a modern GUI. The original GUIs came about as the result of cheaper raster-based video hardware becoming available and supplanting the previous character-based hardware.

    The attempts at 3D GUIs don't do anything for me, when the display really isn't 3D, and that icon in the distance is illegible because my screen's resolution sucks. We need better and radically different hardware before any major advances in user interface design can occur.

    -josh
  • by FascDot Killed My Pr ( 24021 ) on Wednesday July 05, 2000 @10:42AM (#955640)
    I read an interesting, online-only article at Linux Journal about 18 months ago on a topic called "color reactance". Essentially it advocated (and partially demonstrated) how you could have programs set "traffic lights" (or window frame colors, or something) to indicate states. For instance, a program that needs attention could be set to flash yellow whereas one that is finished could flash green (or whatever). (A tiny sketch of the idea follows below.)

    When I first saw the Aqua screenshots, I thought Apple had done this. They have a trio of traffic lights in the upper left of every window. But it turns out they are just eye-candied versions of the old close/minimize/maximize buttons.
    --
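    A tiny sketch of the "color reactance" idea described above: the program reports a state, and the window manager maps it to a frame color. The state names and mapping are invented for illustration:

    ```python
    # Hypothetical state-to-color table; a window manager would repaint or
    # flash the window frame whenever the program changes state.
    STATE_COLORS = {"needs_attention": "yellow", "finished": "green", "busy": "red"}

    def frame_color(state):
        return STATE_COLORS.get(state, "gray")   # neutral frame by default

    print(frame_color("needs_attention"))   # -> yellow
    ```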
  • I just started reading Jef Raskin's book, The Humane Interface. Raskin is one of the early Apple guys -- it's worth checking out. He's got a series of rules derived from I, Robot: First, an interface should never harm a user's input or allow it to come to harm.

    I wish all the current OSes I use would follow that one...

  • I don't know if this counts as research, but blender (www.blender.nl) has a totally fresh (if hard to use at first) take on a space-saving GUI.
    --
  • There were still some interesting things going on. I don't recall the links, but I think there was a lens-based interface being worked on for a while (the idea being you could look at your desktop through the normal lens, then pull different types of lenses out and 'see' new properties of the system). I'm thinking Xerox PARC, maybe?

    There are also the hypertree widgets that are pretty cool. There are some java demos of those somewhere.

    researchers do everything (assuming they can get some kind of funding)
  • by namesAsh ( 88101 ) on Wednesday July 05, 2000 @10:43AM (#955646)
    cool stuff. Here's a link for the UMD HCIL: [umd.edu]
  • What is interesting is that GUIs have until now been limited by their input devices, having been tied to the mouse for over a decade. In all the years it has been around, the mouse has hardly changed. Okay, scroll wheels and context-sensitive buttons have been a big improvement, but it is still faster to type in a word processor and access menus using the keyboard. Some combination keyboard/trackball devices are available, and these reduce the distance the hand has to travel compared to using a separate keyboard and mouse. However, I feel that the real breakthrough in GUIs will come when voice recognition kicks off. Already, ViaVoice can open a program and move it around the screen.

    Having said that, a scroll mouse is ideal for browsing the web, so I guess it's a case of horses for courses.
  • by Panaflex ( 13191 ) <{moc.oohay} {ta} {ognidlaivivnoc}> on Wednesday July 05, 2000 @10:47AM (#955650)
    It really depends... it's kind of mushy.

    Most of the hard academic research on GUI's has already been "done." (Meaning that people going for government grants will find it hard to compete with some of the other new technologies.)

    Most of the research is being done on 3D desktops (Microsoft has one, Berlin, SGI, etc.) that take the traditional file manager and twist and turn it around.

    MIT has a textual "GUI." It's really not so much a GUI as it is a different way to present large sets of data in a minimalist fashion. (Think the Matrix..it maps text on 3D curves. Books essentially "rotate" pages constantly... at least that's what I remember it as).

    Another minimalist one was Rob Pike's 8 1/2 (used on the Plan 9 OS). It reminds me of emacs on crack, but it is a very effective way of managing text content.

    But to be fair, people are pretty much in love with their buttons and menus et cetera. Most of the GUI work being done is implementations and fanciful stuff.

    I'm on the implementation side. I'm working on getting GTK on the X server side. That way you shift the drawing operations to the server and keep the client happy just handling events and widget control. The amount of communication between processes will be DRAMATICALLY reduced. Plus it will be 100% backwards compatible.

    Then maybe I can convince someone to rewrite XIE into something more like imlib2 ;-)

    Pan
  • by q[alex] ( 32151 ) on Wednesday July 05, 2000 @10:47AM (#955652) Homepage
    Don't forget the PalmOS, which, while certainly not terribly original, was (IMHO) a good move towards solving UI problems in a 3 by 5 inch (or so) space.

    I think that GUIs _have_ been hard-coded into our brains. The Xerox PARC facility encountered that when they were designing their GUI/Smalltalk stuff... people learn metaphor patterns, and tend to use those patterns (a paradigm, if you will) to interpret new information.
  • by Obscura ( 10237 ) on Wednesday July 05, 2000 @10:48AM (#955653) Homepage
    The fine folks over at the Archimedes Project [stanford.edu] are researching a bunch of stuff, new GUI designs included.
    The Total Access System project seeks to provide access to technology through a clean separation of the information to be accessed from the form of presentation required for individual users. The project is designing personal accessors that will talk to host computers and computer-based devices through infrared communications links. The accessors will thus become part of a three-way system, the Total Access System, that has been designed by Neil Scott. A complete TAS includes: an individualized accessor, an interface to a host computer or computer-based device, and a standardized link connecting them.
    They are looking for volunteers if you're interested in helping out. I met them on a list-serv I was subscribed to and their work is very interesting.
  • by Golias ( 176380 ) on Wednesday July 05, 2000 @10:49AM (#955655)
    As long as STDIN is a keyboard and pointing device combination, and STDOUT is some kind of monitor, then the limits of our physical contact with the PC will demand that we use an interface built on pointing and typing. Most of what makes that kind of interface functional has been thought of, and the current paradigm is good enough that switching is not yet worth the effort.

    CLIs rely on human memory... we need to learn to speak the computer's "language", and often need to remember what the computer is currently up to. GUIs rely on our visual pattern recognition abilities. We "see" the commands we want to execute, and have a "finder" or "taskbar" to remind us what is going on. In both cases, the interface is driven by our choices of how we want to communicate with the system, and once you make that decision, a lot of the rest of the design is mostly aesthetics.

    The change will come when an interface that is obviously better than typing and clicking comes along. Whatever it is, it will need to be enough of a step up to be worth learning. There have been hundreds of "better" keyboards, but they don't get adopted by people because they are not enough of an improvement on the crappy QWERTY (or Dvorak) that we already know how to use. The next step to succeed will most likely be something completely different than a keyboard, and it will introduce the need for a radically different UI.

  • by gfxguy ( 98788 ) on Wednesday July 05, 2000 @10:49AM (#955657)
    We haven't reached the limit of what we can do with the standard 2D interface design. In fact, new things are being tried all the time, and it keeps coming back to two things:
    1. Every time you try something new, the mainstream complains that it isn't enough like the old, and

    2. Most changes are made just to be different, to distinguish your UI from someone else's.
    The second thing results in crap like MS's horizontal file dialogs instead of vertical, and things like the quicktime interface which is an enigma to everyone that hasn't spent the time needed to figure it out. What do four dots mean again?

    The first thing is what keeps major overhauls of existing UI's from happening.

    I've done lots of research into GUI design myself, and it boils down to this: only design and use something new if it's going to make using the product easier. Unfortunately, many people nowadays (Apple, for example) go way off the deep end on design, with little respect for the user experience.

    There are lots of outdated concepts, too, like real-world metaphors... why limit yourself to what some poor designer had to cram into a 15cm x 2cm area of a portable CD player?
    ----------

  • I tend to agree here...these GUIs are definitely hard-wired into our brains. The average user isn't exactly tuned in to the idea of abstraction, so they don't think in terms of *what* they are doing, but more in terms of the steps needed to get where they want to go. It's almost as if they have the black box paradigm down, but just don't think of it in the correct terms.

    This is very similar to a concept economists call path dependency. The QWERTY keyboard layout, for example, has been shown to be far inferior to the Dvorak layout, but it persists as the de facto standard (and Dvorak never caught on even when computers came along) because so much time & effort has been invested in the QWERTY layout.

    We human beings are creatures of habit, and we generally don't like change. That's why we keep electing the corporate whores to public office...... =[;-)]
  • by dutky ( 20510 ) on Wednesday July 05, 2000 @10:56AM (#955666) Homepage Journal

    While much of the basics of human interaction and GUIs was worked out years ago (at Xerox and Apple) there are still people thinking of better ways to do things. Check out Bruce Tognazzini's web site AskTog [asktog.com] for some coverage of this topic. He has tutorials [asktog.com] on user interface design, cogent criticism [asktog.com] of current GUIs, suggestions for improvements [asktog.com], as well as sundry and other essays [asktog.com].

  • by redhog ( 15207 ) on Wednesday July 05, 2000 @10:56AM (#955667) Homepage
    We do invent new looks all the time, but no new feels. You have a huge set of different window managers and themes, each providing the same features.
    We are stuck in the desktop, tools, and windows metaphors. You must start a tool (program) to edit your picture. You have folders, either as a tree, or as windows with icons that can be clicked to open new windows. You have windows which can overlay each other, but their placement is largely up to the user.
    There are very few new things coming up, and even the fresh air is old. Take a look at The ROX Desktop [sourceforge.net] for example. A new and cool idea. Which is old.
    I think the major problem is that people are so used to how it works now that they cannot come up with something totally different any more.

    And to contradict myself, there are some new ideas, like the PalmOS, where you don't have files, and in particular, you don't have "save". You modify your text/picture/whatever directly. Nothing is "in RAM" and must be "saved". But that is one of the few new things I've seen...

    --The knowledge that you are an idiot, is what distinguishes you from one.
  • That depends upon your definition of radical, I think. The Windows 95 desktop was a pretty significant departure from Windows 3.1 and it caught on quite well.
  • by rcm ( 71569 ) on Wednesday July 05, 2000 @10:59AM (#955672)
    There's tons of good GUI research being done: The ACM CHI and UIST conferences over the past 10 years are full of good stuff that hasn't made it to production yet. It'd be great to see some of these ideas incorporated into open source projects.
  • There has been shockingly little innovation in the core fundamentals of computing. It has been accurately, if simplistically, stated that the entire history of personal computers has been one of reinventing what happened on mainframes 30 years prior.

    Nearly all of the important innovations in the GUI took place prior to 1970. Ditto CPU design, the OS kernel, programming languages, storage, networking, etc. All that we have done since 1970 is improve the implementation.

    What have we invented since 1980?

    Umm... hyperlinked multimedia (combining some of the better ideas from the 1940s through 60s). The microkernel and multithreading (minor refinements of 1969 kernel technology). Oh, wait, here's one: Distributed component software. And, of course, the blind user license agreement. Yuck.
  • I think the best interface is that which is natural to people. I'd like to see a computer display that is like (electronic) paper and I could input a number of ways including writing on the "paper".

    Imagine also a stack of 52 electronic-display cards: you could play cards, and a computer could calculate probabilities for you based on the cards it senses in your hand; or you could use the cards as cue cards as you prepare for a speech (as notes are displayed on each of them).

    Moving from a command line text interface to a graphic interface was an amazing revolution. The creation of a GUI made it more like working on a "desktop". Moving to a physical interface is natural and would be a further advancement.

  • by Coz ( 178857 ) on Wednesday July 05, 2000 @11:06AM (#955685) Homepage Journal
    Yeah, but don't forget our color-blind brethren. I have a buddy who was on a review board for a product that used color-coded borders to indicate state of a document - informal, draft, released, revised, etc. Unfortunately, they all used about the same luminance - his red/green colorblind eyes couldn't tell that there was a difference.

    Then there are the all-the-way blind. I wonder how /. translates onto a Braille keyboard?

  • by maraist ( 68387 ) <michael.maraistN ... m ['AMg' in gap]> on Wednesday July 05, 2000 @11:07AM (#955687) Homepage
    Perhaps I'm naive here, but I don't see why you dismiss 3D as a GUI. You imply that serious graphics hardware is really necessary for 3D, but these days 3D support is standard, especially in the optimized forms such as Glide and Direct3D (I know, I'm a heretic for not promoting GL instead of these proprietary standards).

    In playing with Silicon Graphics machines I was not overly impressed with their GUI design. It had only a few tiny improvements, such as enhanced graphical directory navigation at the command prompt and the scaling of just about everything. These days, the prompt is being phased out. Hell, even in Linux, the "task bar" is replacing more and more everyday command-prompt operations with mindless point and click. And forget about the prompt in Windows. Also, it wasn't too long ago that DirectDraw allowed graphics scaling on the Windows platform.

    I've seen a couple interesting concepts utilizing 3D. The most profound (for me at least) was the perspective view. Namely for those of you, like me, that have window-itis (never less than 10 windows open at a time), only those windows in central view are fully sized and detailed, surrounding windows are visible though compressed / distorted (the actual method I think was to provide a geometrical box which you were looking into.. All non-selected windows were on the periphery of the box and thereby taking up less space).

    Perhaps you are thinking more along the lines of the movie Disclosure where you make use of a virtual reality helm and gloves. Computationally, VR is no different than standard 3D games (first person with multiple complex input devices). The only real complexity with the Disclosure model was the voice interface (which required AI), and possibly the scanning device that renders your avatar.

    VRML could have been the next big GUI, but it seems to have failed miserably, probably because it never found its killer app. Theoretically we could have all mimicked our working environment, and then applied various database queries around certain triggers, and you could have achieved a low-res version of Disclosure.

    I think Apple's integration of PDF into their GUI is definitely a step in the right direction, though as you said, it's nothing new (NextStep had postscript built-in in a similar fashion). Unfortunately that's really only for show, and doesn't really provide too much additional functionality.

    Hell, the whole concept of the task bar is a remarkable advancement in my opinion. Anything that allows me to seamlessly manage multiple services on my computer (or to blend them into one big service) is an advancement in the science. Additionally, treed directories that expand and collapse on command (with the ability to perform operations on the tree as if you were at a command prompt) are functional (though they have been around for well over a decade, even in the DOS world a la XTree Gold, etc). Intelligent drag and drop has been an essential addition to the GUI world (thankfully even the UNIX world is catching up in this respect). Recent advancements have been the utilization of HTML/XML to design dynamic GUIs. MS has been attempting to take this approach with their Active Desktop, though that seems to be too much fluff. Gnome, on the other hand, is doing a good job of using XML for this purpose.

    I think a generic (and more importantly open) rasterizing model, such as PDF, along with the dynamic window modeling of XML, could revolutionize graphics, if for no other reason than to simplify the process and thereby open up GUI development even to non-programmers (just look at how many web pages are maintained by clueless computer users). With the ever-growing complexity of new software, it is most important to devise tools that simplify the development process, which in turn could attract people from other disciplines.
  • by BurnMage ( 69239 ) on Wednesday July 05, 2000 @11:09AM (#955689) Homepage
    I think good ground for GUI research is in games these days. Games have room to be artistic and to try new things that a mainstream OS can't get away with. There are many types of game GUIs/UIs, and the data that games interface with is very diverse as well. I was just discussing with friends how ingeniously Blizzard has put together its UIs. Have you seen Diablo I? Have you seen Diablo II, and how they improved on Diablo I's interface, and the new features they have added to it to make the game easier to play? In Diablo II it's child's play to customize your two mouse buttons, even during tense situations, and it is necessary during gameplay to do so. With a couple of clicks you completely change the way you interact with the program. There is even a fully customizable keyboard map so you can choose something with just a keystroke.

    Starcraft, as well, I believe is ingeniously engineered. With only two mouse buttons and maybe 50 unique 'units' to control, each with an average of 4 or 5 commands, the game is set up so that selecting a unit and right-clicking gives the unit an implicit command that depends on the situation (a rough sketch of the idea is below). If you select a unit and right-click on a space, he goes there. Right-click on an enemy, you attack it; on a transport, you try and get inside it. Yet again, there are keyboard shortcuts and buttons for about everything, with 'tooltip' help texts that tell you what a button does, with the keyboard shortcut highlighted. These help in the situations that implicit commands don't cover.

    What about other games? What do people think about the UI in Everquest, for example? It's 3D, but it doesn't have to be as responsive as Quake 3, and so it works differently and has different commands, different things it expects from the user. Modern games are the petri dishes for UI, AI, and 3D programming, but people usually only look at the 3D part.
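    A rough sketch of the implicit right-click command described in the Starcraft paragraph above: the default action is picked from the kind of thing you clicked on. Purely illustrative, obviously not Blizzard's code:

    ```python
    def default_command(unit, target):
        """Choose an implicit command based on what was right-clicked."""
        if target is None:                # empty ground: just go there
            return ("move", target)
        if target.get("hostile"):         # an enemy: attack it
            return ("attack", target)
        if target.get("transport"):       # a friendly transport: climb aboard
            return ("board", target)
        return ("follow", target)         # anything else friendly

    marine = {"name": "marine"}
    print(default_command(marine, None))
    print(default_command(marine, {"name": "zergling", "hostile": True}))
    print(default_command(marine, {"name": "dropship", "transport": True}))
    ```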
  • I believe that interfaces are moving beyond the standard GUI. Much research is now being done in cybernetics and other more holistic interface views. I think that the emphasis of research as a whole is turning to more than just the GUI; it's going to speech, motion, phone, web, device. The GUI has more life in it, and will continue to evolve, but in concert with all of these other interfaces. When you think about it, your pager is just another interface to a computer.
  • by Danse ( 1026 )

    This is basically the same conclusion I came to. Had to read pretty far down the page to find your post, but I figured that somebody had to have posted something like this by now. I was trying to think of what other sort of interface you could have when you're using a mouse and keyboard, but decided that what we've got is probably as good as any other interface designed to be used with a mouse, keyboard and monitor. Once we figure out a better way of communicating with our computers, we'll come up with a better GUI (assuming it would still be useful).

  • by Sabalon ( 1684 ) on Wednesday July 05, 2000 @11:11AM (#955693)
    But I don't really see anything new there. Okay...so enlightenment can look pretty, however, it is still the same thing - sliders, radio buttons, pushbuttons and little icons to close windows.

    Underneath, once I learn that the Whammy is actually the minimize button, we are back to the same old, same old.

    I don't think I'd call it refining as much as I'd call it dressing up.
  • Pie menus are not used in mainstream applications because someone has patented the idea of pie menus. Unfortunately, I do not have the information in front of me about the patent holders. Pie menus are extremely efficient; the benefits can be explained by Fitts' Law, which relates interaction time to the distance from the pointer to the object which needs to be clicked and to the size of that object (see the quick numbers below). Since all menu options are equidistant, each option can be selected quickly.
    --weenie NT4 user: bite me!
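    To put some rough numbers on the Fitts' Law point above, here is the usual Shannon formulation, movement time = a + b * log2(distance/width + 1). The constants and distances below are made-up placeholders, not measured values:

    ```python
    import math

    def movement_time(distance, width, a=0.1, b=0.1):
        """Fitts' Law (Shannon form): time grows with log2(D/W + 1)."""
        return a + b * math.log2(distance / width + 1)

    # Linear menu: the last item is much farther from the pointer than the first.
    print(movement_time(distance=40, width=20))    # a nearby item
    print(movement_time(distance=320, width=20))   # a distant item, noticeably slower

    # Pie menu: every item sits at the same short distance from the center.
    print(movement_time(distance=60, width=60))
    ```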
  • by jabber ( 13196 ) on Wednesday July 05, 2000 @11:16AM (#955702) Homepage
    Actually, M$ has done quite a bit of study in the area of UI usability.

    One particular conclusion I recall is that the UP and DOWN buttons on a scrollbar should be on the same side of the scrollbar. Sounds weird, but that's mental inertia. The reason for this is that clicking on these buttons moves the window content by a line, while clicking inside the scrollbar moves the contents by a page. The finely grained movement, coupled with the very small target area of the UP and DOWN buttons, tends to be difficult for users who need to alternate between these buttons often. Placing them close to each other makes for one precision/proximal movement only, not a macro-movement along the scrollbar plus a proximal movement to hit the button.

    This came out of M$'s Usability labs and is documented in an M$ Press book on UI design (forgot title). Of course when the M$ market drones got a hold of this idea, it mutated, and M$ products now have small PGUP and PGDN buttons on the bottom of the scrollbar, which is redundant since the scrollbar already provides these functions implicitly.

    Another 'betterment' (which M$ 'extended' just to be different) is the NeXT (and other Unices) convention of putting the scrollbar at the left side of the window, instead of the right. The reason for this is that most languages are read from left to right, and the beginning of a line of text provides more information (lists etc.) than the end of a line. Sliding the window off-screen to the right, to make space on the desktop, would be more usable if you could still scroll and still read the beginning of the lines of text contained in the window - hence, the scrollbar should be on the left, not the right edge of the window. A similar case may be made for placing the horizontal scrollbar at the top of the text area instead of at the bottom.

    Much research is being done on UI conventions. So much so in fact, that the EU (European Union) has a Standards Document for UI designers that all companies selling software (in certain areas of software, i.e. safety and fiscal) need to comply with for reasons of non-ambiguity and legal responsibility. A friend in Germany will be forwarding this doc to me, and I'll make it web-available as soon as I receive it.

    The doc outlines things such as standard wording that is easily translated between languages, standard button layouts, the upper bound for the number of controls that should/may appear in a single interface container (dialog box etc.), standard icons that appear on pop-up dialogs, color schemes... I'll know more after I actually read it.

    In the meantime, do a /. search for OS X. A recent criticism of the Aqua interface mentions many UI considerations that Apple people completely ignored when Aqua was developed.
  • Strongly disagree! I think one of the worst attributes of the Mac is the unwillingness to put words next to icons to give a hint what they mean. The fact is that it's extremely difficult to select a picture that conveys obviously and unambiguously the function of an icon.

    The reason words are usually left out is that it makes it less expensive to internationalize a product, but that's at the expense of usability.


    --

  • The PalmOS feature you describe is called orthogonal persistence, and it's anything but new: it's been around since the old days of raw-disk, pointer-oriented databases.

    Thing is, it's very much incompatible with a file-oriented paradigm (and therefore with the Unix philosophy, amongst others). This is why it really didn't catch on in most environments. (Not to mention the fact that it can be really horrible to implement, especially in a limited environment and language combination such as the one provided by Unix/C.)
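    A crude sketch of orthogonal persistence as described above: every modification is written through to storage immediately, so there is no user-visible "save" step. A real system would do this transparently at the memory/VM level rather than per object; this only illustrates the user-facing effect:

    ```python
    import pickle

    class PersistentDoc:
        """A toy document that persists itself on every change."""
        def __init__(self, storage_path):
            self._path = storage_path
            self._data = {}

        def set(self, key, value):
            self._data[key] = value
            self._flush()                      # durable immediately, no "save"

        def _flush(self):
            with open(self._path, "wb") as f:
                pickle.dump(self._data, f)

    memo = PersistentDoc("/tmp/memo.db")
    memo.set("text", "Pick up milk")           # already on disk at this point
    ```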

  • by NYC ( 10100 ) on Wednesday July 05, 2000 @11:19AM (#955705)
    Many of you may not like Microsoft's products or their marketing, but their research group does some pretty innovative stuff.

    Check out the User Interface group [microsoft.com] which is part of the numerous research groups [microsoft.com] at Microsoft.

    There are many universities such as MIT, Georgia Tech, and CMU which do user interface research, but studies (conducted at CMU) have shown that it takes a long time for advances made in universities to reach actual products.


    --weenie NT4 user: bite me!

  • Small devices like PDAs, cell phones [nokia.com], wrist watches [ruputer.com], alpha two-way pagers [motorola.com], etc. seem to provide a fair amount of challenge and possible room for creativity with 6x6 icons and drop-down menus that take up most of the screen.
    --
  • It depends on what you mean by GUI research. There is a lot of bullshit "let's copy the Mac and call it GUI research" at your lower-quality schools (and industry). Frequently, "themability" or similar crap gets passed off as GUI research. I think your better places are working on real stuff though (i.e. not fluff like themes).

    Plan 9's GUI applications have a lot of innovative ideas, like: use cut and paste for menus instead of pulldowns (pulldowns are crap), and make dialog boxes appear on the side of the screen where they will not interrupt the person who is really doing something (traditional dialog boxes are a dumb idea too). Anywho, it's pretty cool ideas with real research to back them up (unlike 99% of "GUI research"). I'm sure the good schools like MIT have one or two people with even more brilliant ideas.
  • When I can't figure out which %!@&^% eyeball to poke to get the thing to minimize, that might as well be a new GUI.
  • by Spasemunki ( 63473 ) on Wednesday July 05, 2000 @11:27AM (#955714) Homepage
    The wacky world of depth. I still think a concept whose time is going to come eventually is the 3D interface. Someone has already mentioned psDoom [sourceforge.net] and the Doom System Administration Tool [unm.edu]. When the Doom SysAdmin program was on /. for the first time, someone mentioned that humans have a much better spatial memory than they do for abstract data like text or numbers. I don't remember the number for the pizza guy, but I never forget that the phone book sits next to the phone. We're used to reasoning with relation to spatial objects, knowing what sort of things should be where. A 3D interface doesn't require the sort of complex "jack in your nervous system" schlock that always emerges from cyberpunk novels; for a lot of people, something like Doom would be good enough. Just post some signs on a wall or something. Moving from room to room in a building is intuitive; it's something we do every day of our lives, from a very young age. A 3D interface takes advantage of our natural inclination to use sight as our primary sense. Figuring out the 'theme' of a room or a location is much easier for most people than figuring out and recalling something abstract, like what files end up in what directory. It's worth some research, I think, and I hope people continue to look into it.

    "Sweet creeping zombie Jesus!"
  • by Kaufmann ( 16976 ) <rnedal AT olimpo DOT com DOT br> on Wednesday July 05, 2000 @11:29AM (#955716) Homepage
    Off the top of my head...

    • Squeak [squeak.org], a Smalltalk-based language/OS/IDE/VM developed by Disney. Specifically, try to find stuff about Morphic there; it was born in Self, a prototype-based (classless) relative of Smalltalk, but it's been adopted officially as Squeak's UI system. It's pretty innovative, taking the approach of representing all objects graphically on screen through the notion of "morphs".
    • ETH Oberon [oberon.ethz.ch], another integrated language/OS hybrid, with a very different UI with some interesting ideas.
    • Gentner and Nielsen's amazing article The Anti-Mac [acm.org], which, by starting out with the goal of violating all the reasoning behind Apple's Human Interface Design Guidelines, ends up with a very interesting - and very implementable in the near future - UI design for high-performance workstations.

    So, no, GUI research ain't dead. ("It's pining for the fjords." :))

  • There is some great innovation going on when it comes to user interface design. I work as a researcher in Sweden, and here a lot of people are working on new handheld devices and how to make new user interfaces for small screens (mostly because Ericsson and Nokia, two of the leading cell phone manufacturers in the world, are Nordic). So a lot is happening in that area.

    A good place to look for innovation is ACM CHI (computer-human interaction) [acm.org], an organization where you can find a lot of fun stuff. A lot of the research that is going on is about how to integrate computers into our lives, so that you don't need to interact with them directly; they themselves should be context-sensitive to their environment and respond to your needs and filter out the information you need. This is usually called "augmented reality".

    So what about regular user interfaces? Well, in my opinion there is way too little innovation when it comes to computer applications, and the open source community has not been as innovative as one would think, but I want to give one link to Alias|Wavefront [sgi.com]. If you look at their high-end CAD/animation software you will find so much innovation that it will make you hate most of our common software's interfaces.

    Eskil
  • Some counterproof: The Anti-Mac [acm.org] (by Gentner and Nielsen, so you'd better listen!)

  • I agree. I thought the picture representation was an effort to avoid having to do different languages. For example, on the backs of branded PCs, you've got things that are supposed to tell you what a port does. There's an oval in a box that is supposed to be a monitor. There's 01010 for a serial port (if you don't even know what a serial port is, how is a string of bits in series supposed to help you figure that out?). Then there's a dot matrix printer shape for parallel. Who gets a new dot matrix anymore?
  • It seems to me that UI enhancement and progress has come in leaps and bounds, not a steady jog. First we had the terminal/prompt. Then we went to a basic 2D GUI. From there, we started getting a little more advanced - we were able to run 3D items on top of the 2D environment. Now, we're developing 3D window managers and the like, and there is new monitor technology allowing for 3D monitors. So I don't think innovation has stopped. New technology has always required the latest hardware - look at Enlightenment, the 3D window managers, and Windows 2000, for starters.

    -------
    CAIMLAS

  • Totally true. However, application of the advanced ideas you mention, the ones developed before 1970, has not (necessarily) reached its full potential (now there's an awkward sentence). So what if (Unix) is (30-year) old technology, there's still plenty of stuff to do with (it). And I think it's pretty hackish to find new applications and uses for old technology as well as developing or using the shiniest new things.

    Besides, the whole idea of the microcomputer revolution was to give individuals better access to the technology embodied in mainframes without the cost and administrative control/overhead, so it's perfectly understandable that the evolution of micros follows that of macros.

    I agree that BEULAs are probably the worst new 'technological innovation.' I've never been able to get over the idea that you can't return something that doesn't work just because you tried it... that s*cks.

    WWJD -- What Would Jimi Do?

  • by adubey ( 82183 ) on Wednesday July 05, 2000 @11:41AM (#955728)
    A lot of posters here make references to CHI (Computer-Human Interface) research groups at various universities. This just skims the surface. (Do a google search for "computer human interface" or "human computer interface" and follow any of the many links you'll find).

    Is GUI innovation dead? Well, one of the things CHI people are working on are ways to improve GUI design. However, as is sadly too common, there is a huge barrier between what academics find and what is adopted in industry.

    Remember: although Apple did do a *lot* of original work with GUIs, the core ideas came from academia (even the Xerox PARC team were former students of Doug Engelbart, the Stanford researcher who laid the important groundwork).

    But where are the bold, new designs? Why do all the improvements still look like dialog boxes and buttons?

    Well, there may be hugely innovative stuff yet to be done - but the field is old by computer science standards. Most of the major ideas of how to get humans with keyboards and mice to interact with computers have already been done.

    So does this mean *all* UI innovation is done? Nope. The old hardware assumptions - the human had a keyboard and mouse, the computer had a video display (and maybe a sound system) - will be overturned.

    You will be able to use your eyes and hands to let the computer know what you want. Or, if that isn't accurate enough, you can still use the mouse. You can speak when that's more efficient, or type if typing would be faster (For things like "(" or "{" or "["). If your finger and eyes aren't accurate enough to point, go ahead and use the mouse.

    All of these new ways of interacting with computers will lead to new ways of presenting data, and new ways of allowing users to modify data. The innovation won't be in GUIs alone, but a combination of GUIs with newer input/output devices.

    Don't ask about innovation in GUI design, ask about innovation in human-computer interfaces overall.
  • The only truly new paradigm I've come across in a while is called lifestreams [yale.edu], which is based on the ideas of Yale's David Gelernter. It basically replaces the spatial metaphor, on which conventional "desktop"-type GUIs are based, with a chronological one. Interesting.
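    A tiny sketch of the lifestreams idea as described above: one chronological stream of documents, with "substreams" produced by filtering it. This is based only on the description here, not on Gelernter's actual software:

    ```python
    import time

    stream = []   # the whole lifestream, ordered by time

    def add(doc):
        stream.append((time.time(), doc))

    def substream(predicate):
        """A substream is just the stream narrowed by a query."""
        return [(t, d) for (t, d) in stream if predicate(d)]

    add({"kind": "mail", "subject": "lunch?"})
    add({"kind": "note", "subject": "GUI research links"})
    print(substream(lambda d: d["kind"] == "mail"))
    ```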
  • While I agree with his sentiment, the idea that MacOS will "finally" have long file names is absurd. MacOS has supported file names up to 256 characters since day one.
    The HFS+ format supports filenames up to 255 chars, but MacOS X will be the first to allow _users_ to take advantage of this feature. Even in MacOS 9 the maximum length for a filename is 31 chars.
  • I strongly oppose color coding things because no one ever takes into consideration the color blind. Word is no fun for those of us who can't tell if we have a grammar problem or a spelling problem based solely on color.

    If there is going to be color coding, please, please please think of the color blind. Red and green are not that different! Use blue, and make the world a better place!

    Though, I suppose, some cultures don't even linguistically differentiate between blue and green...well, use very different colors! Especially you web designers out there!
  • We're assuming that the GUI is the end-point in user interfaces. I think the future is in speech-based user interfaces, not GUIs.

    Bruce

  • Depends on how you define "quite well".

    Expert users love the taskbar and start button, but despite millions of dollars worth of Microsoft propaganda, I still see people minimizing all their applications and using the desktop to perform most tasks - which is basically the same thing they did in Windows 3.1.

    Strange.

    D

    ----
  • Bah!!!

    First off, I can't see why that link has anything to support QWERTY.

    Secondly, if someone were to learn Dvorak growing up, they would be a better/faster typist than if they had learned QWERTY growing up.

    The fact is, they designed QWERTY to be slower because back in "the day" people were typing faster than the typewriters could process the input, and they kept jamming. QWERTY forced the users to type slower.

  • by davevr ( 29843 ) on Wednesday July 05, 2000 @11:51AM (#955738) Homepage
    User interface research is alive and well! Check out the proceedings from some of the larger user interface conferences, such as UIST, CHI, or CSCW (www.acm.org/sigchi [acm.org]).

    There are lots of market reasons why a non-WIMP mainstream user interface is unlikely to emerge. Essentially, the WIMP interface works well enough for doing productivity-style applications with a screen, mouse, and keyboard.

    Future interfaces will come when they are needed to support future capabilities. Look for new input/output technologies and new form-factors to usher in radical changes - speech input/output, vision, etc., will reshape the user experience in the next decade. In addition, expect that future user interfaces will have an increased recognition of the social and emotional functions that our computing devices are being asked to serve. (and no, I am not talking about Bob...)

    - davevr
    -====
    Open Source Virtual World's Toolkit! ==> http://www.vworlds.org [vworlds.org]

  • by density ( 191241 ) on Wednesday July 05, 2000 @11:52AM (#955740)
    For a new style of GUI to become popular it would have to make the work of the user easier without having a high learning curve.

    Yes... and it would have to be as revolutionary as the PARC GUI we all use today. I think we can reasonably say it won't be a WIMP (windows, icons, menus, pointer) interface - there are too many people thinking inside that box. The key invention was not the window, but the pointer "floating" above everything else. The pointer inspired modeless programming. The next interface revolution will involve a similar move outside the box and a host of resultant style shifts; with the PARC GUI came event-driven programming, etc. In other words, anything less than such a shift is not a revolution but marketing hype.

    desktop metaphor? I thought it was the prison metaphor.. I've been trying to telnet out..

  • by sela ( 32566 ) on Wednesday July 05, 2000 @11:53AM (#955742) Homepage

    There were several posts earlier raising the question: "Do we need a new GUI?" That's a good question anyone trying to develop a new GUI should ask.

    I personally don't accept the claim that we've reached perfection. Even without introducing new input/output devices (which is also part of GUI research), there is always room for improvement. The question is: what do we need to improve? But one preliminary question is: why do we need a GUI?

    A GUI gives us a standard way to communicate with the computer. In a way, it is kind of a language. As such it needs to achieve two goals. One: it should provide a standard way to communicate with our applications. We need to learn one language to use the GUI, and not a different language for each application (kind of like learning a new language in order to chat with each new person you meet). Two: it should be as efficient as possible. A GUI should not stand in the way of the user.

    So, how do current GUIs score in those two areas?
    It does seem that current GUIs provide a coherent way to communicate with all applications, which is fairly easy to learn, but they can improve in several aspects here:
    1. Cover more aspects of the UI - some aspects are currently not covered by the GUI that could be included.
    2. Be simpler/easier to learn. You may wonder if it can get any easier than it is, yet for some people who have never touched a computer it still looks rather complicated. I'm not sure simpler/minimalist=easier, though. What can be simpler than a VCR interface? Yet how many people never learn how to program a VCR? Maybe easier means making it closer to the way we communicate with other people...
    3. Make it customizable - in other words, let the GUI adapt itself to you, instead of letting you adapt yourself to the GUI.

    As for making the GUI efficient, there is a lot to achieve as well. We all know using keyboard shortcuts is a lot quicker than using the GUI features. Can we improve here? Can we combine intuitiveness with efficiency?

    I don't think 3D GUIs really address any of those questions. They look neat, but that's it. Any other ideas? There could be. If you want to invent something new, just:
    1. Be creative.
    2. Forget anything you know about current GUIs
    3. Think about easier communication, not about neat look.

  • Check out the Interface Hall of Shame [iarchitect.com] maintained by Isys Information Architects. They utterly trash [iarchitect.com] Apple's QuickTime 4.x, very appropriately in my view, for introducing a wide range of stupid GUI elements, including the "thumbwheel" volume control and a "shirt button" that has no obvious meaning but, when clicked, introduces a "tray" of additional icons. They also provide some good advice on how to produce better UIs, which generally fall into the "don't reinvent the wheel" category.

    It is amazing how many developers, including those of Aqua, neglect these basic principles in favor of pretty new designs that are ultimately more difficult to use than the previous - see, for example, their review of EntryPoint, [iarchitect.com] the replacement for PointCast.

    Give me my old Mac any day ... just without crashing so damn much.

    sulli

  • But I don't want customizability. I want a standard GUI that looks and behaves the same way on every computer. I don't want to have to figure out how to use some bizarre personalized mutation of the program's interface every time I use a different computer.
  • Litestep, a Windows equivalent of AfterStep, seems to be a small step forward into the future of GUIs. Litestep is a completely customizable shell replacement, configured through a single text config file. It is possible to configure anything from shortcut keys to right-click popup menus and even taskbars and VWM. Dozens of modules are also available to add functionality to your GUI. The only problems are that it is difficult for most newbies to manipulate the look and feel and it is still a tad bit unstable; also, Litestep users are still stuck with using the ultracrappy MS Explorer to browse files. Any negative features are easily balanced out by the fact that it's an Open Source project. For more information on Litestep go to: Litestep.net [litestep.net] or Litestep.org [litestep.org]

    -Chris Tower
    "Everything comes at a price and sooner or later, we all have to pay" -cTower

  • by EnderWiggnz ( 39214 ) on Wednesday July 05, 2000 @12:00PM (#955746)
    We really have to stop forcing users to bend their actions and thought patterns around our implementations.

    The very real problem with current user interfaces is that they still force people to become "computer literate", which really means that they have to learn the nuances and terminology and procedures of the inner workings of a computer.

    Its not "natural". File systems, databases, MP3 catalogs, are all differently organized IRL than on a computer.

    Lets look at music. Where is your "Britney Spears" CD? Me? Mine's in my car, in the elbow rest in the front seat.

    My mp3 files are on portman@grits:/home/ender/music/mp3s/annoyingmusic /britney/oopsididitagain

    The mp3 files are "organized" in a manner that only an incredibly anal person would organize their CD's. How many people do you know have all of their CD's labelled, catagorized, alphabetized, and all in the same spot.

    The point here, is that the computer gives you all sorts of information pertaining directly to the MP3's, but none of it is really helpful to someone not computer literate.

    If I were to tell you to go into my car, on the front seat, look for the jewel case with the sexpot on the front, and bring it to me, how many "normal" people would be able to find it as compared to telling someone to find it on my computer.

    Currently, computers are great at storing data, but not at describing the data in real terms. Most of the time we categorize items in terms of things that have nothing to do with the data contained, but when storing data on a computer we limit ourselves to only the actual data items - part number, ISBN, whatever - and not the things that are innately helpful.

    Until a better file/data system is devised, the UI will not improve (a toy sketch of describing data by attributes follows at the end of this post).

    As a matter of fact, the typical user should not need to know what a "file" is or a "database". They just want to listen to music, write a term paper for Biology class, email Aunt Helga, look at pr0n on the web, play a game, whatever.

    But notice, these things were not "create a file in MS Word format that will contain my biology report". We may think in terms of files and data, but they think in terms of actions and events.

    We shouldn't force people to think like computers.
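    A toy sketch of the point above: describe data by the attributes people actually remember, and look it up by those attributes instead of by path. The tags and the query here are invented for the example:

    ```python
    songs = [
        {"title": "Oops!... I Did It Again", "artist": "Britney Spears",
         "tags": {"pop", "annoying", "sexpot on the cover", "in the car"}},
        {"title": "Paranoid Android", "artist": "Radiohead",
         "tags": {"rock", "long"}},
    ]

    def find(collection, *wanted_tags):
        """Return items matching every remembered attribute; no path needed."""
        return [item for item in collection if set(wanted_tags) <= item["tags"]]

    print(find(songs, "sexpot on the cover", "in the car"))
    ```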

  • I am a student at the University of Washington, which is home to the Human Interface Technology Lab. Some of my colleagues are doing some interesting new work. Gesture recognition is one thing we are working on. We have one application of gesture recognition that does finger tracking near a screen. We have a demo of that which will be at SIGGRAPH 2000. PUI, or perceptual user interface, is not being done here, but I am aware of it. That is where movement of the eye is interpreted as input. Augmented or mixed reality is something new and interesting that brings the computing environment out into the real world. We are doing some really cool work there. You can check our stuff out at www.hitl.washington.edu. -Jordan Andersen
  • One idea I saw kicking around for the Berlin [berlin.org] project was quite similar to what you're saying... Since the whole windowing system would be vector-based, windows could pulse, or spin, or waggle, or do any number of things to get your attention. Colorblind people rejoice!
  • Please. It's irritating enough as it is, with people shouting at cell phones all the time, discussing their personal lives for the world and their parrot to hear. Now you want us to have to go through that even when we're not talking to anyone?

    A speech-based interface only makes sense in environments that require hands-off operation (e.g. driving a car, fixing a spaceship in orbit). Otherwise, speech recognition should remain a useful but not essential add-on in otherwise more developed UIs.

    Besides, think of the impacts of this in young geeks' lives:

    Little John to his computer, circa 2005: "Download hotanalsex.jpg... rotate windows... click on 'Free Asian Teens'... rotate windows... tell H0+Ch1x on #h4x0rz 'yeah b4b3, I'm a l33+ h4x0r d00d!'"

    Little John's Mom: "John, I know what you're doing in there! I can hear you! Put your... uh, both your hands in plain sight! Onanism is a sin against Ghod!"

  • by nellardo ( 68657 ) on Wednesday July 05, 2000 @12:08PM (#955753) Homepage Journal
    There is certainly research being done in user interfaces, even ones that aren't 3D. Some general areas include the following:
    • Speech. See Portico [genmagic.com] for a real commercial product with pervasive use of a speech UI (if only the smarts were on my Newton....)
    • Agents. Lots of work being done on how to make "smarter" user interfaces. Just do a query on any big search engine. Brenda Laurel's seminal Computers as Theater [amazon.com] is a prime example.
    • Information visualization, [amazon.com] some of which is 3D, though Edward Tufte's [amazon.com] books are a well-known exception.
    • CSCW, aka Computer-Supported Collaborative Work, including shared whiteboards and the like.
    • Not to mention video conferencing, the web itself, video games, etc.

    Completely new paradigms are also being worked on - Ken Perlin's Pad [acm.org] is one good example, as is David Gelernter's Lifestreams. [mirrorworlds.com]

    PDA interfaces, at least the better ones, are also an area of active research. WinCE is mostly a scaled-down WIMP UI, but the Newton is not. The Newton makes pervasive use of gestures (and not just handwriting - even cut, copy, and paste), as well as sound, animation, and a lack of anything resembling a desktop, "saving" files, or even files at all at the user level.

    General references to UI research include Ben Shneiderman's textbook [amazon.com] (good for learning just how complex the field is), Baecker et al.'s collection [amazon.com] (which has some of the recent results), and the pages of SIGCHI, [acm.org] the ACM's [acm.org] Special Interest Group for Computer-Human Interaction.

  • by Detritus ( 11846 ) on Wednesday July 05, 2000 @12:11PM (#955755) Homepage
    The OS/2 Workplace Shell had a nice, advanced GUI. Somewhere I have an IBM book (CUA?) that described the ideas and principles behind the new GUI. Everything was supposed to be document centered. If you needed a new spreadsheet, you dragged a new spreadsheet from a spreadsheet template icon to the desktop and then double clicked on it. You didn't directly run a spreadsheet program. Everything was an object and you could right-click for the object's methods and properties. Microsoft stole some of the elements of the GUI when they created Windows 95.
  • I think that what's wrong with current GUIs is that we say "Amazing new paradigm" and people get it intellectually, but don't quite get it in the animalistic portion of their brain where it really counts.

    Let's face it, people still like to use their fingers when they work. Being able to handle paper, shuffle it around, and hand it to someone is still the preferred way to process information. The mouse -- and, to some extent, the keyboard -- divorces the hand from the information on the screen. I would love to be able to touch my screen to interact with documents. If you haven't watched new users work a mouse, try it some time: they look at the mouse, move it a little, check the pointer on the screen, look back at the mouse and move it a little more, and so on. I felt those kiosks at the mall with touch-sensitive screens were more user-friendly than some of the junk I've put up with!

    Getting back to the point, the idea of an interface needs to be fused at the hardware level before the software level will take off. Make the screen, keyboard and mouse one unit, kind of like in Star Trek. (Not those little terminals that sit on people's desks - I don't see how much usability can come out of those - but the terminals the pilot and navigator use.) If anyone has read the Star Trek Technical Manual, you'll know what I mean: a touch-sensitive optical display that automatically rearranges the controls so that the button you are most likely to use is closest to your hand when you need it.

    But that's for another day.

    Let's concentrate on the hardware right now and forget the software we have whose only purpose is to work around deficiencies of a 30-year-old design.

  • by Anonymous Coward
    Project Oberon (started in 1985!) and the Oberon system - especially Oberon V4 - present a very different GUI from the usual Mac/Win/X-ish thing. Plan 9's 8.5 UI is based on it somewhat, as is Wily.

    Just for starters, Oberon V4 has:

    • no overlapping windows
    • chorded mouse actions
    • no menus; or rather, all text is a menu
    • There is no concept of "shell"
    • There are no interactive programs, all commands complete without user interaction
    • The user interacts with documents, not programs.
    • All commands can be applied to almost any document
    • Commands are subroutines loaded into the system.
    It is original, different, and I think it is very cool. There is also a more recent research project, called Oberon System 3 oddly enough, that adds a lot of other interface features and looks a little more "normal". It is especially interesting for its document model and its GUI building capabilities.

    You can find out more at: http://www.oberon.ethz.ch [oberon.ethz.ch]
    There are lots of downloadable versions too:

    There are also versions for PPC, Sparc, HP, Windows, Mac, ARM, etc.
    Check it out.

    -dg (dg@suse.com)

  • The Sims [thesims.com] uses Pie menus. Click on a character, and various actions you can do pop up around his/her head. Clicking on an action sometimes brings up a second, similarly styled menu. Aside from the excessive use of Comic Sans MS font, The Sims has an interface that's very easy on the eyes and very easy to use.
  • Designing for the color blind is fairly straightforward -- in the case of Aqua, use shape and position cues to tell which button does which. In the case discussed here, the environment might use slightly different visual cues (shapes and positions) to convey different messages, in addition to sound and color feedback. That way you touch all the bases.
  • by matthewd ( 59896 ) on Wednesday July 05, 2000 @12:22PM (#955768)
    Something clicked while reading the AntiMac page.

    The Trashcan/Recycle Bin metaphor should be extended. When you empty your trash can, the contents should be placed in a Dumpster on your LAN. If you realize that you've deleted a file that you needed, you can go dumpster diving. Of course the LAN will have a twice-weekly pickup, so if the garbage truck has already come, you'll have to travel to the Landfill (a tape/CD-RW archive of deleted files) to retrieve your file.

    Somehow, it seems kind of fitting to have a Dumpster icon appear in a Windows NT/2000 server window under Network Neighborhood, and a Landfill icon when you click on Entire Network.
  • by Junks Jerzey ( 54586 ) on Wednesday July 05, 2000 @12:29PM (#955775)
    The most disturbing thing about Linux GUIs is that the architects--and I hesitate to call them that--are not paying attention to any research or good advice. There are a number of good books and online resources about GUI design, and many of them go off in very different directions than Windows. So, yes, there is research going on and there are alternatives, but no one is listening. "Gotta clone Windows!" is the battle cry.

    Two good examples are the Genera environment from Symbolics and the system software of the Apple Newton. The latter of these is astounding. It does away with a filesystem, and is based on scraps of information that are indexed and compressed on the fly, invisible to the user. Lisp Machine fanatics can tell you about Genera.

    The biggest flaw of KDE and GNOME is that they aren't designed to solve any particular problem. They're just nebulous environments with doodads and gadgets. KDE, for example, seems to have been developed solely to allow people to tinker with and customize KDE. And what a lot of effort and code has gone into a project without a real point.

    It would be nice to have a GUI that was more fitting for the small and well-engineered Linux kernel. A 1970s terminal window misses the mark. So does a crufty, minimalist interface sitting on top of X Windows. Are there any real alternatives besides the jump to KDE and GNOME?
  • (Whoops! Just hit [RETURN] by mistake...) GUIs are great for people that can use them. I see however a great deal of people who can't, typified by my grandparents.

    Grandma, intelligent and resourceful, can't use a mouse. A track ball may help, but even that will be shaky. Why? Muscular Atrophy, a form of muscular dystrophy. MA causes the body to wither away, no matter what the person does. Mouse clicking has become a maximal-effort event. When the mouse does click, it slides halfway across the screen due to the weight of her arm and the exertion that's required.

    Grandpa, on the other hand, is still a pillar of physical strength. His eyes, however, have gone. Macular degeneration. He still has some vision, and when I take him flying he can see some using his special telescopic-autofocus glasses. Viewing detailed images, like computer monitors, is impossible, though.

    The next wave of computer interfaces will involve a revolution in multi-sensory, or at least non-visual, interaction. We're going there already with the limited abilities of Dragon's Naturally Speaking and IBM's ViaVoice (among others).

    These new non-visual I/O systems will enhance the computer experience not only of those with physical disabilities, but of the rest of us as well. I dream of the day when I can write small programs by verbally giving the computer a list of actions to perform, or retrieve data by just asking for it.

    In my head, and on all sorts of paper at home, I have plans for these kinds of things. I'm sure that others do as well. Computer I/O systems should be able to adapt to use any sense that can convey the information -- visual, aural, and even tactile, perhaps for Braille readers (I don't think that smell or taste will help much :)

    Just my view of the Road Ahead...

    Jeff

    PS-- just got my first Linux box going this weekend! I've got the best father in law in the world; with the kind donations from his closet, and some cheapy stuff from the local computer show, I got a K6-2 400 system for about $300! woohoo! It's RedHat 6.2

  • by dmccarty ( 152630 ) on Wednesday July 05, 2000 @12:34PM (#955779)
    Thing is, it's very much incompatible with a file-oriented paradigm (and therefore with the Unix philosophy, amongst others).

    I'm fairly familiar with the PalmOS, and I have to say that it isn't as incompatible as you might think. Palm implements "files" as databases, and file handles as individual database records. What the PalmOS doesn't handle is a FAT, and only recently did the OS address things like memory fragmentation issues and unique record ID's.

    The positive side of this is that once a user taps the "OK" or "Done" buttons on a screen, data is written to memory. This is why crashes are so rare on Palms, and why if a crash occurs, data isn't lost. If only "file" handling on other OSes (Win32, mostly) were as seamless.

    One of the best things IMO about the Palm paradigm ("zen of Palm" for those who want to be catchy) is that the degree of orthogonal persistence is left up to the applications, and isn't dictated by the OS. So an application can remember where a user last left off, but doesn't have to transport them back to the same screen the next time the program is run. However, the illusion of running apps simultaneously is complete when an application resumes exactly where it was exited. (The preferences panels do this, merging N panels into what appears to be a single app.)

    This is why it really didn't catch up in most environments.

    After using the PalmOS to write notes, etc., one wonders why data has to be lost at all when a program crashes. However, I'll admit that this way of doing things becomes much, much more complex as the file size and application size increase. It's one thing to persistently track a 4K memo, but another thing entirely to try it on a 20MB 3DMax model or a 60MB Director movie.
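
    To make the idea concrete, here is a minimal sketch of the "commit on every Done tap" pattern the comment describes. This is not the PalmOS API, just the general approach in Python, with hypothetical names (STATE_FILE, commit) for illustration:

    import json
    from pathlib import Path

    STATE_FILE = Path("memo_state.json")  # hypothetical backing store

    def load_state():
        """Restore exactly where the user left off, if any state was saved."""
        if STATE_FILE.exists():
            return json.loads(STATE_FILE.read_text())
        return {"current_memo": None, "cursor": 0, "memos": {}}

    def commit(state):
        """Write state through to storage immediately -- the 'tap Done' moment.
        A crash after this point loses nothing."""
        STATE_FILE.write_text(json.dumps(state))

    # usage: every user-visible confirmation commits at once
    state = load_state()
    state["memos"]["shopping"] = "cheese, bread"
    state["current_memo"] = "shopping"
    commit(state)   # data is durable from here on; no explicit 'Save' command

    There is no "Save" command anywhere: the application's only job on the next launch is to call load_state() and put the user back where they were.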
    --

  • The original implementations of GUIs and desktops in languages like Smalltalk and Cedar/Mesa made exploration and experimentation quite easy.

    Then came Apple. They cut corners to squeeze something that looked like the Xerox PARC GUIs into completely underpowered machines. First, they tried the Lisa, which was marginally acceptable, if already quite stripped down. Then came the 128k Mac and its toolbox. It was a great engineering achievement and a great hack. Microsoft then just copied Apple's strategy, coming out with a mediocre clone of a good hack. Apple's strategy worked beautifully in the market.

    But squeezing all that stuff into these underpowered machines meant that customizability, ease of programming, and extensibility went out the window. X and UNIX contributed their part. The overall result has been 20 years of living within a straightjacket of limiting APIs, poor tools, and lousy languages. Since the consumer didn't have to deal with the programming side and since the results looked nice, the consumer was happy. But, IMO, Apple is probably the single most responsible company for impeding progress in the area of GUIs and HCI.

    People will make progress in fields when they have good tools that allow them to explore new ideas easily. We are only now beginning to get back to the state of the art of the early 1980's. Languages like Java and Python make experimentation easier, and systems like OpenGL present a good standard for advanced graphics. Maybe soon, we'll see some genuine progress in human/computer interaction again.

  • by SimHacker ( 180785 ) on Wednesday July 05, 2000 @12:49PM (#955787) Homepage Journal
    First, I'd like to say that I'm really happy to see the pie menus in Gnome -- great work!!!

    Unfortunately, there are a couple of stupid reasons why pie menus aren't widely used. One is technical and one is political.

    The technical one has been the lack of plug-in component architectures that allow new widgets like pie menus to be integrated into new and pre-existing applications. The other is that companies like Alias/SGI are abusing the patent system to discourage their competitors from using useful techniques like pie menus.

    Some of the technical problems have finally been solved for Linux and X11! Thanks a lot to everyone who contributed.

    NeWS took a stab at solving some of those problems a while ago. You could download piemenu.ps to the NeWS window server and replace all the linear menus in the system with pie menus, or download pietab.ps and replace all the window frames with tabbed window frames that let you drag the tab anywhere around the edge of the window and pop up optimized window-management pie menus.

    "Bring to front" was up, "Push to back" was down, the "Stretch edge/corner" submenu had 4 corners and 4 edges in the appropriate direction, so you could mouse ahead into the pie menus very quickly once you learned them, etc. Pie menus are great for window management, since the tasks are so spatial and you use them so often, you soon learn to mouse ahead very efficiently, it saves you a lot of time, and is very reliable.

    The litmus test for a pie menu window manager is that you should be able to reliably start up programs and manipulate windows even while the window system is busy starting up, paging and thrashing virtual memory, and only sluggishly responding to input events. Mouse-ahead is that good! Pie menus must be very careful about how they synchronize and handle input events, never dropping any mouse clicks on the floor!
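
    For readers who have never seen one: the defining property of a pie menu is that selection depends only on the direction the pointer moves from the menu center, which is why mouse-ahead works so reliably. Below is a rough Python sketch of that hit-testing for a simple, evenly sliced menu (not Don's NeWS or ActiveX code; the four-item menu is a hypothetical example loosely modeled on the window-management menu described above):

    import math

    def pie_selection(center, pointer, labels, dead_zone=8):
        """Return the selected label, given the menu center, the pointer
        position (screen coordinates, y grows downward), and the labels
        laid out clockwise starting at 'up'.  Returns None inside the
        central dead zone, where no selection has been made yet."""
        dx = pointer[0] - center[0]
        dy = pointer[1] - center[1]
        if math.hypot(dx, dy) < dead_zone:
            return None
        # angle measured clockwise from straight up, in degrees
        angle = math.degrees(math.atan2(dx, -dy)) % 360.0
        slice_size = 360.0 / len(labels)
        index = int(((angle + slice_size / 2.0) % 360.0) // slice_size)
        return labels[index]

    # hypothetical window-management menu
    labels = ["Bring to front", "Stretch right", "Push to back", "Stretch left"]
    print(pie_selection((100, 100), (100, 40), labels))   # up   -> "Bring to front"
    print(pie_selection((100, 100), (100, 160), labels))  # down -> "Push to back"

    Because only the angle matters, the gesture can be made long before the menu is actually drawn, which is the essence of mouse-ahead.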

    All that PostScript pie menus source code I wrote is freely available, but only runs on NeWS, which would be more effort than it is worth to resurrect.

    When NeWS died and I had to start using X11 on a regular basis, I hacked pie menus into one of the window managers (which I called "piewm"), so I could use them to control windows and run programs without going crazy with frustration at linear menus. That source code is also freely available, and it probably still works OK. But the code is not very reusable or up to date, since the X11 window manager is monolithic and does not use any plug-in component framework. It would be better to start with the following code instead.

    When I ported SimCity Classic to Unix in 1992, I used the TCL/Tk toolkit and implemented a Tk pie menu widget for the game, to select between city editing tools (bulldozer, road, residential zone, etc). I distributed the source code for the TCL/Tk widget for free, but it was not widely used in other applications, because it required a programmer to integrate the C and TCL source code into another program, then recompile and relink. At the time, TCL/Tk did not have a dynamically loadable component framework.

    Microsoft has developed OLE (aka ActiveX) to solve this problem. It allows components written in any language to be loaded dynamically at run time and integrated with any other language, and it allows programmers as well as more casual interface designers to plug components together and configure them with property sheets.

    I implemented an ActiveX pie menu control, so that pie menus can be used on web pages and in other Windows applications. The source code as well as the binary is freely available. Now it is quite easy for other people to integrate ActiveX pie menus into their own applications and configure them to their liking.

    I've used ActiveX pie menus as a vehicle to experiment with all kinds of different layout and interaction styles. They've got lots of property sheets to set all the various modes and attributes, and you can type in a nested submenu tree as an indented text outline.

    I implemented graphical menu items, but I still want them to be animated. A while ago I started adding the ability to read and write nested pie menu specifications as xml. I wanted to add all kinds of other features, but there needed to be an easy concise way to read, write and configure them all. I finally realized that I had hit a brick wall with ActiveX, in the face of all the complexity and things I wanted to be able to do with pie menus, compared to what could be done on a web page with dynamic html.

    I want each menu item to be any dynamic html object, like a movie, or a java applet, or an ActiveX control. And I want the graphics and interactive feedback to exploit the full capabilities of dynamic html, like making the point size of the label grow continuously larger as you move the cursor into the slice.

    I realized that it was going to be impossible to keep up with the capabilities of a web browser by adding feature after feature to my little ActiveX control. What I really needed was for pie menus to be specified in XML and implemented inside the web browser using dynamic HTML on the web page itself, instead of using a shrink-wrapped plug-in control that opens and draws its own windows but can't interact with the rest of the page.

    So I have basically shelved the ActiveX pie menu, and decided to rewrite pie menus in JavaScript and dynamic html, if I ever get around to it, and if the browsers ever get around to supporting dynamic html.

    In the mean time, I have been working on the political problems that have kept pie menus and other useful techniques from being widely used.

    I was at the computer game developer's conference several years ago. Since I was using 3D Studio Max at work, I stopped by the Kinetix booth, and asked them for some advice integrating ActiveX pie menus into their 3D editing tool.

    They told me that Alias had "marking menus" which were like pie menus, and that Kinetix's customers had been requesting that feature, but since Alias had patented marking menus, they were afraid to use pie menus or anything resembling them for fear of being sued for patent infringement.

    I told them that sounded like bullshit since there was plenty of prior art, so Alias couldn't get a legitimate patent on "marking menus".

    The guy from Kinetix told me that if I didn't believe him, I should walk across the aisle and ask the people at the Alias booth. So I did.

    When I asked one of the Alias sales people if their "marking menus" were patented, he instantly blurted out "of course they are!" So I showed him pie menus on my laptop, and told him that I needed to get in touch with their legal department, because they had patented something that I had been working on for many years and had used in several published products, and I didn't want them to sue me for patent infringement.

    When I tried to pin him down about what exactly it was that they had patented, he started weaseling and changed his story several times. He finally told me that Bill Buxton was the one who invented marking menus, that he was the one behind the patent, that he was the senior user interface researcher at SGI/Alias, and that I should talk to him.

    So I called Bill Buxton at SGI/Alias, who stonewalled and claimed that there was no patent on marking menus. He said he felt insulted that I would think he would patent something that we both knew very well was covered by prior art. I told him that companies try to obtain illegitimate patents all the time, and that I did not mean to insult him by repeating to him the misinformation that his marketing people were spreading around the computer industry in his name.

    I tried to explain how Alias's FUD had adversely affected the user interface design of 3D Studio Max, in spite of user requests, but he did not care about 3D Studio Max, since Kinetix was his competition. I asked him whose side he was on, the users' or the patent lawyers'.

    He claimed to be on the side of the users, since he is such a well known user interface researcher, but I believe he has totally sold out to the point of abusing the patent system for profit, and is in the thrall of SGI corporate lawyers. Users beware.

    A year or so later, I ran across a marking menu patent issued to Alias, that is probably the one the Alias sales people were spreading rumors about. Now it all makes a lot more sense in perspective.

    At the time I found out about it from Kinetix, Alias had just applied for the patent on marking menus. The Alias sales people had heard about it, but could not keep their mouths shut, even though they were damn well supposed to. So they repeatedly spread Fear, Uncertainty and Doubt by bragging about this PENDING patent that they really didn't know much about. The only reason I ever learned about it was that their FUD was so successful it affected Kinetix's plans.

    When it got back to Buxton that they had leaked news of the pending patent to Kinetix, which was supposed to be secret, he was furious, but he certainly wouldn't tell me what was really up, so he took his anger out on me instead. He wanted to keep me in the dark so that I wouldn't go to the U.S. Patent Office and inform them of all the prior art that was conspicuously missing from his patent. But I'll bet he was sure proud that the leak about the patent successfully discouraged Kinetix's plans to put marking menus into 3D Studio Max. It's a textbook example of successful FUD!

    Anyway, I did not let that discourage me from my long term plan of incorporating pie menus into a mainstream product (The Sims from Maxis). That is the only way that a lot of people will ever be able to see them and get used to the idea.

    Now, when the users of a program like 3D Studio Max demand a feature like pie menus, companies like Kinetix will not be fooled by FUD spread by corporations like Alias/SGI. They will realize that their kids play a game that has pie menus, and they seem to work ok, so there must not be anything wrong with using them for a 3D graphics editing program.

    -Don

    Pie menu web page:
    http://www.catalog.com/hopkins/piemenus [catalog.com]

    Notes from a talk about Pie Menus I gave to BayCHI at Xerox PARC:
    http://catalog.com/hopkins/piemenus/NaturalSelection.html [catalog.com]

    A description of ActiveX pie menu features:
    http://catalog.com/hopkins/piemenus/PieMenuDescription.html [catalog.com]

  • Check out The Flash Challenge [flashchallenge.com].

    It is a monthly contest of web sites that predominantly use flash.

    The interfaces on many of these web sites are not run-of-the-mill and most are truly inspirational.

  • Hardware has a huge effect on the interface, and output hardware is only half the story. Input hardware needs major improvement.

    A modern GUI goes hand in paw with a mouse. Almost every GUI operation involves moving the mouse pointer to a location and clicking. (Even dragging ignores the path the mouse took to get to the mouse-up location.)

    To expand the limits of interaction between the computer and the user, we need to increase bandwidth in both directions. Input bandwidth is much lower than output bandwidth. The mouse is an extremely low-bandwidth device:

    Assuming...

    • an average of 1 down-up click per second
    • a 1 million pixel screen
    • clicking one of 2 buttons at a time
    ...that's about 24 bits per second.

    By comparison, typing on a keyboard might be more like...

    • 50 words per minute
    • average of 5 characters per word and one space
    • 7 bits per character
    ...35 bits per second.
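
    Those two back-of-the-envelope estimates are easy to reproduce. A small sketch of the arithmetic follows; with this accounting the mouse comes out around 21 bits per second, in the same ballpark as the 24 quoted above, and the keyboard at about 35:

    import math

    # mouse: one down-up click per second on a 1,000,000-pixel screen, 2 buttons
    mouse_bits_per_click = math.log2(1_000_000) + math.log2(2)   # position + button
    mouse_bps = 1 * mouse_bits_per_click                          # ~21 bits/second

    # keyboard: 50 words/minute, 5 letters plus 1 space per word, 7 bits per character
    chars_per_second = 50 * 6 / 60
    keyboard_bps = chars_per_second * 7                           # ~35 bits/second

    print(round(mouse_bps), round(keyboard_bps))   # roughly 21 and 35

    Either way, both channels are tiny compared to the megabits per second the display pushes back at the user, which is the poster's point.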

    The mouse defines the current state of GUIs. I don't know how much you can change and still keep the mouse/mouse-pointer combination. Touch screens are a start, since it's a little easier to do gestures on them and they have the potential to be more accurate... but I think we can do better. If you want a truly 3D environment, you need a 3D device for input as well as output. I wonder if the real innovation will come with something that lets you use your own body, like a camera that follows your hand and face movements, or better voice recognition.

  • Cars have 4 wheels, the gas pedal on the right, the clutch on the left, the brake in the center, and you turn with a steering wheel. You can add a lot of fancy gadgets and make the interior colorful and luxurious, yet the interface remains the same. So will it be with GUIs.

    GUIs have become so similar that there are few real differences between them, and they all work in the same way. Why? Because they receive input from a mouse, and users point and click with it. Yes, there are cosmetic differences between them, but they all come down to "click on this icon to run this program". Even with unorthodox ones like WindowMaker, you still have to move your mouse and then click to make something happen. As long as we use the mouse, the GUI will not change.

  • The two notable "paradigms" associated with GUIs are of:
    • WIMP - Windowed Interface with Mouse Pointer

      This "model" has become fairly much dominant, and continues to undergo various forms of "tweaking," lately with everyone going gonzo over Themes. [themes.org]

      Unfortunately, major changes require either nuking the whole thing and starting from scratch, which is a lot of work, or else making systems of more and more byzantine complexity to operate.

      The latter is where adding additional "stuff-to-click" takes us. Every added toolbar results in another "hieroglyphic" language, moving us towards ancient Egyptian rather than anything modern. (The McLuhan "Laws of Media" strike again...)

    • MVC - Model/View/Controller

      The more "intelligent" sorts of changes don't necessarily involve increasing the visible complexity, but rather trying to split systems more clearly into this paradigm of designing, somewhat separately, an underlying model, a set of controller functions to control the object, and then some form of "front end," or "view."

      It's hardly new; Smalltalk and NeXTStep promoted the MVC "view of the world" umpteen years ago, and the problem really is that the ad-hoc GUI construction systems have so often conflated M, V, and C together that many GUI applications wind up as jumbled sets of functionality.

      It may be that introducing things like Glade User Interface Builder [pn.org] along with libglade, to encourage keeping "controller" stuff in one place, GNOME-print, [gnome.org] Gnome Canvas, [gnome.org] DPS for XFree86, [sourceforge.net] Display Ghostscript, [aist-nara.ac.jp] and ReportLab [reportlab.com] providing "view" tools, and CORBA [hex.net] providing separation of "model," may provide a direction to separate these functions clearly so that GUIs will be less confused.

    None of this represents dramatic, overnight change, and I'm not sure that that's a bad thing.
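
    As a concrete illustration of the MVC split described above, here is a toy sketch in Python. It is not tied to Smalltalk, NeXTStep, or any of the GNOME libraries named in the list; it just shows the model, view, and controller as three separate pieces that only touch each other through narrow interfaces:

    class CounterModel:
        """Model: owns the data and nothing else."""
        def __init__(self):
            self.value = 0
            self.listeners = []

        def increment(self):
            self.value += 1
            for notify in self.listeners:
                notify(self.value)

    class TextView:
        """View: renders the model; here it just prints."""
        def __init__(self, model):
            model.listeners.append(self.render)

        def render(self, value):
            print(f"count = {value}")

    class ClickController:
        """Controller: turns user input into operations on the model."""
        def __init__(self, model):
            self.model = model

        def on_click(self):
            self.model.increment()

    model = CounterModel()
    view = TextView(model)
    controller = ClickController(model)
    controller.on_click()   # prints "count = 1"
    controller.on_click()   # prints "count = 2"

    The point of the separation is exactly the one made above: you can swap the print-based view for a graphical one, or the click controller for a speech one, without the other pieces ever noticing.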

  • I've been using a GUI of one sort or another since I bought my Atari ST back in '87, but I still prefer the CLI for some things. What I want to know is why it has to be one or the other...

    To me, the perfect UI would be one where a command is a command is a command, whether it is a text string from the keyboard or mouse click/action. A few years ago, I took an entry level AutoCAD class (V13 I think) and found that I really really liked the way I could do stuff with the mouse, with the keyboard, or both. Why can't there be the same functionality in an OS's UI? A "Command Interface" with a command line component and graphical component; if it were set up right, it would be very customizable (mostly CLI's for dinosaurs like me, mostly graphical for technophobes, etc.), and it would be expandable to other methods of entry (like voice commands). (The only voice-command software I've seen so far has been strap-on stuff that was pretty lame, like Windows sitting on top of DOS.)

    OTOH, maybe the problem is that we're trying to find a one-size-fits-all UI and failing; how well would people react to such limited choices in non-computer areas? Like a standardized instrument panel for your car: I know people who think they can't drive without a tachometer, and others who would find the additional information a tach provides intimidating and/or distracting.

    Ack; enough of my prattle...

  • The reason words are usually left out is that it makes it less expensive to internationalize a product, but that's at the expense of usability.
    Apps written with Cocoa on OS X are very easy to internationalize (assuming you have a clue and write them properly). In fact, your end users can internationalize them without access to the source code, as long as you don't do anything dumb like hard-code any text messages. End users can even readjust controls and other UI elements to accommodate longer/shorter words in different languages.

    They got this stuff from OpenStep. Some of the old OpenStep software companies (like Stone Design [stone.com]) offered bounties (like free software) for people that translated their apps to other languages.

    Burris

  • R/G color-blind people still have trouble with the colors used in traffic lights. They make do, however, because the relative positions of the colors are uniform.
  • I agree with your desire to categorize information more flexibly than in the arcane ways we currently force files into a directory structure.

    Check out the work on Placeless Documents [xerox.com] and Hans Reiser's white paper on name spaces [devlinux.com]. I find that stuff really interesting and encouraging.

  • The current metaphor system of user interfaces is the best chance at this time to develop 3D interfaces. Almost anyone who plays first person shooter-type games is satisfied with the keyboard-mouse combination that is currently used. It works because the interface is simple.

    Imagine yourself in a 3D environment that extends indefinitely. Imagine 3 axes that all intersect at you. The X axis will point infinitely side to side, while the Y axis will point infinitely up and down. The Z axis points infinitely forward and backward. A satisfying 3D interface must find a way to rotate along each of these axes and move forward and backward among these axes.

    For using the features of the computer, tasks must still be organized in a certain manner. Windows, dialog boxes, et cetera can still serve this purpose, with their content either rendered as a flat 2D surface on the window itself or allowed to "pop out" of the window, with "depth" values that make them come off the window. These depth values should be kept small, however, because the farther the objects are from their "window," the more chaos and disorganization occurs.

    The way that the keyboard+mouse combination works currently is that the mouse is mapped to rotate along the X and Y axes - moving the mouse left will rotate your perspective left around the Y axis and inversely for right, and moving your mouse up will rotate your perspective up around the X axis and inversely for down. You can use the keyboard in this system to move forward and backward along the X and Z axes, and limitedly along the Y axis - you can jump up, and gravity pulls you down, but there isn't the need for a good amount of control in that respect. A typical layout of keys for this approach is E moving forward along the Z axis, D moving backward, and S moving left along the X axis and F moving right.

    We already have rotation along the X and Y axes, and movement along the X and Z axes... but the movement system along the Y axis is much too weak to be used as a method of navigating through a GUI. What needs to be added to the system is a way to rotate around the Z axis, and a better way of moving along the Y axis. What would be ideal would be a mouse with buttons on it that serve as arrow keys arranged in the familiar format, but there isn't really a big market for that. Supply and demand therefore dictates that a mouse like that will not be created for at least a couple of years.

    We have to use different fingers on the keyboard, or assign more keys to the fingers already in use. We are using the ring finger for left movement along the X axis with the S key; the middle finger controls forward and backward movement along the Z axis with the E and D keys respectively; and the pointer finger moves right along the X axis with the F key. We can map the W key, for use by the ring finger, to rotation left around the Z axis, and for rotation right we can map the pointer finger's R key. We can assign movement along the Y axis to the pinky finger: Q can move up and A can move down.

    Using this system you can achieve somewhat accurate 3D precision with physically 2D input and output. To move or rotate a window through the 3D environment, you do just as you would in a 2D GUI: hold down the left mouse button to "grab" a portion of the window - like the title bar - navigate through the environment, and finally let go of the left mouse button to drop it where you want it. This presents a problem; it would be hard to select something if you don't have some way to pinpoint where you are "grabbing." A simple targeting reticle, like in FPS games or on guns - a dot, or a circle surrounding the area currently pointed at - would serve this purpose well. You just grab, rotate, move and drop, and there: you've changed the location and rotation to what you want. Resizing windows should exclude rotation, as rotating during a resize would shear the window and greatly complicate the whole interface.

    If you were to modify the contents of a document, you must be able to easily rotate yourself so you are centered in relation to that window. You could simply select the window, press spacebar, and the GUI would automatically align you with the window, "looking" perpendicular to the surface of the window, and aligned so that your Y axis is aligned parallel to the left and right sides of the window.

    Now that you are ready to work with your document inside the window, such as the text document, you start typing. However, you now realize that every time you press A, D, E, F, Q, R, S, W, or the spacebar, you manipulate the environment, but not the document. There must be a better way.

    The most natural place to look on the keyboard to manipulate the environment with more than 4 keys would be the numeric keypad. You can press NumLock to toggle between number mode and manipulation mode. As an added feature, when you are in number mode the interface can be manipulated as a regular 2D environment, with a traditional cursor and everything! Key assignments could replace the keys E, D, S, F, Q, A, W, R, and spacebar with 8, 5, 4, 6, +, Enter, 7, 9, and ., respectively. You would have to use the left hand, so accessing + and Enter would be a stretch with your thumb, but you would get used to it easily.
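
    To make the proposed bindings easier to scan, here is the same mapping written out as a small table in Python. It is purely a restatement of the wishlist above, with hypothetical action names used only for illustration:

    # Proposed 3D-navigation bindings: main keys on the left hand, and the
    # numeric-keypad equivalents used when NumLock switches to "manipulation mode".
    BINDINGS = {
        "move_forward_z":  ("E", "8"),
        "move_backward_z": ("D", "5"),
        "move_left_x":     ("S", "4"),
        "move_right_x":    ("F", "6"),
        "move_up_y":       ("Q", "+"),
        "move_down_y":     ("A", "Enter"),
        "rotate_left_z":   ("W", "7"),
        "rotate_right_z":  ("R", "9"),
        "align_to_window": ("Space", "."),
    }

    for action, (main_key, keypad_key) in BINDINGS.items():
        print(f"{action:18s} {main_key:6s} {keypad_key}")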

    This is my wishlist; what actually happens to become a new interface will probably be radically different, but I hope I can provoke such an interface to be designed.

    ----
  • Well, on the plus side, Apple is fixing the stupid QT4 interface. The tray is going away, and the thumbwheel is being replaced with a slider.

    At least, as of OSX DP4...

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
    The fact is, they designed QWERTY to be slower because back in "the day" people were typing faster than the typewriters could process the input, and they kept jamming. QWERTY forced the users to type slower.

    Actually, moving the letters apart wasn't so much about slowing people down as about moving frequently co-occurring letters apart. With old typewriters, the arms that flew up and hit the ribbon were more likely to jam if they were close together, and moving the keys far apart made this problem a bit less troublesome.

    The scientific literature indicates that Dvorak gives a 5-10% speed advantage, and a small accuracy advantage vs QWERTY. But it doesn't justify the costs of hardware (although these days it would generally mean little stickers for the keyboard rather than a new physical mechanism).

    For most people, the bottleneck isn't typing speed, it's thinking speed. Unless you are a transcriptionist, most of your typing will probably occur during composition, when you will stop and think of stuff to write, etc... That's what takes time. I conducted a study a while ago that indicated that a 35 wpm typist only typed about 15 wpm when replying to emails, if you included thinking time.

    Interesting fact: The R in QWERTY is due to the fact that after the advent of the "Sholes keyboard" (when they reordered the characters) a typewriter company wanted their salesmen to be able to quickly type the word 'typewriter' to impress customers. All the other letters in 'typewriter' are on the top row, so they put the 'r' there too.

    ---
    Ever notice how at trade shows Microsoft is always giving away stress balls...
  • Given that at any point in a programme your decision tree of what you might want to do is not enormous, the goal of UI design is not to let you dump as many bits of information into the computer as possible, but to let the user convey what (s)he wants the programme to do with less information.
  • by CR0 ( 22574 ) on Wednesday July 05, 2000 @01:32PM (#955814)
    true, very true. and this is how i see it....

    speech. ok, so you have heard that before, but current speech technologies use a monitor... why?

    the new gui will all but disappear visually... i will have many many "panels" throughout my house. some on walls, some handheld (some bigger than the palm pilot, some the same size) and some on appliances. my "computer" will be cased in a closet somewhere, and i will walk around the house asking my computer to do things.... like, "computer, please display the current weather sat image on this handheld" and poof.. there it is.

    "computer, what is the price of ram today at egghead.com" (a voice echos, 15$ per GB, sir)

    "computer, please dial mom" (mom says hello through household stereo speakers, only in the room i am in)

    "computer, do i have all the ingrediants for 'brian's pizza recipie #4'" (computer answers no, you still need cheese) (ie, it remembered i asked before... arg.. silly me)

    "computer, please find which channel is playing the blue jays game, and display it on that wall display" (i point, and the game appears)

    see, these are all functions that we can do with computers today, but require a lot of effort on our part. ie, all current groceries in a database, a tv-tuner and tv-out card manually set up. etc, etc.

    the "GUI" essentially disappears. as does the manual work.


  • There is absolutely no standard when it comes to Unix programs. Every programmer has to come up with their own strange syntax. Think about it: we have vi, emacs and all the other editors. Everyone has to memorize different switches for each program, and none of them have much of any relation. I know there are loose standards, but still, sometimes -v displays the version, sometimes verbose output. -h might get you help, or -help, or --help. Then you have to memorize each program's UI and strange config file syntax. Part of this makes Unix great; a lot of it sucks. Just think how many different syntaxes there are for configuration files. No wonder every new Linux user is confused.

    As far as GUIs go, most suck. I have to say Windows is by far the easiest to use at first. Enlightenment looks nice and all, but it's confusing as hell to configure. Ever try to configure it by hand? WTF kind of syntax did they use? And besides, I would rather see some standards for computers than just another new geek interface that is totally original. There needs to be a consistent, simple standard for people to follow that carries over to all programs. Every UI in Unix is like a whole different world; I wonder how much time people waste adjusting to each one?

  • Tee hee.

    I think plain language speech interfaces apply to more than just applications where being hands-free is important. Most people don't type as fast as they speak (OK, I type faster than I speak but that's because I'm a computer nerd). People will (as they do, somewhat, today) tell their computer things like "Take a letter", "call home", etc. Even when they use a graphical display, speech combined with a pointing device will be the dominant means of input.

    Thanks

    Bruce

  • by Ian Bicking ( 980 ) <ianb@nOspaM.colorstudy.com> on Wednesday July 05, 2000 @01:35PM (#955817) Homepage
    We shouldn't force people to think like computers.
    I agree with this, but I think it's also important not to take it too far. Things that falsely mimic the real world are not helpful. My computer "desktop" is really only vaguely like an actual desktop. To extend the metaphor into drawers and what-not would be stupid -- drawers happen to be useful physical ways to keep objects, but they are lousy ways to keep data.

    Many of the things around us are not particularly intuitive. If you really think about the interface involved in driving a car, it's very non-intuitive. You press things with your foot to stop and go. You twist something to change direction, but that change is dependent on speed, direction, and how much you've already turned the wheel. It's awful. But, with some practice, nearly everyone is able to figure it out.

    What we should do with computers is create a simple set of fundamental ideas which combine in powerful ways. These are the abstractions people can use to do things they've never done before, successfully and without training. Files, or more generally objects, are probably one good abstraction. Currently the domain name/server abstraction is useful, but it may be replaced. There are more of these -- perhaps by defining a minimal set we can find a better interface.

    I can manipulate files much more flexibly than I can manipulate my CD collection. Hell, every time I get a new CD I have to rearrange everything because my CD holders are a little tight on space. It's a mess. Computers can do better. We shouldn't cripple them by holding them to physical/metaphorical limitations.
    --

  • by SimHacker ( 180785 ) on Wednesday July 05, 2000 @01:49PM (#955821) Homepage Journal
    See my previous posting about implementing pie menus in dynamic html/javascript/xml. If somebody ever gets around to implementing pie menus in the web browser using Dynamic HTML/JavaScript/XML, they could look like anything you can put into a web page.

    The pie menus in The Sims go a bit further than you could do in a web browser, though. Instead of using an opaque circular window, the Sims pie menus use a circular, feathered, real-time image processing effect. It shows through to the live 3D graphics going on behind the menu, but the menu background is desaturated, darkened and lowered in contrast, so the text labels and the colorful person's head in the menu center stand out sharply against the background; there is no sharp edge to the circular shadow effect, and you can still see what's going on behind the menu.

    The problem I was trying to solve, was that I wanted to clearly separate the interface elements (the pie menus) from the virtual world (the house), because the pie menus pop up overlapping the world view, wherever you click on an object.

    The pie menu has the selected person's head floating in the center, and without the shadow separating the head from the rest of the world, it would look disconcertingly like a giant head and menu labels appearing in the middle of the room among all the people and furniture.

    So the desaturation, darkness and low contrast of the background made the head in the menu center and the labels pop out much better against the otherwise colorful background. The circular shadow is smoothly feathered so it does not have a distinct edge, and the menu labels overlap out over that edge, breaking the frame, yet obviously associated with the menu.

    The overall effect is intended to be that the selected person is thinking about which action to perform on the selected object, their disembodied head outside of the world at another level of thought, looking up and down and all around at the labels, trying to decide which action to do next.
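
    For readers who want to try the general technique, here is a rough numpy sketch of a feathered desaturate-and-darken backdrop of the kind described above, assuming the frame is available as a float RGB array. This is only an illustration of the idea, not the actual code from The Sims:

    import numpy as np

    def feathered_menu_backdrop(img, center, inner_r, outer_r,
                                darken=0.6, desaturate=0.8):
        """Darken and desaturate a circular region with a soft feathered edge.
        img: float RGB array in [0, 1], shape (h, w, 3).
        center: (x, y) of the pie menu.  inner_r..outer_r: feather band, in pixels."""
        h, w, _ = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dist = np.hypot(xs - center[0], ys - center[1])

        # alpha = 1 inside inner_r, fading smoothly to 0 at outer_r
        alpha = np.clip((outer_r - dist) / (outer_r - inner_r), 0.0, 1.0)
        alpha = alpha[..., None]

        gray = img.mean(axis=2, keepdims=True)                  # crude luminance
        muted = (img * (1 - desaturate) + gray * desaturate) * darken

        return img * (1 - alpha) + muted * alpha

    # usage: backdrop = feathered_menu_backdrop(frame, (320, 240), 60, 100)

    Because the alpha falls off gradually between the inner and outer radii, there is no hard edge: labels drawn over the result can "break the frame" exactly as described above while still reading clearly against the muted background.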

    There's an illustration at the end of this web page, or you can pick up a copy of The Sims anywhere that sells computer games:
    http://catalog.com/hopkins/piemenus/NaturalSelection.html [catalog.com]
    Right now, The Sims is only available on Windows, but I'm making a lot of progress porting it to Linux, and looking for a distributor. Please contact your favorite Linux game distributor and tell them if you would like to buy a copy of The Sims for Linux.

    -Don

  • A lot of the interactions that need to be of concern are not the "Human-Computer" ones, but rather those where the computer is "talking to itself."

    This includes:

    • The whole "registry" thing.

    There is various information that needs to be persistent to one degree or another. On Windows, this tends to be saved in the Registry of "renown and much denigration."

      On Linux, such data typically sits in the hordes of files in /etc and in $HOME/.*rc

      The semantics of how this all works has rather a lot of effect on how applications start up, even though it sits "under the covers."

    • Similarly, when applications 'talk to one another,' whether via OLE, COM, CORBA, RPC, HTTP, or ICE, this has rather a lot of effect on system behaviour, even when the protocols hide "below the skin."
    • The use of serialized data transfer protocols ( e.g. - the "Save File" dialog) as opposed to persistent database schemes similarly can make systems work way different even though the appearance of what gets shown on screen may have minimal difference.

      It's a small additional step to get to "transactional" systems, where once updates are "committed," they are really permanent. Think Tuxedo/Encina... (a tiny sketch of the idea follows at the end of this comment).

    These three "views" all have in common that they have nothing to do with which GUI library you're using to build your applications, or what icons are used.

    The fact that they're not particularly "visible" does not make them any the less important in the overall scheme of things.

    After all, if the gentle user can shut down (perhaps pressing the power switch!), and expect to power up again tomorrow and have everything go to where it was when they pressed the switch, that has lots of effect on user behaviour, whether they "click on save" continually, or not.

    My thought here is that a lot of the "HCI" changes taking place don't always need to involve things that are manifestly graphical. A Massively Improved World may "simply" involve systems that are reliable and provide persistent data as opposed to "3D Rotating Splash Screens."
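
    Picking up the "transactional" point from the list above: the idea is easy to make concrete with a minimal sqlite3 sketch in Python, where nothing becomes permanent until the commit, and a crash before that point simply leaves the previous state intact. This is a generic example, not Tuxedo or Encina:

    import sqlite3

    con = sqlite3.connect("prefs.db")
    con.execute("CREATE TABLE IF NOT EXISTS prefs (key TEXT PRIMARY KEY, value TEXT)")

    try:
        with con:  # opens a transaction; commits on success, rolls back on error
            con.execute("INSERT OR REPLACE INTO prefs VALUES (?, ?)",
                        ("window_geometry", "800x600+10+10"))
            con.execute("INSERT OR REPLACE INTO prefs VALUES (?, ?)",
                        ("last_document", "/home/user/report.txt"))
        # both rows are now durable; a crash before this point would have left
        # the previous values untouched
    finally:
        con.close()

    From the user's point of view this is exactly the "power up tomorrow and everything is where it was" behaviour described above, and none of it shows up anywhere in the GUI toolkit.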

  • by Wakko Warner ( 324 ) on Wednesday July 05, 2000 @04:29PM (#955878) Homepage Journal
    Absolutely nothing looks worse than a screen cluttered with seventeen different-looking applications, each as counter-intuitive and gaudy as the rest, and each totally different from the others. Open up Realplayer 7, Quicktime Player, Winamp, "Neoplanet", and a few other apps. Oh, then run Windowblinds.

    I'm assuming you've got a Windows system. Those who run Linux, like me, can easily emulate this train wreck in X with GTK, KDE, Xt, Motif, Athena, and straight Xlib applications.

    Barf. Barf. Barf! Death to skins everywhere. Give me a good-looking, powerful, *standard*, incredibly intuitive interface. Hopefully someone's researching this.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • by Eater ( 30104 ) on Wednesday July 05, 2000 @06:37PM (#955898)
    I would like to see a textmode interface to a 3D GUI. Something like:

    Desktop
    You are standing in an open field to the west of a bar. There are some icons in the bar.

    >examine icons
    You can't see any icons here!

    >e

    Launcher Bar
    This is a narrow room with passages leading west to the Desktop and north to an xterm window. In addition, a set of stairs leads down into darkness. There is a Netscape icon here. There is a StarOffice icon here. There is a gaim icon here.
    Your pointer is glowing with a faint blue glow.

    >click netscape

    What do you want to click the Netscape icon with?

    >click netscape with pointer

    A violent rumbling comes from the ground. A previously unseen door opens to the southwest, revealing a brightly colored splash screen. After a moment, the rumbling stops, and the splash screen is replaced by an instance of Netscape Navigator 4.72, process number 5188.
    Your pointer has begun to glow very brightly.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...