Programming Education GUI Technology

Ask Slashdot: Why Are We Still Writing Text-Based Code? 876

Posted by timothy
from the because-there-are-only-so-many-lego-in-the-world dept.
First time accepted submitter Rasberry Jello writes "I consider myself someone who 'gets code,' but I'm not a programmer. I enjoy thinking through algorithms and writing basic scripts, but I get bogged down in more complex code. Maybe I lack patience, but really, why are we still writing text-based code? Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand? One that's language-agnostic and without all the cryptic jargon? It seems we're still only one layer of abstraction from assembly code. Why have graphical code generators that could seemingly open coding to the masses gone nowhere? At a minimum, wouldn't that eliminate time spent dealing with syntax errors? OK Slashdot, stop my incessant questions and tell me what I'm missing." Of interest on this topic, a thoughtful look at some of the ways that visual programming is often talked about.
This discussion has been archived. No new comments can be posted.

Ask Slashdot: Why Are We Still Writing Text-Based Code?

  • by Anonymous Coward on Friday February 07, 2014 @08:59PM (#46191731)

    The reason programming languages are still as they are is simple: you can't produce something complex with something simple; i.e., the more you simplify something, the less control you have over it. Can a programming language be made that is not text based? Sure, but I highly doubt you are going to get the flexibility to do a lot of things. Even assembly is still required sometimes.

    • This view is belied by the graphical tools used to design and layout hardware and chips. Higher level languages in particular are largely based on connecting the data flow between various pre-defined blocks or objects - function libraries.

        I actually built a primitive graphical Pascal pre-processor back in the late 1980s, which used the CMU SPICE circuit board layout program. Since the output of the program was text based, it could be processed into Pascal code. The model I used was that a function was a 'black box' with input and output 'pins', but also could be designed itself in a separate file.

      I never actually finished it, but it was pretty workable as a programming paradigm, and opened up some new ways of looking at programs. For instance, a 3-D structure could be used to visualize formal structure (function calls, etc.) in one axis, data flow in another.

      Also, the Interface Builder for the NeXT machine was more-or-less graphical, IIRC only 2-D. It made for very fast prototyping of a new user interface, and the 'functional' code could be put in later. (I saw a former schoolteacher, who had never used a computer until a few months before, demonstrate creating a basic calculator in Interface Builder in under 15 minutes. It worked, first time.)

      I think the real issue is in large part a chicken-and-egg problem. Since there are no libraries of 'components' that can be easily used, it's a lot of work to build everything yourself. And since there is no well-accepted tool, nobody builds the function libraries.

      Looking at this from a higher level, a complex system diagram is a visualization that could be broken down to smaller components.

      In practice, I believe that the present text-based programming paradigm artificially restricts programming to a much simpler logical structure compared to those commonly accepted and used by EEs. For example, I used to say "structured programming" is essentially restricting your flow chart to what can be drawn in two dimensions with no crossing lines. That's not strictly true, but it is close. Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.

      • by icebike (68054)

        This view is belied by the graphical tools used to design and layout hardware and chips. Higher level languages in particular are largely based on connecting the data flow between various pre-defined blocks or objects - function libraries.

        That's basically a scheduling and routing problem. Getting all the leads from hither to yon connecting all the right points.
        That's drafting, not programming. Akin to wiring a board for an IBM 407 [wikipedia.org] or something.
        Wiring a board wasn't programming either, (although it was often called that).

        Conceptualizing the problem and the solution is the job of wetware. The closest we get to symbolic programming is writing out flow charts, but you can still say in 6 words and one symbol what takes a mountain of code to ac

        • by unitron (5733)

          ...
          Wiring a board wasn't programming either, (although it was often called that)...

          In, or shortly after, the beginning, that is how programming was done, back when computers were made of vacuum tubes and relays and were about as big as a locomotive, if not as cool running.

      • by Tom (822) on Friday February 07, 2014 @09:59PM (#46192245) Homepage Journal

        Also, the Interface Builder for the NeXT machine was more-or-less graphical, IIRC only 2-D. It made for very fast prototyping of a new user interface, and the 'functional' code could be put in later. (I saw a former schoolteacher, who had never used a computer until a few months before, demonstrate creating a basic calculator in Interface Builder in under 15 minutes. It worked, first time.)

        That's impressive for a newbie, but it's not even in the same order of magnitude of complexity as a real application. And it probably didn't have input validation and a bunch of other things that new programmers always forget.

        I've got a couple of programs with several tens of thousands of lines of code. If you want to visualize them, you will need a very, very large sheet. And it wouldn't be any more transparent.

        Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.

        It's also the only engineering discipline with no physical representation. So maybe, just maybe, it's a case of "the rules don't apply because it's different" ?

      • by erice (13380) on Friday February 07, 2014 @10:18PM (#46192399) Homepage

        In practice, I believe that the present text-based programming paradigm artificially restricts programming to a much simpler logical structure compared to those commonly accepted and used by EEs. For example, I used to say "structured programming" is essentially restricting your flow chart to what can be drawn in two dimensions with no crossing lines. That's not strictly true, but it is close. Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.

        Funny that you should say that. For the last 20 years, the trend in Electrical Engineering has been away from graphical entry and toward text-based design languages. Hardly anyone designs logic by drawing gates anymore. We use languages like Verilog and VHDL, which look a whole lot like software languages. Even the analog designers make use of Verilog-A or even just Spice, all text based. When it comes down to building a circuit board or analog circuitry on a chip, there is still a manual "compile" step of drawing diagrams and polygons, but that is only because the result is ultimately a three dimensional object (well, more like 2.5D) and it is the only way to be sure you get what you intended. It is not because creating designs graphically is considered convenient.

    • The reason programming languages are still as they are is for a simple reason, because you can't produce something complex with something simple

      You're right! Think of how much more profound Shakespeare would have been if we had 28 or 30 letters in the alphabet, or think of the symphonies that Mozart could have written if he had twelve notes to compose with, instead of eight. Why, even the Cray Supercomputer would have been astounding in its day if AND, OR and NOT weren't the only gates we had to build with. Maybe if they weren't coding in LOGO, then beta would be worth switching to.

      • by unitron (5733) on Friday February 07, 2014 @11:18PM (#46192789) Homepage Journal

        " Why, even the Cray Supercomputer would have been astounding in its day if AND, OR and NOT weren't the only gates we had to build with."

        Do you know how hard it was to get the individual components and materials to build MAYBE gates, and how tight the tolerances had to be?

        Doc Brown only needed one flux capacitor, those things each needed at least a dozen.

        And you couldn't just take a MAYBE gate and slap an inverter on it to get an NMAYBE, you had to turn the whole design inside out, and the XMAYBE only existed on paper, because it would have taken the equivalent of 3 Manhattan projects and a quarter of the GNP of the entire Western Hemisphere just to produce a working prototype.

  • It's been done (Score:5, Insightful)

    by Misanthrope (49269) on Friday February 07, 2014 @08:59PM (#46191737)

    If you have to understand the concepts anyway, why is text worse than a graphical setup? You can't really avoid learning syntax this way if you want to write anything actually complicated.

    Also, fuck beta.

    • Re:It's been done (Score:5, Insightful)

      by Nerdfest (867930) on Friday February 07, 2014 @10:12PM (#46192343)

      Why are we still writing books using text (for the most part)? Doing it with pictures or other methods is frequently not clear enough even for fiction. Text is concise, or at least more so than other methods.

      • Re:It's been done (Score:4, Interesting)

        by tlhIngan (30335) <(ten.frow) (ta) (todhsals)> on Saturday February 08, 2014 @03:12AM (#46193631)

        Why are we still writing books using text (for the most part)? Doing it with pictures or other methods is frequently not clear enough even for fiction. Text is concise, or at least more-so than other methods.

        Well, perhaps the question is: why are we still using only text to code?

        I mean, the thing is, books are mostly text, but there are also illustrations (photos, artwork, graphs, charts, etc) that help enhance the content in the book.

        "A picture is worth a thousand words" does apply quite a bit, and it shows how one picture can replace a ton of wordy description in clarity, conciseness, and ease of expression.

        Heck, we can start with basic charts and tables - when you need to consult a chart or table, why do we have to literally code them in? Can't we just say "this is a chart with input X and output(s) Y", include it, and have the compiler automatically generate the code to handle looking up the data? Same with a table of data - you put it in the code as a table, and the computer figures it out and may even offer interpolation.

        Now you have source code where the chart is easy to understand and the amount of written code is less because the compiler generates the actual translations and encoding of the table.
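        The table-as-data idea above isn't a feature of mainstream compilers, but it can be sketched today as a small library in Python; `table_function` and `fuel_map` are invented names for illustration, not an existing tool:

```python
import bisect

def table_function(points):
    """Turn a declared (x, y) table into a lookup function with
    linear interpolation - the code the compiler would generate."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]

    def lookup(x):
        # Clamp to the table's range at either end.
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        # Find the surrounding pair of points and interpolate.
        i = bisect.bisect_right(xs, x)
        x0, x1 = xs[i - 1], xs[i]
        y0, y1 = ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    return lookup

# The "chart" stays readable as plain data in the source:
fuel_map = table_function([(0, 1.0), (1000, 1.4), (2000, 2.2), (4000, 3.0)])
```

        Here the table is the source of truth and the lookup/interpolation code is written once, which is the comment's point: less hand-written translation code around the data.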

    • Re:It's been done (Score:5, Insightful)

      by Dunbal (464142) * on Friday February 07, 2014 @10:27PM (#46192459)
      I would like to complain that OP had to explain his concept to us in words. Why are we still using something as primitive as words - abstract collections of symbols depicting sound (of all things!) - to convey meaning? Surely in the tens of thousands of years or more that humans have had language, someone must have come up with a better way of transmitting information... oh, and fuck beta
    • by Anonymous Coward on Friday February 07, 2014 @10:36PM (#46192509)

      If you have to understand the concepts anyways, why is text worse than a graphical set up? You can't really avoid learning syntax this way if you want to write anything actually complicated.

      Also, fuck beta.

      For that matter (and it really does matter), why is Slashdot still text based? I mean, my 2-year-old daughter enjoys looking at pictures on an iPad. So why not make Slashdot picture-based only, to open it up more to the masses (who often have the intellectual capacity of a 2-year-old anyway)? You could start by having 42% of visitors arbitrarily enter this new picture-only mode, which would have the second letter of the Greek alphabet (I love Greek!), and an embedded picture that everyone associates with Slashdot (some .cx domain or something). I'm blanking on the second step here, but I promise you, we will PROFIT!

  • Lego Mindstorms (Score:5, Interesting)

    by mrbluze (1034940) on Friday February 07, 2014 @09:00PM (#46191741) Journal

    Try Lego Mindstorms and see whether you find it quicker or slower. It's easy to make something simple but once the algorithm gets complicated it is not much easier to decipher than text code, and no faster in my experience. As soon as you want to get serious with the system, you will wish it had a low level system that lets you lay it out in text instead of images.

    This is partly the reason why surviving languages use symbols representing sounds rather than images as the Egyptians used. It's faster to write, and possibly faster to read.

    • Re:Lego Mindstorms (Score:5, Informative)

      by petermgreen (876956) <plugwash@@@p10link...net> on Friday February 07, 2014 @09:52PM (#46192205) Homepage

      Try Lego Mindstorms

      Be aware that the lego NXT software (haven't tried the EV3 stuff yet) is seriously crippled compared to labview (which it was based on), in particular you can't take "wires" in and out of structure blocks.

      I have used labview a bit and find several things annoying.

      1: There is no zoom functionality (apparently this is the #1 most requested feature)
      2: Unlike variable names in traditional code, wires in labview typically don't have names. This makes it hard to understand what each wire is for (yes, I'm pretty sure there is a way to label them, but it's something extra you have to do, not something that comes naturally as part of coding like in traditional languages)
      3: I can never remember what all the little pictures on the blocks mean.
      4: I find connecting the blocks very fiddly.

      Having said that some people seem to like it.

    • Re:Lego Mindstorms (Score:4, Interesting)

      by EETech1 (1179269) on Friday February 07, 2014 @09:56PM (#46192231)

      The last place I worked went from hand written C code to using Simulink to generate the (C) code for use in their ECMs.

      The entire engine: airflow model, fuel injection, and emissions system was just a bunch of pretty pictures in Simulink. You can drill down by clicking on the high level diagrams to see the nitty gritty of each process if you so desire.

      It was not nearly as efficient as the hand-coded version, but there were far fewer issues with bugs, and it allowed us to have (many more) math/simulation types coding instead of just a few C gods. There were libraries that Simulink hooked into that let it configure the hardware, but those were hidden away from the day-to-day people diagramming code.

      Cheers!

  • by Anonymous Coward on Friday February 07, 2014 @09:01PM (#46191751)

    This is a rhetorical question. It would be similar to asking "why do we write books or manuals when we can just record a video"

    The answer is that written words are how we communicate, and how we record that communication as a civilization. Written communication is easy to modify and requires little space to store. And this is just scratching the surface, not touching things like language grammar or syntax, etc.

    • by pla (258480) on Friday February 07, 2014 @10:18PM (#46192413) Journal
      This is a rhetorical question. It would be similar to ask "why do we write books or manuals when we can just record a video"

      You clearly haven't searched for even the most trivial of "How do I..." topics recently, have you?

      Why write three quick and dirty sentence-fragments on how to do it, when you can record a 10 minute video and post it to YouTube? And I wish I meant this as hyperbole.

      More seriously, I agree with you. We still code in text because no programming language will ever let me easily express "c^=0xdeadbeef" by drawing a line between two data objects. Yes, wizards have become reasonably adept at setting up the core functionality of any app not worth writing in the first place. But even when they do allow you to write a line of code such as I gave above, well... I can type that in about a tenth the time it would take me to click... drag... click... right-click... click (function) select (xor)... click (constant) type "0xdeadbeef"... whatmorondoesntaccepthexforafuckingbitwiseop??? backspace*10 "-559038737".
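      For what it's worth, the hex-versus-decimal jab checks out; a quick sketch in Python (variable names invented for illustration):

```python
# One short line of text replaces the whole click-drag-select dance:
c = 0x12345678
c ^= 0xdeadbeef

# The decimal the hypothetical GUI demanded is just the signed
# 32-bit reading of the same hex constant:
unsigned = 0xdeadbeef       # 3735928559
signed = unsigned - 2**32   # -559038737
```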
      • by skids (119237) on Saturday February 08, 2014 @12:40AM (#46193133) Homepage

        Why write three quick and dirty sentence-fragments on how to do it, when you can record a 10 minute video and post it to YouTube?

        This. And it's getting even worse -- even enterprise grade vendors are starting to do it to document their products while allowing their more formal manuals to languish.

        Anyone who wonders why we still use language instead of pictures really needs to spend some time trying to find information in a manual for a GUI-based application versus finding it for the CLI (or writing the two styles of manual, for that matter). Yes, learning to read well and type well takes a lot of practice. It is also worth every second.

  • Labview (Score:5, Insightful)

    by Anonymous Coward on Friday February 07, 2014 @09:02PM (#46191767)

    Because visual programming is even more awkward in almost every respect (see Labview). It takes significantly longer to write, and large projects are all but impossible. There is a reason why circuits are not designed by drawing anymore (in most cases anyway)

    • Re:Labview (Score:5, Interesting)

      by Garble Snarky (715674) on Friday February 07, 2014 @09:04PM (#46191785)
      I'm stuck on a several-month-long Labview project right now. It's been a terrible experience. I don't know if it's more because of the poorly designed editor, the language itself, or the visual language paradigm. But I'm sure all three of those are part of the problem.
      • While SQL query design (with heavy checkpoint/drop down menu/etc UI) is sometimes useful, its ability to build queries with complicated logic is rather limited. It is good to write basic stuff or to learn basics of SQL writing, but people usually quickly move on to text mode in writing their SQLs.
        I personally enjoyed solving complicated problems by writing a suitable query to our database. I liked a lot to tune my queries' performance, it felt like creating art.

        My joy is about to end as our managers dec
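        The query-builder complaint above can be made concrete with a small sqlite3 sketch (schema and data invented for illustration): an aggregate condition is one terse text clause, while many drag-and-drop builders make this kind of HAVING logic awkward or impossible to express.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES
        (1, 'alice', 120.0),
        (2, 'alice',  30.0),
        (3, 'bob',   500.0);
""")

# "Customers whose average order exceeds 100" - trivial as text,
# clumsy as checkboxes and drop-down menus.
rows = conn.execute("""
    SELECT customer, AVG(total) AS avg_total
    FROM orders
    GROUP BY customer
    HAVING AVG(total) > 100
""").fetchall()
```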
    • Re:Labview (Score:4, Informative)

      by ArchieBunker (132337) on Friday February 07, 2014 @09:22PM (#46191957) Homepage

      I use Labview all the time and it does exactly as advertised. I'm a hardware guy but occasionally need things done in software. Sure, it's not optimal, but it gets the job done.

      How are circuits designed today if they are not drawn?

      • Re:Labview (Score:4, Informative)

        by tftp (111690) on Friday February 07, 2014 @09:33PM (#46192057) Homepage

        How are circuits designed today if they are not drawn?

        They are synthesized by XST, Synplify Pro, or a similar tool.

        Slashcott Feb. 10-17!

      • Board designs are usually done by drawing schematics, importing those into the PCB editor, and then laying them out (autolayout of PCBs is possible in theory, but I've never heard of anyone using it in practice).

        IC designs, on the other hand, are done by writing code in a hardware description language and then running that through the synthesizer (maybe with some manual tweaking afterwards for really high-end designs).

    • There are some applications where graphical design works. Matlab Simulink, Labview etc are very useful for a certain limited set of problems. My feeling is that if the problem can be easily represented graphically, it may make sense to use a graphical language to code the solution. I think it is rare for a graphical language to be a good choice for a large problem.

      It's pretty similar to spreadsheets - they are very efficient tools for certain types of functions, but should not be turned into large scale pro

  • Text-based books (Score:5, Insightful)

    by femtobyte (710429) on Friday February 07, 2014 @09:03PM (#46191771)

    Why are we still writing text-based books, and communicating in word-based languages? Surely, we should have some modern, advanced form of interpretive dance that would make all such things obsolete. Wait, that's a terrible idea! Text turns out to be a precise, expressive mode of communication, based on deep human-brain linguistic and logical capabilities. While "a picture is worth a thousand words" for certain applications, clear expression of logical concepts (versus vague "artistic" expression of ambiguous ideas) is still best done in words/text.

  • by t0qer (230538) on Friday February 07, 2014 @09:03PM (#46191773) Homepage Journal

    I think the /. folks think it's an early April Fools day. Not write code using text? That's like saying, write a book with pictures. Sure it can be done, but it doesn't apply to all books.

    Maybe beta is an early April Fools joke too.

  • by Moblaster (521614) on Friday February 07, 2014 @09:04PM (#46191783)
    Well, Grasshopper, or Unschooled Acolyte, or whatever your title of choice may be...

    You did not hear this from me.

    But most developers belong to the Church of Pain and we pride ourselves on our arcane talents, strange cryptic mumblings and most of all, the rewards due the High Priesthood to which we strive to belong.

    Let me put it bluntly. Some of this very complicated logic is complicated because it's very complicated. And pretty little tools would do both the complexity and us injustice, as high priests or priests-in-training of these magical codes.

    One day we will embrace simple graphical tools. But only when we grow bored and decide to move on to higher pursuits of symbolic reasoning; then and not a moment before will we leave you to play in the heretofore unimaginable sandbox of graphical programming tools. Or maybe we'll just design some special programs that can program on our behalf instead, and you can blurt out a few human-friendly (shiver) incantations, and watch them interpret and build your most likely imprecise instructions into most likely unworkable derivative codes. Or you can just take up LOGO like they told you to when you were but a school child in the... normal classes.

    Does that answer your impertinent question?
  • Sure thing (Score:5, Funny)

    by Tough Love (215404) on Friday February 07, 2014 @09:06PM (#46191795)

    Sure, and similarly, laws should not be written down in legal language, they should be distributed in comic book form.

  • by machineghost (622031) on Friday February 07, 2014 @09:06PM (#46191803)

    There have been LOTS of attempts at "visual code", and they all look great when you watch the 10 minute presentation on them, but when you actually try to use them you find that they all solve a very small set of problems. Programmers in the real world need to solve a wide variety of problems, and the only medium (so far) that can handle that is text code.

    It's like saying "why don't we write essays in pictograms?" You might be able to give someone directions to your house using only pictograms (and street names), but if you want to discuss why Mark Twain is brilliant, pictograms just don't cut it: you need the English (or some other) language.

    • by satch89450 (186046) on Friday February 07, 2014 @09:51PM (#46192199) Homepage

      I used to write articles for magazines as a full-time job. When I first started using the outliner MORE, I found that the task of writing became much, much easier: I would outline the article, then fill in text for each outline item. When I was finished, I would then export the text and there was my article. It let me design the articles top-down, just as a EE designs a circuit top-down. Moreover, as the article would develop, I could shift things around very easily without having to do massive cut-and-paste exercises.

      Software design? I do that top-down mostly. I design the top level with functions, then fill in the functions. Lather, rinse, repeat as many times as you need to. The result is a piece of software that is highly maintainable.

      One of my biggest complaints about "graphical" programming is that you can't have much on the display -- you end up paging *more* than with a text-based system. It isn't the text that's the problem; it's the lack of top-down design on the part of the human.

      Now, one system that I absolutely loved working with had an IDE (text-based) where you deal in layers. When you click on the function name, the function itself comes up in different windows. I found that paradigm encouraged small, tight functions. Furthermore, the underlying "compiler" would in-line functions that were defined only once automatically. (You could request a function be in-lined in all cases, like in C, if you needed speed over code size.)
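      The top-down workflow described above can be sketched in Python (names are illustrative, not from any real codebase): write the top level first as calls to stubs, then fill each stub in on a later pass.

```python
def load_input(path):
    """Leaf function, filled in after the outline was written."""
    with open(path) as f:
        return f.read().splitlines()

def transform(lines):
    """Another leaf; the placeholder body grew into real logic."""
    return [line.upper() for line in lines]

def write_report(lines):
    return "\n".join(lines)

def main(path):
    # The top level reads like the outline of the program,
    # just as the outliner's headings read like the article.
    return write_report(transform(load_input(path)))
```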

    • by Megane (129182) on Friday February 07, 2014 @10:54PM (#46192629) Homepage

      Also, just try using source code management (such as svn or git) with graphical programming languages. Even if they save in something sort of text-based (like XML), it's much harder to track and merge changes. And it's impossible when they save code as binary blobs. (LabView, I'm looking at YOU.)

      This is the number one reason why graphical programming languages are dead in the water from the start for any but the smallest toy projects.

  • by TheloniousToady (3343045) on Friday February 07, 2014 @09:07PM (#46191811)

    One practical example that I know of is Simulink, which can be used to generate code from diagrams. I did some testing years ago on Simulink-generated source code, and the code itself was awful-looking but always worked correctly. Not a lot of fun to test when you had to dig into it, though. Also, testing seemed superfluous after never finding any bugs in it. All the bugs we ever found were in the original Simulink diagrams that the humans had drawn.

    • by tftp (111690)

      Simulink is not as easy as it looks. Not every block has compatible I/O, and not every arrow from block A can connect to block B. You have to understand what data those blocks are producing and consuming. Simulink is a useful tool ... but only for a specific class of problems. I am not sure if it can even be used to calculate primes. A simple airline ticket reservation system would require sheets and sheets of Simulink graphics.

      Text-based code is very powerful. A mathematician can write a formula with jus

  • by dagrichards (1281436) on Friday February 07, 2014 @09:07PM (#46191813)
    You may believe that you 'get code'. But clearly you do not. There have been more than a few attempts to make common objects flexible enough that even you can stack them on top of each other to make applications. They are unwieldy and create poorly performing applications.
  • by Anonymous Coward on Friday February 07, 2014 @09:07PM (#46191815)

    Does APL [wikipedia.org] suffice?

  • by transporter_ii (986545) on Friday February 07, 2014 @09:09PM (#46191839) Homepage

    And why should you change if what you had worked great? I'm not against change, just as long as it is change for the better. If they came out with some snazzy new way to write code, but everyone said it sucks, while the old way worked just fine, then freaking stick with the old way. Unless you just don't care about actually making writing code better. Now who in their right mind would want to change something just to make it worse?

  • by sixtysecs (3529515) on Friday February 07, 2014 @09:12PM (#46191871)
    “Programs are meant to be read by humans and only incidentally for computers to execute”. — Donald Knuth http://stackoverflow.com/quest... [stackoverflow.com] http://www.codinghorror.com/bl... [codinghorror.com] http://www.codinghorror.com/bl... [codinghorror.com]
  • by umafuckit (2980809) on Friday February 07, 2014 @09:16PM (#46191905)
    There are "visual" (non-text) languages out there and they're not very nice. A major proprietary one is LabVIEW [ni.com], which is mainly used for data acquisition and instrument control (hence the name). This is what the code might look like [unm.edu]. Developing small applets in LabVIEW is very fast, but things get horrible as the project gets larger. LabVIEW issues include:
    • Hard to comment
    • Very easy to write bad code (particularly for beginners)
    • Version control is awkward
    • Clunky to debug because programs are hard to follow.
    • Hard to modify existing code
    • Coding becomes an exercise in placing the mouse in just the right places and finding the right little block.
    • As a beginner you waste lots of time on trivialities instead of actually learning to code.
    • Hard to learn from a book or even from reading somebody else's code.
    • Documentation is crappy.

    Graphical languages are still programming. Syntax errors don't go away, they just manifest themselves differently. I don't think graphical languages really solve any problems, they just create new ones. That's why they haven't caught on.

  • by sunking2 (521698) on Friday February 07, 2014 @09:17PM (#46191913)
    If someone ever comes up with such a thing I have the perfect name for it. MatrixX or perhaps Matlab.
  • by MpVpRb (1423381) on Friday February 07, 2014 @09:18PM (#46191923)

    ..text vs "something-else-that-isn't-text"

    The problem is complexity

    Programs are getting too complex for humans to understand

    We need more powerful tools to manage the complexity

    And no, I don't mean another java framework

  • by maple_shaft (1046302) on Friday February 07, 2014 @09:19PM (#46191935)

    There have been a number of attempts at making coding easy enough that non-engineering types can conceive their requirements, communicate them through a tool (usually in a visual manner), and have that turned into functional software. This has come in many different forms over the years: Powerbuilder, FoxPro, Scratch, BPEL, etc...

    The fundamental flaw is one of the software development industry itself, especially when it comes to line-of-business applications. Analysts writing requirements has always been an inefficient and flawed model, as most requirements documents are woefully incomplete and tend not to capture the true breadth of functionality that ends up in the resulting software. Analysts are business-oriented people: they think about the features and functionality that are most valuable, and tend to miss or skip what they deem low-value or low-risk items. Savvy technical folks have had to pick up the slack and fill in the gaps with non-functional requirements (architecture), or even understand the business better than the analysts themselves, for quality software to be realized at all.

    I have seen this song and dance enough. True story: IBM sales reps take some executives to a hockey game, show them a good time, and tell them about an awesome product that will empower their (cheap) analysts to visualize their software needs, so that you don't need as many (expensive) arrogant software engineers always telling you no and being a huge bummer by bringing up pesky "facts" like priorities and time. So management buys Process Server (snake oil doesn't do it justice) without consulting anybody remotely technical. Time passes, and the analysts struggle to be effective with it because it forces them to consider details and fringe cases. Software engineers end up showing them how to use it, at which point it just becomes easier for the software engineer to do the work instead of holding hands and babying the analysts all day. Now your company is saddled with a subpar product that performs terribly, that developers hate using, that analysts couldn't figure out, and that saved the company no money.

  • by Todd Knarr (15451) on Friday February 07, 2014 @09:22PM (#46191961) Homepage

    So-called "visual programming", which is what you're wanting, is great for relatively simple tasks where you're just stringing together pre-defined blocks of functionality. Where you're getting bogged down is exactly where visual programming breaks down: when you have to start precisely describing complex new functionality that didn't exist before and that interacts with other functionality in complex ways. It breaks down because of what it is: a way of simplifying things by reducing the vocabulary involved. It's fine as long as you stick to things within the vocabulary, but the moment you hit the vast array of things outside that vocabulary you hit a brick wall. It's like "simplifying" English by removing all verb tenses except simple past, present and future. It sounds great, until you ask yourself "OK, now how do I say that this action might take place in the future or it might not and I don't know which?" You can't, because in your simplification you've removed the very words you need. That may be appropriate for an elementary-school-level English class where the kids are still learning the basics, but it's not going to be sufficient for writing a doctoral thesis.

  • Look at RpgMaker (Score:5, Informative)

    by elysiuan (762931) on Friday February 07, 2014 @09:22PM (#46191965) Homepage

    Kind of a weird example but RpgMaker is a tool that lets non-programmers create their own RPG games. While there is a 'text based code' (ruby) layer a non-programmer can simply ignore it and either use modules other people have written or confine their implementation to the built in functionality.

    Now look at the complexity involved in the application itself to enable the non-programmer to create their game: dialog boxes galore, hundreds of options, great gobs of text fields, select lists, radio buttons. It's just overflowing with UI. And making an RPG, while certainly complex, is a domain-limited activity. You can't use RpgMaker to make an RDBMS, or a web framework, or an FPS.

    The explosion of UI complexity needed to solve the general case -- enabling the non-programmer to write any sort of program visually -- is stupendously high. With visual tools you'll always be limited by your UI and what the UI allows you to do. Also think of scale: we can manage software projects with text code up to very high volumes (it's not super easy, but it's doable and proven). Chromium has something like 7.25 million lines of code. I shudder to think how that would translate into some visual programming tool.

    I'm not sure how well it would scale

  • by saccade.com (771661) on Friday February 07, 2014 @09:22PM (#46191967) Homepage Journal
    Graphical programming languages were a popular PhD topic 25-30 years ago. You can find them today in systems targeted at kids or non-technical users. But you won't find them anywhere near serious software development. Text is an incredibly dense and powerful medium for communicating with machines. The problem with graphics for programming is they do not scale well. Consider a moderately complex problem, solved in, say, several thousand lines of code. The same thing expressed graphically starts using dozens of pages (or bubbles, or nodes or whatever graphics) to express the same thing. It gets ugly quick.

    Several years ago, I did the side-by-side experiment of expressing the same non-trivial digital circuit (a four-digit stopwatch with a multiplexed display) as both a schematic diagram and as text with Verilog. The graphic (schematic) version was much more time-consuming, and *much* harder to modify, than the text-based Verilog. It became very clear why digital circuit designers abandoned graphics and switched to text for complex designs.

  • by Bob9113 (14996) on Friday February 07, 2014 @09:25PM (#46191991) Homepage

    Most of the unnecessary parts of code are there for clarity, to make the code less cryptic. Most of the cryptic stuff is cryptic because it has been condensed. Consider iterating with a counter:

    for $i in ( 1..100 )

    That's about as concise as it can possibly be, and still get the job done. Most languages get a little more verbose, to add specificity and clarity:

    for ( int i = 1; i <= 100; i++ )

    That specifies the type of the holder (int), that the loop should include i=100 as the final iteration, and that i should be increased by 1 each time through. That's just a tiny example, but that is how most code is: as simple as possible without becoming too noise-like, but no simpler. Some languages, like Perl, even embrace becoming noise-like in their concision.

    As for doing it with pictures instead of text, we try that every five or ten years. GUI IDEs, MDA [wikipedia.org], Rational Rose [visual-paradigm.com], UML [wikipedia.org], etc (there's some overlap there, but you get the picture).

    I suspect the core problem is that code is a perfect model of a machine that solves a problem. The model necessarily must be at least as complex as the solution it represents. That could be done in pictures or with text glyphs. Why are text glyphs more successful? I'm guessing it is because we are a verbal kind of animal. Our brains are better adapted to doing precise IO and storage of complex notions with text than with pictures. It's also faster to enter complex and precise notions with the 40 or 50 handy binary switches on a keyboard than with the fuzzy analog mouse. But at this point I'm just spitballing, so on to another topic:

    Fuck beta. I am not the audience, I am one of the authors of this site. I am Slashdot. This is a debate community. I will leave if it becomes some bullshit IT News 'zine. And I don't think Dice has the chops to beat the existing competitors in that space.

  • by gweihir (88907) on Friday February 07, 2014 @09:25PM (#46191993)

    All other potential "interfaces" lack expressiveness. Just compare a commandline to a GUI. The GUI is very limited, can only do what the designers envisioned and that is it. The commandline allows you to do everything possible.

    So, no, we are not "still" using text. We are using the best interface that exists and that is unlikely to change.

  • by necro351 (593591) on Friday February 07, 2014 @09:33PM (#46192061) Journal

    ...and I do not mean programming language, though that can help.

    There is not a big gain (any gain?) to seeing a square with arrows instead of "if (a) {b} else {c}" once you get comfortable with the latter. I think you hinted at the real problem: complexity. In my experience, text is not your enemy (math proofs have been written in mostly text for millennia) but finding elegant (and therefore more readable) formulations of your algorithms/programs.

    Let me expand on that. I've been hacking the Linux kernel, XNU, 'doze, POSIX user-level, games, javascript, sites, etc..., for ~15 years. In all that time there has only been one thing that has made code easier to read for me and those I work with, and that is elegant abstractions. It is actually exactly the same thing that turns a 3--4 page math proof into a 10--15 line proof (use Liouville's theorem instead of 17 pages of hard algebra to prove the fundamental theorem of algebra). Programming is all about choosing elegant abstractions that quickly and simply compose together to form short, modular programs.

    You can think of every problem you want to solve as its own language, like English, or Music, or sketching techniques, or algebra. Like a game, except you have to figure out the rules. You come up with the most elegant axiomatic rules that are orthogonal and composable, and then start putting them together. You refine what you see, and keep working at it, to find a short representation. Just like as if you were trying to find a short proof. You can extend your language, or add rules to your game, by defining new procedures/functions, objects, etc... Some abstractions are so universal and repeatedly applicable they are built into your programming language (e.g., if-statements, closures, structs, types, coroutines, channels). So, every time you work on a problem/algorithm, you are defining a new language.

    Usually, when defining a language or writing down rules to a game, you want to quickly and rapidly manipulate symbols and assign abstractions to them, so composing rules can be done with an economy of symbols (and complexity). A grid of runes makes it easy to quickly mutate and futz with abstract symbols, so that works great (e.g., a terminal). If you want to try and improve on that, you have to understand that the problem is not defining a "visual programming language"; that is like trying to get non-literate people to read the classics by inventing a more elegant and intuitive version of English. The real problem is finding a faster/easier way to play with, manipulate, and mutate symbols. To make matters worse, whatever method you use is limited by the fact that most people read (that is, de/serialize symbols into abstractions in their heads) in 2D arrays of symbols.

    I hope this helps define the actual problem you are facing.

    Good luck!

  • by russotto (537200) on Friday February 07, 2014 @09:38PM (#46192089) Journal

    The reason we're still writing text-based code is because it works and it works well, unlike, say, Slashdot Beta. Other things have been tried; most sucked, no one used them, and they went away. Others (e.g. LabView) found a niche and stayed there.

    Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand? One that's language agnostic and without all the cryptic jargon?

    How are you going to describe this algorithm? As far as I can tell, any meaningful answer to that question IS a programming language.

  • Because... (Score:4, Insightful)

    by Darinbob (1142669) on Friday February 07, 2014 @09:39PM (#46192093)

    Because text based stuff works. All the graphical programming stuff essentially is experimental. ALL of them have major faults. Yes, there are some people who think that everything can be done in UML and then automatically have that generate code, but that requires a huge investment to learn UML (at least as much time as it takes to learn a text based language) plus the code generated is not necessarily efficient. This is a very old idea, people have been working on this for decades!

    It is only recently that we've had graphical displays I would consider good enough for the level of detail necessary. The computer monitors from 10 years ago were not high enough resolution.

    And frankly there's nothing wrong with text-based programming. After all, we are programmers. We all learned calculus (or should have), physics (or should have), we learned all the theory (or should have), we wrote term papers using text, and so forth. So learning a simple programming language should not be a hurdle to anyone. We're professionals; we should never be saying "this is too hard!"

    Graphical user interfaces are not efficient in terms of building something up. Lots and lots of mouse movement is necessary merely to draw out a basic set of blocks and flow control, and then you still need lots and lots of mouse movement to apply the correct sets of properties to each box, each line, and so forth (i.e., type in variable names, set their type, make them const, place them in the correct scope, etc.). Whereas with text you just start typing, and it is fast. That's why most professionals still use command-line interfaces instead of graphical user interfaces; they're faster and more efficient. You may think that typing is slow and cumbersome, but I find tools like Visio and PowerPoint to be slow and cumbersome.

    Finally, how are you going to share your graphical program? Do you require everyone who reads your code to also have the same graphical code viewer, no matter what operating system they are on? Sure, this may be OK if you're just doing simplistic Visual Basic, but in the real world you can't rely on it. The practical matter is that it will get translated into a textual form just to be shared, at which point you might as well have done it in text to start with. Why do we have so many programming languages? Because not everyone agrees on just one language, and of course no language is equally efficient in all problem domains. The same issue will exist in any graphical programming style; no one will agree on just one, and you'll need different variants.

    Basically, text based programs are indeed simpler and more robust. Now maybe you don't like some programming languages because they're too verbose and hard to type, in which case choose a language that uses higher level constructs, and so forth.

  • by raymorris (2726007) on Friday February 07, 2014 @09:41PM (#46192115)

    You'll have your answer if you first rewrite your questions in picture form, with no words. You may find it's much, much harder to write anything that way. There ARE purely graphical programming environments, like Lego Mindstorms. Using it, you can write a ten line program in only twenty minutes

    Additionally, graphical environments actually are NOT simpler. They are far more complex. Standard C, the language operating systems are written in, consists of a couple dozen "words". Microsoft VISUAL C has hundreds or thousands of items to learn.

    The visual approach only tries to HIDE the complexity, make it invisible. The thing is, if you can't see it, you can't understand it. Building a complex system out of complex parts that you cannot understand is extremely difficult. That way leads to madness, to healthcare.gov. The way to make it simple is to start with simple things - 30 or so simple words like "while" and "return". You take a few of those words to build a small function like "string_copy". The string copy function is simple because a) it does one simple job and b) you can understand it because it's composed of a few simple keywords. Take four or five of those simple functions like stringcopy and you can easily build a more powerful function like "find_student". Each stage is simple, so you build all this simple stuff, each simple layer built on another simple layer and soon you have powerful software that can do complex tasks. Graphical tools don't work like that. You can't have a "while" picture, because in even a fairly small program you soon end up with thousands of pictures, way too many to see and understand. So with graphical tools you have to have a "web browsing" picture - a complex object whose behavior you cannot intuitively know. Instead, you have to spend hundreds of hours reading textual descriptions of the details of how the "web browsing" picture and thousands of other pictures can be used. Learning a few dozen words if far, far simpler.

  • by Pseudonym (62607) on Friday February 07, 2014 @10:05PM (#46192289)

    ...that all of our tools are designed for text. Our editors, our debuggers, our revision control systems, our continuous integration systems, our collaborative code review systems, our bug/feature tracking systems... they are all designed around text. Replacing text for the writing part of programming does nothing about every other part of the pipeline.

    And of course, as Henry James noted, all writing is rewriting. This is just as true of software as of everything else.

    Everyone who has spoken about the information-density of text is, IMO, missing the point. Information density is not the most important aspect of software development, otherwise everyone would use APL instead of Java.

    More random thoughts:

    There have been some very good graphical and semi-graphical development environments out there; Newspeak is a good modern example. However, despite 30 years of trying, nobody has yet come up with a graphical programming environment which works well with more than one programming language. No modern system of any complexity is written in only one language, and the only format which they all speak is text.

    Oh, and don't forget the vendor lock-in issue. I can edit your text file in any editor or IDE that I want, and I don't have to pay you money or spend time learning your interactive tool. Any decent editor/IDE can be customised to do things like folding and syntax highlighting for your language, even if it doesn't support it out of the box.

  • by DrJimbo (594231) on Friday February 07, 2014 @10:08PM (#46192309)

    Algorithmic information theory (AIT) explains very clearly and simply why we are still writing text-based code. AIT is based on the idea of measuring the amount of information in a series of bits (or bytes or however you want to chunk it) based on the size of the smallest possible program that can create the series.

    There are simply not enough bits of information in a GUI-based coding system to create the algorithms we want/need to create. Even though almost all programming languages have a lot of redundancy built in to make them easier to understand, programs written in these languages still carry far more information than is available from simple point-and-click, which is equivalent to a series of multiple-choice questions. For example, 80 multiple-choice questions with 100 options each give you only about the information contained in a line of 80 ASCII characters.
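    To put rough numbers on that comparison, here is a minimal C sketch (the 95-symbol printable-ASCII count is an assumption for the example):

    ```c
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* 80 multiple-choice questions, 100 options each */
        double choice_bits = 80 * log2(100.0);   /* about 531 bits */
        /* one 80-character line drawn from ~95 printable ASCII symbols */
        double ascii_bits = 80 * log2(95.0);     /* about 526 bits */
        printf("%.0f bits vs %.0f bits\n", choice_bits, ascii_bits);
        return 0;
    }
    ```

    The two totals come out nearly equal, which is the comment's point: a whole battery of point-and-click choices carries no more information than a single line of code.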

    Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand? One that's language agnostic and without all the cryptic jargon?

    I believe people have tried to make universal programming languages. I don't think any of them caught on in the sense of replacing coding in real programming languages. And for very good reasons. One problem is the conflict between simpler and more robust. Shorter programs require higher information density and hence less redundancy and robustness. If you want to make a language simpler by reducing the number of keywords and special symbols then you will force programs to be longer or harder to understand or both. In the limit of the shortest program possible, the program itself appears to be a random series of bits, every one of which is significant. If there is any pattern or bias in the bits then it is not the shortest possible program.

    OTOH, some higher level-languages such as R or MatLab (Octave) do make it easier to express many algorithms. This is mostly because they have vector and matrix data types. Their forerunner in many ways was APL [wikipedia.org] which has a fairly high information density partly because it uses a wider range of characters than are available in ASCII. Perhaps you should learn R or Matlab or maybe even Mathematica. These languages give you a high-level means of expressing algorithms in a way that computers can understand.

    The summary reminds me of the lollipop Perlism [yale.edu]:

    When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.

  • Doesn't Scale (Score:5, Insightful)

    by iluvcapra (782887) on Friday February 07, 2014 @10:15PM (#46192367)

    I do a lot of odds and ends in Max/MSP [cycling74.com] and Reaktor [native-instruments.com] for work. Normally I do the more robust stuff in C, ObjC and Ruby.

    They're "dataflow" languages: you have boxes that transform data, and you wire them together in the order you want the transformations to happen. Everything's graphical. It's designed to be easy enough that someone with no computer background can use it: a composer or synth programmer will learn it for a few days and then off they go.

    I've noticed some things:

    • Code sharing almost never happens. You can't email a snippet of your "patch" (a program) as text, you can't post it in a text box at stackoverflow, it's almost impossible to communicate with other people about what you're working on without emailing the binary document. When you send someone a patch to look at, you're doing a lot of "look to the left of this," and "look for the red box."
    • Code reuse can be difficult because boxes generally aren't typed in any way, so interfaces are difficult to verify and document.
    • ... This leads the dev environments to only be as good as their templates and default libraries. People prefer Reaktor to Max not because it's easier for developing, but because it comes with a bunch of really useful default synths and sampler instruments, which people will tweak slightly.
    • It's very difficult to talk about the algorithm itself, you have to spend all your time orienting yourself. If the program you're building is a simple pipeline, it's easy to see what's happening, but if you have loops and divergences it becomes very hard to understand what's going on in the abstract.
    • Data types are a hack. You end up having to have different color wires that carry different things, type-tagging of binary data is routine, and you often have to do conversions because the environment runs different data connections at different levels of service. Trial and error is usually required to see if a box responds to a message in the way you want; I can write correct C without having to run the code, I would never try that in Reaktor.
    • Execution order is a hack. If you connect one output to two inputs, which input will process the output first? There's conventions: In Max: the rightmost box will act first, and your graph is traversed depth-first right-to-left (this rule introduces ambiguity when dataflow is fed back). There are also boxes/modules that can make execution order explicit in various ways. (Note that in most cases we don't care about execution order, and the implicit multithreading is quite nice.)
    • Doing N of anything is a pain. In Max, It's easy to build a sampler that can play one sample. It's easy to build one that can play two. It's basically impossible to build a sampler that can play N, without using the textual scripting language (ha!) to dynamically rewrite your patch based on creation arguments.

    If I have something that's useful, I'll often conceptualize it in Max and then rewrite it in C with CoreAudio, because I know the Max code is basically a dead end for its usefulness.

  • by ltbarcly (398259) on Friday February 07, 2014 @10:38PM (#46192525)

    For the same reason we still write text-based stories, send text-based emails, text based text messages, etc etc.

    There isn't a way to express tree structures directly, without jumping back and forth, so we have settled on (or evolved to) a standard way to linearize such structures, which is called grammar.

    There's no advantage to any other representation, but rather there is a huge disadvantage to other representations because our brains have spent the last million or so years evolving to be adept at manipulating language in this way.
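    That linearization idea can be made concrete. A minimal C sketch of a tiny expression tree for "a + b * c", with an in-order walk playing the role of the grammar that flattens it back into text (the Node layout is invented for the example):

    ```c
    #include <stdio.h>
    #include <string.h>

    typedef struct Node {
        char op;                   /* '+' or '*' for internal nodes, 0 for a leaf */
        char name;                 /* variable name, used only at leaves */
        const struct Node *left, *right;
    } Node;

    /* In-order walk: the "grammar" that linearizes the tree into flat text. */
    void serialize(const Node *n, char *out) {
        if (n->op == 0) {
            size_t len = strlen(out);
            out[len] = n->name;
            out[len + 1] = '\0';
            return;
        }
        serialize(n->left, out);
        size_t len = strlen(out);
        out[len] = n->op;
        out[len + 1] = '\0';
        serialize(n->right, out);
    }

    int main(void) {
        Node a = {0, 'a', NULL, NULL}, b = {0, 'b', NULL, NULL}, c = {0, 'c', NULL, NULL};
        Node mul = {'*', 0, &b, &c};     /* b * c */
        Node add = {'+', 0, &a, &mul};   /* a + (b * c) */
        char buf[16] = "";
        serialize(&add, buf);
        printf("%s\n", buf);             /* a+b*c */
        return 0;
    }
    ```

    The nested structure lives in the tree; the one-dimensional string is just its standard serialization, which is what the comment means by "grammar".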

  • by Lumpy (12016) on Friday February 07, 2014 @10:42PM (#46192551) Homepage

    If you really don't like how many things you have to use for programming, then switch to a simpler language.

    Assembly has the smallest command set there is. Start there.

  • Wrong Question (Score:4, Insightful)

    by Greyfox (87712) on Friday February 07, 2014 @10:54PM (#46192631) Homepage Journal
    What you really want to ask is "Why is programming hard"? It's hard because you have to know what you want to do. Go to any random company and ask a random employee how the company does what it does. What are its products, who are its customers, what do those customers want, what tasks does the business need automated to perform more efficiently? Projects fail so frequently (What is it, about 70% of the time?) because managers and some programmers think you can just start crapping out code without considering any of these things. You want a simplified environment where you can just draw a bunch of boxes together, but even if you had such an environment (As witnessed by the testimony of the people who have replied who do) it's STILL hard because you STILL have to know what you want. Programming is not fucking magic. We can't just crap out a bunch of code that magically does everything you want. Those of us who make it look easy have spent a lot of time mastering our craft. And we're still programming in text because we've found that it's the most efficient way of doing things, most of the time.
  • by SuperKendall (25149) on Friday February 07, 2014 @11:03PM (#46192705)

    "Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand?"

    Actually text is the simplest and most robust way.

    But the thing is, the best idea would be to have both that you could seamlessly switch between.

    A while back I had a system close enough to that that you could see the benefits: it was Java, and you could easily generate class diagrams and alter some things about the code while in there.

    Some things about code are just way easier to see as text, and some things are way easier to see visually... we should use each medium for the strengths it has and not abandon one for the other.

  • by stenvar (2789879) on Saturday February 08, 2014 @12:14AM (#46193015)

    Why Are We Still Writing Text-Based Code?

    For the same reason we still write text-based news articles, textbooks, letters, novels, recipes, screen plays, diplomatic cables, and other stuff: it works better than the alternatives.

  • by jader3rd (2222716) on Saturday February 08, 2014 @12:42AM (#46193141)

    It's possible that we're still using text to represent our ideas because we think in words. Right now my two-year-old son is behind the curve in talking. His mouth doesn't seem to have any physical limitation in making the necessary sounds for English, but he rarely attempts any words anyway. After lots of observation, I currently think the problem isn't that he can't say words; it's that he isn't thinking in words. Some concepts are represented as words, and he can say those words (e.g., ball, shoe, Super Mario), but the concepts that aren't yet words in his head are what's preventing him from speaking more.

    So the reason why I think we code in text, is because we think in words, which map really well to text.

  • Rephrase (Score:4, Insightful)

    by sjames (1099) on Saturday February 08, 2014 @12:49AM (#46193179) Homepage

    Please rephrase your question in the form of a picture.

    Or, if you prefer, interpretive dance.

    As you contemplate that task, you will learn the answer to your question.
