



Ask Slashdot: Why Are We Still Writing Text-Based Code? 876
First time accepted submitter Rasberry Jello writes "I consider myself someone who 'gets code,' but I'm not a programmer. I enjoy thinking through algorithms and writing basic scripts, but I get bogged down in more complex code. Maybe I lack patience, but really, why are we still writing text-based code? Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand? One that's language agnostic and without all the cryptic jargon? It seems we're still only one layer of abstraction from assembly code. Why have graphical code generators that could seemingly open coding to the masses gone nowhere? At a minimum, wouldn't that eliminate time dealing with syntax errors? OK Slashdot, stop my incessant questions and tell me what I'm missing." Of interest on this topic, a thoughtful look at some of the ways that visual programming is often talked about.
The more simple you make it the less complex it is (Score:5, Insightful)
The reason programming languages are still as they are is simple: you can't produce something complex with something simple, i.e., the more you simplify something, the less control you have over it. Can a programming language be made that is not text based? Sure, but I highly doubt you are going to get the flexibility to do a lot of things. Even assembly is still required sometimes.
Re:The more simple you make it the less complex it (Score:5, Interesting)
This view is belied by the graphical tools used to design and layout hardware and chips. Higher level languages in particular are largely based on connecting the data flow between various pre-defined blocks or objects - function libraries.
I actually built a primitive graphical Pascal pre-processor back in the late 1980s, which used the CMU SPICE circuit board layout program. Since the output of the program was text based, it could be processed into Pascal code. The model I used was that a function was a 'black box' with input and output 'pins', but could itself be designed in a separate file.
I never actually finished it, but it was pretty workable as a programming paradigm, and it opened up some new ways of looking at programs. For instance, a 3-D structure could be used to visualize formal structure (function calls, etc.) on one axis and data flow on another.
Also, the Interface Builder for the NeXT machine was more-or-less graphical, IIRC only 2-D. It made for very fast prototyping of a new user interface, and the 'functional' code could be put in later. (I saw a former schoolteacher, who had never used a computer until a few months before, demonstrate creating a basic calculator in Interface Builder in under 15 minutes. It worked, first time.)
I think the real issue is in large part a chicken-and-egg problem. Since there are no libraries of 'components' that can be easily used, it's a lot of work to build everything yourself. And since there is no well-accepted tool, nobody builds the function libraries.
Looking at this from a higher level, a complex system diagram is a visualization that could be broken down to smaller components.
In practice, I believe that the present text-based programming paradigm artificially restricts programming to a much simpler logical structure compared to those commonly accepted and used by EEs. For example, I used to say "structured programming" is essentially restricting your flow chart to what can be drawn in two dimensions with no crossing lines. That's not strictly true, but it is close. Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.
Re: (Score:3)
This view is belied by the graphical tools used to design and layout hardware and chips. Higher level languages in particular are largely based on connecting the data flow between various pre-defined blocks or objects - function libraries.
That's basically a scheduling and routing problem. Getting all the leads from hither to yon connecting all the right points.
That's drafting, not programming. Akin to wiring a board for an IBM 407 [wikipedia.org] or something.
Wiring a board wasn't programming either, (although it was often called that).
Conceptualizing the problem and the solution is the job of wetware. The closest we get to symbolic programming is writing out flow charts, but you can still say in 6 words and one symbol what takes a mountain of code to accomplish.
Re: (Score:3)
...
Wiring a board wasn't programming either, (although it was often called that)...
In the beginning, or shortly after, that is how programming was done, back when computers were made of vacuum tubes and relays and were about as big as a locomotive, if not as cool-running.
Re:The more simple you make it the less complex it (Score:5, Insightful)
Also, the Interface Builder for the NeXT machine was more-or-less graphical, IIRC only 2-D. It made for very fast prototyping of a new user interface, and the 'functional' code could be put in later. (I saw a former schoolteacher, who had never used a computer until a few months before, demonstrate creating a basic calculator in Interface Builder in under 15 minutes. It worked, first time.)
That's impressive for a newbie, but it's not even within an order of magnitude of the complexity of a real application. And it probably didn't have input validation and a bunch of other items that new programmers always forget.
I've got a couple of programs with several tens of thousands of lines of code each. If you wanted to visualize them, you would need a very, very large sheet. And it wouldn't be any more transparent.
Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.
It's also the only engineering discipline with no physical representation. So maybe, just maybe, it's a case of "the rules don't apply because it's different" ?
Re:The more simple you make it the less complex it (Score:5, Interesting)
In practice, I believe that the present text-based programming paradigm artificially restricts programming to a much simpler logical structure compared to those commonly accepted and used by EEs. For example, I used to say "structured programming" is essentially restricting your flow chart to what can be drawn in two dimensions with no crossing lines. That's not strictly true, but it is close. Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.
Funny that you should say that. For the last 20 years, the trend in electrical engineering has been away from graphical entry and toward text-based design languages. Hardly anyone designs logic by drawing gates anymore. We use languages like Verilog and VHDL, which look a whole lot like software languages. Even the analog designers make use of Verilog-A or even just Spice, all text based. When it comes down to building a circuit board or analog circuitry on a chip, there is still a manual "compile" step of drawing diagrams and polygons, but that is only because the result is ultimately a three-dimensional object (well, more like 2.5D) and it is the only way to be sure you get what you intended. It is not because creating designs graphically is considered convenient.
Re: (Score:3)
The reason programming languages are still as they are is simple: you can't produce something complex with something simple
You're right! Think of how much more profound Shakespeare would have been if we had 28 or 30 letters in the alphabet, or think of the symphonies that Mozart could have written if he had twelve notes to compose with, instead of eight. Why, even the Cray supercomputer would have been astounding in its day if AND, OR and NOT weren't the only gates we had to build with. Maybe if they weren't coding in LOGO, then beta would be worth switching to.
Re:The more simple you make it the less complex it (Score:4, Funny)
" Why, even the Cray Supercomputer would have been astounding in its day if AND, OR and NOT weren't the only gates we had to build with."
Do you know how hard it was to get the individual components and materials to build MAYBE gates, and how tight the tolerances had to be?
Doc Brown only needed one flux capacitor, those things each needed at least a dozen.
And you couldn't just take a MAYBE gate and slap an inverter on it to get an NMAYBE, you had to turn the whole design inside out, and the XMAYBE only existed on paper, because it would have taken the equivalent of 3 Manhattan projects and a quarter of the GNP of the entire Western Hemisphere just to produce a working prototype.
It's been done (Score:5, Insightful)
If you have to understand the concepts anyway, why is text worse than a graphical setup? You can't really avoid learning syntax this way if you want to write anything actually complicated.
Also, fuck beta.
Re:It's been done (Score:5, Insightful)
Why are we still writing books using text (for the most part)? Doing it with pictures or other methods is frequently not clear enough, even for fiction. Text is concise, or at least more so than other methods.
Re:It's been done (Score:4, Interesting)
Well, perhaps the question is: why are we still using text only to code?
I mean, the thing is, books are mostly text, but there are also illustrations (photos, artwork, graphs, charts, etc.) that help enhance the content in the book.
"A picture is worth a thousand words" really does apply quite a bit, and it shows how one picture can replace a ton of wordy description, gaining clarity, conciseness and ease of expression.
Heck, we can start with basic charts and tables - when you need to consult a chart or table, why do we have to literally code them in? Can't we just say "this is a chart with input X and output(s) Y" and just include it, and the compiler automatically generates the code to handle looking up the data? Same with a table of data - you put it in the code as a table, the computer figures it out and may even offer interpolation.
Now you have source code where the chart is easy to understand and the amount of written code is less because the compiler generates the actual translations and encoding of the table.
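As a rough sketch of that idea (all names and table contents here are invented for illustration; no compiler actually offers this), the closest you get in plain C today is declaring the chart as data and pointing one generic routine at it:

#include <stddef.h>

typedef struct { double x, y; } Point;

/* The "chart": input X -> output Y, sorted by x. */
static const Point chart[] = {
    {0.0, 0.0}, {1.0, 2.0}, {2.0, 3.5}, {4.0, 4.0},
};

/* Look up x in the table, linearly interpolating between entries. */
double lookup(const Point *t, size_t n, double x) {
    if (x <= t[0].x)   return t[0].y;      /* clamp below the table */
    if (x >= t[n-1].x) return t[n-1].y;    /* clamp above the table */
    for (size_t i = 1; i < n; i++) {
        if (x <= t[i].x) {                 /* x lies between t[i-1] and t[i] */
            double f = (x - t[i-1].x) / (t[i].x - t[i-1].x);
            return t[i-1].y + f * (t[i].y - t[i-1].y);
        }
    }
    return t[n-1].y;                       /* unreachable for sorted input */
}

The parent's point is that the compiler could generate everything below the table declaration for you.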
Re:It's been done (Score:5, Funny)
If you have to understand the concepts anyway, why is text worse than a graphical setup? You can't really avoid learning syntax this way if you want to write anything actually complicated.
Also, fuck beta.
For that matter (and it really does matter), why is Slashdot still text based? I mean, my 2-year-old daughter enjoys looking at pictures on an iPad. So why not make Slashdot picture based only, to open it up more to the masses (who often have the intellectual capacity of a 2-year-old anyway)? You could start by having 42% of visitors arbitrarily enter this new picture-only mode, which would have the second letter of the Greek alphabet (I love Greek!), and an embedded picture that everyone associates with Slashdot (some .cx domain or something). I'm blanking on the second step here, but I promise you, we will PROFIT!
Re: (Score:3)
Textual syntax does have its limitations. Many language designers strive to make their languages at most context-sensitive, but that can only take you so far. The semantics of variable naming and lookup require another layer on top of syntax to complete the description.
All languages have limitations. This is why TFA really makes no sense to me. The computer does not understand graphic vs. text, the computer understands binary instruction. Languages allow _us_ easier ways of telling a computer what to do. Languages are translators, and all translators are imperfect.
A graphical language tries to do way more translation than text; it has to. That does not make them better, or worse. I think the big thing is that text-based compilers bring a person closer to the machine.
Lego Mindstorms (Score:5, Interesting)
Try Lego Mindstorms and see whether you find it quicker or slower. It's easy to make something simple but once the algorithm gets complicated it is not much easier to decipher than text code, and no faster in my experience. As soon as you want to get serious with the system, you will wish it had a low level system that lets you lay it out in text instead of images.
This is partly the reason why surviving languages use symbols representing sounds rather than images as the Egyptians used. It's faster to write, and possibly faster to read.
Re:Lego Mindstorms (Score:5, Informative)
Try Lego Mindstorms
Be aware that the Lego NXT software (I haven't tried the EV3 stuff yet) is seriously crippled compared to LabVIEW (which it was based on); in particular, you can't take "wires" in and out of structure blocks.
I have used LabVIEW a bit and find several things annoying:
1: There is no zoom functionality (apparently this is the #1 most requested feature).
2: Unlike variable names in traditional code, wires in LabVIEW typically don't have names. This makes it hard to understand what each wire is for (yes, I'm pretty sure there is a way to label them, but it's something extra you have to do, not something that naturally comes as part of the coding like in traditional languages).
3: I can never remember what all the little pictures on the blocks mean.
4: I find connecting the blocks very fiddly.
Having said that, some people seem to like it.
Re:Lego Mindstorms (Score:4, Interesting)
The last place I worked went from hand-written C code to using Simulink to generate the (C) code for use in their ECMs.
The entire engine - airflow model, fuel injection, and emissions system - was just a bunch of pretty pictures in Simulink. You could drill down by clicking on the high-level diagrams to see the nitty-gritty of each process if you so desired.
It was not nearly as efficient as the hand-coded version, but there were far fewer issues with bugs, and it allowed us to have many more math/simulation types coding instead of just a few C gods. There were libraries that Simulink hooked into that let it configure the hardware, but those were hidden away from the day-to-day people diagramming code.
Cheers!
Re: (Score:3)
Mostly black, or blue (now available with custom colors)
We put them in some big yellow things for testing though.
www.mercuryracing.com/sterndrives/hp1650.php
The above is all Simulink powered including the drive by wire.
Re:Lego Mindstorms (Score:4, Insightful)
The familiarity, and the fact that it was already being used to do the modeling.
It's much easier to find people who are well versed in Matlab/Simulink vs. coding in C, especially the PhDs who really understand dynamic process control and simulation.
We already used Simulink to do the control algorithm design and simulation, but then we had to write detailed software specs to be hand-coded by the controls group (and hope we got the same results). Sending them an updated Simulink model to be linked into the production application (and being able to rapidly make changes to the production code for testing) was a much more effective use of everyone's time.
There were still only a few people who really knew how it worked, but anyone who knew Simulink could get it to do whatever they wanted without having to deal with understanding the interactions in many thousands of lines of C code.
Wrapping a PID around a PID that's already around a PID is much less error prone in Simulink, and much easier to integrate into the application.
You can have thousands of lines of code generated by a 5 minute update to a Simulink diagram. Copy it, paste it, delete it, change it around 5 ways on Wednesday, all without having to worry about breaking it, or leaving something behind that will haunt you.
Testing was also much easier as the functions knew the range of their variables and the whole thing could be simulated in Simulink to verify itself.
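For anyone who hasn't met one, a PID controller is only a few lines when written out by hand; what Simulink buys you is composing and re-tuning nests of them without touching code. A minimal textbook sketch in C (hand-written here for illustration; this is not actual Simulink output):

typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated error */
    double prev_error;   /* error from the previous step */
} PidState;

/* One control step; dt is the time since the last step (must be > 0). */
double pid_step(PidState *c, double setpoint, double measured, double dt) {
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

Wrapping a PID around a PID means feeding one controller's output in as another's setpoint - exactly the kind of plumbing that is tedious to rearrange by hand and trivial to re-wire in a diagram.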
Re:Lego Mindstorms (Score:4)
Last I heard Chinese was still a surviving written language.
How many programming languages do you know that use Chinese script?
Depends; how many keyboards do you own that have 30,000 keys?
Ideogrammatic languages are great for reading high-information-density content, but are really crappy when you want to do input processing; chording and composition sequences, such as kanji handwriting input, tend to be no more useful for input than Romaji or Katakana/Hiragana alphabetic/syllabic input. This is why it's been a holy grail for a while to get working voice input for Japanese, but the information density of audio input is generally worse than text input via keyboard (i.e. a good typist can do 120 WPM, but a good speaker can't speak 2 words per second).
Re:Lego Mindstorms (Score:4, Funny)
"... but a good speaker can't spee 2 words per second..."
You never heard me back in the day when I had to cram 90 to 120 seconds worth of furniture store copy into a 60 second spot.
Symbolic characters are on the decline. (Score:5, Informative)
>surviving languages use symbols representing sounds
Over a billion people would like to have a few symbols with you...
Are you referring to the Asian languages that use Chinese characters?
- Vietnamese used to be written in Chinese characters, it now uses the Latin alphabet.
- Korean replaced Chinese characters with the phonetic Hangul 500 years ago.
- Japanese has not one but two phonetic alphabets to go along with their Chinese characters. They mix all three together, and you can tell a passage is intended to be simple to understand when it is all phonetic except for the simplest of Chinese characters.
- Even China simplified the traditional characters because they were deemed too hard to learn. Schoolchildren are taught new Chinese characters via pinyin, a phonetic scheme that uses Latin characters. Since Chinese doesn't have a phonetic system of its own, borrowed foreign words are matched to the set of Chinese characters with the closest pronunciation. The result is a mix of characters where some carry their original symbolic meaning and others stand in only for their pronunciation. Think of the "what your name means in Chinese" party trick.
- Typing Chinese characters usually means typing out the pronunciation and then selecting the character.
I think the point that symbolic characters are on the decline is very valid.
Because people write text (Score:5, Insightful)
This is a rhetorical question. It would be similar to asking "why do we write books or manuals when we can just record a video?"
The answer is that written words are how we communicate and record such communication as a civilization. Written communication is easy to modify and requires little space to store. And this is just scratching the surface, not touching things like language grammar or syntax, etc.
Re:Because people write text (Score:4, Insightful)
You clearly haven't searched for even the most trivial of "How do I..." topics recently, have you?
Why write three quick and dirty sentence-fragments on how to do it, when you can record a 10 minute video and post it to YouTube? And I wish I meant this as hyperbole.
More seriously, I agree with you. We still code in text because no programming language will ever let me easily express "c^=0xdeadbeef" by drawing a line between two data objects. Yes, wizards have become reasonably adept at setting up the core functionality of any app not worth writing in the first place. But even when they do allow you to write a line of code such as I gave above, well... I can type that in about a tenth the time it would take me to click... drag... click... right-click... click (function) select (xor)... click (constant) type "0xdeadbeef"... whatmorondoesntaccepthexforafuckingbitwiseop??? backspace*10 "-559038737".
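Incidentally, the parent's decimal punchline checks out: 0xdeadbeef doesn't fit in a signed 32-bit int, so a tool that only accepts decimal constants forces you to type its two's-complement value. A quick C check (a throwaway sketch, obviously not from any such tool):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint32_t c = 0;
    c ^= 0xdeadbeefu;                     /* the one-liner from the comment above */
    printf("%" PRIu32 " %" PRId32 "\n",   /* prints: 3735928559 -559038737 */
           c, (int32_t)c);
    return 0;
}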
Re:Because people write text (Score:4, Interesting)
Why write three quick and dirty sentence-fragments on how to do it, when you can record a 10 minute video and post it to YouTube?
This. And it's getting even worse -- even enterprise-grade vendors are starting to do it to document their products while allowing their more formal manuals to languish.
Anyone who wonders why we still use language instead of pictures really needs to spend some time trying to find information in a manual for a GUI-based application versus finding it for the CLI (or writing the two styles of manual, for that matter). Yes, learning to read well and type well takes a lot of practice. It is also worth every second.
Labview (Score:5, Insightful)
Because visual programming is even more awkward in almost every aspect (see LabVIEW). It takes significantly longer to write, and large projects are all but impossible. There is a reason why circuits are no longer designed by drawing them (in most cases, anyway).
Re:Labview - Also SQL/ graphic query designer (Score:3)
I personally enjoyed solving complicated problems by writing a suitable query against our database. I really liked tuning my queries' performance; it felt like creating art.
My joy is about to end as our managers decided we should use the graphical query designer instead.
Re:Labview (Score:4, Informative)
I use LabVIEW all the time and it does exactly as advertised. I'm a hardware guy but occasionally need things done in software. Sure, it's not optimal, but it gets the job done.
How are circuits designed today if they are not drawn?
Re:Labview (Score:4, Informative)
How are circuits designed today if they are not drawn?
They are synthesized by XST, Synplify Pro, or a similar tool.
Slashcott Feb. 10-17!
Re: (Score:3)
Board designs are usually done by drawing schematics, importing those into the PCB editor, and then laying them out (autolayout of PCBs is possible in theory, but I've never heard of anyone using it in practice).
IC designs, on the other hand, are done by writing code in a hardware description language and then running that through the synthesizer (with maybe some manual tweaking afterwards for really high-end designs).
Have you tried any other languages? (Score:3)
ASM and C are nowhere near the abstraction level of LabVIEW.
C++ is higher, but so complex that it's useless for rapid development.
LabVIEW is at a much higher level of abstraction. Of course, it's designed essentially for hardware folks to do software with a low learning curve.
Comparable-level text-based languages would be something like Python or Matlab. Have you tried those?
Re: (Score:3)
There are some applications where graphical design works. Matlab Simulink, LabVIEW, etc. are very useful for a certain limited set of problems. My feeling is that if the problem can be easily represented graphically, it may make sense to use a graphical language to code the solution. I think it is rare for a graphical language to be a good choice for a large problem.
It's pretty similar to spreadsheets - they are very efficient tools for certain types of functions, but should not be turned into large-scale projects.
Text-based books (Score:5, Insightful)
Why are we still writing text-based books, and communicating in word-based languages? Surely, we should have some modern, advanced form of interpretive dance that would make all such things obsolete. Wait, that's a terrible idea! Text turns out to be a precise, expressive mode of communication, based on deep human-brain linguistic and logical capabilities. While "a picture is worth a thousand words" for certain applications, clear expression of logical concepts (versus vague "artistic" expression of ambiguous ideas) is still best done in words/text.
Re: (Score:3)
I have seen these things you speak of. I have also noticed that they have an extremely low information density, especially compared to the effort required to produce communications. Compare the number of person-hours required to make a movie versus writing a book. "TV" and "youtube" are not generally the first places I turn to when I want detailed information about a subject.
Re:Text-based books (Score:5, Funny)
A picture is worth a thousand words.
A thousand pictures flipping past at 24 frames per second is worth ten words.
April 1st isn't for a few more months (Score:5, Informative)
I think the /. folks think it's an early April Fools' Day. Not write code using text? That's like saying, write a book with pictures. Sure, it can be done, but it doesn't apply to all books.
Maybe beta is an early April Fools joke too.
Re: (Score:3)
They appear to be tossing up anything they can in an effort to stop us bitching about Beta.
Church of Pain (Score:5, Funny)
You did not hear this from me.
But most developers belong to the Church of Pain and we pride ourselves on our arcane talents, strange cryptic mumblings and most of all, the rewards due the High Priesthood to which we strive to belong.
Let me put it bluntly. Some of this very complicated logic is complicated because it's very complicated. And pretty little tools would do both the complexity and us injustice, as high priests or priests-in-training of these magical codes.
One day we will embrace simple graphical tools. But only when we grow bored and decide to move on to higher pursuits of symbolic reasoning; then and not a moment before will we leave you to play in the heretofore unimaginable sandbox of graphical programming tools. Or maybe we'll just design some special programs that can program on our behalf instead, and you can blurt out a few human-friendly (shiver) incantations, and watch them interpret and build your most likely imprecise instructions into most likely unworkable derivative codes. Or you can just take up LOGO like they told you to when you were but a school child in the... normal classes.
Does that answer your impertinent question?
Sure thing (Score:5, Funny)
Sure, and similarly, laws should not be written down in legal language, they should be distributed in comic book form.
Because text is the only medium that's varied enou (Score:3, Insightful)
There have been LOTS of attempts at "visual code", and they all look great when you watch the 10 minute presentation on them, but when you actually try to use them you find that they all solve a very small set of problems. Programmers in the real world need to solve a wide variety of problems, and the only medium (so far) that can handle that is text code.
It's like saying "why don't we write essays in pictograms?" You might be able to give someone directions to your house using only pictograms (and street names), but if you want to discuss why Mark Twain is brilliant, pictograms just don't cut it: you need the English (or some other) language.
Re:Because text is the only medium that's varied e (Score:5, Interesting)
I used to write articles for magazines as a full-time job. When I first started using the outliner MORE, I found that the task of writing became much, much easier: I would outline the article, then fill in text for each outline item. When I was finished, I would then export the text and there was my article. It let me design the articles top-down, just as an EE designs a circuit top-down. Moreover, as the article developed, I could shift things around very easily without having to do massive cut-and-paste exercises.
Software design? I do that top-down mostly. I design the top level with functions, then fill in the functions. Lather, rinse, repeat as many times as you need to. The result is a piece of software that is highly maintainable.
One of my biggest complaints about "graphical" programming is that you can't have much on the display -- you end up paging *more* than with a text-based system. It isn't the text that's the problem; it's the lack of top-down designing on the part of the human.
Now, one system that I absolutely loved working with had an IDE (text-based) where you deal in layers. When you click on the function name, the function itself comes up in different windows. I found that paradigm encouraged small, tight functions. Furthermore, the underlying "compiler" would in-line functions that were defined only once automatically. (You could request a function be in-lined in all cases, like in C, if you needed speed over code size.)
Re:Because text is the only medium that's varied e (Score:4, Interesting)
Also, just try using source code management (such as svn or git) with graphical programming languages. Even if they save in something sort of text-based (like XML), it's much harder to track and merge changes. And it's impossible when they save code as binary blobs. (LabView, I'm looking at YOU.)
This is the number one reason why graphical programming languages are dead in the water from the start for any but the smallest toy projects.
One practical example (Score:4, Interesting)
One practical example that I know of is Simulink, which can be used to generate code from diagrams. I did some testing years ago on Simulink-generated source code, and the code itself was awful looking but always worked correctly. Not a lot of fun to test when you had to dig into it, though. Also, testing seemed superfluous after never finding any bugs in it. All the bugs we ever found were in the original Simulink diagrams that the humans had drawn.
Re: (Score:3)
Simulink is not as easy as it looks. Not every block has compatible I/O, and not every arrow from block A can connect to block B. You have to understand what data those blocks are producing and consuming. Simulink is a useful tool ... but only for a specific class of problems. I am not sure if it can even be used to calculate primes. A simple airline ticket reservation system would require sheets and sheets of Simulink graphics.
Text-based code is very powerful. A mathematician can write a formula with just a few symbols.
if you "get coding" so well, why arent you coding? (Score:4, Insightful)
Re: (Score:3)
Nah, I'm actually with the poster. I get text-based traditional coding too, but find the ROI (time and effort) quite poor and the work dreary. You have to be either well disciplined, or get the sort of joy banging out code that runners get when pushing their bodies through the next mile.
So one can get 'coding', or get 'running', but still find oneself searching for something better (visual coding and visual abstractions being the swimming).
What do you mean by text? (Score:3, Funny)
Does APL [wikipedia.org] suffice?
Because it is classic (Score:5, Funny)
And why should you change if what you had worked great? I'm not against change, just as long as it is change for the better. If they came out with some new snazzy-looking way to write code, but everyone said it sucks... but the old way worked just fine... then freaking stick with the old way. Unless you just don't care about actually making writing code better. Now who in their right mind would want to change something just to make it worse?
Code is meant to be read. (Score:5, Insightful)
Because the alternatives are worse (Score:5, Insightful)
Graphical languages are still programming. Syntax errors don't go away, they just manifest themselves differently. I don't think graphical languages really solve any problems, they just create new ones. That's why they haven't caught on.
Wow, what a brilliant idea (Score:3)
The problem is not... (Score:4, Insightful)
...text vs. "something-else-that-isn't-text"
The problem is complexity
Programs are getting too complex for humans to understand
We need more powerful tools to manage the complexity
And no, I don't mean another java framework
It is a symptom of the industry and human nature (Score:4, Interesting)
There have been a number of attempts at making coding easy enough that non-engineering types are able to conceive their requirements and communicate them through a tool, usually in a visual manner, which turns this into functional software. This has come in many different forms over the years: PowerBuilder, FoxPro, Scratch, BPEL, etc...
The fundamental flaw is one of the software development industry, especially when it comes to line-of-business applications. Analysts writing requirements has always been an inefficient and flawed model, as most requirements documents are woefully incomplete and tend not to capture the true breadth of functionality that ends up existing in the resultant software. Analysts are business-oriented people; they will think about the features and functionality that are most valuable and tend to miss, or not waste time on, what are deemed low-value or low-risk items. Savvy technical folks have needed to pick up the slack and fill in the gaps with non-functional requirements (architecture), or even understand the business better than the analysts themselves, for quality software to be realized.
I have seen this song and dance enough. True story: IBM sales reps take some executives to a hockey game, show them a good time, and tell them about an awesome product that will empower their (cheap) analysts to visualize their software needs so that you don't need as many (expensive) arrogant software engineers always telling you no and being a huge bummer by bringing up pesky "facts" like priorities and time. So management buys Process Server (snake oil doesn't do it justice) without consulting anybody remotely technical. Time passes, and analysts struggle to be effective with it because it forces them to consider details and fringe cases. Software engineers end up showing them how to use it, at which point it just becomes easier for the software engineer to do the work instead of holding hands and babying the analysts all day. Now your company is saddled with a subpar product that performs terribly, that developers hate using, that analysts couldn't figure out, and that saved the company no money.
It's because you get bogged down (Score:5, Informative)
So-called "visual programming", which is what you're wanting, is great for relatively simple tasks where you're just stringing together pre-defined blocks of functionality. Where you're getting bogged down is exactly where visual programming breaks down: when you have to start precisely describing complex new functionality that didn't exist before and that interacts with other functionality in complex ways. It breaks down because of what it is: a way of simplifying things by reducing the vocabulary involved. It's fine as long as you stick to things within the vocabulary, but the moment you hit the vast array of things outside that vocabulary you hit a brick wall. It's like "simplfying" English by removing all verb tenses except simple past, present and future. It sounds great, until you ask yourself "OK, now how do I say that this action might take place in the future or it might not and I don't know which?". You can't, because in your simplification you've removed the very words you need. That may be appropriate for an elementary-school-level English class where the kids are still learning the basics, but it's not going to be sufficient for writing a doctoral thesis.
Look at RpgMaker (Score:5, Informative)
Kind of a weird example, but RpgMaker is a tool that lets non-programmers create their own RPG games. While there is a 'text-based code' (Ruby) layer, a non-programmer can simply ignore it and either use modules other people have written or confine their implementation to the built-in functionality.
Now look at the complexity involved in the application itself to enable the non-programmer to create their game. Dialog boxes galore, hundreds of options, great gobs of text fields, select lists, radio buttons. It's just overflowing with UI. And making an RPG game, while certainly complex, is a domain-limited activity. You can't use RpgMaker to make an RDBMS, or a web framework, or an FPS game.
The explosion of UI complexity needed to solve the general case -- to enable the non-programmer to write any sort of program visually -- is stupendously high. With visual tools you'll always be limited by your UI, and what the UI allows you to do. Also think of scale: we can manage software projects with text code up to very high volumes (it's not super easy, but it's doable and proven). Chromium has something like 7.25 million lines of code. I shudder to think how that would translate into some visual programming tool; I'm not sure how well it would scale.
Graphics doesn't scale well (Score:3)
Several years ago, I did the side-by-side experiment of expressing the same non-trivial digital circuit (a four-digit stopwatch with a multiplexed display) both as a schematic diagram and as text in Verilog. The graphic (schematic) version was much more time consuming, and *much* harder to modify, than the text-based Verilog. It became very clear why digital circuit designers abandoned graphics and switched to text for complex designs.
As Simple As Possible, No Simpler (Score:5, Insightful)
Most of the unnecessary parts of code are there for clarity, to make the code less cryptic. Most of the cryptic stuff is cryptic because it has been condensed. Consider iterating with a counter:
for $i (1..100)
That's about as concise as it can possibly be, and still get the job done. Most languages get a little more verbose, to add specificity and clarity:
for ( int i = 1; i <= 100; i++ )
That specifies the type of the counter (int), that it should include i=100 as the final iteration, and it explicitly states that i should be increased by 1 each time through. That's just a tiny example, but that is how most code is. It is as simple as possible, without becoming too noise-like, but no simpler. Some languages, like Perl, even embrace becoming noise-like in their concision.
As for doing it with pictures instead of text, we try that every five or ten years. GUI IDEs, MDA [wikipedia.org], Rational Rose [visual-paradigm.com], UML [wikipedia.org], etc. (there's some overlap there, but you get the picture).
I suspect the core problem is that code is a perfect model of a machine that solves a problem. The model necessarily must be at least as complex as the solution it represents. That could be done in pictures or with text glyphs. Why are text glyphs more successful? I'm guessing it is because we are a verbal kind of animal. Our brains are better adapted to doing precise IO and storage of complex notions with text than with pictures. It's also faster to enter complex and precise notions with the 40 or 50 handy binary switches on a keyboard than with the fuzzy analog mouse. But at this point I'm just spitballing, so on to another topic:
Fuck beta. I am not the audience, I am one of the authors of this site. I am Slashdot. This is a debate community. I will leave if it becomes some bullshit IT News 'zine. And I don't think Dice has the chops to beat the existing competitors in that space.
Simple: Text is most expressive (Score:4, Informative)
All other potential "interfaces" lack expressiveness. Just compare a command line to a GUI. The GUI is very limited: it can only do what the designers envisioned, and that is it. The command line allows you to do everything possible.
So, no, we are not "still" using text. We are using the best interface that exists and that is unlikely to change.
Language is the answer to your question... (Score:5, Insightful)
...and I do not mean programming language, though that can help.
There is not a big gain (any gain?) in seeing a square with arrows instead of "if (a) {b} else {c}" once you get comfortable with the latter. I think you hinted at the real problem: complexity. In my experience, text is not your enemy (math proofs have been written mostly in text for millennia); the hard part is finding elegant (and therefore more readable) formulations of your algorithms/programs.
Let me expand on that. I've been hacking the Linux kernel, XNU, 'doze, POSIX user-level, games, javascript, sites, etc., for ~15 years. In all that time there has only been one thing that has made code easier to read for me and those I work with, and that is elegant abstractions. It is actually exactly the same thing that turns a 3-4 page math proof into a 10-15 line proof (use Liouville's theorem instead of 17 pages of hard algebra to prove the fundamental theorem of algebra). Programming is all about choosing elegant abstractions that quickly and simply compose together to form short, modular programs.
You can think of every problem you want to solve as its own language, like English, or Music, or sketching techniques, or algebra. Like a game, except you have to figure out the rules. You come up with the most elegant axiomatic rules that are orthogonal and composable, and then start putting them together. You refine what you see, and keep working at it, to find a short representation. Just like as if you were trying to find a short proof. You can extend your language, or add rules to your game, by defining new procedures/functions, objects, etc... Some abstractions are so universal and repeatedly applicable they are built into your programming language (e.g., if-statements, closures, structs, types, coroutines, channels). So, every time you work on a problem/algorithm, you are defining a new language.
Usually, when defining a language or writing down rules to a game, you want to quickly and rapidly manipulate symbols, and assign abstractions to them, so composing rules can be done with an economy of symbols (and complexity). A grid of runes makes it easy to quickly mutate and futz with abstract symbols, so that works great (e.g., a terminal). If you want to try and improve on that, you have to understand the problem: it is not defining a "visual programming language" - that is like trying to encourage kids to read the classics by coming up with a more elegant and intuitive version of English for non-literate people. The real problem is trying to find a faster/easier way to play with, manipulate, and mutate symbols. To make matters worse, whatever method you use is limited by the fact that most people read (that is, de/serialize symbols into abstractions in their heads) in 2D arrays of symbols.
I hope this helps define the actual problem you are facing.
Good luck!
Because it works, like Slashdot Classic (Score:3)
The reason we're still writing text-based code is because it works and it works well, unlike, say, Slashdot Beta. Other things have been tried; most sucked, no one used them, and they went away. Others (e.g. LabView) found a niche and stayed there.
How are you going to describe this algorithm? As far as I can tell, any meaningful answer to that question IS a programming language.
please rewrite the submission in pictures, no words (Score:3)
You'll have your answer if you first rewrite your questions in picture form, with no words. You may find it's much, much harder to write anything that way. There ARE purely graphical programming environments, like Lego Mindstorms. Using it, you can write a ten-line program in only twenty minutes.
Additionally, graphical environments actually are NOT simpler. They are far more complex. Standard C, the language operating systems are written in, consists of a couple dozen "words". Microsoft VISUAL C has hundreds or thousands of items to learn.
The visual approach only tries to HIDE the complexity, to make it invisible. The thing is, if you can't see it, you can't understand it. Building a complex system out of complex parts that you cannot understand is extremely difficult. That way leads to madness, to healthcare.gov. The way to make it simple is to start with simple things - 30 or so simple words like "while" and "return". You take a few of those words to build a small function like "string_copy". The string copy function is simple because a) it does one simple job and b) you can understand it, because it's composed of a few simple keywords. Take four or five of those simple functions like string_copy and you can easily build a more powerful function like "find_student". Each stage is simple, so you build all this simple stuff, each simple layer built on another simple layer, and soon you have powerful software that can do complex tasks. Graphical tools don't work like that. You can't have a "while" picture, because in even a fairly small program you soon end up with thousands of pictures, way too many to see and understand. So with graphical tools you have to have a "web browsing" picture - a complex object whose behavior you cannot intuitively know. Instead, you have to spend hundreds of hours reading textual descriptions of the details of how the "web browsing" picture and thousands of other pictures can be used. Learning a few dozen words is far, far simpler.
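To make the parent's example concrete, here is roughly what that first layer looks like in C (the name string_copy comes from the comment above; the body is a plain hand-written sketch, essentially the classic strcpy):

/* Copy the NUL-terminated string src into dest.
 * Built from nothing but a few simple keywords. */
void string_copy(char *dest, const char *src) {
    while (*src != '\0') {   /* walk src until its terminator */
        *dest = *src;
        dest++;
        src++;
    }
    *dest = '\0';            /* terminate the copy */
}

A "find_student" would then be a handful of calls to helpers like this, and so on up the layers.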
The actual reason is... (Score:3)
...that all of our tools are designed for text. Our editors, our debuggers, our revision control systems, our continuous integration systems, our collaborative code review systems, our bug/feature tracking systems... they are all designed around text. Replacing text for the writing part of programming does nothing about every other part of the pipeline.
And of course, as Henry James noted, all writing is rewriting. This is just as true of software as of everything else.
Everyone who has spoken about the information-density of text is, IMO, missing the point. Information density is not the most important aspect of software development, otherwise everyone would use APL instead of Java.
More random thoughts:
There have been some very good graphical and semi-graphical development environments out there; Newspeak is a good modern example. However, despite 30 years of trying, nobody has yet come up with a graphical programming environment which works well with more than one programming language. No modern system of any complexity is written in only one language, and the only format which they all speak is text.
Oh, and don't forget the vendor lock-in issue. I can edit your text file in any editor or IDE that I want, and I don't have to pay you money or spend time learning your interactive tool. Any decent editor/IDE can be customised to do things like folding and syntax highlighting for your language, even if it doesn't support it out of the box.
Algorithmic Information Theory (Score:3)
Algorithmic information theory (AIT) explains very clearly and simply why we are still writing text-based code. AIT measures the amount of information in a series of bits (or bytes, or however you want to chunk it) by the size of the smallest possible program that can create the series.
There are simply not enough bits of information in a GUI-based coding system to create the algorithms we want/need to create. Even though almost all programming languages have a lot of redundancy built in to make them easier to understand, programs written in these languages still contain much more information than is available by simple point-and-click, which is equivalent to a series of multiple-choice questions. For example, 80 multiple-choice questions with 100 options each only give you the information contained in a line of 80 ASCII characters.
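A quick back-of-the-envelope check of that comparison (my arithmetic, not the parent's): each 100-option question carries log2(100) ≈ 6.64 bits, so 80 of them carry about 80 × 6.64 ≈ 531 bits, just under the 80 × 7 = 560 bits in a line of 80 seven-bit ASCII characters.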
Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand? One that's language agnostic and without all the cryptic jargon?
I believe people have tried to make universal programming languages. I don't think any of them caught on in the sense of replacing coding in real programming languages. And for very good reasons. One problem is the conflict between simpler and more robust. Shorter programs require higher information density and hence less redundancy and robustness. If you want to make a language simpler by reducing the number of keywords and special symbols then you will force programs to be longer or harder to understand or both. In the limit of the shortest program possible, the program itself appears to be a random series of bits, every one of which is significant. If there is any pattern or bias in the bits then it is not the shortest possible program.
OTOH, some higher-level languages such as R or Matlab (Octave) do make it easier to express many algorithms. This is mostly because they have vector and matrix data types. Their forerunner in many ways was APL [wikipedia.org], which has a fairly high information density, partly because it uses a wider range of characters than are available in ASCII. Perhaps you should learn R or Matlab or maybe even Mathematica. These languages give you a high-level means of expressing algorithms in a way that computers can understand.
The summary reminds me of the lollipop Perlism [yale.edu]:
When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
Doesn't Scale (Score:5, Insightful)
I do a lot of odds and ends in Max/MSP [cycling74.com] and Reaktor [native-instruments.com] for work. Normally I do the more robust stuff in C, ObjC and Ruby.
They're "dataflow" languages, you have boxes that transform data, and you wire them together in the order you want the transformation to happen. Everything's graphical. It's designed to be easy enough that someone with no computer background could use it– a composer or synth programmer will learn it for a few days and then off they go.
I've noticed some things:
If I have something that's useful, I'll often conceptualize stuff in Max and then rewrite it in C with CoreAudio, because I know the Max code is basically a dead end for its usefulness.
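For a flavor of what "boxes wired together" becomes after such a rewrite, here is a toy sketch in plain C (all names invented; a real CoreAudio render callback is considerably more involved). A two-box patch like gain -> clip turns into ordinary function composition over a sample buffer:

#include <stddef.h>

static float gain(float sample, float amount) { return sample * amount; }

static float clip(float sample, float limit) {
    if (sample >  limit) return  limit;
    if (sample < -limit) return -limit;
    return sample;
}

/* Process one block of audio: the nested calls are the "wires". */
void process_block(float *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] = clip(gain(buf[i], 1.5f), 1.0f);
}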
Re: (Score:3)
Because it's the best way.. (Score:3)
For the same reason we still write text-based stories, send text-based emails, text-based text messages, etc., etc.
There isn't a way to express tree structures directly, without jumping back and forth, so we have settled on (or evolved to) a standard way to linearize such structures, which is called grammar.
There's no advantage to any other representation, but rather there is a huge disadvantage to other representations because our brains have spent the last million or so years evolving to be adept at manipulating language in this way.
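The "linearized tree" point is easy to see with a toy example in C (types and names invented for illustration): the text "(1 + 2) * 3" is just one standard serialization of a small tree, and a few lines of code can walk the tree back out into that text:

#include <stdio.h>

typedef struct Node {
    char op;                 /* '+' or '*' for inner nodes, 0 for leaves */
    int value;               /* used only when op == 0 */
    struct Node *left, *right;
} Node;

/* Serialize the tree as fully parenthesized infix text. */
void print_infix(const Node *n) {
    if (n->op == 0) { printf("%d", n->value); return; }
    printf("(");
    print_infix(n->left);
    printf(" %c ", n->op);
    print_infix(n->right);
    printf(")");
}

int main(void) {
    Node one = {0, 1, NULL, NULL}, two = {0, 2, NULL, NULL}, three = {0, 3, NULL, NULL};
    Node sum  = {'+', 0, &one, &two};
    Node prod = {'*', 0, &sum, &three};
    print_infix(&prod);      /* prints ((1 + 2) * 3) */
    printf("\n");
    return 0;
}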
Noobs bitching..... (Score:3)
If you really don't like how many things you have to use for programming, then switch to a simpler language.
Assembly has the smallest command set there is. Start there.
The Best is Both (Score:3)
"Shouldn't there be a simpler, more robust way to translate an algorithm into something a computer can understand?"
Actually text is the simplest and most robust way.
But the thing is, the best idea would be to have both, and be able to seamlessly switch between them.
A while back I had a system close enough to that that you could see the benefits - it was Java; you could easily generate class diagrams and alter some things about the code while in there.
Some things about code are just way easier to see as text, and some things are way easier to see visually... we should use each medium for the strengths it has and not abandon one for the other.
simple (Score:3)
For the same reason we still write text-based news articles, textbooks, letters, novels, recipes, screen plays, diplomatic cables, and other stuff: it works better than the alternatives.
You think in words (Score:3)
It's possible that we're still using text to represent our ideas because we think in words. Right now my two-year-old son is behind the curve in talking. His mouth doesn't seem to have any physical limitations in making the necessary sounds for English speaking, but he rarely attempts any words anyway. After lots of observation, I'm currently thinking that the problem isn't that he can't say words, it's that he isn't thinking words. Some concepts are represented as words, and he can say those words (e.g. ball, shoe, Super Mario), but the concepts that don't have words yet in his head are what's preventing him from speaking more.
So the reason I think we code in text is that we think in words, which map really well to text.
Rephrase (Score:4, Insightful)
Please rephrase your question in the form of a picture.
Or, if you prefer, interpretive dance.
As you contemplate that task, you will learn the answer to your question.
Re: (Score:3)
IMHO that's just historic, mostly. EEs have been designing circuits with structural complexity at least as great as any software program, using graphical tools, all along. Early on (after the plugboard era) computers didn't have the horsepower or graphic capability to do software CAD, and so programmers got started using prose of necessity. It's quite possible that as a result, programmers have tended to be non-graphical people (viz. the folks that hate using X-windows, or all that "GUI crap".) Now gett
Re:Power. (Score:5, Informative)
As an EE, I call complete bullshit on this. Other than simplistic circuits, most modern electronics design is done just as software is: textually. Ever heard of Verilog and VHDL? These have largely replaced schematics. You can't have a schematic for a modern complex IC. The schematic would cover a fairly large state, like Florida, for something like a SPARC T5 or Core i7. There are so many pathways that even labeling the lines would be problematic. The days of something like CS0 are over. VHDL supports structured naming. These days netlists for complex chips are huge.
The tools and techniques for IC design have changed to a more textual mechanism precisely because text is better at dealing with complex abstractions. Please don't tell us what EEs do if you aren't an EE.
Re:Power. (Score:4, Insightful)
EEs have been designing circuits with structural complexity at least as great as any software program, using graphical tools, all along.
The most complex circuits (in ICs) are synthesized from text-based HDLs or automatically generated by software tools. Schematics are common at the board level, but that's nowhere near the complexity of even a medium-sized software program. And of course all functionality is explained through text-based documentation.
Text is better for expressing abstract concepts. Graphics are better for expressing concrete concepts. If you try to use graphics for abstract concepts, you end up adding a lot of text anyway -- flowcharts, for example.
Re:There is programming without code. (Score:5, Interesting)
Actually, you can't tell your car to start by turning a key. Turning the key starts a complex series of interactions between the fuel pump, fuel-injection and air intake systems, ignition system and starter motor to get the engine turning over, and then manages disengaging the starter motor and switching from battery to alternator power once the motor's running, until you let the key go from Start back to Run. You can ignore all that and talk only in terms of the key being turned if all you want to do is drive the car, but if you need to, say, diagnose why the car won't start, or figure out why it's running rough and has no power, you need to delve into the complexity behind just turning the key. You can no longer ignore it and abstract it away. That's the key: not whether it's code or a mechanical system, but the degree of abstraction involved. Most programming languages are seen as complex because they dive below the level of "start the car" and work at the level of "OK, how exactly do I design the drivetrain of the starter motor so it'll rotate the engine crankshaft until the crankshaft starts turning faster than the starter motor, then automatically and instantly disengage so the engine won't strip the starter motor by trying to turn it faster than it's safe to?" (and that's just one small part of what's needed to make turning the key start the car, and not even the most complicated part).
Re: I know this one... (Score:5, Insightful)
It's been my experience over the last 25 or so years that, to the corporate apes in charge, anything they don't understand is easy.
That's still limited (Score:5, Interesting)
to what the programmer of the programming tool envisioned.
I think the story OP should learn some Lisp. Seriously. Just to grok it.
Part of the frustration I had with many programming languages is feeling I was trying to build castles with toothpicks. If I moved the wrong way or wasn't utterly careful, the structure would fall.
Maybe the OP feels the same way since he is talking about feeling a single level away from assembly.
Likewise, the most powerful editors (emacs and beyond) require the command line. That's just how it is. If you want fast-to-learn, then you absolutely want a pretty GUI and all that nonsense, but a user will hit his head on a low ceiling if he's a fast learner, because GUIs just don't do anything but the small tasks envisioned for them. OTOH, the command line is hard to learn but has a much higher ceiling.
Put another way, text is abstraction. I say cat, you the reader know roughly what I'm talking about. I didn't have to describe a small furry 4 legged animal. Now if I did a graphical representation of it, I would be limited to the parameters I gave the original cat - fur (there are furless cats, like the Sphynx), legs (some cats are missing legs), tails (the Manx). How would a graphical representation take that into account? Through clunkiness, if at all.
It's kind of like the difference between an alphabet and a logographic system like kanji.
Kanji seem like an awesome idea at first. You make a picture of the sun, and voila, you have the sun! And then a picture of the moon, and you have that idea. Moonlight? Combine the kanji for moon with the kanji for light and you probably have moonlight!
Awesome, right? Yeah, it's just fucking great until you realize you have to start making 30 strokes for one word, that small pics start looking like each other, and that unless you know that very specific kanji, you have no clue how to write it out. And unlike the English alphabet, which has 26 letters and, once you learn them and their combinations, lets you sound out most words, you have to memorize thousands of kanji and even more kanji combinations or you'll get hung up reading a high-school-level newspaper.
I view the CLI as being like the alphabet and GUIs, inevitably, as being like kanji. One is more awesome than the other in theory, but in practice...
Re:That's still limited (Score:5, Insightful)
I say cat, you the reader know roughly what I'm talking about.
Right. A unix command utility that concatenates its inputs as its output.
I didn't have to describe a small furry 4 legged animal.
Oooooooh .... riiiiiight. In all seriousness, if I saw the word cat without context in nearly any setting I'd have been right with you on a furry critter... but here on /., especially given you'd mentioned the CLI and GUI, well, my brain was primed for the other cat.
Re:That's still limited (Score:5, Funny)
Kanji seem like an awesome idea at first. You make a picture of the sun, and voila, you have the sun! And then a picture of the moon, and you have that idea. Moonlight? Combine the kanji for moon with the kanji for light and you probably have moonlight!
Awesome, right? Yeah, it's just fucking great until you realize you have to start making 30 strokes for one word, that small pics start looking like each other, and that unless you know that very specific kanji, you have no clue how to write it out. And unlike the English alphabet, which has 26 letters and, once you learn them and their combinations, lets you sound out most words, you have to memorize thousands of kanji and even more kanji combinations or you'll get hung up reading a high-school-level newspaper.
You forgot about kanji dictionaries. Want to look something up? Just count the strokes. It's easy, as long as you've spent years practicing forming the strokes in the first place. If not, it's a bit of a wild guess what constitutes one or two or three strokes. But hey, once you've counted the strokes, you just have to jump to the section of the dictionary for characters with that many strokes, then look through a few thousand characters until you see one that matches. Piece of cake!
Re: (Score:3)
I've not done much/any C/C++/Java stuff other than compiling source written by other folks in the past 14 years or so, so maybe the current crop of IDEs do this.
iOS/Mac/NeXT programming has worked that way for about 25 years. You build and configure a series of objects with a GUI, and when the application launches all of those objects are loaded into RAM, then a "loaded" event is fired on each of them. From there your code will typically create more objects and set up things that can't be done in the GUI.
Objects created in the GUI are tightly linked to your code; the GUI for building the objects understands how to parse and partially compile code (even half-written code).
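Roughly, from the code side, it looks like this - a minimal Swift/AppKit sketch, where the class and outlet names are made up for illustration. The text field is laid out entirely in Interface Builder, and awakeFromNib is the "loaded" event described above:

    import Cocoa

    // Minimal sketch: the window and the text field are built entirely in
    // Interface Builder; this class exists to receive the "loaded" event.
    class GreetingController: NSObject {
        // Connected to a text field laid out in the GUI (hypothetical outlet).
        @IBOutlet var greetingField: NSTextField!

        // Fired once the object graph from the nib has been loaded into memory.
        override func awakeFromNib() {
            super.awakeFromNib()
            // From here on, code takes over for whatever the GUI can't express.
            greetingField.stringValue = "Loaded from the nib"
        }
    }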
Re: (Score:3)
Just off the top of my head...
In "Hollow Pursuits", Barclay used it for purposes that we all know everyone would, if the technology existed in real life.
In "Booby Trap", Geordi used it to simulate the engineer who developed the Enterprise engines and discuss certain things with her, and as I recall it functioned completely correctly in that episode.
In "The Outrageous Okona" Data was trying to learn how to tell jokes from a holodeck stand-up comic, and the holodeck functioned perfectly (too perfectly,
Re: (Score:3)
then you need variable assignment, conditionals, and loops to write a new box. And then all of a sudden you are back to writing text code
Have you ever used LabVIEW?
A custom block just links to another diagram on which you can place more stude
Conditionals and loops are represented by large, adjustable-size boxes in which you can place other boxes.
Variables aren't used much; if you really need them they are there, but for the most part you just use "wires" and "shift registers" (a "shift register" takes an output from one iteration of a loop and feeds it into the next one, with the ability to feed in at the start of the loop and feed out at the end).
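For anyone who hasn't used LabVIEW, a shift register is roughly the graphical equivalent of a loop-carried accumulator in text code. A minimal Swift sketch of the same idea (the function and variable names are made up):

    // Rough text-code analogue of a LabVIEW shift register: the accumulator
    // carries a value from one loop iteration to the next.
    func sumOfSquares(_ values: [Double]) -> Double {
        var register = 0.0       // "feed in" at the start of the loop
        for v in values {
            register += v * v    // each iteration reads the last value, writes the next
        }
        return register          // "feed out" at the end
    }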
Re:I think IBM is working on it (Score:5, Insightful)
The hard part is clearly, unambiguously describing the solution to the problem at hand. English is a crappy language for that (legalese and standardese are harder to read than code). The easy part is expressing your clear thoughts in a formal language. Seriously, if you can't get past the fact that you need a formal language, you'll never be writing non-trivial programs - you've high-centered on the easy part and haven't even gotten to the hard part.
There's one tried and true way to create a computer program to solve your problem without learning to code: hire a programmer. Even then, you'll likely discover that you lack the ability to even explain the problem clearly and unambiguously.
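To make the ambiguity point concrete: an English spec like "round the price to the nearest dollar" doesn't say what happens to $2.50, while a formal language forces you to decide. A tiny Swift illustration:

    // English leaves the halfway case ambiguous; code cannot.
    let price = 2.5
    let awayFromZero = price.rounded(.toNearestOrAwayFromZero)  // 3.0
    let bankers = price.rounded(.toNearestOrEven)               // 2.0 ("banker's rounding")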
Re:I think IBM is working on it (Score:5, Funny)
</b>
Re:I think IBM is working on it (Score:5, Funny)
I think I spotted your initial error. Would you like cheese with that wine?
Re:I think IBM is working on it (Score:5, Informative)
I'd also throw in that we're still writing text-based messages - even though competing pictogram systems have been developed, none seem to have caught on well enough to displace the written word, composed of characters from a relatively small alphabet.
I made a LabVIEW-like graphical system for compiling algorithms into parallel processors as part of my Master's thesis. It used a schematic capture program to build the hierarchical graphical "data flow" - but at their core, the basic modules were still short text-based programs.
Some "programmers" might operate at an entirely graphical connect the lego blocks level, but sooner or later it comes down to 1s and 0s, and somewhere along that chain (several somewheres in modern practice) it will be represented in text based languages.
I think the graphics-based systems haven't taken over for the heavy lifting because they're all too specialized, my thesis included. Most "real jobs" need more flexibility than the non-text-based systems can provide. Having said that, I now do 100% graphical UI design; even though the tools just translate to .xml for me, I almost never "get my hands dirty" in the text-based code for my UIs anymore.
This Ask Slashdot must be from the /. Beta Team! (Score:5, Insightful)
Oh, I get it! This question for Ask Slashdot must come from the Slashdot beta team.
Now, I understand that as a Slashdot beta developer you don't know how to program. We can all see that.
Web site development is more difficult than the programs you are used to, where you drag a picture of a shape onto another picture of the same shape, or where a large colored shape is presented and you click on the corresponding color image.
All of that "cryptic jargon" is important to computers. Just like all that "cryptic jargon" in legal agreements is important to judges.
Since you must be on the Slashdot beta development team, I'll point out that people sometimes don't like it when you make changes. Try some of these:
* Go to the Louvre with a paintbrush and some oil paints. Attempt to fix the eyebrows on the Mona Lisa, because they have faded off. Tell me how people like your slight changes.
* Go to Santa Maria delle Grazie in Milan and slightly modify da Vinci's Last Supper. Maybe stand the salt cellar back up and paint over some of the damage that was done after people cut an arch through the wall for a doorway, or after the WW2 bombing. See how well people respond.
* Pay a visit to the Sistine Chapel; that ceiling has lots of cracks in it. Tell me what happens after you climb up there with your bucket of plaster to fix the cracks.
* The White House lawn looks nice, but it could be changed to allow more foot traffic. Tell me what happens when you take your backhoe up to the presidential mansion and begin excavating for new footpaths.
Any change, no matter how tiny, has the potential to destroy the essence of the item. You got that, Slashdot beta team?
Re:This Ask Slashdot must be from the /. Beta Team (Score:5, Insightful)
The art analogy is definitely wrong for Slashdot.
A better analogy is written language. For instance, I could write the sentence "Today I went to my friend's house." Or I could convey the same message graphically: an icon to represent myself, another icon to represent my friend, with my friend's icon placed next to an icon of his house and an "ownership" relationship connecting them. Then I could draw a vector from my icon to the icon representing my friend's house, place a small calendar on that vector with today's date, and add yet another graphical feature to indicate that this is all past tense.
Would the graphical representation be faster to create? Of course not. Would it be easier to understand? Only for someone who does not understand written English. The graphical representation of algorithms is no different. It is far faster to just write textual code, and it is more understandable to people who actually understand the programming language. This is why "graphical coding" is almost always proposed by people who are not programmers (such as the submitter), just like "graphical English" would only be proposed by people who don't understand written English.
Re:I think IBM is working on it (Score:5, Insightful)
What does that have to do with BETA?
It is because both BETA and "graphical coding" are abominations. The people pushing "graphical coding" are usually PHBs or other "non-programmers" (such as the submitter). I have never met a programmer who has used GC and prefers it over just writing code.
Note to submitter: Before you try to "fix" a profession, try learning enough to understand it first. The first thing you need to understand is that you are recycling a thirty-year-old idea that has been tried, and has failed, many, many times.
Re:I think IBM is working on it (Score:5, Insightful)
I do recall some attempts at "graphical coding", where a function was an icon that you could drag into your code, and other such nonsense.
Wikipedia has a whole list of them. [wikipedia.org]
Thankfully, most never really took off, except for the WYSIWYG HTML editors. I still hate it when people make their entire site in a WYSIWYG editor and then ask me to go make "simple" changes. Sure: 3 hours to reformat the HTML itself and strip out extraneous crap, and 5 minutes to make the change.
I don't mind charging for the time, but I hate doing the work. Sometimes I'm actually stunned by how much crap can be shoved into code that does absolutely nothing.
It happens in real coding too. I've found thousands of lines of unused functions, or even
Comment removed (Score:4, Funny)
Re: (Score:3)
Re:It's who we all are (Score:4, Insightful)
Text is our most efficient method of encoding language, however.
The only language-agnostic way to program is to do it directly in binary or hex, and the only reason that is language-agnostic is that Arabic numerals, unlike the Latin alphabet, are ideographic.
A "visual" system where you point and drool and get code generated for you *still has to generate some code* in order to work, whether directly in binary or through some higher-level intermediary language then fed to a compiler or an interpreter. If it's done right it might be a very useful tool but it's never going to change the fact that the only thing a computer can do is perform operations on numbers and push them back and forth across the bus.
Re: (Score:3)
Don't fall prey to these trendy attempts to get you away from truly understanding what you are doing when programming. Write to the bare metal. Ada did.