Ruby: Is it Prettier than Perl?

Kailden asks: "I've run across several references to Ruby, a scripting language that claims to be a hybrid of Perl and Python. Supposedly, this language has taken Japan by storm. I'm looking for Slashdot's verdict before jumping in. Has anyone outside the Ruby site used this language? What advantages/disadvantages have you found?"
  • by drix ( 4602 ) on Monday May 08, 2000 @10:47AM (#1084131) Homepage
    I'm not too shocked. It's not really a big secret that Perl OO is pretty ugly as it stands now; it's much like C++ is to C in that OO support was kind of welded onto the side as an afterthought. Someone looking to satiate the PHBs of the world who think OOP is the greatest thing since sliced bread had to do it sooner or later. But whether it's a good idea or not remains to be seen.

    My issue with Ruby is this: OOP just doesn't work for a lot of Unix programming situations. It's not a coincidence that 95% of all Unix apps are still written in C, despite its successor having been around for 20+ years. Procedural programming is a lot less convoluted, requires a lot less time, and, despite what a lot of people claim, for me it's more intuitive than OOP.

    Object orientation was really designed for massive, distributed software projects, where you aren't going to be familiar with every variable, and aren't going to know what should or should not be touched. And the paradigm really does lend itself well to GUI environments, where you can treat windows and such as objects. For that stuff, OOP works. Almost all of today's really big software and GUI apps are written in C++.

    But Perl doesn't power too many really big projects, and it certainly isn't going to be used to write Gnome or KDE apps anytime soon. It's mainly used for parsing text. Does the "Practical Extraction and Report Language" really need to treat all the text it's slicing through as objects? I don't think so. Perl is the quickest and dirtiest language of them all. Only in Perl can I do something like
    #!/usr/bin/perl

    use Time::CTime;
    print(ctime(time));  # ctime() expects an epoch time, not localtime()'s list

    and have a prayer of it compiling without any warnings or errors. Almost every Perl app I have ever written or seen has been for really simple things - e-mailing form data to me, pruning my directories, etc. In such cases, OOP is complete overkill. Off the top of my head, I can't think of a Perl app that could really use OOP. In some of the larger CGI apps that I have written, e.g. a shopping cart w/CC verification & inventory control and lots of other goodies, I gave Perl OO a try and didn't like it at all. Maybe I'm a poor OO programmer? Dunno. But I found it to be too much work for a language whose motto is TMTOWTDI.

    The authors of Ruby want to replace Perl. It's in the FAQ: "matz hopes that Ruby will be a replacement for Perl ;-)" I'm not sure this is a good idea. Perl got to where it is today precisely because it is ultrasimple. Why muddle it with lots of OOP complexity, no matter how elegant it may be?
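
    For a sense of what native OO buys syntactically, here is a minimal, hypothetical Ruby class (names invented for illustration); the Perl 5 equivalent needs a package, a constructor that blesses a hash reference, and hand-written accessors:

    # A hypothetical sketch: a tiny class in Ruby, where OO is native
    # rather than bolted on.
    class Greeter
      def initialize(name)
        @name = name          # instance variables need no declaration
      end

      def greet
        "Hello, #{@name}!"
      end
    end

    puts Greeter.new("world").greet   # prints "Hello, world!"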



  • by Anonymous Coward
    Wow. It's hard to follow up an excellent comment like the first one posted.

    I have to admit I've hardly ever used Perl. I started learning it once, but like so many of my projects, it was interrupted by Real Work, and never really finished.

    What I have used fairly extensively is Python. I've used it for web development and general personal scripting, and I've found it quite decent. Where it differs from the Perl I've used, however, is its syntax: it's much more stringent about producing readable code. What it comes down to is asking yourself: do I have time to waste writing this code, and will I ever have to look at it again? If the answer to both is no, use Perl.

    I wish I had looked at Perl some more, so I could make some more educated judgements. I realize I haven't even touched on Ruby yet. :)

    I've just briefly glanced at Ruby. I have found that roughly equivalent scripts generally run (slightly) faster in Ruby. I'm not sure why... I've heard that Ruby has better garbage collection, but maybe it's just the design of the interpreter.

    All three languages look at least passable. But there are tons of considerations when choosing which language to use. How fast can I code in it? How easy is it to extend an existing project? Can I find people who know the language to help me? Is it truly cross-platform? Do its basic libraries let me take the grunt-work out of my coding, leaving me to focus on my objective? Are there additional modules available (for example, XML parsing, cross-platform GUIs)? Will my target audience (end-user/server) likely already have an interpreter installed?

    etc., etc.

    Really, I doubt you can go wrong with any of these languages if you're not looking to create huge projects. Good luck.

  • But Perl doesn't power too many really big projects, and it certainly isn't going to be used to write Gnome or KDE apps anytime soon.

    May I introduce George [sourceforge.net], a Gnome app written in Perl...

  • Not everyone likes OO, but much of its muddy reputation results directly from "Frankenstein" languages like Java and C++, where OO was grafted on almost as an afterthought.

    Objective C is (arguably) more appealing as a compiled OO language. For "real" OO, one must go to high-level languages like Smalltalk and Lisp. Of course, even fewer people seem to like these languages than do C++. The ones who do like them generally like them a lot.

    We have yet to see the emergence of an OO language that those weaned on procedural languages will easily feel comfortable with (myself included). Smalltalk comes very close, but it's very difficult to translate Smalltalk's advantages into a worthy compiler, because typing is dynamic. Someone once made a "typed Smalltalk" which could probably be compiled, but it would probably break a lot of favored Smalltalk idioms.
  • For "real" OO, one must go to high-level languages like Smalltalk and Lisp

    The above comment just defeated any hopes of you being taken seriously by anyone with a clue. Lisp is a functional programming language [bucknell.edu] and not an object oriented language. Functional languages emphasize a lack of state (no global variables or assignment operations) and referential transparency (functions always do the same thing if passed the same parameters), which is in conflict with Object Oriented programming concepts. The chances of Lisp being mistaken for an object oriented language by anyone who actually knows about Object Oriented Programming are zero.
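
    To make the distinction concrete, a small hypothetical sketch in Ruby terms: the pure function depends only on its argument, while the stateful one answers differently on each call.

    def pure_double(x)
      x * 2           # referentially transparent: same input, same output
    end

    $counter = 0
    def impure_next
      $counter += 1   # hidden global state
    end

    pure_double(3)    # => 6, every time
    impure_next       # => 1
    impure_next       # => 2: same call, different result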

    Personally I hate Smalltalk. As the first post to this article indicated, OOP's forte is solving large-scale problems, primarily because it turns out that C becomes very difficult to maintain past 50,000 to 100,000 lines of code. The typelessness of Smalltalk (as exemplified by Squeak [squeak.org]) makes one rely on the documentation practices of other programmers way too much. Typelessness means that if one does not choose variable names that are highly indicative of the type and purpose of an object, as well as comment the code properly, it will be difficult for maintainers to update the code. In programs with a high degree of coupling this can be extremely aggravating. Several times while trying to write applications in Squeak I hit my head against a brick wall trying to find out what types a function accepted or returned simply by looking at its code. Sometimes it would take looking through methods in 4 to 6 classes before I could figure out exactly what type had been passed to a function and what it returned, and of course by then I would have forgotten why I was looking in the first place. Imagine reading a man page with all the types Xed out. AAAARGH.
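
    The complaint restated as a hypothetical Ruby method (the interface is invented for illustration): nothing in the signature says what conn must be, so a maintainer has to read the body, or the docs, to learn that it needs a #query method whose result responds to #each_row.

    # Hypothetical: what type is `conn`? The signature won't tell you.
    def report(conn)
      conn.query("select * from vehicles").each_row do |row|
        puts row.inspect
      end
    end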

  • Object orientation was really designed for massive, distributed software projects...

    (Disclaimer: I find OO concepts and languages very appealing. I program in Java. I sometimes think longingly of Smalltalk. But I also like Perl.)

    From what I have seen, overuse of OO concepts has a tendency to take a project that would have been small and simple and *make* it into a "massive software project", sometimes just by the sheer heft of the source code. Java is particularly egregious:

    * A member variable ends up needing two accessor methods if you follow the patterns. This further boosts code length because each reference to an accessor is wordy. (For an example of how to handle this MUCH more smoothly, look at Delphi's object "properties", or see the Ruby sketch after this list.)

    * People end up defining a class or an interface to hold a list of Integer constants... misusing an OO concept to make up for the language not having enumerated types.

    * Overapplication of design patterns results in layers upon layers of classes which do very little actual work, which in addition to excessive code length makes the code very hard to follow.

    * I'm sure the anti-Java-zealots among us could supply more examples.
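
    For contrast with the accessor point above, a small hypothetical Ruby sketch: attr_accessor generates the reader/writer pair in one line.

    class Account
      attr_accessor :balance   # defines both #balance and #balance=

      def initialize(balance)
        @balance = balance
      end
    end

    a = Account.new(100)
    a.balance += 50    # reads and writes go through the generated accessors
    a.balance          # => 150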

  • You want objects? Here is a simple point object (from an inexperienced (and sleepy) schemer):

    (define point
      (lambda (init)
        (let ((contents init))
          (lambda msg
            (case (car msg)
              ((show) contents)
              ((distance)
               (sqrt (+ (expt (- (car contents) (caadr msg)) 2)
                        (expt (- (cdr contents) (cdadr msg)) 2))))
              ((set) (set! contents (cadr msg))))))))

    (define origin (point '(0 . 0)))
    (define end (point '(2 . 0)))

    (display (origin 'distance (end 'show))) (newline)
    (end 'set '(3 . 5))
    (display (origin 'distance (end 'show))) (newline)

    Of course, there are a number of packages available to do OO in scheme in a manner much better than the above...

    ^Z
  • I suggest you do some research into Multiple Dispatch. It was invented to solve the exact problem you're talking about. The best-known systems which implement it are CLOS, Cecil, and Dylan. There are systems for other languages (Perl!) which can simulate multiple dispatch also.
    The basic idea is that instead of choosing which procedure to execute based solely on the type of a single object, the choice is based on the type of several objects. For example (in Dylan):

    define generic inspect-vehicle( v :: <vehicle>, i :: <inspector> ) => ();

    define method inspect-vehicle( v :: <vehicle>, i :: <inspector> ) => ();
      look-for-rust( v );
    end;

    define method inspect-vehicle( car :: <car>, i :: <inspector> ) => ();
      next-method( ); // perform vehicle inspection
      check-seat-belts( car );
    end;

    define method inspect-vehicle( truck :: <truck>, i :: <inspector> ) => ();
      next-method( ); // perform vehicle inspection
      check-cargo-attachments( truck );
    end;

    define method inspect-vehicle( car :: <car>, i :: <state-inspector> ) => ();
      next-method( ); // perform car inspection
      check-insurance( car );
    end;

    This example was taken from http://www.tpk.net/~ekidd/dylan/multiple-dispatch.html
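
    As a rough illustration of simulating this in a single-dispatch language, here is a hypothetical Ruby sketch using a table keyed on the classes of both arguments (it omits the next-method chaining the Dylan version shows):

    class Vehicle; end
    class Car < Vehicle; end
    class Truck < Vehicle; end
    class Inspector; end
    class StateInspector < Inspector; end

    DISPATCH = {
      [Vehicle, Inspector]  => lambda {|v, i| puts "look for rust" },
      [Car, Inspector]      => lambda {|v, i| puts "check seat belts" },
      [Truck, Inspector]    => lambda {|v, i| puts "check cargo attachments" },
      [Car, StateInspector] => lambda {|v, i| puts "check insurance" },
    }

    def inspect_vehicle(v, i)
      # Walk both ancestor chains from most to least specific.
      v.class.ancestors.each do |vc|
        i.class.ancestors.each do |ic|
          fn = DISPATCH[[vc, ic]]
          return fn.call(v, i) if fn
        end
      end
      raise "no applicable method"
    end

    inspect_vehicle(Car.new, StateInspector.new)  # => check insurance
    inspect_vehicle(Truck.new, Inspector.new)     # => check cargo attachments
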
  • by baldur ( 36963 ) on Monday May 08, 2000 @11:42PM (#1084139)

    I'm just starting to try Ruby out, and I haven't used it for anything big or important yet, but it seems to me that its main advantage is being rather more readable and probably more maintainable than Perl (which I still haven't stopped loving anyway...). Here is a small sample, lifted directly from the cgi.rb module:


    def CGI::parse(query)
      params = Hash.new([])
      query.split(/[&;]/n).each do |pair|
        key, value = pair.split('=', 2).collect {|v| CGI::unescape(v) }
        if params.has_key?(key)
          params[key].push(value)
        else
          params[key] = [value]
        end
      end
      params
    end

    Now, a more or less "literal" translation into Perl would look like this:


    sub CGI::parse {
        my $query = shift;
        my %params;
        foreach my $pair (split(/[&;]/, $query)) {
            my ($key, $value) = map { CGI::unescape($_) } split(/=/, $pair, 2);
            if (defined($params{$key})) {
                push @{$params{$key}}, $value;
            } else {
                $params{$key} = [$value];
            }
        }
        return %params;
    }

    Despite superficial differences, you can tell from this example that the strongest influence on Ruby has been Perl. The two versions are essentially the same. Someone with a background in Perl (like myself) has a much easier time learning Ruby than, for instance, Python.

    What I like about Ruby:

    • Less punctuation than Perl and hence more readable. (This applies to braces, parentheses and semicolons, and maybe also the Perl vartype-symbols $, @ and % - though I'm of two minds about those, as I also think they contribute to clarity in most cases).
    • You don't have to use local or my to get local/lexical variables. Variables in Ruby are local (not lexical) by default. (When I'm writing Perl, about 95% of the variables I use are lexical. That's a lot of my's!)
    • The way you can easily string methods after each other using dot notation: variable.method1.method2(/regex/).method3
    • The way you can easily modify a string in place rather than return a new value, using "!", as in str.gsub!(/\"/n, '&quot;'). (Although this particular example would be a bit more compact in Perl: $str=~s/\"/&quot;/g). Both features are sketched just below this list.
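
    A small sketch of those two features (strings and values invented for illustration):

    line = "  hello,   ruby  world  "
    puts line.strip.squeeze(" ").split(/\s/).length   # chained calls => 3

    s = %q{say "hi"}
    s.gsub!(/"/, '&quot;')   # mutates s in place instead of returning a copy
    puts s                   # => say &quot;hi&quot;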

    What I don't like about Ruby:

    • The iterator syntax (array.each {|v| do_something(v) }), which admittedly is quite logical but for some reason gets on my nerves.
    • No use strict.
    • No CPAN (though The Ruby Application Archive [ruby-lang.org] is a good start).
    • No poetry... at least not yet.

    So, on the whole I think Ruby is quite nice. I'll follow its further development with great interest. But for now, I'm too attached to Perl to make the switch.

  • A few misconceptions:
    • Objective-C is as much a "grafted" OO language as C++. Admittedly C++ has lots of other things which have nothing to do with OO.
    • Java is not a language with OO "grafted" on to it. Just try writing a program without using a class. Try writing a program that does anything useful without using an object.;-)
    • Lisp is not an OO language, it's a functional language. CLOS is, but of course, that's a case of grafting.
    • There is nothing that's inherently wrong with using a VM to execute a program. Indeed, Transmeta's approach suggests it's even of benefit to the hardware guys.
    • Sun's Hotspot VM is based on a project that did type inferencing to improve performance. The results were much more impressive for Smalltalk, and were quite easy to achieve.
    • The GNOME project uses a CORBA ORB to bundle up bits of code into objects. Given that most of the GNOME team is writing code in C, they seem to be demonstrating a fair degree of comfort with objects while being procedural programmers.
    OK, now that we've cleaned up all that stuff, a few comments. First off, there is no question that OO programming is a different way of thinking than procedural programming. That being said, most good procedural programming follows basic OO techniques (e.g. in C this is done using structures which include pointers to functions). Indeed, the Linux kernel has tons of examples of this. So it's really just that OO programming is structured to leverage good design skills. Indeed, once you have some experience doing OO, most skilled developers find it much faster to put together a prototype with, say, Smalltalk than with a non-OO approach.
    If I can program in a certain paradigm in a certain language, then that language supports that paradigm. Scheme is not a pure OO language (neither is Java), but it facilitates programming in OO style.

    ^Z
    Um... what have you just proved? Simply because you can use hacks and kludges to simulate objects in Scheme does not make it an OO language. The fact that I can write recursive C code without assignments or global variables does not make C a functional programming language. Neither does the fact that I can create structs with function pointers make C an OO language.

  • Actually, parametric polymorphism handles this as well. A good example of this would be ML.
    Having the ability to use a language's meta-object protocol to make the language behave differently is not an indication that the language follows a particular philosophy. Perl is actually a great example of this. Are SCOS and CLOS really Scheme and Lisp, or are they languages in their own right?

    Secondly, I think you misunderstand the notions of OOP. OOP doesn't necessarily involve message passing. Certainly, that is how Smalltalk works, but it's not a requirement for OOP. Even in Smalltalk, all messages have a "receiver", which is a single object. Smalltalk code typically uses a mechanism called "ping-pong" to allow two objects to "talk amongst yourselves" as you describe.

    In C++ you could define the behavior of the "==" operator outside of either class, and that would be basically what you're talking about. It sounds to me like in this case you really want to use a "function object" to implement your "equals" behavior. This is entirely possible in almost every OO language I can think of. Sure, the syntactic "==" is not implemented this way, but you can create your own Equals class which handles this.
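
    For instance, a minimal hypothetical Ruby sketch of such a function object, with the comparison living outside either operand:

    # Neither string "owns" the definition of equality here.
    CaseInsensitiveEq = lambda {|a, b| a.downcase == b.downcase }

    CaseInsensitiveEq.call("Ruby", "rUBY")   # => true
    CaseInsensitiveEq.call("Ruby", "Perl")   # => false
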
  • I find that Java forces me to think about structure when I program. Sure, it means you can't rattle off a quickie program, but sometimes quickie programs become "mission critical" and turn into big hulking mounds of spaghetti. Java forces you to think through the design up front: what data do I need? Who should have access to it? Where should I leave room for extensions?

    Case in point: I wrote a software program that decodes satellite data and parses the output. I was able to add a whole host of features by touching a single class for each case: support for multiple data formats in one file, extensible support for any future data format, cleaner file handling routines, new ways of gathering data. I never had to go back and rework major portions of code. This assumes you design the program well to begin with :)

    Another feature that goes untouted in Java is error handling. The "try{} catch()" method of trapping errors is pure genius. I can effectively trap and collect any errors and find out where they occurred from the stack trace. Because all of the errors are caught in one place (the catch statements), I don't have to clutter my code with error checking. EOF? Don't worry about it; if it happens I can catch it later!
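
    For comparison, the same trapping pattern sketched in Ruby's begin/rescue (file name and format invented for illustration):

    begin
      data = File.read("telemetry.dat")
      records = data.lines.map {|l| l.split(",") }
      puts "decoded #{records.size} records"
    rescue Errno::ENOENT, IOError => e
      # all I/O failures in the block funnel to this one handler
      warn "could not read input: #{e.message}"
    end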

    Lastly, having a main() function for _every_ class makes testing a piece of cake. If I create a main() function that tests the class, and it passes, I know I never have to debug that class and I can use its interface without worry. If something goes wrong in a class I can go back to the main function and play around with it until it works.

    Oh yeah, and I need not deal with pointers or memory management :)
  • what have you just proved? Simply because you can use hacks and kludges to simulate objects in Scheme

    I didn't read his actual code, so maybe it was a hack or a kludge, but in Scheme you can add features to the language because Scheme can manipulate itself -- not just manipulate code in the language, but also manipulate the language. Any object oriented feature you want to add, you can add, and it is added in a first-class way. So your only remaining objection should be worded as "Scheme allows you to program in other paradigms in addition to the narrow confines of OOP." I'm not saying that narrow confines are bad, but they are narrow.

    I like many things about OOP, but every implementation I've been exposed to (and thus far that's not Smalltalk) has seemed like a huge compromise. Here's an example of a beef I have: if you want to compare two objects for equality, you don't pass "messages" as is the claim... you pass one object to the other object, thus giving one object primacy over the definition of equality. To me, str1.equals(str2) is a highly unsatisfactory way to test equality. If you want an OOP language you can brag about, the code should read something like: "equality: talk amongst yourselves"... of course, with Scheme, you could do it this way :)

  • Another feature that goes untouted in Java is error handling. The "try{} catch()" method of trapping errors is pure genius.

    It's great(!) that you have seen the pure genius, so hopefully you will be cheered to hear that there's more genius there than (sadly) the Java implementors realized: exception handling is great for errors, but it is also great for other exceptional values. Let me just cut to the chase. How many times have you written code that declares a "temporary" variable simply because you need to test a return value and save it? Think of functions like "find this character in a string". It is completely wrong to return -1 for "not found", as Java does, if you have throw available to you. Using throw, you can simply use the character-index return value in the normal flow of your code, and then catch the exceptional condition separately. For people who like so-called "strongly typed" languages, this is the only strongly typed way to do it. If you think about it, that -1 return value is actually a value of a different type. Once you start coding this way, you can never go back, but try it in Java and you discover (after reimplementing all of the cheesy libraries Java provides) that "try" is completely superfluous. Every line or block of code should be prepared to catch; all those manually inserted trys just add noise.
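
    Sketched in Ruby with a hypothetical helper, the idea looks like this: raise for the exceptional case so the normal return value keeps a single type, instead of smuggling a -1 (or nil) sentinel through it.

    def index_of!(str, ch)
      i = str.index(ch)
      raise ArgumentError, "#{ch.inspect} not found" if i.nil?
      i                             # always an Integer on the normal path
    end

    pos = index_of!("hello", "l")   # => 2, usable directly in the normal flow
    index_of!("hello", "z")         # raises rather than returning -1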

    This concept, BTW, was taught to me in a CS class 20 years ago by people who had been thinking about it longer than that (love ya, Professor Barbara Liskov), and it is a good illustration of the value of studying CS in an academic setting rather than teaching yourself a kludge like Perl. Don't get me wrong: teaching yourself C and Perl is better than not doing so, but it can be hard for people to see the value of a CS degree before getting one. "Type correctness" is an abstract concept that transcends any particular language, and it is depressing to see a very recent language like Java show up and be embraced by the OOP crowd while showing such a glaring misunderstanding of type correctness.

    Your catch-and-throw example is one instance of that genius. Learn about tail recursion and really get blown away (search for "the ultimate goto").
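
    A quick sketch of the idea in Ruby (which, note, does not eliminate tail calls by default): the recursive call is the last thing the method does, so an implementation that optimizes it can reuse the stack frame.

    def sum_to(n, acc = 0)
      return acc if n.zero?
      sum_to(n - 1, acc + n)   # tail position: nothing left to do afterwards
    end

    sum_to(10)   # => 55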

  • I'd agree totally: one of my biggest moans about the Standard C library, and just about every other C library and C-based API ever written is their (IMHO) abhorrent habit of overloading return values to handle both semantically significant data and error cases.

    While I've got a few problems with the implementation of SEH in C++ (it's a pig to handle exceptions in constructors, it's very embarrassing throwing exceptions in destructors, it's easy to nause the whole thing up - see here [aw.com] for an in-depth discussion - and there's a runtime hit just like with RTTI; I don't know Java well enough to comment on how it handles this), the general concept is one of the best things C++ stole from wherever BS stole it from (I haven't got my copy of D&E to hand), and the combination of exceptions and smart pointers rocks.

    And yes, tail recursion is waay cool. In a similar vein, do 6502 assembly programmers remember replacing JSR <address>; RTS with JMP <address> to save the extra operation? Those were the days...
    --
    Cheers

  • I have to say that Perl OO is extremely ugly.
    Even after a couple of years of daily Perl coding, I still find the syntax confusing and difficult to read.

    I do love Perl for ordinary everyday parsing - it can't be beat there. But large, object-based projects with long-term maintainability? Give me Python.

    BTW, you can tell I'm a perl-girl: TMTOWTDI just extends to languages...

"The medium is the massage." -- Crazy Nigel

Working...