Ask Slashdot: Best Way To Implement Wave Protocol Self Hosted?

First time accepted submitter zeigerpuppy writes "It's time to revisit Wave, or is it? I have been looking to implement a Wave installation on my server for private group collaboration. However, all evolutions of Wave seem to be closed-source or experiencing minimal development. I was excited about Kune, but its development looks stalled and despite Rizzoma claiming to be Open-Source, their code is nowhere to be found! Wave-in-a-box looks dead. So Slashdotters, do any of you have a working self-hosted Wave implementation?"
Comments Filter:
  • Who else uses it? I know this is a catch-21 sort of thing. I don't see what Wave really offers over social networks (in conjunction with good old email).
    How about tickling your fancy with email and a federated social network such as []?
    • by alen ( 225700 ) on Sunday December 29, 2013 @01:59PM (#45812527)

      It's the perfect geek social network
      No one uses it

    • Re:Why Wave? (Score:4, Interesting)

      by gmuslera ( 3436 ) on Sunday December 29, 2013 @02:08PM (#45812571) Homepage Journal
      Don't think of it as a worldwide new social network; think of deploying it as a collaboration/documentation platform inside a company. In that area it could be pretty useful.
      • If it's for corporate purposes, just use SharePoint.

      • I think the core of it lives on in products like Google Docs and Spreadsheets.
        • And some of its ideas live on in ownCloud 6: operational-transform collaborative editing for OpenDocument in the browser.

      • by afxgrin ( 208686 )

        It's not even good for that. Using a wiki for an internal documentation system is infinitely superior, despite lacking WYSIWYG editing and drag-and-drop functionality. At least a wiki gives you a built-in and obvious revision control system. Wave just makes everything look like a mess. Not to mention, Wave-in-a-box is broken all over the place in its current state. The killer features, like integrating various Google services such as maps, just aren't there.

        You're far better off using something

        • My company likes WYSIWYG. Wiki is fairly arcane and training users on it can be a royal PITA.

          • That's true; my users are relatively computer illiterate. They don't want to learn syntax the way I do.
      • That's exactly right. I like Wave because it makes the discussion-to-documentation progression relatively seamless. Other alternatives tend to fall down on either privacy or not being free software.
    • by Anonymous Coward

      Reason to switch? What does this provide that I don't have now? It wasn't immediately clear when I read the blurb. For example: 802.11s provides a mesh network. It can be a globally connected internet beyond government control. You could --in theory-- have a network travelling hundreds or thousands of miles that never touches the traditional internet. What is the transport medium of Wave? Is it medium independent? Is it like sneakernet or RFC 1149 (IP over avian carrier)? It is never mentioned in t

      • Re:Why Wave? (Score:5, Informative)

        by mitzampt ( 2002856 ) on Sunday December 29, 2013 @02:37PM (#45812747)
        Just to solve some of the unknowns of the thread:
        1) Between servers, Wave federation uses XMPP with some mumbo-jumbo/magic messages, and as such it can be hosted (and interconnected) in a mesh, so yes, it is medium-independent.
        2) Between server and client, Wave uses HTTP with WebSocket.
        3) Wave is not a moving target, as the protocol is no longer developed at a high pace, if at all. The "Wave in a Box" platform is still being incubated by Apache (waiting to get stable and attract developers).
        4) I personally participated in the beta and I liked it. The potential to couple cooperative editing with the replay feature is huge in a lot of use cases. It just gave you too many editing tools and too little automated housekeeping, so it ended up a lot messier than e-mail conversations. Also, Google's version of Wave was awful as a workflow. Rizzoma looks way better.
        • Yes, of course - but why Wave over the tools provided in a modern social network? Technologically, BuddyCloud is similar - and BuddyCloud has its protocols documented, and because of their modular nature they don't represent a moving target.
          • I think I got caught up in the Google hype about Wave, but I could see in it a replacement for current usage of e-mail (document collaboration, discussion threads, file transfer), as a lot of people abuse the reply-with-history feature of mail.
            Because it wanted to replace e-mail it tried the federated approach for inter-server communication. Having servers of different ownership communicate freely would have provided migration or further along the way interconnection with social media (think about migrating a F
    • Re:Why Wave? (Score:4, Interesting)

      by PhrostyMcByte ( 589271 ) <> on Sunday December 29, 2013 @02:22PM (#45812655) Homepage

      For me it was never trying to be a social network. Wave was a great blend between chat, email, and forums which was phenomenal for collaboration on projects.

      Unfortunately I never used it beyond that. It was way too bulky as a replacement for random chat, never had the features to properly replace forums, and we're pretty much stuck with email now so no point in trying to replace that.

    • by Jethro ( 14165 )

      That's Catch-22. Catch-21 was a gameshow where people played blackjack.

    • WTF is a Catch-21? I know what a Catch-22 is, seeing as how I worked for Captain Major before he was promoted to Major Major, but again, what is a Catch-21?

  • by Anonymous Coward

    The WAVE implementation was heavily intertwined with XML. Wave was introduced about the time everyone discovered that XML was more hassle than it was worth on the front end.

    • Re:JSON (Score:5, Insightful)

      by beelsebob ( 529313 ) on Sunday December 29, 2013 @02:10PM (#45812591)

      "Oh no, they used one really bad verbose text based encoding, rather than another really bad verbose text based encoding that I happen to like because it's cool these days, both of which have decoding support in the browser. This is clearly what will stop it from ever working properly"

      • It might stop egoistic developers from working on it.

      • by mha ( 1305 )

        JSON is verbose? That's news to me. Make it any more dense and it becomes hard to read for humans - and having human readable messages is one thing I don't want to give up, speaking as a developer. I don't want binary messaging. It has been tried. (It doesn't prevent anyone from sending natively binary data - as opposed to data that is made binary but might as well just be text - in another way, but for a lot of communication it's great.)

          • JSON is fine for sending a single record, but fails hard when you want to send 1,000s of records, since it sends the contract with every single record. This is made even worse by including lengthy, descriptive field names, e.g.

            CustomerId : "ABCDEF012938487432112424242322426",
            AllowExtendedConfiguation: "true",
            IsMaximumLengthRequired : "true"

            Sending 1,000 copies of that is going to take a lot more packets than a fixed binary format where you can pack the entire thing down to 9 bytes e.g. 8 bytes for the Id, and both bools into a bitset on the last byte.

          • JSON is fine for sending a single record, but fails hard when you want to send 1,000s of records, since it sends the contract with every single record.

            Can't you include the field names once, followed by the records as arrays?

            "fields" : [ "CustomerId", "AllowExtendedConfiguration", "IsMaximumLengthRequired" ],
            "records" : [
            [ "ABCDEF012938487432112424242322426", "true", "true" ],
            [ "GHIJKLMN3458745092349837469089845", "false", "true" ]
            ]
            Then rebuild your object

            • At that point, you're simply sending really verbose CSV.

              • And what would be wrong with that?

                • Given that the commenter above was trying to establish that JSON is not overly verbose, and even his example of how to combat that verbosity is itself verbose, what's wrong is that it doesn't back up the argument.

          • It's tragic how this is always being rediscovered...

            Back in the late '80s I wrote software that sent wire data for financial transactions... it was not open source; it was proprietary and sold as part of my company's product portfolio for Wall Street.

            However, we did exactly this: all clients taking part in the transactions first spent a few minutes (back then!) loading the data dictionary. Subsequent information packets were packed binary data with each field having a dictionary ID. There were 1 byte INTs, 2 byte INTs, 3 byte INTs and 4 byte INTs, variable length strings, booleans packed into bit fields, etc. It was very wire efficient, and this was because back then the wires were really slow. We had utilities developed to view the wire data and correlate it with the data dictionary, so we could inspect and debug captured wire data.

            • by mha ( 1305 )

              First, thanks for ignoring the solution given to the problem mentioned - use common sense, and arrays.

              Second, the data that goes over the wire *IS* binary - it is (de)compressed on the fly.

              Third, the majority prefers human readable formats. That's why those formats became popular - "popular": "liked or admired by many people or by a particular person or group".

              • Fourth, use common sense.

                Don't even try to come up with extreme cases where something else obviously does make more sense than these text formats, because, also obviously, there is no one-size-fits-all. If you think you have a problem that is better solved using some other format - binary, whatever - just DO it, and don't use your particular example as a "counter point" for why everything else is wrong.

            • There were 1 byte INTs, 2 byte INTs, 3 byte INTs and 4 byte INTs, variable length strings, booleans packed into bit fields, etc. It was very wire efficient, and this was because back then the wires were really slow. We had utilities developed to view the wire data and correlate it with the data dictionary, so we could inspect and debug captured wire data.

              I feel you. Those wires were slow.

              In the 90s I was toying with the ASN.1 spec [] and its many derivatives, trying to find a good balance between a wire-stream and random access storage protocol. It would consist of a series of synchronous streams transmitted in tandem with shifting, the most primitive being the raw ASN.1 transmittal of data, each successive meta-stream on top of it consisting of lisp-like primitives that act on the data and 'unroll' it into more symbolic form, providing entry vectors for tr

          • Sending 1,000 copies of that is going to take a lot more packets than a fixed binary format where you can pack the entire thing down to 9 bytes e.g. 8 bytes for the Id, and both bools into a bitset on the last byte.

            That's why you compress the stream. HTTP supports Content-Encoding: gzip, or you can wrap it around your file format on disk. Here's what happens with your example:

            user@host:~$ ruby <<EOF | gzip | wc -c
            prng = Random.new
            1000.times do |i|
            puts <<EOR
            CustomerId: "#{prng.bytes(8).unpack('H*').first}",
            AllowExtendedConfiguation: "#{prng.rand(2).zero? ? 'true' : 'false'}",
            IsMaximumLengthRequired: "#{prng.rand(2).zero? ? 'true' : 'false'}"
            EOR
            end
            EOF

            12550 bytes... 12.5 bytes per record, instead of your hand-optimized 9 bytes per record. I'm only paying a 28% premium with 100% random data. When it contains text strings (and we're talking about Wave here - it's mostly text) it's quite common for gzipped JSON to be smaller than an optimized but uncompressed binary format.

            This frequently ha

            • That's a pretty solid example you've provided there. There's a small issue with using a truncated CustomerId, since you're assuming it's an autoincrement variable, and those are going out of fashion now that we're building for clustered installations. If you're using MongoDB it will be composed of a server ID, a snapshot of the current time, and a random portion. For a single-DB solution, your example is fine.

              You could try compressing the binary data; there's no reason that stream can't be compressed.

              • It doesn't require all those extra brackets and braces and quotes.

                My point is all those extra brackets, braces, and quotes (and field labels) don't cost you much. They compress efficiently.

                JSON is like any hammer. Sometimes you gotta know when it's time to put it down and pick up the screwdriver instead.

                No argument there - JSON isn't my only tool. :) I just disagree that it "fails hard when you want to send 1,000s of records".
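
The compression point above can be checked without a shell pipeline. Here is a small sketch using Ruby's standard-library Zlib; the batch of records and its field names are made up for illustration.

```ruby
require 'json'
require 'zlib'

# Made-up batch of 1,000 records with long, repetitive field names.
records = (1..1000).map do |i|
  { "CustomerId"                 => format("%033d", i),
    "AllowExtendedConfiguration" => i.even?,
    "IsMaximumLengthRequired"    => (i % 3).zero? }
end

raw      = JSON.generate(records)
deflated = Zlib::Deflate.deflate(raw)

# The repeated field names (and the brackets, braces, and quotes)
# compress away; the deflated stream is a small fraction of the raw JSON.
ratio = deflated.bytesize.to_f / raw.bytesize
```

HTTP's Content-Encoding: gzip applies the same DEFLATE algorithm, so the same savings show up transparently on the wire.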

            • If you're talking customer numbers, not customer serials (as used above) for your optimised case, then the binary case too can be more heavily optimised. Byte 1 encodes 2 bits for the two flags, then 6 bits for the number of bytes the customer ID takes up (as an integer, not a string). This gives you 2 bytes for your first 256 customers, 3 for your next 65280, and 4 for your next 16711680. For most companies that means you're likely to be averaging 3.5 bytes per record, suddenly we've compressed our wire
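
A toy version of the byte-1 layout described above might look like this in Ruby. The comment only specifies "2 flag bits plus a 6-bit byte count"; the exact bit positions and big-endian ID ordering here are assumptions for illustration.

```ruby
# Toy encoder: the first byte carries the two boolean flags in its top two
# bits and a 6-bit count of how many bytes the integer customer ID occupies;
# the ID bytes follow, big-endian. (Bit positions are assumed, not specified.)
def encode_record(customer_id, ext_config, max_len)
  id_bytes = []
  id = customer_id
  begin
    id_bytes.unshift(id & 0xFF)   # peel off the low byte each pass
    id >>= 8
  end while id > 0
  header = (ext_config ? 0x80 : 0) | (max_len ? 0x40 : 0) | id_bytes.length
  ([header] + id_bytes).pack("C*")
end

def decode_record(data)
  header, *rest = data.unpack("C*")
  n  = header & 0x3F                                    # ID byte count
  id = rest.first(n).inject(0) { |acc, b| (acc << 8) | b }
  [id, (header & 0x80) != 0, (header & 0x40) != 0]
end
```

Customer 300 with one flag set fits in 3 bytes (1 header + 2 ID bytes), matching the averages discussed above.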

          • {
            CustomerId : "ABCDEF012938487432112424242322426",
            AllowExtendedConfiguation: "true",
            IsMaximumLengthRequired : "true"

            This is not valid JSON: keys must be quoted strings.

            And it is inefficient to send boolean values as strings instead of booleans, because that data type is fully supported in JSON [].

            "CustomerId": "ABCDEF012938487432112424242322426",
            "AllowExtendedConfiguation": true,
            "IsMaximumLengthRequired": true

    • Re:JSON (Score:4, Insightful)

      by hey! ( 33014 ) on Sunday December 29, 2013 @03:24PM (#45812959) Homepage Journal

      Most of the complaints I hear about XML are really complaints about lousy software architecture. Yes, XML is complex, but that should cause *zero* hassle on the front end. If it is causing hassle throughout your project, you're doing something wrong architecturally.

      The desire to flex your muscles in a hot technology often overwhelms good design sense. Thus ten years ago you'd see people parsing and doing XML DOM tree manipulation directly in UI code and crap like that. Doing the same with JSON would be just as bad design, although since JSON's feature set is much smaller the results are less immediately catastrophic. But they're still bad design.

      If your objection to XML is that it spreads complexity throughout your code, you're just a mediocre coder. Choosing XML or JSON should be a very minor implementation detail.

  • by Anonymous Coward on Sunday December 29, 2013 @02:21PM (#45812653)

    due to low numbers of people working on the project.. but here's an idea.. GET INVOLVED. []

  • Really? (Score:1, Informative)

    by Desler ( 1608317 )

    and despite Rizzoma claiming to be Open-Source, their code is nowhere to be found!

    Funny, I found it in 5 seconds here [] after simply doing a search for "Rizzoma source code".

    • Re:Really? (Score:5, Informative)

      by mitzampt ( 2002856 ) on Sunday December 29, 2013 @03:08PM (#45812859)
      Pardon me, but if you look closer, there isn't any Rizzoma server in that GitHub repo, just some gadgets.
      There have been no updates to the Rizzoma core since early this year (January, I think), and they haven't chosen a license for it.
      I saw that they are still working on some gadgets, and their server performs quite well, but that is a sign that developer involvement decreased once they reached a stable base.
      • Yeah, it's not great that they are claiming open source and not providing code, let alone an installable version.
  • The Wave Story (Score:4, Informative)

    by digitaltraveller ( 167469 ) on Sunday December 29, 2013 @04:58PM (#45813463) Homepage

    Here's what happened in excruciating detail:

    1) Google Releases Wave, claims it will be open source. Promises/Tells a Fibonacci.
    2) Google doesn't release Wave as open source for various reasons eg: protocol buffers toolchain underneath deemed too valuable. (Please don't argue this, the protocol buffers stuff that's been released is only a tiny part of the story.)
    3) Google builds a terrible open source replacement pretty much from scratch. It BARELY works, for one commit, nearly 3 years after they claimed Wave would be open sourced. It never has been. The entire affair is swept under the carpet.

    I know because I had an ehealth startup [] that died partially as a result of this. In the end, after we realized we had been hoodwinked (this post excludes private conversations we had with Google), we wrote a Wave-like thing around part of our technology in record time, and it surprisingly turned out really well. But unfortunately it was too late, and the company died. That was sad. Startups are fragile things.

    Anyhow, try sharejs [] it's written by a former Wave team member and it's better. You can easily wrap gwt around that if you need to. Or, I'm highly skeptical but you can try JBoss Errai [], they have written an OT framework into their weird everything framework. OT is a pretty complicated bit of code, and they just stuck it in a directory errai-otec [] like it was any other feature (eg. a Base64 encoder). I would rate the chance their OT impl has major issues as very high. I don't really understand corporate open source like this, so I'd love to see an Errai person explain the project. I'm guessing the thesis is somehow based around upselling a service of some sort.

    tl;dr You want this. []

    Support A Free Internet []

    • Thanks for the suggestions, ShareJS looks really easy to integrate into a site that meets some of my needs.
  • Zeigerpuppy... what features of Wave that your collaborators need are missing from other open or closed source software? I find it odd that you would go to a dead--or nearly dead--open source project at the requirements/needs stage. Unless there is something that Wave does out of the box better than existing options in active development, you are barking up the wrong tree. Plus, one of the advantages of Wave early on was that Google hosted it and collaboration across corporate lines made it attract
    • The major features I am interested in are: real-time collaborative editing, discussion workflow and easy documentation/media integration. As the group's discussions are private, it needs to be self hosted. I prefer open source. Wave ticks the boxes. The closest I have found (and tried installing) is a version of etherpad []. I was excited by Rizzoma as it looked like a more general purpose solution but doesn't look very open, unfortunately. I am more of a sysadmin than a developer and will certainly poke my
  • ...Google Plus.

  • I liked Google Wave, it was good for the sort of collaboration work I do quite often - working on specification documents. Better than a live document. I was annoyed when they killed it off as I was just starting to see more use for it.

    • I never bought the "lack of uptake" angle that Google pushed when they axed it. I think that a federated network is the opposite of what Google wants. They want all the juicy bits passing through their servers, and the federated model of Wave was not good for this. I guess I at least need to thank them for open-sourcing Etherpad after they acquired that code.
  • If Google Wave was featured in a movie, it would be directed by John Romero and people would be trying to kill it with a shotgun.
