Ask Slashdot: What's the Future of Desktop Applications?

MrNaz writes: Over the last fifteen years or so, we have seen the dynamic web mature rapidly. The functionality of dynamic web sites has expanded from the mere display of dynamic information to fully fledged applications rivaling the functionality and aesthetics of desktop applications. Google Docs, MS Office 365, and Pixlr Express provide in-browser functionality that, in bygone years, was the preserve of desktop software.

The rapid deployment of high speed internet access, fiber to the home, cable and other last-mile technologies, even in developing nations, means that the problem of needing offline access to functionality is becoming more and more a moot point. It is also rapidly doing away with the problem of lengthy load times for bulky web code.

My question: Is this trend a progression to the ultimate conclusion where the browser becomes the operating system and our physical hardware becomes little more than a web appliance? Or is there an upper limit: will there always be a place where desktop applications are more appropriate than applications delivered in a browser? If so, where does this limit lie? What factors should software vendors take into consideration when deciding whether to build new functionality on the web or into desktop applications?
  • See it before (Score:5, Interesting)

    by amalcolm ( 1838434 ) on Monday May 11, 2015 @11:25AM (#49664629)
    In the 80s and 90s. X terminals and the like. Sooner or later the users want their power back. It will be interesting to see what happens this time around.
    • Re:See it before (Score:5, Informative)

      by bondsbw ( 888959 ) on Monday May 11, 2015 @11:52AM (#49664957)

      Computer users in the 80s and 90s were, in general, a different breed than today's users. For most users today, an iPad is good enough for just about anything they will ever want to do.

      • by jythie ( 914043 )
        Until apps get fancier and depend on greater local hardware specs.

        Users have not changed all that much; the ratios of use cases are not all that different from what they were in the 80s and 90s, with the majority of users having fairly modest requirements alongside a segment of the market with increasing hardware needs.
      • For most users today, an iPad is good enough for just about anything they will ever want to do.

        The keyword here being "most".

        Sure, most folks just want their facebook and online shopping... most of the time. However, there is still a not-insubstantial percentage of folks who want to have a means of using their computer while it is off the network.

        There is also the limitations of a tablet... my wife uses nothing but an iPad most of the time, but sometimes she wants to bust out a spreadsheet for her local volunteer group, or write something a bit more involved than just a quick note. Sometimes she want

        • Re:See it before (Score:5, Insightful)

          by Hotawa Hawk-eye ( 976755 ) on Monday May 11, 2015 @01:44PM (#49666245)

          Sure, most folks just want their facebook and online shopping... most of the time. However, there is still a not-insubstantial percentage of folks who want to have a means of using their computer while it is off the network.

          And there are some people for whom that is not a want but a NEED.

          http://en.wikipedia.org/wiki/A... [wikipedia.org]

          The computer of a programmer working on the design of a new piece of classified military hardware isn't going to be able to connect to the open Internet. If the security of the system is sufficiently important, the machine may not be allowed to connect to any network at all.

      • Re: (Score:2, Interesting)

        An iPad, seriously...?
        iPads are media consumption devices, not workplace tools, unless of course FB, NetFlix and YouTube are what you do at work.
        • An iPad, seriously...?

          iPads are media consumption devices, not workplace tools, unless of course FB, NetFlix and YouTube are what you do at work.

          That would be an interesting job.

    • Re:See it before (Score:5, Interesting)

      by jythie ( 914043 ) on Monday May 11, 2015 @11:57AM (#49665029)
      Probably something similar. Like the 80s and 90s we will probably get another wave of upgraded desktops overtaking the server upgrade cycle, with desktop power and storage jumping ahead and making the shared resources seem constricting and economically inefficient, and then developers (and users) will rediscover how much better things run when utilizing the greater local resources.

      We will then get a decade or two of young programmers rediscovering what the 'unhirable' older ones already knew, holding themselves up as visionary geniuses for realizing things that those 'behind the times' client/server developers were 'doing wrong', attracting hype and investment dollars while repeating the same mistakes people made (and learned from) 2 generations ago.

      Rinse, lather, repeat.
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        It might be different this time, though. We have a different set of users than we did in the 1980s and 1990s. A lot of them don't care about privacy or security... they just see their files magically saved somewhere (be it local storage, the cloud, or redirected to a server belonging to the MSS). Users who really don't care if they have an i3/i5/i7 CPU or what amount of RAM... but just want something that will run their walled-garden apps. These users are brought up where the walled garden is everything, and that an ave

        • Re:See it before (Score:4, Insightful)

          by CronoCloud ( 590650 ) <cronocloudauron.gmail@com> on Monday May 11, 2015 @01:07PM (#49665881)

          Back in the "don't copy that floppy" days, we were promised by software publishers that prices for games and applications were high due to piracy. Now with consoles having a 0% piracy rate, if one factors all the DLC needed to play an average console game, the price has gone up by 2 to 10 times.

          DLC is not NEEDED to play, it's optional. One can still play Skyrim without Dawnguard, Hearthfire or Dragonborn. One can play Akiba's Trip without purchasing the DLC for the Prinny weapon. One can play War Thunder without buying the Premium vehicles.

          One must also remember that back in the "don't copy that floppy" days, the average game cost $39 and had much less content. Taking inflation and included content into account, modern games are cheaper than those of the 70s/80s.

      • by chuckugly ( 2030942 ) on Monday May 11, 2015 @02:43PM (#49666837)
        Call me a whippersnapper but I think you have to lather, then rinse, and THEN you can repeat.
    • Re:See it before (Score:5, Interesting)

      by buchner.johannes ( 1139593 ) on Monday May 11, 2015 @11:58AM (#49665045) Homepage Journal

      Problem 1)
      The problem open-source desktop applications have is that the feedback loop takes forever. It is difficult to edit a GUI or modify a behaviour immediately. One has to find the (current) code base, compile it, make sure one has the right libraries (which may be different from the system versions) and make a local installation.

      I would like to see a program/framework/DE/whatever where you can, while you are in an interface, click "edit code" and modify the program on the fly. Sugar/OLPC began implementing such functionality for their Python programs. This would drastically speed things up and make scratching your own itches much easier, as well as redistributing your modifications.

      All progress comes from having fast feedback loops. Make it easy for users to play around (and exchange modifications).

      Problem 2)
      Another change I would like to see in desktop applications is that one should not have to program any UI logic (creating widgets, connecting events) at all; it just seems redundant. Why do we design a UI by writing *text* in 2015?
      It should be possible to auto-generate a UI from the type of objects one wants to modify, from the constraints of best practices in UI design, perhaps with a workflow definition. It's useless to have all this freedom when we always want it the same way (text boxes for text input, checkboxes for booleans, lists for lists, buttons for actions) anyway. Why hasn't a library come along that does that? At least Glade lets one draw UIs, producing an XML file that can then be loaded and wired up to event handlers. More work on making programming UIs trivial, please.
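
      As a rough illustration of the idea (a minimal sketch only, assuming a browser DOM; the buildForm helper and the sample object are made up, not an existing library):

        // Sketch: generate form controls from an object's fields, so no
        // widget-creation code has to be written by hand.
        function buildForm(model: Record<string, unknown>): HTMLFormElement {
          const form = document.createElement("form");
          for (const [key, value] of Object.entries(model)) {
            const label = document.createElement("label");
            label.textContent = key + ": ";
            let control: HTMLElement;
            if (typeof value === "boolean") {
              const box = document.createElement("input");
              box.type = "checkbox";
              box.checked = value;
              control = box;
            } else if (Array.isArray(value)) {
              const list = document.createElement("select");
              for (const item of value) {
                const opt = document.createElement("option");
                opt.textContent = String(item);
                list.appendChild(opt);
              }
              control = list;
            } else if (typeof value === "function") {
              const btn = document.createElement("button");
              btn.type = "button";
              btn.textContent = key;
              btn.addEventListener("click", () => (value as () => void)());
              control = btn;
            } else {
              const text = document.createElement("input");
              text.type = "text";
              text.value = String(value);
              control = text;
            }
            label.appendChild(control);
            form.appendChild(label);
          }
          return form;
        }

        // The UI falls out of the data: a text box, a checkbox, a list, a button.
        document.body.appendChild(buildForm({
          username: "alice",
          notifications: true,
          theme: ["light", "dark"],
          save: () => console.log("saved"),
        }));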

      Problem 3)
      Deployment. It's ridiculous. Today we can easily install python/ruby libraries from git repos, but not programs that will run in user-space?
      In fact, perhaps the whole packaging of Linux systems should be different. What if every user were running in a virtual environment where they could install any software they want, with the other users isolated from those changes? In the days of Docker and KVM that should be quite possible.

      • Re:See it before (Score:5, Interesting)

        by Grishnakh ( 216268 ) on Monday May 11, 2015 @12:13PM (#49665287)

        Why do we design a UI by writing *text* in 2015? It should be possible to auto-generate a UI

        Um, it is, and this has been possible for quite some time. Lots of IDEs auto-generate code for UIs. Qt's QtCreator comes to mind, and I'm pretty sure MS has had something like this for ages. I'm sure there's several others.

        In fact, perhaps the whole packaging of Linux systems should be different. What if every user was running in a virtual environment where they can install any software they want, with the other users being isolated from those changes.

        It's been like this for ages. When was the last time you saw a desktop Linux system where more than one person actually used it, and there were multiple accounts on it? Better yet, when was the last time you saw such a system where multiple people were using it simultaneously? Linux, just like any UNIX system, certainly has the capability built-in to have multiple simultaneous users (since it is a multi-user multi-processing time-sharing system), but in practice no one does that much any more because we use PCs now, not shared centralized machines. Servers are a little different of course, but even here people are frequently running VMs these days so they have a full Linux environment to themselves; the big exception I can think of is ultra-cheap shared web hosting, and there the capabilities available to users are limited.

        • Re:See it before (Score:4, Interesting)

          by psyclone ( 187154 ) on Monday May 11, 2015 @03:27PM (#49667165)

          Linux Package Deployment

          I don't think the parent was complaining about not being able to modify his own linux desktop because there are other shared users. I think the problem might be around distributions that only release certain versions of software. For example, I run an "old" Ubuntu 10.04 LTS release. It is nearly impossible to install the latest Chromium build due to package dependencies and management. However, I can run the latest Firefox since I can download the tarball directly. (And no, I shouldn't have to upgrade the entire operating system just to run a simple userspace program.)

          • Yes, this is a big problem with Linux distros and there's been a lot of talk about it and about solutions for it. The systemd guys even proposed an interesting solution that involves btrfs.

            However, the OP seemed to be talking about having virtual machines so that different users have full control over their applications, unlike now where for the most part applications have to be installed by someone with root access.

            • The VM for each application is a good idea. Android got close by creating a user for each app, using the standard Unix permission model where one user can't see another user's files, so each app is separated. But they still have all the "what APIs does this device allow" and "what APIs has this program implemented" problems, similar to "what libraries does this distro have".

      • by zanzi ( 517065 )
        Regarding deployment, the Nix Package Manager (https://nixos.org/nix/) is a very clean solution. It is a package manager and build system, that can be used for building and deploying libraries and applications, with maximal sharing of the same binaries on disk and RAM, but also great isolation: every user/environment/application can use the version of libraries it needs, without influencing other users/environments/applications.
      • Re:See it before (Score:5, Interesting)

        by Burz ( 138833 ) on Monday May 11, 2015 @02:15PM (#49666551) Homepage Journal

        Anyone who looks back in my posting history will see that I have long, LONG advocated for tackling the UI and packaging paradigms on FOSS desktops because they choke-off interest from the type of creative person who develops apps. (Even worse, they scare away people who would like to experiment and become budding app developers, so those people cut their teeth on OSX or Windows almost as a rule.)

        PC tools are supposed to link the user with the power and features of the underlying hardware, making them at least discoverable in the GUI; in other words, there must be lots of vertical integration. Also... the GUI must have a consistent 'gist' or feel, because this is a sign of feature-stability in the OS.

        What FOSS has is a bunch of developers who tinker with the OS itself (I include the GUI in this, as it rightfully belongs in the category of OS) and assume that anyone who understands how a system works internally can trivially design GUI features... a big, big flaw in what is not so much an articulated belief as an unhealthy attitude. This is part of the subconscious of the FOSS world, and it results in maladies like not being able to describe fixes and workarounds (or just general usage instructions) as GUI snapshots and walkthroughs (almost always, the user will be directed to the CLI); It means even seasoned tech support personnel will struggle to interpret DEs and other UI features they are not very familiar with. Just getting to the point where your cousin or boss can try out your creations is hell.

        App developers should have the power to create exceptions for UI features in their *apps* (I said apps, not OS), because that embodies the two things app developers subconsciously look for: power and feature-stability. The default behavior is always the OS way (i.e. ONE way) out of respect for all users in general; If the default behavior/appearance is ten possible ways, then the app developer feels like they are managing chaos instead of power.

        My 'remedy' for the FOSS OS problem would be for a distro like Ubuntu to shed its identity as a "Linux distro" because the Linux moniker just confuses people at this point; and to take full control over the UI design so that it conforms more to a single vision (something that is apparently already under way). Pretty much all of the OS except the kernel should be original to the project or forked and, as Google did for a while with Android, Canonical should threaten to fork the kernel if that is necessary to improve the UX.

        I'll also point out that Ubuntu has gotten some meta-features that were typically missing from a Linux distro, like a full-blown SDK and extensive whole-system hardware compatibility tests and searchable database. What would remain to be done beyond this is to standardize on a GUI IDE (with capabilities like Xcode) and extend the hardware program to include a certification process (with licensed emblem) that system and peripheral manufacturers can use in a straightforward way.

        Also, packaging is a whole other can of worms, though I personally think emulating OSX app folders would be a good foundation for easily-redistributed apps. This means that an OS repository would have to stop at some well-defined point instead of trying to mash all the apps and OS together along with the kitchen sink.

    • Re:See it before (Score:5, Insightful)

      by Austerity Empowers ( 669817 ) on Monday May 11, 2015 @12:06PM (#49665161)

      In the 80s and 90s. X terminals and the like. Sooner or later the users want their power back. It will be interesting to see what happens this time around.

      Not surprisingly, we trust neither our web browser, nor the company providing the software, nor the network it all operates on. The majority of things I use my PC for, I am not ready to release to "the cloud".

      While I'm glad that hollywood starlets think the cloud is safe enough for nudes, all that proves pretty thoroughly it's not safe for anything important.

    • Re:See it before (Score:4, Insightful)

      by Mr D from 63 ( 3395377 ) on Monday May 11, 2015 @12:11PM (#49665249)

      In the 80s and 90s. X terminals and the like.

      Thin client has arrived after 30 years of talk, and its name is Chromebook. Not catching on like wildfire, but certainly more than any previous example I can think of.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Wait a second, users didn't want their power back. What happened is that someone called Bill Gates came along with the idea that you could *buy* a copy of a piece of software. That you believe there is something natural about this is a testament to how effectively he shaped the personal computing industry.

      A so called 'cloud' based system is actually just a return to how things were going to go in the first place.

      It is also an incredible annoyance to older CS people that it has taken nearly 20 years to get b

  • by Endloser ( 1170279 ) on Monday May 11, 2015 @11:26AM (#49664633)
    There will always be a need for those who want to keep what they are doing private. It's not private if it's not local and even then it may not be private.
    • Didn't the courts recently decide that there is No Expectation Of Privacy on mobile devices ?

      I do a lot of Audio Visual work on my computer. Wouldn't be easy or convenient on a web application.
  • by TWX ( 665546 ) on Monday May 11, 2015 @11:31AM (#49664701)
    Until only a few years ago, electricity was the only thing one needed to use a computer productively. Now there are so many dependencies, all the way from device drivers to intermediate pieces of equipment and services, that there are a whole lot of things that can cause it all to stop working.

    I only use online features because they're free-as-in-beer along with their ease of access. If either changes then there's no reason to continue using an online version.
    • (takes out his autonomous, self-contained smartphone)

      what were you saying about new stuff having more dependencies?

      • by jythie ( 914043 )
        Just because the dependencies are hidden from users does not mean they can not be impacted by them.
        • Nothing makes you as dependent as a feature that is only accessible online.

          • by TWX ( 665546 )
            mmmhmm... Plus the prospect of being without service when there shouldn't be any reason to be without service. Like in the middle of a city or on a major interstate. This is why I still like having paper maps that at least feature numbered roads like federal, state, and county highways.
      • by mcrbids ( 148650 )

        (takes out his autonomous, self-contained smartphone to post on Slashdot)

        what were you saying about new stuff having more dependencies?

        Oh. Yeah.

  • No (Score:5, Insightful)

    by Lunix Nutcase ( 1092239 ) on Monday May 11, 2015 @11:31AM (#49664705)

    The fact that this question gets asked basically every year should more than sufficiently answer the question.

    • Re:No (Score:5, Interesting)

      by sribe ( 304414 ) on Monday May 11, 2015 @11:45AM (#49664867)

      The fact that this question gets asked basically every year should more than sufficiently answer the question.

      Exactly.

      The rapid deployment of high speed internet access, fiber to the home, cable and other last-mile technologies, even in developing nations, means that the problem of needing offline access to functionality is becoming more and more a moot point. It is also rapidly doing away with the problem of lengthy load times for bulky web code.

      Oh, bullshit. Millions of people in developed nations (particularly the U.S.) have "broadband" that is a few hundred Kbps, or a couple of Mbps--let's just call it 3 orders of magnitude, or more, slower than a spinning disk. And of course there's an order of magnitude difference, or more, in latency as well. And of course, absolutely nothing about the deployment of high-speed internet access deserves to be called "rapid"! Remember, back in the 1990s we were hearing about how the rapid rise in internet access speed was outpacing CPU speed increases and would soon make data transfer times irrelevant!

      And that's before we even get to the performance difference between JavaScript DOM manipulation vs compiled C manipulation of native view/control hierarchies. Yes, I've heard about how much faster JavaScript has gotten. I use it. I also use native toolkits. You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.

      And that's before we even get to the performance difference when dealing with more intense data manipulation.

      And that's before we even get to the higher memory usage for a control in the DOM than for a native widget. (Don't believe me--inspect an input element, and tell me how many pointers it holds to objects & prototypes...)

      • Re:No (Score:4, Interesting)

        by TheRaven64 ( 641858 ) on Monday May 11, 2015 @12:43PM (#49665639) Journal

        You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.

        You're conflating JavaScript and DOM. With FTL, JavaScriptCore can run C code compiled via Emscripten to JavaScript at around 60% of the speed of the same C code compiled directly. That's not a huge overhead (40% is a generation-old CPU, or a C compiler from 5 years earlier). Transitions from JavaScript (or PNaCl compiled code) to the DOM, however, are very expensive. This is why a lot of web apps just grab a canvas or WebGL context and do all of their rendering inside that, rather than manipulating the DOM. Optimising the DOM interactions without sacrificing security is quite a difficult problem.
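
        A minimal sketch of that canvas approach (assuming a page with a <canvas id="view"> element; the element id and the drawing are illustrative): all the per-frame work stays in script, and the DOM is only touched through the one canvas element.

          // Sketch: render inside a single canvas instead of updating DOM nodes per frame.
          function animate(ctx: CanvasRenderingContext2D, canvas: HTMLCanvasElement): void {
            let x = 0;
            const frame = (): void => {
              ctx.clearRect(0, 0, canvas.width, canvas.height);
              ctx.fillStyle = "#c00";
              ctx.fillRect(x, 20, 40, 40);   // all drawing happens in script
              x = (x + 2) % canvas.width;
              requestAnimationFrame(frame);  // no DOM churn, just repaint the canvas
            };
            requestAnimationFrame(frame);
          }

          const canvas = document.getElementById("view") as HTMLCanvasElement;
          const ctx = canvas.getContext("2d");
          if (ctx) {
            animate(ctx, canvas);
          }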

    • by jythie ( 914043 )
      It probably does, but not in the direction you are implying. Look back on the scale of decades and you see a pattern of flipping back and forth, with the question/answer not changing much from year to year, but inverting over longer time scales.
    • by Rich0 ( 548339 )

      The fact that this question gets asked basically every year should more than sufficiently answer the question.

      Yup, the Apple Newton clearly disproves that nobody wants to own a tablet.

  • Office 365 (Score:4, Interesting)

    by Malc ( 1751 ) on Monday May 11, 2015 @11:37AM (#49664771)

    Office 365 is a poor example. The web interface has definitely come a long way, but it falls over for any serious work. Maybe they'll get there, but for now, local apps integrated with the cloud backend seem to work better.

    Right now I definitely wouldn't want to try working with RAW photos from a DSLR or edit high bitrate 4K video using a web app. Maybe in ten years, but then again, those digital formats will probably have moved on to another level by then too.

    Oh, and email: there's still definitely a need for offline access, be it a traditional MUA or a mobile phone. Being online isn't reliable enough even for this.

    • Pixlr can't even handle JPEGs from my Sony a6000. Takes forever to get the image to load into the 'app'. I was just trying it out last night on the Surface 3.
  • Two problems (Score:4, Insightful)

    by erp_consultant ( 2614861 ) on Monday May 11, 2015 @11:38AM (#49664781)

    Single point of failure and security. Some applications might lend themselves to running exclusively in a browser and some will not.

    • by rnicey ( 315158 )

      You talking about the desktop or the cloud? :)

      • Good question. I'm talking about the cloud. All communication between your PC and the cloud-based application occurs over TCP/IP, routers, etc., which are not secure. They can be made secure if you are willing to go to a lot of effort and give up some conveniences.

        Can a PC be compromised? Sure. The usual attack vectors are the internet and physical access to the keyboard.

        My point is that you can more easily control the data on a PC. You can even, if you choose, disconnect it from the internet a

        • by rnicey ( 315158 )

          I respectfully disagree. Especially if I'm the sysadmin of the centralized cloud servers.

          Although my original comment was somewhat fatuous, a user on a single machine is much more often a security/failure risk. It's not so much which is better and more reliable these days (it's certainly centralized control and storage, aka 'The Cloud'), but who controls that central point. This all started decades ago with centralized file servers, and processing farms for big data. It's eventually going to be just the bro

          • "It's not so much which is better and more reliable these days (it's certainly centralized control and storage, aka 'The Cloud'), but who controls that central point" - Which introduces a third point of failure. How do I know, with any degree of certainty, that the person administering the server(s) is competent? Or that their management is giving them the right tools and the right priorities to protect my data? The short answer is that I don't.

            I'm not suggesting that you are not competent to do it or that

          • Your terminal -- mainframe concept sounds a bit too 70ish to me. Why should it be the future of computing now when it already failed in the 80s-90s for most uses? I really don't buy it.

  • Yeah, right ... (Score:5, Insightful)

    by gstoddart ( 321705 ) on Monday May 11, 2015 @11:39AM (#49664801) Homepage

    Sorry, in my experience these web-based applications are crap, and they started around the dot-com era, when suddenly everybody thought everything belonged on the web.

    The "problem of needing offline access" most certainly has not been solved, and not all of us want our data in the cloud.

    If the web browser is going to become our operating system, we're fucked -- because we'll all be running garbage code which covers some of the use-cases, but which generally has terrible interfaces as we try to shoehorn every problem into something which doesn't lend itself to the web.

    Many of us have lamented the move to web-first technologies as a byproduct of lazy corporations writing mediocre software.

    If you think the end of desktop applications is nigh, I sincerely hope you're wrong -- because the endless stream of crap web pages which almost work is getting tedious.

    And it mostly ends up with greedy corporations more worried about analytics and advertising than about writing usable software that actually solves the problem.

    • The "problem of needing offline access" most certainly has not been solved

      Note that HTML5 does allow effectively unlimited (policy set by the user) local data to be stored, and applications that run completely disconnected. It's possible to write a web app that uses the browser for the UI but only uses the network for software updates.
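
      A minimal sketch of that pattern (assuming a browser with localStorage and service worker support; the storage key and the /sw.js script name are illustrative): application state lives locally, and the network is only used to refresh cached assets when it happens to be available.

        // Sketch: offline-first page state kept in localStorage.
        const KEY = "notes";

        function load(): string[] {
          return JSON.parse(localStorage.getItem(KEY) ?? "[]");
        }

        function save(notes: string[]): void {
          localStorage.setItem(KEY, JSON.stringify(notes));
        }

        // The network is only touched here, to pick up updated assets when online.
        if ("serviceWorker" in navigator) {
          navigator.serviceWorker.register("/sw.js").catch(() => {
            /* offline: keep running from whatever is already cached */
          });
        }

        save([...load(), "created at " + new Date().toISOString()]);
        console.log(load());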

      • by Lennie ( 16154 )

        I would go a step further on the offline topic: many developers needed time to get used to the idea of how to go about this, and good patterns needed to spread.

        And the browser vendors are still improving things by adding new and better APIs like service worker.

  • Most any data-driven desktop app can be rebuilt for the web, but the same can't be said for apps that depend on local resources outside of a browser's limited environment. Could you ever imagine pro video editing (i.e. Adobe Premiere / After Effects) 100% within Chrome?
    • Could you ever imagine pro video editing (i.e. Adobe Premiere / After Effects) 100% within Chrome

      Depends. With WebGL / WebCL, I can imagine previewing effects there quite easily. I can also imagine that it would be nice to be able to do the real rendering runs on a rack somewhere else. The more difficult thing is imagining moving the multiple GBs of data between the two. Possibly uploading the raw source data to the server, keeping the local copy, and just syncing the non-destructive editing instructions would work.
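
      A rough sketch of that last idea (the /api/edits endpoint and the operation shape are made up for illustration): the edit list is plain JSON, so only kilobytes cross the network while the multi-GB source media stays on the local machine.

        // Sketch: sync non-destructive edit instructions, not raw footage.
        interface EditOp {
          clipId: string;
          action: "trim" | "colorGrade" | "crossfade";
          params: Record<string, number>;
        }

        const pending: EditOp[] = [];

        function applyLocally(op: EditOp): void {
          // Preview is rendered against the local copy of the source media.
          console.log("preview", op);
        }

        async function sync(): Promise<void> {
          // Only a few KB of JSON crosses the network; the GBs of source stay put.
          await fetch("/api/edits", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(pending),
          });
          pending.length = 0;
        }

        const op: EditOp = { clipId: "clip-42", action: "trim", params: { inPoint: 1.5, outPoint: 9.0 } };
        applyLocally(op);
        pending.push(op);
        sync().catch(console.error);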

  • by future assassin ( 639396 ) on Monday May 11, 2015 @11:40AM (#49664811)

    through my browser or half installed program on my computer or I have to be online within 30 days no thanks, go and die already.

  • by xtal ( 49134 ) on Monday May 11, 2015 @11:40AM (#49664813)

    Eventually people will get fed up with paying $4.99 in perpetuity to a dozen or more vendors, and we'll have single pay licensing again. Legislative changes relating to data protection will complicate cloud migration for some professions, and I imagine state spying is starting to have economic impact.

    I've seen the cycles too; the difference now is that there is a legion of programmers and an even bigger pile of code out there. Computer hardware is also trending to very low cost now as well.

    Software pricing trends to zero at volume, as there's no marginal cost; I'd expect more and more core functionality to be free. This has already happened to some degree in the Apple ecosystem, and Microsoft software is bundled with everything.

    Another prediction: more and more functionality will come bundled into the OS, and you can count on paying a subscription for it (or a fee when you upgrade).

    You want to jump on the next big rage? Nice, clean applications, web based or not, devoid of crapware and malware and in-app-purchases and ads that do what they're supposed to, cleanly, nothing more, and easily connect together through standard interfaces. It's almost like someone built something like that before.

    On the other hand, no application is complete until it has an email client..

  • Developer: Application
    Consumer: Web
    • Field worker: mobile phone, tablet, or laptop (applications can be native, web-based, or hybrid)
      On-site worker: a combination of several in many cases (native, web-based, or hybrid)

    • Absolutely this.

      There is a huge difference between content production and content consumption. I do both. When I'm out and about checking Facebook or reading my email and so forth, I don't need a desktop app. I can get by just fine doing all of my online shopping with a tablet or even my phone. Hell, I use the built in app on my Blu-ray player to stream my Netflix movies. Outside of some very specific tasks (and not counting work), I live most of my life without having to touch a desktop app.

      But when I

      • The desktop is going away in the home.

        Partially correct. However, don't forget the viability of the PC as an entertainment platform. A PC has a form-factor that makes it optimal for many types of games that no other form factor can really match. And people who create any sort of digital content, whether it's professional or a hobby, will probably want a PC at home for the task. Plugging a tablet into a docking station doesn't necessarily make it a PC-like system suitable for anything but the simplest content creation tasks.

        I think what's mo

  • Bad Idea (Score:5, Informative)

    by BrendaEM ( 871664 ) on Monday May 11, 2015 @11:49AM (#49664921) Homepage

    Stuff like this is marketing people's dream: dependence. But marketing people only consider the small, select set of applications they themselves use, and those office applications are only a small part of what people use.

    Applications like CAD, design, bitmap editing, and 2D vector art would run terribly slowly over the net.

    The tools people use to make your stuff would become more expensive, as is already starting to happen, and those costs will be passed on to you.

    It would be a bad computing world where a company could throw the switch and discontinue your product in one day with no warning.

    If there was a disaster or real war (on our soil), no one would be able to work, at all, because there would be concentrated central points of failure.

    • by Lennie ( 16154 )

      - you are partly talking about niche users: it's the same situation as most people working at an office not being developers who need big workstations. Yes, these people are important, but they don't represent the general public. Certain organizations have certain workloads that only run well on mainframes. You might not believe it, but I believe the mainframe market was even growing in 2013; I haven't really followed it lately, though. That doesn't mean that mainframes will be the next trend.

      - you mentioned

  • Internet access (Score:4, Insightful)

    by hierofalcon ( 1233282 ) on Monday May 11, 2015 @11:49AM (#49664927)

    Everyone thinks the cloud is great - till the backhoe goes through your fiber line and you either don't have a backup data connection due to the fiber cut being down the street, or you do have a backup data connection but it doesn't have the capacity to handle everyone running on the cloud. There are many points in the country where even if your ISP does have a backup, you will be down for quite some time while they reroute (and everyone else is trying to reroute as well). When most ISPs in town use the same trunks to get to the real world, you don't even really have many choices for redundancy.

    People who live in silicon valley and some countries with really good overall connectivity to all users are spoiled with many options. Out in the flyover area, things are tougher. Then think of places with even less connectivity than the US has.

    Keeping the company up and running by keeping the data local has a lot of advantages.

    • by n1ywb ( 555767 )
      Until your building burns down. Hopefully you had the foresight to do off-site backups.
    • Everyone thinks the cloud is great

      Not everyone. I, for one, think that the cloud is the exact opposite of great. No third party provider can be trusted, by law.

  • by MpVpRb ( 1423381 ) on Monday May 11, 2015 @11:51AM (#49664947)

    Simple programs used by the general public could conceivably be served from the cloud to a browser

    Even for those simple things, many people will still prefer local data and local control

    I find it hard to imagine that serious stuff like CAD, video editing, digital audio workstations, etc could ever be forced into this model

    I, for one, require local data, local software and local control..and I will NEVER rent software

  • ... at work recently. Bunch of crap the whole of them.

    One basically only works reliably in Firefox, one only in IE, one only in Chrome. And then of course there is the problem that one of them needs an option in Chrome set to "on", while another needs that same option set to "off" to work.

    So at the moment it seems any more complex "web" application I look at basically needs its own sandboxed browser to not interfere with all your other web applications, and the whole internet itself. And at that point, HTML is a

  • Right now the limits are in high-end graphics processing and interfacing with external hardware.
    Most applications are rather boring: take in some input, do some calculation based on the input, perhaps look up some data in a database, then give you an output. This batch type of processing is great for the web, as you get to have a big system in the background crunch all those inputs rather quickly, and you get your results back.

    However, for high-end gaming/CAD the browser will cause too much overhead.
    Then yo

    • Right now the limits are in high-end graphics processing and interfacing with external hardware.

      And user interface in general. I've yet to see a web-based UI that even comes close to being as good as a well-developed native one.

      • Please explain further...
        For most standard interfaces, I haven't seen anything that you couldn't do on the web.
        However, the dev tools (especially Microsoft's) do a poor job of making them easy to code.

  • This again? (Score:5, Informative)

    by TheDarkMaster ( 1292526 ) on Monday May 11, 2015 @12:00PM (#49665073)
    Again this bullshit?

    - Flawless 24/7 connection to the internet is plain impossible and any application that does not take this into consideration is a piece of shit;

    - Your data on a third-party server is always a security problem waiting to happen;

    - Browsers cannot provide the exact same features as a native application without being completely rethought;

    - When a web application does successfully emulate a desktop application, it usually costs double or triple the computational resources to do the same thing as a native application;

    - HTML is not designed for making desktop GUI applications; it needs a ridiculous number of very ugly hacks to do things that are done easily using native GUIs;

    That said, of course there are tasks where a web application is useful... But it is foolish to believe that any task can be done in a web application.
    • What happens when the 'Cloud' provider gets bought by your competitor?

      Or goes bankrupt?

      Poof!!

      You no longer have the application, or the data. Your 'competitive edge' is now random electrons heading for NGC6724.

  • I'm sure it's been mentioned here already, but one of the major advantages of "native" apps (be they desktop or mobile) is that, unlike a browser, they don't necessarily require an active network connection, which (at least in North America) has been rather sub-standard considering what other countries get.

    While it's true that browsers now have local data stores for data that might reduce the need for an active connection to a server, native apps usually are better able to handle a greater amount of data than
  • by rockmuelle ( 575982 ) on Monday May 11, 2015 @12:06PM (#49665155)

    I run a company that develops a laboratory informatics platform for data-intensive science applications that mix wet lab and analytics operations into single workflows, with gene sequencing as the motivating application - think LIMS with a pipeline and visualization engine, if you're familiar with the space. (Lab7 Systems, if you're curious - http://www.lab7.io/ [lab7.io])

    When we started development a few years ago, we had to decide whether to build a desktop application or a browser-based application. At the time, this wasn't an easy decision. Some aspects of the UI are straightforward form-style interfaces, but others are graphics-heavy visualizations of very large data sets (100+ GB in some cases). Scientific and information visualization have almost always benefited from local graphics contexts and native rendering engines. In addition, the data decomposition tasks often require efficient implementations in compiled languages. Our platform also controls analysis processes on large clusters, another task not well suited to the browser.

    We gambled a bit and decided that the browser would be our primary user interface. Two trends at the time helped us make the decision (and luckily they both held steady):

      (1) The JavaScript engines in all the major browsers get faster with each new release and now outperform other scripting languages for many tasks.
      (2) The JavaScript development community is maturing, with more well-engineered and stable libraries available

    A few other considerations helped us make the call:

      (1) Our platform is a multi-user system. A desktop client would add to the support burden for our customers.
      (2) Our backend needs to integrate with compute clusters, scientific instruments, and large, high-performance file systems. It is server-based, regardless of the client.
      (3) The data scales we were dealing with also required "out-of-core" (to use an older term) algorithms for rendering, so the client would never get entire data sets at once.
      (4) REST/json... XML, XMLRPC, SOAP, and all the others are a pain to develop for (I speak from experience); REST/json significantly reduced the amount of code we needed to maintain state between the client and server (a rough sketch below).
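
    As a rough illustration of the kind of round trip REST/json buys you (the /api/samples endpoint and the Sample shape here are illustrative, not our actual API):

      // Sketch: client-side state round-trip over REST/JSON.
      interface Sample {
        id: number;
        name: string;
        status: "queued" | "running" | "done";
      }

      async function loadSample(id: number): Promise<Sample> {
        const resp = await fetch(`/api/samples/${id}`);
        if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
        return resp.json() as Promise<Sample>;
      }

      async function updateSample(sample: Sample): Promise<void> {
        // The browser serializes the state; the server just validates and stores it.
        await fetch(`/api/samples/${sample.id}`, {
          method: "PUT",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(sample),
        });
      }

      loadSample(42).then((s) => updateSample({ ...s, status: "running" })).catch(console.error);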

    Since we made the call to use the browser, we haven't looked back. Early on there were some user interactions that were tricky to implement across all browsers, but today they've all caught up. Our application looks much more like a desktop or (*shudder*) Flash application, with a very rich UI (designed by an actual UX team that gets scientific software ;) ) and complex visualizations. It's also been relatively straight-forward to implement, thanks in large part to the maturity of some JavaScript libraries (we use jQuery, D3 (for complex filtering, but not for visualization), Canvas, Backbone, and a few others).

    Personally, I can't imagine ever writing a desktop application again. The browser is just too convenient and, in the last few years, finally powerful enough for most tasks.

    -Chris

    • How do you cope with old browsers? Supporting 2-year-old browsers is already difficult; I can't imagine supporting 6-8-year-old browsers, which is the usual thing in the enterprise. Or are you limiting your development to old APIs only?

      • Two years is our horizon for browser support. Two other trends that have helped us in this regard are (1) that most browsers auto-update or at least nag you a lot and (2) IT departments are more accepting of users running Chrome/Safari/Firefox alongside IE. We're targeting enterprise/internal users, not everyone on the Web, so we can also put some requirements in place when we deploy.

        Most of our functionality uses standard HTML/CSS/DOM features, so we haven't had any issues with features dropping. We d

  • Netcraft confirms that desktop applications are dead! Also Desktops. :)

    Seriously this bunk is garbage. Perhaps for personal use, there may be some transition. I know I use some google docs as I don't want to bother buying a personal copy of office for hundreds of dollars for the amount I actually use it outside of work... Though I probably have used OpenOffice more.

    In a corporate environment? Just no. This also happens to be where most of the usage is located. It is not even close, it is absurd. Ask a syste

  • Web applications chug like a sloth on quaaludes compared to local applications. They consume more CPU, they take forever to load/store data, and their interfaces are clunky as hell (Google Office apps included).

    Personally I think it's the web developers that keep asking this question every year, hoping to get praise for their shitty efforts over the past year to catch up to 1990's desktop applications.

  • by dannydawg5 ( 910769 ) on Monday May 11, 2015 @12:13PM (#49665291)

    I am sick of hearing about how desktop apps are dead. How am I supposed to develop embedded applications through a web browser? I suppose a cloud compiler could do it --- assuming it supports my extreme customizations, and even then, I can't imagine how slow it'd be.

    What about network tools? My open source project is a network test utility: http://packetsender.com/ [packetsender.com]. How can network test utilities exist as anything other than a native desktop app? Am I supposed to create a browser add-on? Now we are just arguing semantics. Depending on how deep the add-on goes, you might as well call that native.

    The app world is more than just a means to consume video, music, etc. Some people need to do real work.

  • Where you want lots of pixels, multiple windows and the ability to quickly point to any pixel location. Note, these are apps where you create content as opposed to consuming it.
  • by argStyopa ( 232550 ) on Monday May 11, 2015 @12:27PM (#49665447) Journal

    Please, understand this categorical statement: I DON'T WANT YOUR FUCKING CLOUD SERVICE.

    I do not want to rely on an internet connection to generate any trivial document.
    I do not want even my meaningless documents stored "in the cloud", much less anything of private or commercial value.
    I'm uninterested in turning something simple, quick, and reliable into something complicated, slower, and unreliable with more points of failure (that meanwhile makes me dependent on you, and paying you for the privilege).

    So no, stop asking.

  • by Anonymous Coward

    Gamers will attest that running game logic server-side does not work for certain types of games because the latency is too high. Going forward, augmented reality and virtual reality have even lower latency requirements. With some latency budgets getting under 5 ms, the actual physics (the speed of light) starts to prevent everything from running remotely.
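
    A back-of-the-envelope check on that 5 ms figure (rough numbers; fiber propagation only, ignoring routing, queuing and processing entirely):

      // Sketch: how far away can the server be on a 5 ms round-trip budget?
      const c = 299_792;            // km/s, speed of light in vacuum
      const fiberFactor = 0.67;     // light in fiber travels at roughly 2/3 c
      const budgetSeconds = 0.005;  // 5 ms total round trip

      // Each direction gets half the budget, so the server can be at most this far away:
      const maxDistanceKm = c * fiberFactor * (budgetSeconds / 2);
      console.log(maxDistanceKm.toFixed(0), "km"); // ~500 km, before any actual work is done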

    Perhaps even more important: the further you send bits, the more power it consumes. Bits in registers use the least power. Bits in cache are still low power. B

  • I suspect many "desktop" software companies will hedge and build the app as a web app, BUT keep a "local virtual web server" option so that it can have quick access to local disk etc. if needed. Light users may be fine with a cloud-centric approach, but power users may want the local approach for responsiveness and storage space control.
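
    A minimal sketch of that hedge in Node/TypeScript (the port, routes and file name are illustrative): the same browser UI talks to localhost, which has direct access to the local disk and keeps working with no internet connection.

      // Sketch: a tiny local web server backing a browser-based UI.
      import * as http from "http";
      import * as fs from "fs";

      const server = http.createServer((req, res) => {
        if (req.url === "/api/document") {
          // Served straight from the local disk: no cloud round trip, works offline.
          fs.readFile("mydoc.json", (err, data) => {
            if (err) {
              res.writeHead(404);
              res.end();
            } else {
              res.writeHead(200, { "Content-Type": "application/json" });
              res.end(data);
            }
          });
        } else {
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end("<html><body>the app's usual web UI would be served here</body></html>");
        }
      });

      server.listen(8080, "127.0.0.1"); // only reachable from this machine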

    Some of my personal music "experiments" are like that: I only run them on my desktop, but they use a typical web browser stack such that they could be "in the cloud" with a

    • I suspect many "desktop" software companies will hedge and build the app as a web app, BUT keep a "local virtual web server" option so that it can have quick access to local disk etc. if needed.

      But that wouldn't solve the problem of having to use a web-based UI.

  • A decade and some change ago I first noticed marketeers slithering out of the woodwork to belch their dreams of SaaS into the ether. The dream is not a statement about architectures of the future; it was always focused explicitly on raking in regular, predictable revenue, a concept customers have rejected and continue to reject regardless of the state of supporting infrastructure.

    From where I sit the proof is in the pudding. I openly encourage our competitors to offer online subscription only services. We are making ban

  • Without a doubt, the web is a crapshoot of browser inconsistency and standards. Imagine this hypothetical scenario: no more local apps, but you have a web server running locally, and when you install an app, it installs to the local web server. Your entire desktop is in a browser. What are the problems with this? Many: 1. Serialization to HTML/CSS/JS is slow and unnecessary; the code path to put a red rectangle on the screen is absurd. 2. Those interfaces prevent direct access to local hardware. 3. Operational La

  • by Eravnrekaree ( 467752 ) on Monday May 11, 2015 @01:28PM (#49666075)

    Richard Stallman covered this subject in detail, it is important reading: http://www.gnu.org/philosophy/who-does-that-server-really-serve.html

    I am surprised this would even be asked here. The fact is, if you care about security and privacy, you don't want to use anything other than desktop apps. You want to avoid anything such as Google Docs for your normal letter writing and so on. One area of confusion is that people have trouble drawing a distinction between social networking, where you share things that you want other people to see, and something like a tax spreadsheet that no one else should see. With social networking the material is sort of not private anyway and you want to share it, so little is lost by putting it on a server farm, and since it needs to be shared with others the server farm facilitates the communication.

    With a desktop application where you are working on tax spreadsheets or working on other things that will not be shared, there is no need to put it on a server some place else, so why do it? In so doing you give up a huge amount of potential privacy, increasing the technical possibilities of a possible access of the material on the server farm by other entities.

    Using this cloud stuff you lose control of your data. The cloud provider could pull the plug on the service at any time (and it happens, look at Google Code and Geocities and the vast store of information that was lost with that).

    Using the cloud for office apps is basically not necessary for what you are doing, since when you are writing a document for local use, or working on spreadsheet data, there is no technical need to use a cloud service to do this, and by doing so you endanger privacy and your control over the data.

    What's really going on here is an attempt by large corporations to nickel-and-dime you and monetize you, perhaps by the minute, to use their software, while if you use an open source desktop app, you have unlimited use of the software for as long as you need at no charge.

    Secondly, open source is all about users being able to control, modify, run and experiment with the code they use, and being able to read it. Using apps on a server farm takes away users' control over the software they use, just as it takes away users' control over their data.

    Avoid Software as a Service like the plague.

    • I should add, desktop apps don't take away your ability to put your documents on a server if you want to share the data; what they do is assure that the data will not be uploaded in any way unless you specifically authorize it; otherwise it's only stored locally. If someone wants to put some data on a server, they can take the data produced by a desktop application and use their own server or another service to put the file online. This allows people to only upload data that they want to share, rather than uploa

  • My initial reaction is to say that computing is simply cyclical; what was once mainframes and dumb terminals turned into locally installed applications on desktops and laptops, and now we're doing that again with Teh Cloud (tm). However, here's the difference:

    1.) In the 80's and early 90's, overall technical competence of computer users was higher. Yes, there was always the secretary who tried to use WordPerfect to make a database because she knew exactly one program, but overall, especially if you had a home computer, you had some concept of what you were buying, and what the things on the spec sheet meant - computers being sold today will have helpfully descriptive bullet points like "great for multitasking" instead of "8GB RAM", something that wouldn't have passed muster in the last cycle.
    1b.) Malware was much less a problem, back in the earlier days of computing. E-mail viruses were a thing, certainly, but for the most part, one ran a virus scanner and moved on with life. Also, with less hardware to throw at resident software, any kind of malware that ran resident would use enough system resources to alert the user to its presence, which is less the case now. Google Docs doesn't care about macro viruses, and users of that platform don't have to, either. There's value in that proposition for many less-technically-inclined users. Similarly, backups/hard disk crashes are "someone else's problem".
    2.) In the 80's and 90's, systems were generally designed for interoperability a bit better than they are today. It's possible to send an e-mail from a server running Exchange 2016 Preview to an SMTP server from 1989 and it'll be able to meaningfully use the message. This is not the case with Facebook or WhatsApp.
    3.) Inherently connected applications are the norm now. The utility of Facebook is "the rest of the stuff on Facebook". Google Docs and Pixlr don't apply to this point since they still deal with .doc and .jpg files that are more standards compliant, but many of the web apps that are popular aren't necessarily tied to the "open/change/save/close" paradigm that is commonplace in the desktop world.
    4.) "Bleed little, bleed often" is a more culturally acceptable proposition for most people, as it gives them the instant gratification of getting the product at a price they can afford, while not requiring a gargantuan up-front cost that happens regularly as people feel the need to keep up with the Joneses. $5/month = $180 over the course of three years, which has basically been the shelf life of every version of MS Office released. Makes it a lot easier to swallow for many people, whether or not it's actually a value proposition in the long run.
    4b.) The fact that virtually every software developer who has implemented IAPs instead of a one-time, up-front cost has made more money on that business model. At this point, it's solely a matter of principle that a developer of a paid application would sell a perpetual license, since general acceptance of subscription and IAP licensing makes it a better idea for everyone to go down that road instead. This was not nearly as true in the days of mainframe computing.

    Now all of that being said, I do think that video editing is one of the few tasks that will never lend itself to a subscription model, beyond what Digital Juice does. Editing-as-a-service makes very little sense, since even a moderately sized project will still take tens of gigabytes of upload, which means "hours before you can edit". Meanwhile, 100GB of assets is not unheard of for even a two-hour wedding video shot in HD, and with upload speeds still measured in single-digit Mbit/sec, it can easily be days before editing can even be entertained. At the same time, costs are a lot higher for a company looking to get into that business, because you're going to get much less ability to thin-provision even 500GB of space, as the nature of what's being done is going to make much more use of that space than the OneDrive accounts with a 1TB progres

  • by Karmashock ( 2415832 ) on Monday May 11, 2015 @01:39PM (#49666181)

    It is faster, gives the user more control, is more responsive, is more secure, etc.

    It is generally better in most circumstances for anything serious.

  • by Coward Anonymous ( 110649 ) on Monday May 11, 2015 @04:15PM (#49667459)

    Anywhere that latency is not adequately met by "cloud apps" will require desktop apps.

    Over time, bandwidth will become less of an issue as it continues to improve but latency is governed by the speed of light and light ain't getting any faster.

    Conversely, if a "cloud app" is a huge pile of JavaScript that does everything locally on your machine, it is arguable that it is really a desktop app.

  • by fluffernutter ( 1411889 ) on Monday May 11, 2015 @04:50PM (#49667667)
    If I need an application that has to crunch a million numbers into a few cells, I am more than likely going to want a browser application because that will get the impact of the computations off of my local CPU in a straightforward way.

    If I want an application that is less computational, such as a calendar application, I don't want a whole bunch of generic controls such as a back button or an address bar, or the extra work of dealing with tabs to go from reading Slashdot to working on my calendar for the week. I want to be able to click once on my OS task bar and have a window come up that is dedicated to my calendar. The browser experience is just shitty for light applications and I hope that the companies out there continue to develop native windows applications. I really don't get why people were so nuts over Google Mail. Maybe it was good, but the browser experience killed it for me every time.
