Ask Slashdot: What's the Future of Desktop Applications? 276
MrNaz writes: Over the last fifteen years or so, we have seen the dynamic web mature rapidly. The functionality of dynamic web sites has expanded from the mere display of dynamic information to fully fledged applications rivaling the functionality and aesthetics of desktop applications. Google Docs, MS Office 365, and Pixlr Express provide in-browser functionality that, in bygone years, was the preserve of desktop software.
The rapid deployment of high speed internet access, fiber to the home, cable and other last-mile technologies, even in developing nations, means that the problem of needing offline access to functionality is becoming more and more a moot point. It is also rapidly doing away with the problem of lengthy load times for bulky web code.
My question: Is this trend a progression to the ultimate conclusion where the browser becomes the operating system and our physical hardware becomes little more than a web appliance? Or is there an upper limit: will there always be a place where desktop applications are more appropriate than applications delivered in a browser? If so, where does this limit lie? What factors should software vendors take into consideration when deciding whether to build new functionality on the web or into desktop applications?
See it before (Score:5, Interesting)
Re:See it before (Score:5, Informative)
Computers users in the 80s and 90s were a different breed in general than today's users. For most users today, an iPad is good enough for just about anything they will ever want to do.
Re: (Score:3)
Users have not changed all that much, the ratios of use cases are not all that different than they were in the 80s and 90s, with the majority of users having fairly modest requirements but a market with increasing hardware needs.
Re: (Score:3)
For most users today, an iPad is good enough for just about anything they will ever want to do.
The keyword here being "most".
Sure, most folks just want their facebook and online shopping... most of the time. However, there is still a not-insubstantial percentage of folks who want to have a means of using their computer while it is off the network.
There is also the limitations of a tablet... my wife uses nothing but an iPad most of the time, but sometimes she wants to bust out a spreadsheet for her local volunteer group, or write something a bit more involved than just a quick note. Sometimes she want
Re:See it before (Score:5, Insightful)
Sure, most folks just want their facebook and online shopping... most of the time. However, there is still a not-insubstantial percentage of folks who want to have a means of using their computer while it is off the network.
And there are some people for whom that is not a want but a NEED.
http://en.wikipedia.org/wiki/A... [wikipedia.org]
The computer of a programmer working on the design of a new piece of classified military hardware isn't going to be able to connect to the open Internet. If the security of the system is sufficiently important, the machine may not be allowed to connect to any network at all.
Re: (Score:3)
If you want to run applications completely controlled and filtered by Apple, yeah, go with that. If Apple doesn't like something about some app you want to run, then you do without that functionality. If Apple wants you to use their crappy version of some app so they kill the competing apps, which one are you gonna be using?
I am fine with the prospect of using mobile devices to do everything assuming they have peripherals and expansion, but the prospect of Apple and Google controlling all software, not so
Re: (Score:2, Interesting)
iPads are media consumption devices, not workplace tools, unless of course FB, NetFlix and YouTube are what you do at work.
Re: (Score:2)
An iPad, seriously...?
iPads are media consumption devices, not workplace tools, unless of course FB, NetFlix and YouTube are what you do at work.
That would be an interesting job.
Re: (Score:3)
Some people keep saying but I have yet to see any personal evidence for this alleged "trend". All of my relatives, all of my friends and all of my colleagues (with not a single exception!) have a PC and a tablet in their household, if they own a tablet. I have never heard of anyone who uses a tablet but does not use a PC or laptop at the same time.
I also highly doubt that there is any statistical evidence for this trend, i.e. that clearly more people have a tablet and no PC/laptop than have both. Probably true for mobile phones, but not for tablets. Tablets are additional throwaway/low-lifespan gimmicks rather than replacements for PCs and laptops.
There are some, but mostly those are people who aren't using personal computers to produce in the first place. In the early-mid 90's a common refrain (I remember my parents even saying it at one point) was "Why do we need a computer?" For those people, virtually everything you could respond to answer that question with (intercommunication with others, organizing, entertainment, writing text documents, etc) is served by a combination of smartphone and/or tablet with access to internet-enabled applications. M
Re:See it before (Score:5, Interesting)
We will then get a decade or two of young programmers rediscovering what those 'unhirable' older ones already knew, holding themselves up as visionary geniuses for realizing things that those 'behind the times' client/server developers were 'doing wrong', attracting hype and investment dollars while repeating the same mistakes people made (and learned from) two generations ago.
Rinse, lather, repeat.
Re: (Score:3, Interesting)
It might be changed though. We have a different set of users than we did in the 1980s and 1990s. A lot of them don't care about privacy, or security... just see their files magically saved somewhere (be it local storage, the cloud, or redirected to a server belonging to the MSS.) Users who really don't care if they have an i3/i5/i7 CPU or what amount of RAM... but just want something that will run their walled garden apps. These users are brought up where the walled garden is everything, and that an ave
Re:See it before (Score:4, Insightful)
Back in the "don't copy that floppy" days, we were promised by software publishers that prices for games and applications were high due to piracy. Now with consoles having a 0% piracy rate, if one factors all the DLC needed to play an average console game, the price has gone up by 2 to 10 times.
DLC is not NEEDED to play, it's optional. One can still play Skyrim without Dawnguard, Hearthfire or Dragonborn. One can play Akiba's Trip without purchasing the DLC for the Prinny weapon. One can play War Thunder without buying the Premium vehicles.
One must also remember that back in the don't-copy-that-floppy days, the average game cost $39 and had much less content. Taking inflation and included content into account, modern games are cheaper than the ones of the 70's/80's.
Re:See it before (Score:4, Funny)
Re:See it before (Score:5, Interesting)
Problem 1)
The problem open-source desktop applications have is that the feedback loop takes forever. It is difficult to edit a GUI or modify a behaviour immediately. One has to find the (current) code base, compile it, make sure one has the right libraries (which may be different from the system versions) and make a local installation.
I would like to see a program/framework/DE/whatever where you can, while you are in an interface, click "edit code" and modify the program on the fly. Sugar/OLPC began implementing such functionality for their Python programs. This would make scratching your own itches much easier, as well as redistributing your modifications.
All progress comes from having fast feedback loops. Make it easy for users to play around (and exchange modifications).
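The "edit code and see the result immediately" loop described above can be approximated even today with hot reloading. A minimal sketch in Python (the function name and polling approach are illustrative, not any particular project's API):

```python
import importlib
import os
import time
import types


def hot_reload(module: types.ModuleType, interval: float = 0.5):
    """Yield `module`, re-imported whenever its source file changes.

    A GUI main loop could pull from this generator each frame so that
    edits to the module's source take effect without a restart.
    """
    path = module.__file__
    last_mtime = os.path.getmtime(path)
    while True:
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            module = importlib.reload(module)  # re-execute edited source
        yield module
        time.sleep(interval)
```

Real live-coding environments (like the Sugar/OLPC work mentioned) go further by preserving object state across reloads, which `importlib.reload` alone does not do.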
Problem 2)
Another change I would like to see in desktop applications is that one should not have to program any UI logic (creating widgets, connecting events) at all; it just seems redundant. Why do we design a UI by writing *text* in 2015?
It should be possible to auto-generate a UI from the type of the objects one wants to modify, from the constraints of best practices in UI design, perhaps with a workflow definition. It's useless to have all this freedom when we always want it the same way (text boxes for text input, checkboxes for booleans, lists for lists, buttons for actions) anyway. Why hasn't a library come along that does that? At least Glade lets one draw UIs, producing an XML file that can then be loaded and wired up with events. More work on making programming UIs trivial, please.
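The type-driven idea sketches out in very few lines. Here's a toy version in Python using dataclass introspection (the widget names and the `auto_ui` function are hypothetical, for illustration only):

```python
from dataclasses import dataclass, fields

# The mapping the comment describes: text boxes for text,
# checkboxes for booleans, lists for lists.
WIDGET_FOR_TYPE = {str: "textbox", bool: "checkbox", int: "spinbox", list: "listview"}


def auto_ui(cls) -> list[tuple[str, str]]:
    """Derive (field name, widget kind) pairs from a dataclass's types."""
    return [(f.name, WIDGET_FOR_TYPE.get(f.type, "textbox")) for f in fields(cls)]


@dataclass
class Contact:
    name: str
    subscribed: bool
    tags: list
```

A real toolkit layer would then instantiate actual widgets from these pairs and bind them back to the object's fields; libraries in this spirit do exist (e.g. form generators built on type annotations), though none has become a desktop standard.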
Problem 3)
Deployment. It's ridiculous. Today we can easily install python/ruby libraries from git repos, but not programs that will run in user-space?
In fact, perhaps the whole packaging of Linux systems should be different. What if every user was running in a virtual environment where they can install any software they want, with the other users being isolated from those changes. In the days of Docker and KVM that should be quite possible.
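Python's standard library already demonstrates the per-user-isolation half of this idea: each user (or each app) can get its own environment to install into without root and without affecting anyone else. A small sketch, with the function name and paths chosen for illustration:

```python
import os
import venv


def user_sandbox(path: str) -> str:
    """Create an isolated environment a user can install packages into
    without touching the system (or other users') packages.

    Returns the path to the sandbox's own Python interpreter.
    """
    venv.create(path, with_pip=False, clear=True)
    bindir = "Scripts" if os.name == "nt" else "bin"
    return os.path.join(path, bindir, "python")
```

System-wide, the commenter's vision is closer to what containers and tools like Docker provide: the same isolation, but covering native libraries and the rest of the OS, not just one language's packages.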
Re:See it before (Score:5, Interesting)
Why do we design a UI by writing *text* in 2015? It should be possible to auto-generate a UI
Um, it is, and this has been possible for quite some time. Lots of IDEs auto-generate code for UIs. Qt's QtCreator comes to mind, and I'm pretty sure MS has had something like this for ages. I'm sure there's several others.
In fact, perhaps the whole packaging of Linux systems should be different. What if every user was running in a virtual environment where they can install any software they want, with the other users being isolated from those changes.
It's been like this for ages. When was the last time you saw a desktop Linux system where more than one person actually used it, and there were multiple accounts on it? Better yet, when was the last time you saw such a system where multiple people were using it simultaneously? Linux, just like any UNIX system, certainly has the capability built-in to have multiple simultaneous users (since it is a multi-user multi-processing time-sharing system), but in practice no one does that much any more because we use PCs now, not shared centralized machines. Servers are a little different of course, but even here people are frequently running VMs these days so they have a full Linux environment to themselves; the big exception I can think of is ultra-cheap shared web hosting, and there the capabilities available to users are limited.
Re:See it before (Score:4, Interesting)
Linux Package Deployment
I don't think the parent was complaining about not being able to modify his own linux desktop because there are other shared users. I think the problem might be around distributions that only release certain versions of software. For example, I run an "old" Ubuntu 10.04 LTS release. It is nearly impossible to install the latest Chromium build due to package dependencies and management. However, I can run the latest Firefox since I can download the tarball directly. (And no, I shouldn't have to upgrade the entire operating system just to run a simple userspace program.)
Re: (Score:3)
Yes, this is a big problem with Linux distros and there's been a lot of talk about it and about solutions for it. The systemd guys even proposed an interesting solution that involves btrfs.
However, the OP seemed to be talking about having virtual machines so that different users have full control over their applications, unlike now where for the most part applications have to be installed by someone with root access.
Re: (Score:3)
The VM for each application is a good idea. Android got close, by at least creating a user for each app using the standard unix permission model where each user can't see another user's files so each app is separate. But they still have all the "what APIs does this device allow" and "what APIs have this program implemented" problems similar to "what libraries does this distro have".
Re: (Score:3)
Computer manufacturers already offer that. You can buy a modern computer which easily has the hardware power necessary to do all that. What they don't offer is the software for that, because computer manufacturers don't make software, they just stick Windows on there or let you put your own software on it. Unless you're talking about Apple of course, but obviously even though they control the software on their machines they can't figure out how to make all that work.
The problem with your idea is that, wh
Re: (Score:2)
Re:See it before (Score:5, Interesting)
Anyone who looks back in my posting history will see that I have long, LONG advocated for tackling the UI and packaging paradigms on FOSS desktops because they choke-off interest from the type of creative person who develops apps. (Even worse, they scare away people who would like to experiment and become budding app developers, so those people cut their teeth on OSX or Windows almost as a rule.)
PC tools are supposed to link the user with the power and features of the underlying hardware, making them at least discoverable in the GUI; In other words, there must be lots of vertical integration. Also... the GUI must have a 'gist' or feel consistent because this is a sign of feature-stability in the OS.
What FOSS has is a bunch of developers who tinker with the OS itself (I include the GUI in this, as it rightfully belongs in the category of OS) and assume that anyone who understands how a system works internally can trivially design GUI features... a big, big flaw in what is not so much an articulated belief as an unhealthy attitude. This is part of the subconscious of the FOSS world, and it results in maladies like not being able to describe fixes and workarounds (or just general usage instructions) as GUI snapshots and walkthroughs (almost always, the user will be directed to the CLI); It means even seasoned tech support personnel will struggle to interpret DEs and other UI features they are not very familiar with. Just getting to the point where your cousin or boss can try out your creations is hell.
App developers should have the power to create exceptions for UI features in their *apps* (I said apps, not OS), because that embodies the two things app developers subconsciously look for: power and feature-stability. The default behavior is always the OS way (i.e. ONE way) out of respect for all users in general; If the default behavior/appearance is ten possible ways, then the app developer feels like they are managing chaos instead of power.
My 'remedy' for the FOSS OS problem would be for a distro like Ubuntu to shed its identity as a "Linux distro" because the Linux moniker just confuses people at this point; and to take full control over the UI design so that it conforms more to a single vision (something that is apparently already under way). Pretty much all of the OS except the kernel should be original to the project or forked and, as Google did for a while with Android, Canonical should threaten to fork the kernel if that is necessary to improve the UX.
I'll also point out that Ubuntu has gotten some meta-features that were typically missing from a Linux distro, like a full-blown SDK and extensive whole-system hardware compatibility tests and searchable database. What would remain to be done beyond this is to standardize on a GUI IDE (with capabilities like Xcode) and extend the hardware program to include a certification process (with licensed emblem) that system and peripheral manufacturers can use in a straightforward way.
Also, packaging is a whole other can of worms, though I personally think emulating OSX app folders would be a good foundation for easily-redistributed apps. This means that an OS repository would have to stop at some well-defined point instead of trying to mash all the apps and OS together along with the kitchen sink.
Re:See it before (Score:5, Insightful)
In the 80s and 90s. X terminals and the like. Sooner or later the users want their power back. It will be interesting to see what happens this time around.
Not surprisingly, we neither trust our web browser, the company providing the software, nor the network it all operates on. The majority of things I use my PC for, I am not ready to release to "the cloud".
While I'm glad that hollywood starlets think the cloud is safe enough for nudes, all that proves pretty thoroughly it's not safe for anything important.
Re:See it before (Score:4, Insightful)
In the 80s and 90s. X terminals and the like.
Thin client has arrived after 30 years of talk, and its name is Chromebook. Not catching on like wildfire, but certainly more than any previous example I can think of.
Re: (Score:3, Interesting)
Wait a second, users didn't want their power back. What happened is that someone called Bill Gates came along with the idea that you could *buy* a copy of a piece of software. That you believe there is something natural about this is a testament to how effectively he shaped the personal computing industry.
A so called 'cloud' based system is actually just a return to how things were going to go in the first place.
It is also an incredible annoyance to older CS people that it has taken nearly 20 years to get b
Re: (Score:2)
Will there be enough of a market of 'control preference' users to justify the desktop ecosystem? What application types are there that would likely draw a viably wide desktop audience?
"control"? Dunno... "Storage"? Definitely.
Silly (Score:5, Insightful)
No. And the "trend" referred to here is 99.999999% junkware. Slow junkware. Junkware that typically invades privacy and/or bombards with ads. You can't compete with my image editor. You can't compete with my word processor. You can't even compete with my text editor. You can't compete with my SDR software. You can't compete with my database. You can't compete with my media center. You can't compete with my fish tank controller. You can't guarantee that you, your ISP, my ISP, the connection(s) between them, the name servers, the competition for bandwidth at any one (or more points) will work to my satisfaction. Or at all. You can't even promise the app will BE there (cough, Google, cough) when I need it. Or that it will work properly in my chosen browser. And you're almost *certain* to screw it up so badly that it does all manner of things with rollovers, popping up garbage ads and menus without an instantiating click or drag or keypress from me.
And the other .000001% ??? Minimalist web-apps that never, ever hold a candle to a real app running on your own hardware.
Seriously, even the *speculation* is ridiculous.
There will always be a need... (Score:5, Interesting)
Re: (Score:2)
I do a lot of Audio Visual work on my computer. Wouldn't be easy or convenient on a web application.
Re: (Score:2)
Looking at the trends of today, however, the vast majority of people seem only too willing to serve up their privacy on a silver platter. Are there enough people who care about privacy to create an ecosystem around, or will we have a divide between the functional, privacy free, mainstream technology world, and the dusty poorly maintained, undermanned and underfunded world where a few diehards cling to ideals that have long since been abandoned?
Re: (Score:3)
That being the case small personal computers and encryption will become an ever more important issue.
Re: (Score:2)
Not to mention, Companies. Would you want your internal documents stored on Google Cloud where it could be "read" or even sent to a 3rd party ?
The answer appears to be a resounding "Yes, as long as you don't force me to consider the implications."
I think you'll find that today, the majority of corporate IP from the western world has found its way onto the Google Cloud. When corporations lock things down so tight that it becomes a day's endeavor to send a spreadsheet from one office to another in a large corporation, people tend to give up and use gmail. When you're a small shop and can't afford the resiliency that's a requirement in this day and
Re: (Score:3)
Not to mention, Companies. Would you want your internal documents stored on Google Cloud where it could be "read" or even sent to a 3rd party ?
The answer appears to be a resounding "Yes, as long as you don't force me to consider the implications."
I think you'll find that today, the majority of corporate IP from the western world has found its way onto the Google Cloud. When corporations lock things down so tight that it becomes a day's endeavor to send a spreadsheet from one office to another in a large corporation, people tend to give up and use gmail. When you're a small shop and can't afford the resiliency that's a requirement in this day and age, you're willing to risk the perceived small chance of losing your IP for the pure fact that you can afford to do business using Google Cloud.
So would you "want" your documents stored there? Probably not. When it comes to pragmatism however, most companies and/or at least a few employees let pragmatism win out over privacy concerns.
That would be nice.. But my company DOES care. They block access to Goggle Drive and DropBox type sites (and you can be fired if you send emails with corporate docs to other sites/people).
Re: (Score:2)
Too much dependency (Score:3)
I only use online features because they're free-as-in-beer along with their ease of access. If either changes then there's no reason to continue using an online version.
Re: (Score:3)
(takes out his autonomous, self-contained smartphone)
what were you saying about new stuff having more dependencies?
Re: (Score:2)
Re: (Score:2)
Nothing makes you as dependent as an on-line accessible only feature.
Re: (Score:3)
Re: (Score:2)
(takes out his autonomous, self-contained smartphone to post on Slashdot)
what were you saying about new stuff having more dependencies?
Oh. Yeah.
No (Score:5, Insightful)
The fact that this question gets asked basically every year should more than sufficiently answer the question.
Re:No (Score:5, Interesting)
The fact that this question gets asked basically every year should more than sufficiently answer the question.
Exactly.
The rapid deployment of high speed internet access, fiber to the home, cable and other last-mile technologies, even in developing nations, means that the problem of needing offline access to functionality is becoming more and more a moot point. It is also rapidly doing away with the problem of lengthy load times for bulky web code.
Oh, bullshit. Millions of people in developed nations (particularly the U.S.) have "broadband" that is a few hundred Kbps, or a couple of Mbps--let's just call it 3 orders of magnitude, or more, slower than a spinning disk. And of course there's an order of magnitude difference, or more, in latency as well. And of course, absolutely nothing about the deployment of high-speed internet access deserves to be called "rapid"! Remember, we were hearing about how the rapid rise in internet access speed was outpacing CPU speed increases and would soon make data transfer times irrelevant in the 1990s!
And that's before we even get to the performance difference between JavaScript DOM manipulation vs compiled C manipulation of native view/control hierarchies. Yes, I've heard about how much faster JavaScript has gotten. I use it. I also use native toolkits. You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.
And that's before we even get to the performance difference when dealing with more intense data manipulation.
And that's before we even get to the higher memory usage for a control in the DOM than for a native widget. (Don't believe me--inspect an input element, and tell me how many pointers it holds to objects & prototypes...)
Re:No (Score:4, Interesting)
You can show me the micro-benchmarks all day long; doesn't change the fact that a complex UI in JavaScript is vastly slower.
You're conflating JavaScript and the DOM. With FTL, JavaScriptCore can run C code compiled via Emscripten to JavaScript at around 60% of the speed of the same C code compiled directly. That's not a huge overhead (40% is a generation-old CPU, or a C compiler from 5 years earlier). Transitions from JavaScript (or PNaCl compiled code) to the DOM, however, are very expensive. This is why a lot of web apps just grab a canvas or WebGL context and do all of their rendering inside that, rather than manipulating the DOM. Optimising the DOM interactions without sacrificing security is quite a difficult problem.
Re: (Score:3)
But that, in and of itself doesn't disprove the existence of a trend which does not show any sign of slowing.
There's a trend? It doesn't show any sign of slowing?
Where's your data? Show me the trend line, and show me that it's not slowing. As far as I can see, some people moved email to the web a decade or two ago, and since then, nearly nothing else has moved into being a web app.
Re: (Score:2)
Anything already slow on the desktop, and which is always being used for ever more demanding projects, isn't really moving to the web.
Autocad for example.
Re: (Score:2)
Re: (Score:2)
The fact that this question gets asked basically every year should more than sufficiently answer the question.
Yup, the Apple Newton clearly disproves that nobody wants to own a tablet.
Re: (Score:2)
Corollary: There are two ways a computer is used: as a tool, and as an appliance.
Office 365 (Score:4, Interesting)
Office 365 is a poor example. The web interface has definitely come a long way, but it falls over under any serious work. Maybe they'll get there, but for now, local apps integrated with the cloud backend seem to work better.
Right now I definitely wouldn't want to try working with RAW photos from a DSLR or edit high-bitrate 4K video using a web app. Maybe in ten years, but then again, those digital formats will probably have moved on to another level by then too.
Oh, and email: there's still definitely a need for offline access, be it a traditional MUA or on a mobile phone. Online isn't online enough even for this.
Re: (Score:2)
Two problems (Score:4, Insightful)
Single point of failure and security. Some applications might lend themselves to running exclusively in a browser and some will not.
Re: (Score:2)
You talking about the desktop or the cloud? :)
Re: (Score:2)
Good question. I'm talking about the cloud. All communication between your PC and the cloud based application occurs over TCP/IP/routers etc. which are not secure. They can be made secure if you are willing to go to a lot of effort and are willing to give up some conveniences.
Can a PC be compromised? Sure. The usual attack vectors are the internet and physical access to the keyboard.
My point is that you can more easily control the data on an PC. You can even, if you choose, disconnect it from the internet a
Re: (Score:3)
I respectfully disagree. Especially if I'm the sysadmin of the centralized cloud servers.
Although my original comment was somewhat fatuous, a user on a single machine is much more often a security/failure risk. It's not so much which is better and more reliable these days (it's certainly centralized control and storage, aka 'The Cloud'), but who controls that central point. This all started decades ago with centralized file servers, and processing farms for big data. It's eventually going to be just the bro
Re: (Score:2)
"It's not so much which is better and more reliable these days (it's certainly centralized control and storage, aka 'The Cloud'), but who controls that central point" - Which introduces a third point of failure. How do I know, with any degree of certainty, that the person administering the server(s) is competent? Or that their management is giving them the right tools and the right priorities to protect my data? The short answer is that I don't.
I'm not suggesting that you are not competent to do it or that
Re: (Score:2)
Your terminal -- mainframe concept sounds a bit too 70ish to me. Why should it be the future of computing now when it already failed in the 80s-90s for most uses? I really don't buy it.
Yeah, right ... (Score:5, Insightful)
Sorry, in my experience these web based applications are crap, and they started around the .com era where suddenly everybody thought everything belonged on the web.
The "problem of needing offline access" most certainly has not been solved, and not all of us want our data in the cloud.
If the web browser is going to become our operating system, we're fucked -- because we'll all be running garbage code which covers some of the use-cases, but which generally has terrible interfaces as we try to shoehorn every problem into something which doesn't lend itself to the web.
Many of us have lamented the move to web-first technologies as a byproduct of lazy corporations writing mediocre software.
If you think the end of desktop applications is nigh, I sincerely hope you're wrong -- because the endless stream of crap web pages which almost work is getting tedious.
And it mostly ends up in greedy corporations more worried about analytics and advertising, than writing usable software which actually solves the problems.
Re: (Score:2)
The "problem of needing offline access" most certainly has not been solved
Note that HTML5 does allow effectively unlimited (policy set by the user) local data to be stored, and applications that run completely disconnected. It's possible to write a web app that uses the browser for the UI, but only uses the network for software updates.
Re: (Score:2)
I would go a step further on the offline topic: many developers needed time to get used to the idea and to work out how to go about it, and good patterns needed to spread.
And the browser vendors are still improving things by adding new and better APIs like service worker.
Resource Proximity & Browser Limits (Score:2)
Re: (Score:3)
Could you ever imagine pro video editing (i.e. Adobe Premiere / After Effects) 100% within Chrome
Depends. With WebGL / WebCL, I can imagine previewing effects there quite easily. I can also imagine that it would be nice to be able to do the real rendering runs on a rack somewhere else. The more difficult thing is imagining moving the multiple GBs of data between the two. Possibly uploading the raw source data to the server once, keeping the local copy, and just syncing the non-destructive editing instructions would work.
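The "sync the editing instructions, not the footage" idea is essentially an operations log. A toy sketch in Python (the op names and fields are hypothetical, not any real NLE's format):

```python
import json

# A non-destructive edit is just data: the gigabytes of raw frames stay
# on the local machine, and only this small instruction list is synced.
edits = [
    {"op": "trim", "start": 120, "end": 4500},
    {"op": "note", "text": "grade pass 2"},
]
payload = json.dumps(edits)  # a few hundred bytes cross the network


def apply_edits(frames, payload):
    """Replay a serialized instruction list against locally held frames."""
    for op in json.loads(payload):
        if op["op"] == "trim":
            frames = frames[op["start"]:op["end"]]
        # other op kinds elided in this sketch
    return frames
```

Both the local machine and the render rack can replay the same log against their own copy of the source, so only the log needs to move.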
If I can't own it and need to rent it (Score:3)
through my browser or a half-installed program on my computer, or if I have to be online every 30 days, then no thanks; go and die already.
What's old will be new again.. (Score:4, Insightful)
Eventually people will get fed up with paying $4.99 in perpetuity to a dozen or more vendors, and we'll have single pay licensing again. Legislative changes relating to data protection will complicate cloud migration for some professions, and I imagine state spying is starting to have economic impact.
I've seen the cycles too; the difference now is a legion of programmers and an even bigger pile of code out there. Computers (hardware) are also trending toward very low cost now as well.
Software pricing trends toward zero at volume, since there's no marginal cost; I'd expect more and more core functionality to be free. This has already happened to some degree in the Apple ecosystem, and Microsoft is bundled with everything.
Another prediction: More and more functionality will come bundled into the OS, and you can count on paying a subscription for it (or a fee when you upgrade).
You want to jump on the next big rage? Nice, clean applications, web based or not, devoid of crapware and malware and in-app-purchases and ads that do what they're supposed to, cleanly, nothing more, and easily connect together through standard interfaces. It's almost like someone built something like that before.
On the other hand, no application is complete until it has an email client..
Short Answer (Score:2)
Consumer: Web
Re: (Score:2)
Field worker: mobile phone, tablet, or laptop (applications can be native, web-based, or hybrid)
On-site worker: a combination of several in many cases (native, web-based, or hybrid)
Re: (Score:3)
Absolutely this.
There is a huge difference between content production and content consumption. I do both. When I'm out and about checking Facebook or reading my email and so forth, I don't need a desktop app. I can get by just fine doing all of my online shopping with a tablet or even my phone. Hell, I use the built in app on my Blu-ray player to stream my Netflix movies. Outside of some very specific tasks (and not counting work), I live most of my life without having to touch a desktop app.
But when I
Re: (Score:2)
The desktop is going away in the home.
Partially correct. However, don't forget the viability of the PC as an entertainment platform. A PC has a form-factor that makes it optimal for many types of games that no other form factor can really match. And people who create any sort of digital content, whether it's professional or a hobby, will probably want a PC at home for the task. Plugging a tablet into a docking station doesn't necessarily make it a PC-like system suitable for anything but the simplest content creation tasks.
I think what's mo
Bad Idea (Score:5, Informative)
Stuff like this is a marketing person's dream: dependence. But seen through the eyes of marketing people, they only consider the small set of applications they themselves use, and those office applications are only a small part of what people actually use.
Applications like CAD, design, bitmap editing, and 2D vector art would run terribly slowly over the net.
The tools people use to make your stuff would become more expensive, as is already starting to happen, and those costs will be passed on to you.
It would be a bad computing world in which a company could throw the switch and discontinue your product in one day, with no warning.
If there was a disaster or real war (on our soil), no one would be able to work, at all, because there would be concentrated central points of failure.
Re: (Score:2)
- You are partly talking about niche users. It's the same situation as this: most people working at an office aren't developers who need big workstations. Yes, these people are important, but they don't represent the general public. Certain organizations have certain workloads that only run well on mainframes. You might not believe it, but I believe the mainframe market was even growing in 2013; I haven't really followed it lately, though. That doesn't mean that mainframes will be the next trend.
- you mentioned
Internet access (Score:4, Insightful)
Everyone thinks the cloud is great - till the backhoe goes through your fiber line and you either don't have a backup data connection due to the fiber cut being down the street, or you do have a backup data connection but it doesn't have the capacity to handle everyone running on the cloud. There are many points in the country where even if your ISP does have a backup, you will be down for quite some time while they reroute (and everyone else is trying to reroute as well). When most ISPs in town use the same trunks to get to the real world, you don't even really have many choices for redundancy.
People who live in Silicon Valley, and in some countries with really good overall connectivity, are spoiled with many options. Out in flyover country, things are tougher. Then think of places with even less connectivity than the US has.
Keeping the company up and running by keeping the data local has a lot of advantages.
Re: (Score:2)
Re: (Score:2)
Everyone thinks the cloud is great
Not everyone. I, for one, think that the cloud is the exact opposite of great. No third party provider can be trusted, by law.
It depends on what you are doing (Score:3)
Simple programs used by the general public could conceivably be served from the cloud to a browser
Even for those simple things, many people will still prefer local data and local control
I find it hard to imagine that serious stuff like CAD, video editing, digital audio workstations, etc could ever be forced into this model
I, for one, require local data, local software and local control..and I will NEVER rent software
We rolled out a few web applications ... (Score:2)
... at work recently. Bunch of crap the whole of them.
One basically only works reliably in Firefox, one only in IE, one only in Chrome. And then of course there is the problem that one other needs an option in Chrome set to "on", the other needs that same option set to "off" to work.
So at the moment it seems any more complex "web" application I look at basically needs its own sandboxed browser to avoid interfering with all your other web applications, and the whole internet itself. And at that point, HTML is a
The limits keep on changing. (Score:2)
Right now the limits are in High End Graphic Processing, and Interfacing with external hardware.
Most applications are rather boring: take in some input, do some calculation based on the input, perhaps look up some data in a database, then give you an output. This batch type of processing is great for the web, as a big system in the background crunches all those inputs rather quickly and you get your results back.
However, for high-end gaming/CAD, the browser will cause too much overhead.
Then yo
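The "boring" request/compute/respond shape the parent describes could be sketched like this (the tax-rate lookup and field names are invented for illustration):

```javascript
// Take in some input, look up some data, do some calculation,
// give back an output: the shape of most server-side web handlers.
function handleRequest(input, db) {
  const rate = db.lookupTaxRate(input.region);      // look up some data
  const total = input.items                          // do some calculation
    .reduce((sum, item) => sum + item.price, 0) * (1 + rate);
  return { region: input.region, total };            // give an output
}

// A fake database stands in for the real lookup.
const fakeDb = { lookupTaxRate: region => (region === 'TX' ? 0.0825 : 0.05) };
const result = handleRequest({ region: 'TX', items: [{ price: 100 }] }, fakeDb);
// result.total ≈ 108.25
```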
Re: (Score:3)
Right now the limits are in High End Graphic Processing, and Interfacing with external hardware.
And user interface in general. I've yet to see a web-based UI that even comes close to being as good as a well-developed native one.
Re: (Score:2)
Please explain further...
For most standard interfaces, I haven't seen anything that you couldn't do on the web.
However, the dev tools (especially Microsoft's) do a poor job of making them easy to code.
This again? (Score:5, Informative)
- A flawless 24/7 connection to the internet is plain impossible, and any application that does not take this into consideration is a piece of shit;
- Your data on a third-party server is always a security problem waiting to happen;
- Browsers cannot provide the exact same features as a native application without being completely rethought;
- When a web application has successfully emulated a desktop application, it usually costs double or triple the computational resources to do the same thing as a native application;
- HTML is not designed for making desktop GUI applications; it needs a ridiculous number of very ugly hacks to do things that are done easily with native GUIs.
That said, of course there are tasks where a web application is useful... But it is foolish to believe that any task can be done in a web application.
Re:This again? You forgot one (Score:2)
Or goes bankrupt?
Poof!!
You no longer have the application, or the data. Your 'competitive edge' is now random electrons heading for NGC6724.
Connectivity (Score:2)
While it's true that browsers now have local data stores for data that might reduce the need for an active connection to a server, native apps usually are better able to handle a greater amount of data than
Why we targeted the browser... (Score:5, Interesting)
I run a company that develops a laboratory informatics platform for data-intensive science applications that mix wet lab and analytics operations into single workflows, with gene sequencing as the motivating application - think LIMS with a pipeline and visualization engine, if you're familiar with the space. (Lab7 Systems, if you're curious: http://www.lab7.io/ [lab7.io])
When we started development a few years ago, we had to make the decision as to whether or not to build a desktop application or a browser-based application. At the time, this wasn't an easy decision. Some aspects of the UI are straightforward form-style interfaces, but others are graphics heavy visualizations of very large data sets (100+ GB in some cases). Scientific and information visualization have almost always benefitted from local graphics contexts and native rendering engines. In addition, the data decomposition tasks often require efficient implementations in compiled languages. Our platform also controls analysis processes on large clusters, another task not well suited for the browser.
We gambled a bit and decided that the browser would be our primary user interface. Two trends at the time helped us make the decision (and luckily they both held steady):
(1) The JavaScript engines in all the major browsers get faster with each new release and now outperform other scripting languages for many tasks.
(2) The JavaScript development community is maturing, with more well-engineered and stable libraries available
A few other considerations helped us make the call:
(1) Our platform is a multi-user system. A desktop client would add to the support burden for our customers.
(2) Our backend needs to integrate with compute clusters, scientific instruments, and large, high-performance file systems. It is server-based, regardless of the client.
(3) The data scales we were dealing with also required "out-of-core" (to use an older term) algorithms for rendering, so the client would never get entire data sets at once.
(4) REST/JSON... XML, XML-RPC, SOAP, and all the others are a pain to develop for (I speak from experience); REST/JSON significantly reduced the amount of code we needed to maintain state between the client and server.
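A sketch of that REST/JSON point (field names invented for illustration): the object the UI already holds serializes and parses in one call each way, with no generated stubs or schema plumbing in between.

```javascript
// The same object literal the UI works with goes over the wire
// directly: one call to serialize, one to parse, no XML schema,
// no code generation, no SOAP envelope.
const sampleState = {
  sample: 'S-042',
  workflow: 'rnaseq-v2',
  steps: [
    { name: 'qc', status: 'done' },
    { name: 'align', status: 'running' },
  ],
};

// Client → server: the body of a PUT/POST.
const wire = JSON.stringify(sampleState);

// Server → client: immediately usable, no unmarshalling layer.
const restored = JSON.parse(wire);
```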
Since we made the call to use the browser, we haven't looked back. Early on there were some user interactions that were tricky to implement across all browsers, but today they've all caught up. Our application looks much more like a desktop or (*shudder*) Flash application, with a very rich UI (designed by an actual UX team that gets scientific software ;) ) and complex visualizations. It's also been relatively straightforward to implement, thanks in large part to the maturity of some JavaScript libraries (we use jQuery, D3 (for complex filtering, but not for visualization), Canvas, Backbone, and a few others).
Personally, I can't imagine ever writing a desktop application again. The browser is just too convenient and, in the last few years, finally powerful enough for most tasks.
-Chris
Old browsers (Score:2)
How do you cope with old browsers? Supporting 2 year-old browsers is already difficult, I can't imagine supporting 6-8 year-old browsers, which is the usual thing in enterprise. Or are you limiting your development to old APIs only?
Re: (Score:3)
Two years is our horizon for browser support. Two other trends that have helped us in this regard are (1) that most browsers auto-update or at least nag you a lot and (2) IT departments are more accepting of users running Chrome/Safari/Firefox alongside IE. We're targeting enterprise/internal users, not everyone on the Web, so we can also put some requirements in place when we deploy.
Most of our functionality uses standard HTML/CSS/DOM features, so we haven't had any issues with features dropping. We d
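A sketch of the kind of capability gate such a deployment might use (the global object is injected so the check also runs outside a browser; the feature list is illustrative):

```javascript
// Detect the browser capabilities the app depends on, up front.
// The global is passed in rather than referenced directly so the
// function is testable outside a browser.
function detectCapabilities(globalObj) {
  return {
    serviceWorker: !!(globalObj.navigator && 'serviceWorker' in globalObj.navigator),
    localStorage: typeof globalObj.localStorage !== 'undefined',
    canvas: typeof globalObj.HTMLCanvasElement !== 'undefined',
  };
}

// In an old enterprise browser some of these come back false and
// the app can fall back or refuse to load, instead of breaking
// mid-session. (Outside a browser they are all false.)
const caps = detectCapabilities(typeof window !== 'undefined' ? window : {});
```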
Bahahahaha! No. (Score:2)
Netcraft confirms that desktop applications are dead! Also Desktops. :)
Seriously, this bunk is garbage. Perhaps for personal use there may be some transition. I know I use Google Docs sometimes, as I don't want to bother buying a personal copy of Office for hundreds of dollars given how little I actually use it outside of work... though I've probably used OpenOffice more.
In a corporate environment? Just no. This also happens to be where most of the usage is located. It is not even close; it is absurd. Ask a syste
Web apps chug like a sloth on Quaaludes (Score:2)
Web applications chug like a sloth on Quaaludes compared to local applications. They consume more CPU, they take forever to load/store data, and their interfaces are clunky as hell (Google Office apps included.)
Personally I think it's the web developers that keep asking this question every year, hoping to get praise for their shitty efforts over the past year to catch up to 1990's desktop applications.
Re: (Score:2)
Could we wire up the web developers to a dynamo and electrocute them? :P
R&D software will stay native (Score:4, Insightful)
I am sick of hearing about how desktop apps are dead. How am I supposed to develop embedded applications through a web browser? I suppose a cloud compiler could do it --- assuming it supports my extreme customizations, and even then, I can't imagine how slow it'd be.
What about network tools? My open source project is a network test utility: http://packetsender.com/ [packetsender.com]. How can network test utilities exist other than a native desktop app? Am I supposed to create a browser add-on? Now we are just arguing semantics. Depending how deep the add-on is developed, might as well call that native.
The app world is more than just a means to consume video, music, etc. Some people need to do real work.
niche applications like CAD (Score:2)
Dear Marketing Wonk slashbaiting as advertising: (Score:5, Insightful)
Please, understand this categorical statement: I DON'T WANT YOUR FUCKING CLOUD SERVICE.
I do not want to rely on an internet connection to generate any trivial document.
I do not want even my meaningless documents stored "in the cloud", much less anything any private or commercial value.
I'm uninterested in making something simple, quick, and reliable into something complicated with more points of failure, slower, and unreliable (that in the meanwhile makes me dependent on you, and paying you for the privilege).
So no, stop asking.
Low Latency and Low Power Pose Problems (Score:2, Interesting)
Gamers will attest that running game logic server-side doesn't work for certain types of games because latency is too high. Going forward, augmented reality and virtual reality have even lower latency requirements. With some latency requirements getting under 5 ms, the actual physics (the speed of light) starts to prevent everything from running remotely.
Perhaps even more important: the further you send bits, the more power it consumes. Bits in registers use the least power. Bits in cache are still low power. B
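The speed-of-light point is easy to make concrete. Assuming signals in fiber propagate at roughly 200,000 km/s (about two-thirds of c in vacuum), a back-of-envelope bound on server distance looks like this:

```javascript
// Light in fiber covers roughly 200 km per millisecond.
const FIBER_KM_PER_MS = 200;

// Maximum one-way distance to the server for a given round-trip
// latency budget, ignoring routing, queuing, and processing delays
// entirely -- so the real limit is tighter still.
function maxServerDistanceKm(roundTripBudgetMs) {
  return (roundTripBudgetMs / 2) * FIBER_KM_PER_MS;
}

maxServerDistanceKm(5);   // → 500: a 5 ms VR budget caps the server at ~500 km
maxServerDistanceKm(100); // → 10000: fine for a form submit, fatal for VR
```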
Being Both (Score:2)
I suspect many "desktop" software companies will hedge and build the app as a web app, BUT keep a "local virtual web server" option so that it can have quick access to local disk etc. if needed. Light users may be fine with a cloud-centric approach, but power users may want the local approach for responsiveness and storage space control.
Some of my personal music "experiments" are like that: I only run them on my desktop, but they use a typical web browser stack such that they could be "in the cloud" with a
Re: (Score:2)
I suspect many "desktop" software companies will hedge and build the app as a web app, BUT keep a "local virtual web server" option so that it can have quick access to local disk etc. if needed.
But that wouldn't solve the problem of having to use a web-based UI.
Enough already (Score:2)
A decade and some change ago, I first noticed marketeers slithering out of the woodwork to belch their dreams of SaaS into the ether. The dream was never a statement about architectures of the future; it was always focused explicitly on raking in regular, predictable revenue. A concept customers have rejected, and continue to reject, regardless of the state of supporting infrastructure.
From where I sit the proof is in the pudding. I openly encourage our competitors to offer online subscription only services. We are making ban
Native is here to stay, the web will fail. (Score:2)
Without a doubt, the web is a crapshoot of browser inconsistency and standards. Imagine this hypothetical scenario: no more local apps, but you have a web server running locally, and installing an app installs it to that local web server. Your entire desktop is in a browser. What are the problems with this? Many: 1. Serialization to HTML/CSS/JS is slow and unnecessary; the code path to put a red rectangle on the screen is absurd. 2. Those interfaces prevent direct access to local hardware. 3. Operational La
Giving up privacy and control over data (Score:5, Insightful)
Richard Stallman covered this subject in detail, it is important reading: http://www.gnu.org/philosophy/who-does-that-server-really-serve.html
I am surprised this would even be asked here. The fact is, if you care about security and privacy, you don't want to use anything other than desktop apps. You want to avoid anything such as Google Docs for your normal letter writing and so on. One area of confusion is that people have trouble drawing a distinction between social networking, where you share things that you want other people to see, and something like a tax spreadsheet that no one else should see. With social networking, the material is sort of not private anyway and you want to share it, so little is lost by putting it on a server farm; and since it must be shared with others, the server farm facilitates the communication.
With a desktop application, where you are working on tax spreadsheets or other things that will not be shared, there is no need to put the data on a server someplace else, so why do it? In doing so you give up a huge amount of privacy, increasing the technical possibilities of access to the material on the server farm by other entities.
Using this cloud stuff, you lose control of your data. The cloud provider could pull the plug on the service at any time (and it happens: look at Google Code and GeoCities and the vast store of information that was lost with them).
Using the cloud for office apps is basically unnecessary: when you are writing a document for local use, or working on spreadsheet data, there is no technical need for a cloud service, and by using one you endanger your privacy and your control over the data.
What's really going on here is an attempt by large corporations to nickel-and-dime you and monetize you, perhaps by the minute, to use their software; whereas if you use an open source desktop app, you have unlimited use of the software for as long as you need, at no charge.
Secondly, open source is all about users being able to control, modify, run, and experiment with the code they use, and being able to read it. Using apps on a server farm takes away users' control over the software they use, just as it takes away users' control over their data.
Avoid Software as a Service like the plague.
Re: (Score:3)
I should add, desktop apps don't take away your ability to put your documents on a server if you want to share the data; what they do is assure that the data will not be uploaded in any way unless you specifically authorize it; otherwise it's only stored locally. If someone wants to put some data on a server, they can take the data produced by a desktop application and use their own server or another service to put the file online. This allows people to upload only the data that they want to share, rather than uploa
History doesn't apply the same way (Score:4, Insightful)
My initial reaction is to say that computing is simply cyclical; what was once mainframes and dumb terminals turned into locally installed applications on desktops and laptops, and now we're doing that again with Teh Cloud (tm). However, here's the difference:
1.) In the '80s and early '90s, the overall technical competence of computer users was higher. Yes, there was always the secretary who tried to use WordPerfect to make a database because she knew exactly one program, but overall, especially if you had a home computer, you had some concept of what you were buying and what the things on the spec sheet meant - computers being sold today will have helpfully descriptive bullet points like "great for multitasking" instead of "8GB RAM", something that wouldn't have passed muster in the last cycle. Today's .doc and .jpg files are more standards compliant, but many of the web apps that are popular aren't necessarily tied to the "open/change/save/close" paradigm that is commonplace in the desktop world.
1b.) Malware was much less a problem, back in the earlier days of computing. E-mail viruses were a thing, certainly, but for the most part, one ran a virus scanner and moved on with life. Also, with less hardware to throw at resident software, any kind of malware that ran resident would use enough system resources to alert the user to its presence, which is less the case now. Google Docs doesn't care about macro viruses, and users of that platform don't have to, either. There's value in that proposition for many less-technically-inclined users. Similarly, backups/hard disk crashes are "someone else's problem".
2.) In the 80's and 90's, systems were generally designed for interoperability a bit better than they are today. It's possible to send an e-mail from a server running Exchange 2016 Preview to an SMTP server from 1989 and it'll be able to meaningfully use the message. This is not the case with Facebook or WhatsApp.
3.) Inherently connected applications are the norm now. The utility of Facebook is "the rest of the stuff on Facebook". Google Docs and Pixlr don't apply to this point since they still deal with
4.) "Bleed little, bleed often" is a more culturally acceptable proposition for most people, as it gives them the instant gratification of getting the product at a price they can afford, without the gargantuan up-front cost that recurs as people feel the need to keep up with the Joneses. $5/month = $180 over the course of three years, which has basically been the shelf life of every version of MS Office released. That makes it a lot easier to swallow for many people, whether or not it's actually a value proposition in the long run.
4b.) The fact that virtually every software developer who has implemented IAPs instead of a one-time, up-front cost has made more money on that business model. At this point, it's solely a matter of principle that a developer of a paid application would sell a perpetual license, since general acceptance of subscription and IAP licensing makes it a better idea for everyone to go down that road instead. This was not nearly as true in the days of mainframe computing.
Now all of that being said, I do think that video editing is one of the few tasks that will never lend itself to a subscription model, beyond what Digital Juice does. Editing-as-a-service makes very little sense, since even a moderately sized project will still take tens of gigabytes of upload, which means "hours before you can edit". Meanwhile, 100GB of assets is not unheard of for even a two-hour wedding video shot in HD, and with upload speeds still measured in single-digit Mbit/sec, it can easily be days before editing can even be entertained. At the same time, costs are a lot higher for a company looking to get into that business, because you get much less ability to thin-provision even 500GB of space, as the nature of what's being done makes much more use of that space than the OneDrive accounts with a 1TB progres
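The subscription arithmetic in point 4 checks out, and the same numbers give a break-even point (a sketch; the $120 perpetual price is invented for illustration):

```javascript
// Point 4's numbers: $5/month over a three-year shelf life.
const monthly = 5;
const months = 36;
const subscriptionTotal = monthly * months;  // → 180

// Break-even: after how many months does the subscription cost
// more than a one-time perpetual license?
function breakEvenMonth(perpetualPrice, monthlyFee) {
  return Math.ceil(perpetualPrice / monthlyFee);
}

breakEvenMonth(120, monthly);  // → 24: cheaper to buy outright if
                               //   you'd keep the version 2+ years
```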
No. Local processing is superior (Score:3)
It is faster, gives the user more control, is more responsive, is more secure, etc.
It is generally better in most circumstances for anything serious.
Latency and bandwidth (Score:3)
Anywhere that latency is not adequately met by "cloud apps" will require desktop apps.
Over time, bandwidth will become less of an issue as it continues to improve, but latency is governed by the speed of light, and light ain't getting any faster.
Conversely, if a "cloud app" is a huge pile of JavaScript that does everything locally on your machine, it is arguable that it is really a desktop app.
Back end versus front end (Score:3)
If I want an application that is less computational, such as a calendar, I don't want a whole bunch of generic controls such as a back button or an address bar, or the extra work of dealing with tabs to go from reading Slashdot to working on my calendar for the week. I want to be able to click once on my OS taskbar and have a window come up that is dedicated to my calendar. The browser experience is just shitty for light applications, and I hope that companies out there continue to develop native Windows applications. I really don't get why people were so nuts over Google Mail. Maybe it was good, but the browser experience killed it for me every time.
Re: (Score:2)