
Ask Slashdot: A Development Environment Still Usable In 25 Years Time? 257

pev writes: I'm working on an embedded project that will need to be maintainable for the next 25 years. This raises the interesting question of how this can be best supported. The obvious solution seems to be to use a VM that has a portable disk image that can be moved to any emulators in the future (the build environment is currently based around Ubuntu 14.04 LTS / x86_64) but how do you predict what vendors / hardware will be available in 25 years? Is anyone currently supporting software of a similar age that can share lessons learned from experience? Where do you choose to draw the line between handling likely issues and making things overly complicated?
This discussion has been archived. No new comments can be posted.


  • OpenVMS (Score:4, Interesting)

    by nospam007 ( 722110 ) * on Monday June 15, 2015 @12:31PM (#49914585)

    It's been around for almost 40 years and most trains and lots of factories are still running on it.
    It runs on almost everything, since the original hardware was discontinued years ago.

    • Re:OpenVMS (Score:5, Funny)

      by danbob999 ( 2490674 ) on Monday June 15, 2015 @12:33PM (#49914611)

      Yeah, use something already outdated to make sure that no one even remembers it in 25 years.

      • Re:OpenVMS (Score:4, Insightful)

        by Anonymous Coward on Monday June 15, 2015 @12:45PM (#49914709)

        Outdated... proven... mature... what's the difference?

        • Re:OpenVMS (Score:4, Interesting)

          by Anonymous Coward on Monday June 15, 2015 @01:05PM (#49914883)

          Why did the stupid parent comment get modded up to 5, Insightful?

          There are huge differences between software that is outdated, software that is proven, and software that is mature.

          Outdated software is old, and isn't being actively maintained. It's a fossil, frozen in time. See CP/M.

          Outdated software isn't necessarily proven. It may have had stability problems in the past, and these were never fixed because the system was superseded with better technology. See Windows 95.

          Mature software is independent of age. Newly-written software can mature quite rapidly, if developed using the proper techniques and by talented developers. See Dragonfly BSD.

          Then there is software like FreeBSD, which is proven and mature, but not outdated.

          If you can't comprehend the differences between these concepts, then you'd best be keeping your mouth shut, son.

    • by Anonymous Coward

      Goddamn it, why would you even suggest VMS when we have FreeBSD?

      For over 20 years now FreeBSD has proven to be one of the most reliable and trustworthy operating systems out there.

      Unlike VMS, FreeBSD is very widely used, is very modern, is undergoing continuous development and improvement, and is truly open source (unlike proprietary or GPLed software), while still retaining superb compatibility.

      I'm confident that FreeBSD will be around in 25 years, and I'm confident that it will be as strong as ever then.

      • by Bert64 ( 520050 )

        Or just make your environment portable enough to run on anything vaguely unix-like... Linux, *BSD, Solaris etc will still compile and run extremely old code without problems.
        There are plenty of old programs out there which still compile and run on current systems without problems. I have code which predates linux, and predates any 64bit hardware yet still compiles and runs (very quickly) on a modern amd64 linux host.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      You are right. Good choice.

      Linux is not multi-year support; it used to be all-inclusive, especially with old equipment. I have equipment that is 15 years old, and cannot even use GCC 3 to recompile because it has ONLY 128MB of memory! GCC requires it to be all in memory at the same time, no pipes or swap! On top of that, I cannot even try a system that was compiled after '05, because everyone turned on required MMX, even in their "386" or "486" versions. If your software uses the name 386, support the limits o

      • by Bert64 ( 520050 )

        The Linux kernel will run on a 486 and upwards; I believe they dropped the 386 because it was extremely crufty... It also still runs on m68k as far as I know, all the way back to the mc68020 (I even have an m68k box to hand running a fairly modern kernel, but it's a 68060).

        Mainstream distros compile to require modern hardware by default because it makes sense to do so; not making use of such features results in inferior performance when running on newer hardware. That's why many people use distros like gentoo where yo

    • There's also Maxima (formerly Macsyma) and REDUCE, both of which started in the late 1960s or something like that.
    • openvms? is that really a thing?

      (I worked at DEC for a number of years and loved vms back then. on a vax. BACK THEN.)

      how can it be relevant, still? who today even knows the name DEC anymore? recruiters even tell me to remove my DEC company experience from my resume.

      I knew vms before unix. unix was hard for me since it was different enough from vax/vms. but once I left DEC I never really touched vms again.

      I seriously ask - how is this still relevant, other than for legacy sw that someone refuses to l

      • who today even knows the name DEC anymore? recruiters even tell me to remove my DEC company experience from my resume.

        I do and at last count I have 10 customers running VMS/OpenVMS systems. Your recruiters are idiots. Yes it's not a growth market but if you have experience with these legacy systems you can still make a living in the niche marketplace.

  • by grimmjeeper ( 2301232 ) on Monday June 15, 2015 @12:31PM (#49914587) Homepage
    Why do you need to ensure you can keep recompiling it with the same old compiler for the next 25 years? Why not design it with the expectation that the development environment will evolve? Ensure you design in portability between compilers and development platform operating systems and you don't have to keep stringing along an environment that's already obsolete.
    • Sorry for my dyslexia. I thought I read Ubuntu 10.04, not 14.04. So it's not already obsolete. But it will be in the future.
    • by danbob999 ( 2490674 ) on Monday June 15, 2015 @12:36PM (#49914643)

      Because not only the code would need to be portable, but the build system too (makefiles). Also, in 25 years you don't want to re-qualify everything and risk introducing some new bugs because the new compiler doesn't behave like the old one. Too risky.

      • Re: (Score:3, Interesting)

        And struggling to maintain an outdated system in some kind of virtual environment isn't too risky?

        If you're doing any kind of maintenance or continual upgrade process, you're going to find yourself upgrading your development environment at least once or twice over 25+ years. And if you have any sense whatsoever, you'll have a suite of regression tests to run on your software already. You can use that to validate the new environment when you compile a baseline. I've been involved with several projects tha

        • by Ungrounded Lightning ( 62228 ) on Monday June 15, 2015 @01:02PM (#49914857) Journal

          if you have any sense whatsoever, you'll have a suite of regression tests to run on your software already. You can use that to validate the new environment when you compile a baseline. I've been involved with several projects that migrated from one platform to another.

          Such tests might convince YOU (the developer). But would they convince REGULATORS? If not, you have to go through a whole, horribly-expensive, regulatory approval every time you migrate tool versions.

          Regulators don't get dinged for insisting on more costly work by the regulated and withholding their approval. They DO get dinged if they approve something that then does harm.

          That's why the FDA caused something like 400,000 extra deaths by delaying the approval of beta blockers for prevention of secondary heart attacks until the European research had been repeated in the US under US rules, rather than accepting the data and allowing the use. After the Thalidomide mess they're not going to approve ANYTHING quickly or easily. The same principle applies to other fields.

          • A comprehensive test plan was sufficient to convince the FAA when we did large scale changes like that for flight software.

            YMMV

        • And struggling to maintain an outdated system in some kind of virtual environment isn't too risky?

          Who said anything about upgrades/maintenance? Maybe he has to build it once, certify it once, deploy once on an embedded system.

          If the job requirement says "25 years" then that's what he has to do. It wouldn't even be an unusual specification for military.

          • by ranton ( 36917 )

            And struggling to maintain an outdated system in some kind of virtual environment isn't too risky?

            Who said anything about upgrades/maintenance? Maybe he has to build it once, certify it once, deploy once on an embedded system.

            If the job requirement says "25 years" then that's what he has to do. It wouldn't even be an unusual specification for military.

            The only reason to need a consistent development environment is for upgrades and maintenance. If they never need to debug errors or add features then there is no reason to have a development environment at all. Just pass around binary executables.

        • by MadShark ( 50912 )

          Regression tests are never perfect. I recently debugged an issue where if we received an interrupt during a one instruction time window, the system had an issue. It worked fine on an older compiler but we were forced to upgrade for other reasons. The new compiler generated code that now had the issue due to improved optimizations that reordered the code. There is no reasonable way to unit test that kind of issue. It required the entire system to be running, and running on target.

      • by vux984 ( 928602 )

        Because not only the code would need to be portable, but the build system too (makefiles). Also, in 25 years you don't want to re-qualify everything and risk introducing some new bugs because the new compiler doesn't behave like the old one. Too risky.

        Are you planning on developing it today and then putting it in a box and forgetting about it until 2040 ?

        Because if you actually expect to maintain it over the next 25 years you will maintain it organically. Compiler versions and build environments will come and go. And the kinks will be identified and solved as they arise. And then 25 years from now it will still all work just fine because you actually maintained it.

        • by MightyYar ( 622222 ) on Monday June 15, 2015 @01:14PM (#49914961)

          That depends on the application. If he's making an industrial control system, then no, he probably will not be maintaining it organically. It will get built, qualified, and then expected to run for the life of the process. Think nuclear plant... what is more painful, re-qualification or running obsolete tools? Plants built in the 80s (power, sewer, etc) are still running DOS control systems with ancient serial PLCs.

          • by jandrese ( 485 )
            And then one of the stupid old PLCs craps out and you discover that they have not been made for 20 years and all of the old stock is exhausted... Now you have a crisis where you have to rebuild a major part of your system at great expense.
            • I don't think many people would wait until their last spare to start retrofitting their system. At the same time, you want to stretch your investment as long as you can get away with it.

              In the case of old style PLCs, there have been a number of transitional technologies, since so many people were in the same situation.

      • This is similar to the backup media problem. Yeah, the code, build system and makefiles need to be updated when they go out of fashion, just as the backups need to be moved to new media when the current media goes out of fashion.

        .
        So you deal with it.

        Design for it. Make change your friend, not your enemy.

        When your fashionable long-term build environment looks like it is dying, create a new build environment based upon the current fashion.

        If you cannot or do not adapt, your pet project will be hist

      • Then why isn't your build system written in the same language? Presumably it's a smaller software artifact of its own so bootstrapping it shouldn't be difficult even in the most pessimistic case.
        • Then why isn't your build system written in the same language?

          If I'm writing firmware for a microcontroller based on the MOS Technology 6502 architecture, I don't especially think writing the build system in 6502 assembly would be the best choice. That's why I use mostly Make and Python for my 6502 projects [pineight.com].

      • You put the whole build environment into a SCCS ... that is actually a no brainer.

        You would not compile the old code base with new compilers but with old ones.

        • Using new compilers was the whole point of the OP. Of course if you keep the old compiler you can keep the old "make" utility too.

  • Store the hardware (Score:3, Interesting)

    by mark_reh ( 2015546 ) on Monday June 15, 2015 @12:31PM (#49914591) Journal

    Keep a working system in storage for future use along with copies of software that runs on it including OS, etc., on archival disks.

    • by Tailhook ( 98486 ) on Monday June 15, 2015 @01:41PM (#49915173)

      Better plan: pick something that is currently and widely used in aerospace and military applications and the world will preserve working systems for you.

      I have personal experience with this. 15 years ago, just before I left an employer I had worked for for some time, I took a number of Digital Alpha workstations off their hands; they just gave them away after about five years of use and replaced them with newer workstations.

      It turns out there is a thriving market for this hardware because aerospace and military outfits used it for their work and today they still have drawings and material they need to deal with in original form. They have migrated the original material to newer systems, but they also still maintain the equipment and software needed to get at the material in its original form.

      They pay through the nose to get replacement parts and complete systems in working condition, so a salvage market has emerged and people prowl around trying to find caches of ancient workstations. Doubtless this will be ongoing for at least another ten years, and the prices will escalate accordingly.

      So if you need to ensure there will be spare parts and systems at your disposal a quarter century from now, find out what Lockheed and Boeing are designing today's jets with and use that stuff. It's built well and people pay dearly for it when new, so it tends to be carefully preserved; it's hard to trash something that cost $20k, even if it is wildly obsolete.

  • Don't use an IDE (Score:5, Insightful)

    by Kinthelt ( 96845 ) on Monday June 15, 2015 @12:33PM (#49914609) Homepage

    C, make, and vi/EMACS.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      This is half of the answer. C and text editors will be around forever. However, the missing part is documentation. There will be stuff you have done that is built on assumptions. That must be documented. We can still maintain (and do maintain) 35+ year old aerospace code, and it's relatively easy *because* all of the software artifacts are still available. They've been ported now to 3 different formats, but we can still trace EVERY SINGLE LINE OF CODE to a requirements document, and each lower level require

      • we can still trace EVERY SINGLE LINE OF CODE to a requirements document

        So can I, for the software that I have produced. Line 1263 in stuff.c maps to the requirements "document" that I jotted down on the back of a coaster on the day we were having that 4 martini lunch. So do all the other lines, come to think of it.
        But seriously... I'm no hard core software engineer and I've never been involved in anything that required that amount of rigour. I've wondered how one ensures that requirements are correct at that level, and are correct together. Even in relatively simple busi

      • This is half of the answer. C and text editors will be around forever, However, the missing part is documentation. There will be stuff you have done that is built on assumptions. That must be documented. We can still maintain (and do maintain) 35+ year old aerospace code, and it's relatively easy *because* all of the software artifacts are still available. [...]

        Yes, the grandfather post did forget to include SCM [wikipedia.org] (software configuration management) / source revision control [wikipedia.org], where numerous tools are available including : rcs, cvs, subversion, sccs, git or one of the other widely available SCMs. Of course, RCS (1982), CVS (1990), and SCCS (1972) are 25 years or older themselves.

    • by rwa2 ( 4391 ) *

      Counterpoint: MSVC++

      CSB: so I briefly worked on MS Flight Simulator a few years ago. It was interesting working with code that was older than Windows, but it was still there (under a whole bunch of C++ shims).

      • by bmo ( 77928 )

        >Software with a 25+ lifespan
        >proprietary version of C++
        >proprietary IDE

        Just... no. Depending on proprietary anything is a no-no. Anyone who has experienced the "oh we don't do that anymore" phenomenon with Microsoft products (the EOLing of MSVB 6.0, fer example) or other software vendors knows that depending on a company that needs to introduce $SOMETHING_DIFFERENT every year to differentiate themselves from $OTHER_VENDOR (and call it innovation!) is a road that leads to Lovecraftian ma

        • by 0123456 ( 636235 )

          Yes. If I have a Linux VM, I can be sure it will run somehow, even if I have to run it through an x86 emulator on a 128-bit 256-core ARM (though that introduces issues of its own). If I have a Windows VM, who knows whether, in twenty years, it will announce that it's no longer activated and won't run?

  • Where do you choose to draw the line between handling likely issues and making things overly complicated?

    You're trying to create a VM with all of the tools for your development environment which will last 25 years.

    Why start worrying about being overly complicated now?

  • IBM or VMs (Score:5, Funny)

    by dmaul99 ( 1895836 ) on Monday June 15, 2015 @12:35PM (#49914631)

    If you're talking about IBM Mainframe stuff, then don't worry about it. IBM will support it in one way or another for the next 500 years, because the systems that process your airline reservations and your credit card transactions all still run on mainframes with COBOL code and CICS UIs. Nowadays they're dressed up with modern GUIs on top, but ultimately it's all the same. Peek over at what the airline agent is looking at when she prints your ticket and you'll see a text console with lots of green, where you press CTRL to submit the form.

    If you're talking about some modern unix or windows stuff, VMWare it now or something. In 25 years you'll have your quantum singularity computer with an emulator for GoogleOS 54 inside of which you can run an emulator for Windows 15 inside of which you can run an emulator for Windows 11 inside of which you can run VMWare Player with your stuff.

    Better get started.

    • mainframe code for embedded system??! you win the unnecessary complexity award along with the OpenVMS guy. And yes I've done development and sysadmin on both.

    • by sjames ( 1099 )

      And it'll still run faster than when it was the native OS on its own hardware.

    • Re:IBM or VMs (Score:5, Interesting)

      by TWX ( 665546 ) on Monday June 15, 2015 @12:59PM (#49914835)
      Came here to say this. The financial system was implemented in the eighties on an i-Series and is written in COBOL and RPG. There are web front-ends for some components, others still use a "Client Access" TN5250 client or for me, since the console TN5250 client isn't being maintained for Linux anymore and doesn't want to compile, a 3270 client still works.

      Do not overlook the service agreement/contract. Do not overlook the need for a grizzled old guy with poor social skills to take care of it. Do not parade him around in front of senior management; they will hate him, and it, if you do. Make it clear that it's not DOS. We've seen people complain about the system because it was text-based, calling it DOS. They're fools, but if you don't nip that in the bud at the beginning, it'll bite you in the ass later.

      Also, define service windows when there is no expectation of access, and enforce them even when there's no maintenance from time to time, to keep people from complaining when there is a legitimate service needed. It's annoying but it can be necessary when some asshole thinks that they should work on something at 3am because they feel like it, even when that's in the middle of the published service window.
    • CICS actually is not a UI but a kind of transaction monitor; more precisely, it manages the transactional interaction with storage systems.

    • You know, you can alias the CTRL key to enter. /IBM iSeries (RPG) developer

  • by Anonymous Coward on Monday June 15, 2015 @12:41PM (#49914677)

    Since all code will be written in Apple's amazing revolutionary innovative new Swift language, just write it in that! It will be infinitely portable and recompilable for millennia!

  • Try taking your install and making it into a bootable iso image and a bootable DVD. Run off of that.
  • by rubycodez ( 864176 ) on Monday June 15, 2015 @12:46PM (#49914727)

    layers of complexity only increase the number of things you assume about the future.

    Instead use a common realtime OS and chipset. Those already have a track record of decades of support.

    Failing that, the second-best solution would be an embedded Java app.

  • A friend of mine heard a talk about a guy who had a similar problem in the airline industry. He needed the whole hardware and software package to be maintainable for 80 years. He used Field Programmable Gate Arrays: https://en.wikipedia.org/wiki/... [wikipedia.org]
    • That's brilliant, actually.

    • which in and of itself is a problem. No IT system should have an 80-year life expectancy. If you're talking about an embedded system, such as an onboard radar or flight control system, that's another topic, but even those get refreshed far more often than every 80 years. Technologies and tools change over time, and coming up with unreasonable requirements only means that eventually the system developed will be on a dead branch of support and extensibility. You'll spend more in supporting it than throwing it away and star

      • by 0123456 ( 636235 )

        I worked on a system in the 90s that had to have at least a fifty year lifespan because REGULATION, though it was assumed that they'd have to replace the hardware now and again and just retain the files and database. I do sometimes wonder what they did when the optical drives we used became obsolete.

        • and when the media decays and the systems no longer work and there's nobody around to fix them, then what? You can't put a system in a bottle and say it'll be supportable decades later. Even mainframe software has had to have some evolution, because I don't see IBM selling S/360s anymore. Yes, I can still run most of the apps built on the original S/360, but operating systems change, as do underlying architectures. But those create more problems than they solve. If it's a "we need this data for X" that's s

  • by brian.stinar ( 1104135 ) on Monday June 15, 2015 @12:51PM (#49914773) Homepage

    Unless you are (or someone is) getting paid to update, maintain, and upgrade this frequently (I would suggest every six months), I do not believe this is something that can be easily accomplished. I would recommend, if you are a contractor, that you create an ongoing maintenance agreement that provides deployment tests every six months. If you are an employee, I'd recommend you attempt to put such a program in place with your employer.

    If people do not want to pay to maintain this, then it's probably not going to work for 25 years despite your best efforts. If they only want to pay to maintain it when they REALLY need it to be maintained, it's going to be expensive and not necessarily possible. I see this kind of stuff all the time - customers don't want to pay to maintain something, until it breaks, then they want to pay.

    As an example (that isn't related to embedded, but the principle is the same), I JUST ported Google OpenID login over to OAuth2 in a PHP system. This was way more expensive, and way higher stress, because the customer did not listen to me six months ago when I said Google would no longer be supporting OpenID and we should migrate to OAuth2 or they wouldn't be able to log in. They didn't listen to me then, but called me when they couldn't log in. The hourly rate was higher because I hadn't scheduled this work, and it took longer (billable time) because I was under pressure to keep the calendar time as short as possible. However, we got it! There are examples all the time where things basically need to be scrapped, since the technology is so, so old and the provider doesn't exist anymore. For example, I am having a lot of trouble finding documentation for CouchBase 1.0 on some CouchBase work a customer wants done.... This wouldn't have been an issue if they had kept upgrading CouchBase versions along the way. Now, it's a pain.

  • You appear to be in the UK, so I'll suggest you check with the Ministry of Defence and get a list of UK defence manufacturers that build software-intensive systems with long life cycles. You're looking for things like airplanes and boats. Then write a NICE letter to those manufacturers and ask them how they do it.

    The military routinely deals in systems with very long life cycles and many software upgrades.

    As one example, the American F-16 first flew in the late 1970s. It is expected to continue to fly we

  • The obvious solution is to build 2-3 boxes with the software needed to maintain your package, set them aside, and leave them the hell alone.

    Compared to the hassle and costs of trying to figure out obscure bugs caused by compiler/IDE/platform updates, it's the cheapest option you've got.

    Did you not notice the recent article about a school district's HVAC system still running on a long-obsolete Commodore Amiga?

    Have you never read about NASA stockpiling Intel processors of bygone eras for maintaining th

  • This is the type of question that calls all the Emacs and Vi nerds out of the basement, and brings the associated editor wars with them.

    REMOVE THIS POST BEFORE ALL HELL BREAKS LOOSE!

  • by sjames ( 1099 ) on Monday June 15, 2015 @01:05PM (#49914889) Homepage Journal

    Stick with command line utilities. Make and compiler on the command line have been around forever. Avoid IDEs with opaque 'project' files and do not let any sort of automake or autoconfig anywhere near it. Keep your Makefile simple and to the point.

    That way, at least you have the worst-case option of porting forward to a new build system should all else fail. THEN archive the whole thing inside a VM. Use raw disk images rather than a proprietary something that may or may not be supported 25 years from now.
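    As a sketch, the kind of plain, self-describing Makefile meant here (file names and flags are hypothetical): explicit toolchain, explicit rules, nothing generated, nothing a human can't read in 2040.

```make
# Pin the toolchain explicitly; no automake/autoconf magic detection.
CC      = gcc
CFLAGS  = -std=c99 -Wall -O2

OBJS    = main.o control.o

firmware: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f firmware $(OBJS)
```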

  • by OrangeTide ( 124937 ) on Monday June 15, 2015 @01:06PM (#49914891) Homepage Journal

    I use Pacific C compiler [ibiblio.org] from inside DOSbox to patch an 80188 SBC (single board computer). Seems to work fine, and it's 20 years old.

    I think as long as your VM doesn't depend too much on IPv4 or being connected to the internet in general, it should be OK. I think equally important to having a VM is documenting how to set up and install it, in the most brain-dead and step-by-step way imaginable (screenshots for every step if you have to). Because it's really easy to forget that stuff over 20 years.

    • I would modify this advice slightly by suggesting you print out the documentation and get it professionally bound. In fact print out multiple copies. Document (and print) all the details of the development and maintenance processes. Everything from electrical schematics and tolerances to specific compiler versions.

      Take the time to paginate all of the documentation and then build indices. When referencing code modules give printed hash values so potential bit rot can be detected.

      I have CD-Rs, hard drives, an
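      One way to produce those printable, verifiable hashes with standard tools (a sketch; the `src` tree and file contents are hypothetical demo setup):

```shell
# demo setup: a hypothetical source tree
mkdir -p src && printf 'int main(void){return 0;}\n' > src/main.c

# Record a checksum for every source file, in a stable order.
find src -type f -print0 | sort -z | xargs -0 sha256sum > MANIFEST.sha256

# Print the manifest so it can be bound alongside the listings...
cat MANIFEST.sha256

# ...and, decades later, verify nothing has silently decayed:
sha256sum --check MANIFEST.sha256
```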

  • by RealGene ( 1025017 ) on Monday June 15, 2015 @01:07PM (#49914901)
    Deeply embed the tools required in the device itself. As long as the box exists, the tools exist.
  • /. just ran this article about an Amiga still being used to control HVAC at multiple public schools after nearly 30 years: http://tech.slashdot.org/story... [slashdot.org]

    Technology embeds itself (so to speak); it is far harder to retire old tech (as per this article) than you might think (Windows 8/8.1 just barely passed up WinXP this year). I think that Linux + C makes as much sense as anything, especially for an embedded system, and I'll cheerfully bet that both will still be around and in active use in 25 years. ..bruce..

  • Depending on the regulations attached to your specific industry and how your company chooses to cope with them, "being able to maintain the application" may not be sufficient. If something goes sufficiently pear-shaped to get a government agency involved, they may demand that you be able to produce not just the source code used to create any given release, but the combined libraries, toolchains, etc.

    Hell, you *MAY* even be required to completely and faithfully recreate your entire development environment a

  • About 20 years ago, I accidentally solved a similar problem. I created a Windows application using Visual Studio with MFC without thinking very far into the future. It turns out that I still maintain that application - and a few spinoffs of it - to this day. VS and MFC turned out to be a good choice for this system.

    I've had to do some migration work every few years as newer versions of VS came out, but that's been tolerable. For example, I recently migrated from VS 2003 to VS 2010 because 2003 doesn't r

    • And sadly, it was only an accident.

      I used to work for a company that still maintains a VB3 system. We also used a licensed rich-text-format textbox control for reporting. We had to buy the company to get the source code for it when it went out of business, just in case it had a critical bug and we needed it fixed.

      Proprietary platforms will, as you say, only stick around by dint of luck. Free software is the only software you're guaranteed to have stick around.

      As for hardware, buy redundant units.

  • I have maintained various legacy systems dating as far back as the late 1970s; some have fared better than others. By far the biggest difference between those that fared well and those that didn't was continuous support. A system from 1990 that wasn't maintained for only a few years, due to the false assumption it was being phased out, was much harder to maintain than an older system which never went out of maintenance.
    The popular technologies fared better than the trendy ones. PL/I, COBOL and later C all faire

  • Crystal Balls (Score:4, Insightful)

    by Tablizer ( 95088 ) on Monday June 15, 2015 @01:24PM (#49915027) Journal

    If I could accurately predict 25 years out I'd currently be playing poker with Warren Buffett in a mansion instead of trolling Slashdot.

  • Presumably the target embedded system isn't running Ubuntu. So really all you need is some way to keep the compiler/linker running - the rest of the build environment is irrelevant. Unless, as mentioned above, everything has to be certified - in that case what you need is several complete dev systems on a shelf somewhere, and pray that you don't use them all up.
  • Take a look at Forth. Can run on anything and worst case, you can roll your own.
  • by DickBreath ( 207180 ) on Monday June 15, 2015 @01:35PM (#49915121) Homepage
    The 2038 problem [wikipedia.org] is similar to Y2K but for Unix. 2015 + 25 years = 2040.
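
    The rollover point is easy to demonstrate: a signed 32-bit seconds counter runs out in January 2038 and wraps back to December 1901. For example:

    ```python
    from datetime import datetime, timedelta, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

    # Last second representable in a signed 32-bit time_t: 2^31 - 1.
    last = epoch + timedelta(seconds=2**31 - 1)
    print(last)      # 2038-01-19 03:14:07+00:00

    # One tick later the counter wraps to -2^31, i.e. back to 1901.
    wrapped = epoch + timedelta(seconds=-(2**31))
    print(wrapped)   # 1901-12-13 20:45:52+00:00
    ```

    So a project starting on 32-bit time_t in 2015 with a 25-year horizon crosses the rollover with two years to spare.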
  • Look at stuff that existed 25 years ago that still exists and is supportable today. That will give you an idea of the *sorts* of things that could be supportable in 25 years.
    • Exactly.

      In 1990 Windows 3.0 came out, which can still run in a VM today. Considering today's slower pace of change in computing technology, I think it's a safe bet any modern OS will run, in a VM, on future computers.

      Just pick a popular and open VM container format so you're not tied to a vendor. OVF [wikipedia.org] for example.

      You might want to consider virtualizing the version control system as well. Source history may be important to future developers making changes. Use a decentralized VCS like git, so the full version history travels with every clone.
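
      One nice property of OVF is that the package is just XML plus a plain-text manifest of checksums, so a future maintainer can verify an archived appliance with nothing but standard tools. A rough sketch of generating an OVF-style manifest (file names and contents here are placeholders, not a real appliance):

      ```python
      import hashlib

      def ovf_manifest(files):
          """Build an OVF-style .mf manifest: one 'SHA256(name)= digest' line per file."""
          lines = []
          for name, data in sorted(files.items()):
              lines.append(f"SHA256({name})= {hashlib.sha256(data).hexdigest()}")
          return "\n".join(lines) + "\n"

      # Placeholder package contents; a real appliance would have a
      # descriptor (.ovf) and one or more disk images (.vmdk, etc.).
      package = {
          "build-env.ovf": b"<Envelope>...</Envelope>",
          "build-env-disk1.vmdk": b"...disk image bytes...",
      }
      print(ovf_manifest(package))
      ```

      Verifying in 2040 is then just recomputing the digests and comparing lines, with no vendor tooling required.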

  • Well, here's a development environment that is still quite usable after 17 years. I'm talking about Visual C++ 6.0. If it still works 8 years from now, then I guess it fits the criteria.

  • Well, I'm supporting some hardware/software systems that go back to the 70's (45 years). I'm afraid I don't have much encouraging to say though.

    The nice thing about PDP-11's is that they were relatively ubiquitous, so there are lots of folks out there offering emulators, and even some with PDP-11's on a PCI card. The biggest problem we've had is getting access to old sources. Source control discipline wasn't that hot back then, and a lot of the code was just done in machine language.

  • I still maintain a group of embedded semi-smart terminals that run OSes from MS-DOS to current versions of Windows. The original program was written in Turbo Pascal 7 back in the early 90's, and some of the old DOS boxes are still in use.

    The program has been updated over the years to run on Delphi and now XE2. Three or four different development environments, but they can all be convinced to run under Win 7/8 (I haven't tried Win 10 yet.) I see no reason to expect that they won't live on long after I am retired.

  • I first thought it was "Still Unstable In 25 Years Time?" and wondered how the developer failed to notice Windows..

  • by MobyDisk ( 75490 ) on Monday June 15, 2015 @03:44PM (#49916129) Homepage

    I'm working on an embedded project that will need to be maintainable for the next 25 years. This raises the interesting question of how this can be best supported. The obvious solution seems to be to use a VM that has a portable disk image that can be moved to any emulators in the future (the build environment is currently based around Ubuntu 14.04 LTS / x86_64) but how do you predict what vendors / hardware will be available in 25 years?

    [emphasis mine]

    maintainable for the next 25 years...use a VM

    Some people are answering how to make something "compilable" 25 years in the future. That's different from making it "maintainable." A VM will make the project compilable. But it won't make it maintainable. Ex: I can compile MS COBOL code for CP/M, but I can't find developers to maintain it. The only way to make it maintainable is to continue to update to newer operating systems, libraries, and tools over the course of the 25 years. If you are in a regulated environment, there is cost to that. That cost needs to be part of the maintenance budget for years to come.

    how do you predict what vendors / hardware will be available in 25 years?

    That is impossible. If management wants you to do this, ask them what the budget will be in 25 years. You can predict the development environment in 25 years with exactly the same accuracy that they can predict the budget. The closest you can get to this goal is to have the source code for everything. When you use closed-source software, your contracts should require that the source code be released to you when the product is no longer supported. Such conditions are not uncommon in the medical industry. The contract will likely forbid you from using that source code for anything other than maintaining that product, since they won't want you to become a competitor.

    I work for a medical device manufacturer that does this. We do *not* try to predict what tools will be available in the future. We keep VMs and make sure the build doesn't require external packages to run. Ex: all installs, binaries, etc. are available. No npm or nuget required on the build server. Over the course of decades, you will have to move the source into newer repositories (RCS -> CVS -> Subversion -> Git) or keep ZIP file archives, since that is easier.
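
    The ZIP-archive fallback the parent mentions can be made more future-proof by recording the toolchain versions alongside each snapshot, so whoever opens it in 2040 knows what it was built with. A minimal sketch; the paths and version strings are made up for illustration:

    ```python
    import json
    import zipfile
    from io import BytesIO

    def snapshot(sources, tool_versions):
        """Bundle source files plus a build-environment manifest into one ZIP."""
        buf = BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for path, data in sources.items():
                zf.writestr(path, data)
            zf.writestr("BUILD-ENV.json", json.dumps(tool_versions, indent=2))
        return buf.getvalue()

    archive = snapshot(
        {"src/main.c": b"int main(void) { return 0; }\n"},
        {"os": "Ubuntu 14.04 LTS", "gcc": "4.8.2", "make": "3.81"},
    )
    # Reading it back needs nothing but the standard library.
    names = zipfile.ZipFile(BytesIO(archive)).namelist()
    print(names)   # ['src/main.c', 'BUILD-ENV.json']
    ```

    ZIP is a good bet for this precisely because it has already survived 25 years with readers everywhere.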

You knew the job was dangerous when you took it, Fred. -- Superchicken
