
Ask Slashdot: Best Way To Isolate a Network And Allow Data Transfer?

Futurepower(R) writes: What is the best way to isolate a network from the internet and prevent intrusion of malware, while allowing carefully examined data transfer from internet-facing computers? As an example of complete network isolation, each user would have two computers, either sharing one monitor and keyboard through a KVM switch or each with its own monitor and keyboard. An internet-facing computer could run a very secure version of Linux. Any data to be transferred to that user's computer on the isolated network would perhaps pass through several Raspberry Pi computers running Linux, each using a different method of checking for malware. Windows computers on the isolated network could be updated using Autopatcher, so that there would never be a direct connection with the internet. Why not use virtualization? Virtualization does not provide enough separation: hypervisor escape vulnerabilities remain a possibility. Do you have any ideas for improving the example above?
This discussion has been archived. No new comments can be posted.

  • Is Futurepower still alive? That guy is a nutjob.
  • Answer (Score:5, Insightful)

    by 110010001000 ( 697113 ) on Wednesday June 21, 2017 @08:12PM (#54665165) Homepage Journal
    I'm going to answer the question even though Futurepower(R) is a schizophrenic nutjob. The answer is there is no way to do it. If a computer is on a network it isn't secure and it can't be isolated. A "network" is the antithesis of isolation. If you connect it to the Internet, game over, man.
    • Re: (Score:2, Insightful)

      by sit1963nz ( 934837 )
      Yep.
      I was told the only secure computer is one that is never turned on, never connected to a network, and sits in a safe where no one has access to it.

      Anything else is just slowing things down, not prevention.

      If something can be exploited, it will eventually be exploited. All it will take is a lazy user who thinks the USB stick in his pocket will be OK to use "this once" and be wrong.
      • Better put that in a faraday cage too, otherwise I might induce current into the circuits remotely and try to read the output RF interference.

      • To quote the first Linux book I read:

        You might not need shadow passwords if your computer is not connected to a network. Or a power cord. And is buried in six feet of concrete.

    • by dcsmith ( 137996 )

      The answer is there is no way to do it. If a computer is on a network it isn't secure and it can't be isolated. A "network" is the antithesis of isolation. If you connect it to the Internet, game over, man.

      See the pilot episodes of Battlestar Galactica...

    • You're talking about direct connections, yet the entire concept of DMZ as a security principle disagrees with you.

  • by msauve ( 701917 )
    Firewall?

    Really, firewall manufacturers track threats and release mitigations better than you can, and their products are built for exactly what you're asking. Daisy-chain ones from different vendors if you're really anal.
  • by JBMcB ( 73720 ) on Wednesday June 21, 2017 @08:19PM (#54665183)

    Qubes OS separates different browser and email tasks into virtualized jails.

    https://www.qubes-os.org/ [qubes-os.org]

    Kinda like Sandboxie. Speaking of which: Sandboxie?

    • by BaronM ( 122102 ) on Wednesday June 21, 2017 @08:54PM (#54665411)

      Yep, and it's almost usable, too. OTOH, Qubes is focused on the workstation. For network-level isolation, it's really hard to beat two firewalls from different manufacturers and code bases back-to-back.

      Think Internet--PaloAlto--Sophos UTM--LAN (Substitute any two other unrelated NG firewalls)

      Systems on the inside initiate all connections; no reaching in. That means having staging DBs, etc. on the outside that are polled from the inside by transfer routines that parse and validate everything outside of the application that receives the data. Anything that does not positively match expected input is dropped. If you really want to be serious, all systems log externally to a log host with WORM drives that has had the transmit pin on the NIC physically cut (mostly kidding -- hi Marcus!).

      Remote access is terminal services or equivalent to a concentrator on the outside and a second hop internally with separate authentication at each hop. Absolutely no VPN or other tunneling that supports direct traffic flow from outside to inside.

      RSA SecurID or other token-based auth is mandatory.

      Stupidly expensive and a pain to configure and maintain correctly, but very secure. If you need to ask, you probably don't need it and can't afford it.

    • by Balial ( 39889 )

      These "run every app in a VM" kits are snake oil. All they do is expand the attack surface making it easier for an attacker to get in. Sure, by virtue of being slightly different you might dodge some bullets temporarily, but once they're reliable enough to go mainstream, attackers will flock to them. The only real solution is less code and fewer interfaces.

      • by JBMcB ( 73720 )

        All they do is expand the attack surface making it easier for an attacker to get in.

        They do the opposite. If your jail is for email, and the only things installed are your email client and the libraries needed to support it, that *greatly* reduces your attack surface. Heck, run it in a jail with no shell binaries; that alone will kill off most exploits.

        A long time ago we used to build secure internet-facing public FTP servers this way. Strip out pretty much everything except a limited shell, that GNU multi-tool shell thing, and an FTP server. A few lines in rc to bring up the network, a sin…

  • uhhh (Score:5, Insightful)

    by Fwipp ( 1473271 ) on Wednesday June 21, 2017 @08:23PM (#54665203)

    Any data to be transferred to that user's computer on the network would perhaps go through several Raspberry Pi computers running Linux

    You are so incredibly out of your depth you don't even know it.

    • Re: (Score:2, Insightful)

      This was exactly my thought when I read that line. This is so far off in left field, I'm not entirely sure what he thinks he'll inherently benefit from by using Raspberry Pi, let alone several of them.
      • Re: (Score:2, Interesting)

        by gl4ss ( 559668 )

        I guess his idea would be to use multiple brands of packet scanners and shit... which sounds just fine, except that, uh, those scanners suck, and if you only want to move files between them anyway, why not just set up a network where the Raspberry Pi is an FTP or SMB or whatever share?

        Basically that's what he wants anyway: a file share between the two machines.

        Here's another idea though: just make a Bluetooth OBEX file share from the computer that you browse the internet with. Or a 3rd computer. Enable BT…

    • While he's at it he could increase the speed of this system by splitting connections across multiple ports on the network interface. It's crazy enough to work!

    • He needs 7 RPi so he will be protected behind 7 proxies and cannot be h4x0red!!1!!!!!1!!

  • IPX/SPX (Score:5, Funny)

    by HornWumpus ( 783565 ) on Wednesday June 21, 2017 @08:28PM (#54665227)

    Make the secure network IPX; nobody has seen it in 20 years, so any malicious code running on the internet-connected side won't even look for it.

    I know, security by obscurity...

    Also BSD not Linux.

    • Double Down (Score:2, Funny)

      by JBMcB ( 73720 )

      IPX on Token Ring, using Banyan Vines for file sharing. Run the server on OS/2. OpenVMS groupware.

      Poor little virii won't know up from down.

  • Why use multiple computers? What's the problem with Virtualization? Virtualize the firewall, slap on a tight-ass linux with bare minimums to perform routing/firewalling for the host machine. Works great for me. Very tiny attack surface (SSH at the very most, if even that.)

    • by dbIII ( 701233 )

      What's the problem with Virtualization

      For security purposes it only gives the appearance of several computers rather than actually being several computers. That's not a flaw; it's just not designed to do what you want it to do.

  • by Sycraft-fu ( 314770 ) on Wednesday June 21, 2017 @08:32PM (#54665257)

    If you really care about isolation, the kind we are talking about for SIPRNet and so on, then you need to use data diodes and controls.

    A data diode is a hardware device that only allows transfers in one direction. That way you can make sure that when you are bringing data into the network, no egress can happen. They are very specialized, and very expensive.

    However, more important than that is proper controls. That means policies and procedures that are followed rigorously. You have to make sure that people are extremely careful with how data is moved from one network to another, and what data is moved. You need a process that specifies things like who can decide what data is to be moved, who approves it, who reviews it, how this is all done, and so on.

    If this is really important, don't try to do it yourself based on some posts on Slashdot; hire some experts. You also need to spend lots of time in the design and planning stages, carefully considering and documenting how everything will be set up and all the controls that will be in place.

    • by thegarbz ( 1787294 ) on Thursday June 22, 2017 @03:53AM (#54666583)

      However more important than that is proper controls.

      This right here is the most important sentence in this entire Slashdot story. Security is not just about patching, isolating, and air-gapping. Security is a complex process that gets more and more complex as more people are involved.

      The best air-gapped system will fall, the best-designed DMZ will get infiltrated, and even the masters of IT infiltration will fall victim to a malicious or ignorant insider if security processes and controls aren't in place.

  • There are many solutions, each with its own pros and cons. But without understanding what it is you are doing, you are really wasting everyone's time. Go into the details, help us understand the purpose and situation behind what you wish to achieve, and /. will do its best to help you.

    • What do you want to accomplish? Details are indeed the key. If you need to get data submitted from publicly available servers, you're opening completely different attack vectors than someone who only needs to get data out of their internal servers to external targets.

      Taking it as a given that no solution offered in the comments is guaranteed to meet your needs, since we don't know what they are, there are some good safeguards that are standard in IT.

      Step 1: Put managed systems between your LAN and the inter…

  • Foolishness. (Score:5, Interesting)

    by Gravis Zero ( 934156 ) on Wednesday June 21, 2017 @08:45PM (#54665345)

    What is the best way to isolate a network from the internet and prevent intrusion of malware, while allowing carefully examined data transfer from internet-facing computers?

    Print it out and type it back into the computer you want to transfer it to.

    Windows computers on the isolated network...

    If you are using Windows then you are forfeiting a major advantage: absolute control of your system. Windows cannot even be trusted to respect its own system settings, let alone be worthy of trust. You should be suspicious of software written by corporations, because their motive is profit, not security or even user satisfaction.

    • Print it out and type it back into the computer you want to transfer it to.

      Just transfer it via serial port, and make sure you leave the software open when you're done, since that will block access to the serial port, preventing malicious software from using it.

  • WAN -> Firewall -> Firewall -> LAN. Each firewall from a different company, and some tinkering with the router configuration so that even compromised computers are not sure where they are.

    Also helps if you use machines with a completely alien architecture to what everyone else is running. Viva la Alpha, MIPS, etc. It's not that you can't attack them, it's just that your custom-forged 'PC' is now in the .000000000000001% bracket of commonality with everything else out there. Do you know how much of a ba…

    • Security through obscurity is TOTALLY the way to go.
      I recommend using Siemens PCS 7, WinCC and STEP7 industrial software (which isn't widely used), and air-gap it all to prevent access to Siemens S7 PLCs running custom, specialized code that nobody else could possibly know or have. Totally secure, especially if you have all your contractors screened for special security clearance.

      Totally unbreakable.

      (See also: https://en.wikipedia.org/wiki/... [wikipedia.org] )

      TL;DR: If someone big wants to hack/infiltrate you, you will…

      • Never said it was unbreakable, just making it a little bit more difficult.

        And correct me if I'm wrong, but Stuxnet runs on Windows, which is a monoculture...

  • You need to go much simpler, for a lot of reasons. Humans need to use it. Humans need to choose to use it. Humans need to not go around it.

    I think you need to base your solution around a presumed-infected node. I find working with the weeds to be better than trying to design a planter that weeds can't find.

    Given "Machine A" as the user's actual workstation, internal, no outside access.

    Given "Machine B" as the external-facing node, with whatever internet access you deem necessary, and we'll presume that

    • by omnichad ( 1198475 ) on Wednesday June 21, 2017 @10:43PM (#54665873) Homepage

      I've never heard of any malware jumping through an FTP connection.

      Any transfer protocol implementation could have buffer overflows or any vulnerability that anything else has. Why is FTP more magic than SMB?

      • Have you ever heard of malware that jumps through an ftp connection?

        • Does that make it somehow less possible? No. There just hasn't been any reason for someone to try...yet. If you're protecting against all possibilities, then you need to think about theoretical rather than actual.

          • You don't know how to read. Step one was surface area. Read step two.

            You also don't know how to think. You won't find something to protect against all theoretical possibilities. That's not a real thing in life. It's like money. They aren't impossible to counterfeit -- obviously. If the mint can print them, someone else can print them too. The idea is to make printing them dependent on an easily tracked material/ink/device, so that it's easy to find counterfeiters. It's nothing more than a cat-and-mouse…

            • FTP is one of the least secure file protocols in active use. Second only to maybe SMB, and that only because it was not designed to be exposed to the Internet.

    • Malware has managed to jump air gaps between very disparate architectures using USB sticks. Sure, it was a highly targeted attack on a very specific nuclear facility, but it was done. Maybe next time they'll indeed target the FTP link for that.

  • You can build a gigabit one-way link out of three fiber optic transceivers for a few hundred dollars.
  • Place a tinfoil hat on each machine on your network. Voila! Problem solved.

  • by AHuxley ( 892839 ) on Wednesday June 21, 2017 @09:40PM (#54665595) Journal
    Understand how security services' staff got into networks and sites going back to the 1950s, and what could be expected into the 2020s.
    Work out what products and services are now for sale or have been found in the wild and could be used to extract your secure data.
    Methods are shared with other "trusted" nations; staff keep methods, which get sold or saved for later private-sector work.
    Very advanced and unexpected methods are on the open market, the black market, and out in the wild.
    Look at how governments failed to secure their own data and why.
    Internet-facing computers had plain text data so it could be shared with trusted contractors and other agencies.

    Internet connected computers got found doing interesting things and interesting people collected all tools on "secure" staging systems by following the networks back.
    A USB stick gets dropped around a site of interest so staff walk in and bypass all security.
    Nobody smart thought to test the "modem" or "hard disk" or just trusted the altered computer hardware that got "shipped" in.
    A company hires staff without vetting, and staff walk out with all the data.
    A company finds a very secure building but low cost cleaning staff hold doors open for "workers" who can use an elevator and tell a nice story about needing to get back in to their office.
    A nice sale is made of advanced private-sector crypto that is junk due to government backdoors.
    Work out who wants your secrets. Another nation? Your own nation? Competitor? Someone who can afford to hire ex and former clandestine service professionals? A long term dual citizen?
    Groups on the internet with no funding but who have unlimited time and very advanced skills?
    A cult? Faith? Political groups? Private sector competition? SJW with funding?
    What will they want? Collect it all? Some files? Production work? Prototypes and concepts? Will they have an expert to guide them in your network? Or have to collect everything and sort/sell/copy later?

    Look back at how the NSA and GCHQ finally learned how to keep their secrets in the 1970s-80s.
    What did the security services finally get right and understand after decades of walk outs and complex staff issues? What failed with all the trust in contractors after the 1990's?
    If your company or data is interesting or has value someone is going to be looking. Down a network, a walk in from the street or as new staff.
    Keep your secrets using compartmentalization.
    If a server needs to do internet-facing work, make sure it's only for that project. If it has to have everything on it, hire a really good cryptographer.
    Someone who is working for you, not with the government, not part time for a university, not as contractor, not some outside brand, not for some other nation.
    Try and secure your work and use the networks the best you can.
    Try and keep any future projects away from the production networks.
    Think about your modems, your storage, what hardware got "shipped" in over the years? Other nations and the clandestine services thought of all that.
    Set up really interesting fake projects and see who asks or looks?
    Mid and low ranking staff ask too many questions hinting at terms they should not know? Do they just want a promotion or are they trying to get access?
    CCTV shows new people wandering around at strange times?
    A USB device found? Someone wanting to do charity work or to sell something been on site a lot? They want to give a quick presentation from a usb stick?
    Staff getting amazing new friends who really want to see their office? Data is collected by placing a trusted physical device internally well past any average protection.
    After a while, a typewriter, paper, a vault, and guards could be a good idea for the best ideas.
    Fill your computer networks with encrypted bait and see what walks in or out.
  • If virtualization is too risky, maybe you need to consider total isolation: faraday cage and tinfoil hat. Anything you use to transfer files can be compromised and transfer malware.

    If you're only concerned about mainstream exploits, then make your own custom file-shoveler solution: browse, etc. on a net-exposed computer, download to an external hard drive, then switch the hard drive to the isolated PC and scan with whatever you trust before moving it into the "green zone." Drives aren't smart enough to ex…

  • by dbIII ( 701233 ) on Wednesday June 21, 2017 @10:05PM (#54665687)
    USB networking still exists.
    It can be used so that the "secure" computer can see only one main directory (plus its subdirectories) on the conventionally networked computer.
    It has the added bonus that many machines have ports on the front, so it can be plainly visible when the link is in place.
  • Why reinvent the wheel? If you really need this, you are probably employed at a place that can afford quality enterprise software. You can use Globalscape MFT with a DMZ host providing reverse proxy services, and enable FIPS 140-2 compliant mode encryption. It's not cheap, but it works great! You can even use workflows to run multiple antivirus engines on each file to ensure it is as virus-free as modern antivirus software is able to discern. If you are extremely concerned about personal security, your bes…
  • We have confidentiality standards, but that's not all of security. Nevertheless, having a B2-level machine between two mutually untrusting worlds provides you with a good place to review incoming executables and outgoing information. Do it using two humans, one called a sysadmin, the other a security administrator. Both must sign off before moving anything from one world (category/level, container) to another.

    Now go solve all the other problems in security (;-))

  • And examine every packet carefully.

  • Security is a tradeoff between usability and safety. You can use X11 to work on one remote machine, then cut and paste information to another. You lock the ever-living sin out of all three; the machine in the middle is locked down to doing only segregated X server sessions, and is unable to do ANYTHING else. This is a gigantic pain in the ass, but it does put some serious obstacles in the way of malware. But this is probably too onerous a process to use... if someone needed this level of security…
    • by pnutjam ( 523990 )
      I've been toying with setting up an x2go server to handle all internet browsing from Windows computers via a published Firefox (or other browser) session. The Windows PCs would be blocked from the internet, and any files would need to be downloaded on Linux and scanned before they could be pulled into the Windows machine.
  • by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday June 21, 2017 @11:09PM (#54665965) Homepage

    It seems to me that we have a very simple and common piece of equipment for isolating one network from another while also allowing connectivity: a firewall.

    You can get firewalls that scan traffic for patterns of attack, or compare the data being transferred against malware signatures. Granted, that's not perfect. It won't provide anything close to "perfect" security. But still, what do you anticipate your setup would provide that a good firewall wouldn't?

    For example, you reference passing traffic through several Raspberry Pi devices, which essentially has each one acting as a firewall. Yeah, you can make all your internet traffic pass through multiple different firewalls, each with its own security scanning engine, but you're adding expense and complexity for diminishing returns on improving security.

    So what are you trying to do? What kind of security are you trying to provide, and what kind of attack vector are you anticipating?

  • ...go through several Raspberry Pi computers running Linux; the computers could each use a different method of checking for malware

    If you had a 100% effective way of checking for malware, then you wouldn't need to airgap your computer at all, just run this magical malware detector on the computer.

    The thing about zero-day exploits is that since they are previously unknown, there's no way to catch them with any certainty.

    If you want to keep your computer completely safe from network malware, keep it completely air gapped and off the network.

  • Microsoft ... (Score:5, Interesting)

    by ElizabethGreene ( 1185405 ) on Thursday June 22, 2017 @12:15AM (#54666139)

    Microsoft has done some work around this on the Windows side.

    They build a locked-down domain that requires IPsec for all communication, and use it to build secure hosts called privileged access workstations (PAWs) from known-good media.

    Their reference material is here:
    http://aka.ms/cyberpaw [aka.ms]

    The configuration and software bits will obviously be different from Windows to Linux, but the underlying ideas should be the same.

    Those are:
    * restrict network communications with IPsec
    * no internet access on the PAWs
    * build everything in the red forest, including the PAWs, from known-good media.

    There has been a great deal of discussion about the "right" (tm) way to bring data into and out of the red forest. You can argue for moving this data in via bastion host file servers, but I don't like that. If I'm going to all of the trouble to air gap a network then I want it to be an air gap. That means USB sticks and sneakernet.

    I'm not familiar with the intricacies of the recent Intel AMT vulnerabilities, but I _assume_ that requiring IPsec for communications at the OS layer won't prevent that vulnerability. I'd be delighted to be wrong.
    (Save the Microsoft bashing for another post. I work for them. They buy my groceries. They aren't paying or pushing me to write this. In fact, I should be working.)

  • You are literally asking what is the best way to "isolate" something, and then allow "data transfer" from that thing. The thing you are asking to allow completely negates the first action. These things are literally opposites.

    Did this question make anyone else sad? I always wonder if this means today was a slow news day.

  • While there's a lot wrong with the Common Criteria process, some of the underlying concepts are good. EAL7 essentially relies on the implementation of a security concept that is provably correct, as opposed to trying to harden/secure a general-purpose system. This is why people use data diodes, which are essentially one-way network connections.
    Security concept: only allow data to travel in one direction. You can then prove that data can't get from the high side to the low side.
    Implementation: cut one of…

  • by emil ( 695 ) on Thursday June 22, 2017 @10:01AM (#54667709)

    I have used various versions of the FWTK [fwtk.org] to isolate test networks. There is an independent version of the code here [sourceforge.net].

    If you (can find and) use the old version, beware of the author's [ranum.com] reflections on his code.

    As this has long been abandonware, I'd say that all of this code should be run in a chroot() as nobody, should you use it. Also note that you'll need the -m32 compiler flag (in addition to many other changes) to get a clean build.
