Optimizing Linux Use On a USB Flash Drive?

Buckbeak writes "I like to carry my Linux systems around with me, on USB flash drives. Typically, SanDisk Cruzers or Kingston HyperX. I encrypt the root partition and boot off the USB stick. Sometimes, the performance leaves something to be desired. I want to be able to do an 'apt-get upgrade' or 'yum update' while surfing but the experience is sometimes painful. What can I do to maximize the performance of Linux while running off of a slow medium? I've turned on 'noatime' in the mount options and I don't use a swap partition. Is there any way to minimize drive I/O or batch it up more? Is there any easy way to run in memory and write everything out when I shut down? I've tried both EXT2 and EXT3 and it doesn't seem to make much difference. Any other suggestions?"
This discussion has been archived. No new comments can be posted.

  • by bluefoxlucid ( 723572 ) on Wednesday December 03, 2008 @06:22PM (#25981599) Journal
    Get a swap partition so that you can free up some memory for disk cache, for one. Enable laptop mode too.
    • Re: (Score:3, Interesting)

Well, laptop mode's an obvious one. However, I would not enable swap at all, or at least I'd put it in a swap file on the FS somewhere. If you do the latter, you won't be able to hibernate properly. Nor will hibernation work properly if you encrypt the swap partition.

You're looking at standby or poweroff events because of the encrypted partitions and systems. That's the pain you pay for. There is another way: TCPA, and IBM has written the requisite Linux tools to utilize it properly. It's just that everybody's too scared to use it.

Just to clarify for anyone who misunderstood this, I believe bluefoxlucid was referring to a swap partition *on a local hard drive* and not on the USB flash drive. For example, if you're booting on a machine that normally runs Linux, there is a very good chance that a local swap partition already exists. Just be wary about what you may leave behind if you're not careful.
      • No, put it on flash. It'll be faster than a local hard drive and won't leave crap behind; though in practice, the seek speed probably won't make a difference unless you're about 2 seconds away from resetting the system for not responding anymore.
        • Re: (Score:3, Informative)

          by Anonymous Coward

          No, put it on flash. It'll be faster

          No, it will most certainly be much slower. The memory page size is 4K, whereas the typical flash block size is much bigger. Flash can only be erased in blocks, so to write one small 4K page to flash memory, the controller has to read a whole block, erase the flash block and write back the modified block. That's why most flash storage devices suck so badly at storing many small files, even though the continuous write speed isn't too bad.

          The performance penalty could be hidden by the OS: Instead of swapping o
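The erase-block penalty described above is easy to put numbers on; a back-of-the-envelope sketch, assuming a 4 KiB memory page and a hypothetical 128 KiB erase block (real block sizes vary by device):

```shell
# Assumed sizes: 4 KiB memory page, 128 KiB flash erase block (device-dependent).
page_bytes=4096
erase_block_bytes=131072

# Writing one page forces a read-modify-write of the whole erase block,
# so the worst-case write amplification is the ratio of the two sizes:
amplification=$((erase_block_bytes / page_bytes))
echo "worst-case write amplification: ${amplification}x"
```

With those assumed sizes, a single 4 KiB swap-out can cost a 128 KiB erase-and-rewrite, which is why small random writes hurt so much more than sequential ones.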

The use case for swap isn't so much writing as reading. Swap gets written to only once in a while; when memory is read back from swap, it's loaded into the swap cache, which (if clean) is usually invalidated to make room for more swap. When a page written to the swap cache is used enough, it stays resident in memory until it's pushed out again; otherwise, it hangs out in the swap cache until it's forced out and written back. It's complicated, but basically there's a lot of write avoidance, because writing is ob
        • Seriously? You're claiming that USB 2.0 bus bandwidth exceeds SATA 150? Or even ATA66? I would be surprised if USB2 even beats ATA33 during sustained write operation (think DMA).

          Or perhaps you're claiming that raw disk I/O on commodity hard drives is slower than I/O to commodity NAND through a wear-leveling FTL IC??

          No, both are slower.

USB 3.0 may be faster, but before then we'll have SATA/600 to much faster (possibly flash-based solid-state!) disks, anyway.

          • For scattered reads of 4k in random order, backward and forward seeking, not possible to batch up, in a 32M or 256M contiguous area? Yes, zero seek time is faster.
          • by mathew7 ( 863867 )

Please stop comparing THEORETICAL speeds. A car needs 400+ HP to reach 300 km/h, but a bike can do it with 150 HP. In which one would you be brave enough to try it, though?
A HDD's speed is determined by platter rotation and density. The only consumer HDDs I can think of that reach SATA150 speeds are the WD Raptors, and even those I remember topping out around 120MB/s. You can get 300MB/s from a SATA300 HDD only by reading/writing exclusively to its cache, but that data would already be cached in RAM by the OS.
            The fact is that only this year

      • Re: (Score:2, Interesting)

        by scotsghost ( 1125495 )

        Or on a mostly-Windows machine, you can always use a local-drive-based swapfile (there's likely no swap partition). Mount NTFS drive, create a "myswapfile" somewhere innocuous, mount it as a loopback swap partition (-o loop). Delete it on unmount (as part of your shutdown process) -- if you're truly paranoid you can even take the time to scrub the sectors your swap was using.

        Don't swap to the flashdrive -- you'll just hog USB bandwidth that you need for reading & writing real files off your root parti
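A minimal sketch of that swap-file approach (the path and size are examples, and you'd run this as root; note that on Linux, `swapon` can use a regular file directly, so a loop device isn't strictly needed):

```shell
# Example path on the mounted NTFS drive -- adjust to your mount point.
SWAPFILE=/mnt/ntfs/myswapfile

if [ -d "$(dirname "$SWAPFILE")" ]; then
    dd if=/dev/zero of="$SWAPFILE" bs=1M count=256   # 256 MiB is an example size
    chmod 600 "$SWAPFILE"                            # swap must not be world-readable
    mkswap "$SWAPFILE"        # write the swap signature
    swapon "$SWAPFILE"        # enable it (needs root)

    # ... later, as part of shutdown ...
    swapoff "$SWAPFILE"
    shred -n 1 "$SWAPFILE"    # paranoid: overwrite before deleting
    rm -f "$SWAPFILE"
fi
```

The `shred` pass is the "scrub the sectors" step the post mentions; skip it if you're not worried about forensic recovery from the host machine.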

    • Re: (Score:1, Funny)

      by Anonymous Coward

      The best thing to do is make a ram disk and use that as swap space. Swap space should be fast because it is paging things out of main memory. Ram is much faster than flash, ide, or scsi.

      • The best thing to do is make a ram disk and use that as swap space.

        So the OS can use part of the memory as swap, rather than memory, and when the part it is using as memory is full it can copy it to the part it is using as disk? (Except it won't, because if I recall correctly the RAM disk, along with disk buffers, are all dynamically assigned in memory, so when you need to swap there's nowhere to swap to.)

        Were you trying to be funny? Or just unintentionally managed it?

        • Obviously that won't do any good. You need to create a RAM disk using virtual memory mapped to space on the USB drive.
          • As has been pointed out elsewhere: Writing to the USB drive gradually burns it out. So you don't want to do a lot of swapping to it. (Also: It's REALLY slow for a swap device.)

            • by Yert ( 25874 )
              Dude. Not only once, but TWICE.... woooooooooosh.
              • Re: (Score:3, Insightful)

                Yert: It is not clear that the original post was a joke.

                "Whoooosh" is fun for playing elitist games. But I'm more interested in helping people avoid being "whoooosh"ed into a lot of lost time, effort, and perhaps compromised data because they missed something that another poster thought was obvious and funny.

USB has high CPU use, and encryption doesn't help. A FireWire-based flash drive will be a lot faster. USB 3.0 may help, but it may still be stuck with the same high CPU use as USB 2.0.

  • Hrm. (Score:5, Insightful)

    by Creepy Crawler ( 680178 ) on Wednesday December 03, 2008 @06:25PM (#25981653)

Well, the sucky thing about USB is that it requires an inordinate amount of CPU. Normally this isn't a worry, but if you're using an encrypted loopback... well, ouch.

One thing you could use instead is the SD card slot and a USB loader. If you choose an 8GB class 6 SD card, you'd have plenty of room for whatever, plus a 6MB/s minimum speed. You're still going to take the CPU hit for encryption, but that is your choice. The big thing is to stay off the USB bus.

    • by Fred_A ( 10934 )

      The internal SD card reader is typically connected to the USB bus so it probably won't change performance all that much.

      • Re:Hrm. (Score:4, Informative)

        by Creepy Crawler ( 680178 ) on Wednesday December 03, 2008 @07:07PM (#25982199)

True, but some laptop and desktop designs have moved away from hanging everything off USB.

For example, on my Thinkpad the SD reader is on its own bus. The Bluetooth is a USB 1.1 (grr) device, so I need to rmmod the bt modules and remove the old USB modules to be power efficient.

        It really is a crapshoot on what the computer maker put on the USB bus. I just lucked out.

        • by mathew7 ( 863867 )

Is it a T61?
Anyway, the idea is that a USB card reader is very (very, very) cheap to manufacture. So even if it is an ExpressCard or PCMCIA card reader, it's actually a USB card-reader chip plus a USB host controller on PCI (PCMCIA) or PCIe (ExpressCard). And as I remember, ExpressCard can use either PCIe or USB, so in the latter case the USB host controller is the one in your laptop's chipset.
So I really doubt you have your SD card reader on its own bus. Maybe on its own USB bus. But I'm 99% sure it is

    • by cuby ( 832037 )
Usually SD readers use a USB interface inside, so they will also have high CPU usage. The 6MB/s is a real figure; I have a class 6 card that does it, but with large files you can see periodic freezes during transfer, I think because of the buffer system used by the SD card.
I don't like USB for the same reasons. IEEE 1394 (FireWire) is a much better bus in some respects, like CPU usage. FireWire flash drives exist, and they work great under GNU/Linux.
  • by Paradigm_Complex ( 968558 ) on Wednesday December 03, 2008 @06:29PM (#25981711)
One thing I've found really, really helps is to use smaller programs. While the difference in startup time between gnome-terminal and rxvt, or nautilus and pcmanfm, is minimal on a normal, modern desktop or laptop, it is substantial on a cheap USB flash drive. There are plenty of lists of lightweight applications, window managers, etc. for Linux online. In fact, I'll often just stick with terminal applications (moc, for instance).

    Another option, if you're booting on a box which has a good internet connection, is to ssh -X things over a network. Not only does this save a large amount of space, but I've found it's often faster to have a program like Firefox start on my snazzy box at home and ssh -X over than waiting for it to load off of my crappy usb drive.
  • USB1 vs USB2 (Score:5, Interesting)

    by Mendy ( 468439 ) on Wednesday December 03, 2008 @06:30PM (#25981715)

    It might be that the poor performance occurs when you're on a computer that only has USB1 support. On Dells this was added later than you might expect.

You might find you get better performance if you use a CD to hold most of the static software and the USB drive for just your home directory.

    • by snilloc ( 470200 )
      There was a period of time on Dells where the front-access USB was USB-1, but the USB access on the rear of the machine was USB-2.
by liraz ( 77590 ) * on Wednesday December 03, 2008 @06:41PM (#25981875) Homepage

I know a little bit about this because I am one of the developers of TurnKey Linux, a new open-source project that builds small installable live CDs (we're up to 9) optimized for various, mostly server-related, tasks. I've been investigating support for a live USB mode.

Your generic run-of-the-mill USB drive has about a fourth to half the read/write performance of a hard drive nowadays (10-15MB/s). Since there are no moving parts (spinning platters), the seek times are usually very good.

    There are several things you can do to optimize the performance of an operating system running live from a USB drive:

1) Buy a faster USB drive: a good USB drive (e.g., a Lexar JumpDrive) can have 2-3 times the performance of a generic one.

2) Use a Linux distribution with a smaller footprint, such as DSL (50MB) or Puppy Linux (the standard edition is 68MB): the smaller the footprint, the less your drive has to read and the faster your system will load.

3) Try loading the operating system into a ramdisk: many live USB distributions can load themselves into RAM. With some you have to add a cheatcode in the bootloader; others do it by default if there is enough memory (usually not a problem with small distributions and modern computers).

4) Try turning on readahead: many distributions designed to run from a live CD or live USB have a feature that sequentially reads ahead various files important to the boot sequence. Whether or not this helps depends on the characteristics of the storage medium you are using, but you should investigate it.

    • Re: (Score:1, Informative)

      by Anonymous Coward

Puppy Linux can load itself into RAM, and then it no longer needs the boot medium.

      • Re: (Score:2, Informative)

        by qaz20 ( 264928 )

Puppy has also done a lot of optimizing for running on a USB stick, and it can handle encrypted partitions. Check it out.

Can't he keep /bin & /usr/bin on the HDD and just have the kernel load into RAM anyway?

    • Re: (Score:1, Informative)

      by Anonymous Coward

      2) Use a Linux distribution with a smaller footprint such as DSL (50MB) or Puppy Linux (standard edition is 68MB): the smaller the footprint, the less your drive has to read, the faster your system will load.

      Puppy does exactly what the Original Poster asks for. It loads everything into RAM and when you shut down/reboot it asks if you want to save the changes to disk - this includes newly installed packages, updates, your documents, settings etc. You have the option for this "save file" to be encrypted (and password protected) or not.

  • by agristin ( 750854 ) on Wednesday December 03, 2008 @06:42PM (#25981899) Journal

Make sure you are on USB 2.0 -- the interface can kill you.

Also, did you check the FAQ?

No, seriously -- the Linux USB FAQ,

    especially the section on:

    Q: What is max_sectors and how should I use it?

A: For USB Mass Storage devices (that is, devices which use the usb-storage driver), max_sectors controls the maximum amount of data that will be transferred to or from the device in a single command. As the name implies, this transfer length is measured in sectors, where a sector is 512 bytes (that's a logical sector size, not necessarily the same as the size of a physical sector on the device). Thus, for example, max_sectors = 240 means that a single command will not transfer more than 120 KB of data.
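Reading and setting max_sectors is a one-liner against sysfs; a sketch (the device name `sdb` is an assumption -- check dmesg for what your stick enumerated as):

```shell
# The device name sdb is an assumption -- check dmesg for your stick's name.
dev=/sys/block/sdb/device/max_sectors

if [ -w "$dev" ]; then
    cat "$dev"           # current limit, in 512-byte sectors
    echo 240 > "$dev"    # 240 sectors * 512 bytes = 120 KB per command
else
    echo "no such device (or not root); adjust the path for your system"
fi
```

The setting is per-device and does not survive a replug, so distributions usually apply it from a udev rule or boot script.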

  • by nightfire-unique ( 253895 ) on Wednesday December 03, 2008 @06:58PM (#25982103)

Most USB flash drives are very slow and rely on heavy caching to make them usable. That doesn't help when you need to write large amounts of data (e.g., in an apt-get update/install).

Some flash drives that advertise themselves as capable of 10-15MiB/sec writes peak at only 1 or 2, and even less with small-block random I/O (since the erase-write cycle operates on relatively large blocks).

Several vendors make specialized flash drives that are somewhat more expensive (around 20-50% over average) but perform much better.

One is the OCZ Turbo USB 2.0.

    • Re: (Score:3, Informative)

      by D_Gr8_BoB ( 136268 )

You could also skip flash entirely and buy a very small hard drive. I've got a 60GB USB drive from Apricorn that I carry around in my pocket, with an AES-encrypted root filesystem. Performance isn't spectacular, but it's certainly usable.

  • Use Puppy (Score:1, Informative)

    by Anonymous Coward

    Puppy Linux is tiny and is set up to boot off of USB. After it's booted, if the system has enough RAM, the entire system is loaded into RAM. Makes for a damn snappy system.

I/O to RAM is really fast compared to I/O to any block device (most USB keys appear to the host PC as block devices, because they have a little ARM7 or other low-power MCU on them that emulates a disk interface). So, maybe you could get a speedup by mounting any I/O-intensive parts of your filesystem on a ramdisk. It might also save wear and tear on the flash (though MLC NAND is never going to be all that reliable). Here's an fstab excerpt showing one technique:
/dev/ram0 /tmp tmpfs defaults,nodev,nosuid 0 3
    • On Debian-based distros like Ubuntu, /dev/shm is a ramdisk which will automatically resize, compared to /dev/ramX which gets a bit weird about changing size.
    • Re: (Score:3, Informative)

      by Solra Bizna ( 716281 )

      /dev/ram0 /tmp tmpfs defaults,nodev,nosuid 0 3
      /dev/ram1 /var/run tmpfs defaults,nodev,nosuid 0 3
      /dev/ram2 /var/log tmpfs defaults,nodev,nosuid 0 3

Just to let you know, tmpfs ignores the device path; you can put whatever you want there (so you aren't actually using /dev/ram* with those entries).
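Since the device field is ignored, the conventional way to write the same excerpt is with `tmpfs` in the first column; a config sketch (the `size=` caps are example values, not from the original post):

```
# /etc/fstab -- tmpfs ignores the device field, so "tmpfs" is the usual spelling.
# The size= option caps each ramdisk so a runaway log can't eat all your RAM.
# fsck pass (last field) is 0: there is nothing on disk to check.
tmpfs  /tmp      tmpfs  defaults,nodev,nosuid,size=64m  0  0
tmpfs  /var/run  tmpfs  defaults,nodev,nosuid,size=8m   0  0
tmpfs  /var/log  tmpfs  defaults,nodev,nosuid,size=16m  0  0
```

Keep in mind that anything mounted this way (logs especially) vanishes on every reboot.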


Distros like Damn Small Linux have a mode where all frequent writes go to a RAM disk. Current flash hardware (especially SSDs -- less so cheap USB sticks) is already a lot smarter about wear levelling, but a standard distro generates a whole lot of small write activity. It would be nice if there were an out-of-the-box way to make a server distro like Ubuntu Server USB-ready. My file server could shut down its 5 disks completely until I access the files over the network. This could save quite some energy (and
  • by mbyte ( 65875 ) on Wednesday December 03, 2008 @07:16PM (#25982309) Homepage

Try out different filesystems; NILFS seems to be optimized for flash usage.

Btrfs could also be worth a try.

Use the "noop" I/O scheduler with nilfs:
    echo noop > /sys/block/sdX/queue/scheduler

Postmark benchmarks on a USB stick (shamelessly copied from elsewhere):
ext3 (mount -o noatime,nodiratime, normal partition, scheduler cfq): 49 transactions/s
nilfs2 (partition aligned to 128k, scheduler noop, protection_period 10s): 588 transactions/s

    • Re: (Score:1, Interesting)

      by Anonymous Coward

Wow -- it seems like you're the only person who has actually posted the correct information. ext3 is the wrong filesystem to use on flash, and noop is a much better scheduler for it.

      Do you have any numbers on just noop vs cfq when the same file system is used on a USB stick?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      I'm holding out for the MILFS filesystem.

    • Try out different filesystems, NILFS

      If you want to run a layer of compression and then one of encryption before hitting the NILFS backend, would you then need to use two instances of CONSFS?

  • what I do (Score:4, Insightful)

    by ILuvRamen ( 1026668 ) on Wednesday December 03, 2008 @07:20PM (#25982347)
I have an old 10GB laptop drive inside a very low-profile USB enclosure, and it runs something like 35x faster than my USB flash drive. It's a little more sensitive to bumps, but 10GB drives aren't exactly expensive: you can get a 6-pack of used ones on eBay for about $3-7 each. Best of all, in a good enclosure it still fits in your pocket.
  • busybox (Score:5, Insightful)

    by nategoose ( 1004564 ) on Wednesday December 03, 2008 @07:21PM (#25982357)
You might want to try replacing as many programs as you can with busybox. Its versions of the utilities are less complete than the standard GNU ones, but they are all rolled into one binary, so most of that binary will most likely get cached in RAM early and stay there.
Also, for any packages you build, you should try the -Os option for gcc, and perhaps even strip the binaries to remove unused symbols and debug info.
Building the system as though it were an embedded system with a small disk should be a win in most cases, since fewer bytes have to go over the wire to load a file and more of the binaries can fit into cache.
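The size win from `-Os` plus `strip` is easy to see on a toy program; a sketch (assumes gcc is installed; the file paths are examples):

```shell
# A trivial program to compare sizes with (contents are just an example).
cat > /tmp/hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

gcc -Os -o /tmp/hello /tmp/hello.c   # optimize for size rather than speed
ls -l /tmp/hello                     # size with symbol table intact

strip /tmp/hello                     # drop symbols and debug info
ls -l /tmp/hello                     # noticeably smaller binary

/tmp/hello                           # still runs: prints "hello"
```

On a real package the absolute savings are much larger, and every byte saved is a byte that never has to cross the slow USB bus.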
  • I'm surprised no one mentioned this (or maybe someone has and it's just under my threshold), but not using a journalling filesystem can help tremendously. Having a whole system on a flash-based USB mass storage media formatted and mounted as ext3 is a great way to make sure the only bottleneck you'll ever have is disk I/O.

  • by Anonymous Coward

    I run exclusively off a flash drive. I'm nomadic and it's easier to haul around than a laptop.

    Many apps like to call fsync() needlessly, causing many writes to occur to the flash drive. Buffering all these writes in RAM until you halt the system works around this.

One of the largest performance gains I've gotten is from using tmpfs to store all of Firefox's data.

The strategy is as follows:
On boot, mount ~/.firefox as tmpfs and extract the backup tarball into this dir.
On halt, generate a tarball of ~/.firefox.

    Some scripts ar
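The boot/halt halves of that strategy can be sketched like this (the profile directory and tarball path are examples from the post; the tmpfs mount needs root):

```shell
PROFILE="$HOME/.firefox"                  # example profile dir from the post
BACKUP="$HOME/firefox-backup.tar.gz"      # example tarball path

# --- at boot: put the profile on RAM and restore the last saved state ---
if [ -d "$PROFILE" ] && [ -f "$BACKUP" ]; then
    mount -t tmpfs -o size=256m tmpfs "$PROFILE"   # needs root; size is an example
    tar -xzf "$BACKUP" -C "$PROFILE"
fi

# --- at halt: persist the RAM copy back to the flash drive ---
if [ -d "$PROFILE" ]; then
    tar -czf "$BACKUP" -C "$PROFILE" .
fi
```

The trade-off is that a crash or power loss discards everything since the last tarball, so anything you can't afford to lose shouldn't live only in the tmpfs copy.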

  • by Mr Z ( 6791 ) on Wednesday December 03, 2008 @11:38PM (#25984601) Homepage Journal

    Dave Jones recently posted elsewhere his notes for improving things on the eee900. Several of the steps focus on getting proper performance out of the flash, and so would also apply to booting from a USB thumb drive or other flash media. Here's what he had to say:

    Making the eee 900 series suck less.

    Recently I've read about or spoken with a few people using Fedora on eeepc's who have been making some fairly big blunders without realising it. I've played with a few of these now, in their various incarnations.

The current one I've been carrying around is the 900 model with a whopping 20GB of flash. It's quite deceptive, because there are actually two SSDs in there (one 4GB and one 16GB).

    These ssds are also pretty damn awful performance-wise compared to the newer generation of SSDs, but short of opening it up and retrofitting something, there's not much that can be done. The tips below should at least make it more bearable.

    • First off, don't use the default partitioning scheme.

      By default, anaconda will choose to use lvm, and make a contiguous volume out of the two SSDs. This idea is fail, because the two disks aren't the same, and run at different speeds.

      # hdparm -t /dev/sda

      Timing buffered disk reads: 108 MB in 3.04 seconds = 35.57 MB/sec

      # hdparm -t /dev/sdb

      Timing buffered disk reads: 86 MB in 3.05 seconds = 28.20 MB/sec

      So, don't do that. Just create regular partitions, and make sure you put / on the faster of the two disks (the 4GB one), and leave the 16GB one for /home

    • Next, the default filesystem will be ext3. You really don't want this.

      Given the journal is in a fixed location on disk, scribbling to it every time a file gets written is a great way to wear out the flash. Go with ext2. (Given that you've only got a few GB of flash anyway, a fsck doesn't take that long should you need to). Additionally, not having to write to the journal means that you're doing less IO, which is obviously a win when it's on such slow media.

    • This should go without saying - no swap.

      Not only for the flash wear problem in the previous bullet, but also, because it's slow as all hell. If you find you run out of ram and get stuff oom-killed in this setup, well, you probably need to add more ram, or consider a real laptop.

    • After installing, change the fstab so that everything gets mounted with noatime.

      Writes to the disk are just painful, so minimising them is the path to success here. It's doubtful that you'll be running anything on an eee that would actually care about atimes anyway.

    With these points taken into consideration, the eee isn't a half-bad machine. I still wouldn't want to be building kernels and such on it, but it's perfectly usable for email and such whilst travelling.
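The noatime step above can be scripted; a sketch, assuming the fstab entries use the stock `defaults` option (work on a copy and review before installing it):

```shell
# Work on a copy first; move it into place once you've reviewed it.
cp /etc/fstab /tmp/fstab.new

# Append noatime to entries that use bare "defaults"; the /noatime/! guard
# keeps the edit idempotent if you run it twice.
sed -i '/noatime/! s/defaults/defaults,noatime/' /tmp/fstab.new

diff /etc/fstab /tmp/fstab.new || true   # eyeball the change
# then, as root:  cp /tmp/fstab.new /etc/fstab
```

Entries with hand-tuned option lists won't contain "defaults" and would need noatime added by hand, which is why reviewing the diff matters.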

How about disabling syslogd, or decreasing its log priority so it logs only critical or emergency messages, and/or bind-mounting /tmp, /var/tmp, and /var/log on a dynamic tmpfs or static ramfs, so logs and variable files never hit the flash?
      • by Mr Z ( 6791 )

        I'm not sure much gets logged on a properly functioning system with few services enabled. I just looked at my /var/log/messages on my Ubuntu system. It's pretty mind-numbingly boring:

        Nov 30 07:53:25 metal syslogd 1.5.0#1ubuntu1: restart.
        Nov 30 08:08:06 metal -- MARK --
        Nov 30 08:28:06 metal -- MARK --
        Nov 30 08:48:06 metal -- MARK --
        Nov 30 09:08:06 metal -- MARK --
        ... several more pages of the same ...
        Dec 4 07:08:06 metal -- MARK --
        Dec 4 07:28:06 metal -- MARK --
        Dec 4 07:48:06 metal -- MARK --

  • "What can I do to maximize the performance of Linux while running off of a slow medium?"

Stop trying to fight physics and use a different technology.

  • by blanchae ( 965013 ) on Thursday December 04, 2008 @02:38AM (#25985651) Homepage
There's quite a discrepancy in speeds across flash drives. Cheap flash drives run USB 1.1, with transfer rates around 1 MB/sec. USB 2.0 drives range from a slow 10 MB/sec to close to 40 MB/sec; the fastest drives will easily cost over $100. The size of the drive also affects the transfer rate: 4 GB drives are faster than 8 GB, and so on. Corsair GT drives have close to 35 MB/sec read speeds; write speeds are always dramatically slower. I've installed a bootable PBX in a Flash on a 4 GB Corsair USB flash drive with very acceptable performance for teaching purposes. I can see other LAMP bootable installations popping up. Each student can have their own server to configure and boot up, then shut it down and take it home.
  • Needless encryption (Score:1, Interesting)

    by Anonymous Coward

Do you really need to encrypt everything? Why not create a separate partition for your /home (and /var, /etc, and /tmp if you really want), and encrypt that/those but not the rest? I can't see why there'd be anything confidential in the other directories, especially since almost all the rest is probably open source anyway.

If the ordinary experience is acceptable, try running background or I/O-intensive non-interactive jobs under the ionice command, such as
ionice -c 3 apt-get update
(Make it suid or run it as root.)

  • If you want a thin, snappy solution, try Puppy Linux. It loads itself into memory and runs from there.

However, I have been using a full Ubuntu install on a 4GB USB drive with some modifications to optimize writes. I used unionfs to transparently overlay a ramdisk on top of some directories that are likely to be written to (/var, /etc, /home, /usr, etc.). Unionfs provides a merged view of the overlaid directory and the ramdisk, while disallowing writes to the overlaid directory. What this means is that when I

  • So in other words, this guy is concerned that somebody steals his USB drive, decrypts the passwords in /etc/passwd and /etc/shadow and then does what? Find the key for his encrypted files? Which he conveniently stored on the USB drive and used a weak password so somebody COULD crack it?

    Is this guy a member of Al Qaeda or being otherwise actively hunted by the CIA, FBI, DIA, MI6, Interpol, and the Mossad? Or is he a child molester with kiddie porn on his USB drive? The Treasurer of AIG?

    What level of parano

  • 1) Fastest USB thumb drive which is rugged and has 4 GiB or more of storage?
    2) Fastest SDHC card with 4 GiB or more of storage?


Use a compressed read-only filesystem like Squashfs. For genuinely read-only directories (like /usr) it can be used directly; for directories that need to be writable, the read-only Squashfs can be combined with aufs/unionfs to make them writable. A number of people have mentioned systems like Puppy, and that is exactly what those systems do.

    Using a compressed read-only filesystem not only solves the wear-levelling problem, but it makes accesses faster because less data has to be read. It also means more d
