
DSS/HIPAA/SOX Unalterable Audit Logs?

analogrithems writes "Recently I was asked by one of the suits in my company to come up with a method to comply with the new PCI DSS policy that requires companies to have write-once, read-many logs. In short, the requirement is a secure method to ensure that once a log is written, it can never be deleted or changed. So far I've only been able to find commercial and hardware-based solutions; I would prefer to use an open source solution. I know this policy is already part of HIPAA and soon to be part of SOX. It seems like there ought to be a way to do this with cryptography and checksums to ensure authenticity. Has anyone seen or developed such a solution? Or, if you've been through this, how did you achieve compliance?"
  • by Anonymous Coward on Wednesday August 01, 2007 @02:29AM (#20067393)
    Optical media are great for write once, read many.
  • by edashofy ( 265252 ) on Wednesday August 01, 2007 @02:40AM (#20067473)
    Cryptography, digital signatures, and checksums can only take you so far. They can detect tampering pretty easily. However, crypto can't prevent someone from deleting a file, although by checksumming or signing a whole batch of files you could at least detect the deletion of one of them (a minimal sketch of that idea follows this comment). Ultimately, if you really want permanence, you need to write it out (as an above poster suggested) to some sort of write-once media. CD-Rs or DVD-Rs would obviously fit the bill here, although one can, of course, "delete" a CD-R by simply throwing it out.

    Another cheap write-once medium is paper; I suppose you could purchase a laser printer (or even a line printer) and have it spit out the logs as they occur. If you kept the printer in a locked transparent box, nobody but the people with the keys would have access to the output.

    You could burn the logs onto PROMs as well, that's pretty permanent :)

    Anything on magnetic or flash media can be erased or tampered with somehow, unless the drive controller hardware itself prohibits overwriting existing data. Even then you're relying on someone not being able to replace the drive controller or take the drive apart and diddle the platters/flash chips directly (although I suppose a decent amount of epoxy could thwart this). Any software-based solution can be tampered with in theory. One hacker favorite (which may or may not be a legend) is that people used to get root on other people's boxes and then replace their copy of PGP with an instrumented copy. Thus, even the encryption software became compromised.

    For compliance, though, I'm not sure what kind of oversight you have to have. At the end of the day, somebody has to be trusted with these logs, and that person would almost assuredly have the power to destroy them, or at least portions of them.
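
    A minimal sketch of the chained-checksum idea above, in Python (the seed and record layout are invented for illustration). Each entry's digest covers the previous digest, so editing or deleting any entry breaks verification from that point on:

        import hashlib

        SEED = b"log-chain-seed"  # hypothetical anchor, published out of band

        def chain(entries):
            """Yield (entry, digest) pairs; each digest covers the previous
            digest plus the entry, forming a tamper-evident chain."""
            prev = hashlib.sha256(SEED).hexdigest()
            for entry in entries:
                prev = hashlib.sha256((prev + entry).encode()).hexdigest()
                yield entry, prev

        def verify(records):
            """Recompute the chain; any edit or deletion shows up as a mismatch."""
            prev = hashlib.sha256(SEED).hexdigest()
            for entry, digest in records:
                prev = hashlib.sha256((prev + entry).encode()).hexdigest()
                if digest != prev:
                    return False
            return True

        records = list(chain(["alice: login ok", "bob: login failed"]))
        assert verify(records)

    Note that truncating the tail of the chain is only detectable if the latest digest is anchored somewhere the attacker can't reach (printed, burned to disc, or mailed off-site), which is exactly the parent's point.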
  • One-way data cable (Score:5, Interesting)

    by rjh ( 40933 ) <rjh@sixdemonbag.org> on Wednesday August 01, 2007 @02:47AM (#20067509)
    At USENIX EVT '06 last year, a team from the University of Iowa presented a cheap one-way data cable you could make with off-the-shelf parts from Radio Shack. Total cost is about $5 in bulk (maybe $10 if you're buying single units), and it is provably, auditably one-way. It was originally developed for electronic voting, to allow counting computers to communicate with webservers that post election results. An attacker compromising the webserver cannot attack the counting computer, because there is literally no return path.

    It works with very high reliability up to about 9600 baud.

    You may be able to use this to your benefit. Have an isolated system, air-gapped from the rest of the network, which listens for log events on a one-way data cable (a rough sketch of such a listener follows this comment). You're no longer guaranteed to be safe, since if a logging PC is compromised, an attacker could send corrupted data to the syslog PC and perhaps cause some sort of mayhem, but the lack of a return path makes interactive attacks infeasible.

    ObDisclosure: I am a graduate student at UI and know the guy who invented the data cable, although I am not associated with the gadget.
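
    A rough sketch of the receiving side described above, assuming plain UDP syslog-style datagrams and a local spool directory (both hypothetical). The listener only ever reads from the socket and appends to a dated file, so it works unchanged over a hardware one-way link:

        import socket
        from datetime import datetime, timezone

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5514))  # unprivileged stand-in for syslog's 514

        while True:
            data, addr = sock.recvfrom(8192)  # read-only: we never send a reply
            now = datetime.now(timezone.utc)
            path = "/var/spool/wormlog/%s.log" % now.strftime("%Y%m%d")
            with open(path, "ab") as f:  # append, never seek or overwrite
                f.write(b"%s %s %s\n" % (now.isoformat().encode(),
                                         addr[0].encode(), data))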
  • by Nefarious Wheel ( 628136 ) * on Wednesday August 01, 2007 @02:49AM (#20067521) Journal
    Wouldn't work in Australia; compliance penalties apply if you can't dredge up the data within a specified period of time. YMMV, but it'd be worth checking what the regs actually require. A good reference is this little PDF I found: http://www.ironport.com/pdf/ironport_email_compliance_guide.pdf [ironport.com]

    Personally, I'd think about a hardware solution: block replication off-site to a third-party registry. When you're talking compliance (especially fiduciary compliance), it's usually easy to come up with the bucks, so dream up something right and propose it.

  • by Ptur ( 866963 ) on Wednesday August 01, 2007 @03:15AM (#20067653)
    I would dump it in Git or the like... any change to it will be recorded ;) Seriously, many version control systems already contain the data integrity and authenticity checks that you need.
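
    For what it's worth, a toy version of that in Python (hypothetical repository path; assumes the git CLI is installed). Each rotated log gets committed, and git's content-addressed storage makes any later edit to an archived file show up immediately:

        import os, shutil, subprocess

        REPO = "/var/log-archive"  # hypothetical archive repository

        def archive_log(path):
            """Copy a rotated log into a git repo and commit it."""
            if not os.path.isdir(os.path.join(REPO, ".git")):
                subprocess.run(["git", "init", REPO], check=True)
            dest = os.path.join(REPO, os.path.basename(path))
            shutil.copy2(path, dest)
            subprocess.run(["git", "-C", REPO, "add", dest], check=True)
            subprocess.run(["git", "-C", REPO, "commit", "-m",
                            "archive " + os.path.basename(path)], check=True)

    Keep in mind this detects tampering rather than preventing it; anyone with write access to the repository can still rewrite history, so you'd want to push to a remote that refuses forced updates.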
  • Guy Fawkes Protocol (Score:5, Interesting)

    by LilBlackKittie ( 179799 ) on Wednesday August 01, 2007 @03:24AM (#20067693) Homepage
    Some of the work I do may require something like this, so I'm considering implementing Guy Fawkes over syslog.

    http://www.cl.cam.ac.uk/~rja14/Papers/fawkes.pdf [cam.ac.uk]

    From the paper:

    6.2 Tamper-evident audit trails

    It is a well known problem that an intruder can often acquire root status by using well known operating system weaknesses, and then alter the audit and log information to remove the evidence of the intrusion. In order to prevent this, some Unix systems require that operations on log and audit data other than reads and appends be carried out from the system console. Others do not, and it could be of value to arrange alternative tamper-evidence mechanisms.

    A first idea might be to simply sign and timestamp the audit trail at regular intervals, but this is not sufficient as a root intruder will be able to obtain the private signing key and retrospectively forge audit records. In addition, the intervals would have to be small (of the order of a second, or even less) and the computation of RSA or DSA signatures at this frequency could impose a noticeable system overhead.

    In this application, the Guy Fawkes protocol appears well suited because of the low computational overhead (two hash function computations per signature) and the fact that all secrets are transient; this second's secret codeword is no use in forging a signature of a second ago.
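
    Very loosely, the per-interval mechanics look something like this (a simplified Python sketch of the idea, not the full protocol from the paper; interval handling and the initial out-of-band commitment are glossed over). Each block is MACed with a transient secret whose hash was committed one interval earlier and which is revealed one interval later, so a root intruder arriving now cannot forge already-sealed records:

        import hashlib, hmac, os

        class FawkesChain:
            """Toy Guy Fawkes-style chain over log blocks."""
            def __init__(self):
                self.prev_secret = None       # revealed with the next seal
                self.secret = os.urandom(32)  # this interval's codeword;
                                              # h(secret) published out of band
            def seal(self, block: bytes) -> dict:
                next_secret = os.urandom(32)
                record = {
                    "mac": hmac.new(self.secret, block,
                                    hashlib.sha256).hexdigest(),
                    "commit_next": hashlib.sha256(next_secret).hexdigest(),
                    "reveal_prev": self.prev_secret,  # verifies last interval
                }
                self.prev_secret, self.secret = self.secret, next_secret
                return record

    As the paper notes, that's only a couple of hash computations per interval, and each revealed secret is useless for forging anything already sealed.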
  • Re:Syslog (Score:3, Interesting)

    by Boricle ( 652297 ) on Wednesday August 01, 2007 @03:28AM (#20067715) Homepage
    Probably the same thing that stops you from scanning in the old printout, modifying it, printing it out again, and putting it back in the stack.

    i.e. Nothing, really.

    However, if you have the CDs or tapes signed and dated by the ops staff, then shipped to off-site secure storage, you've made it harder to falsify.

    The interesting issue is that if you are organized enough, what's to stop you from intercepting the messages on the way to the printer / CD-R?

    The only way I could see around this is some kind of trusted computing style initiative.

  • Re:Syslog (Score:3, Interesting)

    by cerberusss ( 660701 ) on Wednesday August 01, 2007 @04:55AM (#20068099) Journal
    Hmyeah, I agree a line printer is good as an addition; however, paper is hardly searchable. I bet one of the requirements would be to have an auditing interface searchable by user, date, et cetera.
  • by Jah-Wren Ryel ( 80510 ) on Wednesday August 01, 2007 @05:42AM (#20068303)

    They are not very good at tasks which involve writing a lot in small increments, like a log. The sector size is quite big, so if you guarantee that each log entry is physically flushed to disc without caching until the sector is full, the disc will be eaten in no time.
    I seem to recall that DVD+R was designed to work around that problem. The thinking at the time was that people would use DVD+R media like they used VHS tapes, to record TV with the ability to pause and/or stop/restart recording frequently. They wanted to avoid the inefficiency of CD-R and DVD-R, which are very wasteful on start/stop record operations, as you indicated.

    I really can't dig up the link; it was years ago that I read this and Google ain't cooperating right now, but I recall that whereas a recording pause could waste up to an entire track (once around the disc) with DVD-R, a DVD+R recorder would waste at most one sector (on the order of a few kilobytes).
  • by netglen ( 253539 ) on Wednesday August 01, 2007 @08:25AM (#20069027)
    What about the old school method of dumping the log directly to a line printer?
  • by Bright Apollo ( 988736 ) on Wednesday August 01, 2007 @08:44AM (#20069175) Journal
    I work in a regulated industry, and this is an ongoing topic at pharmaceuticals. Basically, you weigh the cost of non-compliance versus compliance, figure out what that risk is worth to your business, then try to spend as little as possible to mitigate the risk until the cost is acceptable.

    There is no such thing as 100% compliance or security. Oracle makes a big deal out of their data vault tech, but there's someone out there who can circumvent it. You need to figure out your comfort level for the risk, and in big corps, this is a financial decision.

    Which leads me to this: there is no "roll your own" compliance software. You do not want to assume the responsibility of proving to auditors that your software is correct and fully functional. That is a difficult process, and it will make your dev team crazy with paperwork. This is why people buy commercial off-the-shelf (COTS) software and then configure it: they can then point to the COTS vendor and say "he vouches for the software." Auditors already versed in the COTS solution will then look at examples of your configuration to see if it's sufficient, then move on.

    Sure, it's a nice intellectual exercise, and certainly worthy of development by a dedicated team willing to tackle all of the issues around securing the data, providing secure authentication and controls, proving non-repudiation and temporal consistency, etc. None of that can be achieved by a one-man show or a half-assed token effort.

    Really, it boils down to this: do you want to roll the dice on your company being under a consent decree from the DoJ because you were too cheap to buy a system? That cost can shutter your doors.

    -BA

  • by fimbulvetr ( 598306 ) on Wednesday August 01, 2007 @09:08AM (#20069437)
    It's not about "deleting" the data, it's about trusting the data you have. Just like there's a difference between disinformation and no information - though I admit there should be procedures in place to keep said things away from your super powerful 1200 watt coffee maker.
  • by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Wednesday August 01, 2007 @09:48AM (#20069921) Homepage Journal

    Append only files have not been required in my experience. What is required is that there be no ability to overwrite a previously written file by the team that is sending the log data.
    The one is a way to get the other... at least partially. Physical and electronic security and partitioning of roles gets you the rest of the way.

    This can be done a number of ways, but the easiest method is to transmit the data in a way that the server chooses the filename, not the client.
    I'm not sure how filenames enter into it, since you don't give the application people access to the log host anyway.

    syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed logins log is much more difficult, as is the wtmp data.
    There is a "syslog Working Group" that's working on that and other problems. I don't know if syslog-ng supports any of their proposals yet, though.

    As to writing periodically to optical media, I wouldn't worry quite so much about that.
    It's very important to be able to talk about your risk exposure profile. When you can say that the exposure to electronic subversion of the logs (regardless of how hard you make that, via electronic security) ends when the data is written to optical disk, you can make a much stronger case for the data being functionally write-once.

    I would instead worry more about encrypting all that security data while in network transit.
    That would only be necessary if you transmit sensitive data in the logs. For example, if a healthcare company wrote client data to their in-house application server's logs, then the logs would have to be subject to the same security constraints as every other piece of sensitive data. This is as far as I know, but my PCI exposure is tangential, and I haven't read the requirements first-hand.

    Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server?
    Typically, you don't care (as long as you have the valid entries, any invalid ones are typically just noise), but sometimes you do. I'm not aware of any PCI requirement for authentication in general, but for some purposes it may be there. I do think that syslog-ng might provide a means of minimal authentication, but I'm not certain about that.

    When working with PCI, know which DSS you are on, 1.0 or 1.1.
    A fair point.
  • by alcourt ( 198386 ) on Wednesday August 01, 2007 @12:01PM (#20072165)

    Append only files have not been required in my experience. What is required is that there be no ability to overwrite a previously written file by the team that is sending the log data.

    The one is a way to get the other... at least partially. Physical and electronic security and partitioning of roles gets you the rest of the way.

    Agreed. I just find that the lack of support for append only files makes it hard to use as a solution on most platforms.

    This can be done a number of ways, but the easiest method is to transmit the data in a way that the server chooses the filename, not the client.

    I'm not sure how filenames enter into it, since you don't give the application people access to the log host anyway.

    The solution I'm familiar with receives a datastream and writes to a file. If I allowed the sender to select a filename to write, they could hypothetically corrupt, or worse, delete log data. It's also a little easier than most solutions to set up secure transmission of the data (a bare-bones sketch of the server-side naming follows this comment).

    syslog works for most data, but not all. Linux is one of the only Unix based systems that puts sulog through syslog. The failed logins log is much more difficult, as is the wtmp data.

    There is a "syslog Working Group" that's working on that and other problems. I don't know if syslog-ng supports any of their proposals yet, though.

    I don't see how this can help. The issue isn't so much how to handle data that has gone into the syslog stream, but how to grab critical log data that doesn't normally enter the syslog mechanism in the first place. Maybe I am missing something? I am, however, interested in hearing more about the working group, especially if they are likely to be able to update the standards so that the commercial Unix vendors will be able to seriously implement an improved syslog. Sun's comment on why Solaris 10 didn't have a better syslog was that they wanted to, but felt bound by POSIX. True or not, there is at least the impression among some of the vendors that they aren't allowed to ship a better syslog by default. Replacing every single box's syslog would be problematic in larger shops.

    As to writing periodically to optical media, I wouldn't worry quite so much about that.

    It's very important to be able to talk about your risk exposure profile. When you can say that the exposure to electronic subversion of the logs (regardless of how hard you make that, via electronic security) ends when the data is written to optical disk, you can make a much stronger case for the data being functionally write-once.

    I wouldn't say the risk ends, just that the risk for modification effectively ends. But I very much agree, one needs to look at the threat profile.

    I would instead worry more about encrypting all that security data while in network transit.

    That would only be necessary if you transmit sensitive data in the logs. For example, if a healthcare company wrote client data to their in-house application server's logs, then the logs would have to be subject to the same security constraints as every other piece of sensitive data. This is as far as I know, but my PCI exposure is tangential, and I haven't read the requirements first-hand.

    Since the logs in question are security logs, my inclination is to always encrypt them, just in case sensitive data does end up in them. It need not be true credit card data; it could be other items that increase attack exposure. Knee-jerk? Maybe. One thing to think about, though: the security logs of one box may not be critical enough to justify encrypting, but the security logs of lots of systems together may be that sensitive.

    Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server?
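
    A bare-bones illustration of the server-chooses-the-filename point (hypothetical spool directory and port; a real collector would add TLS, rate limits, and rotation). The sender only supplies a byte stream; the receiver derives the path from the peer address and its own clock, so nothing the client sends can name, overwrite, or delete a file:

        import os
        import socketserver
        from datetime import datetime, timezone

        SPOOL = "/var/spool/central-logs"  # hypothetical collector directory

        class LogReceiver(socketserver.StreamRequestHandler):
            def handle(self):
                # The server, not the client, picks the file name:
                # peer IP plus a UTC date string.
                host = self.client_address[0]
                day = datetime.now(timezone.utc).strftime("%Y%m%d")
                path = os.path.join(SPOOL, "%s-%s.log" % (host, day))
                with open(path, "ab") as f:  # append-only on the server side
                    for line in self.rfile:
                        f.write(line)

        if __name__ == "__main__":
            socketserver.TCPServer(("0.0.0.0", 1514), LogReceiver).serve_forever()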

  • by einhverfr ( 238914 ) <chris.travers@g m a i l.com> on Wednesday August 01, 2007 @12:11PM (#20072409) Homepage Journal

    Append only files have not been required in my experience. What is required is that there be no ability to overwrite a previously written file by the team that is sending the log data. This can be done a number of ways, but the easiest method is to transmit the data in a way that the server chooses the filename, not the client. Add a date string into the filename and you can (with a few other details I've worked out but am here waving a wand at) avoid the problem.

    Sure, and append-only is not foolproof. It is just one step in the right direction. Defense in depth.

    syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed logins log is much more difficult, as is the wtmp data. wtmp data is especially annoying, as it is one of the only ways to semi-reliably record both login and logout regardless of login type (including ssh), and it can't really handle real-time data streaming. The other annoying item is the command-line history of all commands with EUID 0. I'm hoping to hear some news soon on a solution to that problem, but it is really difficult, especially since a lot of SAs become root via `sudo -s` or `su` (as opposed to `su -`), which would not modify their HISTFILE variable. Many root shells do not support direct sending of HISTFILE over the network.

    Allowing multiple people to log in as root (through su or otherwise) violates the PCI-DSS requirements according to my reading. In my view the *only* acceptable option is to either limit root to one person (for small environments) or require that everyone use sudo exclusively for executing root commands.

    As to writing periodically to optical media, I wouldn't worry quite so much about that. I would instead worry more about encrypting all that security data while in network transit. (Sorry, can't recall if that is a firm requirement of PCI-DSS 1.1 or not.) Unfortunately, this makes use of syslog a less trivial solution. Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server? Traditionally, syslog has not concerned itself with such issues, but a PCI system may care a great deal.

    PCI-DSS only requires that certain information (useful in creating credit card transactions) is encrypted in transit and that this information may *not* be stored in the logs. So logs are not considered to be sensitive enough to require encryption. However, if you need to do this, there are a number of options including IPSec (which will give you the host-based security controls).

    Once the data is on the central logging host, it is already in a state where the author of the data (the SAs for the PCI-impacted box) cannot modify it. That eliminates, at least in the interpretation of PCI I've been working on, the need for writing to optical media. Immutable is not so much immutable by anyone as immutable by the server in question.

    Agreed. The main purpose is to prevent an attacker from covering his tracks by screwing with the logs. The PCI-DSS is largely a standard to ensure that people who are processing credit card transactions are not storing overly sensitive data, are using encryption appropriately for somewhat sensitive data that they must retain, and are generally following industry-accepted best security practices.

    The point of the central copy of the logs is so that modification on either side can be readily detected and investigated. But if you cannot trust your central log host to have an accurate copy of the logs, because you are receiving log data from anyone who chooses to pretend they are your PCI-impacted server, then your central log host does not give you as much value as it may seem. The audit requirements aren't just for making lives miserable; they usually have a valid point behind them (a sketch of one cheap authenticity check follows this comment).

    When working with PCI, know which DSS you are on, 1.0 or 1.1.
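
    On the authenticity question, one low-tech option (a sketch assuming a pre-shared key per authorized host; not anything PCI mandates) is to have each sender tag its lines with an HMAC, so the collector can drop events that merely claim to come from a given box:

        import hashlib, hmac

        # Hypothetical pre-shared keys, one per authorized log source.
        HOST_KEYS = {"web01": b"key-from-secure-channel-1",
                     "db01": b"key-from-secure-channel-2"}

        def tag_line(host: str, line: str) -> str:
            """Sender side: append an HMAC of the line under the host's key."""
            mac = hmac.new(HOST_KEYS[host], line.encode(), hashlib.sha256)
            return "%s %s %s" % (host, line, mac.hexdigest())

        def verify_line(tagged: str) -> bool:
            """Collector side: recompute the HMAC before accepting the event."""
            host, rest = tagged.split(" ", 1)
            line, mac = rest.rsplit(" ", 1)
            if host not in HOST_KEYS:
                return False
            want = hmac.new(HOST_KEYS[host], line.encode(),
                            hashlib.sha256).hexdigest()
            return hmac.compare_digest(want, mac)

        assert verify_line(tag_line("web01", "sshd: failed login for root"))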
