DSS/HIPAA/SOX Unalterable Audit Logs?
analogrithems writes "Recently I was asked by one of the suits in my company to come up with a method to comply with the new PCI DSS policy that requires companies to have write-once, read-many logs. In short, the requirement is for a secure method to make sure that once a log is written it can never be deleted or changed. So far I've only been able to find commercial and hardware-based solutions. I would prefer to use an open source solution. I know this policy is already part of HIPAA and soon to be part of SOX. It seems like there ought to be a way to do this with cryptography and checksums to ensure authenticity. Has anyone seen or developed such a solution? Or how have you achieved compliance?"
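The "cryptography and checksums" angle the poster mentions usually means a hash chain: each entry's digest covers the previous entry's digest, so deleting or editing any record breaks every later link. This is only a minimal sketch (in-memory, SHA-256, no signing key), not a compliance product:

```python
import hashlib
import time

GENESIS = "0" * 64  # placeholder digest for the first entry

def append_entry(chain, message):
    """Append a log entry whose hash covers the previous entry's hash,
    so any later edit or deletion breaks every subsequent link."""
    prev = chain[-1]["hash"] if chain else GENESIS
    entry = {"ts": time.time(), "msg": message, "prev": prev}
    entry["hash"] = hashlib.sha256(
        (entry["prev"] + str(entry["ts"]) + entry["msg"]).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every digest; any tampering shows up as a broken link."""
    prev = GENESIS
    for e in chain:
        if e["prev"] != prev:
            return False
        digest = hashlib.sha256(
            (e["prev"] + str(e["ts"]) + e["msg"]).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Note this only makes tampering detectable, not impossible: whoever controls the box can still rebuild the whole chain, which is why the periodic anchoring to external media discussed below still matters.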
Write them to a DVD jukebox (Score:3, Interesting)
How sure do you need to be? (Score:4, Interesting)
Another cheap write-once medium is paper; I suppose you could purchase a laser printer (or even a line printer), and have it spit out the logs as they occur. If you kept the printer in a locked transparent box, nobody but people with the keys would have access to the output.
You could burn the logs onto PROMs as well; that's pretty permanent.
Anything on magnetic or flash media can be erased or tampered with somehow, unless the drive controller hardware itself prohibited overwriting existing data. Even then you're relying on someone not being able to replace the drive controller or take the drive apart and diddle the platters/flash chips directly (although I suppose a decent amount of epoxy could thwart this). Any software-based solution can be tampered with in theory. One hacker favorite (which may be a legend or not) is that people used to get root on other people's boxes and then replace their copy of PGP with an instrumented copy. Thus, even the encryption software became compromised.
For compliance, though, I'm not sure what kind of oversight you have to have. At the end of the day, somebody has to be trusted with these logs, and that person would almost assuredly have the power to destroy them, or at least portions of them.
One-way data cable (Score:5, Interesting)
It works with very high reliability up to about 9600 baud.
You may be able to use this to your benefit. Have an isolated system air-gapped from the rest of the network which listens for log events on a one-way data cable. You're still not guaranteed to be safe (if a logging PC is compromised, an attacker could send corrupted data to the syslog PC and perhaps cause some sort of mayhem), but the lack of a return path makes interactive attacks infeasible.
ObDisclosure: I am a graduate student at UI and know the guy who invented the data cable, although I am not associated with the gadget.
Re:use a line printer (Score:3, Interesting)
Personally I'd think about a hardware solution, block replication off-site to a third party registry. When you're talking compliance (especially fiduciary compliance) it's usually easy to come up with the bucks, so dream up something right and propose it.
Why not store it in a version control system (Score:3, Interesting)
Guy Fawkes Protocol (Score:5, Interesting)
http://www.cl.cam.ac.uk/~rja14/Papers/fawkes.pdf [cam.ac.uk]
From the paper:
6.2 Tamper-evident audit trails
It is a well known problem that an intruder can often acquire root status by using well known operating system weaknesses, and then alter the audit and log information to remove the evidence of the intrusion. In order to prevent this, some Unix systems require that operations on log and audit data other than reads and appends be carried out from the system console. Others do not, and it could be of value to arrange alternative tamper-evidence mechanisms.
A first idea might be to simply sign and timestamp the audit trail at regular intervals, but this is not sufficient as a root intruder will be able to obtain the private signing key and retrospectively forge audit records. In addition, the intervals would have to be small (of the order of a second, or even less) and the computation of RSA or DSA signatures at this frequency could impose a noticeable system overhead.
In this application, the Guy Fawkes protocol appears well suited because of the low computational overhead (two hash function computations per signature) and the fact that all secrets are transient; this second's secret codeword is no use in forging a signature of a second ago.
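A rough Python sketch of the two-hash-per-signature idea from the paper: each record is MACed with the current interval's secret, which also authenticates a commitment to the next secret. This is simplified for illustration; in the real protocol the secret is revealed only after the signature has been widely witnessed, whereas here it is revealed in the same record:

```python
import hashlib
import hmac
import os

def h(b):
    return hashlib.sha256(b).digest()

class FawkesSigner:
    """Each interval uses a fresh one-time secret; a stolen current
    secret is no use for forging signatures from past intervals."""
    def __init__(self):
        self.secret = os.urandom(32)        # this interval's codeword
        self.commitment = h(self.secret)    # published in advance

    def sign(self, message: bytes):
        next_secret = os.urandom(32)
        # MAC covers the message and the commitment to the next secret.
        sig = hmac.new(self.secret, message + h(next_secret),
                       hashlib.sha256).digest()
        # Simplification: the real protocol delays revealing self.secret.
        record = (message, h(next_secret), sig, self.secret)
        self.secret = next_secret
        return record

def verify(commitment, record):
    """Returns the next commitment if the record checks out, else None."""
    message, next_commitment, sig, revealed = record
    if h(revealed) != commitment:           # secret must match prior commitment
        return None
    expected = hmac.new(revealed, message + next_commitment,
                        hashlib.sha256).digest()
    return next_commitment if hmac.compare_digest(sig, expected) else None
```

Verification walks the chain of commitments forward, so an intruder who grabs root this second cannot retroactively forge last second's records.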
Re:Syslog (Score:3, Interesting)
i.e. Nothing really.
However, if you have the CD or tapes signed and dated by the ops staff, then shipped to off site security, you've made it harder to falsify.
The interesting issue is that if an attacker is organized enough, what's to stop them from intercepting the messages on the way to the printer / CD-R?
The only way I could see around this is some kind of trusted computing style initiative.
Re:Syslog (Score:3, Interesting)
Re:Write them to a DVD jukebox (Score:3, Interesting)
I really can't dig up the link, it was years ago that I read this and Google ain't cooperating right now, but I recall that whereas a recording pause could waste up to an entire track (once around the disc) with DVD-R, a DVD+R recorder would waste at most one sector (on the order of a few Kbytes).
Re:Write them to a DVD jukebox (Score:2, Interesting)
Compliance is risk-avoidance, don't be cheap. (Score:3, Interesting)
Basically, you weigh the cost of non-compliance versus compliance, figure out what that risk is worth to your business, then try to spend as little as possible to mitigate the risk until the cost is acceptable.

There is no such thing as 100% compliance or security. Oracle makes a big deal out of their data vault tech, but there's someone out there who can circumvent it. You need to figure out your comfort level for the risk, and in big corps, this is a financial decision.

Which leads me to this: there is no "roll your own" compliance software. You do not want to assume the responsibility of proving to auditors that your software is correct and fully functional. That is a difficult process, and it will make your dev team crazy with paperwork. This is why people buy commercial off-the-shelf (COTS) software and then configure it: they can point to the COTS vendor and say "he vouches for the software." Auditors already versed in the COTS solution will then look at examples of your configuration to see if it's sufficient, then move on.

Sure, it's a nice intellectual exercise, and certainly worthy of development by a dedicated team willing to tackle all of the issues around securing the data, providing secure authentication and controls, proving non-repudiation and temporal consistency, etc., all of which a one-man show cannot achieve, and all of which a half-assed token effort cannot achieve.

Really, it boils down to this: do you want to roll the dice on your company being under a consent decree from the DoJ because you were too cheap to buy a system? That cost can shutter your doors.
-BA
Re:But delete is still easy to do (Score:3, Interesting)
Re:Write them to a DVD jukebox (Score:3, Interesting)
Re:Write them to a DVD jukebox (Score:3, Interesting)
Append only files have not been required in my experience. What is required is that there be no ability to overwrite a previously written file by the team that is sending the log data.
The one is a way to get the other... at least partially. Physical and electronic security and partitioning of roles gets you the rest of the way.
Agreed. I just find that the lack of support for append only files makes it hard to use as a solution on most platforms.
This can be done a number of ways, but the easiest method is to transmit the data in a way that the server chooses the filename, not the client.
I'm not sure how filenames enter into it, since you don't give the application people access to the log host anyway.
The solution I'm familiar with receives a datastream and writes to a file. If I allowed the sender to select a filename to write, they could hypothetically corrupt or worse, delete log data. It's a little easier to set up than most solutions to transmit the data securely.
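The server-chooses-the-filename idea above can be sketched with a simple TCP receiver; the directory, port, and naming scheme here are all made up for illustration, and the file is opened in append mode so a compromised sender can neither name nor truncate an existing file:

```python
import datetime
import os
import socketserver

LOG_DIR = "/var/log/central"  # hypothetical collection directory

def dest_path(host, when=None):
    """The server, not the client, derives the destination filename
    from the peer address and the current date."""
    when = when or datetime.date.today()
    return os.path.join(LOG_DIR, f"{host}-{when.strftime('%Y%m%d')}.log")

class LogReceiver(socketserver.StreamRequestHandler):
    """Appends whatever the client streams to a server-named file;
    the client has no say in the path and can only ever append."""
    def handle(self):
        host, _port = self.client_address
        with open(dest_path(host), "a", encoding="utf-8") as f:
            for line in self.rfile:
                f.write(line.decode("utf-8", "replace"))

# To run (blocks forever):
#   with socketserver.ThreadingTCPServer(("0.0.0.0", 5140), LogReceiver) as srv:
#       srv.serve_forever()
```

A production setup would add TLS and per-host authentication on top, but the point stands: overwrite protection falls out of who controls the namespace.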
syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed logins log is much more difficult, as is the wtmp data.
There is a "syslog Working Group" that's working on that and other problems. I don't know if syslog-ng supports any of their proposals yet, though.
I don't see how this can help. The issue isn't so much how to handle data that has gone into the syslog stream, but how to grab critical log data that doesn't normally enter the syslog mechanism in the first place. Maybe I am missing something? I am, however, interested in hearing more about the working group, especially if they are likely to be able to update the standards so that the commercial Unix vendors will be able to seriously implement an improved syslog. Sun's comment on why Solaris 10 didn't have a better syslog was that they wanted to, but felt bound by POSIX. True or not, there is at least the impression among some of the vendors that they aren't allowed to use a better syslog by default. Replacing every single box's syslog would be problematic in larger shops.
As to writing periodically to optical media, I wouldn't worry quite so much about that.
It's very important to be able to talk about your risk exposure profile. When you can say that the exposure to electronic subversion of the logs (regardless of how hard you make that, via electronic security) ends when the data is written to optical disk, you can make a much stronger case for the data being functionally write-once.
I wouldn't say the risk ends, just that the risk for modification effectively ends. But I very much agree, one needs to look at the threat profile.
I would instead worry more about encrypting all that security data while in network transit.
That would only be necessary if you transmit sensitive data in the logs. For example, if a healthcare company wrote client data to their in-house application server's logs, then the logs would have to be subject to the same security constraints as every other piece of sensitive data. This is as far as I know, but my PCI exposure is tangential, and I haven't read the requirements first-hand.
Since the logs in question are security logs, my inclination is to always encrypt them just in case sensitive data does enter it. It need not be true credit card data, it could be other items that increase attack exposure. Knee-jerk? Maybe. One thing to think about though, the security logs of one box may not be critical enough to justify encrypting, but the security logs of lots of systems together may be that sensitive.
Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server?
Re:Write them to a DVD jukebox (Score:3, Interesting)
Append-only files have not been required in my experience. What is required is that there be no ability to overwrite a previously written file by the team that is sending the log data. This can be done a number of ways, but the easiest method is to transmit the data in a way that the server chooses the filename, not the client. Add a date string into the filename and you can (with a few other details I've worked out but am here waving a wand at) avoid the problem.
Sure, and append-only is not foolproof. It is just one step in the right direction. Defense in depth.
syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed logins log is much more difficult, as is the wtmp data. wtmp data is especially annoying, as it is one of the only ways to semi-reliably record both login and logout regardless of login type (including ssh), and it can't really handle real-time data streaming. The other annoying item is the command line history of all commands with EUID 0. I'm hoping to hear some news soon on a solution to that problem, but it is really difficult, especially since a lot of SAs become root via `sudo -s` or `su` (as opposed to `su -`, which would not modify their HISTFILE variable). Many root shells do not support direct sending of HISTFILE over the network.
Allowing multiple people to log in as root (through su or otherwise) violates the PCI-DSS requirements according to my reading. In my view the *only* acceptable option is to either limit root to one person (for small environments) or require that everyone use sudo exclusively for executing root commands.
As to writing periodically to optical media, I wouldn't worry quite so much about that. I would instead worry more about encrypting all that security data while in network transit. (Sorry, can't recall if that is a firm requirement of PCI DSS 1.1 or not.) Unfortunately, this makes use of syslog a less trivial solution. Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server? Traditionally, syslog has not concerned itself with such issues, but a PCI system may care a great deal.
PCI-DSS only requires that certain information (useful in creating credit card transactions) is encrypted in transit and that this information may *not* be stored in the logs. So logs are not considered to be sensitive enough to require encryption. However, if you need to do this, there are a number of options including IPSec (which will give you the host-based security controls).
Once the data is on the central logging host, it is already in a state that the author of the data (the SAs for the PCI impacted box) cannot modify it. That eliminates at least in the interpretation of PCI I've been working on, the need for writing to optical media. Immutable is not so much immutable by anyone, but immutable by the server in question.
Agreed. The main purpose is to prevent an attacker from covering his tracks by screwing with the logs. The PCI-DSS is largely a standard to ensure that people who are processing credit card transactions are not storing overly sensitive data, are using encryption appropriately for somewhat sensitive data that they must retain, and are generally following industry-accepted best security practices.
The point of the central copy of the logs is so that modification on either side can be readily detected and investigated. But if you cannot trust your central log host to have an accurate copy of the logs because you are receiving log data from anyone who chooses to pretend they are your PCI impacted server, then your central log host does not give you as much value as it may seem. The audit requirements aren't just for making lives miserable, they usually have a valid point behind them.
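The "anyone who chooses to pretend they are your PCI impacted server" problem above is exactly what a per-host MAC key solves: the central log host only accepts an event if its tag verifies under the key of the host it claims to come from. A minimal sketch, with hypothetical pre-shared keys provisioned out of band:

```python
import hashlib
import hmac

# Hypothetical per-host shared keys, provisioned out of band.
HOST_KEYS = {
    "web01": b"k" * 32,
    "db01": b"q" * 32,
}

def tag_event(host, event: bytes) -> bytes:
    """Sender attaches an HMAC computed with its own host key."""
    return hmac.new(HOST_KEYS[host], event, hashlib.sha256).digest()

def accept_event(claimed_host, event: bytes, tag: bytes) -> bool:
    """Central log host accepts the event only if the tag verifies
    under the key of the host the event claims to come from."""
    key = HOST_KEYS.get(claimed_host)
    if key is None:
        return False
    expected = hmac.new(key, event, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

This doesn't replace transport security or the write-once storage discussion; it just closes the specific hole of forged sender identity on the central host.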
When working with PCI, know which DSS you are o