DSS/HIPPA/SOX Unalterable Audit Logs?
analogrithems writes "Recently I was asked by one of the suits in my company to come up with a method to comply with the new PCI DSS policy that requires companies to have write-once, read-many logs. In short, the requirement is for a secure method to make sure that once a log is written it can never be deleted or changed. So far I've only been able to find commercial and hardware-based solutions; I would prefer to use an open source solution. I know this policy is already part of HIPPA and soon to be part of SOX. It seems like there ought to be a way to do this with cryptography and checksums to ensure authenticity. Has anyone seen or developed such a solution? Or, how have you achieved compliance?"
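The "cryptography and checksums" idea the submitter mentions is usually built as a hash chain: each log entry is stored alongside a digest covering the previous digest plus the entry, so altering or deleting any entry breaks every digest after it. A minimal sketch in Python (the seed value and record layout are invented for illustration):

```python
import hashlib

def chain_logs(entries, seed=b"log-chain-seed"):
    """Return (entry, hex_digest) pairs where each digest
    covers the previous digest plus the current entry."""
    prev = hashlib.sha256(seed).digest()
    out = []
    for entry in entries:
        digest = hashlib.sha256(prev + entry.encode()).digest()
        out.append((entry, digest.hex()))
        prev = digest
    return out

def verify_chain(pairs, seed=b"log-chain-seed"):
    """Recompute the chain; any altered or removed entry breaks it."""
    prev = hashlib.sha256(seed).digest()
    for entry, hexdigest in pairs:
        digest = hashlib.sha256(prev + entry.encode()).digest()
        if digest.hex() != hexdigest:
            return False
        prev = digest
    return True
```

The chain only proves anything if the latest digest is anchored somewhere the log host's admins can't reach - a printout, another team's server, or write-once media.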
Go with commercial hardware solution (Score:5, Informative)
EMC's Centera [emc.com] is my personal favorite. It isn't cheap, but it does exactly what you need, and it is auditable and recognized by all the third-party audit companies as well as the Federal government.
I have worked in IT for 15 years, and 5 of those have been for a LARGE financial institution. When it comes to audit and SOX, go with something standard, tested, and commercial, unless you want to spend the next 6 months explaining to your auditors how your homegrown solution works and then the next 6 months building something new that your auditors do understand (or worse, losing your job).
WORM Device (Score:3, Informative)
Technically, a CD-R with some checksumming would be compliant - these guys [am-utils.org] have some more info - but if you need it for formal compliance, you are better off talking to your friendly neighbourhood storage vendor to save you lots of legal hassle should you ever need the WORM data for evidence. It is the difference between a lengthy legal process where you have to explain exactly why your homebrew solution is legal, and simply saying "talk to NetApp".
Clicky (Score:3, Informative)
WORM on wiki [wikipedia.org]
From Experience (Score:5, Informative)
Unalterable logs as a matter of compliance does not mean "absolutely unalterable under any circumstances". There should be no way for an end user to modify audit trails. There should be no preconceived way for an administrator to alter audit trails - i.e. no utilities for doing so. That does not mean that an admin can't go directly into the DB and alter the data from behind the application layer.
Every time I have run into audit logs involving HIPAA compliance, they have been written by an application directly into a SQL database (Oracle, MS SQL, Informix, and in one case DB2). It used to be that they were written in a fairly easy-to-decipher format within a single text column, one per record - which made for a fairly-difficult-to-alter audit trail, because embedded in that format were non-printable characters that you would at least have to know to look for. With current implementations, however, the records are stored in a separate table with a many-to-one relationship to the audit-required records, in varchar fields, as plain text - much easier to alter, or to get rid of single entries. There is still a level of obfuscation in the table and column names, but that's really a side effect of other things that are going on.
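The layout described above - a separate audit table, many-to-one against the audited records - can be sketched with SQLite (all table and column names here are invented; real systems obfuscate theirs, as the parent notes). A pair of triggers provides the "no preconceived way to alter" property at the database layer, though a DBA can of course drop the triggers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient_record (id INTEGER PRIMARY KEY, data TEXT);
-- many-to-one: several audit rows per audited record
CREATE TABLE audit_trail (
    id        INTEGER PRIMARY KEY,
    record_id INTEGER REFERENCES patient_record(id),
    action    TEXT,
    actor     TEXT,
    at        TEXT DEFAULT CURRENT_TIMESTAMP
);
-- no application-layer path to rewriting history
CREATE TRIGGER audit_append_only BEFORE UPDATE ON audit_trail
BEGIN SELECT RAISE(ABORT, 'audit rows are append-only'); END;
CREATE TRIGGER audit_no_delete BEFORE DELETE ON audit_trail
BEGIN SELECT RAISE(ABORT, 'audit rows cannot be deleted'); END;
""")
conn.execute("INSERT INTO patient_record (data) VALUES ('...')")
conn.execute(
    "INSERT INTO audit_trail (record_id, action, actor) "
    "VALUES (1, 'create', 'jdoe')"
)
```

This is exactly the "back door" scenario: anyone with direct DB access can disable the triggers first, which is why it only counts as one layer of the onion.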
These systems have been reviewed by auditors and certified as compliant. In the older system, there was no application interface to delete audit records. In the newer system, there is an application interface to delete records in any given application table - and therefore there is one for the audit tables as well. Admin level access is required to delete or alter the records, though.
Personally, I would expect more as far as HIPAA compliance goes - from both a customer standpoint and an auditor standpoint. My experience (and it is pretty extensive across several high-profile enterprises) is that the customer will demand a better system only when the auditors demand a better system. I haven't yet run into an auditor who has given more than a casual glance at the 'back door' scenario. I suppose it's because there is no true way to keep things absolutely secure, and application-level audit log security is only one layer of the onion.
Before you get too far into an overly complex and potentially expensive solution, talk with your auditors about the requirements for your specific scenarios. They've seen it before and can tell you exactly what they are looking for from an audit compliance standpoint. They are usually pretty easy to work with and open with their knowledge.
Re:Syslog (Score:5, Informative)
Re:FreeBSD to the rescue (Score:3, Informative)
If the root user can set that attribute, he can just as easily unset it, modify the data, and clean up after himself before re-setting it.
Remotely spitting your logs out to a line printer managed by a trusted 3rd party would seem to be a reasonable solution.
Re:From Experience (Score:3, Informative)
Financial company I know passed audit fine with syslog -> a secure system which the normal sysadmins didn't have access to. The people whose actions were being logged couldn't get to the logs (well, presumably someone could break the system, but it was well secured and had non-overlapping sysadmin staff).
That was good enough. As long as it took two compromised people to hide any given event, that passed audit.
Re:Dont skimp... two other things. (Score:4, Informative)
The second thing is, compliance is (ridiculously) complex - the compliance vendors have spent many hours with lawyers getting it together; they know the requirements and they know they fulfill them - this is important. It also means their solutions come with an implicit warranty - "hey, you're using NetApp WORM, we know it works" as opposed to "what software is that? how do you know it works?". At the end of the day a lawyer is either going to say "well, I can't argue with the compliance solution" when you're with a well-known vendor, or "your honor, the defendant is using..."
Compliance is the only time I will say to someone: "get a throat to cut". Get a solution you know works, written by people who know what they are doing. It's all because compliance requirements were written by lawyers for lawyers (i.e. scum), and so their scum is going to make you have to act like scum.
Re:Don't Build Your Own Device (Score:4, Informative)
No such thing exists. Given enough time and a mediocre amount of money, I'm 100% certain I could alter anything you're storing your information on and make it look real.
The toughest system I've ever seen as far as audit trails go uses CD-Rs in a machine that makes a hash of the data on the CD-R AND reads the serial number on the CD, and stores both on a geographically separate CD-R system. It's similar to those automated CD turnstile things you can buy, only beefy, with steel casing and alarms on it and whatnot.
Re:Write them to a DVD jukebox (Score:5, Informative)
They are not very good at tasks that involve writing a lot in small increments, like a log. The sector size is quite big, so if you guarantee that each log entry is physically committed to disc - no caching until the sector is full - the disc will be eaten in no time.
You would probably need a custom writer/reader (most normal ones cannot alter the sector size) and custom-formatted media, along with something other than isofs. Not rocket science really, but definitely beyond the scope of DIY.
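A rough back-of-the-envelope illustrates the point. Assuming a 4.7 GB single-layer DVD and a 32 KiB minimum physical write unit (both figures are assumptions for illustration, not from the post), one immediately-synced log entry per write unit exhausts the disc fast:

```python
# Assumed figures: 4.7 GB disc, 32 KiB minimum physical write unit.
disc_bytes = 4_700_000_000
block_bytes = 32 * 1024

blocks = disc_bytes // block_bytes   # usable write units on the disc
# Worst case: one small, immediately-synced log entry per block,
# so the whole block is consumed by a single line of log text.
entries_per_disc = blocks
print(entries_per_disc)              # ~143,000 entries per 4.7 GB disc
```

A busy syslog host can emit that many lines in well under a day, which is why buffering entries until a full sector (or burning periodic batches) is the practical approach.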
Re:How odd (Score:3, Informative)
Not if they have done it properly. If it is designed as an audit solution, it is likely to have a hardware crypto module and a device-specific key, and to have all data written out to the disks at least signed with it - more likely, encrypted with it. In either case, even if the fs is standard, you cannot do jack sh*** with it after taking the drives out.
By the way, implementing the above using OSS is trivial, as all free OSes nowadays provide a TPM API, so you can have unique machine keys. In fact you can implement this on top of any Free OS and integrate it with any standard MTA and most applications with minimal effort. The implementation would also most likely pass audit scrutiny, as it is trivial. The only sticking point will be the crypto procedures, and especially escrow. While proving that the app and the design are compliant is not hard, proving that your CA procedures are solid is a phenomenal pain in the a***. Also, you need to prove that you have effective escrow, and that taking a hammer to the log machine does not prevent reading the compliance logs later on. The vendor has already done that, and the auditors are happy. Compared to that, it will take you on average 4-6 months to get this done with the help of external consultants. Now, if you have done it anyway for a different project, that is an entirely different ball game. You always have to prove to auditors that your app does what it says on the tin anyway, and the apps are often internal. So one more or one less item is not going to turn the boat if the main sticking points (the CA and the escrow) have already been done.
Re:How odd (Score:4, Informative)
Centeras don't count for the original poster, who asked for a 'cheap solution'. They're not all that expensive by 'enterprise standards', but that's... well, not quite the same as 'affordable for most people'.
Also, our data centre is under fairly intensive scrutiny and control of physical access. My employer and customer are well aware that physical access means all bets are off, so in order to get physical access you need escorting, and authorization in advance, including documentation of what you're changing, why, and which grid squares in the datacentre you need access to.
I and the rest of my team, the admins on this Centera, don't get access to the datacentre. If we have a need to enter, then we can fill in the paperwork and do so, but... well, we're based 100 miles away. Most 'hands on' work is done by someone else.
Now, combine that with the fact that each 'clip' (file) is stored 4 times, on 4 separate physical devices (2 of each, on 2 different sites), and it would require... well, quite a few people to be complicit to even be able to destroy (or tamper with) data physically. And a hell of a lot more to do so without leaving great big footprints all over the place screaming to the world what you've done.
I think you'd need 2 people on each site (one to actually tamper, and one to 'not notice' as he was escorting), plus an admin person offsite to identify which drives need 'doing', on both sites, and to mess with the 'self healing' replication so that one site didn't just restore the other. (You'd have to be fairly quick on the drives too, as soon as one goes down, the healing starts to replicate to other 'spare' drives).
And then you'd need some other people to mess with the entry logs to site, CCTV footage, change authorization....
You'd have to be pretty damn serious to pull that off. I mean, it's not even a case of some pointy haired one seeing their career on the line, and demanding immediate sabotage.
Re:Write them to a DVD jukebox (Score:5, Informative)
Re:Write them to a DVD jukebox (Score:5, Informative)
Re:use a line printer (Score:3, Informative)
All SHA1 being broken means is that it is easy to find a collision, i.e. 2 inputs that hash to the same value. If you are using it to verify the integrity of a file, then even if a collision is found, it's going to be plainly evident.
Though it's easy to find a collision, it is *impossible* to choose the content of that collision.
Where SHA1 being broken matters is when it is used for, say, obfuscating passwords. If a system is compromised and the cracker gets a list of password hashes, they can then generate from that list a set of plaintext sequences that would produce those hashes.
So in the former case, the cracker could find a plaintext matching the logfile's hash, but because that alternative plaintext would probably be a pseudo-random jumble of data, anyone who looks at the fake log file in any detail will instantly see that it is falsified. The cracker may as well create a false logfile and lie about the hash.
In the latter, it would allow the cracker to get a list of passwords that could be used to compromise his target systems much more quickly than he could have without them.
It's spelled H-I-P-A-A (Score:1, Informative)
Re:Write them to a DVD jukebox (Score:3, Informative)
Re:Write them to a DVD jukebox (Score:3, Informative)
Maybe I'm missing something, but wouldn't that be possible with unsigned data on any media? If you can obtain the media and a writer, and the data isn't authenticated somehow, you can always simply write a new version and toss the old. Unalterable does not mean impossible to destroy, just impossible to modify once written. Cryptographically signing data before writing it to any write-once medium (like DVD+R) would seem to solve this problem, because you'd need the signing key to "modify" it as you suggest.
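A sketch of that signing step, using an HMAC as a stand-in for a real asymmetric signature (with an HMAC the verifier must hold the same key, so in practice you'd use an RSA or Ed25519 signature from a non-exportable key; the key value and line format here are invented):

```python
import hmac
import hashlib

# Assumption: this key lives offline or in an HSM, not on the log host.
SIGNING_KEY = b"kept-offline-or-in-an-HSM"

def seal(entry: bytes) -> bytes:
    """Return the line actually written to the write-once medium."""
    tag = hmac.new(SIGNING_KEY, entry, hashlib.sha256).hexdigest()
    return entry + b" sig=" + tag.encode()

def check(line: bytes) -> bool:
    """Verify a sealed line; a re-burned disc without the key fails here."""
    entry, _, tag = line.rpartition(b" sig=")
    expected = hmac.new(SIGNING_KEY, entry, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)
```

An attacker who tosses the original DVD+R and burns a new one can write any bytes they like, but without the signing key the replacement lines will not verify.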
Re:use a line printer (Score:4, Informative)
Though it's easy to find a collision, it is *impossible* to choose the content of that collision.
In this case, "easy" means not utterly impossible to accomplish in a lifetime if you have unlimited funds.
The significance isn't that the new attack is "practical". It's more that given those results, the odds of an even better attack coming along in the next decade or two went up.
All the same, for a brand new application, why not just use SHA256? That's what Jon Callas meant by "walk, but not run, to the fire exits". No need to panic over data already protected by SHA1 or even to run around replacing all uses of SHA1 this instant, but if you're writing code anyway, why not choose a safer option?
As you say, you don't get to choose the collision, that's why it's not time to panic.
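For most code, "walk to the fire exits" really is a one-line change, which is why it's cheap advice to follow for a new design:

```python
import hashlib

data = b"Jan 1 00:00:01 gw sshd[42]: session opened"

# Legacy digest (collision attacks published in 2005):
legacy = hashlib.sha1(data).hexdigest()
# Drop-in replacement for new applications:
better = hashlib.sha256(data).hexdigest()

print(len(legacy), len(better))  # 40 hex chars vs 64
```

The only cost is the longer digest; anything storing fixed-width 40-character hash columns needs widening, which is easier to do up front than during a later migration.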
Re:Write them to a DVD jukebox (Score:5, Informative)
syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed-logins log is much more difficult, as is the wtmp data. wtmp data is especially annoying: it is one of the only ways to semi-reliably record both login and logout regardless of login type (including ssh), and it can't really handle real-time data streaming. The other annoying item is the command-line history of all commands run with EUID 0. I'm hoping to hear some news soon on a solution to that problem, but it is really difficult, especially since a lot of SAs become root via `sudo -s` or `su` (as opposed to `su -`), which does not reset their HISTFILE variable. Many root shells do not support sending HISTFILE directly over the network.
As to writing periodically to optical media, I wouldn't worry quite so much about that. I would instead worry more about encrypting all that security data while in network transit. (Sorry, I can't recall if that is a firm requirement of PCI DSS 1.1 or not.) Unfortunately, this makes syslog a less trivial solution. Authenticity is also an issue to be concerned with: how do you know that the event that got inserted into the log really came from that box, and not some random other server? Traditionally, syslog has not concerned itself with such issues, but a PCI system may care a great deal.
Once the data is on the central logging host, it is already in a state where the author of the data (the SAs for the PCI-impacted box) cannot modify it. That eliminates, at least in the interpretation of PCI I've been working with, the need for writing to optical media. Immutable is not so much immutable by anyone, but immutable by the server in question.
The point of the central copy of the logs is so that modification on either side can be readily detected and investigated. But if you cannot trust your central log host to have an accurate copy of the logs because you are receiving log data from anyone who chooses to pretend they are your PCI impacted server, then your central log host does not give you as much value as it may seem. The audit requirements aren't just for making lives miserable, they usually have a valid point behind them.
When working with PCI, know which DSS you are on, 1.0 or 1.1. (I don't know the release schedule for the next PCIDSS.) The requirements do differ, as do even the interpretations. Reference https://www.pcisecuritystandards.org/ [pcisecuritystandards.org] for the information.
Re:From Experience (Score:3, Informative)
Unalterable logs as a matter of compliance does not mean "absolutely unalterable under any circumstances". There should be no way for an end user to modify audit trails. There should be no preconceived way for an administrator to alter audit trails - i.e. no utilities for doing so. That does not mean that an admin can't go directly into the DB and alter the data from behind the application layer.
That's VERY important to keep in mind. A lot of the wailing, hype, and FUD around all of the various auditing and retention laws comes from people who do not understand that fundamentally absolutely ANY audit trail can be altered given sufficient determination and resources. Even if the logs are chiseled into stone slabs, it is not inconceivable that someone might produce a slab that is identical in every way except for a changed digit or two.
WORM media can be duplicated as well. Whole vaults of WORM media can be duplicated. If you save hashes of the data separately, the media containing the hashes can be swapped just like the main logs.
So making it absolutely impossible to alter data is out of the question; it's really a matter of how hard you can make it without bankrupting the company in the process (the cheapest solution is to fold the company: no company means no data means no alterations).
Dumping the lot to a line printer in real time AND storing to a log file is one answer. In some cases, just keeping the logfiles in ext[23] and setting the append only attribute may be enough, at least until enough has accumulated to burn another sector onto a WORM device.
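The append-only trick can be sketched with a small Python wrapper around chattr(1) (the log path is an example). Setting or clearing `+a` requires root with the CAP_LINUX_IMMUTABLE capability, which is exactly what the patched-kernel and lcap approaches elsewhere in the thread lock down:

```python
import subprocess

LOG = "/var/log/audit/app.log"  # example path

def make_append_only(path: str) -> None:
    """Set the ext2/ext3 append-only flag: writes may only extend the file.
    Requires root (CAP_LINUX_IMMUTABLE)."""
    subprocess.run(["chattr", "+a", path], check=True)

def append_entry(path: str, entry: str) -> None:
    """Mode "a" still works on a +a file; any attempt to truncate or
    rewrite earlier content fails with EPERM."""
    with open(path, "a") as f:
        f.write(entry + "\n")
```

The flag is no defense against root, as the sibling comments point out - it only raises the bar from "edit the file" to "clear the flag first, and leave a trace doing it".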
For a while, I ran a patched kernel that would allow the immutable or append only bits to be set by root but not cleared. Clearing the bits when necessary required booting into another kernel (which would trigger many alarms when the machine went down). Doing so was a regular procedure for maintenance, but that was scheduled and all admins were notified. An unscheduled event would NOT go un-noticed.
It's also useful to note that most of the requirements merely demand that fraud be detectable. That is, the data need not be unalterable so long as any alteration is detectable. It's MUCH easier to detect that data was changed than it is to allow for reconstruction of the original data. One viable scenario (given that fraud is a rare to non-existent event for the company) is to detect the alteration in the electronic data and then reconstruct the real data by following the paper audit trail.
Re:How sure do you need to be? (Score:3, Informative)
The trouble is, most times we figure out what the data might be worth to us, without taking into account what it might be worth to the bad guys. The opposite scenario is more likely: a company spends much more to protect a piece of data than it would ever be worth. In that case they are wasting money that would be better spent doing real work.
The best thing you can do with your cryptographic hashes is to have copies away from the actual logs. Make sure that the people who have access to the remote hashes are different than the people who have access to the logs. Then it takes at least two people working together to muck things up.
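A sketch of that two-person control: the log host produces a manifest of digests, and only the manifest is shipped to a second machine run by different people (hostnames and paths below are invented):

```python
import hashlib
import pathlib

def digest_logs(log_dir: str) -> str:
    """Produce a manifest of SHA-256 digests, one line per log file,
    to be shipped to a host the log admins cannot reach."""
    lines = []
    for path in sorted(pathlib.Path(log_dir).glob("*.log")):
        h = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{h}  {path.name}")
    return "\n".join(lines)

# e.g. send digest_logs("/var/log/archive") to hashkeeper.example.com,
# whose admin staff overlaps with nobody on the syslog team.
```

Tampering now requires rewriting a log file AND replacing the matching manifest line on a machine the log admins have no access to - the two-compromised-people bar the parent describes.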
Re:Syslog (Score:2, Informative)
I recently attended a SANS Summit on Logging. It's not about making it impossible to overwrite logs... there's basically no way to do that. Every suggestion here pretty much has a reply explaining how to get around it. It's not a technical problem to solve, it's a policy one.
Given a non-tiny operation, it's fairly simple to reach compliance (IANAL). The group that runs the payment gateway is NOT the group that runs the centralized logging system. Use syslog-ng to send the logs to a central server. The payment-gateway guys don't get access to the centralized logging server, at least not write access. If you want, store the logs in a DB and give them read access. They'll still have local logs for troubleshooting and such anyway, so they don't really need it, unless they need to go farther back than the local server logs are stored. Back up the centralized logs regularly to tape, or whatever your backup setup is. If you're paranoid, store checksums in a separate area, email them out, whatever.
You can't make the logs unalterable. What you do is put policies in place to make sure they stay secure if the infrastructure you are logging is compromised, internally or externally. For example, the systems you are trying to protect don't need full access to your logging servers; port 514 (or whatever you pick) is enough.
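A minimal sketch of that split in syslog-ng configuration (hostnames and paths are invented, `s_local` stands for whatever local source block you already have, and option names should be checked against your syslog-ng version):

```
# Central host: accept logs on UDP 514
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_central { file("/var/log/central/$HOST.log"); };
log { source(s_net); destination(d_central); };

# Each payment-gateway box: keep local logs, forward a copy
destination d_loghost { udp("loghost.example.com" port(514)); };
log { source(s_local); destination(d_loghost); };
```

The gateway boxes can only ever send on 514; nothing on them has credentials to log in to, or write files on, the central host.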
It's HIPAA not HIPPA (Score:3, Informative)
It's HIPAA not HIPPA.
See Wikipedia, among others:
http://en.wikipedia.org/wiki/Health_Insurance_Por
Peace
Re:From Experience (Score:2, Informative)
lcap CAP_LINUX_IMMUTABLE CAP_SYS_RAWIO CAP_SYS_MODULE CAP_MKNOD CAP_SYS_BOOT
That will disable the ability to alter immutable bits, access raw devices, load kernel modules, create device nodes, or reboot into a different kernel.
Now let's put said unalterable log on an encrypted partition that requires a key on a USB dongle to mount, split the key into a few parts, and give the parts out to different people.
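Splitting the key "into a few parts" can be done with simple XOR shares, where every share is needed to reconstruct the key (a k-of-n threshold scheme like Shamir's is more forgiving; this sketch is the all-or-nothing version):

```python
import os
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, parts: int) -> list:
    """n-of-n split: parts-1 random one-time pads, plus the XOR of the
    key with all of them. Each share alone reveals nothing about the key."""
    pads = [os.urandom(len(key)) for _ in range(parts - 1)]
    return pads + [reduce(_xor, pads, key)]

def join_key(shares: list) -> bytes:
    """XOR all shares back together; any missing or altered share
    yields garbage, not the key."""
    return reduce(_xor, shares)
```

Hand one share each to, say, security, ops, and a manager; mounting the log partition then requires all three to show up, which is the whole point.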
Also, there are hardware-crypto-based document storage solutions out there that supposedly make things totally unalterable short of an act of god (embedded-in-lacquer kind of thing). nCipher makes some stuff like that; google them. (I don't have any vested interest, it's just the only company I know of that makes that sort of thing.)
Comment removed (Score:3, Informative)
PCI-DSS logging and transmission requirements (Score:3, Informative)
3.1: Store only what you must.
3.2: Do not store sensitive authentication data (CVV, CVV2, magnetic stripe) subsequent to authorization.
3.3: Mask the credit card number wherever it must be displayed, whenever possible.
3.4: Render the PAN (credit card account number) unreadable anywhere it is stored. If this is not possible, see Appendix B for acceptable compensating controls.
Note that 3.4 does apply to backup media, logs, etc. The simple approach is don't log credit card numbers. And don't store them in your MySQL database in plain text (as does OSCommerce in at least one configuration).
4.1: Cardholder data must be encrypted when transmitted across "open, public networks." Presumably corporate intranets are excluded from this requirement, but I would just as soon encrypt everywhere.
4.2: Never send credit card numbers via email.
10.1: Establish practices that allow you to track any administrative or root action and associate that with each individual user. In other words, you must be able to show not only what root did, but also which individual did it. I suspect that restricting root to *one* person and giving others access to sudo would be sufficient provided that sudo -s and su are prohibited from being used.
10.5: Protect audit trails against unauthorized modifications. This does not mean write once. It simply requires that the media be "difficult to alter." However, periodically backing recent logs up to optical disks would likely be a good practice.
10.7: log data must be retained for at least one year, and at least three months must be available online.
I'm a PCI QSA - You've got it all wrong (Score:4, Informative)
When dealing with any PCI requirement the most important thing to think about is the INTENT. Is the intent of the logging requirements in section 10 of the PCI DSS to prevent anyone, anywhere, from EVER being able to modify log files? No! The intent is to prevent a compromised system from altering its own log files--hiding the fact that it has been compromised. As long as your logging solution handles this situation effectively you really don't have anything to worry about.
In my role as auditor I would never fail a syslog host just because it was writing to a standard ext3 volume. I *would* fault a company if their logging solution was poorly configured (insecure: say, running telnetd) or was write-accessible by the same admins that send all their log data to it (unless they were a small company--if you only have one or two admins there's only so much separation of privilege you can get away with). I'd also have problems with a syslog host that wasn't backing itself up on a regular basis (90 days online, 3 year archive).
If I were you I'd be more concerned with your logging system meeting the other requirements of the PCI DSS. If it is inherently insecure or fails to implement proper access controls (say, shared root account) who cares how the logging solution is configured?
Remember: Intent is everything. If in doubt, call your acquirer (i.e. your bank). They're the ones who ultimately have to decide whether or not your implementation is good enough anyway. The auditor just writes a report--the bank has to sign off on it.
Re:FreeBSD to the rescue (Score:1, Informative)