
 




DSS/HIPPA/SOX Unalterable Audit Logs? 381

analogrithems writes "Recently I was asked by one of the suits in my company to come up with a method to comply with the new PCI DSS policy that requires companies to have write once, read many logs. In short, the requirement is for a secure method to make sure that once a log is written it can never be deleted or changed. So far I've only been able to find commercial and hardware-based solutions. I would prefer to use an open source solution. I know this policy is already part of HIPPA and soon to be part of SOX. It seems like there ought to be a way to do this with cryptography and checksums to ensure authenticity. Has anyone seen or developed such a solution? Or how have you achieved compliance?"
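The cryptography-and-checksums approach the submitter hints at is usually built as a hash chain: each entry's digest covers the previous entry's digest, so altering any record invalidates every later link. A minimal sketch in Python (the log messages and helper names are invented for illustration; this is tamper-evidence, not tamper-proofing):

```python
import hashlib

def append_entry(chain, message):
    """Append a log entry whose digest covers the previous entry's digest,
    so altering any record invalidates every later link."""
    prev = chain[-1][0] if chain else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append((digest, message))
    return digest

def verify(chain):
    """Recompute every link; any edited, deleted, or reordered entry fails."""
    prev = "0" * 64
    for digest, message in chain:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "02:34 alice logged in")
append_entry(log, "02:35 alice read record 42")
assert verify(log)
log[0] = (log[0][0], "02:34 mallory logged in")  # tamper with the first entry
assert not verify(log)  # the chain no longer checks out
```

Periodically anchoring the newest digest somewhere out of reach (optical media, a printout, a third party) is what turns detectability into something an auditor can rely on.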
This discussion has been archived. No new comments can be posted.


  • by jsimon12 ( 207119 ) on Wednesday August 01, 2007 @02:34AM (#20067431) Homepage
    I preface this by saying I know I will get flamed for not recommending Open Source, but SOX is a Federal mandate (Federal equals PMITA).

    EMC's Centera [emc.com] is my personal favorite. It isn't cheap, but it does exactly what you need, and it is auditable and recognized by all the third-party audit companies as well as the Federal government.

    I have worked in IT for 15 years, and 5 of those have been for a LARGE financial institution. When it comes to audit and SOX, go with something standard, tested, and commercial, unless you want to spend the next 6 months explaining to your auditors how your homegrown solution works and then the next 6 months building something new that your auditors do understand (or worse, losing your job).
  • WORM Device (Score:3, Informative)

    by passthecrackpipe ( 598773 ) * <passthecrackpipe@@@hotmail...com> on Wednesday August 01, 2007 @02:37AM (#20067459)
    What you need is a Write Once Read Many (WORM) device. Unless EMC started shipping Open Source hardware (hahaha) I don't think you will be able to find this as Open Source. There may be some software solution, but you would most likely need some certification for it anyway ("no, officer, it _really_ is unalterable, trust me....."). Granted, most "hardware" solutions implement WORM through software, but I know from experience that it is impossible to change the data on WORM.

    Technically, a CD-R with some checksumming would work to be compliant - these guys [am-utils.org] have some more info - but if you need it for formal compliance use, you are better off talking to your friendly neighbourhood storage vendor to save you lots of legal hassle should you ever need the WORM thing for evidence. It is the difference between a lengthy legal process where you have to explain exactly why your homebrew solution is legal and simply saying "talk to NetApp".
  • Clicky (Score:3, Informative)

    by mritunjai ( 518932 ) on Wednesday August 01, 2007 @03:39AM (#20067767) Homepage
    WORM media with HIPPA compliance in mind...

    WORM on wiki [wikipedia.org]
  • From Experience (Score:5, Informative)

    by Evets ( 629327 ) * on Wednesday August 01, 2007 @03:50AM (#20067805) Homepage Journal
    I honestly don't know about DSS or SOX, but I have had plenty of fun with HIPAA.

    Unalterable logs as a matter of compliance does not mean "absolutely unalterable under any circumstances". There should be no way for an end user to modify audit trails. There should be no preconceived way for an administrator to alter audit trails - i.e. no utilities for doing so. That does not mean that an admin can't go directly into the DB and alter the data from behind the application layer.

    Under every circumstance when I have run into audit logs involving HIPAA compliance, they have been written by an application directly into a SQL database (Oracle, MS SQL, Informix, and one time DB2). It used to be that they were written in a fairly easy-to-decipher format within a single text column on a per-record basis - which made for a fairly-difficult-to-alter audit trail, because within that easy-to-decipher format were non-printable characters that you would at least have to know to look for. With current implementations, however, the records are stored in a separate table with a many-to-one relationship to the audit-required records, in varchar fields, as plain text - much easier to alter or to get rid of single entries. There is still a level of obfuscation as far as table names and column names go, but that's really a side effect of other things that are going on.

    These systems have been reviewed by auditors and certified as compliant. In the older system, there was no application interface to delete audit records. In the newer system, there is an application interface to delete records in any given application table - and therefore there is one for the audit tables as well. Admin level access is required to delete or alter the records, though.

    Personally, I would expect more as far as HIPAA compliance goes - from both a customer standpoint and an auditor standpoint. My experience (and it is pretty extensive across several high-profile enterprises) is that the customer will demand a better system only when the auditors demand a better system. I haven't run into an auditor yet who has given more than a casual glance at the 'back door' scenario. I suppose it's because there is no true way to keep things absolutely secure, and application-level audit log security is only one layer of the onion.

    Before you get too far into an overly complex and potentially expensive solution, talk with your auditors about the requirements for your specific scenarios. They've seen it before and can tell you exactly what they are looking for from an audit compliance standpoint. They are usually pretty easy to work with and open with their knowledge.
  • Re:Syslog (Score:5, Informative)

    by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Wednesday August 01, 2007 @04:00AM (#20067849) Homepage

    You're not supposed to just use a remote logfile, but a remote logging daemon.
    Another thing you can do is to send the logging messages over a non-IP connection (e.g. a serial line) so that even a standard network failure won't disrupt the logging, and a hacked machine will continue to generate a trackable log. And the last I heard, you can't unsend bits sent down a serial line.
  • by moosesocks ( 264553 ) on Wednesday August 01, 2007 @04:11AM (#20067895) Homepage
    Yes, but the tricky thing about this situation is that it's a "who will guard the guards" type of deal.

    If the root user can set that attribute, he can just as easily unset it, modify the data, and clean up after himself before re-setting it.

    Remotely spitting your logs out to a line printer managed by a trusted 3rd party would seem to be a reasonable solution.
  • Re:From Experience (Score:3, Informative)

    by georgewilliamherbert ( 211790 ) on Wednesday August 01, 2007 @04:15AM (#20067913)
    Second the "ask the auditors what they are looking for"... not everyone gets audited the same.

    Financial company I know passed audit fine with syslog -> a secure system which the normal sysadmins didn't have access to. The people whose actions were being logged couldn't get to the logs (well, presumably someone could break the system, but it was well secured and had non-overlapping sysadmin staff).

    That was good enough. As long as it took two compromised people to hide any given event, that passed audit.
  • by pjr.cc ( 760528 ) on Wednesday August 01, 2007 @04:18AM (#20067925)
    ext3cow was written with compliance in mind (i.e. with an untouchable past), and so it's, AFAIK, the ONLY solution that can fit in compliance (keeping in mind that this only covers part of compliance). SVN, Git, CVS - I'm sorry, but that's just a non-solution for compliance. It also gives you no-mess management with a very easy interface to make sure you are being compliant (this is important, and it's something YOU don't have to be involved in; your lawyers can "look at the past" to make sure "discovery" is going to be consistent).

    The second thing is, compliance is (ridiculously) complex - the compliance vendors have spent many hours with lawyers getting it together; they know the requirements and they know they fulfill them - this is important. It also means their solutions come with an implicit warranty - "hey, you're using NetApp WORM, we know it works" as opposed to "what software is that? how do you know it works?". At the end of the day a lawyer is either going to say "well, I can't argue with the compliance solution" when you're with a well-known vendor, or "your honor, the defendant is using ... which has never been proven or certified by anyone".

    Compliance is the only time I will say to someone: "get a throat to cut". Get a solution you know works, written by people who know what they are doing, and it's all because compliance req's were written by lawyers for lawyers (i.e. scum), and so their scum is going to make you have to act like scum.
  • by timmarhy ( 659436 ) on Wednesday August 01, 2007 @04:30AM (#20067973)
    "Our data recording and data logging has to be proven to be unalterable."

    No such thing exists. Given enough time and a mediocre amount of money, I'm 100% certain I could alter anything you're storing your information on and make it look real.

    The toughest system I've ever seen as far as audit trails go uses CD-Rs in a machine that makes a hash of the data on the CD-R AND reads the serial number on the CD, and stores that on a geographically separate CD-R system. It's similar to those automated CD turnstile things you can buy, only beefier, with a steel casing and alarms on it and whatnot.

  • by arivanov ( 12034 ) on Wednesday August 01, 2007 @04:37AM (#20068011) Homepage
    Not quite.

    They are not very good at tasks which involve writing a lot in small increments, like a log. The sector size is quite big, so if you guarantee that each log entry is physically flushed to disc without caching until the sector is full, the disc will be eaten in no time.

    You probably need a custom writer/reader (most normal ones cannot alter sector size) and custom formatted media along with something different from isofs. Not rocket science really, but definitely beyond the scope of DIY.
  • Re:How odd (Score:3, Informative)

    by arivanov ( 12034 ) on Wednesday August 01, 2007 @04:56AM (#20068101) Homepage
    Now, assuming that they use hard drives, we all know that someone could extract the drives, mount the file system, and change records.

    Not if they have done it properly. If it is designed as an audit solution it is likely to have a hardware crypto module, a device specific key and have all data written out to disks at least signed with it. More likely - encrypted with it. In either case even if the fs is standard you cannot do jack sh*** with it after taking the drives out.

    By the way - implementing the above using OSS is trivial, as all free OSes nowadays provide a TPM API, so you can have unique machine keys. In fact you can implement this on top of any Free OS and integrate it with any standard MTA and most applications with minimal effort. The implementation would also most likely pass audit scrutiny, as it is trivial. The only sticking point will be the crypto procedures and especially escrow. While proving that the app and the design are compliant is not hard, proving that your CA procedures are solid is a phenomenal pain in the a***. Also, you need to prove that you have effective escrow and that taking a hammer to the log machine does not prevent reading the compliance logs later on. The vendor has already done that and the auditors are happy. Compared to that, it will take you on average 4-6 months to get this done with the help of external consultants. Now, if you have done it anyway for a different project, that is an entirely different ball game. You always have to prove to auditors that your app does what it says on the tin anyway, and the apps are often internal. So one more or one less item is not going to turn the boat if the main sticking points (the CA and the escrow) have already been done.

  • Re:How odd (Score:4, Informative)

    by Sobrique ( 543255 ) on Wednesday August 01, 2007 @05:16AM (#20068175) Homepage
    I should add:

    Centeras don't count, as the original post put it, as a 'cheap solution'. They're not all that expensive by 'enterprise standards', but that's ... well, not quite the same as 'affordable for most people'.

    Also, our data centre is under fairly intensive scrutiny and control of physical access. My employer and customer are well aware that physical access means all bets are off, so in order to get physical access you need escorting, and authorization in advance, including documentation of what you're changing, why, and which grid squares in the datacentre you need access to.

    The rest of my team and I are admins on this Centera, but we don't get access to the datacentre. If we have a need to enter, then we can fill in the paperwork and do so, but ... well, we're based 100 miles away. Most 'hands on' work is done by someone else.

    Now, combine that with the fact that each 'clip' (file) is stored 4 times, on 4 separate physical devices (2 of each, on 2 different sites) it would require ... well quite a few people to be complicit to even be able to destroy (or tamper with) data, physically. And a hell of a lot more to do so without leaving great big footprints all over the place screaming to the world what you've done.

    I think you'd need 2 people on each site (one to actually tamper, and one to 'not notice' as he was escorting), plus an admin person offsite to identify which drives need 'doing', on both sites, and to mess with the 'self healing' replication so that one site didn't just restore the other. (You'd have to be fairly quick on the drives too, as soon as one goes down, the healing starts to replicate to other 'spare' drives).

    And then you'd need some other people to mess with the entry logs to site, CCTV footage, change authorization....

    You'd have to be pretty damn serious to pull that off. I mean, it's not even a case of some pointy haired one seeing their career on the line, and demanding immediate sabotage.

  • by jabuzz ( 182671 ) on Wednesday August 01, 2007 @05:38AM (#20068283) Homepage
    Or you could just use a DLT/LTO drive with WORM media. It works just fine for appending, and no special software is needed. Admittedly the drives are not cheap, but it is an easy solution. In fact the WORM media for DLT/LTO were developed specifically for this sort of application.
  • by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Wednesday August 01, 2007 @06:40AM (#20068551) Homepage Journal
    Optical is the right choice here, but you need to understand the PCI requirements and their most common interpretation VERY clearly. What you will probably end up with is something like this:
    • Logs are written over the network (e.g. syslog).
    • The logging host is locked down, with no access from the infrastructure it performs logging for, other than the incoming log data itself.
    • The logging host writes the logs locally to files which are marked as append-only by the OS (Linux can do this).
    • The logs are then written periodically (e.g. once per hour) to optical media.
    • Add redundant logging hosts to taste (3 is a nice number for validation purposes).
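The "write periodically to optical media" step is usually paired with a digest manifest, so the burned copy can be re-verified against later. A sketch of that idea (directory layout and file names are assumptions, not anything PCI prescribes):

```python
import hashlib
import os
import tempfile

def make_manifest(log_dir, manifest_path):
    """Write a SHA-256 digest line for every file in log_dir; the manifest
    would be burned to the optical disc alongside the logs so the copy
    can be re-verified later."""
    lines = []
    for name in sorted(os.listdir(log_dir)):
        with open(os.path.join(log_dir, name), "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        lines.append(f"{digest}  {name}")
    with open(manifest_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

# Demo with throwaway directories; the manifest lives outside log_dir
# so it never ends up hashing itself on the next run.
with tempfile.TemporaryDirectory() as logs, tempfile.TemporaryDirectory() as out:
    with open(os.path.join(logs, "auth.log"), "w") as f:
        f.write("hello")
    make_manifest(logs, os.path.join(out, "MANIFEST"))
    manifest = open(os.path.join(out, "MANIFEST")).read()
    # sha256("hello") is a well-known test vector
    assert manifest.startswith(
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824")
```

Keeping the manifest on a different medium (or with a different team) than the logs themselves is what makes a swap of either one detectable.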

  • by The Mysterious X ( 903554 ) <adam@omega.org.uk> on Wednesday August 01, 2007 @07:34AM (#20068735)
    Despite being "broken", in this case, sha1 would be acceptable.

    All SHA1 being broken means is that it is feasible to find a collision - two values that hash to the same digest. If you are using it to verify the integrity of a file, then even if a collision is found, it's going to be plainly evident.

    Though it's easy to find a collision, it is *impossible* to choose the content of that collision.

    The importance of SHA1 being broken is when it is used for, say, obfuscating passwords. If a system is compromised and the cracker gets a list of password hashes, they can then generate, from that list of password hashes, a list of valid plaintext sequences that would produce those hashes.

    So in the former case, the cracker would find a matching set of plaintext for the logfile, but since the contents of the alternative plaintext would probably be a pseudo-random jumble of data, anyone who looks in any detail at the fake log file will instantly see that it is falsified. The cracker may as well create a false logfile and lie about the hash.

    In the latter, it would allow the cracker to get a list of passwords that could be used to compromise his target systems much more quickly than he could have without them.
  • by Anonymous Coward on Wednesday August 01, 2007 @07:58AM (#20068883)
    Health Insurance Portability and Accountability Act, schmuck.
  • by CastrTroy ( 595695 ) on Wednesday August 01, 2007 @08:38AM (#20069135)
    I remember working with VHS tapes. We had to lay down a "control track" by recording a continuous stream over the entire tape before using it. Maybe it's just because my high school had bad video editing hardware, but I remember that this control track was important if you wanted the editing machines to be able to properly align with single frames when editing, and for the time and frame number to be consistent.
  • If I want to alter records all I need to do is rip the DVD, edit it on my HD, burn it to a new DVD+R, and destroy the old one.

    Maybe I'm missing something, but wouldn't that be possible with unsigned data on any media? If you can obtain the media and a writer, and the data isn't authenticated somehow, you can always simply write a new version and toss the old. Unalterable does not mean impossible to destroy, just impossible to modify once written. Cryptographically signing data before writing it to any write-once medium (like DVD+R) would seem to solve this problem, because you'd need the signing key to "modify" it as you suggest.

  • by sjames ( 1099 ) on Wednesday August 01, 2007 @09:17AM (#20069539) Homepage Journal

    Though it's easy to find a collision, it is *impossible* to choose the content of that collision.

    In this case, "easy" means not utterly impossible to accomplish in a lifetime if you have unlimited funds.

    The significance isn't that the new attack is "practical". It's more that given those results, the odds of an even better attack coming along in the next decade or two went up.

    All the same, for a brand new application, why not just use SHA256? That's what Jon Callas meant by "walk, but not run, to the fire exits". No need to panic over data already protected by SHA1 or even to run around replacing all uses of SHA1 this instant, but if you're writing code anyway, why not choose a safer option?

    As you say, you don't get to choose the collision, that's why it's not time to panic.

  • by alcourt ( 198386 ) on Wednesday August 01, 2007 @09:24AM (#20069599)
    Append-only files have not been required in my experience. What is required is that the team sending the log data have no ability to overwrite a previously written file. This can be done a number of ways, but the easiest method is to transmit the data in a way that lets the server choose the filename, not the client. Add a date string into the filename and you can (with a few other details I've worked out but am here waving a wand at) avoid the problem.
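The server-chooses-the-filename idea can be sketched roughly like this (the `receive_log` helper and its naming scheme are invented for illustration, not part of any standard):

```python
import os
import re
import tempfile
from datetime import datetime, timezone

def receive_log(base_dir, client_id, data):
    """Hypothetical server-side helper: the *server* picks the file name,
    embedding a UTC timestamp, so a client can never overwrite an
    earlier upload by resending the same name."""
    safe_id = re.sub(r"[^A-Za-z0-9_-]", "_", client_id)  # never trust the client's name
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
    path = os.path.join(base_dir, f"{safe_id}-{stamp}.log")
    # "x" mode fails instead of overwriting if the name already exists
    with open(path, "x", encoding="utf-8") as f:
        f.write(data)
    return path

# Demo in a throwaway directory
with tempfile.TemporaryDirectory() as d:
    p = receive_log(d, "gateway-01", "Aug  1 02:34:56 login ok\n")
    assert os.path.basename(p).startswith("gateway-01-")
```

The `"x"` open mode is the belt-and-braces part: even if two uploads somehow collide on a name, the second one errors out rather than silently replacing the first.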

    syslog works for most data, but not all. Linux is one of the only Unix-based systems that puts sulog through syslog. The failed-logins log is much more difficult, as is the wtmp data. wtmp data is especially annoying, as it is one of the only ways to semi-reliably record both login and logout regardless of login type (including ssh), and it can't really handle real-time data streaming. The other annoying item is the command-line history of all commands with EUID 0. I'm hoping to hear some news soon on a solution to that problem, but it is really difficult, especially since a lot of SAs become root via `sudo -s` or `su` (as opposed to `su -`), which would not modify their HISTFILE variable. Many root shells do not support direct sending of HISTFILE over the network.

    As to writing periodically to an optical medium, I wouldn't worry quite so much about that. I would instead worry more about encrypting all that security data while it is in network transit. (Sorry, I can't recall if that is a firm requirement of PCI DSS 1.1 or not.) Unfortunately, this makes use of syslog a less trivial solution. Authenticity is also an issue to be concerned with. How do you know that the event that got inserted into the log really came from that box, and not some random other server? Traditionally, syslog has not concerned itself with such issues, but a PCI system may care a great deal.

    Once the data is on the central logging host, it is already in a state where the author of the data (the SAs for the PCI-impacted box) cannot modify it. That eliminates, at least in the interpretation of PCI I've been working with, the need for writing to optical media. Immutable is not so much immutable by anyone as immutable by the server in question.

    The point of the central copy of the logs is so that modification on either side can be readily detected and investigated. But if you cannot trust your central log host to have an accurate copy of the logs because you are receiving log data from anyone who chooses to pretend they are your PCI impacted server, then your central log host does not give you as much value as it may seem. The audit requirements aren't just for making lives miserable, they usually have a valid point behind them.

    When working with PCI, know which DSS you are on, 1.0 or 1.1. (I don't know the release schedule for the next PCIDSS.) The requirements do differ, as do even the interpretations. Reference https://www.pcisecuritystandards.org/ [pcisecuritystandards.org] for the information.
  • Re:From Experience (Score:3, Informative)

    by sjames ( 1099 ) on Wednesday August 01, 2007 @09:58AM (#20070049) Homepage Journal

    Unalterable logs as a matter of compliance does not mean "absolutely unalterable under any circumstances". There should be no way for an end user to modify audit trails. There should be no preconceived way for an administrator to alter audit trails - i.e. no utilities for doing so. That does not mean that an admin can't go directly into the DB and alter the data from behind the application layer.

    That's VERY important to keep in mind. A lot of the wailing, hype, and FUD around all of the various auditing and retention laws comes from people who do not understand that fundamentally absolutely ANY audit trail can be altered given sufficient determination and resources. Even if the logs are chiseled into stone slabs, it is not absolutely inconceivable that someone might produce a slab that is identical in every way except for a changed digit or two.

    WORM media can be duplicated as well. Whole vaults of WORM media can be duplicated. If you save hashes of the data separately, the media containing the hashes can be swapped just like the main logs.

    So, making it absolutely impossible to alter data is out of the question, it's really a matter of how hard you can make it without bankrupting the company in the process (cheapest solution is fold the company. No company means no data means no alterations).

    Dumping the lot to a line printer in real time AND storing to a log file is one answer. In some cases, just keeping the logfiles in ext[23] and setting the append only attribute may be enough, at least until enough has accumulated to burn another sector onto a WORM device.

    For a while, I ran a patched kernel that would allow the immutable or append only bits to be set by root but not cleared. Clearing the bits when necessary required booting into another kernel (which would trigger many alarms when the machine went down). Doing so was a regular procedure for maintenance, but that was scheduled and all admins were notified. An unscheduled event would NOT go un-noticed.

    It's also useful to note that most of the requirements are that fraud be merely detectable. That is, the data need not be unalterable so long as the alteration is detectable. It's MUCH easier to detect that data was changed than it is to allow for reconstruction of the original data. One viable scenario (given that fraud is a rare-to-nonexistent event for the company) is to detect the alteration in the electronic data and then reconstruct the real data by following the paper audit trail.

  • by Maximum Prophet ( 716608 ) on Wednesday August 01, 2007 @10:00AM (#20070079)

    Anything can be destroyed or altered, and as with any security issue this is a matter of making the cost of doing so more than anyone is willing to pay.
    Absolutely true in principle, but in practice it can be hard to put a proper dollar value on any small piece of information. Here's an extreme example. Suppose you have a manufacturer that makes widgets that are worth $N. They implement access controls on the doors, so they know who is coming and going. If one employee can carry M widgets, then they can estimate the value of one record at $N*M. Now, let's say that one day an employee comes in during the off hours, kills another employee, then decides to spend $N*Z dollars to remove the record that he was there, where Z >> M.
    The trouble is most times we figure out what the data might be worth to us, not taking into account what it might be worth to the bad guys. The opposite scenario is more likely, where a company spends much more to protect a piece of data than it would ever be worth. In that case they are wasting money that would be better spent doing real work.

    The best thing you can do with your cryptographic hashes is to have copies away from the actual logs. Make sure that the people who have access to the remote hashes are different than the people who have access to the logs. Then it takes at least two people working together to muck things up.
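The "hashes held by different people" idea above can be done with keyed hashes: the audit team holds an HMAC key the admins never see, so forging a log entry requires both parties to collude. A sketch with an invented key and log lines (in practice the key would come from an HSM or key-management system, not a literal in source):

```python
import hashlib
import hmac

# Hypothetical split of duties: the admins write the logs, but this key is
# held by a separate audit team that stores the tags out of the admins' reach.
AUDIT_KEY = b"held-by-the-audit-team-not-the-admins"

def tag(entry: bytes) -> str:
    """Compute an HMAC-SHA256 tag for one log entry."""
    return hmac.new(AUDIT_KEY, entry, hashlib.sha256).hexdigest()

def entry_is_authentic(entry: bytes, stored_tag: str) -> bool:
    """Constant-time comparison against the tag the audit team stored."""
    return hmac.compare_digest(tag(entry), stored_tag)

t = tag(b"Aug  1 03:12 root su to oracle")
assert entry_is_authentic(b"Aug  1 03:12 root su to oracle", t)
assert not entry_is_authentic(b"Aug  1 03:13 root su to oracle", t)
```

Unlike a plain hash, an attacker who can rewrite the logs cannot recompute valid tags without the key, which is exactly the two-person property the comment describes.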
  • Re:Syslog (Score:2, Informative)

    by Nos. ( 179609 ) <andrewNO@SPAMthekerrs.ca> on Wednesday August 01, 2007 @10:33AM (#20070503) Homepage

    I recently attended a SANS Summit on Logging. It's not about making it impossible to overwrite logs... there's basically no way to do that. Every suggestion here pretty much has a reply to it about how to get around it. It's not a technical problem to solve, it's a policy one.

    Given a non-tiny operation, it's fairly simple to reach compliance (IANAL). The group that runs the payment gateway is NOT the group that runs the centralized logging system. Use syslog-ng to send the logs to a central server. The payment gateway guys don't get access to the centralized logging server, at least not write access. If you want, store the logs in a DB and give them read access. They'll still have local logs for troubleshooting and such anyway, so they don't really need it, unless they need to go further back than the local server logs are stored. Back up the centralized logs regularly to tape, or whatever your backup setup is. If you're paranoid, store checksums in a separate area, email them out, whatever.

    You can't make the logs unalterable. What you do is put policies in place to make sure that they are secure if the infrastructure you are logging is compromised, internally or externally. For example, the systems you are trying to protect don't need full access to your logging servers, port 514 (or whatever you pick) is enough.
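A toy illustration of the central-collector idea from the comment above: a UDP listener standing in for a syslog server (real deployments use syslog-ng or rsyslog on port 514; this sketch binds an ephemeral localhost port purely for demonstration):

```python
import socket

def collect_one(port=0):
    """Toy stand-in for a central syslog collector: bind a UDP socket
    (real syslog listens on port 514; port 0 picks a free one here)
    and hand back the bound port plus a one-datagram receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(5)  # don't hang forever if nothing arrives
    bound_port = sock.getsockname()[1]

    def recv_line():
        data, _addr = sock.recvfrom(4096)
        return data.decode("utf-8", errors="replace")

    return bound_port, recv_line

# A "payment gateway" host sends one log line; the collector reads it.
port, recv_line = collect_one()
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"<13>gateway: restarted", ("127.0.0.1", port))
assert recv_line() == "<13>gateway: restarted"
```

Note this is exactly the trust boundary the comment describes: the gateway can only send datagrams to the collector's port; it has no other access to the logging host.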

  • It's HIPAA not HIPPA (Score:3, Informative)

    by opkool ( 231966 ) on Wednesday August 01, 2007 @11:26AM (#20071541) Homepage
    Hi,

    It's HIPAA not HIPPA.

    See Wikipedia, among others:

    http://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act [wikipedia.org].

    Peace
  • Re:From Experience (Score:2, Informative)

    by Akatosh ( 80189 ) on Wednesday August 01, 2007 @11:33AM (#20071683) Homepage

    For a while, I ran a patched kernel that would allow the immutable or append only bits to be set by root but not cleared. Clearing the bits when necessary required booting into another kernel (which would trigger many alarms when the machine went down).
    FYI, you don't need a special kernel for that on Linux:

    lcap CAP_LINUX_IMMUTABLE CAP_SYS_RAWIO CAP_SYS_MODULE CAP_MKNOD CAP_SYS_BOOT
    unlink /dev/sda1 (or whatever, after fscking/mounting)

    That will disable the ability to alter immutable bits, access /dev/mem or /dev/kmem, load kernel modules, access storage devices directly, or reboot the system.

    Now let's put said unalterable log on an encrypted partition that requires a key on a USB dongle to mount, split the key into a few parts, and give the parts out to different people.

    Also, there are hardware-crypto-based document storage solutions out there that supposedly make things totally unalterable short of an act of god (embedded-in-lacquer kind of thing). nCipher makes some stuff like that; Google them. (I don't have any vested interest; it's just the only company I know of that makes that sort of thing.)
  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Wednesday August 01, 2007 @11:50AM (#20071991)
    Comment removed based on user account deletion
  • Ok, I am not saying that one shouldn't encrypt. I think one generally should, but here are the requirements and relevant sections paraphrased:

    3.1: Store only what you must.
    3.2: Do not store sensitive authentication data (CVV, CVV2, magnetic stripe) subsequent to authorization.
    3.3: Mask the credit card number whenever it must be displayed, wherever possible.
    3.4: Render the PAN (credit card account number) unreadable anywhere it is stored. If this is not possible, see Appendix B for acceptable compensating controls.

    Note that 3.4 does apply to backup media, logs, etc. The simple approach: don't log credit card numbers. And don't store them in your MySQL database in plain text (as OSCommerce does in at least one configuration).
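The masking that 3.3 calls for is straightforward to sketch; this helper shows only the last four digits (exactly which digits may remain visible is ultimately a question for your assessor, and the function is invented for illustration):

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number down to its last four digits.
    (Which digits may remain visible is your assessor's call;
    this helper is invented for illustration.)"""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

assert mask_pan("4111 1111 1111 1111") == "************1111"
```

Masking at display time is distinct from 3.4's storage requirement: a masked number on screen does not excuse a cleartext PAN in the database or the logs.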

    4.1: Cardholder data must be encrypted when transmitted across "open, public networks." Presumably corporate intranets are excluded from this requirement, but I would just as soon encrypt everywhere.

    4.2: Never send credit card numbers via email.

    10.1: Establish practices that allow you to track any administrative or root action and associate that with each individual user. In other words, you must be able to show not only what root did, but also which individual did it. I suspect that restricting root to *one* person and giving others access to sudo would be sufficient provided that sudo -s and su are prohibited from being used.

    10.5: Protect audit trails against unauthorized modifications. This does not mean write once. It simply requires that the media be "difficult to alter." However, periodically backing recent logs up to optical disks would likely be a good practice.

    10.7: log data must be retained for at least one year, and at least three months must be available online.

  • by Riskable ( 19437 ) <YouKnowWho@YouKnowWhat.com> on Wednesday August 01, 2007 @05:05PM (#20077567) Homepage Journal
    Let me first state that I'm a PCI Qualified Security Assessor. That means I am certified by PCI to perform audits and report back to banks on whether or not a company is compliant. In other words, consider me authoritative on this matter.

    When dealing with any PCI requirement the most important thing to think about is the INTENT. Is the intent of the logging requirements in section 10 of the PCI DSS to prevent anyone, anywhere, from EVER being able to modify log files? No! The intent is to prevent a compromised system from altering its own log files--hiding the fact that it has been compromised. As long as your logging solution handles this situation effectively you really don't have anything to worry about.

    In my role as auditor I would never fail a syslog host just because it was writing to a standard ext3 volume. I *would* fault a company if their logging solution was poorly configured (insecure: say, running telnetd) or was write-accessible by the same admins that send all their log data to it (unless they were a small company--if you only have one or two admins there's only so much separation of privilege you can get away with). I'd also have problems with a syslog host that wasn't backing itself up on a regular basis (90 days online, 3 year archive).

    If I were you I'd be more concerned with your logging system meeting the other requirements of the PCI DSS. If it is inherently insecure or fails to implement proper access controls (say, shared root account) who cares how the logging solution is configured?

    Remember: Intent is everything. If in doubt, call your acquirer (i.e. your bank). They're the ones who ultimately have to decide whether or not your implementation is good enough anyway. The auditor just writes a report--the bank has to sign off on it.
  • by Anonymous Coward on Wednesday August 01, 2007 @08:51PM (#20079991)
    Actually, no. The root user cannot unset system immutable flags. They can only be removed by booting the system in single-user mode. It's an extremely cool system and I was surprised nobody recommended it higher up in the comments.
