Security

Ask Slashdot: Where Are All the Jobs Preventing Zero-Day Exploits?

An anonymous reader writes: Given the widespread understanding that sophisticated hackers are regularly using zero-day vulnerabilities to break into high-value systems, why is it that when I search for "zero day" on Australia's most popular job search engine only one "real" job comes up? Is the security of the Internet totally dependent on dedicated hobbyists, part-time showboats, and people willing to take meagre bug bounties (on average paying $3,650 for a critical vulnerability) instead of selling their findings (sometimes for millions of dollars) to dubious buyers?
Are they all in-house security people hunting for zero-days as part of their regular responsibilities? Share your own thoughts in the comments.

This discussion has been archived. No new comments can be posted.
  • by Joe_Dragon ( 2206452 ) on Sunday November 21, 2021 @03:38PM (#62007927)

    what about better coding / QA?

    Well, we may need people to test code before it comes out.

    • by david.emery ( 127135 ) on Sunday November 21, 2021 @04:22PM (#62008065)

      Someone mod parent up, please.

      What SHOULD BE the First Line of defense is not putting the vulnerabilities into the software in the first place! That's a combination of developer knowledge, better tools (particularly programming languages that don't provide so many damn ways to introduce vulnerabilities), better design, and better analysis (including formal methods analysis). The Second Line of defense should be runtime analysis tools and testing, to remove the bugs that remain in the code.

      I'm highly critical of both the "learn to code in 6 weeks" approach that won't teach people enough to not be dangerous, and the attitude in both academia and industry that we should just throw money at cybersecurity, expecting that eventually enough cyber-trained people (making big bucks) will somehow fix all those vulnerabilities that poor developers/poor development practices put into the code in the first place.

      But NONE of this will happen as long as companies get paid to fix bugs, rather than having to pay out for the bugs in the first place. It's long past time to hold software companies LEGALLY LIABLE for vulnerabilities and other bugs in their code.

      • by AmiMoJo ( 196126 ) on Sunday November 21, 2021 @05:34PM (#62008287) Homepage Journal

        Learn to Code in Six Weeks is fine for non-security-critical stuff. Every app should be sandboxed anyway.

        The reason there are no jobs mentioning zero-day vulnerabilities is that the correct term is security researcher. There are lots of jobs in that field.

        • by phantomfive ( 622387 ) on Sunday November 21, 2021 @06:07PM (#62008389) Journal

          Everything on the internet is security critical.

          • by AmiMoJo ( 196126 )

            It shouldn't be. For example, a website's JavaScript code should be so well sandboxed that no matter how bad it is, it can't cause problems for the user. That includes everything from exploits like Rowhammer to simply using excessive CPU resources.
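
            To illustrate just the resource-limiting half of that idea (browsers sandbox JavaScript quite differently; this is only a rough Python sketch, and run_untrusted is a made-up helper): untrusted work can be pushed into a child process whose CPU and memory are capped by the kernel, so that "excessive CPU" kills the child rather than the host.

                import resource
                import subprocess

                def run_untrusted(script_path: str, timeout_s: int = 5) -> str:
                    # Run an untrusted script under hard, kernel-enforced caps (POSIX only).
                    def limit_resources():
                        # Cap CPU time at 2 seconds; the kernel signals, then kills, a runaway child.
                        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
                        # Cap the address space at 256 MiB to bound memory use.
                        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

                    result = subprocess.run(
                        ["python3", "-I", script_path],  # -I: isolated mode
                        capture_output=True, text=True,
                        timeout=timeout_s,               # wall-clock backstop
                        preexec_fn=limit_resources,      # runs in the child before exec
                    )
                    return result.stdout

            Note this bounds resource use only; real sandboxing also needs capability isolation, which is the harder part.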

            • by LKM ( 227954 )

              Sandboxing only protects against some security problems. As a JS dev, you still need to understand topics like information disclosure (e.g. your error messages need to not leak information about the server infrastructure), CRLF injection (be careful what you put in your log messages), handling untrusted data safely, etc.

              I think the sad thing is that nowadays, you can't be a decent programmer if you don't understand what security threats you're dealing with. There are no systems anymore that aren't connected
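
              To make the CRLF-injection and information-disclosure points concrete, a minimal sketch (in Python rather than JS; sanitize_for_log and handle_login_failure are invented names): escape newlines before logging untrusted data, and keep the user-facing error generic.

                  import logging

                  logger = logging.getLogger("app")

                  def sanitize_for_log(value: str) -> str:
                      # Escape CR/LF so untrusted input cannot forge extra log lines.
                      return value.replace("\r", "\\r").replace("\n", "\\n")

                  def handle_login_failure(username: str) -> str:
                      # Log the detail internally, with newlines neutralised.
                      logger.warning("login failed for user=%s", sanitize_for_log(username))
                      # The user-facing message stays generic: no stack trace, no hint
                      # about which accounts exist or what the backend looks like.
                      return "Login failed. Check your credentials and try again."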

      • by vyvepe ( 809573 )

        You have very good ideas about how it should be done, except for the legal liability for bugs. Almost no client will want to wait longer or pay the increased price needed for vulnerability-free software or software delivered with a warranty for bugs. These are just nonsense requests from people forgetting that software must be released to the market soon enough to be useful. A company that would like to produce bug-free software will have the product finished later. A competitor will capture the market with

        • by david.emery ( 127135 ) on Sunday November 21, 2021 @07:46PM (#62008615)

          And let's not even open the can of worms about what is a bug and what is a feature, and whether the software conforms to the specification, and who is responsible for the bugs, and especially ambiguities in the specification. Gee, isn't this the HEART OF THE MATTER?

          I come from the mil/aero world, where "time to market" is secondary to "must actually work." I've worked on weapon systems and air traffic control systems, where bugs, including cyber vulnerabilities, can have really significant consequences. Ask Boeing about 'releasing products before they're ready.' But that world keeps on buying into shit like "Agile" with the mistaken belief that, just because a software company makes a lot of money, that means they are actually doing a good job with their software.

          I buy insurance for my car. That doesn't mean I accept it if the wheels fall off, or if someone can easily break into it and steal it.

          And by the way, that's why I'm writing this on a Mac. Not because Macs are perfect, but because they're -markedly better- than Windows, particularly when it comes to reliability, including vulnerabilities. There have been some significant exploits of Apple software, but NOTHING like that for Windows. I'm firmly convinced that what too many managers learned from using Windows is that "bugs are inevitable" (they are NOT) and that "all software is equally shitty" (it is NOT). Now, I've worked with software technology for provably correct systems (see https://www.adaic.org/advantag... [adaic.org] ), although I have not delivered actual products using that. But the discipline that came from having to write assertions and think about the properties of my software paid substantial dividends in the stuff that I did write for production.

          • All products are only built to be as good as they have to be, including your Mac.
            Think about everything you currently own: it's all built as poorly as the maker could get away with. The things that are slightly better have to be, either because of safety or, rarely, reputation.
          • by Bert64 ( 520050 )

            Microsoft has spent years lowering people's expectations. People generally view computers as insecure and unreliable, and this comes across in popular culture, where anything computerised is portrayed as trivially easy to hack and constantly suffering problems, so that a major outage is shrugged off as an everyday occurrence.

          • by vyvepe ( 809573 )
            Well, your opinion is skewed by your history in mil/aero. Other sectors are much more sensitive to price and time to market and much less sensitive to bugs. The military especially is a complete outlier, with hardly any sensitivity to price. It is a sector that just wastes a big portion of the money it spends.
          • Not because Macs are perfect, but because they're -markedly better- than Windows, particularly when it comes to reliability including vulnerabilities.

            Meh, this is only because their market share continues to remain under 10% and the bad actors direct their energy to the low-hanging fruit. If the Mac were over 80% (like Windows is), you can guarantee there'd be a lot more energy expended in exploiting them.

            (I've been using Microsoft Surface Computers since 2013. I've not had to deal with a single exploit.)

      • But NONE of this will happen as long as companies get paid to fix bugs, rather than having to pay out for the bugs in the first place. It's long past time to hold software companies LEGALLY LIABLE for vulnerabilities and other bugs in their code.

        That would never be a viable approach. You're literally asking every programmer to foresee every possible outcome with every possible input into their code. The day that is ever practical will be the same day there are no more bugs in any code, ever. Besides that, it would basically be the end of open source software, because releasing commercially produced source code would expose far too much liability. And that's even assuming that your idea doesn't hold individual developers liable when they produce fre

      • https://medium.com/@jasonthist... [medium.com] This isn't a new problem, and I have been preaching about it since the old blackcode.com days. Clearly, a different approach is needed.
      • by scamper_22 ( 1073470 ) on Monday November 22, 2021 @12:24PM (#62010299)

        I was a developer who switched over to cyber security. There are a lot of angles to look at this from. Probably the most useful statement is that everyone has some part to play.

        Yes, let's all hire great and skilled developers who can build things correctly. Unfortunately, great people who are technically skilled, politically skilled, and driven are rare. There are only so many of these kinds of people around. Look at medicine. It's pretty difficult to become a doctor. Yet not all doctors are good. Not all doctors care. Not all doctors can balance time, quality, money...

        If professions with real quality control can't do things really well, it's not reasonable to think a largely unregulated software field can.

        So you have to approach it with the reality that you simply won't have great people writing great software. You need a separation of roles where people can check the software for security issues.

        You also have bureaucracy. It may not seem like it at the ground level, but life in a bureaucracy operates differently. An executive with a cyber security mandate gets their own little fiefdom. That's how they get their funding. That's how they get power and policy passed and bodies to deal with the problem. They get to be accountable solely for security. It's actually very useful from a bureaucracy standpoint.

        To put it at a ground level: in a moment of irony, I got the pleasure of performing a security review of a product I had largely built, but which was transferred to other people when I switched. I knew most of the security issues in the product because I built it. But as a developer, I wasn't singularly focused. My organization was focused on shipping, and we were constantly assured we could fix bugs in further iterations. So I let various things go and placed them on the back burner. Being in cyber security in a separate role, all that ambiguity goes away. I focus on my job. I'm backed by my executives. We have our own funding for our own tools and resources. I'm not interested in anything but security. Things come to light far better and are dealt with better.

        I'll just reiterate separation of roles a bit more. Back to the medical example again: medical errors are a huge deal. Take something as simple as leaving medical equipment inside patients. It actually happens a lot. It has nothing to do with how smart doctors or nurses are. It's just the nature of life. Mistakes happen. People are rushed. People have other things on their mind. Giving one person the sole job of making sure all equipment is accounted for is a very trivial measure. But it's a necessary one.

        Lastly, I'm not a huge fan of thinking only about liability. Liability is important, but I think standards have historically been the better way. There was a time when construction was pretty shoddy, and you could certainly sue people. But things really only got better when standards came out: standards requiring qualified tradespeople, standard building codes...

        Applied to software, we could create standards around things like only storing hashed passwords in a database, and other such things, as just standard ways of doing things. Want to build an authentication system? Hire a qualified authentication engineer. It would go a lot further toward improving society than simply shipping bugs, waiting for exploits to happen, and suing for damages.
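
        As a sketch of what a "hashed passwords only" standard could require in practice, using only Python's standard library (the 600,000 iteration count is a commonly cited baseline for PBKDF2-HMAC-SHA256, not a figure from this discussion):

            import hashlib
            import hmac
            import os

            ITERATIONS = 600_000  # deliberately slow, to blunt offline guessing

            def hash_password(password: str) -> tuple[bytes, bytes]:
                # Store only (salt, digest); never the plaintext password.
                salt = os.urandom(16)
                digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
                return salt, digest

            def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
                candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
                return hmac.compare_digest(candidate, stored)  # constant-time compare

        A standard could mandate exactly these verifiable properties (salted, slow, constant-time comparison) without dictating any particular vendor.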

    • what about better coding / QA?

      Well, we may need people to test code before it comes out.

      What has surprised me is that the tech industry has, for the most part, not been required to have professional certification, especially for software. If you look around, almost everything else has some level of required certification (dentists, mechanics, lawyers, doctors, nutritionists, hair stylists, the list goes on...), but not so much the tech industry.

      It's true that, like working on your own car, you shouldn't need it if you're a hobbyist, but I wonder if it should be a requirement when you're actual

      • by AleRunner ( 4556245 ) on Sunday November 21, 2021 @04:49PM (#62008133)

        What has surprised me is that the tech industry has, for the most part, not been required to have professional certification, especially for software. If you look around, almost everything else has some level of required certification (dentists, mechanics, lawyers, doctors, nutritionists, hair stylists, the list goes on...), but not so much the tech industry.

        They would certify the wrong things. In fact, "secure coding" certificates exist and they are mostly completely wrong because they tell you how to code correctly in C when, for the most part, you probably shouldn't be coding in C in the first place. Requiring these certificates would probably do more harm than good.

        • In fact, "secure coding" certificates exist and they are mostly completely wrong because they tell you how to code correctly in C when, for the most part, you probably shouldn't be coding in C in the first place.

          If you're getting paid to code, you're probably not picking the language you get to use on any given project. That was typically decided well before you arrived.

          There are good and bad coding practices in every language, and they should be taught.

          That said... I don't put a lot of stock in certification.

      • by sjames ( 1099 )

        Most of the certifications I have seen for IT were of minimal value and in some cases required the wrong answer on test questions to get a good score.

        Too many certs and not enough experience is a red flag on a resume.

        • by Bert64 ( 520050 )

          Absolutely this..
          Too many people memorise (often wrong) answers to pass certs, but don't actually understand anything beyond the memorised answers, and don't have proper troubleshooting skills or the ability to consider anything new that's not covered by the exam.

    • That isn't Agile. Gotta push new features every sprint. Bugs aren't bugs unless enough paying customers report them; otherwise just bury them in the backlog and they can get voted on at the next grooming session.

      • by Teckla ( 630646 )

        Gotta meet those sprint commitments! Stakeholders are counting on you!

        Time for sprint planning, where people pull story points out of their ass.

        • "Sprint commitments" are merely short-term waterfall in disguise, with most of the problems that waterfall brought in the first place.

          That doesn't matter though; the SDLC of choice is irrelevant. The problem is usually over-scoping due to sales incentives. Sales folk rarely, if ever, get penalized for selling something that can't get built on time, and since they bring in money they are on the "correct" side of the balance sheet. Engineers are cost-centers to be avoided. Testing cannot be completed until the c

    • by AleRunner ( 4556245 ) on Sunday November 21, 2021 @04:46PM (#62008119)

      what about better coding / QA?

      This. Zero days are bugs; to prevent all zero days you have to prevent all bugs. Obviously some bugs are more exploitable than others, but anything where the software does something different from what is expected is a potential security problem if someone relies on that behaviour. Basically, that means we have to invest much more in the software development process. We know how to make reasonably reliable software; there's just one problem: it's expensive. Software companies choose not to make that investment because they know they can get away with it.

      Well, we may need people to test code before it comes out.

      Testing is definitely part of this, but it can only help so much. In fact, in serious aeronautic code, if testing finds more than a few bugs, the whole thing has to be thrown away and started again. First there's a need for the development process to become much, much more serious. Test-Driven Development, which is part of the development process, might have a part to play, but that's not really "testing"; it's part of development work. In fact, to really fix this we need to start to talk about things like "design by contract", "formal methods" and so on, but at the same time we need to get rid of fantasy "architects" who plan software without reference to reality.
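
      To make "design by contract" concrete, a minimal Python sketch (withdraw is an invented example; real DbC tooling uses decorators and proper exceptions rather than bare asserts, which vanish under python -O):

          def withdraw(balance: int, amount: int) -> int:
              # Preconditions: the caller must hand us sane inputs.
              assert amount > 0, "precondition: amount must be positive"
              assert amount <= balance, "precondition: cannot overdraw"

              new_balance = balance - amount

              # Postcondition: the result must be consistent with the contract.
              assert 0 <= new_balance < balance, "postcondition violated"
              return new_balance

      The value is less the runtime check than the discipline: you are forced to write down the properties your code promises.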

    • Laughs in OpenBSD.

      • by lcall ( 143264 )

        Wish I could see how to upvote.

        Maybe we can help the world with its zero-day mess by encouraging our(selves and) employers to hire more people with knowledge of OpenBSD, people who understand why it has gone since ~1996 with only 2 remote holes in the default installation, and how it can be used effectively to keep zero-day bugs from happening, or to carefully know the risks and mitigate them.

        Their site, https://www.openbsd.org/ [openbsd.org] , has a list of consultants: https://www.openbsd.org/suppor... [openbsd.org] and the FAQ a

    • by gweihir ( 88907 )

      Testing is a tiny part of it. Secure coding at the architecture, design, and implementation levels is what actually helps. That needs people with a real clue, and these are expensive on an individual level, even if probably not overall, as they tend to be more productive and cause far fewer problems that are costly to clean up later.

    • what about better coding / QA?

      The original question is where are jobs related to securing systems.

      But you are right that an even better question is what about strong QA for all public-facing software, as criminals are more often exploiting online services directly rather than penetrating systems...

      And the thing is, it just seems like so few companies actually care.

      I had the luck to work at one place that did actually care right at the start of my career - a company that worked on database software, they ha

      • by LKM ( 227954 )

        To me it is madness to have the same coder that writes software also write unit or other tests for that software. They already know how the code works, and so will write tests around that understanding...

        Also, writing good unit tests requires an understanding of how bad code creates security issues. A lot of the time, when I look at unit tests, they test some completely safe cases for correctness, and maybe two or three slightly odd edge cases (what if the parameter is an empty string, or null), but fail to
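
        A hedged sketch of that difference (parse_quantity and its tests are invented for illustration): the happy path plus a deliberately hostile set of inputs, not just the empty string and null.

            import unittest

            def parse_quantity(raw: str) -> int:
                # Parse a user-supplied quantity, rejecting anything out of range.
                value = int(raw.strip())
                if not 1 <= value <= 100:
                    raise ValueError("quantity out of range")
                return value

            class TestParseQuantity(unittest.TestCase):
                def test_happy_path(self):
                    self.assertEqual(parse_quantity("5"), 5)

                def test_adversarial_inputs(self):
                    # The interesting cases are the hostile ones.
                    for bad in ["-1", "0", "101", "2**31", "1e9", "0x10",
                                "5; DROP TABLE orders", " "]:
                        with self.subTest(bad=bad):
                            with self.assertRaises(ValueError):
                                parse_quantity(bad)

            if __name__ == "__main__":
                unittest.main()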

    • by Anonymous Coward
      Posting AC for obvious reasons.

      I work in a government information security certification program that is a prerequisite for government IT procurement, including high-security deployments at various 3-letter organizations. Universally, products that come for certification have major flaws and are designed without security in mind. A good application is one where I can't easily download a script to pwn it. It is THAT bad, so you don't need zero days when publicly known vulnerabilities are not patched. Unfortunat
    • by okvol ( 549849 )
      Most software in use is not developed in-house. Companies purchase the software or rent it, and are totally dependent on those vendors (like SolarWinds) for their coding security.
    • Maybe need better proofreaders too. :P

      Anyways, dedicated QA teams don't really exist these days, since most companies don't want to hire QA people. They just rely on users to test their stuff for free. :(

    • Thanks for this information!
  • Yes, on rare occasion, some corporation with incompetent network administrators and stupid employees gets ransomware.

    The reality is, it's easy to prevent, and, as a result, very rare.

    So, it's not a story.

    Nothing to see here, move on.

    • by gweihir ( 88907 )

      Ransomware is in no way a "rare" occasion. I personally know several cases. Sure, these people had secured backups and were able to detect the intrusion early and hence you will never know about it. But it would have been vastly better in all cases if they had been able to prevent the intrusion in the first place.

      So there is a huge number of ransomware attacks you never hear about, and probably a large number of successful attacks you never hear about.

      • Sure, these people had secured backups and were able to detect the intrusion early and hence you will never know about it. But it would have been vastly better in all cases if they had been able to prevent the intrusion in the first place.

        But that's kind of the point for the average secure business. If you have decent disaster recovery, you treat it like any other data corruption or server failure and move on. Netflix is famous for having a "Chaos Monkey" that randomly took servers offline. Obviously you want to stay up to date and not have security breaches, but if you isolate and back up, then even when a breach occurs nothing valuable is lost or stolen.

        • by gweihir ( 88907 )

          Sure, these people had secured backups and were able to detect the intrusion early and hence you will never know about it. But it would have been vastly better in all cases if they had been able to prevent the intrusion in the first place.

          But that's kind of the point for the average secure business. If you have decent disaster recovery, you treat it like any other data corruption or server failure and move on.

          No. It is still a major incident and may require months of cleanup. It is not a simple "data corruption" or "server failure" at all. The cost of such an incident can easily reach 100k...1M _with_ successful recovery in an SME. The aspect you seem to overlook is that you have to be sure of what happened and be sure it will not happen again. If you just recover your servers from backup, you will get hit again in short order.

    • by AleRunner ( 4556245 ) on Sunday November 21, 2021 @05:03PM (#62008177)

      The reality is, it's easy to prevent, and, as a result, very rare.

      Ransomware is only "easy" to prevent because, for the most part, that's not what the top-end APTs do. They persist and spy. How would you protect yourself against the SolarWinds attacks, which came in through the management software in many companies and bypassed all of their protections whilst disabling and/or avoiding all their monitoring?

      A person with the right zero-day exploit can get past all of your security measures, direct to the heart of your company, for example onto the IT security administrator's laptop, merely by having her read a normal mail from a partner company, and has absolutely no need to cause any form of logging anywhere in your systems. If you have a company using standard modern software like Windows, macOS or Linux, then, done right, there is nothing you can do to prevent a zero-day exploit of your system.

    • by LKM ( 227954 )

      The reality is, it's easy to prevent, and, as a result, very rare.

      You're super, super wrong on both of these counts.

      It's not easy to prevent. Your company has hundreds of employees. They have access to all kinds of things that you don't want to be leaked. Some of them have access to your backups. They will get hundreds of malware emails every day. Some of these will get through the spam filter. Some of them will be good enough to fool at least one of your employees. Now you have a problem.

      They're not rare.

  • by NFN_NLN ( 633283 ) on Sunday November 21, 2021 @03:40PM (#62007939)

    Where Are All the Jobs Preventing Zero-Day Exploits?

    So you want to be a fall guy? Your sole purpose is to get fired from a company when a zero-day exploit impacts code you didn't write from a software vendor you don't work for and software that you probably didn't even deploy?

    Hire a security expert. Pull every interface from being exposed to the internet that you can. Provide access only to people that need it. Patch systems for vulnerabilities.

  • Seriously, why did this get posted? Most (all?) exploits are zero-day when they're first discovered. Is this proposed "zero-day job" about discovering vulnerabilities in a product, or is it about scouring the Internet / dark web to discover your product has a zero-day exploit? Both of these jobs exist today and neither need "zero-day" in their title.
    • by ls671 ( 1122017 )

      Don't worry, a lot of zero-day jobs exist but since they are zero-day, nobody ever heard of them. So you are right in the end I guess; it's a silly question.

      • Don't worry, a lot of zero-day jobs exist but since they are zero-day, nobody ever heard of them. So you are right in the end I guess; it's a silly question.

        Like a job organizing Coke Zero Day, where employees play games, eat pizza and drink diet soda ...

    • In addition, there's zero money in preventing zero-day exploits.

      Why? Because there's no way you can prove that what you changed prevented a zero-day. It's less buggy, more secure code now, for sure, but maybe nobody would have ever exploited that bit of code.

      So you can't put a dollar amount on that sort of work, just like you can't really put a dollar amount on producing less buggy, more secure code. We all want to put out what we can, but deadlines and budgets are a real thing, and there's no way to say, "

      • In addition, there's zero money in preventing zero-day exploits.

        It's not much money, but there are quite a number of people that teach how to do secure coding. Surely that's money that's going into preventing zero days?

    • by jabuzz ( 182671 )

      Yes, but most exploits are responsibly disclosed to the software vendor, and patches exist.

      The vast majority of systems that get exploited are not patched against known vulnerabilities. If you are an organization today, the most cost-effective thing you can do is not protect against some rare zero-day exploit but make sure your systems are patched in a timely manner.

      Most exploitation of systems is opportunistic. Therefore at the moment the best strategy for most people is to make themselves less attractive

  • by tmshort ( 1097127 ) on Sunday November 21, 2021 @03:50PM (#62007967)

    No job refers to "zero-day" exploits. You want to look for security researcher, infosec, secure coding, SRE, DevOps, SecOps, etc.

    Companies have whole departments whose job it is to secure personal and payment data, pen test, and build, test, and deploy software at a moment's notice.

    • And pen testing

    • There is a trivial solution to the cyber security problem.

      If you stop everyone from doing anything, so that no productive development happens and nothing actually gets shipped, there will be no vulnerability found in your product.

      Lots of cyber security departments tend toward this trivial solution: create so much cruft that not much gets done, not much gets shipped, and the attack surface is greatly diminished.

    • by Anonymous Coward

      Yeah, this reminds me of my first job, when I worked with a fellow maths graduate in IT. He said to me, "I really want a maths job, but when I search for maths jobs nothing comes up." I had to point out to him that no one hires for a "maths job"; they hire statisticians, they hire analysts, engineers, teachers.

      You don't go into a "maths job", you go into a job that requires maths. For the same reason, as you say, you don't go into a "zero day job", you go into a security researcher job or whatever.

      I'd wager peopl

  • They call it Project Zero [wikipedia.org].

    Sadly most of the jobs around zero days appear to be finding them so that you can take advantage of them.

  • Who is going to pay you to waste months on wild goose chases? I mean, it ought to be done... but at the same time, who has the perceived extra cash to fund it?

    • It should be thought of as insurance, not as a revenue-generating activity. I mean, a lot more people could afford cars if it wasn't for that fucking auto insurance shit.
    • Observation:

      Capitalism (at least this incarnation of it) can't fulfill the economic charge of balancing unlimited wants with limited resources unless there is a profit potential.

      Even if you make the argument that this would be beneficial overall, if there is no means to profit from it (or even make a halfassed attempt at valuation, like NFTs), it won't be done.

  • We don't have enough people to hunt for elusive zero days. I have about half the people I need to sensibly do the penetration tests I should do, as mandated by law and contracts. So what we do is to rush through the tests and do the absolute minimum we need to do to be compliant.

    • by gweihir ( 88907 )

      We don't have enough people to hunt for elusive zero days. I have about half the people I need to sensibly do the penetration tests I should do, as mandated by law and contracts. So what we do is to rush through the tests and do the absolute minimum we need to do to be compliant.

      You cannot really "hunt for zero-days" economically as a defender. It is _more_ expensive than re-writing the software with people who really understand secure coding. What you do instead is security by architecture, design, and implementation. It is well understood how to do this; it is just that neither cheap developers nor developers hobbled by too-small budgets can do it.

      Also, do not forget that finding zero days is not the objective, unless you are an attacker. For a defender, the objective is to make sure th

      • You cannot really "hunt for zero-days" economically as a defender.

        Also, do not forget that finding zero days is not the objective, unless you are an attacker. For a defender, the objective is to make sure there are no zero-days in the code, and that is much, much harder to determine than just finding some of them.

        It's obviously profitable to the attacker. Presumably it's usually at least as valuable to the defender to find them first. The defender also has the advantage of being able to look at the source code so presumably should be able to find them faster and cheaper than an attacker who has to blindly look for them. The only advantage the attacker has is the attacker doesn't care which company it finds a weakness in. It should be relatively cheap for a company to have a qualified team looking for zero day bu

        • by gweihir ( 88907 )

          You cannot really "hunt for zero-days" economically as a defender.

          Also, do not forget that finding zero days is not the objective, unless you are an attacker. For a defender, the objective is to make sure there are no zero-days in the code, and that is much, much harder to determine than just finding some of them.

          It's obviously profitable to the attacker. Presumably it's usually at least as valuable to the defender to find them first.

          No. The attacker needs to find one. The defender needs to find all of them. That makes it very much not worthwhile for the defender.

  • Security is an afterthought, buy cyber insurance.

    Companies are lucky to get the new products out the door with half the features that were promised.
    Why are they going to waste a high-end engineer on hunting/fixing bugs?

    Instead just buy cyber liability insurance and make everyone click a box that reads, "You don't own this software and if it has problems, don't blame us."

    Thus I quote the Microsoft EULA:

    "SUPPORT SERVICES. Microsoft is not obligated under this agreement to provide any support services for the software. Any support provided is “as is”, “with all faults”, and without warranty of any kind."

    Unless you live in Germany or Austria

    i. Warranty. The properly licensed software will perform substantially as described in any Microsoft materials that accompany the software. However, Microsoft gives no contractual guarantee in relation to the licensed software.
    ii. Limitation of Liability. In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law.
    Subject to the foregoing clause ii., Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence.

  • There's no such job; it's too specific. It's like saying that at some point during my work I push a red button, so I'm sure there are other jobs out there which involve red buttons, but where are they? I can't find anything when I search "red button pusher".

    Dealing with zero-day exploits is part of the larger information security field. Type that into your search thingy instead; I'm sure you'll find jobs that involve, among many other security-related activities, dealing with zero-day exploits.

  • ... behind blacklist-first encrypted open-id-access-controlled interfaces.

    Obviously.

  • by Aighearach ( 97333 ) on Sunday November 21, 2021 @04:47PM (#62008123)

    If you can't even figure out a job title and search for that, how are you going to find interesting security problems that wouldn't have been found by a "virus scanner?"

    Go to school, get a degree. Explain this failed search you did to a career counselor. They can assist you in ways that slashdot cannot. (And wouldn't)

  • It's not like there's a ton of man-hours in it for these people once they have developed their fuzzers and know-how. 3K is generally more than enough to make it worth their while; you can't sell all exploits on the "grey" markets (i.e. selling by proxy to murderers working for kosher governments instead of to other criminals), and few want to be outright criminals.

  • Wrong question (Score:5, Insightful)

    by gweihir ( 88907 ) on Sunday November 21, 2021 @04:51PM (#62008137)

    The right question is "Where are all the jobs for developers with real skills in secure coding that also pay well and treat the coders well?" The answer to that is that there are hardly any.

    You do not prevent zero days as a separate skill. That is bullshit. What you do is put in redundancy on the security side at the architecture, design, and implementation/testing levels. Examples are minimal privilege, input validation, privilege separation, secure containers, marking & tagging techniques, use of security scanners, security testing, code reviews, and a suspicious, careful mind that keeps updated on attack techniques and, most importantly, understands KISS and deeply respects it.

    The problem is that these coders are much more expensive. That does not make the _project_ more expensive, as the people with these skills are also much more productive and cause minimal clean-up effort later. But the meta-problem here is that many "managers" both desire as many underlings as possible and definitely want underlings who earn much less than they do. Add to that a skill-set on the "manager" side that is still completely clueless about quality software engineering, almost 50 years after "The Mythical Man-Month" came out. Especially that last part is pretty much unforgivable.
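
    As one concrete instance of the minimal-privilege item in that list, a POSIX-only Python sketch (drop_privileges is a made-up helper; in practice daemons often get this from their process supervisor instead):

        import os
        import pwd

        def drop_privileges(username: str = "nobody") -> None:
            # After the one action that needs root (say, binding port 443),
            # shed root permanently so a later exploit gains less.
            if os.getuid() != 0:
                return  # already unprivileged
            target = pwd.getpwnam(username)
            os.setgroups([])          # drop supplementary groups first
            os.setgid(target.pw_gid)  # then the group id
            os.setuid(target.pw_uid)  # finally the user id; order matters
            try:
                os.setuid(0)          # verify the drop is irreversible
            except PermissionError:
                return                # expected: root cannot be regained
            raise RuntimeError("failed to drop privileges")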

    • +100 on that

      That does not make the _project_ more expensive, as the people with these skills are also much more productive and cause minimal clean-up effort later.

      Overall, in the end you are right. However, in the short term it probably takes longer to deliver the first version, which can make it very difficult for the manager to get funding and mean that competing projects may win and push out the properly done project. There's even a legitimate understanding of this in implementing the worse is better [wikipedia.org] model, which doesn't have to lead to insecure software but almost inevitably will.

      • by gweihir ( 88907 )

        It does not have to delay first delivery much, unless you are comparing to utter crap. If you do good prototyping and then deploy the prototype with ample warnings and an assured second version not too much later, that can work very well, given people with the right skills. Of course, if you do it with the mediocre or incompetent, or deliver that real version much later or not at all, you arrive exactly at the mess we currently observe everywhere.

        The problem with the managers getting funding is incompetence with regards

    • Not just "Mythical Man-Month", most software manglers haven't read Boehm's "Software Engineering Economics" to understand the (exponential) cost of fixing bugs discovered late in the process. That's another way that good design and solid code saves money in the long run. What I observed in large scale system integration was that a well-designed system with clean managed interfaces can reduce the cost of integration bugs, by making the fix local and therefore easier to verify. That being said, when we fou

      • by gweihir ( 88907 )

        Sure. And other relevant texts. Boehm was just six years later, hence I referenced the MMM. Of course, others have refined this and added to it later.

        My guess is that this half-assed dilettantism we observe everywhere in software creation these days will only go away if it gets regulated like most other engineering has long since been, because with that, real _personal_ penalties for screwing up due to lack of qualification become a thing. What a sad state of affairs. The "tragedy of the commons" all over again. Seems to

    • by endus ( 698588 )

      There are plenty of application security vendors whose entire service is identifying security issues in code, though. Agree that secure coding practices need to be more widely taught and adopted, but the jobs helping companies with this stuff are definitely there.

      Performing application security testing and remediating the results is also a nonnegotiable requirement for doing business with any customer who has an even slightly-good third party risk program. You can forget about selling any internet facing

  • by cjonslashdot ( 904508 ) on Sunday November 21, 2021 @05:22PM (#62008239)
    There won't be any until companies are legally liable for at least some of the losses that people incur when a product gets hacked.
  • by S_Stout ( 2725099 ) on Sunday November 21, 2021 @05:51PM (#62008341)
    Most companies have them. They request software be run to analyze threats because they can't do anything on their own. They barely have any understanding of what the engineering team is doing. They're mostly useless, slow down processes trying to look important, and if shit hits the fan, they will blame everyone else.
    • by endus ( 698588 )

      It's not just software but services too. You can outsource your manual application level pen testing and secure code review.

      I can't argue with a straight face that management has any idea what the engineering team is doing, but there are other reasons that this gets outsourced.

      One is assurance. Any customer with an even slightly decent third party risk program is going to be looking for some level of reporting from a vendor's application security program. Outsourcing means that the people doing the testi

  • If you are dumb enough to be searching for "Zero Day" in a job engine to find a job to patch these issues then chances are you are one of the fuck knuckles that creates these issues in the first place and you certainly aren't fucking qualified to work on these issues.
  • Sadly, most of those people don't know how to program, so they won't know how to trigger the sophisticated conditions that zero-day exploits often rely on. Basically you need a developer's knowledge, but a QA person's drive to find and exploit issues.

  • by Zero__Kelvin ( 151819 ) on Sunday November 21, 2021 @09:07PM (#62008797) Homepage
    Zero-day exploits are no different from any other exploit. If your job is to help ensure security, then your job is to find as many vulnerabilities as you can before someone else, usually with malicious intent, does. Try googling it and thinking about it for a minute, after which you should have much more success finding the job you seek.
    • by jabuzz ( 182671 )

      Yes, they are different. Zero day means that you don't know in advance that there is a problem. The *vast* majority of security exploits are of known vulnerabilities that have fixes but for whatever reason have not been patched.

      • Holy fuck I hope you don't work in software development. You are a complete fucking moron. Someone finds a vulnerability. If they are a white hat it gets reported, then others exploit it between the time it gets announced and the time it gets patched. If they are a black hat then they exploit it until someone else also finds it, reports it, and it gets patched. The difference is in who finds it, and that is literally the only difference.
        • by jabuzz ( 182671 )

          I don't work in software development. I actually work in system administration, and a very significant part of my job is protecting against exploits. In this context there is a huge difference between an exploit that there is a patch for and a zero-day exploit for which there is no patch.

          If there is a patch, I can apply it in a timely manner. For example, if the vulnerability gives an attacker remote root, that would be right now, unscheduled downtime be damned.

          If it is zero day then there is no patch and the best I can possib

            • If you're curious, it means you are going to learn something now. You can mitigate against zero days by learning about computer security and practicing solid layered protections. Go through and read the CVEs sometime. You will notice a set of conditions that must be true for the exploit to work. In general, at least one of those will be that the system administrator(s) have failed to implement one of the mitigations. For example, take your standard "attacker must have proximity" exploit. The solution isn
            • by jabuzz ( 182671 )

              I know you can mitigate against zero days; you cannot, however, absolutely prevent them in all cases, because not all zero-day exploits have a set of mitigations that you can apply.

              No internet-connected system that I have been responsible for maintaining has ever been exploited, to my knowledge, in over a quarter of a century.

              So stop trying to teach grandma to suck eggs and accept that zero day is different and that you can't, in general, protect against them.

  • Look for jobs as "Red Team" or "Blue Team" and you might come up with something.

  • It's obviously all a scam, and they are not spending money to fix these problems. The same is true in many industries; for example, when I search for "car fireball" or "computer electrocution" I also find nothing in the job ads. Obviously those industries are all BS when it comes to safety and are just leaving it up to enthusiasts to fix car and computer manufacturing problems.
  • Given the widespread understanding that the most devastating type of road accident is a frontal crash, why is it that when I search for "frontal crash" on Australia's most popular job search engine no "real" job comes up?

  • MICROS~1 Windows strikes again :]
  • There are a whole bunch of application security vendors out there whose entire job it is to identify bugs in software through pen testing, secure code review, etc. Some companies have internal teams doing this, lots of others pay a vendor for the expertise as well as for the fact that they are an independent entity - independence provides a layer of assurance over the results and helps deal with internal political issues. Companies invest heavily in this and it is a nonnegotiable requirement if you want t

  • TL;DR: There's not a spot in the code which says "Zero-day exploit here", and often the problem is HARD, like even an experienced developer who has seen an exploit can't believe the code has a problem.

    Story time. Due to seniority, I often was doing code reviews for more-junior people making changes in various security-sensitive code. Often, the changes were definitely the right kind of changes, but had "code smell" issues. By this I mean that my experience told me that the kind of approach they were usin

  • Job security - there are some careers where if you fix things, it looks like you are doing nothing.

    Unfortunately, the best form of job security is sometimes having everything look really difficult, so the suits think, "Wow, this person is almost getting rid of our problems; his work must be really difficult, like he tells us."
