Technology

Distributed Computing and the Human Genome Project

I'm sure most of you have heard about the Human Genome Project by now and how it is working to map our DNA. Apparently there is now a race going on, with corporations performing similar experiments, except with the intent of patenting the results. Now troc is wondering if another distributed computing effort might be in order. What do you all think? Click below for troc's actual question.

troc asks: "I was watching a TV programme on UK TV last night about the Human Genome Project and how there was a race to sequence and publish the whole thing before the private companies do it and patent the sequences. Basically lasers are used to break up the strands, these are then read and fed into a computer that tries to match the bits up with other bits like a giant jigsaw puzzle. This requires a lot of computing time.

Is this an opportunity for the open source movement to help decode the sequences and publish the whole thing before it's patented?

<soapbox>

I, for one, don't like the idea of a private company owning my gene sequences. They will be able to limit the use of these so only really rich pharmaceutical companies will be able to develop drugs etc and then sell them at huge profits, which isn't really for the benefit of mankind blah blah blah.

</soapbox>"

I agree. I don't see how information like this can be patented. There is nothing truly proprietary about it, and it would do more good in the public where the benefit can truly be felt.
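For readers curious what the "giant jigsaw puzzle" step troc describes actually looks like computationally, here is a minimal, hypothetical Python sketch of greedy overlap assembly: repeatedly glue together the two fragments that share the longest suffix/prefix overlap. The fragment strings and the `min_len` cutoff are invented for illustration; real assemblers must also cope with sequencing errors and repeated regions, which this toy version ignores.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b` (at least min_len)."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap,
    like fitting the two best-matching jigsaw pieces together first."""
    frags = list(fragments)
    while len(frags) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_i is None:          # no overlaps left: give up and concatenate
            return ''.join(frags)
        merged = frags[best_i] + frags[best_j][best_len:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return frags[0]

# e.g. fragments of the (made-up) sequence "ACGTACGGA":
# greedy_assemble(["ACGTAC", "TACGG", "CGGA"]) reconstructs "ACGTACGGA"
```

The all-pairs overlap scan is what eats the CPU time: it grows quadratically with the number of fragments, which is why people immediately think of distributing it.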

  • Are they really worth the effort? RC5 and SETI are both successful, but both of them require a permanent connection to the net (in essence) in order to get the best updates etc. (if you get what I mean). As this was a UK-based thing, why not send the whole lot around on a CD?
  • by SEAL ( 88488 ) on Monday November 29, 1999 @01:00AM (#1498031)
    Patents, in general, have really taken a nose dive since the personal computer achieved widespread use. The original intent of a patent was to allow an inventor to come up with an idea and protect it for a period of time. Whether he profits from it or sits on it is then up to that inventor.

    However, with the computer age, the speed of (dare I say) innovation has been astounding. This has produced two detrimental effects. First, the patent examiners simply don't have the niche expertise to scrutinize patents. I'm sure most of us have seen some of the idiotic patents out there. Second, the time span of a patent has become too cumbersome. By the time the patent expires, the invention is often useless.

    I sincerely hope that this particular project will be placed under a HUGE spotlight when the patent requests inevitably filter in. I have a feeling it won't hold up, and at the very least, not in some countries.

    However, keep in mind that this is scientific information about a human being, not software / computer advances. In that regard, a patent will be cumbersome, but not quashing. The patent (if granted) WILL expire someday. And I'm fairly certain that the information will still be very important and valuable when that day arrives.

    Of course I'm all for beating the would-be patenters to the punch, if possible.

    Best regards,

    SEAL

  • More importantly - will there be money on offer?

    :)
  • It's just as if the first man who took a photograph had said :

    Now this is MY moon. You can't take photographs of it anymore.

    But I wonder in which countries such patents could be valid. For instance, in Europe we are having a discussion about the possibility of patenting algorithms, which was not possible in the EU until now. I hope ONLY you Americans are allowing companies to patent a thing like our genome.
  • I would like to see another distributed project on this. If it gets the same publicity as the SETI program it could have wonderful results. The only problem I could see is that the community's distributed-computing resources are already stretched to the bone. I would say this takes precedence, eh? I don't want no stinkin' corporation owning my genes!

    Another thing: you don't need a permanent connection to run distributed. I run SETI@home on a computer with a dialup that goes on about once a week. It works wonders! Cheers :)
  • by reve ( 59221 ) on Monday November 29, 1999 @01:05AM (#1498036)
    Okay, before everyone hops on this really popular anti-patent train, let's make sure we note that the sequences can't be patented. Yes, independent companies are gonna beat out the Human Genome Project and have been filing patents. But the patents aren't on the sequences themselves, they're on applications. Whether these applications have to do with more efficient methods of genome-unraveling or with specific uses of the patterns they've found, it's NOT the actual sequences.

    In a number of countries it's already quite specifically illegal to attempt to put intellectual property restraints on anything involving human genes. The US is considering some laws as well, but let's just get all the facts straight before panicking, okay?

  • by ghoti ( 60903 ) on Monday November 29, 1999 @01:08AM (#1498037) Homepage
    Well I don't think anybody will say "No, let's not do it, let the big bad corps patent our genes!!".

    The only problem I see here is that developing a distributed client for this takes a lot of time and effort --- and it's one which definitely cannot be open source!

    Two reasons:

    • False results. If the data format etc. are known, it's possible to feed the servers bogus results, which could lead to inconsistencies in the data base. This might even destroy results that are already there (okay, this problem also exists with closed source stuff like SETI@Home, I know).
    • Data Theft. An open source program could be modified by Big Bad Corporation Inc. to simply harvest raw data and feed it into their own computers, thereby gaining information they would otherwise have to find themselves. Granted, they won't have as much computing power, but when they have their own and the stolen data, they're still saving time. And I am not sure if enough data is produced to keep hundreds of thousands of computers occupied (see the problems SETI@Home had in the beginning).

    So, sorry, folks, but I believe this is one of the few things that open source clearly is not suited for. But it would be kinda cool to have a proggy running on my machine that messed with genes ... ;-)

  • Hmmm, does anyone else think God (or Allah, or Odin, or the Great Bananarama, or whoever your supreme being is) will have a problem with these big companies patenting His invention?

  • by Anonymous Coward
    How on earth would you justify patenting a gene?
    It has been around for a _very_ long time. You may as well patent some newly discovered subatomic particle and charge everyone who uses it. Or maybe an even better comparison would be a patent on a microscope vs. patenting everything you can see through it. This is insane.

    Ciao, Peter (still without ./ Password :)



  • by Anonymous Coward
    From what I understand the patents apply to potential usage of the sequence rather than just the sequence itself. Unlike the systematic sequencing approach employed by the genome project the "grab and patent" companies often target potentially interesting genes (eg, receptors, particular classes of enzymes) by "fishing methods" such as degenerate PCR of EST libraries.

    Patents are a sore point in molecular biology if the companies choose to prosecute those that appear to infringe them. The classic case is the patenting of PCR by Cetus/Perkin Elmer/Roche as the companies made open threats against academic institutions. This was especially sad as the PCR patent was extremely shaky as there is clear evidence of prior art.

  • Even if we did have a distributed effort and made advances, someone would still have to patent the discovery.

    As we have seen with Y2K fixes and other things, making a discovery does not stop someone else patenting the idea.

    An open source body would have to be set up to patent the discoveries just so that nobody else could patent them.

    This body could declare its patents open for use.

    There are a lot of legal issues here - if you open your patent too much, could you lose it?

    Patent law is also a case of boilerplating your patent - you have to ensure that every option is covered and also included in the patent.

    This sort of thing is costly, and this is why a lot of companies patent their ideas. Once they have the patent they recoup their investment, and then some.

    If an open source patent body is set up, a lot of time will have to be spent considering patent administration and the costs involved.
  • I live in Iceland, and here there is the company deCODE Genetics. They are building a huge database with the medical history of every Icelander in it, to be able to trace "bad genes".

    The funny thing is, they're a privately owned company and still they are entitled to go through all your medical records at will and put them in a database.

    Sure, they say it'll be secure, but what if they start selling info on you to insurance companies?
    Imagine this:
    you: Hi, I'm (some name) and I'd like a life insurance policy.
    insurance rep.: Well... I'm sorry... it's gonna cost you (insert obscene amount here) because your family has a record of heart failures.

    These are just my thoughts... check it out for yourself. I think this has made it into most news media in Europe and America; also check out www.ie.is

    ---
  • I was privileged enough to actually speak with one of the NIH (National Institutes of Health) scientists working on the project earlier this year. He came to speak at our school's Medical Society. Being the geek that I am, I made sure to inquire as to the Y2K compliance of the computers used for analysis and data storage; alas, he wasn't involved in that aspect. ;-) He said he "thought they were", though.

    If I remember correctly, and there have been no delays, it's supposed to be finished before 2002.

    I tried to tape the whole question and answer session with my microcassette recorder, to put on my webpage (in RealAudio format), but he was against it. Oh well. (I would have tried to sneak it anyway from the back of the room, but my recorder has a crappy mic, so I wouldn't have gotten much by doing so.)

    The whole concept is very cool... imagine being able to prevent disease on a genetic level...

    Does anyone have any information on the computing systems being used? Come on, there have to be a few NIHers reading /.! ;-)

    This is slightly off-topic, but has anyone else heard about this "Soul Catcher" project, which I think is based mainly in the UK? (Based on the concept of recording an entire human consciousness to a traditional physical medium, if I remember correctly.)
  • False results can be handled easily: just submit the packet to two different places (or to 1.5 places on average), and if they disagree, the system checks the packet itself (or hands it over to a third machine). Yes, it'll slow things down, but I can't see any other viable alternative..

    Data theft.. Isn't the idea that the data is already there, but it needs to be processed? There's no point in data theft then. Also, the system could look out for domains or IP address spaces that keep eating up the data space faster than anyone else and blackhole them.. Or sue them :).
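The redundancy scheme this poster describes - send duplicate work units out, compare the answers, and escalate on disagreement - can be sketched in a few lines of Python. This is a hypothetical illustration, not how any real project's server worked; the function names and the redundancy factor are invented.

```python
import random
from collections import Counter

def assign(unit_id, clients, redundancy=2):
    """Hypothetical scheduler: send one work unit to `redundancy` distinct clients."""
    return random.sample(clients, redundancy)

def validate(results):
    """Accept a result only when a strict majority of the returned copies agree;
    otherwise return None so the unit can be handed to yet another machine."""
    counts = Counter(results)
    answer, votes = counts.most_common(1)[0]
    return answer if votes > len(results) // 2 else None
```

With a redundancy of 2, any disagreement yields None and triggers the third check the poster suggests; the cost is that every unit is computed at least twice.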
  • by _Marvin_ ( 114749 ) on Monday November 29, 1999 @01:27AM (#1498046)
    Of course the sequences themselves can't be patented. Otherwise anyone holding such a patent would (AFAIK) be entitled to control the reproduction of the sequences; that is, since we are constantly reproducing them in our bodies, he could charge us for letting us live... Now, that would make patent law too obviously a satire.
    Still (again AFAIK, correct me if I'm wrong), patents on gene sequences (that is, on their applications) have a new quality: they do not cover only applications that the patent holder has thought of, they cover all applications that become possible only if you know that gene sequence.
    If I remember correctly, there are already cases where companies hold patents on certain proteins in our bodies (again, not the proteins themselves but any of their applications) and you are not allowed to TEST for these substances without paying them license fees, even if you're using a completely new testing method you developed on your own.
  • You heard it here first -- intellectual-property idealists will revive a grand tradition by copying their genes without a patent license. Someone will print "Information wants to be free love" on a black T-shirt, and all around the world, geeks will go out into the streets and protest WIPO and the genome barons by having sex. With themselves, mostly, but hey -- it's the thought that counts.
  • by Anonymous Coward

    Since when did they start allowing software patents here?

    If they are indeed allowing them then they are restricting my freedom of expression. My programs are my art (I don't and won't write them for money). Patenting software is like patenting the golden ratio in paintings.

    I don't know if you've ever really considered this, but not all software is about money; in my case it's mostly about creativity and art (all non-profit - well, except intellectual profit). In case you want to know, I mostly write sound synthesis and processing software, and the field is very heavily patented. What artist makes paintings and doesn't share them with other people? Imagine if you couldn't show a painting to other people if you painted it with a certain brush unless you paid license fees. This is what software patents are to me and probably many others. They need to be stopped now (or at least make non-profit use legal)!

    AC
  • by Kingpin ( 40003 ) on Monday November 29, 1999 @01:30AM (#1498049) Homepage

    All this could be done so much more easily. Use applets - people do not have to understand anything at all in order to help out on a project like this. No need to install obscure clients and what have you. I think the only good use of applets is for easy distributed computing.

  • by Lars Arvestad ( 5049 ) on Monday November 29, 1999 @01:31AM (#1498050) Homepage Journal

    Data Theft. An open source program could be modified by Big Bad Corporation Inc. to simply harvest raw data
    and feed it into their own computers, thereby gaining information they would otherwise have to find themselves. Granted, they won't have as much computing power, but when they have their own and the stolen data, they're still saving time. And I am not sure if enough data is produced to keep hundreds of thousands of computers occupied (see the problems SETI@Home had in the beginning).

    The Human Genome Project is extremely open. They try to make all data public as soon as possible, making patents impossible. So data theft is not an issue here.

    False results might be a problem, but I would expect it to be relatively cheap (computationally speaking) to check a solution to see if it is valid.

    A distributed (open source) effort will probably not happen, because a computation like this is more difficult to distribute than trying crypto keys etc.

    Lars

    --
  • by ewanb ( 18483 ) on Monday November 29, 1999 @01:34AM (#1498052) Homepage
    There are some good open source genome projects for doing this efficiently - and we do welcome help of any kind. Here are some open source projects which I know about/work on:

    • ensembl [ebi.ac.uk] is an open source genome project designed to get as much data and software into the public domain as possible
    • EMBOSS [sanger.ac.uk]
    • bioperl [perl.org]
    All these are well-backed, strong open source projects with different strengths. Every time genome stuff comes up on Slashdot I try to point these things out to people, but everything gets lost in the noise about people $%!"'ing on about patents (generally without a lot of knowledge!).

    Anyway - check out these projects for more information about real open source efforts in biology.

  • > False results. If the data format etc. are known, it's possible to feed the servers bogus results, which could lead to
    inconsistencies in the data base

    Send the same data to multiple receivers (randomly chosen), and see if they produce the same results (or, at least, coherent ones). If not, one (or possibly more) is lying. Anyway, a closed-source client does not prevent someone from seeing what it does and sending bogus data anyway. It only makes things harder for the ones that actually want to send correct data.

    > Data Theft. An open source program could be modified by Big Bad Corporation Inc. to simply harvest raw data and feed it
    into their own computers

    This is a more realistic issue, but Big Bad Corporation is probably rich enough to do reverse engineering of the protocol by itself, and access random lumps of raw data anyway. A closed-source client doesn't make much sense here.

    The real point is that modified versions (i.e. to improve performance) could quickly spread, so that only a few people use the original clients.

    If it suddenly turns out that a widespread modified version produces erroneous data from time to time, then a large amount of computation will probably have to be thrown away. Of course, you could check for that using the same method you use to check for "bad guys", but it's a serious problem if you have only a few people running the original.

    My 0.02 Euro as usual.
  • I find the idea that someone can patent knowledge of human DNA sickening. That someone can control the use of my own DNA is horrible. Patents have gone mad; this is just sad.
    They cannot control the use of your DNA - you would still be quite able to pass on your DNA (or merge it) by having children.

    What other use do you have for your DNA?

    The Drug Empires are just looking to guarantee a return on their investments - it sucks, but that's Big Business. I hope the HGP beat them to the key genes/patents.

  • I am all for beating THE patenters.
  • The whole genome sequencing patent race has me mightily pissed off. But as someone mentioned a couple of weeks ago - maybe it's going to take something as ridiculous as this to bring the entire patent-granting operation to its knees.

    I think the metaphor used was of a guy walking into a forest and patenting every different type of tree he came upon. Another favorite is the Amazon.com "one-click" patent. That's like Henry Ford whipping up a nice car and patenting the tires!

    -not responsible for spelling errors-


    That's what I love about them high-school girls. I get older, they stay the same age... yes they do.
    --Wooderson 1976
  • Your T-shirt is in the mail.

  • by jw3 ( 99683 ) on Monday November 29, 1999 @01:53AM (#1498059) Homepage
    Hello, my name is January, and the group in which I am doing my Ph.D. thesis sequenced a bacterial genome in 1996 (Mycoplasma pneumoniae [uni-heidelberg.de]). Since we are into genomics, transcriptomics and all the other -omics, I know at least a little about the way it works - although on a much smaller scale.

    First issue: could distributed computing help? My answer is a brief "no". First, the bottleneck is on the experimental side - getting the sequences, and not putting them all together. Second, although you need quite a lot of computing power to do so, much of the job must be revised and checked by humans, i.e. there is a lot of skilled manual work to do - you have to have "an eye" for the sequences. But the first point is more important.

    Now, TIGR [tigr.org], the commercial alternative to the Human Genome Project, has sequenced more organisms than any other scientific group in the world. J. Craig Venter seems to be a very efficient and hard-working guy. Even if you don't like the idea of making money with patents in this area, the scientific community owes him a lot - he was the one to sequence the first organism, to sequence Helicobacter pylori and many, many others. On the other side... you know, when the M. pneumoniae sequence was about to be published, it was supposed to be the first Mycoplasma sequence. But Venter was faster with Mycoplasma genitalium - and he kept it quiet, so no one involved in sequencing those organisms actually knew there was a race. Now Venter claims to be able to complete the human genome with much less effort, much less $$, and considerably faster than the HGP. I'm not sure whether he is able to do so or not, because it depends chiefly on the "hardware" side - the new Perkin Elmer automated sequencers they are supposed to use.

    Anyway, the question is whether it is good or bad if Venter sequences the human genome. In my opinion - it's OK. The HGP is somewhat different in its purely scientific interest, and I'm convinced that they will produce data of much higher quality. On the other hand, the human genome has considerable variation, so two genomes are better than one. I would not be very concerned about the patent issue, because it will come anyway (because of **!'*%$! American and international patent law) - even if TIGR did not sequence the genome, someone would take the output of the HGP and patent the same sequences Venter would. Venter just wants to gain a little time for evaluating the sequence before releasing it to the public.

    And of course, it's not the _sequences_ that are patented - what is patented is the use of a modification of a certain sequence for medical purposes, or a certain enzyme as a target in medical treatment.

    Regards,

    January

  • I actually submitted this a few weeks ago, but with the huge amount of submissions, things tend to take a while to filter through the system :)

    I've had some email from Ewan Birney at ensembl [ebi.ac.uk] about doing this, but it seems they lack experience in client coding! I personally know nothing about that at all; I'm a bender of metals and I can just about write HTML on a good day. If anyone has any help to offer, you could visit their webpage....... I've not added his email address in case he's paranoid, but I can forward stuff to him :)

    Cheers

    Troc
  • by Skinka ( 15767 )
    I might be stating the obvious, but this really depends on how much bandwidth is needed; call it some kind of "IO/MIPS ratio". Three kilobytes' worth of keyblocks from distributed.net will keep my computer occupied for two or three days. SETI@home, I've understood, needs a lot more bandwidth, something like 100KB/day depending on the CPU (I've never tried SETI, correct me if I'm way off).

    I have no idea how much IO these DNA-strand calculations need, but I would be more than happy to ditch d.net and donate my spare CPU time to this project if it is feasible.


  • Hmmm, does anyone else think God (or Allah, or Odin, or the Great Bananarama, or whoever your supreme being is) will have a problem with these big companies patenting His invention?

    Yes, he does. Unfortunately the Other Guy has all the lawyers.... :-)

    dylan_-


    --

  • by counsell ( 4057 ) on Monday November 29, 1999 @02:05AM (#1498064) Homepage

    It's good that hackers are well-informed and principled enough to think it matters. This happens to be my area of interest; I'm responsible for Bioinformatics at the Institute of Cancer Research in the UK. A couple of weeks back I went to an excellent talk by a clever guy called Ewan Birney from the Sanger Centre [sanger.ac.uk] near Cambridge, UK. He is writing code to catalogue and annotate the assembled sequences in real time as they come off the mammoth robot sequencing "production line". In one of those rare occasions where the British are leading a "big science" project, the Centre has been responsible for the largest fraction of the Human Genome sequenced at any single institute. The code does stuff like figure out which bits of the sequence are real genes and which bits are that 90%+ of so-called "junk DNA" you might have heard of, and also attempts to assign provisional functions to the genes by various computational means. Eventually people in white coats will have to confirm such assignments properly, but it's important to beat the drug companies to making good guesses.

    Ewan's code and all the data are entirely Open Source. If you've got a good reason and a reasonable Pentium with lots of memory and a 30Gb hard disk you could mirror the human genome and get it updated every night. (I feel strange just typing that sentence and I've been following this story for years). The Wellcome Trust and others (including US and European government agencies) funding the project are keeping everything Open because that's the way science is done and because this will subvert commercial attempts to stake a claim on our species' genetic heritage. (Er, go Wellcome!)

    Biochemists often talk about the "rate limiting step" in a reaction---the single point which sets the speed of the whole process---like a bottleneck. As far as I understood Ewan's talk (if you're reading this Ewan, please put me right), the rate-limiting step with the Genome Project isn't the assembly of the sequenced stretches of DNA (or "contigs") as the original poster suggests, but the collection of the data in the first place. At the Sanger they have clusters of PCs and Alphas crunching the contigs---distributing the effort would give us all a warm fuzzy feeling, but wouldn't be essential. Again, I may be wrong about this.

    One thing that definitely is a priority is making some sense out of all of this information. What would be great would be if members of the global community of hackers started taking molecular biology and biochemistry classes so they could write code to help people like me make sense of the embarrassment of riches that the project is creating. I'm off to Cambridge in two weeks to the Bioinformatics Open Software Development [mrc.ac.uk] meeting to listen to some project leaders talk and discuss the existing efforts. Personally, I would love to give crash courses in biology to programmers with time on their hands in an effort to harness their collective genius rather than sponsor an effort to write a contig-crunching client to harness their collective spare cycles, but I have no idea how such a thing could be organised. Any ideas?

  • by Lars Arvestad ( 5049 ) on Monday November 29, 1999 @02:06AM (#1498065) Homepage Journal
    Common successful distributed projects in cryptography rely on the fact that all you need on a client is the algorithm and a few keys to try. Therefore, clients are really cheap (resourcewise) to distribute and use.

    In the case of the Human Genome Project, the situation is somewhat different. A well known analogy is the following: Take a few copies of a newspaper. Feed it through a shredder. Remove a handful or two of paper. Insert errors. Now, piece together one copy of the original newspaper.

    In order to make a useful contribution, a client is going to need a lot of data. This means that it will be difficult to distribute (long downloading times for instance) and that few people will appreciate having the client on the machine because the client will be using a lot of memory and the machine might be a bit unresponsive (your HGP screensaver might flush all your apps to disk for instance).


    Lars

    --
  • Plus there is the double dilemma of having the company send bogus results, sabotaging the project, and also using the program to their advantage to add to their databases.
  • Bioinformatics generally has a very good cycles-to-data ratio - i.e., we have algorithms that take a lot of cycles for very little data. So it is feasible...

    Does anyone want to write it? If so, I have a lot of CPU-hungry algorithms to run.
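A concrete example of that cycles-to-data ratio is pairwise sequence alignment: the classic Smith-Waterman local-alignment algorithm does O(n*m) dynamic-programming work on two strings, so a client could download a few kilobytes of sequence and crunch for a long time. Below is a minimal, score-only Python sketch; the scoring parameters are invented for illustration, and real tools add traceback, affine gap penalties, and substitution matrices.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between strings a and b.
    Time is O(len(a) * len(b)) - lots of cycles per byte of input."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP table, first row/column stay 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: a score never drops below zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# e.g. smith_waterman("GATTACA", "ATTA") finds the exact shared "ATTA"
```

Two 10KB sequences already mean ~100 million table cells, which is exactly the "little data, many cycles" profile the poster says makes distribution feasible.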

  • >The patent (if granted) WILL expire someday

    Technically yes, and the same thing could be said about copyright. Except the industry which holds copyrights has gotten extremely powerful. An interesting trend is that whenever the original Disney copyright for Mickey Mouse etc. is about to expire, the copyright term is extended (yes, for both new copyrights and old copyrights).

    This extension of copyright clearly serves no public benefit (these works have already been created, so retroactively extending the copyright doesn't encourage the production of new works) and yet it is enacted! If the biotech industry became large enough, such a scenario is possible (though less likely because of competition within the industry).

    For further information about the copyright term extension act and efforts to fight it, visit copyright commons [harvard.edu]
  • by ewanb ( 18483 ) on Monday November 29, 1999 @02:13AM (#1498069) Homepage
    Counsell -

    Great that you were following the talk. I thought I put everyone to sleep.

    The rate-limiting step at the moment is effectively the mapping, in fact, then the sequencing. The interesting thing about the analysis is that the amount of CPU is unbounded. If we have more CPU we just use more accurate algorithms. We can do something within the CPU bounds on the Hinxton campus, but if anyone wants to give me a supercomputer, then we could get more accurate analysis.

    I can always use more juice!

  • by ewanb ( 18483 ) on Monday November 29, 1999 @02:15AM (#1498070) Homepage
    Lars

    This is only for the assembly and not for the analysis. With analysis you have a better data/cycles ratio. Assembly is done at the genome centres anyway...

  • I understand this company is giving kickbacks, or the promise of free use of the results, to Icelanders in return for this right.

    If they start selling your info to insurance companies they breach a contract they had with your government, and you can probably throw them in jail, just like you could if your doctor started selling your medical records.

    The insurance rep issue just really isn't unique. Eventually some insurance company will begin offering extremely low premiums if you DON'T have a history of heart disease in your family, and these people will be more than happy to hand over records to prove this. Eventually competition will drive the price of insurance for people who don't open their records to insane values. Eventually the solution will have to be either a) let some people die (bad idea) or b) government-guaranteed health care
  • I believe that the bottleneck is somewhere else - namely, getting the DNA, running PCR on it to amplify it, then cleving the DNA into chunks, and then running the chunks thru gel plates, and then getting the data on the chunks ...

    From what I remember and understand about gene sequencing, the process is:

    Running a PCR reaction. This induces DNA reproduction. You run this (mostly) by cycling the temperature the DNA is at in a special medium. And the DNA chains cleves and reproduces exponentially (1-2, 2->4, 4->8, etc).

    Cleving at certain sequences. This breaks the DNA chains into chunks. The chunks are then analysised by some gell chromatography. IN the movies when you see people hold up 2 film with bands in it, that's what they are doing - the chunks are of a certain size and migrate thru the gell at a certain speed - and when the lines match up in intensity and location that means the same concentration of a particular block is present in both the standard and the unknown.

    Repeating the process, cleaving at different sites, until you have enough information about the "chunks" to reconstruct the sequences.

    For a computer to match the chunks up, it's not that difficult. It's just like sorting arrays - not that much processor power is required. Storage, maybe, but not processor power.
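    A toy sketch of that matching step (hypothetical code - the greedy merge and all names here are my own invention, not how any production assembler works):

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def assemble(fragments):
    """Greedily merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_n, best_i, best_j = 0, 0, 1
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i == j:
                    continue
                n = overlap(a, b)
                if n > best_n:
                    best_n, best_i, best_j = n, i, j
        if best_n == 0:
            break  # remaining fragments share no overlap
        merged = frags[best_i] + frags[best_j][best_n:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return frags

print(assemble(["GATTAC", "TACAGG", "AGGCAT"]))  # → ['GATTACAGGCAT']
```

    The comparison itself is cheap, as the parent says; the real difficulty is the sheer number of fragments and the sequencing errors, which this sketch ignores.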

    The most time-consuming part is running the reactions, spotting the plates, running the plates, blah, blah...

    -=- SiKnight
  • You mean there are no lawyers in heaven?
  • Ewan is a very informed and knowledgeable guy at one of the key centers in the HGP, so he needs more moderation. Hey Ewan, go get more karma!


    This is only for the assembly and not for the analysis. With analysis you have a better data/cycles ratio. Assembly is done at the genome centres anyway...

    Then I don't get it. The original submission was about trying "to match the bits up with other bits like a giant jigsaw puzzle". Clearly this is about the assembly problem, no?

    What kind of analysis would this be?

    Lars

    --
  • I, for one, don't like the idea of a private company owning my gene sequences. They will be able to limit the use of these so only really rich pharmaceutical companies will be able to develop drugs etc and then sell them at huge profits, which isn't realy for the benefit of mankind blah blah blah.

    This is an interesting statement. How do you think drugs are made now? Well, they are made by big pharma companies which (often) make a good profit. Drugs are not made for the benefit of mankind. They are made to make money.

    When it comes to patenting the use of some genes, we should consider that:

    1. Patents are short-lived.
    2. A company has no interest in not using its patent, so for some money, other companies will be able to license it.
    3. Patents don't stop anyone from working on whatever is patented; lawyers always find ways to circumvent patents.

    On the subject of open source distributed computing for genome data, I am afraid I agree with other people here. There is simply too much data to download. It's a pity, but it won't work. Maybe in a few years' time, when the problems in genomics have changed, other problems might be more suitable to this type of computation.

  • I declare myself prior art....use me as you will.
  • But isn't the medical data anonymized before DeCode can make use of it? I guess it would be possible for them to deduce whose data it is in some cases, since they have access to a near-complete family tree of Icelanders, but in the end it would be quite obvious that they had done something illegal in that case, wouldn't you think?

    The main problem in my mind is that they have been given exclusive rights to this data. That is really giving away a gold mine.

    Lars

    --
  • Does that surprise you?

    //rdj
  • All the various distributed computing efforts have something in common: while they all require extensive processing resources, they all need very little data to work on.
    Basically they just need to synchronize: in the case of the famous distributed.net RC5-64 contest, they just need to decide who will try a given set of blocks.

    Sequencing the DNA seems (I'm in no position to claim otherwise) to be not so much a matter of computing power as of an immense dataset to work on.

    It wouldn't be possible to distribute that much information on terms reasonable enough to make the effort worthwhile.
  • Wouldn't it be a really good idea to tell the religious right about that? Perhaps something good will finally come out of them after all!
  • I assume that the original poster did not understand what was going on ;). Like a lot of Slashdot in this case - concerned but not knowledgeable.

    Celera always talk about the assembly problem because they have Gene Myers solving it (he has) and think it is pretty cool. It is not trivial, but from my view (an annotation-centric view) not the most important thing.

  • by Anonymous Coward

    They aren't inventing DNA at all; DNA technology has been around forever. They are just reverse-engineering something that already exists; therefore they shouldn't be allowed to get a patent.
  • Ewan -

    You seem to be the guy to ask... can you give us some specifics on the hardware involved? How can I get more info on the systems used for data gathering, cataloging, analysis, storage, et cetera, on projects like these? Even just some CPU generalisations would be wonderful... Drooling over supercomputers is a hobby of mine, see... ;-)

    Thanks!
  • I get the feeling that the patterns are significantly harder to find than to verify.

    This would make false data less of a problem (since it would merely act like any other flooding DoS attack).
    John
  • Hi,

    >And of course, not the _sequences_ are patented -
    >what is patented, is the usage of modification of
    >a certain sequence for medical purposes, or a
    >certain enzyme as an aim in medical treatment.

    So, what you are telling me is that if I'd like to use a patented sequence for non-medical, "basic science" purposes, I don't even have to ask the patent holder for permission? This is as far from reality as it could be. I know _SEVERAL_ examples where people were not permitted to use such patented strings of AGCT because _their_ "basic science" _results_ could possibly affect the revenue of the patent holder or be used by others to "overcome" patent claims. How good is that? IMHO patents on sequences will definitely slow down progress in basic research.

    Regards,
    kovi
  • I'm surprised that the US in particular hasn't done anything to reduce the most glaring anti-competitive aspects of patenting. Doesn't the free market lobby have anything to say on the topic?

    Patents have always been intended to reduce competition for a limited period, so that inventors have an opportunity to bring their research to market during a sort of protected honeymoon period, but in practice that no longer works very well in the modern world. It's all to do with timescales: in the computer age and with instant global communications, timescales for everything are shrinking, and in some areas an advantage period for the patent holder of more than say just a couple of years is starting to become inappropriate, a restraint on progress, development and trade. Although it's impossible to tell what might have been, who knows which entire market sectors might have developed if their pivotal idea hadn't been tied down by patents.

    Be that as it may, it's rare for a week to pass without totally ridiculous patents being highlighted here, and the analogy with icebergs definitely applies -- there's vastly more out there that we don't see on Slashdot. The whole area is clearly in utter shambles and needs urgent review.

    A "fix" doesn't have to be complicated. As far as I can see, just three things are needed: a ban on patenting algorithms (as enforced elsewhere); a short, strict and non-extensible time limit (possibly related to the field, eg. default 2-3 years but longer in the nuclear power arena, for instance); and an informal "public review" system not unlike Slashdot, run by the patent office and used both to supply niche information and also to weed out the type of nonsense that translates into "how to breathe air".

    But of course, something that simple could never come about, because otherwise patent lawyers would be out of a job. Oh well.
  • Hardware at the moment generally are clusters of alpha boxes or intel boxes (running tru64 or linux respectively).

    The two big drainers on CPU for analysis are gene prediction (genscan) and database searching (blast). Database searching can't be distributed easily, as you have to worry about the database ;)

    However, there are programs like sim4, genewise and est2genome that could greatly help us and could be distributed.

    Genewise - you can download it (I wrote it) at Wise2 [sanger.ac.uk]; est2genome is somewhere around as well.

    For the more general overview of the problem - check out ensembl [ebi.ac.uk] for an idea of the project.

  • Absolutely - see my reply to the post above yours.
  • When I first read of this, I thought to myself, "What exactly is the use in patenting the results of this research?" From the posts I've seen, it seems that companies intend to patent the information they discover about the human genome, which can then be used to create cures for diseases. However, if standard medical law prevails, there's no way they can deny a person access to the information necessary to save that person's life or to prevent his/her disease if that person cannot afford to pay for the information. Basically, just like an emergency room can't turn away people who can't pay, how could a company that patented a human genome withhold that information from people who can't afford to pay?

    Jeremy
  • I do not think such huge computing power is going to be necessary. Human brain-power will be far more important, IMHO.

    Depending on the sequencing methodology used, there are different approaches to assembling the sequence. As far as I remember, the human genome has been cloned into YACs, which may hold some 1,000,000 base pairs. If these are sequenced with a "shotgun-like" method, they would generate some 20,000 fragments of around 500 base pairs each. The whole sequence would be assembled by matching 'overhangs'. If sufficient fragments are sequenced, this should not be any problem at all - something any desktop computer could perform.
    Once all YACs are sequenced, they would be assembled into the 23 (+1?) chromosomes. This does not seem to be too difficult either.
    I see two big problems:
    1. Debugging. If they use standard sequencing methods, the error rate may be as high as 0.1%. How are they going to cope with this?
    2. Sequencing telomeres or regions composed of repeats. This is going to be tricky.

    My conclusion is: no distributed computing project is necessary to accomplish the task.

    Unfortunately I have never participated in a coordinated sequencing project, and all of the above are just my personal views.
  • There are aspects of the work which have a good data/cycles ratio (surprisingly).

    I would read about the subject before you pronounce... ;)
  • Isn't the whole point of insurance that it costs you money but WHEN something happens you're covered? You're paying for security. If you move more and more toward charging people with a family history (of heart failure, for example), that defeats the purpose of insurance - you are charging people for the services they require, not "insurance".

    Neutrino

    the light at the end of the tunnel is the headlight of the oncoming train.
  • by ewanb ( 18483 ) on Monday November 29, 1999 @03:17AM (#1498097) Homepage

    It is clear from these postings that people would like the client to run. If there are people with experience in writing these sorts of d.net systems then please drop me a note. We have the problem for you to work on - it is just a question of figuring out how to do it.


    Drop me a mail (birney@sanger.ac.uk).
  • Thanks troc - just got around to reading this comment.

    I have sort of appealed at the top for people to come along. People seem more interested in writing about patents than getting down to the nuts and bolts, of course...;)

    If there is anyone out there who would like to do this coding - as sure as hell I don't know how to do it ;). But I know what to run...
  • While I agree that the bottleneck is elsewhere, I do not think they use the procedure you mention. If some of them do cycle sequencing, remember that it is not exponential (unlike PCR). There is no reason to use a thermostable polymerase for anything else here, and I would bet they do 'old-fashioned' sequencing with T7. And if they use restriction enzymes, I would bet on Sau3AI, because it might be used to generate more or less random fragments. Others might be useless, because they are not working with plasmids! The sequencing method you mention seems something like Maxam-Gilbert, but you do not seem to remember it well. You do not need external standards (nor internal ones) for DNA sequencing, and they certainly use the Sanger dideoxy method (Maxam-Gilbert is usually used in footprinting experiments). Finally, nobody does autoradiography on film now; you just use fluorescence detection in an automatic sequencing apparatus. I definitely agree with the conclusion: it takes more human-power than computer-power to complete the project.
  • Well, if you do decide to hold such classes then be sure to let us know. If it's anywhere near Cambridge then that means a 2-hour commute for me, but it would be well worth it -- this is an extremely important area.

    I sure hope that what you have in mind is evening classes though, as otherwise you'll get just the unemployed to attend, which would be limiting.

    Sounds like an excellent project!
  • Join in with ensembl and help us out. You would learn *a lot* of biology v. quickly ;)
  • I assume that the original poster did not understand what was going on ;). Like a lot of Slashdot in this case - concerned but not knowledgeable....


    More like, 'not very good at writing things down succinctly' I've spent so long writing my bloody PhD that I tend to add millions of extraneous words to everything I type. I'm also not a biochemist, just a humble materials scientist who makes high-pressure gas cylinders for a living.


    :)
    troc

  • The problem is not that patents are short lived, they are too long lived.
    Now how about defining the time in internet years? Anybody in favour of defining the time based on the technological rate of change?
  • Never knew there was a race to decode gene sequences using computers. There is a race for low paid women to load the sequencers but the "decoding" of the sequence is not the limiting factor. You've got to be damn good to get into those labs. Harvard PhD quality.
  • The examples you cite are violations of patent law; one is always supposed to be able to use results like that for basic research.
  • The problem is not that patents are short lived, they are too long lived.
    This depends a lot on how long it takes to make a profit from a patent. Drugs are in general patented for 20 years. Since you need 12 years of research and development before putting a new drug on the market, you are left with 5 to 8 years to make back the money you spent in 12 years of R&D, plus the money spent on marketing and distribution. If your patent lasts only, say, 5 years, no one will make new drugs, which is bad for everyone.

    Now how about defining the time in internet years? Anybody in favour of defining the time based on the technological rate of change?
    Real life works differently! Chemicals are still designed by human beings. Yes, a lot of robots are used, but just for dumb things.

  • The whole area of concern about clients being compromised to return incorrect results stems from the meme-setting effect of dedicated clients like rc5des, seti@home and (it seems) all others currently in existence. Their susceptibility to being cracked and reworked is entirely due to the dedicated nature of their task, as it gives nasty-minded people a visible target.

    The problem would not arise if distributed clients were generic, ie. if they would do arbitrary computations on arbitrary data received from arbitrary sources. In other words, if a global distributed computing system accepted numerous different computational tasks from the public and distributed interleaved fragments of them arbitrarily to an undifferentiated pool of clients, it would no longer be possible for clients to be compromised meaningfully. (Clients would really just be maths engines, and you'd be detected pretty quick if your client made 2+2=5.)
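    As a sketch of how a compromised client "would be detected pretty quick" (hypothetical code - the replication factor and all names are invented for illustration), the server can hand each work unit to several independent clients and only accept a strict-majority answer:

```python
import random
from collections import Counter

def dispatch(work_unit, clients, replicas=3):
    """Send the same work unit to several randomly chosen clients."""
    return [client(work_unit) for client in random.sample(clients, replicas)]

def accept(results):
    """Accept a result only if a strict majority of replicas agree."""
    value, votes = Counter(results).most_common(1)[0]
    return value if votes > len(results) // 2 else None

honest = lambda x: x * x       # a well-behaved maths engine
cracked = lambda x: x * x + 1  # a compromised client returning bad data

clients = [honest, honest, honest, honest, cracked]
print(accept(dispatch(7, clients)))  # at most one bad replica here, so → 49
```

    With interleaved generic tasks, a cracked client cannot even tell which project it is sabotaging, and redundancy like this catches it statistically.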

    Would there be interest in creating such a global computing system as a free software / open source project?

    [Note that pretty single-task stats displays would still be available from the task sponsors' sites, but that's a completely separate issue from data distribution and computation.]
  • by Anonymous Coward
    Someone in Japan has applied to patent curry, of all things. If successful, the guy gets a royalty every time the Brits dig in.

    And with the WTO, all other countries will have to recognize and comply with the patent.

    Can you believe it? Curry, of all things. Wonder what the folks in India think of this. We've lost our minds.

    It's in today's London Times.
  • However, if standard medical law prevails, there's no way they can deny a person access to the information necessary to save that person's life or to prevent his/her disease if that person cannot afford to pay for the information. Basically, just like an emergency room can't turn away people who can't pay, how could a company that patented a human genome withold that information from people who can't afford to pay?

    Hmm, sounds like a good point, but lawyers have probably worked this out already. After all, you can patent compounds that are used for various treatments, and this has been going on since before the discovery of DNA.

    I read in a text on patenting that you cannot, for example, patent a surgical method, but you can patent a device that is necessary for the same surgical technique.

    Lars

    --
  • I believe that by patenting, they probably mean their own version of the data. Like someone mentioned in a previous Slashdot article (I forget the name), it's like a surveyor's maps: if you put a copy of that map in your report, you should at least acknowledge the source, and even inform them that you're using the maps. Information from the HGP would be free; that information could not be copyrighted or patented by someone else, because the HGP got the information themselves.
  • There is also a slight problem of the practicality of having a distributed client. The problem here isn't really a matter of brute force.

    You need to sequence the gene first. This is the long and costly part if I remember well.

    The computing power is used mainly to see the similarities with other genes already discovered (in humans and in other species). Here you need more of a huge database holding all the information as you simply search for matches and near matches in the sequences.

    I'm not sure it would be very useful to have a distributed client for this. And for myself, I'd rather wait a few more years and be sure that I can trust those results.
  • Who's to say that if a person (or organization) that was planning applications for the public turned in an application, it couldn't "disappear mysteriously"? (The patent offices aren't free from greed.) I'm not saying that ALL of the corporations out there trying to patent these applications are bad, because I don't know the whole story as to why each company wants to patent them. (My guess is still money, but you never really know.)
  • AFAIK, once you publish an idea or invention in the public domain, it becomes un-patentable. This is what the Human Genome Project is doing: every gene is published within 48 hours of discovery.

    Correct me if I'm wrong but I've seen this mentioned in other YRO and AS articles.
  • by SKicker ( 27704 )

    I worked for a bit as a CO-OP student in this area last summer, which is not to say I know anything about this, but.. :]

    While distributed computing would probably benefit the HGP, there are a couple of points to take into consideration.

    1) How secure is distributed computing? SETI and RC5 aren't really all that concerned with the integrity of the data they are getting back. They can just re-check a data block if it shows a sure sign of ET or whatever. Here there will need to be a guarantee that data has not been tampered with.

    2) It seemed to me that some of the tools used could do with some open source style improvement by the hacking(coding) community before throwing lots of computing power at them. [sanger.ac.uk]
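    On point 1), a cheap way to get part of that guarantee (a hypothetical sketch - the work function and sampling scheme are invented, and a real project would combine this with redundant computation) is for the server to recompute a random sample of returned blocks and distrust any client that ever disagrees:

```python
import random

def spot_check(returned, compute, sample_rate=0.1, rng=random):
    """Recompute a random fraction of {block_id: (data, result)} pairs.

    Returns the ids of blocks whose submitted result does not match a
    trusted local recomputation; their submitters would be blacklisted.
    """
    bad = set()
    for block_id, (data, result) in returned.items():
        if rng.random() < sample_rate and compute(data) != result:
            bad.add(block_id)
    return bad

# Invented stand-in for a real work unit: GC count of a sequence chunk.
compute = lambda seq: seq.count("G") + seq.count("C")
returned = {
    1: ("GATTACA", 3),  # tampered: the true GC count is 2
    2: ("GGCC", 4),     # honest
}
print(spot_check(returned, compute, sample_rate=1.0))  # → {1}
```

    Spot-checking only costs the server a fraction of the total work, which is what makes it practical at this scale.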

    As for the patent stuff... bah!. Let the lawyers mess around with that, everyone else can concentrate on the advancement of the human race.. or something like that.

    links:
    Genome database [gdb.org]
    The Sanger Centre [sanger.ac.uk]
    The NCBI [nih.gov]

  • >Does anyone have any information on the computing systems being used?
    > Come on, there have to be a few NIHers reading /.! ;-)


    I work as a Macintosh support tech over at NHLBI (the National Heart, Lung, and Blood Institute) and interviewed recently for a position over at NHGRI (I didn't get it mainly due to non-competition agreements between the federal contractors who supply NIH). Like any good geek, I asked about the machines in use on the project. Apparently, while some processing is done here in Bethesda, a lot of it is done at other sites (universities and such) on Unix boxen, although my interviewer wasn't sure of the specific platform. At the institute itself there's a fairly large number of Macs used for graphic analysis of the data and both Macs and Wintel PCs for basic stuff like writing papers and reports.


    I can tell you NHGRI is pretty well funded within NIH, right up there with the cancer institute and the infectious disease institute (which deals with things like AIDS and whatnot). They certainly have more translucent Macs than any other institute. :-]


    And yes, they do use Linux there, although from what I gather, it's mostly being used by individuals experimenting with the system, and not for any actual rendering/mapping of gene data. Coincidentally, I took my first Linux support call a couple of weeks ago from somebody here who installed Caldera 2.2 and needed help setting up networking. Got him set up in only minutes, and soon he was enjoying NIH's 300kbps-and-up network connection. Makes watching MacWorld keynotes a lot more viable.


    If you check the Netcraft records for NHLBI [netcraft.com], NIDDK [netcraft.com] (National Institute of Diabetes and Digestive and Kidney Diseases), and NHGRI [netcraft.com], you'll see that NIH is far from your typical NT government shop. Plus, the NHGRI [nih.gov] main website has lots of info on the project and why it's a Good Thing.


    BTW, slightly off-topic: there are 12 people in my support group, and of those, I'm the only full-time Mac tech, while two others are mostly PC techs with some Mac skills. Oddly enough, the PC people are always busier than me despite having roughly the same number of machines to support.....

  • I think it has just as much potential as patenting the DNA sequence - just think: the way your lungs force air to pass over your lips may just infringe on patents related to airflow caused by jet engines and wings. Anyone ready for a cease-and-desist-breathing order? :)
  • With all due respect, it is WAY too early to say that we've been looking "in vain" for signs of ET signals. So far, the SETI@Home project has only been looking for candidates. It is in the next phase that they begin to search just those candidates for repeat events. To quote from their October 22 announcement:

    "SETI@home has now accumulated more than 100,000 years of computer time, more than any other computing project in history! We have recorded over 85 million "candidate signals" (spikes and Gaussians) in our database, and we're preparing to start the second phase of analysis, which will search these candidates looking for "repeat events"."

    As jw3 has posted already, the human genome project is not well suited to distributed computing. SETI, on the other hand, is perfectly suited. Even though we may wind up with nothing to show for it, I think the project is still worthwhile because, unlike other distributed computing projects (such as finding prime numbers, which is cool but won't actually benefit society in any way), the potential rewards of finding an ET signal are, quite literally, unimaginable. Imagine if someone suddenly gave us all the knowledge we ever wanted. Unlimited energy. Anti-gravity. Faster-than-light travel. These are the potential rewards of SETI and SETI@Home. Still think it's not worth looking?
  • Crash courses in bio for coders? Sounds cool, but how about the same thing for geneticists and biochemists who want to learn to code? Here's a question: what do you think would be more difficult - giving coders a quick, intensive lesson in biology to get them up to snuff for the sort of thing we're discussing, or trying to teach someone like me (a non-coding geneticist) how to code appropriately, as quickly as possible?
  • You need to sequence the gene first. This is the long and costly part if I remember well.

    This is silly. The sequencing is the computer-intensive part. You need to take the chunks you've got and attempt to line up the overlaps to make larger chunks. So passing out a hundred or a thousand or however many sequence chunks to be compared, with good matches returned, would be an excellent way of doing things. And it would have the side effect of publishing results; necessarily you'd be handing out copies of the longer sequences you've established, until everyone who wants it has the whole sequence. (Although, I think it's kind of large, bitwise. :)
  • Please, give me the examples.

    Look, imagine someone patented the modifications of the xxx gene as a target for gene therapy of early-onset Alzheimer's. Of course I cannot try to develop a gene therapy using this sequence; but I can clearly use the sequence for scientific purposes, i.e. researching Alzheimer's disease.

    Maybe what you mean is patenting genes which have been artificially modified, e.g. sequences of transgenic enzymes used for research purposes (all those "TM" polymerases and such). In that case the sequence is patented, because it was developed by the company selling it. Since you have usually no access to such sequences, it should not be a problem.

    Another idea - if a company sequences a gene, they can keep the sequence, and you may not use it if you somehow get your hands on it (I think). But no one can forbid you to sequence the gene yourself - in fact, that is the case with many bacterial genomes (E. coli has been sequenced several times by different teams, but only one or two sequences were published).

    I am no lawyer, so I just present you my general idea of how the things work. I'm not at all sure whether I am right or not.

    Regards,

    January

  • Unless someone has the time and money to distribute microarrays and bench time at a local hospital with a good clinical lab, the clients would be worthless. Venter's efforts are succeeding because of Celera's partnership with Perkin-Elmer.

    However, Celera appears to be less than picky about the quality of data they are producing. So the same approach as theirs (multiple shotgun sequencing runs for each block of base pairs) with parity checking and/or some means of verifying data would be fine.

    Celera's operation is effectively a distributed effort already, it just happens to be in one building. The government will most likely step in and appropriate the sequences for a reasonable fee if it turns out that Venter et al. have reneged on their promises to distribute the sequences freely.
  • Hell, I've got 6 servers and about 150 workstations available. I would be more than willing to put this on a small chunk of my network. It would be better than SETI.
  • It's crazy what they'll grant a patent on. Does anyone remember the patent Compton's multimedia had? The one something like "Information stored and indexed on a CD-ROM"? They actually had that patented, but kept it pretty underground until CD-ROM encyclopedias really took off, then announced it as some sort of trump card and implied they were going to start charging royalties to every CD-ROM manufacturer. Seems there's a whole bunch of really silly, generic patents these days. I wonder if I, as an individual, can patent some fun ideas. Like "sneaking out the backdoor using yardwork as an excuse to avoid your mother-in-law." Or maybe "accessing informational databases over a computer network." Wouldn't that be wonderful?

    The real fix is for the US to stop issuing these crazy patents, however as I recall the courts have done some good when it comes to sanity checking. I think Compton never tried anything because their lawyers decided no judge would let it fly in court. It makes for a nice second line of defense.

  • ... is something along the lines of Lexis/Nexis and the law. Nobody can copyright the law, but the indices and commentaries can be copyrighted, and (whether we like it or not) currently the search algorithms to manage those indices can be patented.

    So, to carry it over to genetics, the underlying genes (law) cannot be copyrighted (and this is ambiguous still: is it code (copyright) or algorithm (patent)?), but the indices and commentaries on the sequences can be copyrighted, and the search/combination techniques and/or machinery can be patented.

    So, we need to make clear and loud the mandate that:
    1. the human genome itself constitutes information that is in the public domain (or, at worst, the property of the person(s) who contributed the gene sample(s))
    2. that while indices and commentaries on the genetic code may be proprietary in a society that protects proprietary intellectual property, equally protected is the privilege of the people to compile a separate set of indices and commentaries, at public expense, of the same public-domain information, or to license (or acquire by legal means) the proprietary ones. Assuming, of course, that any research funded by public money is released into the public domain.


    We need to define the problem fairly and completely, then fight strenuously to make sure that bad precedent is not set.
    Your Working Boy,
  • Hmmm, does anyone else think God (or Allah, or Odin, or the Great Bananarama, or whoever your supreme being is) will have a problem with these big companies patenting His invention?

    As a matter of fact, I'm pretty darn angry about it.

  • Well, if the packets were tunneled through SSH (built into the client), there wouldn't be much of an issue of data theft now would there?
  • The huge processor time needed to assemble the Celera sequence is Celera's problem. It comes about because of how the company decided to sequence: by cutting all the human DNA (3e9 base pairs, or bps) into tiny bits and sequencing ~500-1000 bps of each bit. Celera owns all this sequence, and no doubt has the computing resources to do at least a poor assembly of the mass of sequence data they'll generate.

    A more sensible approach to sequencing is being used by the Human Genome Project. DNA clones 10e5 bps in size are cut into small bits, and 500-1000 bps are sequenced from each piece. Thus the assembly is of a 10e5 bp clone. Overlapping clones are sequenced to generate larger segments of sequence.

    Celera is skimming the human genome. They'll generate a bunch of raw sequence, assemble what they can, patent everything that looks appealing, and then, with 40-70% done, declare the sequencing complete and close up shop. The Human Genome Project will be years finishing things up.

    Jim Lund
  • I knew a guy who was/is involved with the human genome project. At the time (about a year or so ago), the place he was working at was using a cluster of Alpha's for their work with the project. I gathered that at least some of these were running Linux. They may also have been using Digital Unix or another Unix variant on some boxes. I don't know the details of the machines or exactly what they were doing with them.
  • The two projects are, i believe, going about the whole thing in two very different ways.

    HUGO is sequencing the entirety of the human genome in a slow, traditional manner. This does not need excessive computing power.

    TIGR, on the other hand, is sequencing ONLY the genes of the human genome. They are using a shotgun approach, which involves sequencing bits at random and using computing power to match up all the little bits. They need lots of computing power and would be helped by a distributed computing effort.

    TIGR also gets to use all the info from HUGO, but not the other way around.

    Of course, working out what all those bases mean needs more processing power than we currently have on the planet. Hopefully distributed networks will start up that try to do protein folding and gene searching and the like.
  • The sentiment expressed in the above post ("you can't patent sequences") is alarmingly common on /.

    Of course, it always gets a lot of attention because:

    a) we like to hear it, and
    b) it's grounded in truth.

    But "grounded in truth" does not equate to "true."

    Yes, it's true that you can't strictly patent a gene sequence. So what? You _can_ patent a gene's applications, current or future, discovered or undiscovered. Period. And you don't even need the whole gene sequence to do it.

    So....unless your only use for a gene sequence is to make a pretty picture out of it, this means that gene sequences are de facto patentable. You simply can't use them for anything once they're patented (besides, perhaps, abstract art).
  • Hey, yeah, somebody get me Jerr... Uh, never mind, the thought of actually agreeing with Jerry Falwell on anything gives me shivers. Better keep this from them. ;)

  • If I were an ET, I sure as hell wouldn't give the human race access to all those things. First we need to learn to coexist peacefully and happily. We can't fix our problems by just fixing the symptoms, i.e., lack.

    - Steeltoe
  • There are a couple of problems here, ones of scale. The Human Genome Project simply has too much data to ever be put on a distributed network. The data involved at each HGP center is in the terabyte range; there is no way that even one center's data could be put onto CDs and distributed that way.
    The second problem is that, unlike SETI@Home's data, you cannot break this data up into relatively small packets for processing. The work currently being done at the computing centers is trying to fit the scraps of data together like a jigsaw puzzle. Imagine the futility of randomly mailing 100 of your friends pieces of a puzzle. There is a chance that a few of them could put together fragments, and help you solve the puzzle that way, but a much more effective way of doing this would be to bring all the participants together to work on the same huge bin of pieces. This is what they are doing right now. The project is not one that takes very complicated and processor-intensive calculations; rather, the data needs to be massaged en masse, terabyte by terabyte!
    And then, once the baseline human genome is pieced together, you start to figure out what all these genes do (the real work in this project). Then you start to pair up real people with the genes that they have, and publishing individuals' names or even reference numbers would be a serious invasion of privacy!
    Summary: the Human Genome Project will never make a good candidate for distributed processing, for both practical and privacy reasons!
  • Companies specializing in DNA sequencing have applied for patents on hundreds of thousands of sequences, including genes and gene fragments. PTO examines all sequence applications for fulfillment of four major patenting criteria: novelty, nonobviousness, usefulness, and enablement (i.e., detailed enough to enable one skilled in the field to use it for the stated purpose).

    Human Genome News, July-September 1996; 8:(1)

    I am a student at Gustavus Adolphus College (St. Peter, MN) and recently attended the Nobel Conference [gac.edu] held here annually. This year's topic was genetics--the opening speaker was Dr. J. Craig Venter of The Institute for Genomic Research, Rockville, Md., also the president of Celera, Inc. It would seem that many /.ers have misunderstood the purpose of patenting gene sequences--a patent on a sequence itself is possible only if the researchers can prove some distinguishing characteristic or application that sets it apart from other sequences. Given that 99.9% of the human genome is identical from one person to the next, it would prove very unlikely that the gene for a trait could be isolated; since many factors contribute to heredity and one sequence can affect a large number of biological variables, patenting the use of any one sequence would be useless in application.

  • But it is open source....
    Kind of...
    Some of the code that is still widely used was written at OU (Okie Uni) in FORTRAN back before '85, and that was and still is open source. Most of the modern versions seem to be just ports of the old stuff.
  • Get in front of a digital video camera and do your crash course. Convert and post the result to a heavy duty server as quicktime or realaudio, etc., then announce it on slashdot and see what happens.

    An Apple G4 with lots of memory and Final Cut Pro, plus a Canon XL with Firewire link thereto should be possible for less than USD8k. That and some talent should do it. Apply to Wellcome for funds?

  • Exactly. That is a cycles problem in a way that sequencing is not.

    The problem is how a 1-dimensional sequence of length n, with 20 possible values at each location, becomes a 3-dimensional structure. What is needed is an algorithm to solve the problem, though, not just power. Does anyone know of a genetic algorithms approach to this problem?

    Right now there are some heuristics: this pattern means a DNA binding domain, this particular sequence means a transmembrane segment, etc. A stopgap, intellectually unsatisfying approach, I must say. :)
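[Ed: there is at least a toy version of the genetic-algorithm approach the poster asks about, on the standard 2-D HP lattice model — a deliberate simplification of folding where residues are just Hydrophobic or Polar, a conformation is a string of lattice moves, and fitness counts H-H contacts. Everything below (the move encoding, parameters, function names) is an illustrative assumption, not a real folding method.]

```python
import random

# Absolute lattice moves for the chain: each residue after the first
# steps one unit North/South/East/West from the previous one.
MOVES = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def fold(seq, moves):
    """Place residues on the 2-D lattice; return coords, or None on a clash."""
    pos = (0, 0)
    coords = [pos]
    for m in moves:
        dx, dy = MOVES[m]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in coords:
            return None  # chain crossed itself
        coords.append(pos)
    return coords

def fitness(seq, moves):
    """Number of non-adjacent H-H lattice contacts; -1 for invalid folds."""
    coords = fold(seq, moves)
    if coords is None:
        return -1
    contacts = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):  # skip chain neighbours
            if seq[i] == seq[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:
                    contacts += 1
    return contacts

def evolve(seq, pop_size=60, gens=200, mut_rate=0.1):
    """Simple generational GA over move strings: keep the fitter half,
    refill with one-point crossover plus point mutation."""
    n = len(seq) - 1
    rand_moves = lambda: [random.choice('NSEW') for _ in range(n)]
    pop = [rand_moves() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(seq, m), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                    # one-point crossover
            child = [random.choice('NSEW') if random.random() < mut_rate
                     else m for m in child]              # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(seq, m))
```

Even this toy is intellectually honest about the difficulty: the search space grows exponentially with chain length, and HP-lattice folding is known to be NP-hard, which is why raw cycles alone (distributed or not) don't solve the real problem.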
  • In evolutionary biology, where we are focusing on reconstructing the tree of life, there are actually very few programs that are licensed under the GPL or the LGPL. There is *one* program (Paup, being distributed with manual by Sinauer) upon which most evolutionary biologists depend that has been in beta testing for 6+ years, with a 30-day expiration built into the binaries (of course, source code is not distributed). The author refuses to license the code under the GPL or the LGPL or any other type of open source licensing scheme. Where I work, we have a cluster of Linux systems for this tree-of-life reconstruction - they are sitting mostly idle because the most recent beta of this program expired last January. The next beta is not even likely to have PVM or MPI support. Anybody want to do some programming for me? :)
