
Where Did Affordable OCR Go?

Goeland86 asks: "Has OCR (Optical Character Recognition) died down? Where have all the magical programs that translate your handwriting to office-compatible files gone? Most of the Windows programs nowadays are expensive (ReadIris Pro 9 is about $400), and not many OSS projects for OCR have released a recent update (Kognition was last updated on July 17th, 2003, according to Freshmeat). Has everyone already scanned/translated all of their paper files? Has OCR outlived its use, or is it just a fancy technology that hit a dead end in terms of the market? Have Slashdot readers used it? If so, are you still using it? If not, why?"
This discussion has been archived. No new comments can be posted.

  • ocr and pdf (Score:3, Interesting)

    by i621148 ( 728860 ) on Thursday August 12, 2004 @02:52PM (#9951108) Homepage
    i think that pdf's and the availability of the free adobe viewer have pretty much obsoleted ocr.
    ocr has to be babysat also. it is not 100% reliable like scanning to pdf is...
    • i think that education and the availability of the free online dictionaries have pretty much obsoleted punctuation and grammar. grammar and punctuation is difficult to automate also. it is bit 100% oranges like apples is...

      -1 "Stream of Conciousness Frist Ps0t attempt"
    • When you scan to a PDF, you essentially create a high-resolution single channel JPEG image which is the sole contents of the PDF file.

      It does not create text that you can search or highlight and copy with your mouse later on. It's just a picture.

      Now, there is some nice scanning software out there that if you do select "text" mode when you scan to PDF, it does an OCR pass and sticks that in the PDF. But the cost of this software is usually hidden in the purchase of a high-end scanner or printer/fax combo t
      • Image + hidden text PDF is now in most lower-end OCR products, also in Acrobat 4&5 (IIRC). Sometimes you have to dig to find the feature though.

        going from image -> OCR'd text -> text-only PDF, or text PDF with image snippets, usually looks quite awful.

        I+HT PDFs though will let you search for and highlight the text behind the image. It isn't perfectly accurate on placement but it's usually adequate.

        You're quite right that most of the time 'scan to pdf' is just a bitmap. It's the usual default for
    • no, the point of OCR is to make a file smaller than the image, not larger as always happens with a pdf
  • I dunno (Score:2, Insightful)

    by Rie Beam ( 632299 )
    It seems to have just fallen into a middle market that doesn't exist anymore. I mean, these days documents are either handled completely digitally, or just scanned and translated into PDFs or the like. There just doesn't seem to be a need, at least not a large enough one to merit attention.
    • re: your sig

      (yes it happens here)
    • Regarding your sig - http;// in Firefox takes me to Interesting.
      • It seems http;//anything.* will take you to
      • It's because "I'm feeling lucky" in Google for http;// goes to Just type http;// , press "I'm feeling lucky" and see for yourself. Firefox by default feeds "I'm feeling lucky" when something that is not a valid address is typed. This behaviour is controlled by the keyword.URL preference in about:config.
        • Well, that's easily solved. We just need do one of those google hacks - get enough links pointing to a site with a given keyword, and you can put any site on the top of that list. (cough... "French military defeats"... cough... "miserable failure"... etc).

          [hehehe... imagining the /. response to that suggestion - picking a site to google hack 'http'... and everyone says: "oooh! oooh! let it be mine! pleeeeeease?" ;-) ]
      • In Mozilla it takes me to which appears to be a site set up for the sole purpose of displaying advertising to people who mis-type a URL.

    • that's weird; that's really weird

      'zilla went to some random search page, IE went to /. but firefox went to

      if that's what you mean, it wasn't just you

      • I wouldn't know, as Firefox is a dead-end consumer-only browser. Regular Mozilla has a composer, so is a two-way communications tool. I installed Firefox once, but it wasn't impressive. Why would I want a view-only tool for the Web?
        • Why would I want a view-only tool for the Web?

          I don't know, maybe for the same reason I don't use a speaker as a microphone. Sure, it can be done, but why would I want to do it when there are more specialized tools?

  • ...that the OCR market had died down...

    I for one was encouraged by the previous progress, but also frustrated by the still-existent shortcomings of the software. Clear printed/typewritten documents still had a high rate of error (these were especially frustrating; was it unreasonable to expect letter-quality reads to be nearly error-free?)...handwriting that was fairly clear was getting there, but not quickly.

    Perhaps the Voicedictation market has something to do with it, maybe the recognition rate and qualit
    • maybe the recognition rate and quality is higher than OCR and now people are just reading the documents in with voice Rec apps?

      Assume you have documents you want to convert to digital text (and not just scan).

      If you have money, then you either hire a temp to type them in for $100 a day, or contract them out to some poor schmuck in India to type them in for $5 a day.

      If you don't have money, then you're probably not what capitalists like to call a "customer."
      • To add to this, no OCR package is 100% accurate. Most will be 95-99%, which still means you have to have someone proofread / correct each page, which is just as expensive as having the text entered manually.
        Side note: I remember a number of years ago, trying out OCR, and it turned out that I could type the page in slightly faster than it could be scanned and recognized.
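That 95-99% figure translates into a surprising number of errors per page. A quick back-of-the-envelope sketch (the ~2,000 characters per page is an assumed typical figure for a typed page, not from the comment above):

```python
def expected_errors_per_page(accuracy, chars_per_page=2000):
    """Expected number of mis-recognized characters on one page."""
    return chars_per_page * (1.0 - accuracy)

for acc in (0.95, 0.99, 0.999):
    print(f"{acc:.1%} accurate -> ~{expected_errors_per_page(acc):.0f} errors/page")
```

Even at 99%, roughly 20 characters on every page need a human eye, which is why the proofreading pass never goes away.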
      • contract them out to some poor schmuck in India to type them in for $5 a day.

        I worked at a company that did this years ago. We started with OCR, but due to the error rate on even perfectly printed material, we dumped it and sent the material to India. It was pretty inexpensive - much less than our time correcting the OCR mistakes. We had triple entry, which was still very cheap.

        I could take something and print it in courier 16 point at 600dpi, scan it, and the OCR would still screw up about 1% of the time
  • Personally... (Score:3, Interesting)

    by WildFire42 ( 262051 ) on Thursday August 12, 2004 @03:00PM (#9951188) Homepage
    Personally, I believed that the amount of return for any further research put into OCR technology wasn't really worth it at this point. OCR is actually pretty darn reliable for printed characters, even if it sucks wind for handwriting. Mostly, people are interested in OCR'ing printed characters, and handwriting recognition is just one of these nifty, shiny technologies that wouldn't be used that often.

    At this point, OCR is a commodity. It's not really worth the hundreds of thousands or millions of dollars for research to get an extra 2% accuracy, so the technology is stagnant and the prices for standard, printed character OCR are dirt cheap.

    With that being said, I see voice dictation as the next big thing. Voice recognition is where OCR was 10 years ago, still new, not many players in the market, and a lot of room for technological improvement. The accuracy isn't that great, even with extensive "training", and more and more, because of the need for archiving, data warehousing, captioning for accessibility (Section 508, W3C WAI and the like), captioning without training is going to become a shining goal within the next 10 years.
    • I'm sure that anything that can successfully recognize handwriting would also be able to recognize a significant portion of the new variety of "Only a human could recognize this" tests being used to validate new logins for email providers and the like.
    • OCR is actually pretty darn reliable for printed characters

      That has not been my experience. I have found the accuracy to be horrible - even on high-end systems. What we ended up doing for a document management system is use the OCR for searching, and the raw image gets retrieved. 100% accuracy isn't very important then.

      • Well, I've done quite a few pages at Project Gutenberg's Distributed Proofreaders [] where you donate your proofreading time to clean up text scanned in from books, and the first draft from the computer is usually pretty dang close. I find that it is usually just a matter of cleaning up formatting for things like footnotes and scientific notations.
  • by Karma Farmer ( 595141 ) on Thursday August 12, 2004 @03:01PM (#9951198)
    I want OCR that works, and I want a flying car.

    I'm assuming people got sick of paying $39.95 for OCR software that didn't do jack squat, and was about as reliable as handing your documents to a spastic monkey. I'm also assuming software makers got sick of making $3 or $4 (or less) on each package, only to get a million tech support calls along the lines of "It doesn't work. I want my money back."

    For $400, I'm guessing the software vendors can afford a small amount of support, and can expect the users to be willing to understand the limits of the software.
  • by GoRK ( 10018 ) on Thursday August 12, 2004 @03:04PM (#9951231) Homepage Journal
    I usually don't post replies like this, but this question is ridiculously underresearched. OCR is a hard problem. Sure, an OSS alternative would be nice, but until a solution matures, when you really need OCR you need it, because it's generally unreasonable, either from a time standpoint or a budget standpoint, to use any alternative. That is why people pay for software sometimes.

    TextBridge, PaperPort, and a host of other entry level programs are available for windows under a $100 price point. Generally if you buy a decent scanner (ie not a $50 piece of crap), you'll get some software capable of doing OCR bundled for free.

    Higher-end OCR packages with better accuracy, more features, etc. often cost quite a bit more. OmniPage Pro is a decent package for only slightly more than $100. ReadIris is a really good program, and is reportedly very quick in comparison to some of the others. I imagine this is the reason that it costs $400.

    There are document management packages out there that have very good OCR integrated that cost a hell of a lot more than $400. Trust me, though, if you're looking at the time or cost of converting a few thousand pages of data into editable text documents, a program that costs even $400 should be a steal.
    • Why bother?? (Score:3, Insightful)

      by Syncdata ( 596941 )
      OCR was a good idea when hard drive capacity was less fantastic than it is today. The idea of taking a page of handwritten text and scanning it magically into a super-small text file was attractive. But OCR wasn't terribly accurate, and to make it so would require quite a bit of R&D. All of a sudden software houses need to hire handwriting analysts.

      In the meantime, hard drive capacity grew, and all of a sudden the difference between a 4k text file and a 35k jpg became negligible. The only real bene
      • I bother because the readings for my courses are available as PDFs created from page scans. Now, when I need to go back and find something, I can't grep through embedded TIFFs and JPEGs. So I tried AdLib Express, and it works pretty damn well, if not expensive as hell. Plus, it embeds the OCR results in the PDF so you can search within the documents as well.
      • Re:Why bother?? (Score:3, Interesting)

        by photon317 ( 208409 )

        The problem is that jpegs can't be grepped like text. People don't just want to scan a stack of images, they want the data to have meaning. In some cases they even want to parse typed hospital forms into an xml format for example.
        • Exactly; the old phrase "you can't grep dead trees" carries forward into this; perhaps it's time to start saying "you can't grep jpegs"?

          Also, how about other uses, like readers for the blind or visually impaired?

        • Legato's software scans an image, and OCRs it when a search query runs..
    • Higher-end OCR packages with better accuracy, more features, etc. often cost quite a bit more. OmniPage Pro is a decent package for only slightly more than $100. ReadIris is a really good program, and is reportedly very quick in comparison to some of the others. I imagine this is the reason that it costs $400.

      You are, unwittingly perhaps, succumbing to one of the most persuasive, yet oldest sales tactics in the book. Just because one costs $400 and another costs $100, there is absolutely no reason to as

  • What a good question (Score:3, Informative)

    by the Man in Black ( 102634 ) <> on Thursday August 12, 2004 @03:07PM (#9951264) Homepage
    My company just paid ~$1,800USD for OCR software (ABBYY FormReader []). We're scanning in stacks of healthcare forms, reading the data, and spitting them out into DBF format. Why? I don't know, I just do my job. It was my responsibility to review and demo other pieces of software, and ABBYY's was definitely the most robust. Open Source had, as the poster stated, few contenders and even fewer that had been worked on since the 90s.

    I don't know what happened to OCR, but there's certainly still a need for it.
  • Free OCR? (Score:4, Interesting)

    by Asprin ( 545477 ) <gsarnold@ya[ ].com ['hoo' in gap]> on Thursday August 12, 2004 @03:12PM (#9951316) Homepage Journal

    So far as I can tell, NON-free OCR isn't doing so hot either -- you pretty much have to proof-read and correct everything you scan anyway, which just makes it impractical for most purposes. If I had to scan a bunch of records, I'd probably outsource it to a pay service that specializes in that sort of thing, which means it would have to be worth the cost of getting it done.

    What I want to know is what's Google going to do about this? They have a catalog search in their Google Labs playpen that indexes products and their descriptions to make them searchable. ...and by searchable, I mean you can search for "bicycle" and it will highlight all of the instances of that word in some 200+ PRINTED catalogs, not similar HTML/XML/PDF electronic documents. So clearly, they know some things about OCR we don't (and probably 2D map indexing, too), but durned if they aren't letting on about it.

    In the next few years, I expect to see a fully automated Google OCR product that can not only scan your paper docs, but index them and help you search them too, all while maintaining the electronic copies in their original scanned (think photograph) state, not some bastardized, mistranslated and screwed-up PDF or DOC format.

    **THAT'S** what's going to kill Microsoft, and probably why they're so keen to risk overreaching on their IPO.
    • The product you describe has already been created by several companies. For example: ut modules/ocr.html

      Also, why should Google market this product? It's not like they're the only ones who can search OCR documents (if you've used's book searching feature, you'll see the same thing.) Also, it's not like they're going to use PageRank to help them search, because these aren't web pages.
      • Yup, one of the companies I used to work for (Stock Exchange) used OnBase for its OCRing. I admined it, so I got to get pretty in-depth with it (and not to mention training from Hyland is AWESOME! Not the training itself really, the nightly bar crawls and trip to the Rock and Roll Hall of Fame at the end!).
        It does what the above posters said plus it has this really slick way of placing the media - you can spread it out over a half dozen different SANs, some DVD-changers, a SQL database, etc and OnBase will k
    • "In the next few years, I expect to see a fully automated Google OCR product that can not only scan your paper docs, but index them and help you search them too, all while maintaining the electronic copies in their original scanned (think photograph) state, not the some bastardized, mistranslated and screwed up PDF or DOC format."

      The scientific publishing house Elsevier [] did this in the mid-90's.

      They took the past few years of several of their journals, scanned them in, did a less-than-perfect OCR on them
  • I bet someone could make a killing setting up an off-shore operation, say in India, where actual humans read your document and type in the text for you. It'd be cheaper and more accurate than high-end OCR software.
  • by MobyDisk ( 75490 ) on Thursday August 12, 2004 @04:09PM (#9952007) Homepage
    Ok, everyone laugh at me. I say the paperless office killed OCR. :-) Yeah, that thing that would supposedly never happen? That is the butt of office jokes? Well, I think it did and nobody noticed.

    How much paper do you see around you that wasn't already computer generated? Paper still exists as a convenient thing to hang up, or to take to a meeting, but it is always printed. There's no point in complex OCR packages when people can just get the soft copy.

    There is very little left to scan. Large organizations that are moving from paper to electronic systems already keyed the data in manually and don't need the technology anymore. The internet killed the need for faxes, which were unreadable anyway. What's left to OCR?

    With that said, my bank doesn't offer online statements, so I scan them every month. But I don't bother to OCR them. My credit card company just started, so that will leave me with one sheet of paper every month.
    • Not much that's typed, but a lot of printed filled-in forms still lying around.

      Think "teacher's comments" in school records, "officer's comments" on traffic tickets, doctor's notes, and in some countries, paper checks.

      Yes, a lot of that is moving towards digital-data-entry, and a lot of the rest is being moved to scan-store-and-shred.

      But in the meantime, there's a market for OCR and after-the-fact handwriting recognition.

      As an example, the folks at GrokLaw [] are putting SCO-related court case files online
  • by Smallpond ( 221300 ) on Thursday August 12, 2004 @04:51PM (#9952543) Homepage Journal
    The DP site [] does OCR and proofreads the results for Project Gutenberg. Anyone can join and spend a few minutes once in a while proofreading books. If you are kind of ADD like me, it lets you read about 3 pages of a book once in a while without having to actually sit down and do cover-to-cover.
  • I remember using TextBridge in 1998 on a Mac with an Apple scanner. It was quite excellent at the time. I can't say whether it has improved, but I cannot imagine that it has gotten any worse. On typed documents, I got about 98% accuracy--sometimes better.

    It is $80 now and there appears to only be a Windows version, but you appear to be running Windows, so no problem there. Enjoy.
  • by tweedlebait ( 560901 ) on Thursday August 12, 2004 @06:08PM (#9953354)
    (I'm in the document imaging / conversion industry)

    The term paperless office is considered a joke, and the funny part of it is this: as soon as someone looks up a document in their doc management system they just print it. Even if just to glance at! Copier/printer companies are thrilled!

    There are megatons of paper and microfilm out there left to ocr and process. It's considered a pretty fast growing industry, although stunted recently after the bomb and more by the economy.

    Having OCR'd images is very handy. Here's an open secret though -- image + hidden text PDF.
    --Searchable, you have the original doc just as it looked, and the OCR errors don't make such an impact. It's easy to throw into a search engine, the prints look great, and the files are small (b+w uses TIFF Group IV, and JPEG for color; JBIG is not quite mature yet and only a few apps from CVision do a great job at it).

    Anyway, since people just hit print as soon as they find their doc in a system, those file cabinets we tried so hard to empty and organize refill magically.

    Also, scanning and setting up an EDMS (electronic doc management system) is considered a luxury. Businesses move slow with luxury items and usually get to reap the benefits of more mature software and systems (but this is NOT always true!).

    Many other slow-tech-adoption businesses are just discovering scanning, OCR and doc management. Litigation is a great example. Xerox was doing quite a few TV ads recently touting that stuff.

    The state of OCR itself is strange. There has been a sort of plague in that industry of 'weird innovation' for years, and many buyouts or companies changing the focus of their OCR product to another industry (like web or XML). Even the small office versions ($500 range) are not geared for any sort of reasonable volume or speed without crashing and burning, and are usually designed to be babysat. Using these apps leaves the user with a really bad experience. For those not familiar, the process goes something like this for a 200-page b+w document:
    Scan (or import, but import is usually crippled)

    gaze at loads of memory-hogging eye candy (this is what your upgrade bought you, usually)

    correct skew (wait for crappy tools)
    (possibly reboot from crash)

    recognize page -- slower with each new version even when hardware is so much faster every year. Some recognition is improved in some packages. Some of the latest I've tested take over 15 sec and sometimes over 45 sec per page!

    Correct errors / tune learning engine. (sometimes I swear this effort of teaching goes straight to NULL)

    repeat 199 times

    Now since you're locked in your desk and finished scanning, it's time to export! (like I didn't know what formats I wanted before I sat down.)

    So it chews and chews and maybe crashes, causing you to repeat all the above steps. Also note that most of these apps keep all the pages pretty much uncompressed in memory, then create a copy of them in memory for your desired output format. (crash)

    2 days of work gone.

    Most users walk away with the feeling of 'Yikes! All I wanted was a word doc of this. I'll just do something else.'

    For the home and also small-biz market, here are some of the 'weird innovations'--

    TypeReader 5 -- pretty good app! Doesn't do image+hidden text PDF though. Pity. Has batch file import and is reasonably priced in the $100 range. Nice and fast with good results.

    TypeReader 6 and up -- the file import feature moved to the industrial version, lots of eye candy, less stable, minor improvement in recognition, and a bunch of other silly limits & slow.

    OmniPage -- same thing, only it's never been great for over 50 pages. Horrid workflow and crashes like crazy. Very unpredictable!
    OmniPage version 3 was better in many ways than OmniPage 14. (lightning fast on today's equipment too :)

    ABBYY FineReader -- very slow but great recognition, more stable but lame workflow-
  • I've had good results with Transym OCR. I had to run it under VMware. I tried all the F/OSS but it produced unusable results. I think it cost around $40.

    I am heading up a project to convert an out-of-print computer book to LaTeX (with the author's permission), and one of the volunteers suggested this package. One other nice thing about it is that the registered version comes with API documentation and VB6 source code to the front end, so you can change it however you want as long as you don't need to modify
  • Yielded these programs, last updated in 2004, that to some degree deal with OCR:
    http://w d/ocrad.html
    http ://
  • What's paper preciouss?
  • A Service Bureau (copy shop or whatever) will do OCR in bulk for about 10 cents a page, and that includes the scanning labor (which is sometimes done offshore).

    So, $400 buys you a lot of OCR -- especially when you consider you have to pay labor costs, document management costs, etc. on top. So, I wouldn't deploy OCR software unless it's a once-in-a-while thing or something that's central to your business process.
  • It doesn't work that well, and is a PITA for forms. What is worthwhile is imaging the file - just scan the document you want, and "file" it in a directory. When you want a document, look it up the way you would normally then print it. Presto.
  • by tweedlebait ( 560901 ) on Friday August 13, 2004 @03:32AM (#9956399)
    It's slightly off topic but seemed appropriate.

    Here's some quick tips/nuggets of crispy wisdom.

    The art of OCR is like working with autistics: give them what they expect. The more surprises, the more episodes.

    Don't believe the hype.

    Scan black & white to TIFF Group IV. OCR systems are optimized for this. Color is new and pretty wacky still. BMP even freaks out in black and white on some packages.

    Make sure your background is white and clean, not speckled. Despeckling tools can be overused and kill OCR results.

    3 hole punches regularly show up as o O 0 D
    staples: ~ .. // c d

    Deskew all images to a line of text, not the page

    Scan at 200-300 dpi but not higher than 600 or most apps will choke and produce bad results.

    Make a custom dictionary if you can. If you're doing automotive-related stuff, look up auto terms and make a dictionary out of it.

    To process tiny text (concordances etc.) scan at 800dpi and then fool the OCR by scaling the image to 300. Sounds nuts, right? OK, try it the logical way first and then come back and try this technique.
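The 800dpi-then-downscale trick is just resampling to the DPI the engine was tuned for. The pixel math, as a minimal sketch (the 8.5x11 page size is the standard letter dimension, used here only for illustration):

```python
def rescale_dims(width_px, height_px, src_dpi, dst_dpi):
    """Pixel dimensions after resampling a scan from src_dpi to dst_dpi
    while keeping the same physical page size."""
    scale = dst_dpi / src_dpi
    return round(width_px * scale), round(height_px * scale)

# An 8.5x11 in page scanned at 800 dpi is 6800x8800 px;
# downsampled to 300 dpi it becomes 2550x3300 px.
print(rescale_dims(6800, 8800, 800, 300))  # -> (2550, 3300)
```

With an imaging library such as Pillow, the actual resample would be something like img.resize((2550, 3300), Image.LANCZOS); the point is that the engine then sees stroke widths at the 300 dpi it expects, but rendered from a sharper original.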

    Shaded text is a new thing in documents, as are inverted text blocks (thanks, make my job hell). You must remove the shading with something like ScanFix by TMS Sequoia -- a good tool for small doc cleanup pre-OCR. Requires practice and trial and error. The interface needs some work though.

    Dot matrix prints should be scanned with some blur added to join the dots (unless you are using something expressly made for DMPs). As always, your mileage may vary.

    Turn off auto-rotate (mangle) features. They are not very smart and often have monkeyvision. Just review your images beforehand and rotate accordingly.

    If you're scanning something poster-size, or engineering-drawing size (not recommended for most OCR), cut it into smaller images -- ideally regions of interest not larger than 8.5x11.

    Remember 99% accurate means 1 character per hundred will be screwed up.

    Table of contents pages are an interesting test for OCR, especially if they use periods to lead to page numbers. How many identical characters can occur before the OCR system misreads? Often quite telling.

    OCRing a spreadsheet and using the data without verifying every character? May the monkeygods help you.

    The above applies to processing screenshots, 17th-century print, tabloid print, multi-column, shaded-background handwriting w/o special software, modern magazines, etc.

    OCR does not like sans-serif fonts much.

    The post office spends millions on theirs and they have a nice address DB list to verify against.

    Over 90% of banks scan your checks and microfilm them. (And there is some really cool signature verification software out there for forgery detection.) The MICR font at the bottom helps them immensely.

    Breaking down your document into areas can be useful. Changing fonts and sizes sometimes throw it off. An example would be computer lit with code snippets interspersed.

    Do yourself a favor, if it applies, and use image+hidden text PDF. Raw OCR is almost always yucky, and all those claims of preserving document layout and format are just that -- claims.

    If you do use I+HT PDF, or for a larger job for that matter, do it in small chunks so your app doesn't crash. For PDF, join the small documents together in Acrobat later or use other tools to do so.

    For fun and science, take an old apple newton 100 and trace over some of the text on your page and compare its results to your ocr package.

    Anyway, I hope that helps someone avoid a few landmines; there are many more tips out there. These are from my experience and off the cuff.
  • My guess would be that OCR lost its appeal when pretty much every text on paper originated in a computer. OCRing nowadays is a need for only a niche of users (form scanning, archives and stuff like that), and those are always expected to pay the premium.

    Anyways, the possible comeback of OCR may occur in the near future, with the inevitable ubiquity of camera phones and the processor power behind them. I sure could use a phone that could scan a URL from a newspaper and take me there. Or call a phone number pri
  • And I can tell you that good OCR is WORTH the money.

    And people WOULD pay more for better accuracy. My company pays huge amounts for OCR work, usually getting in boxes of CD's each week.

    Lawyers consume OCR capacity like it was Wine the night before Prohibition starts up.

    Yes, we use the low-end junk they put out now, but we would love to pay much more money for stuff that was even 10% better. Right now, even OCR of typed documents SUCKS!!! Yes, it is 99+% accurate, but one letter off in one hundred mea

    • We often get crap that looks like this: "1+ is 1MP0$$18!E" instead of "It is IMPOSSIBLE"

      If it happens often, I can't believe the OCR software doesn't have some way of flagging unlikely usages of '$' with suggested autocorrection.
      • The problem is that there are just too many different possibilities, and too many errors. $o could be So or could be $0, and then there is the real problem: the minor imperfections from copying/on the page that get picked up and turned into weird punctuation. If you OCR a blank page, you often as not get . ' , : ; and all sorts of weirder junk strewn randomly over it.
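One way the flagging suggested above could work is brute-forcing a table of common glyph confusions against a word list. A toy sketch (the confusion table and dictionary here are invented for illustration; the candidate count grows exponentially with word length, so a real system would need to prune):

```python
from itertools import product

# A small, invented table of common OCR glyph confusions; a real engine
# would derive something like this from its error model.
CONFUSIONS = {
    "1": ["1", "I", "l", "t"],
    "0": ["0", "O"],
    "$": ["$", "S", "5"],
    "8": ["8", "B"],
    "!": ["!", "L", "l"],
    "+": ["+", "t"],
}

def candidates(garbled):
    """Yield every string reachable by swapping confusable characters."""
    options = [CONFUSIONS.get(ch, [ch]) for ch in garbled]
    for combo in product(*options):
        yield "".join(combo)

def correct(garbled, dictionary):
    """Return the first candidate that is a dictionary word, else the input."""
    words = {w.upper() for w in dictionary}
    for cand in candidates(garbled):
        if cand.upper() in words:
            return cand
    return garbled

print(correct("1MP0$$18!E", {"IMPOSSIBLE"}))  # -> IMPOSSIBLE
print(correct("1+", {"it"}))                  # -> It
```

This recovers "IMPOSSIBLE" from "1MP0$$18!E", but it also illustrates the parent comment's objection: with random speckle characters on the page, the candidate space explodes and many garbled strings match several plausible words.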
  • Not super-great but it seems to have some usefulness (for Windoze):
