From Paper To PDF?

Spoing dropped this informative bit into the bin: "Last week, a friend of mine griped that he didn't know of an easy way -- short of getting Adobe Capture and paying per-use licence fees -- of creating searchable PDFs. I scoffed, and told him I've done it many times, and for free -- as in beer and speech. Dumbfounded, he pushed me to show him how, and I did: print to a Postscript file, and run ps2pdf on it... done! Since nearly any document can be output as Postscript, his problem was solved. If he wanted to batch process the documents, he could set up a few scripts to simplify the task. While he was impressed, he ended up asking what seemed like an easy question: 'Can you do the same with a scanned image?'" And therein lies the question...

"After a week of on/off searching, I did find some good references as well as nearly all the parts necessary for the job, including open source OCR engines, PDF and Postscript tools, search engines, and the like.

Unfortunately, I came up with only two solutions -- neither of them Open Source, and both quite costly (premium beer): Adobe Capture or dedicated "PDF scanners" like this one.

My question to the Slashdot crowd is this:

  1. Is there a cost-effective way of moving existing dead-tree documents into HTML, PDF, or some other searchable mixed text-and-graphics format?

We all deal with a mix of electronic and printed documents -- and if you're like me, you've paid for some of them in both formats.

If you're like me, you buy new documents in an electronic, searchable format when you can. How many of us have O'Reilly's Networking Bookshelf, or some other CD texts, ready to search on our notebooks and networks?

Yet I have a four-foot-wide stack of technical documents and books that just isn't going to come with me on every plane trip. I'm not going to get rid of them -- they are still valuable -- but I can't figure out how to make them useful more often.

The available tools for capturing paper and converting it into searchable PDFs are costly, and are geared toward corporations that can justify the cost by the number of users. To me, a per-use licence for Adobe's Capture (see Adobe Capture - Prices and Adobe Capture - Features) is just not cost-effective.

If the document is already a text document -- even if it's in some word processor I don't use -- generating PDF files is easy and cheap:

Print the document to a Postscript file, or create one. For example, a simple text document is trivial:

  enscript file.txt -p file.ps

Convert the resulting Postscript file to PDF:

  ps2pdf file.ps file.pdf
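
For batch processing, a few lines of shell would do it -- a minimal sketch, with placeholder filenames:

  for f in *.txt; do
      enscript "$f" -p "${f%.txt}.ps"        # text -> Postscript
      ps2pdf "${f%.txt}.ps" "${f%.txt}.pdf"  # Postscript -> PDF
  done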

Converting a paper document to PDF is also easy: just scan the image and use tiff2ps or jpeg2ps to create the Postscript file. The only problem is that the resulting PDF is a bitmap image and isn't searchable.
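
The scanned-image route is the same two steps -- a sketch, with tiff2ps from libtiff and placeholder filenames:

  tiff2ps -a scan.tif > scan.ps   # -a emits every page in the TIFF
  ps2pdf scan.ps scan.pdf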

Interestingly enough, TIFF -- a format used extensively for scanned documents -- does support TIFF+Text, but usually as an extension to TIFF, and it isn't really an optimal format; see The Unofficial TIFF Home Page.

So, if you want to search the documents and keep the formatting and diagrams, you're back to paying Adobe for Capture or some other nearly as expensive method. "

This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward
    DjVu [djvu.com] is a much better solution than PDF for paper->web conversion.

    The files are 4 to 8 times smaller than PDF for B&W documents, and the DjVu plug-in is 10 times smaller and a whole lot faster than Acrobat Reader. It runs on Linux/Unix, Windoze and Mac.

    The DjVu compressor is free for non-commercial uses, and the decoder source code is available.

    Expervision [expervision.com]'s OCR software can read DjVu files. They even have an OCR toolkit for Linux.

    Although DjVu supports embedded searchable text, Expervision's engine cannot embed text into DjVu files, only produce a text file (or a number of other formats). For web-based search, you can use a simple CGI script to return the DjVu files that correspond to the text files that contain a match to the search string.
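
    A minimal sketch of such a CGI script in shell, assuming each foo.djvu sits beside the foo.txt the OCR engine produced (the paths, and the unescaped query handling, are simplifying assumptions):

      #!/bin/sh
      echo "Content-type: text/html"
      echo
      echo "<ul>"
      # grep -l lists the text files that contain the search string
      for hit in `grep -li "$QUERY_STRING" /var/www/docs/*.txt`; do
          base=`basename "$hit" .txt`
          echo "<li><a href=\"/docs/$base.djvu\">$base</a></li>"
      done
      echo "</ul>"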

  • by Anonymous Coward
    I know that there's software out there that does it, but that's really not the point. Just because there exists a piece of software to accomplish something doesn't mean that something is legal; see DeCSS (though of course in that case, the law is complete nonsense and it should be legal!)

    Unless I miss my guess, Adobe has patented the PDF format and only Adobe Acrobat (and other related products) can legally generate the PDF material. We had an Adobe rep out here once, and we asked him about third-party PDF authoring software, and he told us the same thing: all roads to PDF generation lie through Adobe. It's kind of a raw deal, if you ask me, but at least there are viewers for Linux. Proprietary formats are never a good thing, but they could be far worse (see MS Word!)

    Anyway I don't intend to tell you how to do business. If you're happy with the way your setup is working for you and you're not worried about the possible legal implications, then by all means, go ahead. Just remember that ignorance is no excuse in the eyes of the law (a lesson that I've learned the hard way a couple of times!)

    --
    Dale Sieven
  • by Anonymous Coward
    A friend asked me the same thing recently... by accident I found that Adobe now offers this service to anyone on their website for free. You upload your file (MS Office/graphic file), they convert it and mail it back to you... free (as in beer). Limits are 50 MB max and/or 15 minutes processing time. http://cpdf1.adobe.com/index.pl?BP=NS - The Hutch-Meister
  • by Anonymous Coward
    On my system, ps2pdf is a shell script that checks some arguments, then execs gs. It is included in the ghostscript distribution.
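
    For the curious, the wrapper boils down to roughly this invocation (exact options vary between ghostscript versions):

      gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=file.pdf file.ps
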
  • I have a bunch of jpeg files of some documents that are out of copyright that I'd love to convert to text, but I haven't seen anything that actually seems to work. There _is_ the SOCR project [socr.org] but they don't seem very far along.
  • by Anonymous Coward
    At work (Department of Agriculture! woohoo!), I had a Beowulf cluster made up of old (and I mean OLD) AT&T Np-17s, running Linux. Now these are 386-based (hence the need for a Beowulf cluster to get any power out of them), so we were kind of limited in the type of scanner we could use, but someone dug up an old parallel-port greyscale, so we were fine.

    Using SANE, we scanned the documents (all text based), and used enscript to convert them to PS (we didn't use PDF, but you can just filter them with ps2pdf as you already know). There were some problems with a few of the papers, which dealt with the nutritional value of a few hot breakfast foods, but it turned out that the FDA logo at the top was giving the old scanner trouble. It turned out it was set up for text scanning only, so don't forget to check your scanner settings.

    All in all, the higher-ups were happy, and so was I.
  • I think you might want to consider investing in a 40 metre tall video wall for each side of your building then. Or maybe just some well made telescopes to distribute among your readership.
  • Gee, the paper I work for uses PostScript for negative-making (the negatives are used to make printing plates) yet we get AP AdSend ads all the time, as well as PDFs from other sources.

    How do we use them? Well, we can do a number of things:
    1.) Use Adobe Acrobat to export eps's.
    2.) Use QuarkXPress 4.x+ and import the PDF as an image file.

    The paper I work for also does job printing; we print a number of papers for a few small towns. We get the pages sent to us on a 250MB Zip disk as PDF's. Most of the time we don't even have to convert them. How? Well, our negative maker (or imagesetter) is really a combination of a dedicated "printer" unit and a Power Mac. The Mac handles some of the conversions necessary; our OPI server can actually make 4-color separations of PDFs automatically.

    Yes, PostScript is the standard. But, hey, when something doesn't come in as a PostScript file, you convert. Either that, or you don't get paid to do the job, and your competition does so for you.
  • We used this some time back to scan whole books (including graphics). Just scan the pages in b/w and the pictures in color or line art, and use Acrobat to OCR the pages.

    The texts were Dutch, and we were amazed at the quality: about 99% accurate compared to the paper version!

    Definitely a winner in our book!

  • There is a group of people working via Cosource.com that aim to produce an open source OCR solution:


    http://www.cosource.com/cgi-bin/cos.pl/wish/info/337 [cosource.com]


    [Disclaimer: I work for Cosource.]

  • Follow the link and you'll see why....

  • I think part of the size reduction from going to PDF is that PDF does compression (zlib? something like it?) on the contents as part of the file format. (or am I wrong?)
  • Quoth the poster's friend:

    "The searchability factor is the only reason OCRing is needed in most instances."

    That and the need to keep the total file size down to a manageable level.
  • It doesn't matter what format the source is; you can generate PDF files without problem.

    One of my projects is a Java PDF generator library that allows anything that can be printed in Java to be sent to a PDF file.

    To do this, I used the PDF specification that Adobe publishes deep on their web site (I can't remember the link, but a search will find it). The first few pages actually encourage third-party developers to write their own generators.

    The one thing they do restrict is that no one can "change the PDF format", but that is reasonable -- why (unless you are a certain unmentionable corporation) would you want to do that?
  • IE5 also has a nice webpage saver that saves all the files needed by the page in a separate dir, and redoes the links on the page to point to them. However, it pukes if there is one image that didn't load, and it inserts some random MS crap into the file it saves, including a comment giving the URL the page was grabbed from (useful), and a few MSHTML thingies (not very useful).

    --

  • (Off-topic)

    Ok, how many of us just opened another window, went to Freshmeat, and spent 5 minutes searching for such a utility, hoping to score some Karma for a link to it? :)

    --

  • ...because you'll get hits on photos of racing cars, Tibetan general stores, liquor shops, yachts, delivery trucks...
  • Try KFM, Mozilla, Amaya, almost anything else.
  • If you're scanning these using an autofeeder, run each stack of documents through three times and write a little parser (patch could almost do it) to sync up the text from all three scans. Where there is not unanimity, have a "vote" to decide which word fits (and maybe spell check the relevant word(s) to see if one of the votes matches a known word, or if a suggested spelling alternative matches one or a majority of the votes).

    This would sometimes fail where the source document is mis-spelled. A side-effect might be electronic copies better than the original.
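
    A crude line-level version of the vote, assuming the three OCR passes line up line-for-line (a big assumption; real runs would need alignment first):

      # scan1/2/3.txt are three OCR passes over the same stack
      paste -d '\t' scan1.txt scan2.txt scan3.txt | awk -F '\t' '
          $1 == $2 || $1 == $3 { print $1; next }   # two of three agree
          $2 == $3             { print $2; next }
          { print $1 " [CHECK]" }                   # no majority: flag for a human
      '
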
  • Visit FooLabs [foolabs.com] and get a copy of xpdf, if your distro hasn't got one already (I'm using Mandrake 7.1 but I recall xpdf in every version of Mandrake from 6.0 on). Type:
    pdftotext filename

    Remember to add -ascii7 if MeatheadSystems' Index Server doesn't like Latin1 character sets.
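
    To feed an indexer, a one-liner over a directory of PDFs (a sketch):

      for f in *.pdf; do pdftotext "$f" "${f%.pdf}.txt"; done
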
  • I wonder if there's a way to search an image for text. Let's say you want to leave the document in image form, say, to preserve the original look, perhaps for historical documents. If it's all in the same font, I wonder if it's possible to do a "text search" by searching the image for the appropriate patterns in the image. This may be a reproduction of how OCR works, although I think it's a separate functionality that could be quite useful in some circumstances.

    Any thoughts from image gurus on the viability of this?

  • didn't check the link, did you?
  • The most logical and economical solution to your problem is to start a cult and attract 10-15 monks who will spend their every waking hour recreating your precious technical documents in digital form.
  • There is an option with Acrobat to do OCR on an image. I worked with a professor who is also managing editor of a journal to add a searchable set of PDF's of all issues of the journal. He hired a student to feed the scanner. They then loaded the images into Adobe Acrobat, ran a text capture on it, and bampf: searchable. They kept the original image with the text behind it, so they didn't have to correct the mistakes of the OCR. I don't have a copy of it in front of me, so I can't be more specific.
  • A nice GNU OCR package:

    http://www.socr.org/

    Not currently being developed at a noticeable rate.
  • The pdf writer plugin does the same thing. When we deliver documents to our clients, like finished test plan results, all the people here do a print to pdf (they think Word is actually GOOD for technical documentation).

  • <RANT>Personally, I *hate* PDF. It's a format for people who print everything before reading. Ugh. Buy a bigger monitor and read on screen!</RANT>
    Anyway, I think the way to do it would be to do the following:
    Acquire the images either by scan or by fax. (Or other docs by email or FTP... Why not make this more comprehensive?)
    Store them in a database.
    OCR them as best you can with the tools available at the time.
    Store this OCR'd text in the same row as the image.
    Create a field of keywords derived from the OCR'd text and use this for searches.
    Now you have a simple database of everything you need: the original image (or document, or whatever) and the 'best guess' as to the contents of the image.
    If a user wants a PDF, let it be created at runtime - Pages 1,2,3...x are the images. The last page is your searchable index of keywords.
    If a better way presents itself later, do that.
    If a user wants it in HTML, great. You can even embed the images.
    The benefits of using a database are these:
    You can always go back and re-OCR the image when better Open-source tools are available.
    You can search your whole company's documents, not just one at a time.
    You are not limiting your users to using one format.

    Don't think of this as a process that has to require a lot of user intervention and only gives you a dead-end format!
    With this method, you are not limiting the output.
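
    A minimal sketch of the ingest side, assuming PostgreSQL and some OCR command called "ocr" (both the table layout and that command name are assumptions, not real products):

      psql docs -c "CREATE TABLE pages (id serial, img_path text, ocr_text text)"
      scanimage --resolution 300 > p001.pnm   # SANE
      ocr p001.pnm > p001.txt                 # stand-in for whatever engine you have
      # double up single quotes so the text is SQL-safe, then insert
      TXT=`sed "s/'/''/g" p001.txt`
      psql docs -c "INSERT INTO pages (img_path, ocr_text) VALUES ('p001.pnm', '$TXT')"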

    Cheers,
    Jim In Tokyo
  • If you need to create PDF's on-the-fly based on, say, database queries or the like, go grab Zope and the ZpdfDocument plugin.

    Go to:
    http://www.zope.org/

    ... and grab Zope:
    http://www.zope.org/Products/Zope/2.1.6/

    ... and the latest version of ZpdfDocument:
    http://www.zope.org/Members/gaaros/ZpdfDocument/

    We use this where I work (IT dept.) in production.
    Apart from the fact that it only handles various kinds of text so far, it runs perfectly.

    Just click the "report" link and Acrobat opens. Neat.


    Best regards,
    Steen Suder
  • While it's a commercial product, Trapeze from Onstream Systems [onstreamsystems.com] may be a good idea. Basically, it uses funky document-handling gizmos to scan and process paper-based content. A system has recently been devised to turn a very large chunk of Ireland's marriage and death records (and I think birth records) into a searchable, electronic document system.
    As far as I know, they do OCR as well. Altogether, it's pretty darn cool. And no, I don't work for them :)

    Cheers,
    Graeme
  • This can't work. Existing utilities to convert PS to HTML are text-only. That is, ASCII text, not graphical representations of text. And even if it did work, it would be a horrendous bother, with manual cutting and pasting of images. But I digress...

    Feeding such a program a Postscript file which contains nothing other than an image will not produce the desired output (if any output at all).

    It doesn't matter how many times you convert from one overlapping format to another; OCR systems don't just materialize out of the ether, someone has to write them. And so far, those who have done so don't see the need to give them away.
  • Contact the support desk for your commercial software to answer this. That's why you pay for proprietary software right? Right?
  • A Beowulf cluster of high school students?
    You're going to give them swords?
    Take a look at "Romeo and Juliet"... No, wait, Romeo and his peers were of junior high school age...
  • The Linux HOWTOs only print out like that if you're looking at the multipage HTML version.

    They are created with tools which create documents in several formats. If you want to print the entire document, you should use one of the formats which contains the entire document.

  • If you can get it to PDF, xpdf [foolabs.com]'s "pdftotext" can get you text for categorization. Indexing to actual pages is a little more complex, but a script can do it because that command can select the page to convert.

    Or there's the related PDFTOHTML [uni-stuttgart.de] if you prefer that for your access method.
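
    A sketch of the per-page extraction with xpdf's tools (pdfinfo reports the page count; -f and -l select the page range):

      pages=`pdfinfo doc.pdf | awk '/^Pages:/ {print $2}'`
      i=1
      while [ $i -le $pages ]; do
          pdftotext -f $i -l $i doc.pdf doc-page$i.txt   # one text file per page
          i=`expr $i + 1`
      done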

  • In our environment, we have lots of users of different systems, few of which talk to each other (the systems, that is). Many of the systems we create can only address one small part of the overall need but do solve all of that one part. In a small app we're doing now, we have a need to print out a form for the user to sign after he enters a bunch of data. We also keep that data in a database so we can use it when we automate the rest of the system. That will require working with other divisions and departments and is not a small task. There is no reason not to provide a useful product even though it is not perfect.

    Creating a PDF document from the web app is the best way to make sure the form can be used, since the users may need to have HTML fonts, colors, etc. overridden for their use, but the form must be properly formatted with specified fonts, etc.
  • DocuLex [doculex.com] has a program that is certified by Adobe and an alternative to Acrobat Capture. It is actually used by the Ricoh scanner you linked to. It appears to be cheaper.


    I haven't had much luck finding anything cheaper. Ideally, I would like something to hook up to our digital copier and convert the scans to .pdf files. I've talked to every photocopier company and no one has a product. They seem to be missing a huge market, but oh well.

  • Since the main problem with OCR is, of course, proofreading, there's a quick and dirty way to do this without the dreaded proofreading step - at least not up front.

    Set up your script to link the OCR page with the original scan. That way, your search engine will most likely be able to get you to the correct page, but if the OCR hoses some important words, you can always just click "original page here" to see what it said. This would allow near immediate functionality of your new database and would allow you to proofread "on the fly" so to speak and correct errors when you find them.
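
    A sketch of that linking step, assuming each p*.txt of OCR output has a matching p*.png scan (the names are placeholders):

      for t in p*.txt; do
          b=`basename "$t" .txt`
          {   echo "<p><a href=\"$b.png\">original page here</a></p>"
              echo "<pre>"
              cat "$t"          # the OCR text your search engine will index
              echo "</pre>"
          } > "$b.html"
      done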

    This should be a good solution (even though it is a bit of a hack job) especially if the searcher is familiar with the particular documents and can devise several searches - in case a keyword or two is munged by OCR.
  • I think there may be tools for converting scanned documents to HTML for the web since HTML is an open standard and the web is everywhere. Loads of vendors work with the web and there may be more tools than for PS or PDF alone.

    Printing from Netscape to PS and then using ps2pdf gives nice and searchable results.
  • He mentions this at the bottom of the article, ye who cannot be bothered to read the article.

    The guy wants to know how to take an image containing text, and create a pdf containing an image, with that text as real text, not a bitmap.

    I.e., some software that will OCR the image, grab the text from the image, create a pdf file with that text in, preferably in the same layout as before. If the original image had images with the text, then the images should be preserved in the new document.

    Why the poster didn't say so in such a clear manner is beyond me though!

    Oh, and moderators, this is "Redundant", not "Informative".

  • That is going to create a bitmap-type eps file where each pixel is a point. That will not extract the text from the original document. You will need to use some recognition software (warning: entering an area where I know very little). I remember a product called "Text Bridge" that was supposed to take a scanned image and try to recognize the text within the image.
  • Shut the hell up.
    You make copies.


    I'm not insulted because I only work to make money. As long as I am paid well, treated with respect and left alone in my private life to enjoy myself as I will, I don't have any compunctions about making copies.

    Meanwhile, I can work from the inside of a large corporation to fight for the right of consumers to make copies of things.

    Maybe you don't know, but Kinko's ability to make copies for people was hampered by a lawsuit from textbook makers. Kinko's can't make copies of copyrighted things, and are expected to make every effort to prevent customers from doing the same. In spite of the fact that what they want to do might be fair use.

    Because we are not legally permitted to make the distinction, we are not allowed to do anything that could possibly infringe.

    That, and they give me plenty of vacation, holiday and sick time; schedule around my education; and pay for me to go to school.

    Not so bad for just making copies. :)
  • I considered doing the same thing years ago with scanned images. I scan hundreds of images per month, and I thought free-form text search of the scanned images was in order.


    At the time the only OCR software that I could find on Linux was from a company called Vividata. At that time they were just adding Linux support and it didn't seem to work for shit, but the support was pretty new.


    I use shell scripts to drive SANE programs to do the scanning and conversion to PDF using convert (ImageMagick) and then ps2pdf (ghostscript). If the Vividata product actually works now, it might be nice to scan, then OCR, then convert to PDF. A quick index by ht://Dig will then make a nice searchable archive of scanned docs.
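
    The pipeline amounts to something like this (a sketch; scanner options vary by SANE backend):

      scanimage --resolution 300 > page.pnm   # SANE
      convert page.pnm page.ps                # ImageMagick
      ps2pdf page.ps page.pdf                 # ghostscript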


    The Vividata products however are not free, if this is a consideration.


    --Aaron Newsome

  • by Gleef ( 86 )
    Neither http://www.linuxdoc.org/docs/OCR/OCR-HOWTO-0.1 (what you wrote) nor http://www.microsoft.com (what your link pointed to) gives OCR information. There is a little info in the Access-HOWTO [linuxdoc.org], and a little in the unofficial AI/Alife mini-HOWTO [daphnis.com]. I couldn't find any OCR-HOWTO, and would love a real link to it if you have one.

    ----
  • by nstrug ( 1741 )
    Next time you're in Hong Kong, buy 'Adobe Special Edition' for about $10. Every Adobe application and plug-in there is, including Capture!

    For some reason it comes on a CD-R with a xeroxed insert. I can't imagine why Adobe would let their packaging standards slip so badly...

    Nick

  • That's funny -- I generated some pristine pdf documents using php *this* *week*, and the pdf library used by php is right here [pdflib.com]; it works wonderfully and comes with source. Things have apparently improved since you last looked; the relevant php documentation is here [php.net]
  • In all honesty, you're not going to get away going from dead tree to digital paper without proofreading at least once. No OCR package is that perfect. Same goes for your data entry folks.

    I've been thinking about this for a while... can't you just scan and OCR it once, nudge the paper on the scanner, scan and OCR it again, and then use a script to compare the two files? You could use more than two scans if accuracy is that important.

    Something that's been common in the "warez" ebook scene is that people will often correct mistakes in the book as they're reading it, and then spread the corrected version. After a period of time, the book becomes more and more solid.

    --

  • Correct. I've used this on my Mac and it works pretty well. The OCR probably misses 10-20 words per page, but is quite good about flagging them as unsure. It has a good interface for going back to do touch-up on those. It also has a fair interface for running a scanner, getting the data directly into itself, and doing this for successive pages. If you need this, go spend the $250 and support your non-free developers out there in the world.
  • Old keypunch standard practice was to keypunch the holes in the cards, then someone else repunched in verify mode -- it compared and notched the card if it didn't match. For some reason, that practice seems to have disappeared. Do data entry shops still verify the entered data?

    So hire two sets of interns or high school kids. Compare the two. Pretty easy. Twice as expensive to get the data in, but it would be more accurate.

    Doesn't solve the problem of unreadable original documents which are misread both times, but that's a different story.

    --
  • That library is great, but if you read the license agreement it is not free (beer) for commercial use. And since we were being paid to develop what is most definitely a commercial site, unless we got the client to cough up the cost of the lib it wasn't going to be an option.

    Not to mention I tend to prefer free (Beer,speech) software for anything I do and anything I pass along to clients.

    Luckily, a bit of work with google turned up some guy in England who had written his own PDF libraries (not nearly as nice as PDFlib linked above) which were GPL'd and had enough functionality to do what I needed.
  • If you don't have to process huge amounts of pages then Adobe Acrobat can do what you want: It's basically a cheap version of Adobe Capture that is probably not as fast and not as easy to use. The "Paper Capture" option is located under the "Tools" menu. I don't think that Adobe will bring out a Unix version of Acrobat 4.0, therefore this is a MS/Mac only solution. But it's more cost effective than Capture.
  • I am not sure, but I think this may be just a UNIX problem. I bought a scanner a while back and it came with Windows software to do the conversion for me. I have not tried mixed images and text yet, as I have not had a need. TextBridge is the name of the software. I found some info about it here: http://www.digitalriver.com/dr/v2/ec_MAIN.Entry10?SP=10023&PN=1&V1=160950&xid=19198 It is not open source and it is fairly inexpensive IMHO. If you buy a scanner, I think it comes with this software. It says it can retain color and images. Maybe this and Wine? Or maybe enough people will ask them to port to Linux. I think that right now it outputs to Word and WordPerfect.

    Does this help??

    send flames > /dev/null

  • If you have ghostscript installed on your computer, you probably already have this. Most, if not all, Linux distributions include it by default (okay, maybe not the "micro-distributions") to allow postscript files to be filtered to your printer port for output. Try typing "ps2pdf" at the command line and see if you get anything. Also, you can try www.ps2pdf.com, an online engine that lets you upload the ps file and then download the pdf file. Ghostscript is also available for Windows; you will have to search the installed subdirectories to find the "ps2pdf.bat" batch file that does the same thing.
  • No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means - graphics, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems - without the written permission of the publisher.

    Yes, but their statement about you not being able to do that, is just plain wrong. Just because they say you can't, that doesn't mean you really can't. You didn't actually put your own signature under those words, did you?

    If you didn't sign that page of the book, and you didn't get the book directly from the publisher under the terms of some weirdo contract (as opposed to buying it from a bookstore), then the only real restrictions are the ones stated under copyright law. Moving the book into a computer sounds pretty Fair Use -ish to me. Just don't violate the copyright.


    ---
  • He's a troll, but he's funny and subtle. "hot breakfast foods", indeed!
    --
    Compaq dropping MAILWorks?
  • Unless I miss my guess, Adobe has patented the PDF format and only Adobe Acrobat (and other related products) can legally generate the PDF material.

    File formats can't be patented, they can only be trade secrets (I believe.) Otherwise don't you think Microsoft would have patented .doc format? That would be an extremely easy way to kill off WordPerfect, StarOffice, AbiWord, etc., dot dot dot.
    --
  • That's a pity. I used a Mac version 4-5 years ago and it was fantastic. Zero intervention produced _very_ accurate text. Give it the extra few minutes and it was superb. Sorry to hear it's gone downhill. Wonder why?
  • Whether you lose the formatting or not will depend upon the OCR software. The OCR software is looking at the scanned image and can be aware of where on the page it is looking, then use that to create a page which looks similar (with whatever formatting commands the OCR program uses...).

    The original article didn't mention which nice public OCR programs he found, so we don't know the capabilities of what he already found.

    What he needs is an OCR program which can separate text from images and format the text and images in a similar way on a PS or PDF page. At that point PS or PDF to text programs can be used for indexing.

  • What opensource OCR have you found? And how "intelligent" is it?

    What I'd like to do is enhance the intelligence of OCR, for things like forms. The three things that would be useful are these...

    The ability to define rectangles and lines before OCR happens, so that it will interpret them as graphics as opposed to part of the text.

    The ability to define columns and groups better, and what type of information the column has -- for instance phone numbers, addresses, etc. (and thus quit translating 6 to b...).

    A list of frequent mistranslation pairs -- OCR tends to make consistent mistakes -- if the spell checker were to substitute the alternative character pair for the mistranslation, I would receive a lot fewer misspells.

    I figure that those three options would increase the accuracy of the OCR software that I've been using by 95% easily. (The other five percent is from "Fax noise", photocopy fade, and handwritten notes...)

    LetterRip
  • We're about to set one up here: Teleform [cardiff.com] takes data right from the scanner, OCRs (reads) it, passes the text and the image (tiff or pdf) to an image database (alchemy or imagexx [imagexx.com]), which has search tools and links to various webserver software. The whole thing will be stored in a DVD jukebox. It wasn't my call, but even though we have huge SPARCs and stuff at our disposal, this will all be under NT (imagexx runs either).

    Total cost: more than I'm worth.
    Value of having 8 million documents in a 2x2 cube: your guess is as good as anyone's.

    Errata:
    -Number of alternate solutions we looked at: 0.
    -Number of comparisons between this and alternate solutions I could find: 0.
    -Number of replies I got to a request for comparisons on IWETHEY: 0
    -Number of seconds my .org considered my request to look at alternate solutions: 0.
    -Rank, among the reasons I'm looking for a new job: 2, right behind "Hey let's get Citrix Metaframe so our lame-ass accounting software can track 100 PCs at your location!"

    Anyone need linux support in boston?

    -jpowers
  • Here is a possible solution (from scanned document to HTML pages) that could work as long as there aren't any funky symbols, etc. embedded in the text (heck, it may even work with those if you are deft with a sharpie - as explained in step 1)...

    Steps for conversion:

    1. For pages with images, draw a colored border around each image on each page. Make the color something that will sharply stand out (like bright green).

    2. Tricky part - process each tiff image (in a looped script) doing the following:

    a. Scan each page to color tiff, with sequential filenames (001.tiff, 002.tiff).

    b. Using a custom written utility, build two new tiff images - a tiff of the page without the color-bordered images, and a tiff of the color-bordered image(s) on the page. Number the page images like (p001.tiff, p002.tiff), and the images for each page (p001i001.tiff, p001i002.tiff), so that it is known which images go with what page.

    c. Convert each page image to postscript, then to html (unless there is a tiff2html tool out there?) - preserve the filenames (p001.html, p002.html),
    modifying only the extension.

    d. Convert each image for each page to a (gif, jpeg, png), preserving the filenames (p001i001.png, p001i002.png), with a new extension.

    e. Add IMG tags for the images to the end (or beginning) of the html pages, for each page.

    3. After batch conversion, go back and proofread/reformat pages (to position images where they should go, etc).

    Everything to do this should exist in some form already - except for maybe step 2b - that might be a completely custom tool that needs to be written, but it shouldn't be very hard to code (loop through bytes of image, looking for the sharp color changes - kinda like edge detection code - saving/masking the areas in the outlines)...
  • Ah, heck - that's where it breaks down - the tiff to postscript utils only make a non-searchable bitmap (I read that, and still wrote my method - I must be stupid today - my bad).

    Of course, if such a program existed - tiff -> OCR'd postscript (searchable text) - then my solution would work (I am not advocating the manual cutting and pasting of images - a piece of code would have to be written to do that) to convert the stuff to html.

    Of course, if one went ahead and built an OCR engine (converting tiff to PS), then they could go all the way and add the extra image stuff in and save all the steps I added...

    And here I was thinking I was being smart...
  • The Xerox printers I use and support, DocuSP 6180, DocuTech 65 and Sprite Network server are all PS Level 3 compliant, which means they understand PDF's also.

    George
  • So hire two sets of interns or high school kids. Compare the two. Pretty easy. Twice as expensive to get the data in, but it would be more accurate.

    If you had the money, you could hire enough sets of high school kids to get a high-school-kid RAID going; that way, you could hot-swap the sick ones out and not lose any productivity.

    George
  • It scans to "Image + Text" PDFs. This represents each page as an image, but includes the OCRed text for searching purposes. It's the best for legal and archival documents, because it's a true reproduction. Completely OCRed text is often inaccurate in terms of both content and presentation.

    I was going to use Acrobat Capture, until Adobe ("The Microsoft of the Graphics World") started charging a penny and a half per page. Suddenly, the job went from costing $800 (old Capture pricing) to $25000 (new capture pricing). I even called the Product Manager at Adobe for Capture and asked her why they made such a bold, stupid move. She said that Capture was now a "server product", which justified the price increase. I asked her if she expected anyone to use capture rather than the $80 Textbridge Pro which did the same thing, and she said yes. "You're on the wrong drugs," I said.

    To make TextBridge even sweeter, it turned out to be scriptable. I can hand textbridge specialized configuration files for each job. This allowed me to use Perl to automate the conversion of several tens of thousands of TIFF images into multipage, searchable PDFs. Yay, Textbridge!

    Apparently, though, Adobe had some words with Xerox (ScanSoft), because Version 9 does not include PDF support. Wankers.

    If you can find a copy of Textbridge Pro 8.0 (I think it's the "'97" release), it'll do the trick!

  • I am not certain about this, but I would presume that OCR software designed to recognize form elements will retain picture elements that do not OCR to text.

    Software like Omni Form will let you designate areas on the page to ignore. This should retain picture elements and will put OCRd text in a layout that resembles the original. This, of course, most likely requires user input, at least for each different page layout.
  • That is wrong. Adobe Acrobat 4.0 captures pages using "Capture" under the Tools menu.
  • That's WRONG! I create pdf's and capture them with Acrobat and they are FULLY SEARCHABLE. There is an OCR layer created in the file. It's searchable in Acrobat and completely indexable!
  • Textbridge is, ehrm, messy. It also requires a huge amount of user intervention, and a rather large amount of training...

  • I've only used the past few revisions, so I can't really speak to the decline... It will still produce accurate text with little intervention if you're feeding it plain, crisp ASCII text. Feed it a memo on letterhead with paragraphs, font changes and italics, and it prompts you continually. Not to mention it generically interprets formatting: any one of a dozen detectable ways of formatting a paragraph (one tab, two tabs, three-space indent, double-spaced, etc) is rendered only one way in the result -- one tab, single spaced, no indent.
  • If the text formatting is primitive, and all you want is ASCII text, there are a couple of OCR packages available for Linux. They are rather primitive, and at best about twice as error-prone as an entry-level commercial product, but they will handle clean text very well. Graphics, formatting exceptions, etc., are not handled by any of them, but they are scriptable.

    Entry level commercial products (read: $200, Windows) will export to a .doc or similar wordprocessor file with the gross formatting intact. A few will actually 'guess' what needs to remain an image, and will include it in the finished product. They always skew the formatting some, graphics are not always detected properly, and I have yet to see one that is scriptable. They are also not free in any sense, and tie you to the Windows platform.

    OT: kind of, but...
    Something I would like to see is an OCR-search-on-demand application; in most document management systems you use only image files, and the information is only searchable by metadata.
  • Textbridge (on the Mac) has a "verify" function that allows for interactivity. As it is OCR'ing, it seems to run each word through a dictionary, and if it's not found, then it asks you to verify what it should be. This process makes it only a little bit faster than raw typing.

  • According to this [advogato.com] Microsoft believes you can patent a file format, if not quite the .doc one. I'm gonna patent me raw ASCII...
  • You might want to look at the front of your four-foot-wide stack of reference works before you even consider OCR-ing it.

    Most books have something along the following lines printed at the front:

    All rights reserved. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means - graphics, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems - without the written permission of the publisher.

    Oops. I hope that didn't apply to the copyright notice I just pirated from my copy of SNMP Versions 1 & 2, Theory and Practice! - antoine

  • Caveat: I used to work on OCR engines for Caere/ScanSoft. The available OSS engines are what you might call 'research quality'. They have some good ideas, but with OCR "the devil is in the details" and there are a lot of details. This is why you will probably not see any good OSS engines in the foreseeable future -- there is a very iterative process between algorithm development and testing, and the cost of doing that is significant.

    The software that comes with scanners is cut back (big surprise) to get you to buy the real version. 100% accuracy on clean documents is not uncommon. Usually the document formatting (which is a much harder problem) is where things break down. Just one guy's opinion...
  • When we needed to do something like this, we hired high school kids to retype the text for us. It's much cheaper than an auto-feeding scanner and OCR software. =)
  • In an effort to associate everything with Gnutella/Napster (much like the Beowulf Cluster trend), I'd like to point out that I've seen tons of PDFs on Gnutella of books that are currently on the bookshelves, like all the Teach Yourself xx in xx days books, etc. All copyrighted material, all in either PDF, HTML, or txt format. So obviously, people are able to scan books and convert them into PDFs that are completely searchable and with the graphics intact. Adobe's Acrobat does all of that, including OCR, and if it cannot confidently recognize words, it retains the bitmap of the text in question, just so you can see it and possibly edit it.
  • ...could you solve my PDF to HTML problem? I haven't seen any cheap converters for that either. I wouldn't hate PDF so much if I could convert it. I understand that dead tree documents have their place, but that shouldn't come at the expense of on-line documents. Until someone comes up with a free PDF to HTML converter, I will continue to complain to companies and government agencies that post documentation in PDF.


    The regular .sig season will resume in the fall. Here are some re-runs:
  • Well, as advertised, it *does* convert PDF to HTML in a way that would work very well for text-to-speech software.

    It strips *all* formatting, including many br tags. It's really not much better than a plain text converter.

    So, if you're visually impaired and need to read a PDF, this is fine, but it falls far short of what I want: a true free PDF to HTML converter that does its best to preserve the look of the original document.


    The regular .sig season will resume in the fall. Here are some re-runs:
  • The post this child replies to offered a solution that lets you OCR from the GIMP, which would then let you produce Postscript, and then quite easily create a pdf. This is a far cry from offtopic, but someone felt the need to mark it offtopic.

    At least check the link before you flame others about marking something as offtopic. (*HINT* it points to http://www.microsoft.com and NO SUCH HOWTO exists.) Duh. :-)

  • Perfect OCR isn't necessary for searching documents. As long as the OCR is pretty good, you can get pretty good searches. Since the question stated that they want to look at the diagrams, the original image obviously needs to be saved.

    One could make the text hidden as suggested by post #27. [slashdot.org]

  • I'd tell you I was sorry for the mistake...but I checked them before I submitted the Ask /. a few weeks ago. Back then, they were valid and worked for me!
  • Looks like it:

    From: <Saved by Microsoft Internet Explorer 5>
    Subject: Ask Slashdot: From Paper To PDF?
    Date: Sun, 18 Jun 2000 10:02:56 -0700
    MIME-Version: 1.0
    Content-Type: multipart/related;
    boundary="----=_NextPart_000_0000_01BFD90C .601E59E0";
    type="text/html"
    X-MimeOLE: Produced By Microsoft MimeOLE
    V5.50.4029.2901
    This is a multi-part message in MIME format.
    ------=_NextPart_000_0000_01BFD90C.601E59E0
    Content-Type: text/html;
    charset="Windows-1252"
    Content-Transfer-Encoding: quoted-printable
    Content-Location: http://slashdot.org/comments.pl?sid=00/06/05/2353219&cid=171

    Et cetera. It's even saving it as if it were an mbox entry... Don't get much more open than that... MIME, HTML, BASE64.

  • >Yeah, just like, say, Linux and Windows.

    Bingo. If you have only a one-man staff for over 100 people, that is... In that case:

    Windows 9x: $250 gets you an OS in a box. Hope you like it. Supporting it costs very little because you can do very little with it. Like "a meal in a can", its server capabilities are laudable only as an example not to follow -- don't pack so much crap into something that is already bursting at the seams.

    Windows NT/2k: $$$$$ gets you an OS in a paper sleeve. It doesn't matter whether you like it or not, because once the managers see it you are stuck with it. Supporting it costs very much because you can't do anything with it properly. Takes about 1 server for every 10 clients. Sorta like duct tape when it is used on anything but ducts.

    Linux: No money gets you an OS on an FTP site. For one man, supporting that many users is going to cost extreme $$$$$$. But you can do it all on one machine. Just like a big Swiss army knife.

    Of course, a smart company (too bad these don't exist) would hire 5 people (one per 20), run Linux, and buy X-Terms. This is cheaper than ANY of the Windows solutions I have ever seen...

    Just my $0.02
  • An interesting thing to look into is a research project called TOM [cmu.edu] at Carnegie Mellon University. Its goal is to convert all sorts of file formats from one to the other. I can't check it out to give more information because my firewall at work doesn't let unusual ports through (it's served on 8001).
  • It has to save the document into a file format that has complex formatting features. Usually this is something like WordPerfect, etc.

    Omni Page [caere.com] has excellent capabilities for OCR that will scan and retain most, if not all, formatting. It also supports this with WordPerfect, not just the Redmond brand X software that you see around.

    Unfortunately, it still requires a Win9x machine, but otherwise it falls into the category of Really Good Stuff(tm).

    They were separate from TextBridge a while back, but the companies merged during the past couple of years.

    The other option is to see if the companies have copies of the books available on CDs, etc. This depends on the company, of course.

  • Just as a note, this is a Holy Grail for many companies. I have a number of potential clients who would love this, as they have a whole wall of file cabinets filled with paper docs that they want to convert to electronic docs, but cannot because of time, cost, etc., never mind legal issues (original records for legal disputes, etc.)

    One in particular that comes to mind is an auto insurance place: all of those customers who have to process stuff yearly, etc., never mind the usual database issues...

    If you figure it out, you have the makings of a great business plan.

  • by Juggle ( 9908 ) on Friday June 16, 2000 @09:30AM (#997853) Homepage
    I learned my lesson about researching and testing what I offer before selling it to clients thanks to PDF. I knew that PHP was capable of generating PDF's so I went ahead and accepted a job to create a website which would automagically generate PDF resumes for the visitors. What I then found out was that PHP could only generate PDF's if you bought one of two pricy libraries which actually do the PDF work.

    I ended up searching for three days (and submitting an ask /. which was discarded) before I found a set of OS (free as in beer and speech) perl libraries for generating PDF's. But oh what a pain. I ended up designing a sample resume in QuarkXPress, then using a pica ruler on the printout to convert it to something I could generate. But after about two weeks of hacking I had a resume generator which spits out very clean, professional-looking resumes in HTML and PDF for anyone who's willing to register on the site and fill out a few simple forms. The client was happy and I tucked another language into my cap (since the libraries I found pretty much required you to know PostScript).

    Moral of the story: test the technology before selling it to a client. And trying to generate PDF's on the cheap is only for those who have way more time than money!
  • by Azog ( 20907 ) on Friday June 16, 2000 @01:40PM (#997854) Homepage
    The patent on gif is not the gif file format per se, but the compression algorithm.


    Torrey Hoffman (Azog)
  • by 1010011010 ( 53039 ) on Friday June 16, 2000 @03:54PM (#997855) Homepage
    We've about finished a tool that will do PDF to XML conversions, and back again. It also sports a native API to allow the creation of documents from scratch. It allows embedding of TrueType fonts. It runs on Linux and Windows NT.

    It'll be out in the next week or so; check Freshmeat.

    The idea behind it is: create a nice layout template in the tool of your choice -- Illustrator, for example. Save as PDF. Convert to XML. Add your markup to it -- extra text, etc. -- and convert back to PDF. Done!

    Release 1.5 will include a "template" feature, whereby you can use pages from existing PDFs as templates directly; something along these lines (pseudocode):


    p = new pdf();
    t = new pdftemplate("foo.pdf");

    p.newpage("8.5","11");
    p.include_from_template(t.page(1));
    p.drawstring("Hi!");

    p.write("bar.pdf");


    Does this type of tool sound interesting to anyone?

    On a related note, we plan to offer it as both open source and a commercial product. For instance, the ActiveX interface would be commercial. You could negotiate a commercial license. And you can use it under something like the Aladdin license (a la ghostscript, pdflib, etc). Any advice on open source + commercial? I have to justify my department's budget.


  • by antdude ( 79039 ) on Friday June 16, 2000 @09:16AM (#997856) Homepage Journal
    I asked a friend about this and he said, "no, but the answer is yes, there are other ways....use other OCR engines, like Omnipage Pro or TextBridge Pro. Adobe Capture 3.0 is really really really nice, but is expensive. The searchability factor is the only reason OCRing is needed in most instances."

    Some useful sites:
    PDF Research [pdfresearch.com]
    Planet PDF [planetpdf.com]
    AcroBuddies [acrobuddies.com]
    Codecuts [codecuts.com]
    PDF Zone [pdfzone.com]
    Adobe [adobe.com]
    Deja.com [deja.com]

  • by Greyfox ( 87712 ) on Friday June 16, 2000 @10:18AM (#997857) Homepage Journal
    Easy solution:

    1) Write a LaTeX resume style class. Mine's pretty primitive because it only has to deal with my resume.

    2) Create resume using resume style.

    3) pdflatex resume.tex.

    Or...

    3) latex2html resume.tex (though latex2html doesn't really generate it to look the way I need it, it is just a simple perl program, so you could always hack it).

    Nice thing about LaTeX is you can also go to XML or DVI or RTF or a number of other fairly widely used formats. Or you could just ship the raw LaTeX if the company you're dealing with is that clueful.

  • by Jamie Zawinski ( 775 ) <jwz@jwz.org> on Friday June 16, 2000 @09:31AM (#997858) Homepage

    Last year, I tried several Linux-based OCR packages, and they basically didn't work at all.

    I ended up using the Windows software that came with my scanner to OCR the documents, and at first glance it appeared to do a good job -- it didn't mess up too often. But then I went in and actually proofread and spell-checked its output to find all the typos it had made, and it turns out that this process was so time-consuming that it was faster for me to just type it all in by hand. Even though the OCR software only made a mistake every few lines, finding those mistakes took enough concentration that typing the whole thing took less time.

    Your mileage may vary, according to how fast you can type.

  • by jetson123 ( 13128 ) on Friday June 16, 2000 @09:12AM (#997859)
    Many Adobe-converted scanned pages seem to be just a sequence of TIFF images with the OCR'ed text also contained in the PDF file. The OCR'ed text is never displayed, but can be used for searching (in my experience, Adobe's OCR is not very good).

    So, a simple conversion would consist of just putting the scanned TIFF images in sequence into a PDF file.
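
    ImageMagick will do the image-sequence part in one line (with no hidden text layer, of course):

      convert page01.tif page02.tif page03.tif book.pdf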

  • I don't know about elsewhere but PDF is essential for dead-tree publishing. The advantage it has over all other formats is not that it displays the same on every screen but that it prints the same on every printer (assuming that the author remembered to embed the @#$! fonts, but that's another story :-)

    With PDF, you can design and lay out your ad and transmit it electronically (or on disk) to the newspaper, knowing that it will print exactly how it did for you. Or you can lay out your brochure and send it off to the printers knowing the same thing. With any other format, the publisher/printer's machine is going to have at least one (oh, if only it were ever just one!) setting different from yours, which will change the layout.

    PDF is the way that print ads are submitted electronically today. It's either PDF or old-fashioned cut-and-paste (no, even more old-fashioned than you're thinking, I mean with actual scissors and glue). The Associated Press runs a "wire service" called AdSend for ad agencies to transmit PDF ads electronically to newspapers and magazines -- and they are transmitting millions of PDF's a year.

    The same thing basically goes for sending anything you want printed to a print shop. In any case, free PDF-making software enables dead-tree publishing the same way that the web enables electronic publishing (though we haven't got any print shops that'll work for free, yet :-)

    ========

  • by sugarman ( 33437 ) on Friday June 16, 2000 @09:05AM (#997861)
    You mentioned OCR software, but didn't go much further with it. Wouldn't this be the solution you need?

    Scan to OCR to PS to PDF

    There are apparently a couple of tools to do this for you. Check out a brief list here [umd.edu].

    Seeing as you've looked into Adobe Capture, Windows may be an option. If so, then the other question would be whether you've looked into Textbridge [scansoft.com]. This looks like it would do exactly what you're asking. No muss, little fuss.

  • I am asked to do this all the time as a computer services employee of Kinko's.

    The short answer is: use OCR to create a text file, proofread the text file, and then print to a Postscript file.

    The long answer is, you need to find quality OCR software that does not choke on things like forms. You also *MUST* proofread every OCRd document. No OCR is perfect, and drawn elements will almost certainly trip the software into embedding odd characters or pipes into your text. Different font sizes will cause the software to choke. Thin fonts will cause the software to choke.

    If you are OCRing forms, I recommend Omni Form (it's the only software I know of that recognizes forms, but I have never used it personally).

    Batch processing of OCR pages is likely easy to set up with professional OCR software (Omni Page does it), but it does not excuse you from proofreading the results. After that, the PDF part is a snap, and can be accomplished with any OCR software you choose to use.

    If you are asking which OCR software is best, I can't help you directly. OCR software is a niche software market, and you either get free, disappointing software with your scanner, or you pay big money for something that does a decent job. Just like everything else in life. Have you read any OCR software reviews?
  • by heliocentric ( 74613 ) on Friday June 16, 2000 @10:20AM (#997863) Homepage Journal
    Speaking as a former intern under a guy who wanted all these meeting minutes from the early 80s on put on the web, I know what you are asking for. I knew HTML and simple coding then, and was only being asked to translate them to HTML. What I did was OCR a ton of the text, only to reduce the keystrokes (it's much easier to drink coffee while swapping pages in a scanner every few seconds than it is to type all day), then I spell checked them as an initial step and formatted them by hand. Then, when I moved onto the next ton and they were in the scanner bed, I would check the grammar of those I did in the first batch.

    So, I ended up being the cheap labor to get the stuff together, but I incorporated the error checking suggested by the other replies, and I utilized OCR to minimize carpal tunnel damage.

    Yeah, it took a while, and yes I got paid little in comparison to the other people at the location, but I got paid, they got their silly meeting minutes online, and they didn't have to hire 1,000 monkeys with 1,000 typewriters and have redundancy of people or invest in vast warehouses of paper feeders.

    The scale of my work: I worked on a series of bound volumes that took up 3+ feet on a bookshelf, and I completed the work on my own in less than 2 weeks (while also fielding tech support questions from the group). If you have 1,000,000 pages to be put online yesterday, maybe you could use a larger staff - but always remember:

    If it takes a farmer 3 days to plow a field, and 3 farmers only a day to plow the same field, and it takes one woman 9 months to have a baby, how many months does it take 9 women to have one baby?

    Often, putting more people on a project doesn't equate to faster or better solutions, and usually not cheaper ones.
  • by cetan ( 61150 ) on Friday June 16, 2000 @09:16AM (#997864) Journal
    You don't need to spend all that money for Adobe Capture 3.0 when you can buy Adobe Acrobat 4.0. This is NOT the Adobe reader, but the full version of Adobe Acrobat with all the bells and whistles. The URL is: http://www.adobe.com/store/products/acrobat.html [adobe.com].

    In addition, you can also buy the Adobe Acrobat Business Tools, which is a slightly broken but still functional version of Acrobat 4.0. That is available here: http://www.adobe.com/store/products/acrbustools.html [adobe.com].
  • by AnonymousHero ( 129337 ) on Friday June 16, 2000 @10:08AM (#997865)
    Ahh... mass-OCR cost-effectiveness... it takes me back...

    I just used an off-the-shelf OCR engine and hacked the text together with the images programmatically myself. We would get TIFF images, which most engines could understand.

    On really, really big OCR jobs, though, the real problem is the tradeoff between human intervention and quality. See, OCR engines just guess at stuff. The only reason they work at all is that they guess well. But they guess wrong anywhere from 0.1% to 10% of the time, depending on the quality of the input.

    Each mistake must be corrected by a human being. But humans are expensive. If you have lots of documents to OCR, the technology integration costs and the cost of the OCR engines themselves are amortized. They end up dwarfed by the paychecks of the humans.

    The cost of massive amounts of OCR, therefore, is directly related to the amount of human correction of OCR mistakes.

    Thus, you can save tons of money by selectively sacrificing OCR quality. Getting every page perfectly formatted requires around 60 seconds a page for a skilled OCR operator. It's all about reducing that time. How? Simple. Don't expect everything to be perfect. There are various levels of quality you can get out of OCR engine-human systems:

    • no correction: just let 'er run. You can get it fully automated this way, but the quality is crap.
    • zoning only: The OCR engines just suck at text with multiple columns, inserts, and tables. You can get people to correct the engine's zoning at a clip of around 5 seconds a page, 10 seconds if you require them to put in tokens representing the excised images.
    • spelling correction: Typically, most people object to the spelling mistakes OCR introduces. With good quality text an operator can correct them at around 20-30 seconds a page.
    • formatting correction: OCR engines can really mess up indentation and text flow. Unfortunately this is the most time-consuming problem to fix, anywhere from 30 seconds to a couple of minutes per page.

    Oh, and it really helps if you get the workflow of the OCR down. Allow the operator to move on to the next document automatically, save them the trouble of remembering the name of the document they're working on, etc. etc. This may require a bit of hacking of the OCR engine you're using, but it's worth it.

    So when doing something like this, ask yourself: how perfect does it have to be, really? You can save tons of money if you can cut any quality corners.
