Technology

State of Speech Synthesis and Text-To-Speech? 52

Gnulix asks: "Are there any products, preferably open source, available that produce realistic speech from an arbitrary (English) text? Projects such as Festival don't sound all that much better than SAM (Software Automatic Mouth) did on a Commodore 64 back in 1979, nor do SoftVoice's or IBM's new products sound very good. I mean, we all know that Stephen Hawking is a fun-loving guy, but I bet you that he didn't choose his unrealistic, robotic voice just for the heck of it. With all the amazing advances we have seen in real-time graphics, shouldn't speech synthesis have come much, much further than what is, seemingly, available today?" Ask Slashdot last handled the Voice-to-Text issue in January of this year.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • AT&T Natural Voices (Score:5, Informative)

    by Utopia ( 149375 ) on Thursday November 14, 2002 @08:37PM (#4673653)
    is the best Text to speech conversion program
    checkout http://www.naturalvoices.att.com/
    • by pediddle ( 592795 ) <pediddle+slashdot@NoSPam.pediddle.net> on Thursday November 14, 2002 @10:35PM (#4674291) Homepage
      Another extremely strong competitor to Natural Voices is Speechworks' Speechify [speechworks.com]. Take the "Speechify Challenge" -- it's still possible to tell which is a real recording and which is the computer, but it is very difficult. Some say it's the best engine available, but I guess that's a matter of personal preference.

      I don't know about Open Source TTS, but the commercial versions (AT&T, Speechworks, and others) are sitting on the threshold of truly natural speech. I work in the speech industry, so I follow progress and have seen some of the unreleased demos of upcoming versions. In the next couple years, we can expect amazing things. It won't be long before the Speechify Challenge will truly be impossible to beat.

      By the way, for those of you who don't know, the newest and best-sounding engines don't use purely synthesized sounds as older and small-footprint engines do (Festival and Stephen Hawking's). The engines are built using actual recordings: a "voice actor" will sit in a studio and record dozens of hours of speech, and then, over the course of several months, the recordings are cut and spliced into individual phonemes, which are reassembled by the engine. This means that the voices actually sound like real people, and the only unrealistic part is the inflection when generating complete sentences. You can order custom voices (for several tens of thousands of dollars) and get a voice that sounds identical to that of your celebrity of choice.
      • One addendum: the fact that the newest engines use real recordings is exactly the reason why it will be nearly impossible for Open Source engines to approach the quality of commercial versions. The amount of work involved in extracting the raw sounds from recordings is staggering, and it requires full-time commitment from trained experts over the course of many months (not to mention the cost of hiring voice talent). There is no way to avoid the costs involved, and so Open Source alternatives cannot become available without some sort of large grant. Unfortunate.
        • Hrm... Well then, I guess I'll just have to volunteer to be the voice talent for any OS speech projects:) I've always wanted to be the annoying repetitive voice on the other end of the phone!
        • Yeah. It is pretty much impossible. Like making your own viable operating system as open source. The hundreds of thousands of man-hours and the expertise required make it impossible.

          Okay, so I just made fun of you. Actually, I still agree with what you said. It would take a LOT of talent that is really interested in text-to-speech for this to happen. But really, I think the pay-offs to society would be time and energy very well spent. ...as long as they don't put that technology back into coke machines again.
          • The problem is that programming is a creative work that is in many ways its own reward. The tedious work of sampling and re-sampling hours of voice, and then splicing it properly for a computer to parse, is akin to washing dishes in terms of creativity. I don't foresee someone going through the effort unless they have some other reward involved (like a grant, or it's on the company's dime already and they're willing to share).
            • You took the words right out of my mouth! Yes, it's not that it's too much work, but that the work would be so tiresome that no sane person would do it unless they were getting paid.

              Now, if someone wants to go and prove me wrong, then go for it! You will do the Open Source community a great service. Of course, there's more to the TTS than splicing recordings, so good luck with that too. Anyway, I hope I am wrong, but I'll believe it when I see it.
            • The trouble is that disciplines aren't working closely enough together.

              Yes, this operation is pretty boring, but so is putting together gui installers, or building unit tests, or writing API documentation -- boring, but important. Interestingly, there are a large number of undergrad/grad Linguistics students who do this sort of thing in college laboratories all the time. They're not CS students, though -- they're Linguistics (or Psychology, or Cognitive Science, or whatever wacky department their field was attached to) students.

              Someone with a CS professorship, an `open science' bent, and enough funding for 2-3 undergrads (probably $10k each, once you include `overhead') needs to find his or her local Linguistics department and make the world a better place.

              Please?
        • Right - cuz after all there aren't any sounds of any kind for free games. Nobody would EVER donate their time for a project.

          Oops - sorry. That was sarcastic, wasn't it?
          • Read my other replies to this thread. It's not that it's technically impossible, but that no one in their right mind would want to do it :)

            If you want to prove me wrong, then more power to you!
      • I just wanted to comment that the Speechify voices sound very good ... word for word ... but the inflections between words and the word spacing still need a lot of work. The "Speechify Challenge" prerecorded samples have undoubtedly undergone quite a bit of hinting. I do have to say, though, that they do Japanese 10,000% better than they do English. Their Japanese TTS is the best TTS I have heard EVER.
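
      The "cut and spliced into individual phonemes" pipeline described above can be sketched in miniature. Everything here is a hypothetical stand-in: the phoneme labels and the tiny sample lists are invented placeholders, not a real voice database.

```python
# Toy sketch of concatenative synthesis: each "unit" stands in for a
# short waveform cut from studio recordings (real engines store many
# thousands of context-dependent units, not four).
UNIT_DB = {
    "HH": [0.1, 0.3, 0.2],
    "EH": [0.5, 0.7, 0.6, 0.4],
    "L":  [0.2, 0.2],
    "OW": [0.6, 0.8, 0.5],
}

def synthesize(phonemes):
    """Concatenate the recorded unit for each phoneme in sequence."""
    wave = []
    for p in phonemes:
        wave.extend(UNIT_DB[p])
    return wave

samples = synthesize(["HH", "EH", "L", "OW"])  # "hello"
```

      A real engine also smooths the joins and adjusts pitch and duration; this only shows the lookup-and-concatenate core, which is why the spliced voices sound human while the sentence-level inflection still gives them away.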
  • Related ? (Score:2, Insightful)

    by Tolchz ( 19162 )
    How does "voice to text" relate to "text to voice" ?

    Look at the older article, it's a completely different question.

  • Hawking... (Score:5, Interesting)

    by 3-State Bit ( 225583 ) on Thursday November 14, 2002 @08:46PM (#4673710)
    Actually, I heard that they offered Hawking a revamped speech synthesizer, since although his was state-of-the-art in the seventies, today we have much better. He declined, saying he and his friends had gotten used to the voice, and it was "his". In fact, whenever one hears that particular flavor of voice synthesis, it's difficult not to think of Hawking.

    He does relate, however, in A Brief History of Time, that at first people had trouble understanding "his voice", so that when he would speak or answer questions at lectures, he would have an interpreter who was more familiar with his voice repeat what he just said.

    Interesting stuff...
    • by GuyMannDude ( 574364 ) on Thursday November 14, 2002 @08:52PM (#4673745) Journal

      He declined, saying he and his friends had gotten used to the voice, and it was "his".

      Not to mention the legions of fans who follow his side-career as a gangsta rapper [mchawking.com] with due vigor! Changing his voice would give his music a very different sound!

      GMD

    • as long as the emitted "natural voice" didn't "speak" with an american accent and employ such grammatical travesties as "most everyone" instead of "almost everyone", "already" used in the present tense, "write me" instead of "write to me", etc..

    • He does relate, however, in A Brief History of Time, that at first people had trouble understanding "his voice", so that when he would speak or answer questions at lectures, he would have an interpreter who was more familiar with his voice repeat what he just said.
      FWIW, I believe he was referring to his natural voice, which he struggled with for some time before finally giving up and switching to the synthesizer.
  • Are there any, preferably either open source products available that produce realistic speech from an arbitrary (English) text?

    If it existed it'd be in government (though the Bush model is obviously a pre alpha leak..)
  • AT&T Labs Research (Score:2, Informative)

    by jcbphi ( 235355 )
    AT&T Labs Research has some recent work [att.com] in TTS. I'm not sure how state-of-the-art it is, but it's certainly much better than the TTS referred to.
  • by tdyson ( 530675 ) on Thursday November 14, 2002 @08:53PM (#4673756) Homepage
    The NWS's automated weather channel broadcasts use a new technology this year. The change was quite a big deal in the marine communities, where people listen to these voices every day. The new voices are pretty darn good.

    National Weather Service describes their new system. [205.156.54.206]

  • Apple, and MS (Score:3, Insightful)

    by GigsVT ( 208848 ) on Thursday November 14, 2002 @08:58PM (#4673787) Journal
    Yeah, closed source :)

    MS has had text-to-speech as an object you can embed in your program with one line of VB code (same as you can embed IE) for a while now.

    Apple has had text-to-speech extensions in tons of different voices for a long time. Some of the G4s used to read dialog boxes to you by default if you didn't click on them fast enough. Pretty unnerving the first couple times.

    Several voice activated automated attendant systems I have called for my credit card and bank are amazing these days. They have insanely accurate speech recognition and really good text-to-speech.

    So I wouldn't say the field is not advancing... it is.

    Of course, a Google search for "open source text to speech" without quotes yields many promising-looking hits, which I haven't evaluated. Why didn't you search there before asking Slashdot?
    • Yeah, I like the MS engine a lot. It's very easy to call up via whatever you want. I use it in a Perl script to read email and stuff. It's a hell of a lot easier than interfacing with Festival (seriously, I tried hard and I'm not the worst coder out there)
      • I used to use Festival, and I believe that you could simply pipe data to it.

        I can't remember the exact command, but I think it was:
        echo Hello there | festival --tts
        Unfortunately, it always took a while to start...
    • Why didn't you search there before asking Slashdot?

      Oh, I didn't? Thanks for telling me! I thought I did, and I'd probably have had to Ask Slashdot to tell me that I didn't...
  • by RobotWisdom ( 25776 ) on Thursday November 14, 2002 @08:59PM (#4673807) Homepage
    Modulating intonations is part of the larger challenge of natural-language processing (NLP, a subdiscipline of AI). We simply don't have the sort of general theory of language-production that could systematically predict how the intonations should fall, any more than we have a theory of translation that can do substantially better than Babelfish.

    Nor, to harp on my pet peeve, do we have a theory of semantics that can put XML to any important use on the average webpage. These all need a model of the human psyche, because all human language is flavored with metaphors from the realm of motives and plans, etc (the psychological realm). Psychological science isn't delivering the sorts of models that NLP-etc need, and probably won't for many decades yet. [My AI FAQ] [robotwisdom.com]

  • by Kafteinn ( 542563 ) on Thursday November 14, 2002 @09:09PM (#4673864) Homepage Journal
    And the best I have found so far is Festival with Mbrola voices (although not perfect, they are far superior to the Festival voices)

    For voice control stuff I found a little program called cvoicecontrol to be quite nice.
  • AT&T has done a lot (Score:3, Informative)

    by xagon7 ( 530399 ) on Thursday November 14, 2002 @09:32PM (#4673993)
    Just check THIS out:

    http://www.naturalvoices.com/

    quite a big step in the right direction in my opinion.
  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Thursday November 14, 2002 @10:00PM (#4674131) Homepage Journal
    With all the amazing advances we have seen in real-time graphics, shouldn't speech synthesis have come much, much further than what is, seemingly, available today?

    We haven't had that many amazing advances in graphics. Natural speech is to current text-to-speech what advanced raytracing is to current graphics. We still cannot raytrace in a single system in real time at the resolution of our eyes, and we still cannot produce natural speech in a single system in real time at the resolution of our ears.

    Furthermore, we know less about the math of speech than we know about the math of light. Go visit your local university that has a good CS program, and browse the bookstore for the books used to teach speech recognition. In that book you will find that the average sound a human makes goes from production of complex, multitonal sound from the vocal cords through as many as five complex natural filters (body cavities between the vocal cords and lips) before it reaches the ears of the recipient.

    Modeling these filters for one sound is hard enough. Each letter in our alphabet, except simple vowels, changes the filters throughout the letter. Furthermore the filters for a given letter may also change depending on the previous and next letter.

    A system to create speech, therefore, must generate hundreds (perhaps thousands) of different filtered 'noises' just to reproduce the English language. Other languages can be much more complex.

    Current common technology is to simply record the hundreds of 'simple' sounds and add them together. Really good programs use hundreds of hours of speech by voice actors to get several hundred sounds.

    The ultimate approach is to mathematically recreate every part of the human vocal system from the lungs to the lips. This has obviously not occurred. The computers may well be powerful enough, but the understanding of the vocal tract is extremely limited.
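
    The source-plus-filters idea described above can be sketched with one crude digital resonator per "cavity". This is a rough, assumed illustration, not a validated vocal-tract model: the formant frequencies and bandwidths below are textbook-style ballpark values I picked for a generic vowel.

```python
import math

RATE = 8000  # sample rate in Hz (assumed for the sketch)

def glottal_source(f0, n):
    """Crude vocal-cord stand-in: a periodic pulse train at pitch f0."""
    period = RATE // f0
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def resonator(signal, freq, bandwidth):
    """Two-pole digital resonator, one 'natural filter' (body cavity)."""
    r = math.exp(-math.pi * bandwidth / RATE)        # pole radius
    c = 2 * r * math.cos(2 * math.pi * freq / RATE)  # pole angle term
    b0 = 1 - c + r * r                               # unity gain at DC
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x + c * y1 - r * r * y2
        y2, y1 = y1, y
        out.append(y)
    return out

# Cascade a few resonators, like the chained cavities in the comment.
wave = glottal_source(100, 800)
for formant, bw in [(700, 110), (1200, 110), (2600, 170)]:
    wave = resonator(wave, formant, bw)
```

    The hard part the comment points at is exactly what this sketch ignores: for real speech, those filter parameters change continuously within each sound and depend on the neighboring sounds.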

    In other words, wait 5-10 years. There still isn't a killer application for text to speech, but with devices getting smaller and smaller, there will be soon enough.

    -Adam
  • TTS Synthesizers (Score:3, Informative)

    by irrelevant ( 66554 ) on Thursday November 14, 2002 @11:33PM (#4674569)
    Here at work [prentrom.com] we monitor progress of and/or use the following:

    DECTalk [fonix.com] (One of the most widely used)
    Eloquent (http://www.eloq.com - dead URL?) (fairly natural-sounding with dialects)
    Elan [elantts.com] (European languages)

    They've all been improving over the years.
  • On a related note, check out VXML (google for it, I'm too lazy to link). If you want to tinker, you can set up an account with http://cafe.bevocal.com . It's pretty nifty, since they provide a free 800 number for testing. All you do is call up, speak a pin, and it loads up your app for testing.
  • by Anonymous Coward
    ..but the Commodore 64 wasn't even released in 1979. It didn't come out till late '82, if my memory serves me correctly.
  • by Sam Lowry ( 254040 ) on Friday November 15, 2002 @04:51AM (#4675648)
    There are basically two TTS technologies on the market:
    • diphone-based synthesis, where the database contains one diphone (end of first sound + start of next sound) for each possible sound combination. This approach is used in Festival. Diphone-based synthesis will hardly sound better than in Festival, because diphones have to be modified artificially to fit every variation of pitch, duration and any other parameter that is needed to produce a given phrase.
    • corpus-based synthesis takes a different approach: a large database of several hours of speech is recorded and manually labelled to mark the start and end of each sound. Such a database is used to extract the best and longest sequence of diphones during production. This approach gives natural-sounding results for short sentences where intonation is not so important.
    Given that the cost of developing a database for corpus synthesis may easily be 100 times higher than for diphone synthesis, there are very few companies that make them. Two companies offer a demo on the internet: ATT [att.com] and Scansoft (formerly L&H) [scansoft.com].
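
    The diphone idea (end of one sound joined to the start of the next) fits in a few lines. The phoneme labels and the "_" silence marker here are assumptions for illustration, not any engine's actual inventory.

```python
# Sketch of which units a diphone synthesizer must fetch for a word.

def diphones(phonemes):
    """Each diphone spans the end of one sound and the start of the next."""
    padded = ["_"] + phonemes + ["_"]  # silence at the word edges
    return [(padded[i], padded[i + 1]) for i in range(len(padded) - 1)]

# For "cat" pronounced K AE T:
units = diphones(["K", "AE", "T"])
# -> [("_", "K"), ("K", "AE"), ("AE", "T"), ("T", "_")]
```

    The database-size difference follows directly: diphone synthesis needs roughly one recording per sound pair, while corpus synthesis needs hours of labelled speech so it can pull out the longest matching stretches.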
  • Festival with MBrola (Score:2, Informative)

    by tigersha ( 151319 )
    I can only concur with the poster above who said that Festival with MBrola is probably the best OSS bet. Actually, the MBrola voice itself has a license for "non-commercial" use, but we are a nonprofit, so...

    In particular, there is one high-res female voice in MBrola that is very good. If you need any help setting it up (I can happily give you my Festival config file), just mail me at netgrok @at@ yahoo . de

    That said, I think speech output is a very underrated technology and is quite useful, if used in moderation for the right purposes. One sometimes reads overexcited hyping about reading your emails out loud in the car or at breakfast, but that ain't gonna happen with current technology.

    For one, the synthesis is a bit monotonous for long texts (but then, now that I think about it, having your SO or kids read a letter out loud would probably not be any better...)

    Secondly, you do not necessarily want a user interface where a computer reads out things the whole time (logs, for instance) because a) it's annoying and b) it will not work in an office with multiple people.

    Where it DOES work, and is trivial to implement, is for singular events that occur during the day, or alarms. Similar to the sort of PA announcements that you would get in a department store. They do not read aloud the whole time, do they?

    In our case we use festival with Mbrola for two things:

    • There is a small script that checks the main services on all our critical machines and if one goes down, the system moans. A loud voice that says "The http server on web1 is down" gets more attention than a little light.
    • Our backup system moans at specific times (about three times a day) about the next tape that needs to be put in. "Please insert the correct tape into the tape drive on backup. The tape needed is Unix 1. The backup will start at 4 this afternoon" is the announcement I just heard. Sometimes I add "Please insert the tape, pleeeeease", just to hear the damn computer beg ME for a change :)


    • Also, if I FORGET to insert the tape, the computer starts moaning continuously about it. Nothing like a whining b.tch to convince you to get off your butt and put the tape in :)


    It is also trivial to insert this into a standard SysV start/stop script at boot time, so that you get some notice when critical servers are shut down for some reason.

    Costs for this setup? 2 hours of install time (installing MBrola on Festival took some digging through the docs. If you need it, mail me). Writing a script that does "say x" instead of "echo x" took about 2 minutes. And putting these commands into the cron job took about 10. So for about 3 hours' worth of time and a set of very cheapo computer speakers, you get a good, useful, functional system which works, and the voice is very, very good. This is pretty neat for critical or semi-critical announcement kinds of events, not continuous interaction.

    Since the only command I use to activate it is "say x", using it from Unix shell scripts with your current setup is trivial.
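
    A minimal sketch of this kind of announcer, assuming a Festival install that takes text on stdin via --tts. The hostnames, service names, and the is_up() stub are hypothetical; a real script would probe the actual ports.

```python
import shutil
import subprocess

def say(text):
    """Pipe text to Festival, equivalent to: echo text | festival --tts"""
    if shutil.which("festival"):  # degrade silently if not installed
        subprocess.run(["festival", "--tts"], input=text.encode())

def is_up(host, service):
    """Placeholder probe; a real script would try connecting to the port."""
    return True

def check(services, probe=is_up):
    """Build a spoken alert for every service the probe reports down."""
    return [f"The {service} server on {host} is down"
            for host, service in services
            if not probe(host, service)]

# Run this from cron; a loud voice beats a little blinking light.
for msg in check([("web1", "http"), ("backup", "smtp")]):
    say(msg)
```

    Separating check() from say() keeps the announcement text testable without speakers attached, and the same say() wrapper covers the tape-change reminders too.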

    Btw, the MBrola website has a demo of a German voice reading a weather and traffic report, which is even better than the English one.

    Oh yeah, it was fun to watch the cleaning lady almost get a heart attack when the computer greeted her...
  • Winbond makes a TTS chip (WTS701EM/T) that is relatively cheap (about $15 itself, I think).

    Devantech (a small company in England that makes boards for robot builders) has built a little board around it that can hold 30 canned phrases and do text to speech over RS-232 or I2C. Info on their board can be found at http://www.robot-electronics.co.uk/shop/Speech_Synthesizer_SP032006.htm

    The board goes for about $83 from US distributors:
    http://www.acroname.com/robotics/parts/R184-SP03.html

    While this might be a bit expensive, this guy is making small quantities, and this is designed to be run from a small robot driven by a microcontroller, not a full computer.
  • by Lando ( 9348 )
    I think Mandrake (the person, not the company) was working with a fairly decent open source speech synthesizer last time we talked... been about 2 years now, I think... but a quick check of his site shows that he's still working with the same company and thus probably still has good links from his homepage, ie http://www.mandrake.net/ [mandrake.net]
