  • modeless1y

    XTTSv2 is only slightly behind StyleTTS 2 near the top of the TTS Arena leaderboard, though they are both far behind Eleven Labs: https://huggingface.co/spaces/TTS-AGI/TTS-Arena

    Personally I prefer StyleTTS 2, and it has a better license. But XTTSv2 has a streaming mode with pretty low latency, which is nice. I did run into hallucination issues, though: it will fairly frequently hallucinate nonsense words or insert extra syllables into words.

    As others mentioned, they shut down, so there won't be any updates to XTTS.

    • eginhard1y

      They just shared the paper for XTTS, which got accepted to Interspeech and might be the reason for this being posted now: https://arxiv.org/abs/2406.04904

    • jsemrau1y

      Interesting. I got quite good results for my long-form Substack by combining XTTSv2 with Nvidia's NeMo.

    • WhitneyLand1y

      Anyone have a sense for how these compare to OpenAI’s TTS?

    • jonahx1y

      Somewhat unrelated, but given that anyone can vote anonymously, how is the TTS-Arena protecting itself against bots or even rings of humans gaming the system?

      • modeless1y

        Low stakes, I guess

        • Grimblewald1y

          The problem is that low stakes divided by the low cost of bots is still an acceptable return.

  • vessenes1y

    NB: Coqui is no longer actively maintained. I’m not sure what the team is up to now. The open market is definitely in need of an upgraded TTS offering; Eleven Labs is far ahead at the moment.

    • eginhard1y

      We do maintain a fork, mostly with bug fixes for now: https://github.com/idiap/coqui-ai-TTS PRs welcome :)

      • dlx1y

        Any progress on the license situation? I'd love to work more on it, but I'm worried it's a bit of a dead end due to uncertainty about the future of the license and not being able to use it in any commercial projects.

        • eginhard1y

          The licenses of the code (MPL 2.0, allowing commercial use) and the available pretrained models (https://github.com/idiap/coqui-ai-TTS/blob/dev/TTS/.models.j...) are all clearly stated and won't change unless the model owners decide to do so. So the XTTS model is still under CPML, which doesn't allow commercial use.

        • CaptainOfCoit1y

          > Any progress on the license situation?

          What is the situation exactly? Seems to be licensed MPL at a glance, so you're able to use it in commercial projects.

          • woodson1y

            The pretrained models aren’t MPL licensed, though.

            • eginhard1y

              Many of them still allow commercial use. The question is most likely about the XTTS model, which doesn't, but its license is up to the original Coqui team.

    • personjerry1y

      Not surprising. When I was researching options for a client I tried a few companies including ElevenLabs and Play.ht, each seemed happy to talk to us... except Coqui. I think I went as far as reporting bugs to them, just to have them aggressively ignore me. I guess they're more of a research team than a business?

      • jokethrowaway1y

        They were very friendly and welcoming.

        The main problem is quality: Eleven Labs is so far ahead, even though their API is not very flexible.

        Meta's Voicebox is the only other option that feels realistic - but it's for research only for now.

        • nmfisher1y

          Check out Sonic (cartesia.ai). Great quality, very fast - but with a few kinks to work out (going off the rails on long utterances, random sounds, etc).

  • phyce1y

    Coqui is great, but another fantastic tool for TTS I recommend checking out is Piper. The voice quality is great, it's extremely lightweight, and it's fast enough to generate TTS in realtime https://github.com/rhasspy/piper
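
    Since Piper reads text on stdin and writes a WAV, it's easy to script. A minimal sketch in Python, assuming the piper binary is on your PATH (the voice model filename is just an example; substitute whichever voice you downloaded):

        import subprocess

        # Feed text to the piper CLI on stdin; it synthesizes to a WAV file.
        text = "Hello from Piper, running entirely on the local machine."
        subprocess.run(
            ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "hello.wav"],
            input=text.encode("utf-8"),
            check=True,
        )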

    • dv35z1y

      Can you suggest (1) how to get it working on a Mac, or (2) alternatively, how to get it running in a Docker container (on a Mac)?

      • mlboss1y

        It works with a RHEL 9 Docker image and the linked compiled binary.

    • huskyr1y

      Piper seems very interesting, but unfortunately the last time I tried it on macOS it no longer seemed to work.

  • nishithfolly1y

    This was a great team. Sad to see they had to shut down.

    • ks20481y

      I don't know anything about the startup/VC world, but does anyone have insight on why this failed? It seemed to be one of the highest profile TTS projects and I thought money was just pouring into AI startups.

  • satvikpendem1y

    How does it compare to this recent Show HN, MARS5 [0]? Coqui is not maintained anymore, so I'd be interested in what the SOTA is for open-source TTS.

    [0] https://news.ycombinator.com/item?id=40616438

  • SubiculumCode1y

    I have a pet ML project that I am doing for fun: a custom transcription and diarization model for a friend's podcast [1]. My initial solution was a straightforward implementation using Whisper medium for transcription and NeMo for diarization, based on [2]. The results are generally not bad, but since my application involves a fixed set of five known speakers, I thought surely I could fine-tune the NeMo (or pyannote) diarizer model on their voices to improve accuracy.

    Audio samples are easily obtained from their podcast, but manual data labeling is painful for a hobby activity. Further, from what I understand, the real difficulty in performant diarizer models is not speaker recognition generally, but specifically speaker recognition while there is overlapping speech between multiple speakers. I am not even sure how to best implement a labeling procedure for segments with overlapping speech.

    I started to wonder whether I might bootstrap a decent sample by leveraging TTS voice-cloning models to simulate the five speakers in dialogues with overlapping speech segments (a rough sketch of the mixing step is below). So I ask HN: is this hopelessly naive, or a potentially useful technique? Also, any other advice?

    [1] https://www.3d6downtheline.com/ [2] https://github.com/MahmoudAshraf97/whisper-diarization/
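
    The mixing step I have in mind, as a rough sketch: assuming the cloned single-speaker clips already exist as mono WAVs at the same sample rate (the file names and overlap length below are made up), the ground-truth labels fall out of the construction for free:

        import numpy as np
        import soundfile as sf

        def mix_with_overlap(path_a, path_b, overlap_s=1.0):
            # Mix two mono clips so speaker B starts overlap_s seconds
            # before speaker A finishes.
            a, sr = sf.read(path_a)
            b, sr_b = sf.read(path_b)
            assert sr == sr_b, "resample first if sample rates differ"
            offset = max(0, len(a) - int(overlap_s * sr))  # sample where B starts
            out = np.zeros(max(len(a), offset + len(b)))
            out[:len(a)] += a
            out[offset:offset + len(b)] += b
            labels = [("spk_a", 0.0, len(a) / sr),
                      ("spk_b", offset / sr, (offset + len(b)) / sr)]
            return out, sr, labels

        mixed, sr, labels = mix_with_overlap("alice_tts.wav", "bob_tts.wav")
        sf.write("overlap_sample.wav", mixed / np.abs(mixed).max(), sr)  # normalize to avoid clipping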

    • tarasglek1y

      It's unclear from the docs: does your solution support inferring the number of speakers from the audio? I found it frustrating that this wasn't automatic in the diarization algorithms I tried last year.

      • SubiculumCode1y

        The solution from this GitHub repo automatically determines the number of speaker labels, but it will often create extra speaker classes for a few excerpts in the stream. I believe you can pre-specify the number of speakers for better performance.
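
        For what it's worth, pyannote's diarization pipeline accepts the speaker count directly; a minimal sketch (pipeline name per the pyannote docs; the token string and file name are placeholders):

            from pyannote.audio import Pipeline

            # Fixing num_speakers skips speaker-count estimation, which helps
            # when the roster is known (five hosts, in this case).
            pipeline = Pipeline.from_pretrained(
                "pyannote/speaker-diarization-3.1", use_auth_token="HF_TOKEN")
            diarization = pipeline("episode.wav", num_speakers=5)
            for turn, _, speaker in diarization.itertracks(yield_label=True):
                print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")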

  • ackprakhack1y

    We've just open-sourced MARS5 and are bullish about its ability to capture very hard prosody -- hopefully you can validate the results and grow alongside its community.

    We tend to agree: the time for just one company to be seriously doing speech is over. It needs to be more diverse, and it needs to be open source: https://github.com/Camb-ai/MARS5-TTS

    • BenRacicot1y

      If we could run this locally (Windows and Mac), it could reset the standard for accessibility.

  • vijucat1y

    I absolutely love how good the voices from the VCTK dataset are in the VITS model (109 of them!). I found it easy to install Coqui on WSL, and it is able to use CUDA + the GPU quite effectively. p236 (male) and p237 (female) are my choices, but holy cow, 109 quality voices still blows my mind. Crazy how you had to pay for a good TTS just a year ago, but now it's commoditized. Hope you find this useful:

        CUDA_VISIBLE_DEVICES="0" python TTS/server/server.py --model_name tts_models/en/vctk/vits --use_cuda True

        import threading
        import winsound

        import requests  # third-party; used to call the demo server (port 5002 by default)

        # learning: you have to use a semaphore to serialize calls to
        # winsound.PlaySound(), which freaks out with "Failed to play sound"
        # if you try to play 2 clips at once
        semaphore = threading.Semaphore(1)

        def play_sound(response):
            semaphore.acquire()
            try:
                winsound.PlaySound(response.content, winsound.SND_MEMORY | winsound.SND_NOSTOP)
            finally:
                # Always release the permit, even if PlaySound raises an exception
                semaphore.release()

        # e.g.: play_sound(requests.get("http://localhost:5002/api/tts", params={"text": "Hello"}))

  • ritonlajoie1y

    Are there any projects that can do TTS in my own voice, given some training on my voice?

    • probably_wrong1y

      While the other commenters provided several voice cloning projects, I would like to point out that I haven't been able to find one that works well for South-American Spanish.

    • eginhard1y

      Yes, you can train or fine-tune models on your own voice with Coqui.
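
      For zero-shot cloning (as opposed to a full fine-tune), the Python API only needs a short reference clip. A minimal sketch, assuming the package is installed and my_voice.wav is a clean recording of the target speaker (keep XTTS's non-commercial license, discussed above, in mind):

          from TTS.api import TTS

          # Load the multilingual XTTS v2 model and condition it on a reference clip.
          tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
          tts.tts_to_file(
              text="This should come out sounding like me.",
              speaker_wav="my_voice.wav",  # short, clean sample of your voice
              language="en",
              file_path="cloned.wav",
          )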

    • willwade1y

      ElevenLabs, Coqui, Piper, Microsoft, Google, Apple. Seriously, they all can these days. Don’t forget Acapela or Nuance.

    • mttpgn1y

      Yes, ElevenLabs can.

  • roskoez1y

    Does anyone know a modern TTS program for Windows? Something you can feed a text file and have it read aloud while the text is on screen?

    I've been using Dimio's DSpeech for a decade, but sticking with it seems silly now that much better voices exist.

  • robotburrito1y

    I like this project. I used it to create a website that lets me turn a list of articles on the web into a podcast I can subscribe to via my phone.

  • nextworddev1y

    Is there anything self-hostable that's on par with ElevenLabs?

  • spacemanspiff011y

    I believe the company behind this shit down at the end of 2023

    • giancarlostoro1y

      One of my favorite typos. ;) Also, the coquí is a frog from Puerto Rico (which wound up in Hawaii, sneaking into someone's luggage or something to that effect). When you hear them at night, what you're hearing is their mating call, if I remember correctly.

  • Kerbonut1y

    I really like Parler TTS on the TTS Arena.

  • Jayakumark1y

    It's good, except for the license.

    • sa-code1y

      Is the license still relevant if the company has shut down?