Hacker News

randkyp

This is HN, so I'm surprised that no one in the comments section has run this locally. :)

Following the instructions in their repo (and moving the checkpoints/ and resources/ folders into the "nested" openvoice subfolder), I managed to get the Gradio demo running. Simple enough.

It appears to be quicker than XTTS2 on my machine (RTX 3090), and utilizes approximately 1.5GB of VRAM. The Gradio demo is limited to 200 characters, perhaps for resource usage concerns, but it seems to run at around 8x realtime (8 seconds of speech for about 1 second of processing time.)

EDIT: patched the Gradio demo for longer text; it's way faster than that. One minute of speech only took ~4 seconds to render. Default voice sample, reading this very comment: https://voca.ro/18JIHDs4vI1v I had to write out acronyms -- XTTS2 to "ex tee tee ess two", for example.
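
A quick preprocessing pass handles the acronym problem: spell them out before handing the text to the TTS. Rough sketch with a hand-made lookup table (my own guesses, not anything from the OpenVoice repo):

```python
import re

# Hand-made substitutions for acronyms the TTS mispronounces.
# The table and phonetic spellings are my own, not from OpenVoice.
ACRONYMS = {
    "XTTS2": "ex tee tee ess two",
    "GPU": "gee pee you",
    "VRAM": "vee ram",
}

def expand_acronyms(text: str) -> str:
    """Replace known acronyms with phonetic spellings before synthesis."""
    for acronym, spoken in ACRONYMS.items():
        # Word boundaries so substrings of other words are left alone.
        text = re.sub(rf"\b{re.escape(acronym)}\b", spoken, text)
    return text

print(expand_acronyms("XTTS2 uses more VRAM"))
# -> ex tee tee ess two uses more vee ram
```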

The voice clarity is better than XTTS2, too, but the speech can sound a bit stilted and, well, robotic/TTS-esque compared to it. The cloning consistency is definitely a step above XTTS2 in my experience -- XTTS2 would sometimes have random pitch shifts or plosives/babble in the middle of speech.

bambax

I am trying to run it locally but it doesn't quite work for me.

I was able to run the demos all right, but when trying to use another reference speaker (in demo_part1), the result doesn't sound at all like the source (it's just a random male voice).

I'm also trying to produce French output, using a reference audio file in French for the base speaker, and a text in French. This triggers an error in api.py line 75 that the source language is not accepted.

Indeed, in api.py line 45 the only two source languages allowed are English and Chinese; simply adding French to language_marks in api.py line 43 avoids errors but produces a weird/unintelligible result with a super heavy English accent and pronunciation.
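
For reference, the change I tried is roughly this (reconstructed from memory, so the actual names and dict contents in api.py may differ):

```python
# Rough sketch of the language gate around api.py lines 43-45
# (reconstructed from memory; the real dict contents may differ).
language_marks = {
    "english": "EN",
    "chinese": "ZH",
}

# The naive "fix": just register French as well.
language_marks["french"] = "FR"

def check_language(language: str) -> str:
    """Mimics the check that rejected French with an error."""
    key = language.lower()
    if key not in language_marks:
        raise ValueError(f"source language {language} is not supported")
    return language_marks[key]

print(check_language("French"))  # no longer raises
```

Registering the language silences the error, but as the garbled output shows, the text front-end still phonemizes everything as if it were English.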

I guess one would need to generate source_se again, and probably mess with config.json and checkpoint.pth as well, but I could not find instructions on how to do this...?

Edit -- tried again on https://app.myshell.ai/. The result sounds French all right, but still nothing like the original reference. It would be absolutely impossible to confuse one with the other, even for someone who didn't know the person very well.

randkyp

I played with it some more and I have to agree. For actual voice _cloning_, XTTS2 sounds much, much closer to the original speaker. But the resulting output is also much more unpredictable and sometimes downright glitchy compared to OpenVoice. XTTS2 also tries to "act out" the implied emotion/tone/pitch/cadence in the input text, for better or worse.

But my use case is just to have a nice-sounding local TTS engine, and current text-to-phoneme conversion quirks aside, OpenVoice seems promising. It's fast, too.

echelon

And StyleTTS2 generalizes out of domain even better than that.

dragonwriter

> but when trying to use another reference speaker (in demo_part1), the result doesn’t sound at all like the source

I’ve noticed the same thing and I wonder if there is maybe some undocumented information about what makes a good voice sample for cloning, perhaps in terms of what you might call “phonemic inventory”. The reference sample seems really dense.

> Indeed, in api.py line 45 the only two source languages allowed are English and Chinese

If you look at the code, outside of what the model itself does, it relies on the surrounding infrastructure converting the input text to the International Phonetic Alphabet (IPA) as part of the process, and only has that implemented for English and Mandarin (though cleaners.py has broken references to routines for Japanese and Korean).
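
Conceptually, the front-end's job looks something like this toy sketch (NOT the actual cleaners.py code, just the shape of the pipeline):

```python
# Toy illustration: text -> IPA -> integer symbol ids for the model.
# Both tables are made up; the real pipeline uses a proper G2P step.
TOY_IPA = {"cat": "kæt", "hat": "hæt"}
SYMBOL_IDS = {ch: i for i, ch in enumerate("kæth ")}

def text_to_ids(text: str) -> list[int]:
    """Phonemize each word, then look up per-symbol ids."""
    ipa = " ".join(TOY_IPA[word] for word in text.lower().split())
    return [SYMBOL_IDS[ch] for ch in ipa]

print(text_to_ids("cat hat"))  # [0, 1, 2, 4, 3, 1, 2]
```

Supporting a new language means implementing that phonemization step for it; the model only ever sees the symbol ids, which is why simply whitelisting French produces English-accented gibberish.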

causi

We're so close to me being able to open a program, feed in an epub, and get a near-human level audiobook out of it. I'm so excited.

aedocw

Give https://github.com/aedocw/epub2tts a look, the latest update enables use of MS Edge cloud-based TTS so you don't need a local GPU and the quality is excellent.

causi

Interesting. Seems like a pain to get running but I'll give it a shot. Thanks.

jurimasa

I think this is creepy and dangerous as fuck. Not worth the trouble it will be.

_zoltan_

you're gonna be REALLY surprised out there in the real world.

CamperBob2

Other sites beckon.

aftbit

I want to try chaining XTTS2 with something like RVCProject. The idea is to generate the speech in one step, then clone a voice in the audio domain in a second step.

fellowniusmonk

I'm running it locally on my M1. The reference voices sound great, trying to clone my own voice it doesn't sound remotely like me.

epiccoleman

I have got to build or buy a new computer capable of playing with all this cool shit. I built my last "gaming" PC in 2016, so its hardware isn't really ideal for AI shenanigans, and my Macbook for work is an increasingly crusty 2019 model, so that's out too.

Yeah, I could rent time on a server, but that's not as cool as just having a box in my house that I could use to play with local models. Feels like I'm missing a wave of fun stuff to experiment with, but hardware is expensive!

sangnoir

> its hardware isn't really ideal for AI shenanigans

FWIW, I was in the same boat as you and decided to start cheap: old game machines can handle AI shenanigans just fine with the right GPU. I use a 2017 workstation (Zen1) and an Nvidia P40 from around the same time, which can be had for <$200 on eBay/Amazon. The P40 has 24GB VRAM, which is more than enough for a good chunk of quantized LLMs or diffusion models, and is in the same perf ballpark as the free Colab tensor hardware.
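
As a rule of thumb for what fits in 24GB (back-of-envelope only; real usage adds overhead for the KV cache and activations, which I'm waving at with a flat 20% here):

```python
# Back-of-envelope VRAM estimate: weights only, plus an assumed ~20%
# overhead for runtime allocations. Real numbers vary by runtime.
def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(round(model_vram_gb(13, 4), 1))  # 13B at 4-bit: ~7.8 GB
print(round(model_vram_gb(33, 4), 1))  # 33B at 4-bit: ~19.8 GB, still under 24
```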

If you're just dipping your toes without committing, I'd recommend that route. The P40 is a data center card and expects higher airflow than desktop GPUs, so you probably have to buy a "blow kit" or 3D-print a fan shroud, and make sure it fits inside your case. This will be another $30-$50. The bigger the fan, the quieter it can run. If you already have a high-end gamer PC/workstation from 2016, you can dive into local AI for $250 all-in.

Edit: didn't realize how cheap P40s now are! I bought mine a while back.

beardedwizard

I would love a recommendation for an off the shelf "gpu server" good for most of this that I can run at home.

macrolime

Mac Studio or macbook pro if you want to run the larger models. Otherwise just a gaming pc with an rtx 4090 or a used rtx 3090 if you want something cheaper. A used dual 3090 can also be a good deal, but that is more in the build it yourself category than off the shelf.

lakomen

I'm clueless about AI, but here's a benchmark list https://www.videocardbenchmark.net/high_end_gpus.html

Imo the 4070 Super is the best value and draws the least power of the top 10, at 220 W.

So anything with one and some ECC RAM, aka AMD, should be fine. Intel non-Xeons need the expensive W680 boards and very specific RAM per board.

ECC because you wrote server. We're professionals here after all, right?

batch12

So I went really cheap and got a Thunderbolt dock for a gpu and a secondhand Intel nuc that supported it. So far it has met my needs.

holtkam2

I'm in exactly the same boat. Yeah, ofc you can run LMs on cloud servers, but my dream project would be to build a new gaming PC (mine is too old) and serve an LM on it, then serve an AI agent app which I can talk to from anywhere.

Has anyone had luck buying used GPUs, or is that something I should avoid?

ssl-3

I bought some used GPUs during the last mining thing. They all worked fine except for some oddball Dell models that the seller was obviously trying to fix a problem on (and they took them back without question, even paying return shipping).

And old mining GPUs are A-OK, generally: despite warnings from the peanut gallery for over a decade that mining ruins video cards, this has never really been the case. Profitable miners have always tended to treat these things very carefully, undervolt (and often underclock) them, and pay attention to them so they could be run as cool and inexpensively as possible. Killing cards is bad for profits, so they aimed towards keeping them alive.

GPUs that were used for gaming are also OK, usually. They'll have fewer hours of hard[er] work on them, but will have more thermal cycles as gaming tends to be much more intermittent than continuous mining is.

The usual caveats apply as when buying anything else (used, "new", or whatever) from randos on teh Interwebz. (And fans eventually die, and so do thermal interfaces (pads and thermal compound), but those are all easily replaceable by anyone with a small toolkit and half a brain worth of wit.)

zoklet-enjoyer

I forgot all about Vocaroo!

tonnydourado

I might be missing something, but what are the non-questionable, or at least non-evil, uses of this technology? Because every single application I can think of is fucked up: porn, identity theft, impersonation, replacing voice actors, stealing the likeness of voice actors, replacing customer support without letting the customers know you're using bots.

I guess you could give realistic voices to people who lost their voices by using old recordings, but there's no way this is a market that justifies the investment.

paczki

The ability to use my own voice in other languages so I can do localization on my own youtube videos would be huge.

With game development as well, being able to be my own voice actor would save me an immense amount of money that I do not have and give me even more creative freedom and direction of exactly what I want.

It's not ready yet, but I do believe that it will come.

Capricorn2481

People are already doing this and it was hugely controversial in The Finals

fennecfoxy

There's a big thing atm around human creative work being replaced by AI; even if a voice isn't cloned but just generated by AI, it gets people frothing, as if human hands in factories weren't replaced by robots, or a human rider on horseback wasn't replaced by an engine.

rbits

I feel like the main reason, though, was that they could easily afford real voice actors.

AnonC

> what are the non-questionable, or at least non-evil, uses of this technology?

iPhone Personal Voice [1] is one. It helps people who are physically losing their voice, and the people around them, to still have that voice in a different way. Apple does require long voice samples of various texts for this, though.

[1]: https://www.youtube.com/watch?v=ra9I0HScTDw

tonnydourado

That's kinda what I was thinking of in the second paragraph. Still, gotta be a small market.

tompetry

I have the same concerns generally. But one non-evil popped into my head...

My dad passed away a few months ago. Going through his things, I found all of his old papers and writings; they have great meaning to me. It would be so cool to have them as audio files, my dad as the narrator. And for shits, try it with a British accent.

This may not abate the concerns, but I'm sure good things will come too.

block_dagger

Serious question: is this a healthy way to treat ancestors? In the future will we just keep grandma around as an AI version of her middle aged self when she passes?

tompetry

Fair question. People have kept pictures, paintings, art, belongings, etc. of their family members for countless generations. AI will surely be used to create new ways to remember loved ones. I think that is a big difference from "keeping around grandma as an AI version of herself" and pretending they are still alive, which I agree feels unhealthy.

mynameisash

I think everyone's entitled to their opinion here. As for me, though: my brother died at 10 years old (back in the 90s). While there are some home videos with him talking, it's never for more than a few seconds at a time.

Maybe a decade ago, I came across a cassette tape that he had used to record himself reading from a book for school - several minutes in duration.

It was incredibly surprising to me how much he sounded like my older brother. It was a very emotional experience, but personally, I can't imagine using that recording to bootstrap a model whereby I could produce more of his "voice".

Narishma

There's a Black Mirror episode about something like that, though I don't remember the details.

ssl-3

It seems unhealthy for us to sort out what is and is not a healthy way for someone else to mourn, or to remember, their own grandmother.

It is healthier for us to just let others do as they wish with their time without passing judgement.

gremlinsinc

it worked for Superman; he seemed well adjusted after talking to his dead parents.

hypertexthero

Not sure if this is related to this tech, but I think it is worthwhile: The Beatles - Now And Then - The Last Beatles Song (Short Film)

https://www.youtube.com/watch?v=APJAQoSCwuA

CuriouslyC

Text to speech is very close to being able to replace voice actors for a lot of lower budget content. Voice cloning will let directors and creators get just the sound they want for their characters, imagine being able to say "I want something that sounds like Harrison Ford with a French accent." Of course, there are going to be debates about how closely you can clone someone's voice/diction/etc, both extremes are wrong - perfect cloning will hurt artists without bringing extra value to directors/creators, but if we outlaw things that sound similar the technology will be neutered to uselessness.

unraveller

Years ago, The Indiana Jones podcast show made a feature-length radio adventure drama (with copyright blessing for the music/story of Indiana Jones), and it has a voice actor who sounds 98% exactly like Harrison Ford. No one was hurt by it, because cultural artefacts rely on mass distribution first.

https://archive.org/details/INDIAN1_20190413?webamp=default

tonnydourado

That's basically replacing voice actors and stealing their likeness: both are arguably evil, and mentioned. So, I haven't missed them.

P.S.: "but what about small, indie creators" that's not who's gonna embrace this the most, it's big studios, and they will do it to fuck over workers.

CuriouslyC

As someone involved in the AI creator sphere, that's a very cold take. Big studios pay top shelf voice talent to create the best possible experience because they can afford it. Do you think Blizzard is using AI to voice Diablo/Overwatch/Warcraft? Of course not. On the other hand, there are lots of small indie games being made now that utilize TTS, because the alternative is no voice, the voice of a friend or a very low quality voice actor.

Do I want to have people making exact clones of voice actors? No. The problem is that if you say "You can't get 90% close to an existing voice actor" then the technology will be able to create almost no human voices, it'll constantly refuse like gemini, even when the request is reasonable. This technology is incredibly powerful and useful, and we shouldn't avoid using it because it'll force a few people to change careers.

ben_w

I disagree on three of your points.

It is creating a new and fully customisable voice actor that perfectly matches a creative vision.

To the extent that a skilled voice actor can already blend existing voices together to get, say, French Harrison Ford, for it to be evil for a machine to do it would require it to be evil for a human to do it.

Small indie creators have a budget of approximately nothing, this kind of thing would allow them to voice all NPCs in some game rather than just the main quest NPCs. (And that's true even in the absence of LLMs to generate the flavour text for the NPCs so they're not just repeating "…but then I took an arrow to the knee" as generic greeting #7 like AAA games from 2011).

Big studios may also use this for NPCs to the economic detriment of current voice actors, but I suspect this will be a tech which leads to "induced demand"[0] — though note that this can also turn out very badly and isn't always a good thing either: https://en.wikipedia.org/wiki/Cotton_gin

[0] https://en.wikipedia.org/wiki/Induced_demand

allannienhuis

I don't disagree with the thought that large companies are going to try to use these technologies too, with typical lack of ethics in many cases.

But some of this thinking is a bit like protesting the use of heavy machinery in roadbuilding/construction because it displaces thousands of people with shovels. One difference with this type of technology is that using it doesn't require massive amounts of capital like the heavy machinery example, so more of those shovel-wielders will be able to compete with those who are only bringing capital to the table.


drusepth

Super-niche use-case: our game studio prototyped a multiplayer horror game where we played with cloning player voices to be able to secretly relay messages to certain players as if it came from one of their team-mates (e.g. "go check out below deck" to split a pair of players up, or "I think Bob is trying to sabotage us" to sow inter-player distrust, etc).

Less-niche use-case: if you use TTS for voice-overs and/or NPC dialogue, there can still be a lot of variance in speech patterns / tone / inflections / etc when using a model where you've just customized parameters for each NPC -- using a voice-clone approach, upon first tests, seems like it might provide more long-term consistency.

Bonus: in a lot of voiced-over (non-J)RPGs, the main character is text-only (intentionally not voiced) because they're often intended to be a self-insert of the player (compared to JRPGs which typically have the player "embody" a more fleshed-out player with their own voice). If you really want to lean into self-insert patterns, you could have a player provide a short sample of their voice at the beginning of the game and use that for generating voice-overs for their player character's dialogue throughout the game.

Terr_

The idea of a personalized protagonist voice is interesting, but I'd worry about some kind of uncanny valley where it sounds like myself but is using the wrong word-choices or inflections.

Actually, getting it to sound "like myself" in the first place is an extra challenge! For many people even actual recordings sound "wrong", probably because your self-perception involves spoken sound being transmitted through your neck and head, with a different blend of frequencies.

After that is solved, there's still the problem of bystanders remarking: "Is that supposed to sound like you? It doesn't sound like you."

Kinrany

Being able to show friends your internal voice would be cool.

Jordrok

> Super-niche use-case: our game studio prototyped a multiplayer horror game where we played with cloning player voices to be able to secretly relay messages to certain players as if it came from one of their team-mates (e.g. "go check out below deck" to split a pair of players up, or "I think Bob is trying to sabotage us" to sow inter-player distrust, etc).

That's an insanely cool idea, and one I hadn't really considered before. Out of curiosity, how well did it work? Was it believable enough to fool players?


pksebben

There's a huge gap in uses where listenable, realistic voice is required but the text to be spoken is not predetermined. Think AI agents, NPCs in dynamically generated games, etc. These aren't really doable with the current crop of TTS, because the models either take too long to run or sound awful.

I think the bulk of where this stuff will be useful isn't really visible yet b/c we haven't had the tech to play around with enough.

There is also certainly a huge swath of bad-actor stuff that this is good for. I feel like a lot of the problems with modern tech fall under the umbrella of "we're not collectively mature enough to handle this much power", and I wish there were a better solution for all of that.

gremlinsinc

eh, you mean the solution isn't, so here's even more power... see you next week!

pksebben

If I'm getting your meaning, that is - we don't have a fix for "we can but we ought not to", then yeah I see what you mean.

Even that is not straightforward, unfortunately. There's this thing where the tech is going to be here, one way or the other. What we may have some influence on isn't whether it shows up, but who has it.

...which brings me to what I see as the core of contention between anyone conversing in this space. Who do you think is a bigger threat? Large multinational globocorps or individual fanatics, or someone else that might get their hands on this stuff?

From my perspective, we've gone a long time handing over control of "things" (society, tax dollars, armaments, law) to the larger centralized entities (globocorps and political parties and Wall Street and so on). Things throughout this period have become incrementally worse and worse, and occasionally (here's looking at you, September '08) rapidly worse.

Put in short, huge centralized single-points-of-failure are the greater evil. "Terrorists", "Commies", "Malcontents" (whatever you wanna call folks with an axe to grind) make up a much lesser (but still present!) danger.

So that leaves us in a really awkward position, right? We have these things that (could) amount to digital nukes (or anything on a spectra towards such) and we're having this conversation about whether to keep going on them while everyone knows full well that on some level, we can't be trusted. It's not great and I'll be the first to admit that.

But, I'm much more concerned about the people with strike drones and billions of dollars of warchest having exclusive access to this stuff than I am about joey-mad-about-baloney having them.

Joey could do one-time damage to some system, or maybe fleece your grandma for her life savings (which is bad, and I'm not trying to minimize it).

Globocorp (which in this scenario could actually be a single incredibly rich dude with a swarm of AI to carry out his will) could institute a year-round propaganda machine that suppresses dissent while algorithmically targeting whoever it deems "dangerous" with strike drones and automated turrets. And we'd never hear about it. The 'media AI' could just decide not to tell us.

So yeah, I'm kinda on the side of "do it all, but do it in the open so's we can see it". Not ideal, but better than the alternatives AFAICT.

mostrepublican

I used it to translate a short set of tv shows that were only available in Danish with no subtitles in any other language and made them into English for my personal watching library.

The episodes are about 95% just a narrator with some background noises.

Elevenlabs did a great job with it and I cranked through the 32 episodes (about 4 mins each) relatively easily.

There is a longer series (about 60 hours) only in Japanese that I want to do the same thing for, but I don't want to pay Elevenlabs prices to do it.

ukuina

OpenAI TTS is very competitively priced: $15/1M chars.
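
For the 60-hour series mentioned above, a back-of-envelope estimate (the narration density is my assumption, roughly 150 words/min at ~5 chars/word):

```python
# Rough cost of synthesizing 60 hours at $15 per 1M characters.
# chars_per_minute is an assumed narration density, not a measurement.
hours = 60
chars_per_minute = 150 * 5
total_chars = hours * 60 * chars_per_minute
cost_usd = total_chars / 1_000_000 * 15
print(total_chars, f"${cost_usd:.2f}")  # 2700000 $40.50
```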

victorbjorklund

Why is replacing voice actors evil? How is it worse than replacing any other job using a machine/software?

buu700

Agreed. I think the framing of "stealing" is a needlessly pessimistic prediction of how it might be used. If a person owns their own likeness, it would be logical to implement legal protections for AI impersonations of one's voice. I could imagine a popular voice actor scaling up their career by using AI for a first draft rendering of their part of a script and then selectively refining particular lines with more detailed prompts and/or recording them manually.

This raises a lot of complicated issues and questions, but the use case isn't inherently bad.

machomaster

The problem is not about replacing actors with technology. It is about replacing the particular actors with their computer-generated voice. It's about likeness-theft.

thorum

OpenVoice currently ranks second-to-last in the Huggingface TTS arena leaderboard, well below alternatives like styletts2 and xtts2:

https://huggingface.co/spaces/TTS-AGI/TTS-Arena

(Click the leaderboard tab at the top to see rankings)

KennyBlanken

Having gone through almost ten rounds of the TTS Arena, XTTS2 has tons of artifacts that instantly make it sound non-human. OpenVoice doesn't.

It wouldn't surprise me if people recognize different algorithms and purposefully promote them over others, or alter the page source with a userscript to see the algorithm before listening and click the one they're trying to promote. Looking at the leaderboard, it's obvious there's manipulation going on, because Metavoice is highly ranked but generates absolutely terrible speech with extremely unnatural pauses.

Elevenlabs was scarily natural sounding and high quality; the best of the ones I listened to so far. Pheme's speech overall sounds really natural, but has terrible sound quality, which is probably why it isn't ranked higher. If Pheme had higher-quality audio, it'd probably match Elevenlabs.

carbocation

I would like to see the new VoiceCraft model on that list eventually (weights released yesterday, discussion at [1]).

1 = https://news.ycombinator.com/item?id=39865340

m463

I haven't tried openvoice, but I did try whisperspeech and it will do the same thing. You can optionally pass in a file with a reference voice, and the tts uses it.

https://github.com/collabora/whisperspeech

I found it to be kind of creepy hearing it in my own voice. I also tried it with a friend of mine who has a French Canadian accent, and strangely the output didn't have his accent.

ckl1810

Is there a benchmark for compute needed? Curious to see if anyone is building / has built a Zoom filter, or Mobile app, whereby I can speak English, and out comes Chinese to the listener.

abdullahkhalids

HG TTS arena is asking if the text-to-speech sounds human like. That's somewhat different from voice cloning. A model might produce audio which is less human like, but still sound closer to the target voice.
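
Speaker similarity is typically scored separately, e.g. as the cosine similarity between speaker embeddings from a verification model. A minimal sketch with made-up vectors (real embeddings come from something like an x-vector extractor run on the reference and the clone):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two speaker embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dim embeddings; real ones are typically 192-512 dims.
reference = [0.2, 0.9, 0.1]
clone = [0.25, 0.85, 0.15]
print(round(cosine_similarity(reference, clone), 3))  # 0.996
```

A model could score high here (close to the target voice) while still sounding less human-like, which is exactly the gap the arena's question doesn't capture.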

Jackson__

As someone who has used the arena maybe ~3 times, the subpar voice quality in the demo linked immediately stood out to me.

c0brac0bra

I'd like to see Deepgram Aura on here.

muglug

It’s funny how a bunch of models use Musk’s voice as proof of their quality, given how disjointed and staccato he sounds in real life. Surely there are better voices to imitate.

iinnPP

Proving the handling of uncommon speech is definitely a great example to use alongside the other common and uncommon speech examples on the page.

m463

I would imagine folks with really great voices like Morgan Freeman¹ or Don LaFontaine² are already voice actors, and using their voice might be seen as soul stealing (or competing with their career)

1: https://en.wikipedia.org/wiki/File:Morgan_freeman_bbc_radio4...

2: https://youtu.be/USrkW_5QPa0

ianschmitz

Especially with all of the crypto scams using Elon’s voice

futureshock

In related news, Voicecraft published their model weights today.

https://github.com/jasonppy/VoiceCraft

smusamashah

The quality here is good (very good if I can actually run it locally). Per the GitHub docs, it looks like we can:

https://github.com/myshell-ai/OpenVoice/blob/main/docs/USAGE...

486sx33

Still a bit robotic but better highs and lows for sure. The Catalog is huge! Thanks for posting

andrewstuart

If someone can come up with a voice cloning product that I can run on my own computer, not the cloud, and if it’s super simple to install and use, then I’ll pay.

I find it hard to understand why so much money is going into ai and so many startups are building ai stuff and such a product does not exist.

It’s got to run locally because I’m not interested in the restrictions that cloud voice cloning services impose.

Complete, consumer level local voice cloning = payment.

dsign

I couldn't agree more.

I've tried some of these ".ai" websites that do voice cloning, and they tend to use the following dark patterns:

- Demand you create a cloud account before trying.

- Sometimes, demand you put in your credit card before trying.

- Always: the product is crap. Sometimes it does voice cloning sort of as advertised, but you have to wait for training and queued execution, because cloud GPUs are expensive and they need to manage a queue for a cloud product. At least that part could be avoided if they shipped a VST plugin one could run locally, even if it's restricted to Nvidia GPUs[^2].

[^1]: To those who say "but the devs must get paid": yes. But subscriptions misalign incentives, and some updates are simply not worth the minutes of productivity lost waiting for their shoehorned installation.

[^2]: Musicians and creative types are used to spending a lot on hardware and software, and there are inference GPUs that are cheaper than some sample libraries.

andrewstuart

I don’t mind if the software is a subscription it just has to be installable and not spyware garbage.

Professional consumer level software like a game or productivity app or something.

andoando

I made a voice cloning site, https://voiceshift.ai. No login, nothing required. It's a bit limited, but I can add any of the RVC models. Working on a feature to let you upload your own model.

I can definitely make it a local app.

riwsky

How do you figure subscriptions misalign incentives? The alternative, selling upgrades, incentivizes devs to focus on shiny new stuff that demos well. I'd rather they focus on making something I get value out of consistently.

dsign

- A one-off payment makes life infinitely simpler for accounting purposes. In my jurisdiction, a software license owned by the business is an asset and shows as that in the balance sheet, and can be subject to a depreciation schedule just as any other asset.

- Mental peace: if product X does what I need right now and I can count on being able to use product X five years from now to do the same thing, then I'm happy to pay a lump sum that I see as an investment. Even better, I feel confident that I can integrate product X into my workflows. I don't get that with a subscription product in the hands of a startup seeking product-market fit.

jeroenhd

RVC does live voice changing with a little latency: https://github.com/RVC-Project/Retrieval-based-Voice-Convers...

The product isn't exactly spectacular, but most of the work seems to have been done. It just needs someone to go over the UI and make it more stable, really.

rifur13

Wow perfect timing. I'm working on a sub-realtime TTS (only on Apple M-series silicon). Quality should be on-par or better than XTTS2. Definitely shoot me a message if you're interested.

smusamashah

But this one is supposed to be runnable locally. It has complete instructions on GitHub, including downloading the models, installing Python, setting it up, and running it.

andrewstuart

I'm wanting to download an installer and run it - consumer level software.

endisneigh

I see these types of comments all the time, but the fact is, folks at large who wouldn’t use the cloud version won’t pay. The kind of person who has a 4090 to run these sorts of models would just figure out how to do it themselves.

The other issue is that paying for the software once doesn’t capture as much of the value as a pay-per-use model, so if you wanted to sell the software you’d either have to say it can only be used for personal use, or make it incredibly expensive to account for the fact that a competitor would just use it.

Suppose there were such a thing - then folks may complain that it’s not open source. Then it’s open sourced, but then there’s no need to pay.

In any case, if you’re willing to pay $1000 I’m sure many of us can whip something up for you. Single executable.

andoando

I have a 2070 and it works just fine, as long as you're not doing real time conversion. You can try it on https://voiceshift.ai if youre curious.

washadjeffmad

I mean this at large, but I just can't get over this "sell me a product" mentality.

You already don't need to pay; all of this is happening publication to implementation, open and local. Hop on Discord and ask a friendly neon-haired teen to set up TorToiSe or xTTS with cloning for you.

Software developers and startups didn't create AGI, a whole lot of scientists did. A majority of the services you're seeing are just repackaging and serving foundational work using tools already available to everyone.

TuringTest

I agree, but playing devil's advocate, it's true that people without the time and expertise to set up their own install can find this packaging valuable enough to pay for it.

It would be better for all if, in Open Source fashion, this software had a FLOSS easy-to-install packaging that provided for basic use cases, and developers made money by adapting it to more specific use cases and toolchains.

(This one is not FLOSS in the classic sense, of course. The above would be valid for MIT-licensed or GPL models).

lancesells

The answer is convenience. Why use dropbox when you can run Nextcloud? You can say the same thing about large companies. Why does Apple use Slack (or whatever they use) when they could build their own? Why doesn't Stripe build their own data centers?

If I had a need for an AI voice for a project I would pay the $9 a month, use it, and be done. I might have the skills to set this up on my machine but it would take me hours to get up to speed and get it going. It just wouldn't be worth it.

nprateem

You can extend that reasoning to anything, but time and energy are limited.

ipsum2

How much would you pay? I can make it.

andrewstuart

You can’t sell this because the license doesn’t allow it.

pmontra

"This repository is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which prohibits commercial usage"

People could pay somebody for the service of setting up the model on their own hardware, then use the model for non commercial usage.

GTP

Doesn't allow it yet, but on the readme, they write "This will be changed to a license that allows Free Commercial usage in the near future". So someone will soon be able to sell it to you.

ipsum2

Not using this model, but something similar. How much would you pay?

ddtaylor

Bark is MIT licensed for commercial use.

palmfacehn

XTTS2 works well locally. Maybe someone else here can recommend a front end.

undefined

[deleted]

Havoc

Tried it locally - can't get anywhere near the clone quality of the clips on their site.

Not even close. Perhaps I'm doing something wrong...

joshspankit

Does anyone know which local models are doing the “opposite”: Identify a voice well enough to do speaker diarization across multiple recordings?

Teleoflexuous

Whisper doesn't, but WhisperX <https://github.com/m-bain/whisperX/> does. I am using it right now and it's perfectly serviceable.

For reference, I'm transcribing research-related podcasts, meaning speech doesn't overlap much, which would otherwise be a problem for WhisperX from what I understand. There are also a lot of accents, which are straining for Whisper (though it's still doing well) but surely help WhisperX. It did have issues with figuring out the number of speakers on its own, but that wasn't a problem for my use case.

joshspankit

WhisperX does diarization, but I don’t see any mention of it fulfilling my ask, which makes me think I didn’t communicate it well.

Here’s an example for clarity:

1. AI is trained on the voice of a podcast host. As a side effect it now (presumably) has all the information it needs to replicate the voice

2. All the past podcasts can be processed with the AI comparing the detected voice against the known voice which leads to highly-accurate labelling of that person

3. Probably a nice side bonus: if two people with different registers are speaking over each other the AI could separate them out. “That’s clearly person A and the other one is clearly person C”
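The matching in step 2 is essentially how speaker-verification systems work: enroll a reference embedding for the known voice, then score each diarized segment against it. A toy numpy sketch of that scoring step, with made-up low-dimensional embeddings (real speaker embeddings are typically 192-512 dims, produced by a trained model) and cosine similarity as an assumed metric:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_segment(segment_emb, enrolled, threshold=0.7):
    """Return the enrolled speaker whose embedding best matches
    the segment, or 'unknown' if nothing clears the threshold."""
    best_name, best_score = "unknown", threshold
    for name, emb in enrolled.items():
        score = cosine_similarity(segment_emb, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled voices (e.g. the podcast host and a guest).
enrolled = {
    "host": np.array([0.9, 0.1, 0.0, 0.1]),
    "guest": np.array([0.1, 0.9, 0.2, 0.0]),
}
segment = np.array([0.85, 0.15, 0.05, 0.12])  # embedding of one diarized segment
print(label_segment(segment, enrolled))  # prints "host"
```

The threshold is what lets the same loop handle step 3's "that's clearly person A" case: segments that match no enrolled voice well fall through to "unknown" instead of being forced onto the nearest speaker.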

c0brac0bra

You can check out PicoVoice Eagle (paid product): https://picovoice.ai/docs/eagle/

You pass N number of PCM frames through their trainer and once you reach a certain percentage you can extract an embedding you can save.

Then you can identify audio against the set of identified speakers and it will return percentage matches for each.

Drakim

On my wishlist would be a local model that can generate new voices based on descriptions such as "rough detective-like hard boiled man" or "old fatherly grampa"

mattferderer

You might be interested in this cool app that Microsoft made that I don't think I've seen anyone talk about anywhere called Speech Studio. https://speech.microsoft.com/

I don't recall their voices being the most descriptive, but they had a lot. They also let you lay out a bunch of text and have different voices speak each line, just like a movie script.

satvikpendem

Whisper can do diarization but not sure it will "remember" the voices well enough. You might simply have to stitch all the recordings together, run it through Whisper to get the diarized transcript, then process that how you want.

beardedwizard

Whisper does not support diarization. There are a number of projects that try to add it.

c0brac0bra

Picovoice says they do this but it's a paid product. It supposedly runs on the device but you still need a key and have to pay per minute.

Fripplebubby

Really interesting! Reading the paper, it sounds like the core of it is broken into two things:

1. Encoding speech sounds into an IPA-like representation, decoding IPA-like into target language

2. Extracting "tone color", removing it from the IPA-like representation, then adding it back in into the target layer (emotion, accent, rhythm, pauses, intonation)

So as a result, I am a native English speaker, but I could hear "my" voice speaking Chinese with similar tone color to my own! I wonder, if I recorded it, and then did learn to speak Chinese fluently, how similar it would be? I also wonder whether there is some kind of "tone color translator" that is needed to translate the tone color markers of American English into the relevant ones for other languages, how does that work? Or is that already learned as part of the model?
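The paper's tone-color converter is a learned model, but the "remove it, then add it back" idea can be caricatured with simple feature statistics, in the spirit of AdaIN-style style transfer. This is purely illustrative (the statistics-as-tone-color framing is my assumption, not the paper's actual method):

```python
import numpy as np

def extract_tone_color(features):
    """Caricature 'tone color' as per-dimension mean/std statistics."""
    return features.mean(axis=0), features.std(axis=0)

def remove_tone_color(features):
    """Normalize features so the speaker-specific statistics are gone,
    leaving only the 'content' (the IPA-like part, in the paper's terms)."""
    mean, std = extract_tone_color(features)
    return (features - mean) / (std + 1e-8)

def apply_tone_color(neutral, target_mean, target_std):
    """Re-inject a target speaker's statistics into neutral features."""
    return neutral * target_std + target_mean

rng = np.random.default_rng(0)
source = rng.normal(loc=2.0, scale=3.0, size=(1000, 8))   # 'source speaker' features
target = rng.normal(loc=-1.0, scale=0.5, size=(1000, 8))  # 'reference speaker'

neutral = remove_tone_color(source)
t_mean, t_std = extract_tone_color(target)
converted = apply_tone_color(neutral, t_mean, t_std)
# 'converted' now carries the target's statistics while keeping the
# source's normalized structure
```

On the cross-language question: in this caricature nothing language-specific happens to the tone color at all, which would suggest it transfers as-is; whether the real model learns a language-dependent mapping on top of that is exactly the open question raised above.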

burcs

There's a "vocal fry" aspect to all of these voice cloning tools, a sort of uncanny valley where they can't match tones correctly, or get fully away from this subtle Microsoft Sam-esque breathiness to their voice. I don't know how else to describe it.

blackqueeriroh

Yeah, this is why I’m nowhere near worried about this replacing voice actors for the vast majority of work they currently get paid for.

akashkahlon

So soon every novel will be a movie, made by the author themselves using Sora for the visuals and voices bought from all the suitable actors.

Multicomp

I hope so. Then those of us who want to tell a story (writers, whether of comics, novels, short stories, screenplays, teleplays, or whatever) will be able to compete more and more on the quality and richness of the story itself, rather than having the choice of medium dictated, as it is for most storytellers today, by how difficult each one is to render.

Words on page are easier than still photos, which are easier than animation, which are easier than live-action TV, which are easier than IMAX movies etc.

If we move all of the rendering of the media into automation, then it's just about who can come up with the best story content, and you can render it whatever way you like: book, audiobook, animation, live-action TV, web series, movie, miniseries, whatever you like.

Granted - the AI will come for us writers too; it already is in some cases. Then the Creator Economy itself will eventually be consumed by 'who can meme the fastest' on an industrial scale for daily events on the one end, and, on the other, by who has taken the time to paint / playact / do the rendering out in the real world.

But I sure would love to be able to make a movie out of my unpublished novel, and realistically today, that's impossible in my lifetime. Do I want the entire movie-making industry to die so I and others like me can have that power? No. But if the industry is going to die / change drastically anyways due to forces beyond my control, does that mean I'm not going to take advantage of the ability? Still no.

IDK. I don't have all the answers to this.

But yes, this (amazingly accurate voice cloner after a tiny clip?! wow) product is another step towards that brave new world.

rcarmo

This can’t really do a convincing Sean Connery yet.

_zoltan_

just buy more NVDA. :-)

OpenVoice: Versatile instant voice cloning - Hacker News