
YaLM-100B: Pretrained language model with 100B parameters

narrator

I love Yandex. They are the best search engine by far for politically controversial topics. They also release a language model to benefit everyone even if it says politically incorrect stuff. They also name their projects "cocaine", perhaps to prevent western competitors from using them.

You look at OpenAI and how they don't release their models mainly because they fear "bad people" will use them for "bad stuff." This is the trend in the west. Technology is too powerful, we must control it! Russia is like... Hey, we are the bad guys you're talking about so who are we keeping this technology from? The west has bigger language models than we do, so who cares. The same goes for their attitude to copyright, patents, etc.: they don't care, because that's not how their economy makes money. Cory Doctorow's end of general-purpose computing[1], with everything locked down, is fast approaching. I'm glad the Russians are around and aren't very interested in that project.

[1]https://csclub.uwaterloo.ca/resources/tech-talks/cory-doctor...

abra0

>They are the best search engine by far for politically controversial topics.

This is an interesting take given the political censorship in Russia (for some ineffable reason much harsher now than it used to be 4 months ago) and cases like https://twitter.com/kevinrothrock/status/1510944781492531208.

narrator

Search Google and Yandex for "2020 election fraud." The results are VERY different. The Zach Vorhies leak shows that Google regularly does blatant censorship for political purposes.[1]

[1]https://www.breitbart.com/tech/2021/08/19/google-whistleblow...

px43

Totally, just like how if you want to find out what really happened in Tiananmen Square in 1989, your best bet is Baidu. Totally different results than what Google gives you!

I sincerely have deep respect for Yandex for releasing this, and Baidu for some of the amazing research they've released over the years, but both are deeply deeply beholden to their local governments in a way that is incomparable to the relationship between Google and the US government.

Remember that the NSA was literally digging up and tapping fiber around Google data centers in a secret program called MUSCULAR because they didn't think Google was being cooperative enough when handing over data that they were requesting.

https://en.wikipedia.org/wiki/MUSCULAR_(surveillance_program...

alphabetting

Google: 118M results. Top link is the best resource on verified election fraud cases.

Yandex: 9M results. The top two links are pretty suspect. Top link promotes Dinesh D'Souza's 2000 Mules documentary in the banner which at best is a one-sided take on election fraud. At worst, very misleading.

https://i.imgur.com/n5a9LOd.png

skrebbel

I don't know man, "thegatewaypundit.com" as a top reputable source? Seems to me like it's not "honest two-sided results" but just, well, a rather random mix of results of widely varying quality. Mad Altavista vibes!

What I'm trying to say is that even if you believe that "was the 2020 US election stolen?" is worth debating, which it isn't, the yandex results are shit.

jhgb

> This is the trend in the west. Technology is too powerful, we must control it!

I take it that you're either too young or too untraveled to be aware of the level of state control of technology in "the east". Xerographic machines, mimeographs, and other similar reprographic devices used to be highly controlled machinery behind the Iron Curtain. This is absolutely not something exclusive or even peculiar to "the west".

risyachka

>> They are the best search engine by far for politically controversial topics

FYI, they are a Russian company, subject to ALL of Russia's censorship laws (and oh boy do they have a lot of them).

>> perhaps to prevent western competitors from using them

The irony here. All Yandex products are exact copies of western ones, adjusted to the local market.

cpursley

Actually they're not; some of the Yandex products are actually better and pretty innovative (ignoring the political stuff). Maps and Go are especially good. Ditto with Russian banking apps, which put American bank apps to shame.

orbital-decay

>some of the Yandex products are actually better and pretty innovative (ignoring the political stuff). Maps and Go are especially good.

Yeah, the same Yandex Maps that stopped showing state borders recently, as they are now "more focused on natural objects", in their words.

jhgb

Wait, so you're saying it's a Russian company breaking Russian laws and getting away with it?

make3

It's widely accepted that OpenAI doesn't release its models in order to make money from them, not because they really think they would be harmful.

Moldoteck

They literally have a blocklist of sites that the Kremlin doesn't like, and it acts somewhat like Yandex News in this respect. The difference here is more that Google filters stuff for the USA and Yandex for Russia.

throwaway_1928

> Hey, we are the bad guys you're talking about so who are we keeping this technology from?

Laughed out loud!

winddude

I feel like you could be paid, or coerced by some country...

chinathrow

Is this sarcasm?

braingenious

This is one of the funniest threads I’ve ever seen on this website. People are yelling at each other about the CIA and the legitimacy of Israel and Assange and the definition of fascism and… anything that pisses anybody off about international politics in general. In a thread about a piece of software that’s (to me and likely many others) prohibitively expensive to play around with.

Anyway I hope somebody creates a playground with this so I can make a computer write a fan fiction about Kirby and Solid Snake trying to raise a human baby on a yacht in the Caspian Sea or whatever other thing people will actually use this for.

braingenious

What if Street Sharks were mormon missionaries? How would Emily Dickinson describe Angie Dickinson in a poem? How would Ramses II have used Bitcoin?

THESE are the important things to talk about when it comes to this topic.

lumost

To add a voice of skepticism: the recent rush to open source these models may indicate that the tens of millions spent training these things have relatively poor ROI. There may be a hope that someone else figures out how to make them commercially useful.

dandiep

There are tons of commercial uses for these models. I've been experimenting with an app targeted toward language learners [1]. We use large language models to:

- Generate vocabulary - e.g. for biking: handlebars, pedals, shifters, etc

- Generate translation exercises for a given topic a learner wants to learn about - e.g. I raised the seat on my bike

- Generate questions for the user - e.g. What are the different types of biking?

- Provide more fluent ways to say things - I went on my bike to the store -> I rode my bike to the store

- Provide explanations of the difference in meaning between two words

And we have fine-tuned smaller models to do other things like grammar correction, exercise grading, and embedded search.
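As a rough illustration of the prompting behind the generation tasks above (a minimal sketch with a placeholder model and prompt, not necessarily what we use in production):

    # Sketch: prompting a generic causal LM for topic vocabulary.
    # Model choice and prompt wording are illustrative placeholders.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

    prompt = (
        "List useful vocabulary for talking about biking, one word per line:\n"
        "handlebars\npedals\n"
    )
    out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.7)
    print(out[0]["generated_text"])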

These models are going to completely change the field of education in my opinion.

1) https://squidgies.app - be kind it's still a bit alpha

ketzu

I started to work on something similar but way behind your project. I really believe AI models can help us as humans learn better! Do you have a blog or any other writeups on how you approached these problems?

tikwidd

How does the vocabulary generation work?

MivLives

We're using these where I work (a large retail site) to help make filler text for generated articles. Think the summary blurb no one reads at the top. As for why we're writing these articles (we have a paid team that writes them too), the answer is SEO. This is probably the only thing I've seen done with a text model in production usage. I'm not 100% sure what model they're using.

tobr

Sorry but every part of that sounds so terrible.

MivLives

Yeah, I'm not a huge fan of it. I'll never forget the look in our UX person's eyes when she realized that our team doesn't exist to make the customers' experience better (there's a ton of other teams for that) but to make Googlebot's experience better. Right now we're in the process of getting publishers you've heard of to write blurbs for best-of lists, but we're supplying the products, so it's not really a best-of list.

I can't say I'm a big fan, but my team is great and I don't have time to look for a job right now.

varispeed

I hate this so much. These tools are getting better, so you often realise only halfway through that you are reading AI text. Then you have to flush your brain and take a mental note to never visit that site again.

MivLives

I'm not a big fan either. At least the pages are just like:

- useless ai generated intro text

- ten products that actually are the best reviewed per category by users

- brief ai blurb on product

- 3 actual user reviews of the product

So even with the ai text there's still some benefit to the page.

BonoboIO

Content made for machines. Probably a billion dollar industry.

fab1an

Content made for machines serving humans made by machines pretending to be human

jquery

Made by machines, for machines. It’s poetic.

jandrese

You just know that some Amazon listings are written by GANs.

MasterScrat

HuggingFace will soon release their BigScience model: https://twitter.com/BigScienceLLM/status/1539941348656168961

"a 176 billion parameter transformer model that will be trained on roughly 300 billion words in 46 languages"

So anything smaller than that will become worthless. That may be a factor: companies have a last chance to make a PR splash before it happens.

Read more about it: https://bigscience.huggingface.co/blog/model-training-launch...

lairv

"worthless" huh, not everyone can afford inference of a ~500gb models, depending on the the speed/rate you need you might definitely go for smaller model

But maybe your sentence was more about "after the BigScience model, open-sourcing anything smaller than that will be useless", which isn't necessarily true either, because there is still room to improve parameter efficiency, i.e. smaller models with comparable performance.

rahidz

Not necessarily; only ~30% of the dataset is in English, so it likely won't be as good as a smaller model trained solely or mostly on English text.

https://bigscience.huggingface.co/blog/building-a-tb-scale-m...

TaylorAlexander

It kinda seems like a model trained on multiple languages would to some extent be better at English than a model trained only on English? I mean so much of English comes from other languages, and understanding language as a concept transcends any specific language. Of course there are limits and it needs good English vocabulary and understanding, but I feel the extra languages would help rather than hinder English performance.

vgel

My guess is they're mostly vanity projects for large tech companies. While the models have some value, they also serve as interesting research projects and help them attract ML talent to work on more profitable models like ad-targeting.

lostmsu

They did not publish benchmarks about quality of the models, which is very suspicious.

I personally squinted hard when they said removing dropout improves training speed (which is in iterations per second), but said nothing about how it affects the performance (rate of mistakes in inference) of the trained model.

jasonphang

I agree that the lack of benchmarks makes it hard to determine how valuable this model is. But on the topic of dropout, dropout has been dropped for the pretraining stage of several other large models. Off the top of my head: GPT-J-6B, GPT-NeoX-20B, and T5-1.1/LM.
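Concretely, "dropping dropout" just means zeroing the dropout probabilities in the model config; a minimal sketch in Hugging Face terms (illustrative only, not the exact configs those projects used):

    # Pretraining without dropout: set all dropout probabilities to zero.
    # Field names follow Hugging Face's GPT-2-style configs; illustrative only.
    from transformers import GPT2Config

    config = GPT2Config(
        resid_pdrop=0.0,  # dropout on residual/MLP outputs
        embd_pdrop=0.0,   # dropout on token/position embeddings
        attn_pdrop=0.0,   # dropout on attention weights
    )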

gfodor

An equally plausible frame is that once a technology has been replicated across several companies, it makes sense to open source it, since the marginal competitive advantage lies in the possible resulting external network effects.

I don't know if that's the right way to think about the open sourcing of large language models. I just think we really can't read too much into such releases regarding their motivation.

jenny91

Yes, commoditize your complements.

mumblemumble

From what I've seen, using these huge models for inference at any kind of scale is expensive enough that it's difficult to find a business case that justifies the compute cost.

Voloskaya

Those models aren't trained with the objective of being deployed in production. They are trained to be used as teachers during distillation into smaller models that fit the cost/latency requirements for whatever scenario those big companies have. That's where the real value is.
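For context, the core of that teacher-student step is just matching the student's output distribution to the teacher's; a minimal PyTorch sketch (temperature and reduction choices are illustrative):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions and penalize the student's divergence
        # from the teacher. Scaling by T^2 keeps gradient magnitudes
        # comparable across temperatures.
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2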

f311a

Yandex uses it for search and voice assistant

HeavyStorm

Maybe training it is not that expensive?

I know from practice that it takes a really, really long time to train even a small NN (thousands of params), so you'll need a lot more hardware to train one with billions... But it's expensive to buy the hardware, not necessarily to use it. If you, for some reason, have a few hundred GPUs lying around, it might be "cheap" to do the necessary training.

Now, that's not your point - cost != price. But, still...

raducu

> If you, for some reason, have a few hundred GPUs lying around

Not to nitpick, but that is like saying that if you have a Lamborghini lying around, a Sunday trip in one is not so expensive.

can16358p

I can't think of anyone having a few hundred GPUs around unless:

- They were into Ethereum mining and quit.

- They've already built a cluster with them (e.g. in an academic setting).

- They live in a datacenter.

- They are a total psychopath.

But even assuming one magically has all those GPUs available and ready to train, I don't want to calculate the power cost of it anyway. Unless one has access to free or extremely cheap electricity it would still be very expensive.

alexb_

I have to wonder if 10 years down the line, everyone will be able to run models like this on their own computers. Have to wonder what the knock-on effects of that will be, especially if the models improve drastically. With so much of our social lives being moved online, if we have the easy ability to create fake lives of fake people one has to wonder what's real and what isn't.

Maybe the dead internet theory will really come true; at least, in some sense of it. https://www.theatlantic.com/technology/archive/2021/08/dead-...

dav_Oz

The bots/machines vs. humans discussion reminds me of that famous experiment from the 30s in which Winthrop Kellogg[0], a comparative psychologist, and his wife decided to raise their human baby (Donald) together with a chimpanzee baby (Gua) in an effort to "humanize the ape". It was set to last 5 years but was cut short after only 9 months. The explicit reason wasn't stated, only that it had successfully shown the hereditary limits of a chimpanzee within the "nature vs nurture" debate; the reticent statement reads as follows:

>Gua, treated as a human child, behaved like a human child except when the structure of her body and brain prevented her. This being shown, the experiment was discontinued

There has been a lot of speculation as to other reasons for ending the experiment so prematurely. Maybe exhaustion. One thing which seemed to dawn on the parents - if one reads carefully - is that a human baby is far superior at imitating than a chimpanzee baby, so frighteningly so that they decided to abort the experiment early on in order to prevent any irreversible damage to the development of their human child, who at that point had become far more similar to the chimpanzee than the chimpanzee had to the human.

So, I would rephrase "the internet is dead" into "the internet becomes increasingly undead", because humans condition themselves in a far more accelerated way to behave like bots than bots are potentially able to do. From the wrong side this could be seen as progress when in fact it's the opposite of progress. It sure feels that way for a lot of people, and it is a crucial reciprocal element, often overlooked/underplayed (mostly in a benign effort to reduce unnecessary complexities), when analyzing human behaviour in interactions with the environment.

[0]https://en.m.wikipedia.org/wiki/Winthrop_Kellogg#The_Ape_and...

boplicity

Case in point: recently, I've noticed that I'm getting more and more emails with the sign off "Warm regards." This is not a coincidence. It is an autosuggestion from Google. If you start signing off an email, it will automatically suggest "Warm regards." It just appears there -- probably an idea generated from an AI network. There are more and more of these algorithmic "suggestions" appearing every day, in more and more contexts. This is true for many text messaging programs: There are "common" replies suggested. How often do people just click on one of the suggested replies, as opposed to writing their own? These suggestions push us into conforming to the expectations of the algorithm, which then reinforces those expectations, creating a cycle of further pushing us into the language use patterns generated by software -- as opposed to idiosyncratic language created by a human mind.

In other words, people are already behaving like bots; and we're building more and more software to encourage such behavior.

alephxyz

Those suggestions appear in Google chat too and even if you don't click on them, the simple fact of reading the suggestion makes you much more likely to type it yourself. There's clearly a priming effect to it.

codeviking

Which is why it's important for folks to start applying AI to more interesting (but harder, more nuanced) problems. Instead of making it easier for people to write emails, or targeting ads, it should be used to help doctors, surgeons and scientists.

The problem is that these applications are less profitable, and that the companies with enough compute to train these types of models are concerned with getting more eyeballs, not making the world a better place.

remram

Those suggestions are very few so I suspect they were hand-picked.

r3trohack3r

A tangentially related thought:

Actors attempt to imitate humans. “Good acting” is convincing; the audience believes the actor is giving a reasonable response to the portrayed situation.

But the audience is also trying to imitate the actors to some degree. Like you point out, humans imitate. For some subset of the population, I’d imagine the majority of social situations they are exposed to, and the responses to situations they observe, are portrayed by actors.

At what point are actors defining the social responses that they then try to imitate? In other words, at what point does acting beget acting and how much of our daily social interactions actually are driven by actors? And is this world of actors creating artificial social responses substantially different than bots doing the same?

jdsully

This is a common phenomenon, where the fake is more believable than the real thing due to overexposure to the imitation.

Famously, the bald eagle sounds nothing like it does on TV and in the movies, and explosions are rarely massive fireballs. For human interaction it’s much harder to pin down cause and effect, but if it happens in other cases it would be very surprising for it not to happen there.

dougmwne

This is famously theorized by postmodernism. See: https://en.m.wikipedia.org/wiki/Simulacra_and_Simulation

dvirsky

Someone once wrote about how Wall Street people started behaving like the slick image projected of them in movies in the 80s, namely by Michael Douglas; before that they were more like the "boring accountant" type.

freewizard

So maybe the Turing Test is not about whether AI is smart enough, but about how stupid humans have become?

rexpop

Not stupid; imaginative and agreeable.

iforgotpassword

It's the commonly believed reason: the child started to take on habits from Gua, like the noises she made when she wanted something, and the way monkeys scratch themselves. No authoritative source for it though; it's what I was told during a lecture back in college, and I think PlainlyDifficult mentions it too in their video about it.

https://youtu.be/VP8DD9TGNlU

alcover

Nice post! But to me your analogy does not really hold: bots are the ones catching up with human conversation in an "accelerated way", feeding on a corpus that predates them. Bots are not an invariant nature that netizens imitate.

CRConrad

I sincerely regret that I had only one upvote to give you. This shit is so insidious that IMO everyone should just simply stop doing it until they've thought it through a lot more.

> ...humans condition themselves in a far more accelerated way to behave like bots than bots are potentially able to do.

Than bots can condition themselves to behave like humans, I presume. They can already behave exactly like bots. :-)

null

[deleted]

gigglesupstairs

Wow this is such a mind bending perspective. Thanks for sharing it.

tiborsaas

I think there will be a trend where model sizes shrink due to better optimization/compression while hardware specs keep increasing.

You can already see this with Chinchilla:

https://towardsdatascience.com/a-new-ai-trend-chinchilla-70b...

Comevius

That's definitely the future: personalized entertainment and social interactions will be big. I could watch a movie made for me, and discuss it with a bunch of chat bots. The future will be bubbly as hell; people will be decaying in their safe places as the hellscape rages on outside.

Peritract

> I could watch a movie made for me

We're a long, long way from this. Stringing words/images together into a coherent sequence is arguably the easy bit of creating novels/films, and computers still lag a long way behind humans in this regard.

Structuring a narrative is a harder, subtler step. Our most advanced ML solutions are improving rapidly, but often struggle with coherence over a single paragraph; they're not going to be doing satisfying foreshadowing and emotional beats for a while.

jb_s

For many movies, sure.

I'm pretty sure the Marvel franchise is shat out by an algorithm.

fumblebee

Maybe. But I think a lot of folks have a short-term memory; it was not so long ago that Word2Vec and AlexNet were SOTA. Remember when the thought of a machine besting a world-class human player at Go seemed impossible? Me too.

We've come ludicrously far since then. That progress doesn't guarantee that innovation in the space will continue at its current pace, but it sure does feel like it's possible.

null

[deleted]

importantbrian

I actually wouldn't be surprised if the technology catches up to this faster than we realize. I think the actual barrier to large scale adoption of it will be financial and social incentives.

A big reason all the major studios are moving to big franchises is that the real money is in licensing the merch. The movies and TV shows are really just there to sell more merch. Maybe this will work when we all have high quality 3d printers at our desks and we can just print the merch they sell us.

The other big barrier is social. A lot of what people watch, they watch because it was recommended to them by friends or colleagues, and they want to talk about what other people are talking about. I'm sure that there will be many people who will get really into watching custom movies and discussing those movies with chatbots, but I bet most people will still want to socialize and discuss the movies they watch with other humans. FOMO is an underestimated driver of media consumption.

axg11

> We're a long, long way from this.

We’re probably 18 months away from this. We’re probably less than 5 years away from being able to do this on local hardware. AI/ML is advancing faster than most people realise.

thatwasunusual

> Structuring a narrative is a harder, subtler step.

You can say that about many movies/series made entirely by humans today. :)

natly

We're probably a long way away from narrative, but dall-e for video is probably only a year or two away from now (they're probably training the model as we speak).

pydry

I get the feeling that creative sci-fi used to kind of help inoculate us against these kinds of futures, but it seems like there's much less of it than there used to be.

"Black mirror" was good but it's not nearly enough.

rasz

You really don't want to live in Mindwarp (a 1992 Bruce Campbell movie) or in this 114-year-old short story: https://en.wikipedia.org/wiki/The_Machine_Stops

dTal

The Machine Stops is eerily prescient - or perhaps just keenly observant of trends visible even at the time - but in fairness the humans in it are not socially isolated, as such; they do not converse with bots, but rather with each other. The primary social activity in The Machine Stops is the Zoom meeting.

I do not look forward to the day when that story becomes an optimistic view of the future.

espadrine

> I have to wonder if 10 years down the line, everyone will be able to run models like this on their own computers.

Isn’t that already the case? Sure, it costs $60K, but that is accessible to a surprisingly large minority, considering the potency of this software.

alexb_

...what? 60 thousand dollars for a dedicated computer that you can't use is not everyone, not on their own computers, and is also a crazy large amount of money for nearly everyone. Sure there are some that could, but that's not what I said.

H8crilA

Indeed. What "everyone" can use is a ~$200 smartphone, so there's a ~300x gap to be bridged.

paganel

Plus, there are the energy costs involved in running a computer worth $60k; I'm pretty sure that in the current socio-economic climate those power costs will surpass the initial acquisition cost (those $60k, that is) pretty easily.

px43

Eh, 60k is just a bit more expensive than your average car, and lots of people have cars, and that's just how things are today. I imagine capabilities will be skyrocketing and prices will fall drastically at the same time.

joshvm

You could just run this on a desktop CPU; there's nothing stopping you in principle, you just need enough RAM. A big-memory (256GB) machine is definitely doable at home. It's going to cost $1-2k for the DIMMs alone, less if you use 8x32GB, but that'll come down. You could definitely do it for less than $5k all in.
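The ~200GB figure is roughly just the weights at fp16; a quick back-of-the-envelope (ignoring activations and framework overhead):

    # Rough memory estimate for holding the weights alone.
    n_params = 100e9        # 100B parameters
    bytes_per_param = 2     # fp16
    print(f"~{n_params * bytes_per_param / 1e9:.0f} GB for the weights")  # ~200 GB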

Inference latency is a lot higher in relative terms, but even for things like image processing running a CNN on a CPU isn't particularly bad if you're experimenting, or even for low load production work.

But for really transient loads you're better off just renting seconds-minutes on a VM.

sascha_sl

From the readme, it looks like you need that RAM on your GPU.

uniqueuid

Nitpick: This uses 8x A100 which are at least $10k a piece to my knowledge. Add in the computer and you're closer to $100k.

taink

I believe you're confusing the number of A100 graphics cards used to train the model (the cluster was actually made up of 800 A100s) with the number you need to run the model:

> The model [...] is supposed to run on multiple GPUs with tensor parallelism.

> It was tested on 4 (A100 80g) and 8 (V100 32g) GPUs, [but should work] with ≈200GB of GPU memory.

I don't know what the price of a V100 is, but given $10k a piece for A100s we would be closer to the $60k estimate.

sascha_sl

And also, NVIDIA does not sell them to the consumer market whatsoever. Linus Tech Tips could only show one because someone in the audience sent theirs over for review.

kamray23

You're grossly overestimating. People who make 60k annually are getting a bit rarer nowadays; it's not like everyone can afford it. For the majority of people it'd be a multi-decade project; for a few it might only take 7 years; very few people could buy it all at once.

wellthisisgreat

What kind of computer would they be?

Can you spec it out roughly?

zackmorris

Unpopular opinion: something will stop egalitarian power for the masses. I had high hopes for multicore computing in the late 90s and early 2000s but it got blocked every step of the way by everyone doubling down on DSP (glorified vertex buffer) approaches on video cards, leaving us with the contrived dichotomy we see today between CPU and GPU.

Whatever we think will happen will not happen. A less-inspired known-good state will take its place, creating another status quo. Which will funnel us into dystopian futures. I'm just going off my own observations and life experience of the last 20 years, and the way that people in leadership positions keep letting the rest of us down after they make it.

nradov

In what sense is the dichotomy between CPU and GPU contrived? Those are designed around fundamentally different use cases. For low power devices you can get CPU and GPU integrated into a single SOC.

zackmorris

That's a good question. I wish I could answer it succinctly.

For me, the issue is that use cases and power usage are secondary to the fundamental science of computation. So it's fine to have matrix-processing stuff like OpenGL and TensorFlow, but those should be built on general-purpose hardware or else we end up with the cookie cutter solutions we have today. Want to run a giant artificial life simulation with genetic algorithms? Sorry, you can't do that on a GPU. And it turns out that most of the next-gen stuff I'm interested in just can't be done on a GPU.

There was a lot of progress on transputers and clusters (the old Beowulf cluster jokes) in the 80s and 90s. But researchers came up against memory latency issues (Amdahl's law) and began to abandon those approaches after video cards like the 3dfx Voodoo arrived around 1997.

But there are countless other ways to implement concurrency and parallelism. If you think of all the techniques as a galaxy, then GPUs are way out at the very end of one spiral arm. We've been out on that arm for 25 years. And while video games have gotten faster (at enormous personal effort by millions of people), we've missed out on the low hanging fruit that's possible on the other arms.

For example, code can be auto-parallelized without intrinsics. It can be statically analyzed to detect contexts which don't affect others, and the instructions in those local contexts could be internally spread over many cores. Like what happens in shaders.

But IMHO the greatest travesty of the modern era is that those innovations happened (poorly) in GPUs instead of CPUs. We should be able to go to the system menu and get info on our computer and see something like 1024+ cores running at 3 GHz. We should be able to use languages like Clojure and Erlang and Go and MATLAB and even C++ that auto-parallelize to that many cores. So embarrassingly parallel stuff like affine rasterization and blitters would run in a few cycles with ordinary for-loops instead of needing loops that are unrolled by hand or whatever other tedium that distracts developers from getting real work done. Like, why do we need a completely different paradigm for shaders outside of our usual C/C++/C# workflow, where we can't access system APIs or even the memory in our main code directly? That's nonsense.

And I don't say that lightly. My words are imperfect, but I do have a computer engineering degree. I know what I'm talking about, down to a very low level. Wherever I look, I just see so much unnecessary effort where humans tailor themselves to match the whims of the hardware, which is an anti-pattern at least as bad as repeating yourself. Unfortunately, the more I talk about this, the more I come off as some kind of crackpot as the world keeps rushing headlong out on the GPU spiral arm without knowing there's no there there at the end of it.

My point is that for all the progress in AI and rendering and simulation, we could have had that 20 years ago for a tiny fraction of the effort with more inspired architecture choices. The complexity and gatekeeping we see today are artifacts of those unfortunate decisions.

I dream of a day when we can devote a paltry few billion transistors on a small $100 CPU to 1000+ cores. Instead we have stuff like the Cerebras CS-2 with a trillion transistors for many thousands of dollars, which is cool and everything, but is ultimately gatekeeping that will keep today's Anakin from building C-3PO.

https://en.wikipedia.org/wiki/Multi-core_(computing)#Hardwar...

ur-whale

You're an optimist.

Before any of the things you describe happen, most states will mandate the equivalent of a carry permit to be able to freely use compute for undeclared and/or unapproved purposes.

arathore

If by running models you mean just the inference phase, then even today you can run a large family of ML models on commodity hardware (with some elbow grease, of course). The training phase is generally the one not easily replicated by non-corporations.

natly

I know it's a sort of exaggerated, paranoid thought. But these things do all come down to scale, and some parts of the world definitely could have the amount of compute available to make DALL-E-quality, full-scale videos which we might be consuming right now. It really does make you start to wonder at what point it will be rational to have zero trust that what we watch online isn't fabricated.

thelamest

Historically, hard-to-falsify documents are an anomaly; the norm was mostly socially conditional and enforced trust. Civilizations leaned and still lean on limited-trust technologies like personal connections, word of mouth, word on paper, signatures, seals, careful custody, etc. I agree losing cheap trust can be a setback; I just want to point out we’re adaptable.

ggktk

I'm predicting that the upcoming Mac Pro will be very popular among ML developers, thanks to unified memory. It should be able to fit the entire model in memory.

Combine that with the fact that PyTorch recently added support for Apple silicon GPUs.

tehsauce

The upcoming Mac Pro will sadly have pretty poor ML performance compared to even an old Nvidia GPU.

uniqueuid

Although memory capacity may matter more than speed for inference. As long as you're not training or fine-tuning, the Mac Pro / Studio may be just fine.

Apart, that is, from the fact that you can't use any of the many Nvidia-specific things; if you're dependent on CUDA, NVCUVID, AMP or other such things, that's a hard no.

ketzu

Seeing these gigantic models makes me sad that even the 4090 is supposed to stay at 24GB of RAM max. I really would like to be able to run and experiment with larger models at home.

thejosh

It's also a power issue. With the 4090 it sounds like you're going to need a much, MUCH higher-wattage PSU than you currently use... or it'll suddenly turn off as it uses 2-3x the power.

You'll need your own wiring to run your PC soon :-)

melenaboija

I think it is a stupid question, but does the power consumption processors need for inference, compared to human brains, demonstrate that there is something fundamentally wrong with the AI approach, or is it more physics-related?

I am not a physicist or biologist or anything like that, so my intuition is probably completely wrong, but it seems to me that for more basic inference operations (let's say adding two numbers) power consumption from a processor and a brain is not that different. Going by how expensive it is for computers to run inference on any NLP model, humans should have to eat carbs continuously just to talk.

agalunar

Around room temperature, an ideal silicon transistor has a 60 mV/decade subthreshold swing, which (roughly speaking) means that a 10-fold increase in current requires at least a 60 mV increase in gate potential. There are some techniques (e.g. tunneling) that can allow you to get a bit below this, but it's a fairly fundamental limitation of transistors' efficiency.
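For reference, the standard expression behind that number (at T = 300 K, kT/q ≈ 25.9 mV):

    % Subthreshold swing of a MOSFET limited by thermionic emission
    SS = \frac{\partial V_{GS}}{\partial \log_{10} I_D}
       = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{dep}}{C_{ox}}\right)
       \;\geq\; \ln(10)\,\frac{kT}{q} \approx 59.6~\mathrm{mV/decade}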

[It's been quite a while since I studied this stuff, so I can't recall whether 60 mV/decade is a constant for silicon specifically or all semiconductors.]

googlryas

> but it seems to me that for more basic inference operations (let's say adding two numbers) power consumption from a processor and a brain is not that different

Sure it is - it's hard to see from just 2 numbers, but let's multiply that by a billion: how much energy does it take a computer to add two billion numbers? Far less than the energy it would take a human brain to add them.

visarga

The AI is much faster than the brain; if you batch requests, the cost goes down.

PartiallyTyped

I bought a 1500W PSU for around $150 soon after the previous crypto collapse - one of the best purchases I've made.

Dylan16807

The RAM is not using all that much of the power, and I think that scales more with bus width than capacity.

perryizgr8

Nvidia deliberately keeps their consumer/gamer cards limited in memory. If you have a use for more RAM, they want you to buy their workstation offerings like the RTX A6000, which has 48GB of GDDR6 RAM, or the A100, which has 80GB.

justinlloyd

What NVIDIA predominantly does on their consumer cards is limit the RAM sharing, not the RAM itself. The inability for each GPU to share RAM is the limiting factor. It is why I have RTX A5000 GPUs and not RTX 3090 GPUs.

Voloskaya

If you don't care about inference speed being in the 1-5sec range, then that should be doable with CPU offloading, with e.g. DeepSpeed.

qayxc

200+ GiB of RAM still sounds like a pretty steep hardware requirement.

Voloskaya

If you have an NVMe drive, DeepSpeed can offload there as a second tier once the RAM is full.

175 GB aggregated across RAM and NVMe is in the realm of a home deep learning workstation.

As long as you aren’t too fussy about inference speed of course.

justinlloyd

Oh yeah, that $750 for 256GB of DDR-4 is going to totally break the bank.

josu

For the people that didn't click on the link:

>but is able to work with different configurations with ≈200GB of GPU memory in total which divide weight dimensions correctly (e.g. 16, 64, 128).

out_of_protocol

Take a look at Apple's M1 Max - a lot of fast unified memory. No idea how useful it is, though.

jeroenhd

What's the difference between Apple's unified memory and the shared memory pool Intel and AMD integrated GPUs have had for years?

In theory you could probably assign a powerful enough iGPU a few hundred gigabytes of memory already, but just like Apple Silicon the integrated GPU isn't exactly very powerful. The difference between the M1 iGPU and the AMD 5700G is less than 10% and a loaded out system should theoretically be tweakable to dedicate hundreds of gigabytes of VRAM to it.

It's just a waste of space. An RTX 3090 is 6 to 7 times faster than even the M1, and the promised performance increase of about 35% for the M2 will mean nothing when the 4090 is released this year.

I think there are better solutions for this. Leveraging the high throughput of PCIe 5 and resizable BAR support might be used to quickly swap out banks of GPU memory, for example, at a performance decrease.

One big problem with this is that GPU manufacturers have incentive to not implement ways for consumers GPUs to compete with their datacenter products. If a 3080 with some memory tricks can approach an A800 well enough, Nvidia might let a lot of profit slip through their hands and they can't have that.

Maybe Apple's tensor chip will be able to provide a performance boost here, but it's stuck on working with macOS and the implementations all seem proprietary so I don't think cross platform researchers will really care about using it. You're restricted by Apple's memory limitations anyway, it's not like you can upgrade their hardware.

zaptrem

Apple gets significant latency and frequency benefits from placing their LPDDR4 on the SoC itself.

thereddaikon

Unified memory is and always has been a cost-cutting tactic. It's not a feature, no matter how much the manufacturers who use it try to claim it is.

postalrat

Apple is selling M1's with > 200gb ram? Have a link so I can buy one?

MrBuddyCasino

Wondering if Apple Silicon will bring large amounts of unified main memory with high bandwidth to the masses?

The Mac Studio maxes out at 128GB currently for around $5K, so 256GB isn't that far out and might work with the ~200GB Yandex says is required.

Havoc

Perhaps on quantity. Substantially slower though - around 3x from what I can tell... a substantial roadblock if you're training models that take weeks.

MrBuddyCasino

I meant for inference, not training. People just want to run the magic genies locally and post funny AI content.

EugeneOZ

Can Apple Silicon's unified memory be an answer?

lostmsu

I downloaded the weights and made a .torrent file (also a magnet link, see the raw README.md). Can somebody else who downloaded the files as well double-check the checksums?

https://github.com/lostmsu/YaLM-100B/tree/Torrent

MichaelRazum

It's just crazy how much it costs to train such models. As I understand it, 800 A100 cards would cost about $25,000,000, without considering the energy costs for 61 days of training.

StevenWaterman

Lambda labs will rent you an 8xA100 instance for 3 months for $21,900. That would put it at around $2m
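Spelling that out with the 800-GPU figure from the parent comment:

    # Rough rental cost: 800 A100s = 100 eight-GPU instances for the full run.
    instances = 800 // 8
    cost_per_instance_3_months = 21_900
    print(f"~${instances * cost_per_instance_3_months / 1e6:.1f}M")  # ~$2.2M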

MichaelRazum

Still a bit too expensive for my side project ;) To be honest, it seems only big corps can do that kind of stuff. By the way, if you try to do hyperparameter tuning or some exploration of the architecture, it becomes, I'd guess, 10x or 100x more expensive.

bmcahren

AWS has them in US-EAST1 for $9.83/hr spot with 96 CPU cores, 1152GB of ram, 8 A100s with 320 GB of RAM, 8TB of NVME, and 19 Gbps of EBS bandwidth to load your data quickly.

https://aws.amazon.com/ec2/instance-types/p4/

p4d.24xlarge

An alternative is the p3.16xlarge for 8 V100s with 256GB of GPU RAM but you might as well get the A100s since it's only $0.50/hr cheaper

semitones

https://coreweave.com/ offers some of the cheapest GPU compute out there

refulgentis

16,000,000 at MSRP

londons_explore

For those of us without 200GB of GPU RAM available... How possible is it to do inference loading it from SSD?

Would you have to scan through all 200GB of data once per character generated? That doesn't actually sound too painful - 1 minute per character seems kinda okay.

And I guess you can easily do lots of data parallelism, so you can get 1 minute per character on lots of inputs and outputs at the same time.

toxik

These models are not character-based, but token-based. The problem with CPU inference is the need for random access to 250 GiB of parameters, meaning immense paging and orders of magnitude slower than normal CPU operation.

I wonder how bad it comes out with something like Optane?

amelius

It's not really random access. I bet the graph can be pipelined such that you can keep a "horizontal cross-section" of the graph in memory all the time, and you scan through the parameters from top to bottom in the graph.
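In a transformer that "cross-section" is essentially one decoder layer at a time; a toy sketch of the idea (the per-layer weight files and layer constructor are hypothetical):

    import torch

    def forward_streaming(hidden_states, layer_files, make_layer):
        # Keep only one layer's weights in memory at a time and stream the
        # rest from disk. layer_files / make_layer are hypothetical helpers.
        for path in layer_files:
            layer = make_layer()                     # empty module of the right shape
            layer.load_state_dict(torch.load(path))  # read this layer's weights
            with torch.no_grad():
                hidden_states = layer(hidden_states)
            del layer                                # free weights before the next layer
        return hidden_states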

toxik

Fair point, but you’ll still be bounded by disk read speed on an SSD. The access pattern itself matters less than the read cache being << the parameter set size.

guywhocodes

I wonder if you can't do that LSH trick to turn it into a sparse matrix problem and run it on CPU that way.

julienfr112

What about 250GB of RAM and using a CPU?

hnechochamber2

    $ dd if=/dev/zero of=/swapfile bs=1G count=250 status=progress
    $ chmod 600 /swapfile
    $ mkswap -U clear /swapfile
    $ swapon /swapfile

pflanze

If you bother to set the permissions, I suggest doing it in a way that doesn't leave a time window during which the file is still unprotected (note that non-privileged processes just need to open the file during that window; they can keep reading even after your chmod has run). Also, I'm not sure what the point of `-U clear` was; that sets the UUID for the swap, so better to leave it at the default random one?

    $ ( umask 077; dd if=/dev/zero of=/swapfile bs=1G count=250 status=progress )
    $ mkswap /swapfile
    $ swapon /swapfile

jstimpfle

Is there a reason why it is required to fill the swapfile with zeroes here? Normally you'd see something like "dd of=/swapfile bs=1G seek=3 count=0", creating a file of size 3G but with no space allocated (yet). It's much quicker to complete the setup this way.

Aardwolf

Way too slow on CPU unfortunately

But this does make me wonder if there's any way to allow a graphics card to use regular RAM in a fast way? AFAIK built-in GPUs inside CPUs can, but those GPUs are not powerful enough.

yarandex

Assuming running on CPU is memory-bandwidth limited, not CPU-limited, it should take about 200GB / (50GB/sec) = 4 seconds per character. Not too bad.

julienfr112

Slow, but is it still practical? Taking minutes to generate a few words can still be useful for testing or for certain low-usage use cases.

easytiger

I thought cuda had a unified memory system? Maybe I misunderstood

null

[deleted]

m00dy

well, I can call this "the real open ai".

wongarsu

Now we just need someone to figure out how to compress the model to get similar performance in 10B parameters.

I assume some of the services that offer GPT-J APIs will pick this up, but it doesn't look cheap or easy to get this running.

pembrook

Side note: Yandex search is awesome, and I really hope they stay alive forever. It's the only functional image search nowadays, after our Google overlords neutered their own product out of fear over lawyers/regulation and a disdain for power users.

You can't even search for images "before:date" in Google anymore.

whywhywhywhy

Yandex Image Search today is what Google Image Search should have been.

At the end of the day, I’ll use what actually gets the job done.

The same goes for OpenAI and Google AI. If you don't ever actually release your stuff and let others use it, and end up paralyzed in fear at what your models may do, then someone else is going to release the same tech. At this rate it seems like that'll be Chinese or Russian companies who don't share your sensibilities at all, and their models will be the ones that end up productized.

sereja

IMO the main reason these companies don't release their models is not ethical concerns but money:

- NVIDIA sells GPUs and interconnect needed for training large models. Releasing a pretrained LM would hurt sales, while only publishing a teaser paper boosts them.

- Google, Microsoft, and Amazon offer ML-as-a-service and TPU/GPU hardware as a part of their cloud computing platforms. Russian and Chinese companies also have their clouds, but they have low global market share and aren't cost-efficient, so nobody would use them to train large LMs anyway.

- OpenAI are selling their models as an API with a huge markup over inference costs; they are also largely sponsored by the aforementioned companies, further aligning their interests with them.

Companies that release large models are simply those who have nothing to lose by doing so. Unfortunately, you need a lot of idle hardware to train them, and companies that have it tend to also launch a public cloud with it, so there is a perpetual conflict of interests here.

SXX

OpenAI should just rebrand since nothing they do is actually open.

daniel-cussen

You know 100 years ago you could just buy uranium openly? Leo Szílard hustled up 200 kilograms, pleted, in the 30's.

jowday

The “ethical concerns” thing is just a progressive-sounding excuse for why they’re not going to give their models away for free. I guarantee you those models are going to be integrated into various Google products in some form or another.

gaudat

This reminded me of a shitpost comparing Google and Yandex.

https://desuarchive.org/g/thread/78144754/#78145600

psyc

I regularly use it for a sample of what Google and Bing are intentionally omitting.

memorable

I agree with this. Back when I was still addicted to porn, Yandex Images was the only one that seemed to find relevant and useful links.

whoami_nr

FWIW, https://same.energy/ seems to work fine for me

jeanlucas

A 500-day-old product still in beta? I hope they do well.

Kye

Extended betas used to be Google's thing.

hdjjhhvvhga

> Google overlords neutered their own product out of fear over lawyers/regulation

What kind of lawyers/regulation do you have in mind? If anything, I'd expect the opposite: lawyers and copyright holders should be grateful for such a tool, which - when it was still working - allowed you to trace websites using your images illegally.

Now they all use Yandex for this purpose, with relatively good results.

323

hdjjhhvvhga

Oh I see. What I'm looking for is the reason why they broke reverse image search. It was working well many years ago, but some time after that they switched it to some strange image classifier (I upload an image of an apple to find exactly the same image so I can track its license or origin, and it says "possibly an image of an apple" - oh thank you Google, I didn't know that).

tablespoon

> You misunderstood parent post. It's about Google not being sued for discrimination.

Who's suing them and on what grounds? If they made changes, it's probably for PR reasons, not legal ones.

Also not all of these seem "fixed" e.g.:

> https://www.theguardian.com/technology/2016/apr/08/does-goog...

Article from 2016, but results look very similar today: https://www.google.com/search?q=unprofessional+hair&source=l...

thereddaikon

IIRC it was mostly from groups like Getty Images. They and other image licensing companies didn't want Google showing their images in search results. They claimed it was copyright infringement, and given the absolute state of IP law in the US, they could have made Google's life very difficult.

hdjjhhvvhga

We're talking about reverse search, right? (Because "normal" image search still kind of works; it's reverse search that is completely broken.) In this case, you already have the copyrighted image, and if you find out that the same image is on Getty Images, then all the better, as you can check its license. Also, it's better for GI as it gives them more exposure, and the kind of companies who use GI are very unlikely to pirate images.

omniglottal

Couldn't compliance with a robots.txt file have prevented all of this?

null

[deleted]

schizo89

I hope one day it will be possible to run this kind of model at home.

Hendrikto

If you live in a datacenter, it already is!

alexpotato

You already have access to thousands of machines right now from your home computer.

Naval Ravikant put it best here: https://twitter.com/naval/status/1002106977273565184

rocgf

By the time it's possible to run this at home, the big companies will have models way bigger than this...

rapnie

Or maybe the AI will own big companies that build bigger models for it. /s

Akronymus

Well, it used to be impossible to render on anything not a mainframe in a reasonable time.

The day will come when we will be able to.

cal85

Speaking of which... I built a gaming PC a few years ago but I never use it these days. I want to install Linux on it and start playing around with machine learning.

Can anyone recommend any open source machine learning project that would be a good starting point? I want one that does something interesting (whether using text, images, whatever), but simple/efficient enough to run on a gaming PC and see some kind of results in hours, not months. I'm not sure what I want to do with ML yet, I just know I'm interested, and getting something up and running is likely to enthuse me to start playing and researching further.

My spec is: GeForce RTX 2080 Ti (11GB), a 24-core AMD Ryzen Threadripper, and 128GB RAM. I'd be willing to spend on a new graphics card if it would make all the difference. I am a competent coder and familiar with Python but my experience with ML is limited to fawning over things on HN. Any recommendations gratefully received!

schizo89

I would recommend auditing Stanford courses in the following order:

1. CS231n Machine Vision https://www.youtube.com/playlist?list=PLkt2uSq6rBVctENoVBg1T...

2. CS234 Reinforcement Learning https://www.youtube.com/watch?v=FgzM3zpZ55o&list=PLoROMvodv4...

3. CS330 Meta Learning https://www.youtube.com/watch?v=0rZtSwNOTQo&list=PLoROMvodv4...

Those will get you on track with general concepts of reasoning, AI engineering, and learning itself.

Language models are a bit of a headache for me because they're in a different domain, at the intersection with linguistics and the humanities, but here's a good course:

https://www.youtube.com/watch?v=rha64cQRLs8&list=PLoROMvodv4...

Those are all free and high-quality but require a lot of brain power

lannisterstark

I was about to comment exactly the same thing. Stuff like this makes me feel so far behind, because there's no way I can run this lol.

f6v

The hardware they mention can be rented from cloud providers. It’s just that it’s not very cheap.

albertzeyer

If your disk has enough space to store the model, I think in theory you could run it, using the disk to store state. But it will be slow. I'm not sure how slow though, or whether anyone has implemented this. It actually should not be too difficult.

redox99

Disk makes no sense considering RAM is pretty cheap. But even then RAM is way too slow (and the communication overhead way too high). You probably get like a 100x slowdown or more.

lostmsu

I think you are overestimating the compute and I/O for this model. If you assume it is RAM-bandwidth bound, then with single-channel top-end DDR4 you will get an inference time that is a low multiple of ~8 seconds per token (200 GB / 25 GB/s). In a workstation you can have 8 channels.
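Spelled out, the estimate is just weight bytes divided by memory bandwidth (numbers are rough):

    # Per-token latency if inference is purely memory-bandwidth bound
    # (every weight read once per token). Rough numbers.
    weights_gb = 200
    for channels in (1, 2, 8):
        bandwidth_gbs = 25.6 * channels  # DDR4-3200 is ~25.6 GB/s per channel
        print(f"{channels} channel(s): ~{weights_gb / bandwidth_gbs:.1f} s/token")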

irthomasthomas

I think that's unlikely, barring some breakthrough that takes us beyond the limits of silicon.

haswell

Couldn’t the same thing be said about most things we do on our phones these days?

Won’t incremental advancement cover this eventually? (i.e. no major breakthrough required, just patience).