Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

adam_arthur

So what's the level of effort to create ChatGPT equivalent products?

Is it something where we'll have 100s of competing AIs, or is it gated to only a few large companies? Not up to date on current training/querying costs.

Can these models feasibly be run locally?

Given the large number of competitors to ChatGPT already announced, I fail to see how the space will be easily defensible or monetizable (despite the large value add, competitors can easily undercut each other)

johnc1

> Can these models feasibly be run locally?

Actually you can; it even works without a GPU. Here's a guide on running BLOOM (the open-source GPT-3 competitor of similar size) locally: https://towardsdatascience.com/run-bloom-the-largest-open-ac...

The problem is performance:

- if you have GPUs with > 330GB VRAM combined, it'll run fast
- otherwise, you'll run from RAM or NVMe, but very slowly: generating one token every few minutes or so (depending on RAM size / NVMe speed)

The future might be brighter: fp8 already exists and halves the RAM requirements (although it's still very hard to get it running), and there is ongoing research on fp4. Even that would still require 84GB of VRAM to run...
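The RAM arithmetic behind those numbers is simple; a minimal sketch, assuming BLOOM's ~176B parameters and counting the weights only (activations and KV cache add more on top):

```python
# Back-of-the-envelope memory needed just to hold BLOOM's ~176B weights
# at various precisions. Exact published figures differ slightly
# depending on parameter count and GB vs GiB.
PARAMS = 176e9

def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Memory for the weights alone, in GB (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"fp{bits}: {weight_memory_gb(PARAMS, bits):.0f} GB")
# fp16 ≈ 352 GB, fp8 ≈ 176 GB, fp4 ≈ 88 GB
```

This is roughly consistent with the figures in the comment: fp8 halves the fp16 requirement, and fp4 halves it again to under 90 GB.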

Towaway69

From guide linked above:

> It is remarkable that such large multi-lingual model is openly available for everybody.

Am I the only one thinking that this remark is an insight into societal failure? The model has been trained on globally, freely available content; anyone who has published on the Web has contributed.

Yet the wisdom gained from our collective knowledge is assumed to be withheld from us. As the original remark was one of surprise, the author's (and our own) assumption is that trained models are expected to be kept from us.

ornornor

I think it’s similar to how search engines keep their ranking formulas secret, and you can’t run your own off a copy of their index.

Yet we also all contributed to it by publishing (and feeding it, for instance by following googles requirements for micro data). But we don’t own any of it.

lacasito25

How much money do you think GPT-3 training cost?

Dylan16807

If it fits in system memory, is it still faster on GPU than CPU? Does that involve swapping out one layer at a time? Otherwise I'm very curious how it handles the PCIe latency.

Enough system memory to fit 84GB isn't all that expensive...

tempay

Yes, the connection between system memory and the GPU isn’t fast enough to keep the compute units fed with data to process. Generally PCIe latency isn’t as much of a problem as bandwidth.

adam_arthur

Pretty cool!

Honestly even if it were to take a few minutes per response, that's likely sufficient for many use cases. I'd get value out of that if it allowed bypassing a paywall. I'm curious how these models end up being monetized/supported financially, as they sound expensive to run at scale.

The required disk space seems the biggest barrier for local.

afro88

If it's a few minutes per token you might be waiting a lot longer for a full response: https://blog.quickchat.ai/post/tokens-entropy-question/

I also wonder how OpenAI etc. provide access to these for free. Reminds me of the adage from when Facebook rose to popularity: "if something is free, 'you' are the product". Perhaps it's to gather lots more conversational training data for fine-tuning.

JellyBeanThief

Crowd-funded AI training coming soon to Patreon?

logicallee

> if you have GPUs with > 330GB VRAM, it'll run fast

What kind of GPU's have that that are available to consumers, how much would such a kit cost roughly?

spyder

They mean multiple GPUs in parallel with a combined VRAM of that size. So around 4 x NVIDIA A100 80GB, which you can get for around $8.40/hour in the cloud, or 7 x NVIDIA A6000 or A40 48GB for around $5.50/hour.

So not exactly cheap or easy yet for the everyday user, but I believe the models will become smaller and more affordable to run. These are just the "first" big research models, focused on demonstrating usefulness; after that, more focus can go to size and speed optimizations. There are multiple methods and a lot of research into making them smaller: distillation, converting to lower precision, pruning the less useful weights, sparsification. Some achieve around a 40% size reduction and 60% speed improvement with minimal accuracy loss; others achieve 90% sparsity. So there is hope of running these or similar models on a single, albeit powerful, computer.

uni_rule

You'd basically need a rack-mount server full of NVIDIA H100 cards (80GB VRAM, about $40,000 each). So... good luck with that? On the relatively cheap end, used NVIDIA Tesla cards are kinda cheap, with 24GB ones on architectures from a few years ago going for ~$200. That's still nearly $3,000 worth of cards, not counting the rest of the computer. This isn't really something you can run at home without a whole "operation" going on.

flockonus

fp4 ?= floating point with 4 bits??? I was already mind-blown by 8-bit floats; how can you fit any float precision into 4 bits?

Dylan16807

For weights, the order of magnitude is the important part. And the sign bit. So you can get pretty good coverage with only 16 values.
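A toy illustration of that point: with 1 sign bit and 3 exponent bits you already get 16 values spanning several orders of magnitude. (Real fp4 formats differ in detail; this is only to show the coverage.)

```python
# A toy 4-bit "float": 1 sign bit + 3 exponent bits, no mantissa.
# 16 representable values covering several powers of two, which is
# often enough for weights where the order of magnitude matters most.
values = sorted(
    sign * 2.0**exp
    for sign in (-1, 1)
    for exp in range(-4, 4)  # 8 exponents: 2^-4 .. 2^3
)
print(values)  # 16 values from -8.0 up to 8.0
```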

wokwokwok

> Can these models feasibly be run locally?

Bluntly, no.

The models which are small enough to run locally perform so badly it’s not worth bothering.

To run inference on the large models that perform decently, you need the equivalent of two or three top-end graphics cards.

If you're serious about looking into it now, consider looking at this project that lets you run a bunch of independent machines as a cluster for inference using Bloom:

https://github.com/bigscience-workshop/petals/wiki/Launch-yo...

(You'll need around 200GB of GPU memory across the machines in the swarm)

corobo

How badly is bad? What sort of output are we talking?

I am asking as I once had a Markov-chain IRC bot* and while it often struggled to string together a sentence, it was quite hilarious sometimes. Absolutely pointless other than the occasional laugh.

Can it form sentences or are those small models completely unusable for anything?

I'm not thinking OpenAI level uses - sort of compare a Postgres cluster to a SQLite file (not literally, conceptually I guess). Can it be used for single tasks in any way?

Could it figure out how to map search terms to URLs for a knowledge base type thing?

Forgive me if these are silly questions. The extent of my knowledge in this field is asking ChatGPT questions and going "that's so cool" when it answers.

* Your phone's predictive text except it finishes the sentence itself based on a word someone in chat used so that it felt on topic.

In my case it also learned how to form sentences from other people talking in chat, in hindsight it's amazing I never had a Tay issue.

https://en.m.wikipedia.org/wiki/Tay_(bot)
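For reference, the kind of Markov-chain bot described above fits in a few lines. A minimal word-level sketch (training lines are made up):

```python
import random
from collections import defaultdict

# Learn bigram transitions from chat lines, then ramble from a seed word,
# in the spirit of the IRC bot described above.
def train(lines):
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def ramble(chain, seed, max_words=10):
    out = [seed]
    while len(out) < max_words and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

chain = train(["the bot talks nonsense", "the bot learns from chat"])
print(ramble(chain, "the"))
```

As the comment says: it can string a sentence together, occasionally hilariously, but there is no understanding behind it.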

ggerganov

I was recently playing with the GPT-2 and GPT-J models. Results are often nonsensical for any practical purpose, but I think they can be used for making something fun - similar to your IRC bot idea.

If you are interested in running these models yourself without having a beefy GPU, you can try my custom inference implementation. It's in pure C/C++ without any 3rd party dependencies, runs straight on the CPU and builds very easily. I think it is relatively well optimised. For example, on a MacBook M1 Pro I can run GPT-2 XL (1.5B params) at 42ms/token and GPT-J / GPT-JT (6B params) at 125ms/token.

Here are a couple of generated examples using GPT-J:

https://github.com/ggerganov/ggml/tree/master/examples/gpt-j

These are examples using a zero-shot prompt, where the model auto-completes a text given a starting prompt. You can try to make a conversation bot with a few-shot prompt, but it's not great. Probably the model needs some fine-tuning for that to become feasible.
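A minimal sketch of what such a few-shot prompt looks like: a few example exchanges are prepended so the model imitates the Q/A pattern instead of free-associating. The examples here are made up for illustration.

```python
# Hypothetical few-shot examples; in practice you'd pick ones close
# to your target task.
EXAMPLES = [
    ("What is the capital of France?", "Paris."),
    ("What is 2 + 2?", "4."),
]

def build_prompt(question: str) -> str:
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return f"{shots}\nQ: {question}\nA:"

print(build_prompt("Who wrote Hamlet?"))
```

The model then continues the text after the final "A:", hopefully in the same terse style as the examples.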

wokwokwok

> or are those small models completely unusable for anything?

Sadly, they really offer almost no value.

For the effort, you’re better off with an NLP framework like spacy.

You can play with the small neo gpt models on hugging face, eg. https://huggingface.co/EleutherAI/gpt-neo-125M

…but, the tldr is they’re cute to play with, but practically, the content they can generate is short, inconsistent and full of errors.

visarga

FLAN T5 shows promising signs, but it doesn't get even to 50% of GPT-3 performance.

anigbrowl

How much of this is the language vs the vast amount of passably accurate domain knowledge? ChatGPT etc. seem magic because they can answer questions about virtually anything with a high degree of plausibility. It often gets specific facts wrong, but the general contours are correct. Many of us know a lot of trivia/specialist knowledge, but I don't think anyone is as broadly informed as ChatGPT appears to be. It's not clear where the language ends and the encyclopedic knowledge starts, but the latter must be taking up a very large amount of the space in the model.

visarga

There have been attempts to separate factual knowledge from language knowledge - for example DeepMind's RETRO, which uses a search index of 1T tokens. RETRO manages to reach GPT-3 performance on some tasks with a 20x smaller model. I believe smaller models are more useful for extractive and classification tasks than creative text generation.
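A hedged sketch of the retrieval idea behind RETRO: instead of memorising facts in the weights, look up relevant text from an external corpus and prepend it to the prompt. Here a trivial keyword overlap stands in for the real nearest-neighbour search over token chunks, and the corpus is made up.

```python
# Toy two-document "index"; RETRO's real index is ~1T tokens.
CORPUS = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]

def retrieve(query: str, k: int = 1):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def augment(query: str) -> str:
    """Prepend retrieved context, so the model can condition on it."""
    context = " ".join(retrieve(query))
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(augment("What is the capital of France?"))
```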

dragonwriter

> How much of this is the language vs the vast amount of passably accurate domain knowledge?

LLMs don’t have domain knowledge, it’s all language.

adam_arthur

Hmmm, 2-3 high end GPUs implies it's likely not very far off from mainstream. Maybe runnable on the average device within 10-20 years... perhaps even sooner if the model/software can be optimized?

dahdum

> 2-3 high end GPUs implies it's likely not very far off from mainstream

Looks like FLOP/s per $ are doubling every ~3 years for high end cards, and 10x in ~10 years. So probably not that far off for desktop users.

https://www.lesswrong.com/posts/c6KFvQcZggQKZzxr9/trends-in-...
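The projection is just compounding; a quick sketch assuming the ~3-year doubling time cited in the link:

```python
# FLOP/s per dollar improvement after a given number of years,
# assuming a fixed doubling time (~3 years per the linked trend data).
def price_perf_multiplier(years: float, doubling_time_years: float = 3.0) -> float:
    return 2 ** (years / doubling_time_years)

print(round(price_perf_multiplier(10), 1))  # ~10x per decade, as stated above
```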

bemmu

I’d be surprised if there weren’t any algo breakthroughs before that to make these several times faster (10x?).

(such as are all of the weights really needed all of the time, or could you load different ones based on recent context?)

If in 10 years on top of that you’d have that 10x faster hardware as well, you might be running GPT-3s as just a subcomponent of games/apps.

lolspace

20 years?

lfkdev

Two or three top GPUs? That's basically nothing for a professional project or even an invested hobbyist.

throwifasd

[flagged]

ck2

Datasets.

The one with the largest, most personal, most obtrusive, invasive dataset will probably win.

The one that has absorbed every podcast, every YouTube video, every closed-caption text in existence, will have the most "complete" answers.

visarga

Hidden datasets can be replaced with model predictions collected from a public API. So they can be "exfiltrated" from the trained model. And we already maxed out on the accessible online text and the good quality sources.

What is going to make a difference is running models to generate more text for training, because relying on humans alone doesn't scale. For example we could be using LLMs to do brute force problem solving and then fine-tuning on solutions.

AlphaZero is the shining example of a model trained on its own generated data and surpassing us at our own game. The self generated data approach has potential to reach super human levels of performance.

ck2

How about illegal datasets like all the phone calls the NSA has been collecting domestically? Someone is going to train a private ChatGPT with that for queries.

simne

Only legally gathered, absolutely "white" datasets could win, because gray/black methods of gathering lack feedback.

You have no way to verify whether gray/black sources really gathered the data or faked it.

aljungberg

The RWKV model seems really cool. If you could get transformer-like performance with an RNN, the “hard coded” context length problem might go away. (That said, RNNs famously have infinite context in theory and very short context in reality.)

Is there a primer for what RWKV does differently? According to the Github page it seems the key is multiple channels of state with different decaying rates, giving I assume, a combination of short and long term memory. But isn’t that what LSTMs were supposed to do too?

thegeomaster

There's already research that tries to fix this problem with transformers in general, like Transformer-XL [1]. I'm a bit puzzled that I don't see much interest in getting a pre-trained model out that uses this architecture---it seems to give good results.

[1]: https://arxiv.org/abs/1901.02860

gok

T5 uses relative positional encoding

solomatov

My understanding is that RNNs aren't worse than Transformers per se; they're just slower to train, while Transformers use the GPU much more efficiently, i.e. much more of the work can run in parallel.

Hendrikto

Also slower to perform inference on. RNNs have to be much more sequential.
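A tiny scalar recurrence showing why the inference loop is inherently sequential: each hidden state depends on the previous one, so the timestep loop cannot be parallelised the way attention over a whole sequence can. (Illustration only, not a real language model.)

```python
import math
import random

random.seed(0)
w, u = 0.5, 0.8  # recurrent and input weights (scalars, for illustration)

def rnn_states(inputs):
    h, states = 0.0, []
    for x in inputs:                  # inherently one step at a time:
        h = math.tanh(w * h + u * x)  # h_t depends on h_{t-1}
        states.append(h)
    return states

states = rnn_states([random.random() for _ in range(5)])
print(len(states))  # 5 states, computed strictly in order
```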

euclaise

We also don't have evidence that they scale the way transformers do

swyx

> RNNs famously have infinite context in theory and very short context in reality.

Any sources to read more about this, please? It's the first I've heard of it.

nl

Read about "RNN Vanishing Gradients". LSTMs help here, but see eg https://medium.com/analytics-vidhya/why-are-lstms-struggling... for the problems there.
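The core of the vanishing-gradient problem is just a long product of per-step factors; a numeric illustration:

```python
# The gradient through t timesteps of an RNN is (roughly) a product of
# t per-step factors; factors below 1 shrink it exponentially with
# sequence length, which is why long-range context is lost in training.
def gradient_magnitude(per_step_factor: float, timesteps: int) -> float:
    g = 1.0
    for _ in range(timesteps):
        g *= per_step_factor
    return g

for t in (10, 100, 1000):
    print(t, gradient_magnitude(0.9, t))
# 0.9^100 ≈ 2.7e-5; by t = 1000 the signal is numerically gone
```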

solomatov

My understanding is that an LSTM is a kind of RNN.

georgehill

I am not sure this article will answer your question, but Karpathy has an article about RNNs.

https://karpathy.github.io/2015/05/21/rnn-effectiveness

swyx

It doesn't touch on the "infinite context in theory and very short context in reality" piece, which is what I was asking about.

solomatov

Naive RNNs have vanishing gradients, but LSTMs and GRUs are much better in this respect.

jszymborski

While this is true, and was a major advantage of LSTMs/GRUs, they still suffer from vanishing gradients.

w.r.t. proteins, our sequences often surpass 1500 amino acids, and that is really tough for an LSTM to train on stably.

nl

For those wondering how on earth they are getting decent results from an RNN without long-range forgetting, I don't really know either!

But they reference https://arxiv.org/abs/2105.14103 and the bottom section of https://github.com/BlinkDL/RWKV-LM has an explainer.

leodriesch

The readme does not seem to be geared towards people not familiar with the topic.

My questions:

- Is this on the "run on a consumer GPU" scale, the "run on 8 A100s" scale, or the "you can't run it yourself, ever" scale?
- How does it compare to other language models in quality/abilities?
- What is the training data?

aljungberg

It does say on there they are training it on the Pile training data. And they have this bit comparing inference with GPT2-XL:

RWKV-3 1.5B on A40 (tf32) = always 0.015 sec/token, tested using simple pytorch code (no CUDA), GPU utilization 45%, VRAM 7823M

GPT2-XL 1.3B on A40 (tf32) = 0.032 sec/token (for ctxlen 1000), tested using HF, GPU utilization 45% too (interesting), VRAM 9655M

So it looks about twice as fast for inference while using only about 80% as much VRAM. Obviously at such a small size, just 1.5B, you can run it even on consumer GPUs but you could do that with GPT2 as well. If it remains 80% of VRAM usage when scaled up, we’re still talking 282GB once it’s the size of BLOOM w/ 176B parameters. So yeah still 8x A100 40GB cards I guess. Not going to be the Stable Diffusion of LLMs.
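The arithmetic in that estimate, spelled out (assuming fp16, i.e. ~2 bytes per parameter):

```python
# If RWKV keeps needing ~80% of a comparable GPT's VRAM, scale the
# measured 7823M / 9655M ratio up to a BLOOM-sized 176B model in fp16.
ratio = 7823 / 9655                  # ≈ 0.81, from the measurements above
bloom_fp16_gb = 176e9 * 2 / 1e9      # ≈ 352 GB of weights
print(round(ratio * bloom_fp16_gb))  # ≈ 285 GB, in line with the ~282 GB estimate
```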

taktoa

I'm pretty sure those numbers are for training, not inference. I've run it on _CPU_ and gotten ~1 token per second.

zone411

The large model weights are 14B, so at 16 bits per weight, it won't quite fit on one 3090 or 4090.
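The quick check behind that claim:

```python
# 14B parameters at 16 bits (2 bytes) each, for the weights alone.
weights_gb = 14e9 * 2 / 1e9
print(weights_gb)  # 28.0 GB of weights, vs the 24 GB on a 3090/4090
```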

undefined

[deleted]

rkwasny

Turns out it doesn't matter whether you have a transformer/MLP/LSTM or whatever; as long as there are enough parameters and training epochs over a large dataset, things "just work".

nl

This isn't true - the model architecture matters a lot.

In general RNNs cannot handle long-term dependencies (i.e., long pieces of text) because the gradient vanishes. It's unclear how this solves that problem, although they do reference the "attention free transformer" paper: https://arxiv.org/abs/2105.14103

PartiallyTyped

The key component is the linear attention[1] and residual connections.

[1] https://arxiv.org/abs/2006.16236

> Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from (N2) to (N), where N is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.
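A toy causal version of that linear-attention trick, with tiny dimensions and φ(x) = elu(x) + 1 as in the paper. Associativity lets us keep running sums S = Σ φ(k_t)·v_tᵀ and z = Σ φ(k_t), so the cost grows linearly in sequence length. (Illustration only, not the RWKV implementation.)

```python
import math

def phi(x):
    """Positive feature map: elu(x) + 1, elementwise."""
    return [xi + 1 if xi > 0 else math.exp(xi) for xi in x]

def linear_attention(queries, keys, values):
    d = len(values[0])
    S = [[0.0] * d for _ in range(len(keys[0]))]  # running sum of phi(k) v^T
    z = [0.0] * len(keys[0])                      # running sum of phi(k)
    out = []
    for q, k, v in zip(queries, keys, values):    # one pass: O(N), not O(N^2)
        fk = phi(k)
        for i, fki in enumerate(fk):
            z[i] += fki
            for j in range(d):
                S[i][j] += fki * v[j]
        fq = phi(q)
        denom = sum(fqi * zi for fqi, zi in zip(fq, z)) or 1.0
        out.append([sum(fq[i] * S[i][j] for i in range(len(fq))) / denom
                    for j in range(d)])
    return out

# With identical keys, each output is the running mean of the values so far.
out = linear_attention([[1.0, 0.0]] * 3, [[0.5, 0.2]] * 3,
                       [[1.0], [2.0], [3.0]])
print([round(o[0], 2) for o in out])  # [1.0, 1.5, 2.0]
```

The accumulated (S, z) pair is exactly the recurrent state that "reveals their relationship to recurrent neural networks", as the abstract puts it.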

cztomsik

I believe it's because you train it in GPT-mode and then only use RNN-mode for inference.

BulgarianIdiot

To some degree, because we keep recreating the truly essential components the crude "Turing machine completeness" way. In time as we analyze the resulting models, we may find what patterns emerge and optimize for them. The result will be smaller, faster models that perform like larger slower ones.

haint_

From the provided example:

Q: How would I make for loop in python?

A: I can help you create an AI chat bot. It would talk to you like a human. (additional text that is not relevant to the prompt)

Is it just me, or does this not seem right?

zaptrem

This is only a 1.5B-parameter model, and this output is in line with that. GPT-3.5 is ~175B params.

cztomsik

Give it at least a few examples; ~1B networks are not good at zero-shot. Also, don't expect answers for things it wasn't trained on. The Pile is not a programming dataset.

RWKV is important because it's fast, it can be trained in parallel and it gives very good results (compared to other networks trained on the same dataset).

VadimPR

How does this compare to BLOOMZ's performance, if anyone knows?

euclaise

Assuming you're referring to the largest model - BLOOM is huge, this is not, so presumably much worse

moneywoes

Name rolls off the tongue

avmich

Yeah, when the Web was young and people told URLs to each other, pronouncing "www" (which was almost always the prefix of any web server's host name) sounded funny too.

klabb3

Not sure what you’re talking about. Eitch tee tee pee ess colon slash slash doubleview doubleview doubleview dot just rolls off the tongue so easily.

golem14

it's "dub-dub-dub", isn't it?

joshxyz

chat are-woo-kei-vee.

gpt rolls out real better lol.

undefined

[deleted]

totoglazer

This might be an interesting language model. However people care about ChatGPT entirely due to its quality, which this doesn’t demonstrate yet.

phist_mcgee

The leap in public exposure wasn't so much GPT-3 to GPT-3.5; it was attaching a clean UI to the model (with sane defaults) and allowing people to talk to it like a person.

Suddenly it became something 'real' then.

(This is purely talking about the public popularity of GPT)

gamegoblin

This is mostly correct. GPT3.5 is better, has a larger context window, etc. But it's a very incremental step above GPT3.

I had wired up GPT3 to a Twilio phone number and made something basically like ChatGPT months before ChatGPT was released -- me and my friends texted it all the time to get information, similar to how people use ChatGPT. The prompt to get decent performance is super simple. Just something like:

    The following is a transcript between a human and a helpful AI assistant.
    The AI assistant is knowledgeable about most facts of the world and provides concise answers to questions.

    Transcript:
    {splice in the last 30 messages of the conversation}

    The next thing the assistant says is:
Over time I did upgrade the prompt a bit to improve performance for specific kinds of queries, but nothing crazy.

Cost me $10-20/mo to run for the low/moderate use by me and a few friends.

Interestingly, for people who didn't know its limitations / how to break it, it was basically passing the Turing test. ChatGPT is inhumanly wordy, whereas GPT3 can actually be much more concise when prompted to do so. If, instead of prompting it that it is an AI assistant, you prompt it that it is a close friend with XYZ personality traits, it does a very good job of carrying on a light SMS conversation.
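A sketch of the setup described above: splice the recent message history into the fixed preamble before sending it to a completion API. The actual model call is left abstract, and the helper names are made up for illustration.

```python
PREAMBLE = (
    "The following is a transcript between a human and a helpful AI assistant.\n"
    "The AI assistant is knowledgeable about most facts of the world and "
    "provides concise answers to questions.\n\nTranscript:\n"
)

def build_sms_prompt(history, max_messages=30):
    """history: list of (speaker, text) pairs, oldest first."""
    recent = history[-max_messages:]  # keep only the last 30 messages
    lines = "\n".join(f"{who}: {text}" for who, text in recent)
    return f"{PREAMBLE}{lines}\n\nThe next thing the assistant says is:"

print(build_sms_prompt([("Human", "What's the tallest mountain?")]))
```

The completion the model returns for this prompt becomes the next outgoing SMS, and gets appended to the history for the following turn.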

andai

>If [...] you prompt it that it is a close friend with XYZ personality traits

A couple years ago a friend and I trained GPT-2 on our WhatsApp chat history. GPT-2 was more primitive, but it still managed to capture the gist of our personalities and interests, which was equal parts amusing and embarrassing.

We'd have it generate random chats, or ask it questions to see what simulated versions of ourselves would say.

merely-unlikely

I half remember one of Google’s many chat apps having an AI assistant a number of years ago (Allo maybe?)

junipertea

They also did reinforcement learning on top of a frozen trained model. It is considerably more than just attaching a UI, as that alone would just finish sentences rather than answer questions. https://huggingface.co/blog/rlhf

tinsmith

This is a remarkably good take that just didn't dawn on me until I read your comment. Even if ChatGPT had a lesser quality than the current iteration, the fact that they had a way for anyone to easily interact with it really was a home run, and can be for any software, really.

b33j0r

My family told me that ChatGPT came up from the pulpit AT CHURCH

Me? I made a few comments like a scared luddite when ChatGPT solved two of my outstanding engineering problems instantly.

I got better. But this is exactly right. The world in general now knows about AI and ML. It’s a pivot point.

When something scares a seasoned engineer for a minute, and anyone can now make use of this… write it down in your diary as a moment in history

TJSomething

One of the important parts of ChatGPT over plain GPT-3 is the reinforcement learning from human feedback to ensure alignment, without which it's not quite as good of a product for the public.

redox99

It's not just the UI. ChatGPT (which is further finetuned and uses RLHF) definitely produces better output than GPT3, especially without prompt engineering.

axiom92

Some evidence to confirm this:

1. Twitter thread with examples: https://twitter.com/sjwhitmore/status/1601254826947784705

2. Tweet/screenshot + Colab notebook:https://twitter.com/aman_madaan/status/1599549721030246401, https://tinyurl.com/codex-chat-gpt

The second tweet is mine.

totoglazer

No. ChatGPT’s UI is incredibly simple and basically exactly what every chat bot test REPL looks like.

The delta from GPT-3 to ChatGPT comes from the expanded context and the control over the model offered through fine-tuning. E.g. read the InstructGPT paper to see the path on the way to ChatGPT.

moffkalast

Well yes, having no context memory, being slightly worse and requiring either a monster rig to run or paying per prompt made it completely and utterly irrelevant.

Even now that it's improved and free to use its actual practical usability is marginal at best given the rate of blatantly wrong info being spewed with 105% confidence at the moment.

visarga

> blatantly wrong info being spewed with 105% confidence

There are some approaches. For example, in this paper they argue that truth has a certain logical consistency which is lacking in hallucinations and deception, so they find a latent direction that indicates truth in a frozen LLM. This actually works better than asking the model to self-evaluate via text generation, or than training with RLHF.

"Discovering Latent Knowledge in Language Models Without Supervision" https://arxiv.org/abs/2212.03827

There's also a video with the first author: "Making LLMs Say The Truth" https://www.youtube.com/watch?v=XSQ495wpWXs&t=1515s

Btw, I think this is one of the deepest discussions about LLM hallucinations and alignment I ever saw. Worth a watch, even if it is a bit long. Not every day something like this comes along.

undefined

[deleted]

beernet

[flagged]

undefined

[deleted]

leaving

[flagged]

anon291

This is a Git repo, not a published paper. Hacker News is not a published journal. It's a casual space for technically oriented people.

And you can say whatever you want on your own GitHub.


ChatRWKV, like ChatGPT but powered by the RWKV (RNN-based, open) language model - Hacker News