aubanel
It's exceptionally strong. In LMSys Chatbot Arena, the 27B version scores above Llama-3-70B, at the level of OpenAI GPT-4 and Claude-3 Sonnet!
typpo
If anyone is interested in evaling Gemma locally, this can be done pretty easily using ollama[0] and promptfoo[1] with the following config:
  prompts:
    - 'Answer this coding problem in Python: {{ask}}'
  providers:
    - ollama:chat:gemma2:9b
    - ollama:chat:llama3:8b
  tests:
    - vars:
        ask: function to find the nth fibonacci number
    - vars:
        ask: calculate pi to the nth digit
    # ...
One small thing I've always appreciated about Gemma is that it doesn't include a "Sure, I can help you" preamble. It just gets right into the code, and follows it with an explanation. The training seems to emphasize response structure and ease of comprehension. Also, best to run evals that don't rely on rote memorization of public code... so please substitute with your personal tests :)
roywiggins
In Ollama, Gemma:9b works fine, but 27b seems to be producing a lot of nonsense for me. Asking for a bit of python or JavaScript code rapidly devolves into producing code-like gobbledegook, extending for hundreds of lines.
thot_experiment
Had a chance to do some testing and it seems quite good on oneshot tasks with a small context window but as you approach context saturation it starts to go way off the rails. Maybe this is an implementation issue? I'm using Q6_K quants of both sizes in ollama. I'll report back if I figure it out.
A larger context window really helps on RAG tasks, it's frustrating that a lot of the foundational models have such small windows.
brandall10
27b is working fine for me, hosted on ollama w/ continue.dev in VSCode.
bugglebeetle
The tokenizer in llama.cpp probably needs fixing then or it has some other bug.
lhl
I'd encourage people to test for themselves (and to let the Chatbot Arena scores settle) before getting caught up in too much hype. I just did a personal eval and I found gemma-2-27b-it (tested on AI Studio) performed far worse in my testing than Llama 3 70B, especially for reasoning and basic world-understanding queries.
WiSaGaN
I also prefer to use "Coding" or "Hard Prompts (Overall)" instead of the default "Overall" in Chatbot Arena scores to determine the actual performance level of LLMs. It seems much more aligned with my vibe test in terms of reasoning. I guess "Overall" contains a lot of creative tasks, which is not what I use most in my daily work.
nacs
Same. I tried 27B and found it to be not even close to llama3-70b.
Even llama-8b did better in some of my tests than Gemma 27b.
lhl
Just saw this, might get lost in the noise, but just for posterity, apparently the Gemma 2 models were specifically RL’d to index on Chat Arena performance: https://x.com/natolambert/status/1806384821826109597
(Relevant sections of the paper highlighted.)
occamrazor
On prompts only, with answers presumably from the teacher model (Gemini).
It was not trained or RLHFd on Arena replies or user preferences.
lhl
Yes, answers were distilled from a much stronger model. On the one hand, you can argue that this is exactly what the LMSYS, WildBench, etc. datasets are for (improving performance/alignment on real-world use cases). On the other hand, it's clear that training on the questions, most of which are reused by Chatbot Arena's users (who are largely non-representative of the general population) for comparing and testing models, makes Chatbot Arena ELO less useful as a model-comparison tool and artificially elevates Gemma 2's Chatbot Arena score relative to its out-of-distribution performance.
At the end of the day, by optimizing for leaderboard scoring, it makes the leaderboard ranking less useful as a benchmark (Goodhart's law strikes again). The Gemma team obviously isn't the only one doing it, but it's important to be clear-eyed about the consequences.
screye
What are the most obvious standouts?
In my experience, smaller models tend to do well on benchmarks and fail at generalization. Phi-2 comes to mind.
moffkalast
It's multilingual. Genuinely. Compared my results with some people on reddit and the consensus is that the 27B is near perfect in a few obscure languages and likely perfect in most common ones. The 9B is not as good but it's still coherent enough to use in a pinch.
It's literally the first omni-translation tool that actually works that you can run offline at home. I'm amazed that Google mentioned absolutely nothing about this in their paper.
jug
Wow, that's very impressive and indeed a game changer. I've previously had trouble with various Scandinavian languages, but the last model I checked was Llama 2, and I kind of gave up on it. I had expected we were going to need special-purpose small models as a crutch for these uses, like SW-GPT3.
So I guess Gemma 2 is going to become Gemini 2.0 in their truly large and closed variants then? Or is it the open version of Gemini 1.5?
norealmet
[dead]
usaar333
I think this is just due to better non-English training data.
It's 15 ELO under Llama-3-70B on English hard prompts and 41 ELO under Llama-3-70B (the latter is actually statistically significant) for general English.
resource_waste
Do we believe that? I've been told Google's AI was going to be great 4 times now, and it's consistently #4 behind OpenAI, Facebook, and Claude.
aubanel
LMSys Chatbot Arena is a crowd-sourced ranking with an ELO system: users are presented with 2 hidden models, they submit their request and get both models' answers, and they vote on which one performed best, which counts as one match and updates the ELO scores. This is the closest thing that we have to ground truth for LLM evaluation, and Gemma2-27B performs extremely well in Chatbot Arena ELO.
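For intuition, here is a minimal sketch of how a single vote would move two ratings under a plain Elo update. This is illustrative only; the actual leaderboard computes ratings over the full set of votes (a Bradley-Terry style fit) rather than updating online.

  # One Arena-style vote treated as one Elo match.
  def elo_update(rating_a, rating_b, score_a, k=32.0):
      """score_a: 1.0 if model A wins, 0.0 if model B wins, 0.5 for a tie."""
      expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
      expected_b = 1.0 - expected_a
      new_a = rating_a + k * (score_a - expected_a)
      new_b = rating_b + k * ((1.0 - score_a) - expected_b)
      return new_a, new_b

  # Example: a 1200-rated model beats a 1250-rated one and gains about 18 points.
  print(elo_update(1200, 1250, score_a=1.0))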
alekandreev
Hello (again) from the Gemma team! We are quite excited to push this release out and happy to answer any questions!
Opinions are our own and not of Google DeepMind.
luke-stanley
It's fairly easy to pay OpenAI or Mistral money to use their APIs. Figuring out how Google Cloud Vertex works and how it's billed is more complicated. Azure and AWS are similarly complex to use for this. Could Google Cloud please provide an OpenAI-compatible API and service? I know it's a different department, but it'd make using your models way easier. It often feels like Google Cloud has no UX or end-user testing done on it at all (not true for aistudio.google.com - that is better than before, for sure!).
Deathmax
Gemini models on Vertex AI can be called via a preview OpenAI-compatible endpoint [1], but shoving it into existing tooling, where you don't have programmatic control over the API key and the key is expected to be long-lived, is non-trivial because GCP uses short-lived access tokens (and long-lived ones are not great security-wise). A rough sketch follows the links below.
Billing for the Gemini models on Vertex AI (the Generative Language API variant still charges by tokens) is, I would argue, simpler than every other provider's, simply because you're charged by characters/images/video-seconds/audio-seconds. You don't need to run a tokenizer (if one is even available, cough Claude 3 and Gemini), figure out the chat template to calculate the token cost per message [2], or figure out how tokens are counted for an image [3] just to get cost estimates before actually submitting the request and getting usage info back.
[1]: https://cloud.google.com/vertex-ai/generative-ai/docs/multim...
[2]: https://platform.openai.com/docs/guides/text-generation/mana...
[3]: https://platform.openai.com/docs/guides/vision/calculating-c...
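Rough sketch of calling the preview OpenAI-compatible endpoint [1] with a short-lived token. The project and location values are placeholders, and the base URL and model id format are taken from the preview docs, so treat them as assumptions that may change:

  import google.auth
  from google.auth.transport.requests import Request
  from openai import OpenAI

  PROJECT_ID = "my-project"   # placeholder
  LOCATION = "us-central1"    # placeholder

  # GCP issues short-lived access tokens (~1 hour), so they need periodic refreshing.
  creds, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
  creds.refresh(Request())

  client = OpenAI(
      base_url=f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/"
               f"projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/openapi",
      api_key=creds.token,    # the short-lived token stands in for a static API key
  )

  resp = client.chat.completions.create(
      model="google/gemini-1.5-flash-001",  # model id format per the preview docs
      messages=[{"role": "user", "content": "Hello"}],
  )
  print(resp.choices[0].message.content)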
luke-stanley
Good to know about this API preview. Hopefully the billing problem and UI maze of Vertex AI can be sorted too?
ankeshanand
If you're an individual developer and not an enterprise, just go straight to Google AI Studio or the Gemini API instead: https://aistudio.google.com/app/apikey. It's dead simple to get an API key and call it with a REST client.
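As an illustration, a minimal call with a plain REST client once you have a key. The model name and API version here are assumptions based on the current public docs:

  import os
  import requests

  api_key = os.environ["GEMINI_API_KEY"]
  url = ("https://generativelanguage.googleapis.com/v1beta/"
         f"models/gemini-1.5-flash:generateContent?key={api_key}")
  payload = {"contents": [{"parts": [{"text": "Write a haiku about open weights."}]}]}

  resp = requests.post(url, json=payload, timeout=60)
  resp.raise_for_status()
  print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])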
luke-stanley
Interesting, but when I tried it, I couldn't figure out the billing model, because it's all connected to Google projects, and there can be different billing setups for each of them.
Each thing seems to require a bunch of clicks to set up that startup LLM providers don't hassle people with. They're more likely to just let you sign in with some generic third-party OAuth, slap on Stripe billing, let you generate keys, and show you some usage stats and getting-started docs with example queries, a prompt playground, etc.
What about the Vertex models though? Are they all actually available via Google AI Studio?
lhl
Sadly, while gemma-2-27b-it is available (as a Preview model) on the AI Studio playground, it didn't show up via API on list_models() for me.
bapcon
I have to agree with all of this. I tried switching to Gemini, but the lack of clear billing/quotas, horrible documentation, and even poor implementation of status codes on failed requests have led me to stick with OpenAI.
I don't know who writes Google's documentation or does the copyediting for their console, but it is hard to adapt to. I have spent hours troubleshooting, only to find out it's because the documentation refers to the same thing by two different names. And it's 2024; I shouldn't be seeing print statements without parentheses.
logankilpatrick
We are working hard to improve this across ai.google.dev (Gemini API). Hang tight!
hnuser123456
I plan on downloading a Q5 or Q6 quant of the 27B for my 3090 once someone puts quants on HF, loading it in LM Studio, and starting the API server to call it from my scripts via the OpenAI API. Hopefully it's better at code gen than Llama 3 8B.
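For anyone following the same route, a sketch of what that call would look like. LM Studio's local server is OpenAI-compatible and defaults to port 1234; the model id below is a placeholder for whatever identifier the server lists for the loaded quant:

  from openai import OpenAI

  # LM Studio's local server ignores the API key, but the client requires one.
  client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

  resp = client.chat.completions.create(
      model="gemma-2-27b-it",  # placeholder; use the id the local server reports
      messages=[{"role": "user", "content": "Write a Python function to merge two sorted lists."}],
  )
  print(resp.choices[0].message.content)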
alekandreev
Happy to pass on any feedback to our Google Cloud friends. :)
anxman
I also hate the billing. It feels like configuring AWS more than calling APIs.
luke-stanley
Thank you!
canyon289
I also work at Google and on Gemma (so same disclaimers)
You can try 27b at www.aistudio.google.com. Send in your favorite prompts, and we hope you like the responses.
dandanua
Why is AIStudio not available in Ukraine? I have no problem using the Gemini web UI or other LLM providers from Ukraine, but this Google API constraint is strange.
jpcapdevila
Will gemma2 be available through gemma.cpp? https://github.com/google/gemma.cpp
austinvhuang
This is in the works in the dev branch (thanks pchx :)
janwas
:) Confirmed working. We've just pushed the dev branch to main.
moffkalast
The 4k sliding window context seems like a controversial choice after Mistral 7B mostly failed at showing any benefits from it. What was the rationale behind that instead of just going for full 8k or 16k?
alekandreev
This is mostly about inference speed, while maintaining long context performance.
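For readers unfamiliar with the trade-off: in a sliding-window (local) attention layer, each token only attends to the previous W tokens, which bounds the per-layer attention cost and KV cache; Gemma 2 interleaves such local layers with global-attention layers. A toy mask sketch (window size and layout here are illustrative, not the exact implementation):

  import torch

  def sliding_window_mask(seq_len, window=4096):
      """Boolean mask: True where attention is allowed (causal and within the window)."""
      i = torch.arange(seq_len).unsqueeze(1)  # query positions
      j = torch.arange(seq_len).unsqueeze(0)  # key positions
      return (j <= i) & (j > i - window)

  # Tiny example: 8 tokens, window of 4.
  print(sliding_window_mask(8, window=4).int())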
causal
Thanks for your work on this; excited to try it out!
The Google API models support 1M+ tokens, but these are just 8K. Is there a fundamental architecture difference, training set, something else?
coreypreston
No question. Thanks for thinking of 27B.
luke-stanley
Given the goal of mitigating self-proliferation risks, have you observed a decrease in the model's ability to do things like help a user set up a local LLM with local or cloud software?
How much is pre-training dataset changes, how much is tuning?
How do you think about this problem, how do you solve it?
Seems tricky to me.
alekandreev
To quote Ludovic Peran, our amazing safety lead:
The literature has identified self-proliferation as a dangerous capability of models, and details about how to define it and examples of the forms it can take have been openly discussed by GDM (https://arxiv.org/pdf/2403.13793).
Current Gemma 2 models' success rate on end-to-end challenges is zero (0 out of 10), so the capabilities to perform such tasks are currently limited.
luke-stanley
That's an interesting paper. `Install Mistral 7B on a GCP instance and use it to answer a simple question`. Some hosting providers and inference software might be easier to set up, for now. ;) But do you have to make it less capable by being careful about what it's trained on? E.g. banning certain topics (like how to use llamafile/llama.cpp, knowing which hosting providers have free trials, learning about ways to jailbreak web apps, free inference providers, etc.)?
Or does the model have to later be finetuned, to not be good at certain tasks?
Or are we not at that stage yet?
Is something like tree-of-thought used, to get the best of the models for these tasks?
moffkalast
Turns out LLM alignment is super easy, barely an inconvenience.
luke-stanley
[flagged]
WhitneyLand
The paper on one hand suggests Gemma is on the same Pareto curve as Llama 3, while on the other hand it seems to suggest Gemma has exceeded Llama 3's efficiency.
Is this a contradiction or am I misunderstanding something?
Btw, overall very impressive work, great job.
alekandreev
I think it makes sense to compare models trained with the same recipe on token count - usually more tokens will give you a better model.
However, I wouldn't draw conclusions about different model families, like Llama and Gemma, based on their token count alone. There are many other variables at play - the quality of those tokens, number of epochs, model architecture, hyperparameters, distillation, etc. that will have an influence on training efficiency.
alecco
Shouldn't this (2.6B/9B) be compared with Microsoft's Phi-3 mini (3.8B) instead of Mistral and Llama-3?
(table 13 on page 7) vs https://arxiv.org/pdf/2404.14219 (page 6; quite a bit better in general)
The report on knowledge distillation training is interesting, though.
refulgentis
Picking up from there: The games in this paper and model are annoying.
The 2.6B would get stomped by Phi-3, so there's no comparison.
Fair enough: 2.6B vs. 3.8B is a fairly substantial size difference that's hard to intuit when it's written as 2.6 vs. 3.8 rather than 2,600,000,000 vs. 3,800,000,000.
But then we get what I'm going to call "parameter creep": Mistral 7B vs. Llama 8B vs. Gemma 9B. I worried after Llama 3 went to 8B that we'd start seeing games with parameters, but I thought I was being silly.
kouteiheika
There was no parameter creep with Llama. Llama 8B is actually a ~7B model comparable to Mistral 7B if you strip away multilingual embeddings and match what Mistral 7B supports.
imjonse
In the Llama 3 case I think the increase in parameters is mostly due to the input embedding and output logits layers, reflecting the vocabulary size increase.
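A quick back-of-the-envelope check; the layer sizes are taken from the public model configs, so treat the numbers as approximate:

  # Both models use hidden size 4096 and untied input embedding / output head.
  hidden = 4096
  llama3_vocab, mistral_vocab = 128256, 32000

  llama3_embed = 2 * llama3_vocab * hidden     # ~1.05B parameters
  mistral_embed = 2 * mistral_vocab * hidden   # ~0.26B parameters
  print((llama3_embed - mistral_embed) / 1e9)  # ~0.79B, close to the 8.03B vs 7.24B gap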
alecco
Phi-3 3.8B seems to perform much better on almost every test than Gemma 2 9B. It is comparable.
refulgentis
I agree.
The implication in my post is "if the reason was size, it's invalidated later"
philipkglass
It's such a wide range of model sizes that I could see why they compare with Llama 3 70b as well as Llama 3 8b (tables 12, 13). I agree that the Phi-3 series is a stronger competitor for knowledge extraction/summarizing and would make a good comparison. My current favorite for such tasks, on a VRAM-limited workstation, is Phi-3 medium (phi3:14b-instruct).
rsolva
The 9B and 27B versions are available for Ollama: https://ollama.com/library/gemma2
Workaccount2
The 27B model is also available in AI studio
https://aistudio.google.com/app/prompts/new_chat?model=gemma...
So far it seems pretty strong for its size.
chown
This is a great release! If you are looking to try it locally with a great interface, I am working on an app [1] and I just pushed an update to support Gemma2.
tr3ntg
Wow, Msty looks really cool. I've bookmarked it to look into more later as a replacement for how I use a locally-hosted instance of LibreChat. It'd be a huge improvement to use local models rather than remote ones for many of my queries.
That said, do you have a reason for keeping msty closed source rather than open? I read your FAQ for "why should I trust msty" and it feels lacking.
> We are a small team of developers who are passionate about AI and privacy. We have worked on projects before that have been used by thousands of people such as this (I've never heard of Cleavr). There are real faces (real faces = Twitter account link?) behind the product. And come chat with us on our Discord server to know us better.
This is much, much better than having no attribution, but it's miles away from being able to verify trust by reading the code. Would love to hear what your reasons against this are.
Still thinking about trying it out, anyway...
bayesianbot
Looks cool even though closed source makes me wary.
Trying to save an Anthropic API key on Arch Linux doesn't do anything, and there's a message saying "If you're experiencing problems saving API keys, especially on Linux, contact Discord". If it's such a common problem, maybe you should have a link with possible fixes? Adding another Discord server and searching for answers to a question that has clearly been asked often enough feels like quite a hurdle for testing it out.
aashu_dwivedi
What does closed source mean in this context? The weights are open and the model architecture has to be open for people to use it for inference.
onel
I think he was referring to Msty which is closed-source
jmcv
Just downloaded, looks great. Love the synced split view.
But I'm not seeing Gemma 2 or Claude 3.5 Sonnet even though it's announced on your landing page.
Alifatisk
Any plans on adding this to Chocolatey for Windows download?
renewiltord
What the heck, this looks cool! How have I missed it. Gonna give it a whirl.
dongobread
The knowledge distillation is very interesting but generating trillions of outputs from a large teacher model seems insanely expensive. Is this really more cost efficient than just using that compute instead for training your model with more data/more epochs?
DebtDeflation
I'm also curious. It seems like 6 months ago everyone was afraid of "model collapse" but now synthetic training generation and teacher models are all the rage. Have we solved the problem of model collapse?
astrange
Model collapse was basically a coping idea made up by artists who were hoping AI image generators would all magically destroy themselves at some point; I don't think it was ever considered likely to happen.
It does seem to be true that clean data works better than low quality data.
groby_b
You're confusing it with data poisoning.
Model collapse itself is(was?) a fairly serious research topic: https://arxiv.org/abs/2305.17493
We've by now reached a "probably not inevitable" - https://arxiv.org/abs/2404.01413 argues there's a finite upper bound to error - but I'd also point out that that paper assumes training data cardinality increases with the number of training generations and is strictly accumulative.
To a first order, that means you better have a pre-2022 dataset to get started, and have archived it well.
But it's probably fair to say current SOTA is still more or less "it's neither impossible nor inevitable".
Workaccount2
Pay attention, because you only get to watch this once: humans learning, in real time, that they are nothing special.
skybrian
Historically, similar things happened with heliocentrism and evolution, but I guess we weren't there to see it.
agi_is_coming
The distillation is done on-policy like RLHF -- the student model is generating the sequences and teacher is providing feedback in terms of logits.
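A rough sketch of that loop as described: the student samples its own continuations, and the loss pulls its per-token distribution toward the teacher's logits on those same sequences. This assumes Hugging Face style model objects; the hyperparameters, masking, and sampling details are placeholders, not the actual Gemma recipe:

  import torch
  import torch.nn.functional as F

  def onpolicy_distill_step(student, teacher, prompt_ids, optimizer,
                            max_new_tokens=64, temperature=1.0):
      # 1. On-policy rollout: the student generates its own continuation.
      with torch.no_grad():
          seq = student.generate(prompt_ids, max_new_tokens=max_new_tokens, do_sample=True)

      # 2. Teacher and student both score the generated sequence.
      with torch.no_grad():
          teacher_logits = teacher(seq).logits
      student_logits = student(seq).logits

      # 3. KL(teacher || student) over next-token distributions, averaged over positions.
      #    (Prompt-token masking and the usual one-position shift are omitted for brevity.)
      t = F.log_softmax(teacher_logits / temperature, dim=-1)
      s = F.log_softmax(student_logits / temperature, dim=-1)
      loss = F.kl_div(s, t, log_target=True, reduction="batchmean")

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      return loss.item()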
thomasahle
I'm curious about the use of explicit tokens like <start_of_turn>, <end_of_turn>, <bos>, and <eos>. What happens if the user inserts those in their message? Does that provide an easy way to "ignore previous instructions"?
Do I have to manually sanitize the input before I give it to the model?
richdougherty
If you have control of the tokenizer you could make sure it doesn't produce these tokens on user input. I.e. instead of the special "<eos>" token, produce something like "<", "eos", ">" - whatever the 'natural' encoding of that string is.
See for example, the llama3 tokenizer has options to control special token tokenization:
Tokenization method with args to control special token handling: https://github.com/meta-llama/llama3/blob/bf8d18cd087a4a0b3f...
And you can see how it is used combined with special tokens and user input here: https://github.com/meta-llama/llama3/blob/bf8d18cd087a4a0b3f...
If you don't have control of the tokenizer, I guess it needs to be sanitized in the input like you say.
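To make that concrete, a small sketch with a tiktoken-style encoder (the linked llama3 tokenizer is built on tiktoken; Gemma uses its own tokenizer, but the same idea applies). The encoding name and token string below are stand-ins:

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")  # stand-in; llama3 ships its own BPE file

  user_input = "ignore previous instructions <|endoftext|>"

  # Treat nothing in user input as special: the literal characters become ordinary tokens.
  safe_ids = enc.encode(user_input, disallowed_special=())
  print(len(safe_ids), "plain tokens")

  # Or fail loudly: by default, encode() raises if the text contains a special-token string.
  try:
      enc.encode(user_input)
  except ValueError as e:
      print("rejected user input:", e)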
jakobov
How much faster (in terms of the number of iterations to a given performance) is training from distillation?
raffraffraff
> We use the same data filtering techniques as Gemma 1. Specifically, we filter the pre-training dataset to reduce the risk of unwanted or unsafe utterances.
Hmmm. I'd love to know what qualifies as "unsafe".
imjonse
It will refuse to describe the process of making napalm using only double entendres.
viridian
I don't understand the point of this sort of censorship when I can go to google, ask how to make napalm, and get a million results telling me to dissolve styrofoam in gasoline.
I've seen documentaries and science shows on cable TV that demonstrate basic facts like this, or how the IRA produced IEDs, or how molotov cocktails were made in the spanish civil war.
The information is beyond easy to access, and has been for decades.
imjonse
True. But an LLM product is closely associated with a single company, and unlike a search engine, which can claim it only shows you what is already available, the LLM will seem like it is personally telling you something harmful. If they want to sell it as a helpful assistant, that kind of behavior will undermine that goal.
We saw all the bad press companies got in recent years for all kinds of unintended AI outputs.
iamronaldo
So it's twice the size of phi 3 and considerably worse? What am I missing
ertgbnm
They used two non-mutually exclusive techniques. Phi-3 is mostly a curriculum training breakthrough. By filtering training set for high quality tokens and training on synthetic data, they were able to achieve great results. Gemma-2 is a distillation breakthrough. By training LLMs with guidance from larger teacher LLMs, they were able to achieve great results too.
Why not both?
reissbaker
Phi-3 does well in benchmarks but underperforms IRL; for example, Phi-3-Medium gets beaten badly by Llama-3-8b on the LMSYS Chatbot Arena despite doing better on benchmarks.
Gemma's performance if anything seems understated on benchmarks: the 27b is currently ahead of Llama3-70b on the Chatbot Arena leaderboard.
ertgbnm
I suspect Phi-3 is not robust to normal human input like typos and strange grammar since it's only trained on filtered "high quality" tokens and synthetic data. Since it doesn't need to waste a ton of parameters learning how to error correct input, it's much smarter on well curated benchmarks compared to its weight class. However, it can't operate out of distribution at all.
reissbaker
Personally vibe checking Phi-3-Medium is worse in my experience, no matter how well you spell — it just isn't good at all compared to Llama3-8b, despite being significantly larger in param count. I suspect the "high quality tokens" were "high quality" in the sense that they resembled tokens one might encounter in benchmarks, and not "high quality" in the sense of representing human-like input/output.
ferretj
Another take on this: Phi-3 Small has an 1100 ELO on LMSYS (ranked #52), while the confidence interval for Gemma 2 9B is [1170, 1200] ELO (ranked between #15 and #25).
floridianfisher
Why not try it here and make your comparisons that way? https://aistudio.google.com/app/prompts/new_chat?model=gemma...
pona-a
One compelling reason not to would be a region block... [0]
azeirah
Have you tried Phi 3? It's smart which makes it perform well on benchmarks, but it's not great at conversation or as a chatbot.
I imagine Gemma 2 is a better general-purpose assistant for most people, whereas Phi 3 is a solid small LLM (SLM?) for more specific use-cases like summarization, RAG, learning about math and stuff.
m00x
Worse in some aspects, better in others.
Small models are never going to be generalists, so having several small models allows you to pick the one that best fits your needs.
k__
When would you use which?
m00x
Whichever model works better for your use. It's hard to know without testing it at the moment.
I've found Gemini to be better at some use-cases, and GPT-4 better at others for my specific taste and use-case. You can kind of go by the benchmark scores to have an idea if it's good at logic, creativity, etc.
Aerbil313
Obviously another small model would be specialized in determining that.
mixtureoftakes
Good release, but the annoying part is that they're very unclear about which types of models they are comparing. They provide benchmark comparisons for the base models only and Arena comparisons for instruct only? Was that intentional? Why would you ever do that? This makes things unnecessarily complicated IMO, and the only payoff is a short-term win for Google on paper.
Guess I'll just fully test it for my own tasks to know for sure
QuesnayJr
There are two new chatbots on Chatbot Arena, called "late-june-chatbot" and "im-just-another-late-june-chatbot". Both of them report that they are Gemma if you ask. I'm assuming it's these two models, but AFAIK there has been no official announcement.
suryabhupa
The announcements are live on Twitter! See this for example: https://x.com/suryabhupa/status/1806342617191379167