Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

chisleu

Just ordered a $12k mac studio w/ 512GB of integrated RAM.

Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with Safari.

LM Studio is newish, and it's not a perfect interface yet, but it's fantastic at what it does, which is bringing local LLMs to the masses without them having to know much.

There is another project that people should be aware of: https://github.com/exo-explore/exo

Exo is this radically cool tool that automatically clusters all hosts on your network running Exo and uses their combined GPUs for increased throughput.

As in HPC environments, you're going to need ultra-fast interconnects, but it's all just IP-based.

zackify

I love LM Studio but I'd never waste $12k like that. The memory bandwidth is too low, trust me.

Get the RTX Pro 6000 for $8.5k with double the bandwidth. It will be way better.

tymscar

Why would they pay 2/3 of the price for something with 1/5 of the RAM?

The whole point of spending that much money, for them, is to run massive models, like the full R1, which the Pro 6000 can't.

zackify

Because waiting forever for initial prompt processing, with a realistic number of MCP tools enabled on a prompt, is going to suck without the most bandwidth possible.

And you are never going to sit around waiting for anything larger than the 96+ GB that the RTX Pro has.

If you're using it for background tasks and not coding, it's a different story.

marci

You can't run DeepSeek-V3/R1 on the RTX Pro 6000, not to mention the upcoming 1M-context Qwen models or the current Qwen3-235B.

112233

I can run the full DeepSeek R1 on an M1 Max with 64 GB of RAM: around 0.5 t/s with a small quant. A Q4 quant of Maverick (253 GB) runs at 2.3 t/s on it (no GPU offload).

Practically, a last-gen or even ES/QS EPYC or Xeon (with AMX), enough RAM to fill all 8 or 12 channels, plus fast storage (4 Gen5 NVMe drives are almost 60 GB/s) looks, on paper at least, like the cheapest way to run these huge MoE models at hobbyist speeds.
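
The bandwidth argument running through this subthread can be put in rough numbers: at decode time, each generated token has to stream the model's active weights through memory once, so peak tokens/sec is roughly bandwidth divided by bytes per token. A back-of-the-envelope sketch (the bandwidth figures and the MoE active-parameter count are approximate, and real throughput lands well below these ceilings):

```python
# Back-of-the-envelope decode speed: every generated token streams the
# model's *active* weights through memory once, so the ceiling is
# tokens/sec ~ memory bandwidth / bytes per token.
def est_tokens_per_sec(active_params_b: float, bits_per_weight: float,
                       mem_bandwidth_gbs: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gbs * 1e9 / bytes_per_token

# DeepSeek R1 is MoE: ~37B active parameters out of 671B total.
# Approximate figures: M3 Ultra ~819 GB/s, RTX Pro 6000 ~1792 GB/s,
# both at Q4 (~4 bits/weight).
print(round(est_tokens_per_sec(37, 4, 819), 1))   # Mac Studio ballpark
print(round(est_tokens_per_sec(37, 4, 1792), 1))  # RTX Pro ballpark
```

This is also why MoE models are tractable on high-memory Macs at all: only the active experts are read per token. It ignores prompt processing, which is compute-bound and is where a Mac falls furthest behind a discrete GPU.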

smcleod

The RTX is nice, but it's memory-limited and requires a full desktop machine to run it in. I'd take slower inference (as long as it's not less than 15 tok/s) for more memory any day!

diggan

I'd love to see more very-large-memory Mac Studio benchmarks for prompt processing and inference. The few benchmarks I've seen either failed to take prompt processing into account, didn't share the exact weights and setup used, or showed really abysmal performance.

t1amat

(Replying to both siblings questioning this)

If the primary use case is input heavy, which is true of agentic tools, there’s a world where partial GPU offload with many channels of DDR5 system RAM leads to an overall better experience. A good GPU will process input many times faster, and with good RAM you might end up with decent output speed still. Seems like that would come in close to $12k?

And there would be no competition for models that do fit entirely inside that VRAM, for example Qwen3 32B.

storus

The RTX Pro 6000 can't do DeepSeek R1 671B Q4; you'd need 5-6 of them, which makes it way more expensive. Moreover, a Mac Studio will do it at 150W whereas the Pro 6000 would start at 1500W.

diggan

> Moreover, MacStudio will do it at 150W whereas Pro 6000 would start at 1500W.

No, Pro 6000 pulls max 600W, not sure where you get 1500W from, that's more than double the specification.

Besides, what is the token/second or second/token, and prompt processing speed for running DeepSeek R1 671B on a Mac Studio with Q4? Curious about those numbers, because I have a feeling they're very far off each other.

chisleu

Only on HN can buying a $12k badass computer be a waste of money

dchest

I'm using it on MacBook Air M1 / 8 GB RAM with Qwen3-4B to generate summaries and tags for my vibe-coded Bloomberg Terminal-style RSS reader :-) It works fine (the laptop gets hot and slow, but fine).

Probably should just use llama.cpp server/ollama and not waste a gig of memory on Electron, but I like GUIs.

minimaxir

8 GB of RAM for local LLMs is iffy in general: an 8-bit quantized Qwen3-4B is 4.2 GB on disk and likely more in memory. 16 GB is usually the minimum to run decent models without resorting to heavy quantization.

dchest

It's 4-bit quantized (Q4_K_M, 2.5 GB) and still works well for this task. It's amazing. I've been running various small models on this 8 GB Air since the first Llama and GPT-J, and they've improved so much!

macOS virtual memory does a good job of swapping things in and out of the SSD.

imranq

I'd love to host my own LLMs, but I keep getting held back by the quality and affordability of cloud LLMs. Why go local unless there's private data involved?

diggan

There are some use cases where I don't care a lot about the data being private (although that's a plus), but I don't want to pay XXX€ to classify some data, and I particularly don't want to worry about paying that again if I want to redo it with some changes.

Using local LLMs for this, I don't worry about the price at all; I can leave it doing three tries per "task" without tripling the cost.

It's true that there's an upfront cost, but it's way easier to get over that hump than over on-demand/per-token costs, at least for me.

PeterStuer

Same. For 'sovereignty' reasons I will eventually move to local processing, but for now, in development/prototyping, the gap with hosted LLMs seems too wide.

mycall

Offline is another use case.

seanmcdirmid

Nothing like playing around with LLMs on an airplane without an internet connection.

noman-land

I love LM Studio. It's a great tool. I'm waiting for another generation of Macbook Pros to do as you did :).

incognito124

> I'm going to download it with Safari

Oof you were NOT joking

noman-land

Safari to download LM Studio. LM Studio to download models. Models to download Firefox.

teaearlgraycold

The modern ninite

whatevsmate

I did this a month ago and don't regret it one bit. I had a long laundry list of ML "stuff" I wanted to play with or questions to answer. There's no world in which I'm paying by the request, or token, or whatever, for hacking on fun projects. Keeping an eye on the meter is the opposite of having fun and I have absolutely nowhere I can put a loud, hot GPU (that probably has "gamer" lighting no less) in my fam's small apartment.

chisleu

Right on. I also have a laundry list of ML things I want to do starting with fine tuning models.

I don't mind paying for models to do things like code. I like to move really fast when I'm coding. But for other things, I just didn't want to spend a week or two getting up to speed on the hardware needed to build a GPU system. You can just order a big GPU box, but it's going to cost you astronomically right now. Building a system with 4-5 PCIe 5.0 x16 slots, enough power, enough PCIe lanes... it's a lot to learn. You can't go on PCPartPicker and just hunt for a motherboard with 6 double-width slots.

This is a machine to let me do some things with local models. My first goal is to run some quantized version of the new V3 model and try to use it for coding tasks.

I expect it will be slow for sure, but I just want to know what it's capable of.

datpuz

I genuinely cannot wrap my head around spending this much money on hardware that is dramatically inferior to hardware that costs half the price. macOS is not even great anymore; they stopped improving the UX like a decade ago.

chisleu

How can you say something so brave, and so wrong?

storus

If the rumors about splitting the CPU/GPU in new Macs are true, your Mac Studio will be the last one capable of running DeepSeek R1 671B Q4. It looks like Apple had an accidental winner that will go away with the end of unified RAM.

phren0logy

I have not heard this rumor. Source?

prophesi

I believe they're talking about the rumors by an Apple supply chain analyst, Ming-Chi Kuo.

https://www.techspot.com/news/106159-apple-m5-silicon-rumore...

mkagenius

On M1/M2/M3 Mac, you can use Apple Containers to automate[1] the execution of the generated code.

I have one running locally with this config:

    {
      "mcpServers": {
        "coderunner": {
          "url": "http://coderunner.local:8222/sse"
        }
      }
    }

1. CodeRunner: https://github.com/BandarLabs/coderunner (I am one of the authors)
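
For the curious, what flows over a URL like that is JSON-RPC 2.0, which is the framing the MCP spec uses. A tool invocation sent by the host looks roughly like this (the tool name and arguments below are made up for illustration):

```python
import json

# Shape of an MCP tool invocation: JSON-RPC 2.0 with the "tools/call"
# method. The tool name "run_python" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_python",
        "arguments": {"code": "print(2 + 2)"},
    },
}
print(json.dumps(request))
```

The server answers with a matching JSON-RPC response whose result carries the tool's output back to the model.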

minimaxir

LM Studio has quickly become the best way to run local LLMs on an Apple Silicon Mac: no offense to vllm/ollama and other terminal-based approaches, but LLMs have many levers for tweaking output and sometimes you need a UI to manage it. Now that LM Studio supports MLX models, it's one of the most efficient too.

I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.

zackify

Ollama doesn’t even have a way to customize the context size per model and persist it. LM studio does :)

Anaphylaxis

This isn't true. You can `ollama run {model}`, `/set parameter num_ctx {ctx}` and then `/save`. Recommended to `/save {model}:{ctx}` to persist on model update

truemotive

This can be done with custom Modelfiles as well, I was pretty bent when I found out that 2048 was the default context length.

https://ollama.readthedocs.io/en/modelfile/
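
The Modelfile route looks roughly like this (the base model and context size here are illustrative):

```
FROM qwen3:8b
PARAMETER num_ctx 32768
```

Then `ollama create qwen3-32k -f Modelfile` bakes the setting into a new model tag, so clients get the larger context without setting it per request.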

zackify

As of two weeks back, if I did this it would reset the moment Cline made an API call, but LM Studio would work correctly. I'll have to try again. I even confirmed Cline was not overriding num_ctx.

pzo

I just wish they'd do some facelifting of the UI. Right now it's too colorful for me, with many different shades of similar colors. I wish they'd copy a color palette from Google AI Studio, or from Trae or PyCharm.

chisleu

> I'm not bullish on MCP

You gotta help me out. What do you see holding it back?

minimaxir

tl;dr: the current hype around it is a solution looking for a problem, and at a high level it's just a rebrand of the Tools paradigm.

mhast

It's "Tools as a service", so it's really trying to make tool calling easier to use.

nix0n

LM Studio is quite good on Windows with Nvidia RTX also.

boredemployee

Care to elaborate? I have an RTX 4070 with 12 GB VRAM + 64 GB RAM; I wonder what models I can run with it. Anything useful?

Eupolemos

If you go to huggingface.co, you can tell it what hardware you have, and when you go to a model it'll show you which variations of that model are likely to run well.

So if you go to this[0] random model, on the right there's a list of quantizations by bit width, and the ones you can run will be shown in green.

[0] https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruc...

nix0n

LM Studio's model search is pretty good at showing which models will fit in your VRAM.

For my 16 GB of VRAM, those models don't include anything that's good at coding, even when I provide the API documents via PDF upload (another thing LM Studio makes easy).

So, not really, but LM Studio at least makes it easier to find that out.

sixhobbits

MCP terminology is already super confusing, but this seems to just introduce "MCP Host" randomly in a way that makes no sense to me at all.

> "MCP Host": applications (like LM Studio or Claude Desktop) that can connect to MCP servers, and make their resources available to models.

I think everyone else is calling this an "MCP Client", so I'm not sure why they would want to call themselves a host; it makes it sound like they are hosting MCP servers (definitely something people are doing, even though the server is often run on the same machine as the client), when in fact they are just a client? Or am I confused?

guywhocodes

MCP Host is terminology from the spec. It's the software that makes LLM calls, builds prompts, interprets tool-call requests and performs them, etc.

sixhobbits

So it is; I stand corrected. I googled "mcp host" and the LM Studio link was the first result.

There's some more discussion of the confusion here: https://github.com/modelcontextprotocol/modelcontextprotocol... where they acknowledge that most people call it a client and that that's OK unless the distinction is important.

I think host is a bad term for it though, as it makes more intuitive sense for the host to host the server and the client to connect to it, especially for remote MCP servers, which are probably going to become the default way of using them.

kreetx

I'm with you on the confusion; it makes no sense at all to call it a host. An MCP host should host the MCP server (yes, I know, that's yet another separate term).

The MCP standard seems a mess. E.g., take this paragraph from here[1]:

> In the Streamable HTTP transport, the server operates as an independent process that can handle multiple client connections.

Yes, obviously, that is what servers do. Also, what is "Streamable HTTP"? Comet, HTTP/2, or even WebSockets? SSE could be a candidate, but it isn't, as the spec says "Streamable HTTP" replaces SSE.

> This transport uses HTTP POST and GET requests.

Guys, POST and GET are verbs of the HTTP protocol; TCP is the transport. I guess they could say that they use the HTTP protocol, which only uses the POST and GET verbs (if that is the case).

> Server can optionally make use of Server-Sent Events (SSE) to stream multiple server messages.

This would make sense if there weren't the note "This replaces the HTTP+SSE transport" right below the title.

> This permits basic MCP servers, as well as more feature-rich servers supporting streaming and server-to-client notifications and requests.

Again, how is streaming implemented (what is "Streamable HTTP")? Also, "server-to-client .. requests"? SSE is unidirectional, so those requests happen over secondary HTTP requests?
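
For what it's worth, the SSE side is simple on the wire: a long-lived HTTP response whose body is blocks of `field: value` lines separated by blank lines, with `data:` lines carrying the payload. A minimal parser sketch of that framing (per the HTML SSE event-stream format, not anything MCP-specific):

```python
# Minimal parser for Server-Sent Events framing: events are blocks of
# "field: value" lines separated by a blank line; multiple "data:" lines
# in one block are joined with newlines per the spec.
def parse_sse(stream: str):
    events = []
    for block in stream.split("\n\n"):
        data = [line[5:].lstrip() for line in block.split("\n")
                if line.startswith("data:")]
        if data:
            events.append("\n".join(data))
    return events

raw = 'data: {"jsonrpc": "2.0"}\n\ndata: hello\ndata: world\n\n'
print(parse_sse(raw))  # ['{"jsonrpc": "2.0"}', 'hello\nworld']
```

A real client would read incrementally and also honor `event:`, `id:`, and `retry:` fields; this just shows why SSE alone can't carry client-to-server requests.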

--

And then the 2.0.1 Security Warning seems like a blob of words on security, with no reference to, say, same-origin. Also, "for local servers bind to localhost and then implement proper authentication": are both of those ever required together? Is it even worth saying that servers should implement proper authentication?

Anyway, reading the entire documentation, one might be able to put together a charitable version of the MCP puzzle that actually makes sense. But it does seem it wasn't written by engineers, in which case I don't understand why, or for whom, it was written.

[1] https://modelcontextprotocol.io/specification/draft/basic/tr...

qntty

It's confusing but you just have to read the official docs

https://modelcontextprotocol.io/specification/2025-03-26/arc...

politelemon

The initial experience with LM Studio and MCP doesn't seem to be great; I think their docs could use a happy-path demo for newcomers.

Upon installing, the first model offered is google/gemma-3-12b, which in fairness is pretty decent compared to others.

It's not obvious how to show the right sidebar they're talking about: it's the flask icon, which turns into a collapse icon when you click it.

I set MCP up with Playwright and asked it to read the top headline from HN; it got stuck in an infinite loop of navigating to Hacker News but doing nothing with the output.

I wanted to try it out with a few other models, but figuring out how to download new models isn't obvious either; it turned out to be the search icon. Anyway, other models didn't fare much better; some outright ignored the tools despite being listed as capable of 'tool use'.

t1amat

Gemma3 models can follow instructions but were not trained to call tools, which is the backbone of MCP support. You would likely have a better experience with models from the Qwen3 family.

cchance

That latter issue isn't an LM Studio issue... it's a model issue.

Thews

Others mentioned Qwen3, which works fine with HN stories for me, but the comments still trip it up; after a while it starts thinking the comments are part of the original question.

I also tried the recent DeepSeek 8B distill, but it was much worse at tool calling than Qwen3 8B.

xyc

Great to see more local AI tools supporting MCP! Recently I've also added MCP support to recurse.chat. When running locally (llama.cpp and Ollama), it still needs to catch up in tool-calling capabilities (for example, tool-call accuracy and parallel tool calls) compared to the well-known providers, but it's starting to get pretty usable.

rshemet

hey! we're building Cactus (https://github.com/cactus-compute), effectively Ollama for smartphones.

I'd love to learn more about your MCP implementation. Wanna chat?

visiondude

LM Studio works surprisingly well on an M3 Ultra with 64 GB and 27B models.

Nice to have a local option, especially for some prompts.

patates

What models are you using on LM Studio for what task and with how much memory?

I have a 48 GB MacBook Pro, and Gemma3 (one of the abliterated ones) fits my non-code use case perfectly (generating crime stories where the reader tries to guess the killer).

For code, I still call Google to use Gemini.

robbru

I've been using the Google Gemma QAT models in 4B, 12B, and 27B with LM Studio on my M1 Max. https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat...

t1amat

I would recommend Qwen3 30B A3B for you. The MLX 4bit DWQ quants are fantastic.

redman25

Qwen is great but for creative writing I think Gemma is a good choice. It has better EQ than Qwen IMO.

api

I wish LM Studio had a pure daemon mode. It's better than Ollama in a lot of ways, but I'd rather be able to use BoltAI as the UI, as well as use it from Zed, VS Code, and aider.

What I like about Ollama is that it provides a self-hosted AI provider that can be used by a variety of things. LM Studio has that too, but you have to have the whole big chonky Electron UI running. Its UI is powerful but a lot less nice than, e.g., BoltAI for casual use.

rhet0rica

Oh, that horrible Electron UI. Under Windows it pegs a core on my CPU at all times!

If you're just working as a single user via the OpenAI protocol, you might want to consider koboldcpp. It bundles a GUI launcher, then starts in text-only mode. You can also tell it to just run a saved configuration, bypassing the GUI; I've successfully run it as a system service on Windows using nssm.

https://github.com/LostRuins/koboldcpp/releases

Though there are a lot of roleplay-centric gimmicks in its feature set, its context-shifting feature is singular. It caches the intermediate state used by your last query and extends it to build the next one. As a result you save on generation time with large contexts, and any conversation that has been pushed out of the context window still indirectly influences the current exchange.

diggan

> Oh, that horrible Electron UI. Under Windows it pegs a core on my CPU at all times!

Worse, I'd say, considering what people use LM Studio for, is the VRAM it occupies even when the UI and everything is idle. Somehow it's using 500 MB of VRAM while doing nothing, while Firefox with ~60 active tabs is using 480 MB. gnome-shell itself also sits around 450 MB and is responsible for quite a bit more than LM Studio.

Still, LM Studio is probably the best all-in-one GUI around for local LLM usage, unless you go the terminal route.

SparkyMcUnicorn

There's a "headless" checkbox in settings->developer

diggan

Still, you need to install and run the AppImage at least once to enable the "lms" CLI, which can be used from then on. It would be nice to have a completely GUI-less installation/use method too.

t1amat

The UI is the product. If you just want the engine, use mlx-omni-server (for MLX) or llama-swap (for GGUF) and huggingface-cli (for model downloads).

b0dhimind

I wonder how LM Studio and AnythingLLM will compare, especially in the coming months... I like AnythingLLM's workflow editor. I'd like something to grow into for my doc-heavy job, and I don't want to be installing and trying both.

jtreminio

I’ve been wanting to try LM Studio but I can’t figure out how to use it over local network. My desktop in the living room has the beefy GPU, but I want to use LM Studio from my laptop in bed.

Any suggestions?

numpad0

  [>_] -> [.* Settings] -> Serve on local network ( o)
Any OpenAI-compatible client app should work; use the IP address of the host machine as the API server address. The API key can be bogus or blank.
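
As a concrete sketch, here's what a hand-rolled request to that endpoint could look like. The host IP is a placeholder, port 1234 is LM Studio's default server port, and the model name is just whatever the server has loaded (all assumptions to adjust for your setup):

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions API.
# Host IP, default port 1234, and model name are placeholders.
url = "http://192.168.1.50:1234/v1/chat/completions"
payload = {
    "model": "qwen3-8b",  # whichever model the server has loaded
    "messages": [{"role": "user", "content": "Hello from my laptop"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# On your network, uncomment to send the request and print the reply:
# resp = urllib.request.urlopen(req)
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The same base URL works for GUI clients too; anything that lets you override the OpenAI endpoint can point at the desktop machine.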

skygazer

Use an openai compatible API client on your laptop, and LM Studio on your server, and point the client to your server. LM Server can serve an LLM on a desired port using the openai style chat completion API. You can also install openwebui on your server and connect to it via a web browser, and configure it to use the LM Studio connection for its LLM.

MCP in LM Studio - Hacker News