
MrArthegor

A good technical project, but honestly useless in like 90% of scenarios.

You want to use an NVidia GPU for LLMs? Just buy a basic secondhand PC (the GPU is the primary cost anyway). You want a good amount of VRAM? Buy a Mac.

With this proposed solution you have a half-baked system: the GPU is limited by the Thunderbolt port and you don't have access to all of NVidia's tools and libraries, while on the other hand you have a system that lacks the integration of native solutions like MLX, plus a risk of breakage in future macOS updates.

afavour

Chicken/egg. NVidia tooling is surely lacking in part because the hardware wasn't usable on macOS until now. Now that it's usable, that might change.

frollogaston

Nvidia GPUs were usable on Intel Macs, but compatibility got worse over time, and Apple stopped making a Mac Pro with regular PCIe slots in 2013. People then got hopeful about eGPUs, but they have their own caveats on top of macOS only fully working with AMD cards. So I've gotten numb to any news about Mac + GPU. The answer was always to just get a non-Apple PC with PCIe slots instead of giving yourself hoops to jump through.

zoky

The 2019 Intel Mac Pro had PCIe slots. The Apple Silicon Mac Pro still has them as well, but they’re pretty much useless.

bigyabai

Nvidia tooling like CUDA has worked on AArch64 UNIX-certified OSes since June of 2020: https://download.nvidia.com/XFree86/Linux-aarch64/

The software stack has been ready for Apple Silicon for more than a half decade.

fg137

Until there is official Mac support coming from nvidia, I don't think anything will happen.

> the hardware wasn't usable on macOS

This eGPU thing is from a third party, if I understand correctly. I don't see why nvidia would get excited about that. If they cared about the platform, they would have released something already.

2III7

The eGPU "thing" should work on anything that supports Thunderbolt, as it has native support for PCIe.

fakebizprez

Wrong.

If a model can run on a 512GB M3 Ultra via MLX or CUDA, but simultaneously benefit from the memory bandwidth of something like an RTX 6000 Pro, that would save my company hundreds of thousands of dollars. That's $20,000 for roughly 600GB of VRAM, and enough token generation speed to fulfill the needs of any enterprise that's not a hyperscaler or a neocloud.

I'll let someone else do the math for you on what it costs to put together a 10U server to get that kind of performance without the $10K M3 Ultra Studio.
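
Back-of-envelope, with ballpark assumptions (a ~$10K 512GB M3 Ultra Studio, a ~$10K 96GB RTX 6000 Pro, and a guessed used-A100 price; none of these are quotes):

    # All figures are ballpark assumptions, not quotes.
    mac_vram_gb, mac_cost = 512, 10_000      # M3 Ultra Studio, unified memory
    gpu_vram_gb, gpu_cost = 96, 10_000       # RTX 6000 Pro-class card
    print(mac_vram_gb + gpu_vram_gb, "GB")   # ~608 GB combined
    print(mac_cost + gpu_cost, "USD")        # ~$20,000 all-in
    a100_vram_gb, a100_cost = 80, 15_000     # one used 80GB A100 (assumed price)
    print(5 * a100_vram_gb, "GB for", 5 * a100_cost, "USD")  # 400 GB, ~$75K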

What we're paying for five old 80GB A100s is criminal, but it's nothing compared to what these GB200 Blackwell setups are going to cost in 2030. Market economics aside, the fact that they require sophisticated liquid cooling infrastructure and draw 3x the power of the A100s will make these cards unattainable for small to medium organizations.

So yeah, if there's some outside chance that we can pair NVIDIA's speed with an ARM-powered machine that offers 512GB of unified memory while drawing 50W -- you better believe it's a big deal. We'll see. Sounds too good to be true.

dapperdrake

Thank you for opening my mind to a viewpoint I didn’t even know existed.

Yes, for many scenarios this is "not even an academic exercise".

For a very select few applications this is Gold. Finally serious linear algebra crunch for the taking. (Without custom GPU tapeout.)

hank808

"Nvidia." Not NVidia or nVidia, or the other ways. I feel that I can frequently figure out if someone is going to express a negative view about this company based only on whether they picked a weird way to write their name.

spartanatreyu

Their logo literally has a lowercase "n" in their name.

the_arun

I mistook eGPU for virtual GPU. But I was wrong; it means external GPU.

petters

> the GPU is limited by the Thunderbolt port

Not everything is limited by the transfer speed to/from the GPU. LLM inference, for example.
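
A rough sanity check of why: the weights cross the link once, and per-token traffic is tiny. All numbers below are illustrative assumptions, not measurements.

    # Illustrative assumptions: a 24B-param model at 4 bits/weight (~12 GB,
    # fits a 5090's 32 GB), ~5 GB/s of usable Thunderbolt throughput, and
    # ~16 KB of traffic per generated token.
    weights_bytes = 24e9 * 0.5
    link_bps = 5e9
    print(weights_bytes / link_bps)   # ~2.4 s: one-time cost to load the model
    token_bytes = 16e3
    print(token_bytes / link_bps)     # ~3 microseconds of wire time per token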

nailer

> GPU is limited by the Thunderbolt port

I thought Thunderbolt was like hot-pluggable PCIe? The whole point was not to limit peripherals.

zamadatix

There's more to peripheral limits than the protocol used. Thunderbolt connections add latency and cap bandwidth. Both, either, or neither of those may be an actual problem (depending on the use case), but they are examples of limits relative to native PCIe.
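
For a concrete sense of scale, nominal line rates of the links involved (approximate figures from memory, not spec text; real-world PCIe-tunnel throughput is lower):

    # Approximate nominal rates, Gbps.
    links_gbps = {
        "Thunderbolt 3/4 (PCIe tunnel)": 32,
        "Thunderbolt 5 (symmetric)": 80,
        "PCIe 4.0 x4": 64,
        "PCIe 4.0 x16": 256,
        "PCIe 5.0 x16": 512,
    }
    for name, gbps in links_gbps.items():
        print(f"{name}: {gbps} Gbps (~{gbps // 8} GB/s)")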

MIA_Alive

Even for ML experiments, you'd mostly want to run them on rented clusters anyway.

bangonkeyboard

I don't know how Apple has evaded regulatory scrutiny for their refusal to sign Nvidia's eGPU drivers since 2018.

mrpippy

Evidence that NVIDIA has even been trying? My understanding is that Apple didn’t allow 3rd parties to write graphics drivers past 10.13, but they could’ve done a non-graphics driver like this.

trueno

I emailed Jensen Huang at the very tail end of the Maxwell era and pretty much begged for Maxwell support on macOS. I didn't expect a reply, especially since I guessed his email based on some "how to find CEO emails" Google search result.

He actually did reply weeks later and said "i didnt realize people wanted this, my team has added them. go check now". Pretty sure that was the last time Nvidia drivers came to macOS.

There are a lot of assumptions made with this topic, particularly the assumption that Apple is blocking them. At least in my experience the opposite was true: Nvidia just flat out wasn't making them. However, I don't doubt the truth lies somewhere in between: Nvidia and Apple have a pretty much nonexistent relationship now. I don't know what's required here, but I also don't doubt Apple makes this experience suck butt for any interested parties.

sheiyei

Imagine 2026 Jensen "OpenClaw is the greatest software ever" Huang responding to emails from mere mortals

MBCook

The government doesn’t care? They’re a minority of the market? The vast majority of their computers didn’t have slots to put Nvidia GPUs in, and now none of them do?

hgoel

They said eGPU

the_arun

Yeah, external GPU.

frollogaston

eGPUs are kind of a joke. People would be way more likely to use dedicated GPUs with Macs if they had PCIe slots.

mulderc

Apple doesn’t have a monopoly in any market they are in.

TheDong

It depends how you define the market. In the 2001 microsoft case [0], the courts ruled Microsoft had a monopoly over the "Intel-based personal computer market".

Apple has a monopoly over the "M-chip" personal computer market. They have a monopoly over the iOS market with the app store. They have a monopoly over the driver market on macOS.

Like, Microsoft was found guilty of exploiting its monopoly by installing IE by default while still allowing other browser engines. On iOS, Apple bundles Safari by default and doesn't allow other browser engines.

If we apply the same standard that found MS a monopoly in the past, then Apple is obviously a monopoly, so at the very least I think it's fair to say that reasonable people can disagree about whether Apple is a monopoly or not.

[0]: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

hilsdev

I wouldn’t say it is obvious. Apple does not have a monopoly on ARM-based PCs. Labeling it a monopoly on M chips is not fair or accurate when comparing to MS on Intel. It's also probably relevant that MS was not selling PCs or their own hardware. They had a monopoly on a market where you effectively had to use their software to use the hardware you bought from a different company. Because Apple sells its own hardware and software as a single product, the consumer is not forced into restricting the hardware they bought by a second company's policies.

Underphil

I don't think any of what you're describing are legal "monopolies". I don't have a single Apple product in my life but I'm fairly sure there's nothing I'm prevented from doing because of that.

raw_anon_1111

That’s not how monopoly definitions work. That makes about as much sense as saying Nintendo has a monopoly on Nintendo consoles or Ford has a monopoly on Mustangs

JumpCrisscross

> Apple has a monopoly over the "M-chip" personal computer market. They have a monopoly over the iOS market with the app store

When a company is deemed an illegal monopoly, the DoJ basically becomes part of management. Antitrust settlements focus on germane elements, e.g. spin offs. But they also frequently include random terms of political convenience.

I don’t think we want a precedent where companies having a product means they have an automatic monopoly on said product.

SllX

Yes. If you define the market in a ridiculous manner and convince a court to go along with it, anybody can be a monopoly.

But the M series are an Apple product line designed by Apple with an ARM license and produced on contract by TSMC for use in other Apple products.

Don’t assume the facts from another case automatically apply in other cases.

Or as Justice Jackson once put it: “Other cases presenting different allegations and different records may lead to different conclusions”

scott_w

There’s no such thing as “monopoly on Apple-produced processors” because that’s absurd. The monopoly for MacBook would be “consumer laptops” most likely. Apple does not have a monopoly in consumer laptops to the best of my knowledge.

brookst

Reductionism is so cringe.

Intel sold chips to anyone. Anyone could make Intel computers.

Apple does not sell chips to anyone. Nobody else can make M-series computers.

Your argument is basically that Ford has a monopoly on selling Mustangs because Standard Oil had a monopoly on selling oil.

trueno

> Apple has a monopoly over the "M-chip" personal computer market

lmao, what? The "M-chip" is literally their chip: they designed it, built relationships with TSMC over it, and bankrolled it into production to put in their products. Literally hardware by Apple, for Apple. This was a decade-plus thing in the making; this is the risk/gamble Apple took and invested heavily in. That is Apple's innovation. Any other manufacturer is free to go do this themselves for their own devices; they just didn't, and for the most part still don't. That just isn't a monopoly at all; I'm amused you even got to that point in the first place.

It seems to carry some broad misunderstandings of what the M-series chips are, or an assumption that CPUs are supposed to be shared with any interested parties just because that was Intel's business model. Intel was historically slacking, and their one-size-fits-most approach wasn't meeting the engineering requirements Apple was after, generation after generation, so Apple took its CPU destiny into its own hands and made its own. If you feel like non-Apple laptop chips aren't living up to that kind of perf/ppu... well, yeah, you'd be right. But that's not really Apple's fault, and it's not a monopoly thing, like, at all. Either laptop manufacturers need to go make their own chips (unlikely) or Intel/Qualcomm/etc. need to catch up.

thisislife2

It isn't just about monopoly or unfair competition. This can also be covered under consumer rights - the Right to Repair. No OS provider should be allowed to dictate what software you can or can't run on your own device and/or OS you have paid for.

ssl-3

> It isn't just about monopoly or unfair competition. This can also be covered under consumer rights - the Right to Repair.

If we have a right to repair (we broadly do not, AFAICT), then that doesn't necessarily mean that we have a right to modify and/or add new functionality.

When I repair a widget that has become broken, I merely return it to its previous non-broken state. I might also decide to upgrade it in some capacity as part of this repair process, but the act of repairing doesn't imply upgrades. At all.

> No OS provider should be allowed to dictate what software you can or not run on your own device and / or OS you have paid for.

I agree completely, but here we are anyway. We've been here for quite some time.

satvikpendem

Courts have already ruled it does in the iOS app store market. You can disagree of course but then you'd be disagreeing with legal experts who know more about anti-trust law than you do.

afavour

But Apple’s share of the desktop/laptop market is very different than their share of the mobile one.

hilsdev

Credentialism to prevent discussion of political and government entities is incredibly dangerous

latexr

What’s that got to do with anything? Having a monopoly isn’t the only reason to be regulated.

GeekyBear

The same way Google evaded regulatory scrutiny for refusing to allow a YouTube client for Windows Phone?


bigyabai

Internet Explorer Mobile is a YouTube client. You're describing a client-server disagreement when the user is talking about an entirely client-based conflict.

realusername

Google deployed custom code to actively block the clients, so it went beyond just a disagreement.

pjmlp

They aren't a monopoly; that's why.

jtbayly

Isn't disabling SIP all you have to do?

frollogaston

Yeah I'm pretty sure Nvidia just doesn't care to make Mac drivers. For years there was no SIP, Apple sold the Mac Pro which could take Nvidia GPUs, but you basically couldn't use Nvidia because of how bad and outdated the drivers were. I had a GTX 650 in my Mac Pro for a while, it was borderline unusable.

syntaxing

From what I understand, this only works with tinygrad. Which is better than nothing, but CUDA or Vulkan on PyTorch isn't going to work from this.

[1] https://docs.tinygrad.org/tinygpu/
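
For the curious, a minimal tinygrad smoke test sketch. The "NV" backend name is an assumption based on tinygrad's usual device naming; check the tinygpu docs above for the actual setup.

    # Minimal sketch, assuming the eGPU appears as tinygrad's "NV" backend.
    from tinygrad import Tensor, Device

    Device.DEFAULT = "NV"              # route work to the Nvidia eGPU
    a = Tensor.rand(1024, 1024)
    b = Tensor.rand(1024, 1024)
    print((a @ b).mean().item())       # matmul runs on the eGPU if the driver loaded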

Keyframe

Such a shame both companies are too big on vanity to make great things happen. Imagine if you could run Mac hardware with Nvidia on Linux. It's all there, and closed walls are all that's keeping it from happening. That's what we as customers lose when we forgo control of what we purchase to those who sold us the goods.

deepsun

Don't purchase? I don't own any Apple devices; everything works fine.

TheDong

Unfortunately, Apple still won't release iMessage for Android or Linux (unlike every other messenger platform, like WhatsApp, Telegram, WeChat, Microsoft Teams, etc., which are all cross-platform).

Because of that, you need an Apple device around to be able to deal with iMessage users.

deepsun

Then it would be more correct to say that we "lose when we forgo control" when our friends push iMessage on us.

In my bubble literally no one uses iMessage. The more tech-savvy use Signal/GroupMe, the less tech-savvy use SMS/email. My family uses Signal to chat with me, as I can steer my own family a little.

Also, I sometimes open the Facebook web interface, but any attempt to offer WhatsApp I answer with "sorry, no Facebook apps on my phone, no Instagram/Messenger either". Never had any issues with that. Although I hear some countries are very dependent on Facebook, so it might be hard there.

By the way, I've noticed it's actually not hard to use multiple messengers; sometimes it's even faster to find a contact, as you always remember which app to look at in recents.

UPDATE: My point is that you can also influence your life and how people communicate with you. Up to a point, of course, but it's not like you can do nothing about it.

troad

But you don't need an Apple device to contact iMessage users. Every iMessage ID is a phone number (SMS/RCS) or email.

You've listed a whole bunch of alternatives available to you, but for some reason you demand that Apple change its unique offering into just another one of those for you. Why? Is that not a completely enforced monoculture?

Apple has always been off to the side, doing their own thing, and for some reason that fact utterly enrages people. They demand that Apple become just like everyone else. But we already have everyone else! And in every single field Apple is in, there is more of everyone else than there is of Apple.

Have you considered people like Apple products precisely because they're not like everything else? That making Apple indistinguishable from Facebook or Google is no victory, but a significant loss for customer choice?

fg137

> to be able to deal with iMessage users.

I have been an Android user for the past 15 years, and somehow iMessage has never been a problem. Most of the time I don't even know if someone uses iMessage or not.

pezezin

Good thing that iMessage is only popular in the US. I have never seen anybody using it, I don't even know how it looks, and if someone told me to use it I would laugh at them.

sunnybeetroot

That is no longer true: https://bluebubbles.app/ Well… it's not exactly no longer true; you do need an Apple VM, but it doesn't have to be the end device.

kllrnohj

Why? Just make iMessage users put up with green bubbles if they want to talk to you?

Thanks to Apple co-opting phone numbers, there's literally no need to ever have iMessage for anyone

raw_anon_1111

No you don’t. You can “deal” with iMessage users by using SMS and RCS


aljgz

I don't understand the logic for the downvotes. We vote with our wallets. When I could not upgrade the RAM on my personal Dell machine, I asked for a Framework at my new job. As my Intel-based Framework at work had thermal-throttling problems, for my next personal purchase I got an AMD one. As Ubuntu had shady practices, I installed Fedora; as GNOME forced UX choices I did not want, I used KDE. As I wanted my machine to be even more stable, I use an immutable spin.

The machine I'm using now represents my choices and matches what matters to me, and it works closer to perfectly than all my machines in the past.

And yes, I have worked with Macs, and no, the UX and the entire tyranny of the Apple ecosystem were not something I could live with.

And yes, this machine is fast, predictable, a joy to work with, and a tool I control, not a tool to control me. If something happens to it, I can order the part (the same one that goes into a new machine, at the same price) and keep using my laptop.

TheDong

"We vote with our wallet, so don't complain" is a bad take in my opinion.

Like, for phones, I want a phone which runs Linux, has NFC support, and also has iMessage so my friend who only communicates with blue-bubbles and will never message a green-bubble will still talk to me. I also want it to have regulatory approval in the country I live in so I can legally use it to make calls.

Because apple has closed the iMessage ecosystem such that a linux phone can't use it, such a device is impossible. I cannot vote for it.

As such, I will complain about every phone I own for the foreseeable future.

arjie

Woah, this is exciting. I'm traveling but I have a 5090 lying around at home. I'm eager to give it a go. Docs are here: https://docs.tinygrad.org/tinygpu/

I hope it'll work on an M4 Mac Mini. Does anyone know what hardware to get? You'll need a full ATX PSU to supply power, right? And then tinygrad can do LLM inference on it?

999900000999

You can buy a cheap GPU enclosure for about $100 off AliExpress.

It takes a standard PSU. However, Mac Minis don't have OCuLink, so you might be a bit limited by whatever USB-C can do.

Now if Intel can get their Arc drivers in order, we'll see some real budget fun.

https://www.newegg.com/intel-arc-pro-b70-32gb-graphics-card/...

32 GB of VRAM for $1,000. Plus a $500 Mac Mini.

Fnoord

Those $100 ones don't come with a cage. If you do want a cage, you'll end up with $180 in total, with zero warranty.

Article mentions: "Apple finally approved our driver for both AMD and NVIDIA"

Does not mention Intel (GPUs). Select AMD GPUs work on macOS, but...

Macs (both Intel and ARM) support TB, but eGPUs only work on Intel Macs, and basically only with AMD.

The good news is that for medium-end gaming the choices are solid, and CUDA works on AMD these days.

999900000999

Fortune favors the bold, my friend.

I own one of these; the cage is just a piece of plastic. Anyway, I don't think $80 is that big of a difference here. I can't really afford a $4K Nvidia GPU. Intel is my only hope.

MuffinFlavored

How big of a handicap on performance is the external enclosure for something like an RTX 5090?


manmal

Maybe I’m lacking imagination. But how will a GPU with small-ish but fast VRAM and great compute augment a Mac with large but slow VRAM and weak compute? The interconnect isn’t powerful enough to change layers on the GPU rapidly, I guess?

zozbot234

> But how will a GPU with small-ish but fast VRAM and great compute augment a Mac with large but slow VRAM and weak compute?

It would work just like a discrete GPU when doing CPU+GPU inference: you'd run a few shared layers on the discrete GPU and place the rest in unified memory. You'd want to minimize CPU/GPU transfers even more than usual, since a Thunderbolt connection only gives you equivalent throughput to PCIe 4.0 x4.
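
Rough numbers for why you pin layers on the eGPU rather than stream them. The fp16 weights, hidden size of 8192, and ~5 GB/s of link throughput below are all assumptions for illustration:

    # Illustrative assumption: one transformer block is ~12 * d^2 params.
    d = 8192
    layer_bytes = 12 * d * d * 2      # ~1.6 GB per block in fp16
    act_bytes = d * 2                 # one token's activations crossing the split
    link_bps = 5e9                    # ~PCIe 4.0 x4 over Thunderbolt
    print(layer_bytes / link_bps)     # ~0.3 s to re-ship a layer: hopeless per token
    print(act_bytes / link_bps)       # ~3 microseconds to hand activations over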

manmal

But isn’t the Mac Mini the weak link in that scenario?

arjie

My Mini is actually the smallest model, so it has "small but slow VRAM" (haha!); the reason I want the GPU is for the smaller Gemmas or Qwens. Realistically, I'll probably run on an RTX 6000 Pro, but this might be fun for home.

GeekyBear

We've seen many recent projects that stream models directly from SSD to a discrete GPU's limited VRAM on PCs.

How big a bottleneck is Thunderbolt 5 compared to an SSD? Is the 120 Gbps mode only available when linked to a monitor?

manmal

That’s what, 14GB/s? The GPU's VRAM can do 100x that.
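
Checking that ratio with nominal figures (the 5090's memory bandwidth is quoted from memory and may be slightly off):

    # Nominal figures: Thunderbolt 5 at 120 Gbps, RTX 5090 GDDR7 at ~1.8 TB/s.
    tb5_gb_s = 120 / 8            # 15 GB/s
    vram_gb_s = 1800
    print(vram_gb_s / tb5_gb_s)   # ~120x, so "100x" is about right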

rldjbpin

Based on your card, it should be using a decent bit of the 600W or so passed through the new 16-pin connector. It goes without saying that a proper PSU (doesn't have to be ATX, but at least 750W to be on the safer side) is a must.

For Thunderbolt enclosures, consider going through this list: https://egpu.io/best-egpu-buyers-guide/#tb3-enclosures

Zero idea about Mac support, so YMMV.

lowbloodsugar

“Lying around”. I’ve got an unopened 5090 in a box that I know will suffer the same fate, so I’m sending it back. So privileged to have the money to impulse buy a 5090 and yet no time to actually do anything with it.

c-c-c-c-c

You should see his Ferrari.

mlfreeman

I followed the instructions link and read the scripts...although the TinyGPU app is not in source form on GitHub, this looks to me like the GPU is passed into the Linux VM underneath to use the real driver and then somehow passed back out to the Mac (which might be what the TinyGrad team actually got approved).

Or I could have totally misunderstood the role of Docker in this.

gsnedders

https://docs.tinygrad.org/tinygpu/ are their docs, and https://github.com/tinygrad/tinygrad/tree/4d36366717aa9f1735... is the actual (user space) driver.

My read of everything is that they are using Docker for NVIDIA GPUs for the sake of "how do you compile code to target the GPU"; for AMD they're just compiling their own LLVM with the appropriate target on macOS.

ajdegol

I think that Metal doesn't do double precision, so that limits some serious physics simming; but if you're doing that, I guess you just rent a GPU somewhere.

I would definitely be into this if adding an eGPU had first-class support.

nxobject

It'll be interesting to see whether this is price-competitive versus remoting into a cluster. Might be for smaller orgs/consultants.

eoskx

Interesting, but it cannot run CUDA or, more to the point, nvidia-smi.

embedding-shape

Well, to be fair, the whole shebang is from a completely different company that has its own ML library and such, so it isn't that surprising. Although I agree that a CUDA shim or similar would be a lot more interesting, getting to the point of running inference and training with your very own library is pretty dope already.

wmf

Pretty misleading. This driver is only for compute, not graphics.

polotics

As a sizable share of the market is going to want to use this for local LLMs, I do not think this is that misleading.

bigyabai

Most people I know are not using TinyGrad for inference, but CUDA or Vulkan (neither of which are provided here).

comboy

GPUs can do graphics too?

aobdev

I can’t tell if you’re making a joke about the current state of AI and GPUs or refuting the purpose of this driver

manmal

Graphics was not what came to mind when I saw the headline.

mort96

Graphics is typically what comes to my mind when people talk about graphics processing units

manmal

The latest MacBook Pros don’t even need external GPUs to run AAA games.

Fnoord

The term eGPU gives it away, but is inaccurate.

Something like eNPU or eTPU seems more appropriate here.

EagnaIonat

> If you have a Thunderbolt or USB4 eGPU and a Mac, today is the day you've been waiting for!

I got an eGPU back in 2018 and could never get it to work, to the point that it soured me on doing it again.

These days, for heavy-duty work I just offload to the cloud. This all feels like NVidia trying to be relevant versus ARM.

embedding-shape

> This all feels like NVidia trying to be relevant versus ARM.

Except it's done by a third group, tinygrad, so it's more non-Nvidia people wanting to use Nvidia hardware on Apple hardware than "NVidia trying to be relevant".

ffsm8

Yeah, Nvidia couldn't give less of a fuck about consumers. And eGPU is inherently consumer-targeted.

bigyabai

FWIW Nvidia already supports UNIX OSes and AArch64 with their drivers. CUDA and cuDNN could be working overnight if Apple signed the drivers.

EagnaIonat

Thanks for the correction. I guess my PTSD from trying to get this running before is biasing my response.

vondur

If you could get Nvidia driver support on Macs, I bet Apple would have sold more Mac Pros.

ProllyInfamous

If unfamiliar: it is a big deal that AAPL & NVDA again have an official relationship.

For well over the previous decade, Apple has not allowed newer nVidia GPUs (by not allowing drivers).

A seven-year-old GPU (e.g. Vega 64, GTX 1080 Ti) can still process more tokens/second than most Apple Silicon (particularly the lower-end chips).

As discussed elsewhere, Apple Max/Ultra processors are best suited for huge models (but are not as fast as, e.g., an RTX 5090).

bigyabai

This is not an official relationship, this is a third-party effort by tiny corp with no Nvidia involvement.

ProllyInfamous

From headline title:

>>Apple approves...

This is a big deal.

the__alchemist

I'm writing scientific software that has components (molecular dynamics) that are much faster on GPU. I'm using CUDA only, as it's the easiest to code for. I'd assumed this meant no-go on ARM Macs. Does this news make that false?

wmf

This driver doesn't support CUDA.

ksec

This comment should be pinned at the top.

brcmthrowaway

Isn't MLX a CUDA translation layer?

ykl

No, MLX is nothing like a CUDA translation layer at all. It'd be more accurate to describe MLX as a NumPy translation layer; it lets you write high-level code dealing with NumPy-style arrays and under the hood will use a Metal GPU or CUDA GPU for execution. It doesn't translate existing CUDA code to run on non-CUDA devices.
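
A tiny example of that NumPy-style API (assumes mlx is installed; on Apple Silicon the default device is the Metal GPU):

    # Minimal MLX sketch: NumPy-style arrays, lazy evaluation, GPU execution.
    import mlx.core as mx

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = a @ b                 # composes lazily, like a NumPy expression
    mx.eval(c)                # forces computation on the default (GPU) device
    print(c.shape)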

superb_dev

My understanding is that MLX is Apple’s CUDA, so a CUDA translation layer would target MLX

wmf

Does tinygrad support MLX?
