latchkey
dijit
> 999999% uptime
Assuming you mean 99.9999%; your hyperscaler isn't giving you that. MTBF is comparable.
It's hardware at the end of the day, the VM hypervisor isn't giving you anything on GPU instances because those GPU instances aren't possible to live-migrate. (even normal VMs are really tricky).
In a country with a decent power grid and a UPS (or if you use a colo provider), you're going to get the same availability from a single machine, maybe even slightly higher because there are fewer moving parts.
I think this "cloud is god" mentality betrays the fact that server hardware is actually hugely reliable once it's working; and the cloud model literally depends on this fact. The reliability of cloud is simply the reliability of hardware; they only provided an abstraction on management not on reliability.
llm_trw
I think people just don't realize how big computers have gotten since 2006. A t2.micro was an ok desktop computer back then. Today you can have something 1000 times as big for a few tens of thousands. You can easily run a company that serves the whole of the US out of a closet.
JohnBooty
It's just wild to me how seemingly nobody is exploiting this.
Our industry has really lost sight of reality and the goals we're trying to achieve.
Sufficient scalability, sufficient performance, and as much developer productivity as we can manage given the other two constraints.
That is the goal, not a bunch of cargo-culty complex infra. If you can achieve it with a single machine, fucking do it.
A monolith-ish app, running on e.g. an EPYC with 192 cores and a couple TB of RAM? Are you kidding me? That is so much computing power, to the point where for a lot of scenarios it can replace giant chunks of complex cloud infrastructure.
And for something approaching a majority of businesses it can probably replace all of it.
(Yes, I know you need at least one other "big honkin server", located elsewhere, for failover. And yes, this doesn't work for all sets of requirements, etc)
geodel
Well, the problem nowadays is that what can be done has become what must be done, totally bypassing the question of what should be done. So now a single service serving 5 million requests in a business gets replaced by 20 microservices generating 150 million requests, with distributed transactions, logging (MBs of log per request), monitoring, metrics and so on, all leading to massive infrastructure bloat. Do it for a dozen more applications and the future is cloudy now.
Once management is convinced by sales people or consultants any technical argument can be brushed away as not seeing the strategic big picture of managing enterprise infrastructure.
dartos
Well, you'd probably also at least want a CDN in each region, so like 3 closets.
zaptrem
As someone who has done a bunch of large scale ML on hyperscaler hardware, I will say the uptime is nowhere near 99.9999%. Given a cluster of only a few hundred GPUs, one or more failures are a near certainty, to the point where we spend a bunch of time on recovery time optimization.
everforward
> The reliability of cloud is simply the reliability of hardware; they only provided an abstraction on management not on reliability.
This isn't really true. I mean it's true in the sense that you could get the same reliability on-premise given a couple decades of engineer hours, but the vast majority of on-premise deployments I have seen have significantly lower reliability than clouds and have few plans to build out those capabilities.
E.g. if I exclude public cloud operator employers, I've never worked for a company that could mimic an AZ failover on-prem, and I've worked for a couple F500s. As far as I can recall, none of them had even segmented their network beyond the management plane having its own hardware. The rest of the DC network was centralized; I recall one of them in particular because an STP loop screwed up half of it at one point.
Part of paying for the cloud is centralizing the costs of thinking up and implementing platform-level reliability features. Some of those things are enormously expensive and not really practical for smaller economies of scale.
Just one random example is tracking hardware-level points of failure and exposing that to the scheduler. E.g. if a particular datacenter has 4 supplies from mains and each rack is only connected to a single one of those supplies, when I schedule 4 jobs to run there it will try to put each job in a rack with a separate power supply to minimize the impact of losing a mains. Ditto with network, storage, fire suppression, generators, etc, etc, etc.
That kind of thing makes 0 economic sense for an individual company to implement, but it starts to make a lot of sense for a company who does basically nothing other than manage hardware failures.
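A minimal sketch of the failure-domain idea described above, assuming a toy scheduler (the names and API here are illustrative, not any real cloud's):

```python
from itertools import cycle

def place_jobs(jobs, feeds):
    # Round-robin jobs across mains feeds so that losing any single feed
    # takes down as few jobs as possible. Real schedulers also consider
    # network, storage, fire suppression, generators, etc.
    rotation = cycle(sorted(feeds))
    return {job: next(rotation) for job in jobs}

placement = place_jobs(
    ["job-1", "job-2", "job-3", "job-4"],
    ["feed-a", "feed-b", "feed-c", "feed-d"],
)
print(placement)  # each job lands on a distinct mains feed
```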
traceroute66
> instances aren't possible to live-migrate
Some of the cloud providers don't even do live migration. They adhere to the cloud mantra of "oh well, it's up to the customer to spin up and carry on elsewhere".
I have it on good authority that some of them don't even take A+B feeds to their DC suites - and then have the chutzpah to shout at the DC provider when their only feed goes down, but that's another story... :)
yencabulator
> (even normal VMs are really tricky)
For what it's worth, GCP routinely live-migrates customer VMs to schedule hardware for maintenance/decommissioning when hardware sensors start indicating trouble. It's standard everyday basic functionality by now, but only for the vendors who built the feature in from the beginning.
dijit
I’m aware, but it won’t work for GPU-accelerated workloads.
wkat4242
> Assuming you mean 99.9999%; your hyperscaler isn't giving you that. MTBF is comparable.
Yeah, we've already had about a day's worth of downtime this year on Office 365, and Microsoft is definitely a hyperscaler. So that's 99.3% at best.
dijit
meta: I'm always interested how the votes go on comments like this. I've been watching periodically and it seems like I get "-2" at random intervals.
This is not the first time that "low yield" karma comments have sporadic changes to their votes.
It seems unlikely at the rate of change (roughly 3-5 point changes per hour) that two people would simultaneously (within a minute) have the same desire to flag a comment, so I can only speculate that:
A) Some people's flag is worth -2
B) Some people, passionate about this topic, have multiple accounts
C) There's bots that try to remain undetected by making only small adjustments to the conversation periodically.
I'm aware that some people's jobs depend very strongly on the cloud, but nothing I said could be considered off-topic or controversial: cloud for GPU compute relies on hardware reliability just like everything else does. This is fact. Regardless, the voting behaviour on comments like this is extremely suspicious.
michaelt
> There is absolutely no way anyone is going to be making any money offering $2 H100s unless they stole them and they get free space/power...
At the highest power settings, H100s consume 400 W. Add another 200 W for CPU/RAM. Assume you have an incredibly inefficient cooling system, so you also need 600 W of cooling.
Google tells me US energy prices average around 17 cents/kWh - even if you don't locate your data centre somewhere with cheap electricity.
17 cents/kWh * 1200 watts * 1 hour is only 20.4 cents/hour.
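That arithmetic, spelled out (all figures are the comment's own assumptions):

```python
# Power cost per GPU-hour at the comment's assumed figures.
gpu_w, host_w, cooling_w = 400, 200, 600   # watts; cooling assumed 1:1, deliberately inefficient
price_per_kwh = 0.17                       # USD, rough US average
cost_per_hour = (gpu_w + host_w + cooling_w) / 1000 * price_per_kwh
print(f"${cost_per_hour:.3f}/hour")        # ~$0.204
```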
ckastner
That's just the power. If one expects an H100 to run for three years at full load, that's 24 x 365 x 3 = 26,280 hours. Assuming a price of $25K per H100, that means about $1/h to amortize costs. Hence the "unless they stole them", I guess.
Factor in space, networking, cooling, security, etc., and $2 really does seem undoable.
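The amortization math, made explicit (using the comment's assumed $25K unit price and three-year life):

```python
# Capital cost per hour if the card must pay for itself over three years.
unit_price = 25_000                  # assumed H100 price, per the comment
service_hours = 24 * 365 * 3         # 26,280 hours over three years at full load
capex_per_hour = unit_price / service_hours
print(f"${capex_per_hour:.2f}/hour")  # ~$0.95, before space, power, networking, etc.
```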
Negitivefrags
None of that matters if you already bought the H100 and have no use for it. You might as well recoup as much money as you can on it.
swyx
amortization curves for gpus are 5-7 years per my gpu rich contacts. even after they cease to be top of the line they are still useful for inference. so you can halve that $1/h
latchkey
You are not looking at the full economics of the situation.
There are very few data centers left that can do 45kW+ rack density, which translates to 32 H100/MI300x GPUs in a rack.
In most datacenters you're looking at 1 or 2 boxes of 8 GPUs per rack. As a result, it isn't just the price of power; it's whatever the data center wants to charge you.
Then you factor in cooling on top of that...
sandworm101
For the fuller math one has to include the cost of infrastructure financing, which is tied to interest rates. Given how young most of these H100 shops are, I assume that they pay more to service their debts than for power.
Wytwwww
> I assume that they pay more to service their debts than for power.
Well yes, because for GPU datacentres fixed/capital costs make up a much higher fraction of total cost than power and other operating expenses do for CPUs, to the point that power usage barely even matters. A $20k GPU that uses 1 kW (which is way more than it would in reality) 24x7 would cost about $1.3k per year to run at $0.15 per kWh; that's almost insignificant compared to depreciation.
The premise is that nobody could make any money renting H100s at $2 even if they got the cards for free, unless they also had free power. That makes no sense whatsoever when you can get 2x AMD EPYC™ 9454P servers at 2x408 W (for the full system) for around $0.70 per hour in a German data center.
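A quick check of that claim (the $20k price, 1 kW draw, and $0.15/kWh are the comment's own hypothetical figures):

```python
# Annual power cost for a hypothetical $20k card drawing a flat 1 kW, 24x7.
power_kw = 1.0
price_per_kwh = 0.15
annual_power_cost = power_kw * 24 * 365 * price_per_kwh  # kWh consumed * $/kWh
print(round(annual_power_cost))  # ~1314, small next to a $20,000 purchase price
```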
neom
This reads exactly like what people said about DigitalOcean when we launched it.
count
To be fair, DO was muuuch sketchier in the past (eg https://news.ycombinator.com/item?id=6983097).
Launching any multitenant system is HARD. Many of them are held together with bubble gum and good intentions….
neom
Boy I'm never going to live that one down around here huh? Hackernews always going to keep you honest, ha. :D
imglorp
How was DO able to provide what AWS didn't want to? Was it purely margins?
neom
AWS just really didn't want to; very different market segment. They were doing a pure enterprise play, looking to capture most of the enterprise. We were doing a B2C play that we presumed would, over time, suck us up into the SMB market. My theory was we had about 1% risk from them. From what I could tell, Jeff and Jassy had zero interest in our segment. I left just before the IPO, but when we started, the margin was about 60%; after we figured out how many VMs we could comfortably fit on the box, Ben U just did napkin math and said "50% seems like a fine enough margin to start".
bjornsing
> There is absolutely no way anyone is going to be making any money offering $2 H100s unless they stole them and they get free space/power...
That’s essentially what the OP says. But once you’ve already invested in the H100s you’re still better off renting them out for $2 per hour rather than having them idle at $0 per hour.
Wytwwww
Then how come you can still get several last gen EPYC or Xeon systems that would use the same amount of power for under $1 per hour?
For datacentre GPUs, the energy, infrastructure and other variable costs seem relatively insignificant compared to fixed capital costs. Nvidia's GPUs are just extremely expensive relative to how much power they use (compared to CPUs).
> H100s you’re still better off renting them out for $2 per hour rather than having them idle at $0 per hour.
If you're barely breaking even at $2, then immediately selling them would seem like the only sensible option (depreciation alone is significantly higher than the power cost of running an H100 24x365 at 100% utilization).
bjornsing
> If you're barely breaking even at $2, then immediately selling them would seem like the only sensible option (depreciation alone is significantly higher than the power cost of running an H100 24x365 at 100% utilization).
If you can then probably yes. But why would someone else buy them (at the price you want), when they can rent at $2 per hour instead?
traceroute66
> 999999% uptime
I've said it before and I'll say it again...
Read the cloud provider small-print before you go around boasting about how great their SLAs are.
Most of the time they are not worth the paper they are written on.
kjs3
This is beyond true. Read and understand what your cloud SLAs are, not what you think they are or what you think they should be. There was significant consternation generated when I pointed out that the SLA for availability for an Azure storage blob was only 4 nines with zone redundancy.
https://azure.microsoft.com/files/Features/Reliability/Azure...
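For context, a quick sketch of how much downtime each level of "nines" actually allows per year:

```python
# Allowed downtime per year for each availability level ("nines").
minutes_per_year = 365 * 24 * 60  # 525,600
allowed_downtime = {n: minutes_per_year * 10 ** -n for n in range(2, 7)}
for n, minutes in allowed_downtime.items():
    print(f"{n} nines: {minutes:,.2f} minutes/year")
# Four nines permits ~52.6 minutes/year; six nines only ~32 seconds.
```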
latchkey
Not just the fine print, but also look at how they present themselves. A provider with pictures of equipment and detailed specifications is always going to be more interesting than a provider with just a marketing website and a "contact us" page.
marcyb5st
But it is about minimizing losses, not making profits.
If you read the article, such prices happen because a lot of companies bought hardware reservations for the next few years. Instead of keeping the hardware idle (since they pay for it anyway), they rent it out on the cheap to recoup something.
rajnathani
From your bio, your company is Hot Aisle.
This company TensorWave, covered by TechCrunch [0] this week, sounds very similar; I almost thought it was the same! Anyway, best of luck, we need more AMD GPU compute.
[0] https://techcrunch.com/2024/10/08/tensorwave-claims-its-amd-...
latchkey
Thanks! Definitely not the same at all.
tasuki
> If you're trying to do the latter, you're not considering how you are risking your business on unreliable compute.
What do you mean by "risking your business on unreliable compute"? Is there a reason not to use one of these to train whatever neural nets one's business needs?
oefrha
Well, someone who’s building a GPU renting service right now obviously wants to scare you into using expensive and “reliable” services; the market crashing is disastrous for them. The reality is high price is hardly an indicator of reliability, and the article very clearly explains why H100 hours are being sold at $2 or less, and it’s not because of certain providers lacking reliability.
latchkey
Nah, don't be silly. No need to scare anyone into anything. Use whatever you want to use. My point in saying any of this is simply that we offer this service to people who value these things.
lazide
If it crashes half way through, you don’t get a useful model, and you’re still on the hook for the rental costs to get there maybe?
tasuki
That's... possible? But a little unlikely.
I think I'll take that risk over paying more for your allegedly more reliable GPUs anytime.
latchkey
Depends on the SLA.
TechDebtDevin
I've been saying this would happen for months. There (was) a giant arbitrage for data centers that already have the infra.
If you could get hold of H100s and had an operational data center, you essentially had the keys to an infinite money printer at anything above $3.50/hr.
Of course, because we live in a world of efficient markets, that was never going to last forever. But they are still profitable at $2.00, assuming they have cheap electricity/infra/labor.
pico_creator
Problem is - you can find some at $1.
startupsfail
The screenshot there is 1xH100 PCIE, for $1.604. Which is likely promotional pricing to get customers onboarded.
With promotional pricing it can be $0 for qualified customers.
Note also how the author shows screenshots of invites for private alpha access. It can be mutually beneficial for the data center to provide discounted alpha testing access: the developer gets discounted access, the data center gets free, realistic alpha-testing workloads.
pico_creator
When I did the screenshot a month ago, it wasn't public info yet.
Now it's public: SFCompute lists it on their main page - https://sfcompute.com/
And they are *not* the only one
electronbeam
The PCIE has much lower perf than even a 1x slice of an SXM
shrubble
So are you thinking that the lower price is to get the customer in the door and then when they need the Infiniband connected GPUs to charge them more?
swyx
original title i wrote for this piece was "$1 H100s" but i deleted because even i thought it was so ridiculously low lol
but yes sfcompute home page is now quoting $0.95/hr average. wild.
ipsum2
sfcompute is a scam. You can't buy GPUs at that price. They're running a "private beta" where people can bid for a spot GPU, but they let a limited number of people into the beta, so the prices are artificially low.
electronbeam
The real money is in renting infiniband clusters, not individual gpus/machines
If you look at lambda one click clusters they state $4.49/H100/hr
latchkey
I'm in the business of mi300x. This comment nails it.
In general, the $2 GPUs are either PE venture losing money, long contracts, huge quantities, pcie, slow (<400G) networking, or some other limitation, like unreliable uptime on some bitcoin miner that decided to pivot into the GPU space and has zero experience on how to run these more complicated systems.
Basically, all the things that if you decide to build and risk your business on these sorts of providers, you "get what you pay for".
jsheard
> slow (<400G) networking
We're not getting Folding@Home style distributed training any time soon, are we.
krasin
Distributed training-data creation & curation is more useful and feasible. Training gets 1.5x cheaper every year, but data is just as expensive, if not more, given that the era of "free web crawls of human knowledge" is over.
marcyb5st
I agree with you, but as the article mentions, if you need to finetune a small/medium model you really don't need clusters. Getting a whole server with 8/16x H100s is more than enough. And I also agree with the article when it states that most companies are finetuning some version of llama/open-weights models today.
pico_creator
Exactly - it's covered in the article that there is a segmentation happening via GPU cluster size.
Is it big enough for foundation model training from scratch? Then it's ~$3+. Otherwise the price drops hard.
Problem is, "big enough" is a moving goalpost now; what was big becomes small.
swyx
so why not buy up all the little h100s and string enough together for a cluster? seems like a decent rollup strategy?
of course it would still cost a lot to do... but if the difference is $2/hr vs $4.49/hr then there's some size where it makes sense
ipsum2
Only if they're networked with Infiniband.
pico_creator
Makes sense, though only folks like runpod / sfcompute / etc, have enough visibility to maybe pull this off?
It's a riskier move than just taxing the excess compute now and printing money on the margins from bag holders.
ranger_danger
Last year we reached out to a major GPU vendor for a need to get access to a seven figure dollar amount worth of compute time.
They contacted (and we spoke with) several of the largest partners they had, including education/research institutions and some private firms, and could not find ANYONE that could accommodate our needs.
AWS also did not have the capacity, at least for spot instances since that was the only way we could have afforded it.
We ended up rolling our own solution with (more but lower-end) GPUs we sourced ourselves that actually came out cheaper than renting a dozen "big iron" boxes for six months.
It sounds like currently that capacity might actually be available now, but at the time we could not afford to wait another year to start the job.
chronogram
If you were able to make do with cheaper GPUs, then you didn't need FP64 so you didn't need H100s in the first place right? Then you made the right choice in buying a drill for your screw work instead of renting a jackhammer even if the jackhammer would've seemed cooler to you at the time.
KeplerBoy
Does anyone doing AI actually need FP64? And yet they sell well.
ranger_danger
> didn't need H100s
I think we're splitting hairs here, it was more about choosing a good combination of least effort, time and money involved. When you're spending that amount of money, things are not so black and white... rented H100s get the job done faster and easier than whatever we can piece together ourselves. L40 (cheaper but no FP64) was also brand new at the time. Also our code was custom OpenCL and could have taken advantage of FP64 to go faster if we had the devices for it.
wg0
> Collectively there are less than <50 teams worldwide who would be in the market for 16 nodes of H100s (or much more), at any point in time, to do foundation model training
At best 100, and this number will go down as many fail to make money. Even among 100 traditional software development companies the success rate would be very low, and here we're talking about products that themselves work probabilistically all the way down.
pico_creator
I'm quite sure there are more than 100 clusters even, though that would be harder to prove.
So yeah, it would be rough.
Der_Einzige
I just want to observe that there are a lot of people paying huge amounts of money for consulting on this exact topic, and that this article is jam-packed with more recent and relevant information than almost any of these consultants have.
pico_creator
Feel free to forward it to the clients of those paid consultants. Also, how do I collect my cut?
swyx
author @pico_creator is in here actively replying in case u have any followups.. i just did the editing
pico_creator
Also: how many of those consultants have actually rented GPUs, used them for inference, or used them to finetune/train?
aurareturn
I’m guessing most of them are advising Wallstreet on AI demand.
grues-dinner
> For all the desperate founders rushing to train their models to convince their investors for their next $100 million round.
Has anyone actually trained a model worth all this money? Even OpenAI is struggling to staunch the outflow of cash. Even if you can get a profitable model (for what?), how many billion-dollar models does the world support? And everyone is throwing money into the pit and just hoping that there's no technical advance that obsoletes everything from under them, or commoditisation leading to a "good enough" competitor that does it cheaper.
I mean, I get that everyone and/or they investors has got the FOMO for not being the guys holding the AGI demigod at the end of the day. But from a distance it mostly looks like a huge speculative cash bonfire.
justahuman74
> For all the desperate founders rushing to train their models to convince their investors for their next $100 million round.
I would say Meta has (though not a startup) justified the expenditure.
By freely releasing llama they undercut a huge swath of competitors who could get funded during the hype. Then when the hype dies they can pick up what the real size of the market is, with much better margins than if there were a competitive market. Watch as one day they stop releasing free versions and start rent-seeking on N+1.
grues-dinner
Right, but that is all predicated that, when they get to the end, having spent tons of nuclear fuel, container shiploads of GPUs and whole national GDPs on the project, there will be some juice worth all that squeeze.
And even if AI as we know it today is still relevant and useful in that future, and the marginal value per training-dollar stays (becomes?) positive, will they be able to defend that position against lesser, cheaper, but more agile AIs? What will the position even be that Llama2030 or whatever will be worth that much?
Like, I know that The Market says the expected payoff is there, but what is it?
vineyardmike
As the article suggests, the presence of LLAMA is decreasing demand for GPUs. Which are critical to Metas ad recommendation services.
Ironically, by supporting the LLM community with free compute-intense models, they’re decreasing demand (and price) for the compute.
I suspect they’ll never directly monetize LLAMA as a public service.
rsynnott
> having spent tons of nuclear fuel
It will be primarily gas, maybe some coal. The nuclear thing is largely a fantasy; the lead time on a brand new nuclear plant is realistically a decade, and it is implausible that the bubble will survive that long.
scotty79
> there will be some juice worth all that squeeze.
Without the squeeze there'd be a risk for some AI company getting enough cash to buy out Facebook just for the user data. If you want to keep status quo it's good to undercut someone in the cradle that could eventually take over your business.
So it might cost Meta a pretty penny, but it's a mitigation of an existential risk.
If you've climbed to the top of the wealth and influence ladder, you should spend all you can to kick away the ladder. It's always going to be worth it - unless you still fall because it wasn't enough.
pico_creator
Given their rising stock price trend, due to their moves in AI, it's definitely been worth it for them.
jordwest
> I get that everyone and/or they investors has got the FOMO for not being the guys holding the AGI demigod at the end of the day
Don't underestimate the power of the ego...
Look at their bonfire, we need one like that but bigger and hotter
Aeolun
Isn’t OpenAI profitable if they stop training right at this moment? Just because they’re immediately reinvesting all that cash doesn’t mean they’re not profitable.
Attach6156
And if they stop training right now their "moat" (which I think is only o1 as of today) would last a good 3 to 6 months lol, and then to the Wendy's it is.
Aeolun
That is similarly true for all other AI companies. It’s why they don’t do that. But everyone is still happy to give them more money because their offering is good as it is.
wyclif
>and then to the Wendy's it is
I didn't really catch that pop culture reference. What does that mean?
0xDEAFBEAD
This guy claims they are losing billions of dollars on free ChatGPT users:
https://nitter.poast.org/edzitron/status/1841529117533208936
fragmede
Ed Zitron's analysis hinges on a lot of assumptions. Much of it comes down to the question of how much it actually costs to run a single inference of ChatGPT. That $20/month pro subscription could be a loss-leader or it could be making money, depending on the numbers you want to use. If you play with the numbers, and compare it to, say, $2/hr for an H100 currently on the front page, $20/$2/hr gets you 10 hours of GPU time before it costs more in hardware than your subscription, and then factoring in overhead on top, it's just not clear.
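The back-of-envelope being argued here, made explicit (the $2/hr H100 rate is the article's spot price, not OpenAI's actual cost structure):

```python
# How many GPU-hours a $20/month subscription covers at the $2/hr spot rate.
subscription = 20.0   # $/month, ChatGPT Plus
h100_rate = 2.0       # $/hour, the spot price discussed in the article
breakeven_hours = subscription / h100_rate
print(breakeven_hours)  # 10.0 GPU-hours/month before hardware alone exceeds revenue
```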
Aeolun
You’d need to know how much they are using for that. I only use the API and the $20 I bought a year ago aren’t gone yet.
elcomet
Not everyone is doing LLM training. I know plenty of startups selling AI products for various image tasks (agriculture, satellite, medical...)
mark_l_watson
Yes, a lot of the money to be made is in the middleware and application sides of development. I find even small models like Llama 3.2 2B to be extremely useful and fine tuning and integration with existing businesses can have a large potential payoff for smaller investments.
hackernewds
Lots of companies have. Most recently, Character AI trained an internal model and raised $100M early last year. They didn't release any benchmarks since the founding team and Noam went to Google.
tonetegeatinst
Pretty sure Anthropic has.
anshulbhide
This reminds me of the boom and bust oil cycle as outlined in The Prize: The Epic Quest for Oil, Money & Power by Daniel Yergin.
swyx
care to summarize key points for the class?
dplgk
It seems appropriate, in this thread, to have ChatGPT provide the summary:
In The Prize: The Epic Quest for Oil, Money & Power, Daniel Yergin explains the boom-and-bust cycle in the oil industry as a recurring pattern driven by shifts in supply and demand. Key elements include:
1. Boom Phase: High oil prices and increased demand encourage significant investment in exploration and production. This leads to a surge in oil output, as companies seek to capitalize on the favorable market.
2. Oversupply: As more oil floods the market, supply eventually exceeds demand, causing prices to fall. This oversupply is exacerbated by the long lead times required for oil development, meaning that new oil from earlier investments continues to come online even as demand weakens.
3. Bust Phase: Falling prices result in lower revenues for oil producers, leading to cuts in exploration, production, and jobs. Smaller or higher-cost producers may go bankrupt, and oil-dependent economies suffer from reduced income. Investment in new production declines during this phase.
4. Correction and Recovery: Eventually, the cutbacks in production lead to reduced supply, which helps stabilize or raise prices as demand catches up. This sets the stage for a new boom phase, and the cycle repeats.
Yergin highlights how this cycle has shaped the global oil industry over time, driven by technological advances, geopolitical events, and market forces, while creating periods of both rapid growth and sharp decline.
DebtDeflation
This isn't just the story of GPUs or Oil, this is the entire story of capitalism going back to the early Industrial Revolution in the 1700s. The economist Hyman Minsky added asset prices and debt financing to it to round out a compelling theory of the business cycle including the extreme bubbles and depressions sometimes seen.
authorfly
Haha.
Cries in sadness that my university lab was unable to buy compute from 2020 onward, when all the interesting research in AI was taking off. Now that AI is finally going into winter, compute will be cheap again.
7734128
I don't feel any winter yet.
thelastparadise
At least not until LLM gains hit a wall. So far every open weight model has far surpassed the previous releases at the same model size.
danpalmer
But closed models are clearly slowing. It seems reasonable to expect that as open weight models reach the closed weight model sizes they’ll see the same slowdown.
physicsguy
Open models like Llama make it pointless for the majority of companies to train from scratch. It was obvious this would happen.
7734128
Inference should always be more significant than training in the end though.
Tepix
There are more options for inference.
bjornsing
True. The hard part is timing it.
kristopolous
This sounds like bad news for the GPU renter farms. Am I reading this right?
swyx
the marketplaces like sfcompute do great, bc so much cheap supply and there's lots of demand. it's the foundation model startups who locked into peak-hype contracts for access that are eating a lot of losses right now... (which perhaps explains why the bigcos are acquiring only the founders and not assuming the liabilities of the oldco...)
sgu999
> which perhaps explains why the bigcos are acquiring only the founders and not assuming the liabilities of the oldco...
Who did?
murtio
Enjoyed the article, and I was ready to try the promoted featherless.ai. I signed up and spent 15 minutes trying to load or chat with Llama 3 models. All attempts failed. Naturally I would ask: if it's so cheap to run GPUs, why would I need to sign up just to try a model?
I am building a bare metal mi300x service provider business.
Anyone offering $2 GPUs is either losing money on DC space/power, or their service is so sketchy under the covers that they do their best to hide it. It is one thing to play around with $2 GPUs and another to run a business. If you're trying to do the latter, you're not considering how you are risking your business on unreliable compute.
AWS really warped people's perception of what it takes to run high-end enterprise GPU infrastructure like this. People got used to the reliability hyperscalers offer. They don't consider what 999999% uptime + 45kW+ rack infrastructure truly costs.
There is absolutely no way anyone is going to be making any money offering $2 H100s unless they stole them and they get free space/power...