derefr
You can really tell, in the comments sections of changes like these, who is speaking from the perspective of having a professional/business vs. a personal use-case.
Individuals tend to be upset; while professionals are happy that individual free-riders will no longer be sucking up undue amounts of compute power, and so QoS on the system will improve for them.
TigeriusKirk
I'd think anyone trying to run a business off a $10 or $49 tier is the one sucking up undue compute power.
derefr
A business wants to pay (at least) what something costs, because 1. they’re making money themselves from the result, and 2. they don’t want the thing they depend on to stop being offered. You’re not a free rider per se if you want a cost-plus pricing model but it’s just not on offer. (In that case, it’s instead the provider’s fault for not capturing your value surplus, similar to how, in scalping, it’s the original seller’s fault for not charging what the market will bear.)
ponow
> similar to how, in scalping, it’s the original seller’s fault for not charging what the market will bear
Absolutely. Unfortunately, people hate scalpers, irrationally if you take a broad view.
version_five
Google's business model is exploiting people's data to sell advertising. If you use a google "product" that's not advertising, you're not paying what it costs, at least in money. In line with what you're saying, I want to compensate a cloud provider for their services in a way that makes me a customer instead of a vector for advertising or data exploitation.
Jugurtha
I've explored Colab users as a target audience for our product, especially given that practically all the posts on the Google Colab subreddit complain about how bad it is. Even those on the Pro or Pro+ plans tend to revert to the free Colab offering because there's no transparency.
What I understood from my interactions is that they complain but will not use a paid product, because even though they're paying anywhere from nothing to $49, the actual resources used are in the $800/month ballpark (notebooks running 23 hours per day, seven days a week, using a GPU).
These are clearly hobbyists. The pros had different problems such as not being able to pay for it from certain countries.
In other words, there are people who need a notebook to run without crashing and are willing to pay for that, and there are others working on toy projects, individual pet projects, or projects with no real stakes who'll complain about it but will not switch, because no other company will really subsidize that usage.
Yes, there are other companies that offer notebooks, but our product was for professionals in the ML field, and there's much more to an ML project than running a notebook (real-time collaborative notebooks, automatic experiment tracking, plugging in compute from any cloud provider, one-click model deployment, object storage usable like a filesystem, live monitoring dashboards for deployed models, and more).
discordance
Have been experimenting lately with GPUs off vast.ai. It has worked well for experiments with Stable Diffusion and is cheap!
Any other suggestions for where to rent cheap GPUs? I've heard about Hetzner (https://www.hetzner.com/sb?search=gpu), but they are 1080s.
frederickgeek8
I tried using the Paperspace Gradient "Growth" plan for Teams. The product was so buggy it was unusable. Their support and engineers were wonderful to talk to, but they admitted that there are a lot of features that just don't work and they don't have the bandwidth to fix them. It seems like an early product and I wouldn't recommend it if you need something dependable, at least not now. I would love to work with them in the future if the stability improves.
etaioinshrdlu
Vast.ai told me that, as far as they know, they are usually the cheapest option, but that sometimes Lambda offers something similar or slightly lower.
sabalaba
Lambda has $1.10/hr A100 instances. That's less than half the price of GCP on demand. https://lambdalabs.com/service/gpu-cloud
mark_l_watson
If I may ask, and if you use the Lambda Labs GPU cloud: how do you like it? I have visited their pricing page about ten times, and their price for a single A100 is very good, but I haven’t talked to any customers. Do you basically get a VPS with a GPU and all the usual software installed, and SSH/Mosh into it?
why_only_15
Yeah, I found Lambda super easy to use, especially compared to GCP etc. It took me about 5 minutes to get a model up and running. They don't support launching machines with Docker images or Kubernetes or anything like that, though (a problem if you want to run a business on it), and recently they have been extremely supply constrained (the only available machine type is 8xA100; no A6000s or V100s).
AdamJacobMuller
Paperspace, Vultr
zhl146
Feel free to check out https://www.runpod.io/ :)
Aeolun
Am I the only one who thinks it’s nice they’re being explicit about how much they’re giving you? I found the original ‘however much we have available and feel like giving you’ plan limit highly unprofessional.
I got an A100 after I subscribed, so it worked out for me, but it’s still annoying not knowing what you’ll get.
cperry
Why thank you!
mark_l_watson
I deeply appreciate Colab. I bought a nice home GPU rig a few years ago, but seldom use it. When I am lightly using Colab I use it for free and when I have more time for personal research the $10/month plan works really well. I can see occasionally paying for the $50/month plan as the need arises in the future.
I am working on an AI book in Python. (I usually write about Lisp languages.) About half the examples will be Colab notebooks and half will be Python examples to be run on laptops.
In any case, I like the soon-to-be-implemented changes; it sounds like a good idea to get credits and see a readout of usage and what you have left.
cperry
Thanks! I think people will much prefer this over the current opaque system.
I read every feedback submission in Colab so if you ever have feedback you'd like addressed, send away.
mardifoufs
Thanks a lot for Colab. I have access to really powerful compute from my job, yet I still find myself using Colab a lot. The only problems I had with it were the very vague FAQ and the spotty resource allocations, but the new FAQ is much better. Thanks again!
cperry
I hope it works better for you, please send feedback if it doesn't!
goodfight
Reeling us in with unlimited and locking it down. Classic
cperry
PM for Colab - I wrote the email.
There's no intention to lock it down, whatever that would mean. We ensure notebooks are totally portable to any other Jupyter install you want to move to.
This change is about laying the groundwork for increased transparency for your paid compute consumption, vs. the current model of kind of hiding that away.
xkapastel
It wasn't unlimited though, there was always a quota. It just wasn't visible.
TorqueFilet
Used Google Colab for the last 8 months; will fully divest from them with this change...
desindol
If you used it in the last 8 months and didn’t get restricted you won’t be now. Reading comprehension and stuff.
TigeriusKirk
We actually don't know. The email doesn't indicate if the available compute will stay the same.
cperry
The change is meant to provide increased visibility in your paid compute consumption.
LuciferSam86
Yeah, I was looking at the Google Colab GitHub page. There were some issues along the lines of "Hey! I got a Pro(+) account, and I keep getting limited, but I don't know how I can check how many resources I still have."
I think this is one of the reasons Google is switching to this model.
geogra4
Yep, similar to what they did with Google photos/gmail
mardifoufs
What does this mean in this context? What's being locked down?
frognumber
I like the transparency, but this doesn't feel like the right way to do it. Computation should be free (or nearly free) if there's idle capacity, paid if Google is near capacity, and expensive/bidding if Google is above capacity.
Flat compute units seem simple, but result in a lot of waste.
skybrian
There might not be servers going idle. Google has plenty of batch jobs they can run in lower-traffic times, both internal and external.
And if it does go idle, it saves energy, which costs money. At scale, compute isn't free.
moralestapia
>Computation should be free (or nearly free) if there's idle capacity
I guess nothing stops you from buying infra and offering it for "free (or nearly free)".
geodel
Agree. I just do not like restaurants which charge full price of burgers to me when there are clearly more patties on grill than customers at that time.
mobiuscog
So you also wouldn't expect to be paid when a company has less work than it has employees during a week?
frognumber
There are stores in my area which sell, at a discount, overstock food products before they expire. It costs stores less than simply wasting the food, and people who aren't picky about what they buy are able to get quality food at a low price.
And to the employee point, overtime (and contractors) typically get paid more to fill gaps when demand exceeds supply. In some cases, when there is reduced demand, employees are offered furloughs. I recall during a recession when I was a child, my Dad spent about a month at home, but was still paid some fraction of his normal salary to stay on payroll.
Adjusting pricing to supply and demand isn't exactly crazy, freeloading, or communism. If Colab were designed to generate a profit, the same type of pricing AWS and GCP use would make sense. Since it seems designed to give Google some position in the ML ecosystem, free makes sense.
As a footnote, I don't use Colab. That's not because of pricing, but because in free offerings like these, I am the product. That's not always a bad deal, but it is for me for the type of work I do. I prefer running ML models locally (I have a pretty fancy GPU), using my employer's in-house cluster, or paying for AWS. I don't discourage others from using Colab (that decision is specific to what I do).
I'm not sure why this discussion is so hostile, and reads so much false subtext into statements which shouldn't be controversial at all.
johndfsgdgdfg
> Computation should be free (or nearly free) if there's idle capacity
HN used to be a place for interesting discussions. Now it's a grievance forum for entitled freeloaders.
knorker
How do you mean? Should GCS storage also be free, unless Google is nearing storage capacity?
frognumber
That's a little bit different. I would assume Google grows GCS over time to meet demand. Most of the demand is static. If Google needs 1PB of storage, they will probably have 1.01PB, and the amount won't go down.
Compute is dynamic. You might be above capacity for Christmas shopping, and below capacity at 4am in the middle of the weekend.
By varying pricing, you can be more efficient. People who can will smooth out that load. If I don't need to run something during peak hours, I might wait until off-peak. Google needs less capacity. Everyone comes out ahead.
For a profit-making product, dynamic pricing makes sense. I suggested free since the primary goal of Colab isn't to make money (but they also don't want to subsidize it too much, so they do need to charge).
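The pricing idea argued for above (free when idle, metered near capacity, auction-like above it) could be sketched roughly as below. All thresholds and rates are made up for illustration; nothing here reflects actual Colab or GCP pricing.

```python
# Hypothetical sketch of load-based pricing: free when the cluster is
# mostly idle, metered near capacity, and sharply more expensive above
# capacity so demand self-limits (approximating a bidding market).

def price_per_hour(utilization: float, base_rate: float = 1.10) -> float:
    """Return an hourly price given cluster utilization (0.0 to 1.0+)."""
    if utilization < 0.5:          # plenty of idle capacity
        return 0.0                 # free (or near-free)
    if utilization < 0.9:          # normal load
        return base_rate
    # Above ~90% utilization: scale the price up steeply.
    return base_rate * (1 + 10 * (utilization - 0.9))

print(price_per_hour(0.3))   # idle: free
print(price_per_hour(0.7))   # normal load: base rate
print(price_per_hour(1.0))   # over capacity: premium
```

This is the same shape as spot pricing, just with an explicit free tier at low utilization.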
knorker
> By varying pricing, you can be more efficient.
AWS and GCP have spot pricing on VMs, so they do have products that do this.
Maybe that's enough. When Colab load goes low, they can turn down a bunch of Colab tasks, and sell the freed capacity as Spot VM / preemptable VMs.
I'm sure there are many big companies out there that essentially have a standing bid for any compute cheaper than $X, and will soak it up as Spot VMs.
Not every GCP product needs to have spot pricing to be efficient. In fact if you silo capacity per product then you'll be less efficient. E.g. someone is willing to pay for spot VMs, but you only have spot Colab available.
Mathnerd314
A computation could use fewer compute units if resources are idle. There isn't enough information to make a judgment.
fibrennan
At Paperspace we've long offered an alternative to Google Colab that includes free CPU, GPU, and (recently released) IPU machines.
Free notebooks can be run for 6 hours at a time.
More info available in docs: https://docs.paperspace.com/gradient/machines/#free-machines...
moconnor
At last! I love Colab but the vague promises around availability and quota made it impossible to recommend for my team to use professionally.
I even tried and failed to get it up and running with a Google Cloud GPU recently, before just switching to Lambda, which worked first time (but has since hit availability issues).
stableskeptic
Question for the Colab team:
The restrictions listed at https://research.google.com/colaboratory/tos_v3.html differ slightly from the limits listed at https://research.google.com/colaboratory/faq.html. Specifically, tos_v3.html does not mention these items from the FAQ:
* using a remote desktop or SSH
* connecting to remote proxies
I can appreciate why those were added; I've read posts and notebooks explaining how you can use ngrok or cloudflare to do those things in violation of the restrictions in the FAQ, and clearly many people aren't using Colab as intended.
Speaking as someone who has been playing around with the Colab free tier with the expectation of moving to a paid service once I know what I'm really doing, I'd like to know if it's likely these restrictions will be eased a bit with the move to a compute credit system.
I'm still learning and haven't had a need to do those things yet but I believe remote ssh access would greatly simplify managing things. The Jupyter interface and integrated Colab debugger are good for experimenting but I'm worried that as I get closer to production I'll need a way to observe and change the state of long-running Colab processes the way I could with ssh, ansible or other existing tooling.
Clearly I can build that myself or use something like Anvil Works https://anvil.works but that's time and effort I'd rather avoid if possible. So I'm hoping that the Colab team will ease the SSH restriction for people like me who want to use it for more traditional ops/monitoring of long running tasks.
Do you anticipate any change or easing of the SSH restriction?
cperry
I do not anticipate changes in the short term, but I am always open to changes in the medium term.
Both of those address angles of abuse that I don't want to discuss in big forums, and they run counter to interactive notebook compute, our top priority.
etaioinshrdlu
Lambda Labs has run out of GPUs to rent lately. I think it’s too many people running SD.
LittlePeter
Barely two weeks after Stable Diffusion release, we use SD as its acronym? That's fast.
roboy
I really like the increase in transparency; I found it somewhat disturbing to pay for what feels like a random amount of stuff. How should I know whether I need Pro or Pro+ if there is no estimate of what either might get me? The update does not seem to change that, though. I would love to see a plotted distribution of how much compute I might expect, or at least the min/average/max run time until disconnect (right now only the max is known).
cperry
I aspire to offer that level of transparency. I am foiled by the facts that (1) GPU prices can change randomly on me, and (2) it's hard to convey pricing to a user without giving them a huge, incomprehensible price sheet.
All is not lost though, I've got a few irons in the fire that should help resolve those points of feedback over the coming year.
In the meantime, you can always just buy a GCP VM and you have all the certainty you want: https://research.google.com/colaboratory/marketplace.html I find most people don't want that because it's a pain that Colab Pro/Pro+ largely abstracts.
visarga
They can benchmark a few architectures (ResNet50, BERT) and tell us how many times we can train a model at a given subscription level.
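A back-of-envelope sketch of that suggestion: convert a tier's compute units into benchmark training runs. Every number below is hypothetical (the unit-to-GPU-hour rate and the per-run GPU-hours are invented for illustration; Google has published neither).

```python
# Hypothetical figures only: Google has not published a compute-unit
# conversion rate or official benchmark times.
HYPOTHETICAL_UNITS_PER_GPU_HOUR = 5.0
HYPOTHETICAL_BENCHMARKS = {
    "ResNet50 (ImageNet, 90 epochs)": 20.0,  # assumed GPU-hours per run
    "BERT-base fine-tune (GLUE)": 2.0,       # assumed GPU-hours per run
}

def runs_per_tier(monthly_units: float) -> dict:
    """Express a monthly compute-unit grant as benchmark training runs."""
    gpu_hours = monthly_units / HYPOTHETICAL_UNITS_PER_GPU_HOUR
    return {name: gpu_hours / hours
            for name, hours in HYPOTHETICAL_BENCHMARKS.items()}

# e.g. a tier hypothetically granting 100 units/month:
for name, runs in runs_per_tier(100).items():
    print(f"{name}: ~{runs:.1f} runs")
```

The point is the presentation, not the numbers: "N BERT fine-tunes per month" is far easier for users to reason about than an abstract unit count.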
minimaxir
From the Google Colab product lead:
> This has been planned for months, it's laying the groundwork to give you more transparency in your compute consumption, which is hidden from users today.
https://twitter.com/thechrisperry/status/1564806305893584896
Just got this in my inbox. They haven't updated the FAQs page yet, as far as I can tell.
> Hi-
We’re improving the Terms of Service that apply to your Colab Pro or Colab Pro+ subscription, making them easier for you to understand and improving the ways you can use Colab. The changes will take effect on September 29.
The [updated Terms of Service](https://research.google.com/colaboratory/tos_v3.html) include changes that will allow you to have more control over how and when you use Colab, allowing us to offer new services and features that will enhance your experience using Colab.
We will increase transparency by granting paid subscribers compute quota via compute units which will be visible in your Colab notebooks, allowing you to understand how much compute quota you have left. These compute units are granted monthly and will expire after 3 months. You will be entitled to a certain number of compute units based on your subscription level and will have the ability to purchase more compute units as needed.
Additionally, we will allow paid subscribers to exhaust their compute quota at a much higher rate. This will result in paid subscribers having more flexibility in accessing resources. Read more about these changes at our [FAQ](https://research.google.com/colaboratory/faq.html#compute-units).
If you would like to cancel your Colab Pro or Pro+ subscription, you can do that by going to pay.google.com and clicking Subscriptions and services. If you have any trouble canceling, you can email colab-billing@google.com for assistance. Please include an order number from one of your receipt emails if you email us for assistance.
-The Colab team
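The credit scheme the email describes (units granted monthly, expiring after 3 months, with top-ups available) could be modeled roughly as below. The oldest-grant-first spending order is an assumption; the email does not specify how units are consumed.

```python
# Minimal sketch of an expiring compute-credit wallet. The 3-month
# expiry comes from the email; FIFO consumption is an assumption.
from collections import deque

class ComputeWallet:
    EXPIRY_MONTHS = 3

    def __init__(self):
        # Grants stored oldest-first as (expiry_month, units) pairs.
        self.grants = deque()

    def grant(self, month: int, units: float):
        self.grants.append((month + self.EXPIRY_MONTHS, units))

    def balance(self, month: int) -> float:
        return sum(u for exp, u in self.grants if exp > month)

    def spend(self, month: int, units: float):
        # Drop expired grants, then consume the oldest ones first.
        while self.grants and self.grants[0][0] <= month:
            self.grants.popleft()
        remaining = units
        while remaining > 0 and self.grants:
            exp, u = self.grants[0]
            take = min(u, remaining)
            remaining -= take
            if take == u:
                self.grants.popleft()
            else:
                self.grants[0] = (exp, u - take)
        if remaining > 0:
            raise ValueError("insufficient compute units")

w = ComputeWallet()
w.grant(month=0, units=100)   # expires at month 3
w.grant(month=1, units=100)   # expires at month 4
w.spend(month=3, units=50)    # month-0 grant has expired by now
print(w.balance(month=3))     # 50 units remain from the month-1 grant
```

Under this model, unused units silently vanish after three months, which is presumably why several commenters above care about seeing a visible balance readout.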