Brian Lovin
/
Hacker News
Daily Digest email


milon

I'm glad this is out; I'm going to deploy this locally and learn as much about it as possible. Oxide is pretty much the company I dream to work at, both for the tech stack, plus the people working there. Thank you, Oxide team!

refulgentis

Can you get me excited? I spent 20 seconds browsing the homepage and walked away with "so the idea is vertical integration for on-premise server purchases? On custom OS? Why? Why would people pay a premium?"

But immediately got myself to "what does a server OS do anyway, doesn't it just launch VMs? You don't need Linux, just the ability to launch Linux VMs"

Tell me more? :)

mustache_kimono

> so the idea is vertical integration for on-premise server purchases? On custom OS? Why? Why would people pay a premium?

As I understand it, re: vertical integration, the term is actually "hyperconverged". Here, that means it's designed at the level of the rack. Like -- there aren't per-compute-unit redundant power supplies; there is one DC bus bar power conversion for the rack. There is an integrated switch designed by Oxide. There is one company to blame when anything inside the box isn't working.

In addition, the pitch is they're using open source Rust-based firmware for many of the core components (the base board management controller/service processor, and root of trust), and the box presents a cloud like API to provision.

If the problem is "I'm running lots of VMs in the cloud, I'm used to the cloud, I like the way the cloud works, but I need an on-prem cloud," this makes that much easier to achieve than the DIY alternatives (OMG, we need a team of people to build us a cloud...).

steveklabnik

The terminology in this space is confusing, but "hyperconverged" isn't really what we're doing. I wrote about the differences here: https://news.ycombinator.com/item?id=30688865

(That said I think other than saying "hyperconverged" your broad points are correct.)

SteveNuts

It seems like the folks on HN tend to think the world runs on AWS (I'm not trying to say they don't have a huge market share), but many huge enterprises still run their own datacenters and buy ungodly amounts of hardware.

The products that are on the market for an AWS-like experience on-prem are still fairly horrible. A lot of times the solutions are collaborations between vendors, which makes support a huge pain (finger pointing between companies).

Or, a particular vendor might only have compute and storage but no offering for SDN, and vice versa. This sucks because then you have two bespoke things to manage and have to hope they work together correctly.

These companies want a full AWS experience in their datacenter, and so far this looks to be the most promising option that doesn't require dedicating huge amounts of resources to something like OpenStack.

refulgentis

The "(finger pointing between companies)" took me from confusion to 100% understanding; I was at Google until recently. It was astonishing to me that it was universally acceptable to finger-point if the problem was outside your immediate group of ~80 people.*

Took me from "why would people go with this over Dell?" to "holy shit, I'm expecting Dell to do software and make nvidia/red hat/etc/etc etc/etc etc etc help out. lol!"

* also, how destructive it is. Never, ever, ever let people talk shit about other people. There's a difference between (A) "ugh, honestly, it seems like they're focused on release 11.0 this year" and (B) "ughh they're useless, idk what they're thinking??? stupid product anyway", and for whatever reason, B made you normal and A made you a tryhard pedant.

lijok

Wouldn't a "full AWS experience in their datacenter" be AWS Outpost?

adfm

With DHH and others promoting a post-SaaS approach (once.com, etc.) we might see hardware refresh as cost-cutting. Astronomical compute bills and lack of granularity bring all things cloudy into sharp focus.

_zoltan_

OpenStack is pretty smooth sailing these days, and I bet it would be much cheaper to just hire 3 FTEs for your OpenStack install than to buy an Oxide rack.

milon

Having a solid on-prem rack product to me is a great thing. I like IaaS services a lot, don't get me wrong, and I think they're the right pick for a bunch of cases, but on-prem servers also have their "place in the sun", so to speak :) I could present any number of justifications that I don't think I'm qualified enough to defend, but the gist is that at the bare minimum, I'm glad the option exists.

As to why I'm personally excited: I enjoy the amount of control having such an on-prem rack would afford me, and there surely could be a great amount of cost-savings and energy-savings in many scenarios. Sometimes, you just need a rack to deploy services for your local business. I like the prospect of decentralizing infrastructure, applying all the things we've learned with IaaSes.

bionsystem

In the last 10 years, across the 6 different clients/employers I worked for, there was pretty much no way to run production in the cloud. Only 1 of them had any stuff running in the (GCP) cloud at all.

Of all of the 6 infrastructures I've seen, only 1 of them was half decent, with 6 dedicated teams around the datacenter working closely together (by dedicated I mean that nothing is required of them concerning the core software product that the company develops): network, Unix/virtualization, Windows, storage, PC, and datacenter. That's 30+ people just to run a couple of big datacenters and a few more server rooms. The service was actually quite good, with VMs/zones delivered under an hour and most tech issues solved in half a day. The other infrastructures were either bigger or smaller, with more or fewer people, and were all terrible, sometimes needing weeks of email exchanges with Excel spreadsheets attached to get a single VM.

AWS was the dream everywhere I went, for everybody. Oxide may be coming out with a product that will solve a LOT of issues. SmartOS/IllumOS has all the tech to be self-sufficient (virtualization, storage, SDN...); add support for networking and storage hardware and you get a complete product that a handful of people can run (well, you still need a Windows team in most cases, but fine).

lijok

> Why would people pay a premium?

I would pay a premium just to not have to deal with HPE, DELL, etc

_zoltan_

Dell's been nothing but fantastic for us (compute, not storage.)

adamnemecek

One company making both HW and SW generally leads to really good, integrated experiences. See e.g. Apple.

0cf8612b2e1e

I am really hoping the broader industry takes note. By owning the platform, the Oxide team was able to dump legacy stuff that no longer makes sense.

throwup238

The best elevator pitch I've heard is "AWS APIs for on-prem datacenters". They make turn-key managed racks that behave just like a commercial cloud would with all the APIs for VM, storage, and network provisioning and integration you'd expect from AWS, except made to deploy in your company's datacenter under your control.
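To make that pitch concrete, here's a hypothetical sketch of the workflow: provisioning a VM by POSTing to an HTTP API exposed by the rack's control plane, exactly as you would against a public cloud. The endpoint path, field names, hostname, and token below are all illustrative placeholders, not Oxide's actual API schema.

```python
# Hypothetical sketch of "cloud API, but it's your rack". Everything here
# (URL, token, request fields) is made up for illustration.
import json
import urllib.request

RACK_API = "https://rack.example.internal/v1"  # hypothetical on-prem control plane
TOKEN = "dummy-token"                          # placeholder credential

def build_create_instance(name: str, vcpus: int, memory_gib: int) -> urllib.request.Request:
    """Build a VM-provisioning request, the same shape of call you'd
    make against a commercial cloud provider's API."""
    body = json.dumps({"name": name, "vcpus": vcpus, "memory_gib": memory_gib})
    return urllib.request.Request(
        f"{RACK_API}/instances",
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_instance("web-01", vcpus=4, memory_gib=16)
print(req.get_method(), req.full_url)  # POST https://rack.example.internal/v1/instances
```

The point of the pitch is that the request goes to hardware in your own datacenter rather than to a provider's endpoint, while the developer-facing workflow stays the same.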

magnawave

I guess the wildcard is price.

AWS's pricing model kinda works at their OMG eyewatering scale - i.e., all the custom hardware they design is highly cost-optimized, but just doing custom hardware has a notable cost. This is easily covered by their scale, making for their famous margins. [During their low-scale times, they did use a good bit of HP/Dell, etc.]

Oxide seems to be no different (super custom hardware), the only major difference being the "in your datacenter" part. Since you own the cost of your datacenter, Oxide has to come in a lot cheaper to even compete with AWS, but how do you do that with low-volume [and, from the look of it, not cost-optimized but instead fairly tank-like] bespoke hardware? Feels like the pricing / customer fundamentals are going to be pretty rough here outside perhaps a few verticals.

kortilla

That’s the elevator pitch for OpenStack.

capitol_

That just sounds like a bunch of APIs on top of Linux.

throwawaaarrgh

It's a mainframe. If you can't get excited for mainframes it'll be hard to be excited about this.

IllumOS is the OS/360 to Oxide's System/360. (It won't get that popular but it's a fair enough comparison for illustrative purposes)

panick21_

Except that it uses the same standard CPUs as commodity machines, doesn't have much of the extra reliability stuff, and can go from vertical to horizontal scaling. The OS is open-source Unix. And yeah, it's not like a mainframe at all, really.

linksnapzz

It's a mainframe, for people who do not, actually, know what a mainframe is or does.


EvanAnderson

I'm excited to see how this compares to SmartOS. I'm pretty heavily invested in SmartOS in my personal infrastructure but its future, post-Joyent acquisition, has been worrying me.

I really wish I worked for an org big enough to use Oxide's gear. Not having to futz around with the bogus IBM PC AT-type compatibility edifice, janky BMCs and iDRACs, hardware RAID controllers, etc., would be so unbelievably nice.

nwilkens

SmartOS has been actively developed since the acquisition from Joyent[1] in April 2022.

We've released a new version every two weeks post acquisition, and are continuing to develop and invest.

We also hold office hours events roughly every two weeks on Discord[2], and would love for you to stop by and ask any questions, or just listen along!

[1]: https://www.tritondatacenter.com/blog/a-new-chapter-begins-f... [2]: https://discord.gg/v4NwA3Hqay

rjzzleep

IllumOS needs to attract new developers. To do that, the platform build needs to become a lot more straightforward. It's a pretty huge endeavour, in my opinion. I'd be happy to help out in that regard, but in the past Joyent has not been very open to outside support.

_rs

I had been using SmartOS for a long time but finally had to bite the bullet and give up. I ended up deciding on Proxmox on a ZFS root and am quite happy with it.

icybox

I've been running SmartOS at least since 2015, when I co-located my server. There have been times where I felt like giving up, but people like danmcd, jperkin and others always stepped in and fixed what needed to be fixed for LX to be usable and working. (Keeping Java updated and running is a hard, uphill battle. Thanks!)

I always ran a mixture of OS and LX zones, and bcantrill's t-shirt with "Save the whales, kill your VM" made sense. I had used zones in Solaris 10 even before, and they just click with me. FreeBSD's jails are nice, but far from it. And Linux's cgroups are a joke. And using KVM/VMs for security containerization is just insane.

At my dayjob, I've implemented multiple Proxmox clusters, because we're a Linux shop and there's no way to "sell" SmartOS or Triton DC to die-hard Debian colleagues, but I've managed to sell them ZFS.

With personal stuff, I like my systems to take care of themselves without constant babysitting, and SmartOS or OpenBSD provide just that. I don't dislike Windows; I love UNIX. You could really feel those extra 20 years UNIX had compared to Linux. I migrated all my stuff to Proxmox for like 2 months, and then went back to SmartOS, because there was something missing... probably elegance, sanity, simplicity, or even something you'd call "hack value".

geek_at

The nice thing about the Proxmox + ZFS setup is that it works, and is even recommended, without hardware RAID controllers. Fewer headaches either way.

I recently wrote a guide [1] on how to use Proxmox with ZFS over iSCSI so you can use the snapshot features of a SAN.

[1] https://blog.haschek.at/2023/zfs-over-iscsi-in-proxmox.html
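For a sense of what that looks like once wired up: a ZFS-over-iSCSI storage entry in Proxmox's /etc/pve/storage.cfg is roughly of this shape (key names recalled from the pve-storage plugin, and all values pure placeholders; defer to the guide above and the Proxmox storage docs for your iSCSI provider):

```
zfs: san-zfs
        pool tank
        portal 192.168.1.10
        target iqn.2003-01.org.example:storage.san1
        iscsiprovider LIO
        lio_tpg tpg1
        blocksize 8k
        content images
        sparse 1
```

With this in place, Proxmox carves each VM disk out as a ZVOL on the SAN, which is what makes SAN-side snapshots usable from the Proxmox UI.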

rjzzleep

I feel the same. I used a SmartOS distro called Danube Cloud for a long time and am looking to move; I looked at Harvester[1] and OpenNebula, but with everything I know about Kubernetes (and Longhorn) I'm reluctant to use something so heavily based on Kubernetes.

At its peak I reached out multiple times to Joyent to fix their EFI support for virtualization. The Danube team had similar experiences with them, working on live migrations for VMs, and a few months back I did a rebase of the platform image to a more recent illumos stack.

One of the fundamental issues with Illumos is that they don't seem to understand that they need to fix the horrendous platform build to get the community support needed to keep up with the pace of development of other OSes. The platform build is a huge nasty mess of custom shell scripts and file-based status snapshots, and it includes the entire userspace in the kernel build. Basically, if your OpenSSL version is out of whack, the entire thing will fail. Not because it has to, but because it was never adapted to the modern needs of someone just wanting to hack on a kernel. It's fixable, but I don't see any desire to fix it, and even if that desire eventually shows up it might be too little, too late.

[1] https://harvesterhci.io/

jjav

> Oxide is pretty much the company I dream to work at, both for the tech stack, plus the people working there.

Same for me. Oxide is the only company I know of that I'd really love to work for. Similar (I think, observing from the outside) to Sun. That's what I dream about.

Unfortunately their pay structure is such that I can't afford it, with a family to support. Maybe when the kid is out of university, if I don't need much income anymore, I can fulfill the dream.

DominoTree

>> Oxide is pretty much the company I dream to work at, both for the tech stack, plus the people working there.

Thought I was the only one :P

yjftsjthsd-h

I mean, I phrase it as "my dream job is systems integration at Sun" (but Oxide is the living equivalent)

rbanffy

Sun had some lovely desktop hardware. They also had SPARC.

I really miss Sun.

codethief

Can anyone ELI5 what Oxide's offer is? I've looked at their website and still got no clue. Is it hardware + software I can purchase and use on-premise? Is it a PaaS / yet another cloud provider?

steveklabnik

I believe you're being downvoted because there is already a big thread about this here, though I think that's a bit unfair to you. I haven't posted in that thread yet because I wanted to let others say what is meaningful about the product to them, but this seems like a good place to put my reply. Regardless of all that: it is hardware + software you can purchase and use on-premise, that's correct.

The differentiator from virtually all existing on-prem cloud products is that we are a single vendor who has designed the hardware and software (which is as open source as we can possibly make it, by the way, hence announcements like this) to work well together. Most products combine various other products from various vendors, and are effectively selling you integration. We believe that that leads to all kinds of problems that our product solves.

Another factor here is that we only have two SKUs: a half rack and a full rack. You don't buy Oxide 1U at a time, you buy it a rack at a time. By designing the entire rack as a cohesive unit, we can do a lot of things that you simply cannot do in the 1U form factor. There is a running joke that we talk about our fans all the time, and it's true. Because our sleds have a larger form factor than a traditional 1U, we can use larger fans. This means we can run them at a lower RPM, which means power savings. That's the deliberate design choice. But we also have gained accidental benefits from doing things like this: lower RPM also means that our servers are way quieter than others. That's pretty neat. Some early prospective customers literally asked if the thing is on when it was demo'd to them, because it's so quiet. Is that a reason to buy a server? Not necessarily, but it's just a fun example of some of the things that end up happening when you re-think a product as a whole, rather than as an integration exercise.
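The fan point can be made concrete with the standard fan affinity laws: for geometrically similar fans, airflow scales roughly with D³·RPM and shaft power with D⁵·RPM³, so at equal airflow a bigger fan draws power in proportion to (D_small/D_big)⁴. A back-of-the-envelope sketch (all numbers made up for illustration, not Oxide's actual fan specs):

```python
# Fan affinity laws, rough sketch: airflow ~ D^3 * RPM, power ~ D^5 * RPM^3.
# Illustrative numbers only; nothing here reflects Oxide's real hardware.

def matched_rpm(rpm_small: float, d_small: float, d_big: float) -> float:
    """RPM the larger fan needs to move the same airflow as the small one
    (airflow scales with D^3 * RPM)."""
    return rpm_small * (d_small / d_big) ** 3

def power_ratio(d_small: float, d_big: float) -> float:
    """Power draw of the larger fan relative to the smaller at equal
    airflow: D^5 * RPM^3 with the matched RPM reduces to
    (d_small / d_big) ** 4."""
    return (d_small / d_big) ** 4

rpm_80mm = 6000.0  # a screaming 80 mm fan, typical of a 1U server
print(f"120 mm fan needs ~{matched_rpm(rpm_80mm, 80, 120):.0f} RPM")  # ~1778
print(f"and draws ~{power_ratio(80, 120):.0%} of the power")          # ~20%
```

Fan noise also falls steeply with RPM, which is consistent with the "is it even on?" anecdote above.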

codethief

Thanks so much for elaborating!

chrishare

On-prem, fully-integrated compute and storage solution with cloud-like APIs to provision resources, all with a commitment to open source.

haolez

Do you know if they support GPUs or whatever is needed to host LLM models?

steveklabnik

The current product does not have any GPUs in it. https://news.ycombinator.com/item?id=39183072

mkoubaa

Mainframe 2.0

danpalmer

I don't think this is accurate in any way that matters. It's a mainframe inasmuch as you buy a rack and spec it out. It's not a mainframe in that the performance is typical server performance rather than the mainframe profile, which is very different and requires different considerations, and the compute model is typical server compute rather than the mainframe compute model, which (compatibility layers aside) is a radically different environment to build software for.

sneak

I know they’re ex-Sun, but is there any real technical benefit for choosing not-Linux (for their business value prop)?

I know of the technical benefits of illumos over linux, but does that actually matter to the customers who are buying these? Aren’t they opening a whole can of worms for ideology/tradition that won’t sell any more computers?

As someone who runs Linux container workloads, the fact that this is fundamentally not-Linux (yes I know it runs Linux binaries unmodified) would be a reason against buying it, not for.

steveklabnik

> does that actually matter to the customers who are buying these?

It's not like we specifically say "oh btw there's illumos inside and that's why you should buy the rack." It's not a customer-facing detail of the product. I'm sure most will never even know that this is the case.

What customers do care about is that the rack is efficient, reliable, suits their needs, etc. Choosing illumos instead of Linux here is a choice made to help effectively deliver on that value. This does not mean that you couldn't build a similar product on top of Linux inherently, by the way, just that we decided illumos was more fit for purpose.

This decision was made with the team, in the form of an RFD[1]. It's #26, though it is not currently public. The two choices that were seriously considered were KVM on Linux, and bhyve on illumos. It is pretty long. In the end, a path must be chosen, and we chose our path. I do not work on this part of the product, but I haven't seen any reason to believe it has been a hindrance, and probably is actually the right call.

> the fact that this is fundamentally not-Linux (yes I know it runs Linux binaries unmodified) would be a reason against buying it, not for.

I am curious why, if you feel like elaborating. EDIT: oh just saw your comment down here: https://news.ycombinator.com/item?id=39180814

1: https://rfd.shared.oxide.computer/

wmf

The Linux vs. Illumos decision seems to be downstream of a more fundamental decision to make VMs the narrow waist of the Oxide system. That's what I'm curious about.

amluto

Especially since Oxide has a big fancy firmware stack. I would expect this stack to be able to do an excellent job of securely allocating bare-metal (i.e. VMX root on x86 or EL2 if Oxide ever goes ARM) resources.

This would allow workloads on Oxide to run their own VMs, to safely use PCIe devices without dealing with interrupt redirection, etc.

throwawaaarrgh

A team should always pick the tools they are most familiar with. They will always have better results with those than with something they understand less. With this in mind, using their own stack is a perfectly adequate choice. Factors outside their team will determine whether that works out in the long term.

wmf

A handful of the team are more familiar with Illumos and the next hundred people they hire after that will be more familiar with Linux.

steveklabnik

I do not personally agree with this. I do think that familiarity is a factor to consider, but would not give it this degree of importance.

It also was not discussed as a factor in the RFD.

apache8080

Is there any chance Oxide is going to make more of these RFDs public? I think it would be a useful artifact to see why a company would choose to run a non-Linux OS. I also think there are other Oxide RFDs that would have a similar benefit, e.g. why Oxide decided to build dropshot instead of using one of the existing Rust REST server crates?

steveklabnik

Yes, for sure. We agree there's tons of value there; there's just so much to do that it's easy to let things fall through the cracks. This thread has given some renewed energy to work on releasing some of them, so no promises, but we'll see :)

networked

If you publish the RFD, please submit it to HN. As a former (mostly former) FreeBSD user interested in bhyve, I would like to read the case for bhyve on illumos.

binjip

It would be great if that RFD became public someday, if of course that's possible, especially since it's a long read.

bcantrill

Keep in mind that Helios is really just an implementation detail of the rack; like Hubris[0], it's not something visible to the user or to applications. (The user of the rack provisions VMs.)

As for why an illumos derivative and not something else, we expanded on this a bit in our Q&A when we shipped our first rack[1] -- and we will expand on it again in the (recorded) discussion that we will have later today.[2]

[0] https://hubris.oxide.computer/

[1] https://www.youtube.com/watch?v=5P5Mk_IggE0&t=2556s

[2] https://mastodon.social/@bcantrill/111840269356297809

kaliszad

Perhaps you could talk a bit about the distributed storage based on Crucible with ZFS as the backing storage tonight. I would really love to hear some of the details and challenges there.

bcantrill

Yes! Crucible[0] is on our list of upcoming episodes. We can touch on it tonight, but it's really deserving of its own deep dive!

[0] https://github.com/oxidecomputer/crucible


StillBored

Linux is a nightmare in the embedded/appliance space because one ends up just having platform engineers who spend their day fixing problems with the latest kernels, drivers, core libraries, etc, that the actual application depends on.

Or one goes the route of 99% of the IoT/etc vendors, and never update the base OS and pray that there aren't any active exploits targeting it.

This is why a lot of medium-sized companies cried about CentOS, which had allowed them to largely stick to a fairly stable platform that was getting security updates without having to actually pay for/run a full-blown RHEL/etc. install. Every ten years or so they had to revisit all the dependencies, but that is a far easier problem than dealing with a one- or two-year update cycle, which is too short when the qualification timeframe for some of these systems is 6+ months long.

So, this is almost exclusively a Linux problem; any of the *BSD/etc. alternatives give you almost all of what Linux provides without this constant breakage.

bcantrill

This is a really, really good point -- and is a result of the model of Linux being only a kernel (and not system libraries, commands, etc.). It means that any real use of Linux is not merely signing up for kernel maintenance (which itself can be arduous) but also must make decisions around every other aspect of the system (each with its own communities, release management, etc.). This act is the act of creating a distribution -- and it's a huge burden to take on. Both illumos and the BSD derivatives make this significantly easier by simply including much more of the system within their scope: they are not merely kernels, but also system libraries and commands.

This weighed heavily in our own calculus, so I'm glad you brought it up!

trhway

>including much more of the system within their scope: they are not merely kernels, but also system libraries and commands.

Given the limited resources of the dev team, that may lead to limited support for the system outside a narrow set of officially supported/certified hardware, with that support falling behind on modern hardware, as happened with Sun, and, as a result, vendor lock-in to overpriced and low-performing hardware.

There is a reason that back then, among Solaris devs, there was a joke about embedding the Linux kernel as a universal driver for the Solaris kernel, in order to get reasonable support for the surrounding hardware.

pjmlp

Interesting that you bring up the embedded/appliance space, as I have noticed there are plenty of FOSS alternatives coming up, the key features being that they are not Linux-based and do not use GPL-derived licenses.

FreeRTOS, Nuttx, Zephyr, mbed, Azure RTOS,...

GrumpySloth

CentOS wasn’t used in embedded systems.

dralley

Sure it was. So is RHEL.

Embedded isn't limited to devices equally or less powerful/expensive than a Raspberry Pi.

sarlalian

Arista EOS is definitely CentOS Linux release 7.9.2009 (AltArch) based.

mlindner

Even Windows was and is used substantially in embedded systems.

skullone

It seems healthy to have options; it's almost like the universe is healing a bit after Oracle bought Sun. I can't imagine better hands bringing the Oxide system together than that team. As an engineer who works entirely with Linux these days, I pine for another strong Unix in the mix to run high-value workloads on. Comparing Open vSwitch on Linux to, say, the Crossbow SDN facility on Solaris, I'd take Crossbow any day. Nothing "wrong" with Linux, but it is sorely lacking in "master plan" levels of cohesion, with all the tooling taking its own path, often bringing complexity that then requires yet more complicated tooling on top to abstract it away.

pjmlp

Their customers run virtualised OS on top of this.

This is no different from Azure Host OS, Bottlerocket, Flatcar or whatever.

This matters to them: knowing the whole stack (some of the kernel code is still theirs from the Sun days) and making it available matters to the customers that want source-code access for security assessment reasons.

greggyb

If you're running in one of the big 3 cloud providers, the bottom-level hypervisors are not-linux. This is equivalent. Are you anti-AWS or anti-Azure for the same reason?

This is the substrate upon which you will run any virtualized infrastructure.

qmarchi

Small note, that's not true for Google Cloud, which runs on top of Linux, though modified.

Disclaimer: Former Googler, Cloud Support

refulgentis

Another Xoogler here: any idea what they mean by it's not Linux at the bottom for other providers? Like, surely it's _some_ common OS? Either my binaries wouldn't run or AWS is reimplementing Linux so they can, which seems odd.

Or are they just saying that the VM my binary runs on might be some predictable Linux version, but the underlying thing launching the VM could be anything?

bewaretheirs

As I understand it, there's linux running on the Google Cloud hardware but the virtualized networking and storage stacks in Google Cloud are google proprietary and largely bypass linux -- in the case of networking see the "Snap: a Microkernel Approach to Host Networking" paper.

In contrast, it appears that Oxide is committing to open-source the equivalent pieces of their virtualization platform.

tptacek

I don't know about EC2, but Lambda and Fargate are presumably Firecracker, which is Linux KVM.

zokier

AWS "Nitro" hypervisor which powers EC2 is their (very customized) KVM.

https://docs.aws.amazon.com/whitepapers/latest/security-desi...

wmf

I suspect a lot of people would (irrationally) freak out if they saw how the public cloud works because it's so different from "best practices". Oxide would probably trigger people less if they never mentioned Illumos but that's not really an option when it's open source.

shusaaafuejdn

As far as performance and feature set go, probably not anymore (I would have answered differently 10 years ago, and if I am wrong today I would love to be educated about it).

However, if we are considering code quality, which I consider important if you are actually going to be maintaining it yourself (as Oxide will have to, since they need customizations), then most of the proprietary Unix sources are just superior, imo. That is, they have better organization, more consistency in standards, etc. The BSDs are slightly better in this regard as well; it really isn't a proprietary-vs-open-source issue. It's more about the insane size of the Linux kernel project making strict standards enforcement difficult, if not impossible, the further you get from the very core system components.

Regardless of them being ex-Sun (and I am not ex-Sun), if I needed a custom OS for a product I was working on, Linux would be close to the last Unix-based OS source tree I would try to do it with, and only after all other options failed for whatever reason. And that's not even taking into account the licensing, which is a whole other can of worms.

spamizbad

Seems strange to me too, but it sounds like the end users basically never interact with this - it's just firmware humming along in the background. As long as it's open source and reasonably well documented, it's already light-years ahead of what else is out there.

criddell

I’m unfamiliar with illumos so I went to their webpage and the very first thing it says is:

> illumos is a Unix operating system

Is illumos an actual Unix (like macOS) or a Unix-like OS (like GNU/Linux)?

steveklabnik

Actual Unix. Wikipedia is pretty good: https://en.wikipedia.org/wiki/Illumos

> It is based on OpenSolaris, which was based on System V Release 4 (SVR4) and the Berkeley Software Distribution (BSD). Illumos comprises a kernel, device drivers, system libraries, and utility software for system administration. This core is now the base for many different open-sourced Illumos distributions, in a similar way in which the Linux kernel is used in different Linux distributions.

Thoreandan

Nobody has paid to have it pass the Open Group's UNIX branding certification tests

https://www.opengroup.org/openbrand/register/

so it can't use the UNIX™ trademark.

But it's got the AT&T Unix kernel & userland sources contained in it.

PDP-11 Unix System III: https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/ut...

IllumOS: https://github.com/illumos/illumos-gate/blob/b8169dedfa435c0...

msla

Legally, NetBSD isn't actually Unix. The brand doesn't mean what people seem to think it means.

yjftsjthsd-h

Right, "unix" roughly means

- Derived from Bell Labs unix source

- Legally allowed to use the UNIX trademark (AKA certified Unix)

- A unix-shaped OS (similar to but not 100% the same as POSIX compliance)

and those things are basically independent. Most GNU/Linux distros are unix-likes but are neither derived from the original unix code nor certified, though there have been 1-2 that did get certified. The BSDs are (now quite distantly) derived from unix source but not certified (although e.g. UnixWare is, IIRC). Solaris was all 3, but OpenSolaris and now illumos, while obviously unix-like and still based on the original code, are not certified UNIX™.

(Take all this with a grain of salt; I'm typing this all from memory and IANAL)

GrilledChips

This isn't NetBSD. NetBSD broke off loooooooong after the release of BSD that Sun used to build this OS.

BirAdam

It was an open source branch of Solaris that Ian Murdock worked on while he was at Sun under the name Project Indiana. It descends from UNIX SVR4.

chucky_z

Actual Unix. I believe it is in the Solaris family.


busterarm

Not that I'm not rooting for Oxide, but their product is still so niche and early-stage that I can't imagine many actual businesses buying their stuff for a long time. They only just shipped their first rack to their first customer at the end of last summer, and it's Idaho National Laboratory. State research institutions are basically the only entities positioned to gamble on this right now.

steveklabnik

Just a small note, but from when we announced this back in October, two customers were mentioned: https://oxide.computer/blog/oxide-unveils-the-worlds-first-c...

> Oxide customers include the Idaho National Laboratory as well as a global financial services organization. Additional installments at Fortune 1000 enterprises will be completed in the coming months.

lijok

This describes every single product in existence in its early days. If you're planning to launch any other way, you've doomed the company before you even launched. A lucky few survive in spite of it, and that's what contributes to the 9-out-of-10-startups-fail statistic.

Laser-focus on the first set of customers that will help you cross the chasm. Only then go mass market.

newsclues

I hope they sooner or later release a smaller, cheaper homelab product for people to learn on, or for startups; that would lead to future rack sales or hires.

steveklabnik

This is a common request and we absolutely understand the desire, but I suspect such a thing, if ever, will be a long time off. Given that the product is designed as an entire rack, doing something like this would effectively be a different product for a different vertical, and we have to focus on our current business. Honestly it's kind of frustrating not being able to reciprocate the enthusiasm back in more than just words, but it is what it is.

amluto

For what it’s worth, there’s a somewhat common view at least in the Linux community that it’s important for hardware vendors to make their tech stack targetable from the office or home. This isn’t to be polite or to make money — it’s to foster adoption among developers, which drives sales.

Some examples:

x86 owned the desktop, workstation and laptop world for a long time. So everyone targeted x86, which made x86 the default in the datacenter. It was hard for ARM to break in and it mostly happened when AWS did it by fiat. If ARM had made some loss-leader actually useful laptops and workstations available, it might have happened sooner.

But x86 largely didn’t deploy AVX-512 in client machines, so people who wrote libraries only used it for fun or benchmarking, so it wasn’t widely used, and most users flubbed it anyway. (And might have gotten it right if they had the hardware on their desk.)

People target Nvidia datacenter GPUs. But people have targeted them for a long time, because they have them in their gaming machines too.

Xilinx used to push free academic gear quite hard, because that was a big lead into people learning how to use their gear.

So, if I were giving Oxide straightforward sales advice, absolutely don’t get distracted with small systems. But maybe, if Oxide thought of it as lead generation, Oxide should do it anyway. If I could buy something small enough to be affordable but big enough to be useful [0], I might get one. And I’d target it with my own stuff, and fix bugs, and evangelize it at little cost to Oxide.

[0] For me, maybe 100-150TB of spinning rust (or cheap NVMe or the ability to attach a JBOD), plus anywhere from 4-64 cores, in a format that works on 120V and fits in, say, 16U or less, at a credible price point, would be quite likely to net Oxide a sale. (Just one sale but still!) It could be sold as a developer thing, and there would be absolutely no expectation that it would perform like the real thing. If I found it awesome, I might buy a couple more. But I would also use it and make things work on it and talk about it, and if a whole bunch of people did this, Oxide might get a bunch of real sales.

(Also, I get the idea behind two SKUs, but can buyers at least configure storage and compute separately? Different workloads need radically different ratios.)

newsclues

I appreciate the response, I totally understand and don’t expect it to materialize soon, but am still hopeful that someday it will be a possibility.

whalesalad

can't wait to find liquidated oxide gear on ebay in 2035. all my current homelab gear is "ancient" enterprise gear like R720's etc

faitswulff

We'll have to wait for it to hit Groupon.

chologrande

I work at a recently IPO'd tech company. Oxide was a strong consideration for us when evaluating on-prem. The pitch lands even among folks who still think "on-prem... ew".

It looks like a cloud-like experience on your own hardware.

If only it were as cheap as dell...

busterarm

As did some elements of my own company, but business risks like those are not for fledgling public companies. To be honest, right now anyone in a _public_ company advocating for it at this stage of development should have all of their decision making power removed if not outright be shown the door.

That goes double if it's your CTO...which is exactly what ended up happening with us.

I'm not saying "no, never", but clearly "no, not right now".

RandomChance

My company looked at them, and we were very impressed with the product. The only issue was that they are built for general compute and we really needed the option for faster processors.

technofiend

It is somewhat niche, but Broadcom's purchase of VMware now puts Oxide closer to Nutanix, in that you can go buy a fully supported virtualization platform from a vendor who welcomes your business. I don't know the actual numbers, but it seems Broadcom is only interested in enterprise customers with huge annual spends.

rmccue

With Broadcom’s plans for VMware, Oxide certainly seems to have had excellent timing here.

elzbardico

Large financial institutions are, surprisingly, good customers for new, still-untested computing technology.

I would not be surprised if Oxide's next customers were a few giant banks and funds.

__d

In my experience, some financial institutions have a very good understanding of risk.

They are able to identify, and most importantly, quantify risk in a way that many businesses cannot.

Consequently, they're able to take risks with new hardware/software that other companies shy away from.

__float

We have historically had private institutions with impactful research labs. Are there any of those still kicking?

yjftsjthsd-h

Sweet :) And a big thanks for writing what appears to be clear and straightforward documentation; IMO that's an area that the illumos community has historically struggled with. And seeing a new source release talking about consolidations gives me the warm fuzzies, even if this does seem to depart from the traditional gate paradigm, unless I'm seriously misreading the repo organization here.

Some (mostly tooling) questions:

- Why gmake? Especially since dmake is needed later anyways?

- Instructions say run rustup with bash explicitly; is that a defect in upstream, or is the local sh not completely posix compatible?

- How is this developed internally? Do Oxide folks run illumos workstations, or is this all developed in virtual machines or SSHed to servers?

- Why MPL? GPL compatibility?

steveklabnik

I can't answer all your questions, because I don't actually work on helios, but I do have an answer to some of them:

> Do Oxide folks run illumos workstations or is this all developed in Virtual machines or SSHed to servers?

I wrote about this topic here: https://news.ycombinator.com/item?id=39181727

That said, some folks certainly run illumos on a workstation.

> Why MPL? GPL compatibility?

On MPL: https://news.ycombinator.com/item?id=39181844

That said in that comment I didn't really speak to the "why." We feel like it's a good compromise in the possibility space: more copyleft than BSD, but also less restrictive than the GPL.

antranigv

> is that a defect in upstream, or is the local sh not completely posix compatible?

AFAIK it's an issue with upstream. Just like most open-source projects, there are Linuxisms/Bashisms in there.
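
To make that concrete, here's a small illustrative sketch (the string being matched is made up) of the kind of bashism that breaks under a strict POSIX sh, and the portable equivalent:

```shell
# `[[ ... ]]` with glob pattern matching is a bash extension, not POSIX sh.
# Invoking it through bash explicitly (as the rustup instructions do)
# sidesteps shells that lack it:
bash -c '[[ "rustup-init" == rustup* ]] && echo matched'

# The portable POSIX spelling uses a case statement instead:
case "rustup-init" in
  rustup*) echo "matched (POSIX)" ;;
esac
```

Both lines print a match under bash; only the second works under a minimal POSIX sh.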

jclulow

FWIW, though for historical reasons we use dmake to build the core operating system, I tend to recommend people use GNU make (gmake) for new Makefiles in other consolidations. It's broadly available (including on other platforms) and has more modern features.
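
As a tiny illustration of the GNU make conveniences being referred to (a hypothetical fragment, not from any Oxide consolidation), functions like `$(wildcard)` and `$(patsubst)` plus pattern rules are GNU extensions that Sun's make lineage handles differently or not at all:

```make
# GNU make extensions: wildcard/patsubst functions and % pattern rules.
SRCS := $(wildcard *.c)
OBJS := $(patsubst %.c,%.o,$(SRCS))

prog: $(OBJS)
	$(CC) -o $@ $^

%.o: %.c
	$(CC) -c -o $@ $<
```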

ahmedfromtunis

It is great that the software is open source, but would it be useful to deploy on other hardware?

And what would happen if, for whatever reason, a company can no longer purchase Oxide racks? Will it need to rebuild its infrastructure from scratch, or can it keep building around the Oxide hardware?

steveklabnik

It is not likely that it would be immediately useful outside of our hardware, but the main thing they're doing is deploying virtual machines. If they decided to no longer use the Oxide rack they have purchased, they would move their VMs to whatever infrastructure they choose to succeed it.

jclulow

Yeah, we definitely only intend Helios to run on either the Oxide rack or in service of software engineering work surrounding the rack (that's what we use the ISO installers and virtual machine images for).

If you're interested in a distribution targeting end-user use of illumos on servers, I would absolutely recommend looking at OmniOS! Helios is very closely based on OmniOS r151046 LTS, and we use that OmniOS LTS release directly for non-Oxide-rack infrastructure systems inside Oxide as well.

mihaic

I'm really curious: what kind of workload would companies want to run on a custom Unix that isn't Linux/Mac/BSD?

I'm rooting for more mature OS diversity, I just have no idea who the end users would be and what their needs would look like.

rhinoceraptor

The compute you'd provision on the Oxide rack is virtual machines; they've ported bhyve from FreeBSD and added live migration. I'm pretty sure you could even boot Windows Server on it if you were being held hostage.

As for why they used illumos: many of the people came from Sun, Joyent, etc., so there's an obvious bias. However, they do have a compelling reason: this is not an IBM-compatible x86 personal computer. There's no BIOS, no UEFI, no traditional BMC; as far as I can tell, they've removed as much proprietary firmware and as many binary blobs as they possibly could while still using modern x86.

Each sled has a service processor and a hardware root of trust that directly boots the CPU, loads the AMD training blob, and boots the OS. It would be difficult to upstream the changes required to do that into a Linux or BSD for a computer only you currently have. So you'd have to maintain your own downstream fork; no one else is responsible for the robustness of the OS, so it might as well be an OS that you have already supported and developed for years.

steveklabnik

This is not a user-facing detail of the product. Customers run VMs on the rack, they do not build their applications for illumos. They're gonna run whatever operating system in those VMs that they need to accomplish their goals.

tw04

That said, there's something to be said for enterprise support. Are there plans to support importing/converting/running third-party OVAs? Many vendors will support something running in KVM; I can't recall the last time I saw bhyve listed as a supported hypervisor.

I'd imagine that as Broadcom slowly destroys VMware's market share, vendors will look to alternatives, but I doubt bhyve is even a blip on their radar at this point.

steveklabnik

I don't know the status of supporting OVA as a file format, but we absolutely support creating and uploading your own images. Here are the current docs on how to do so: https://docs.oxide.computer/guides/creating-and-sharing-imag...

antranigv

OVAs are basically tar archives with some XML. If you want, you can convert an OVA to a raw image or VMDK or whatever the latest fancy format is, and bhyve can boot that for you. Better to use raw.
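
As a concrete sketch (with made-up file names and a fake disk image), an OVA is just a tar archive wrapping an OVF descriptor and disk images, so unpacking is plain tar extraction:

```shell
# Build a fake OVA to show the structure (names are illustrative):
mkdir -p ova-demo && cd ova-demo
printf '<Envelope/>\n' > appliance.ovf
printf 'fake disk contents\n' > appliance-disk1.vmdk
tar -cf appliance.ova appliance.ovf appliance-disk1.vmdk

# Unpacking a real OVA works the same way:
mkdir -p extracted && tar -xf appliance.ova -C extracted
ls extracted

# With a real VMDK inside, qemu-img (not run here) converts it to raw:
#   qemu-img convert -f vmdk -O raw extracted/appliance-disk1.vmdk disk.raw
```

The resulting raw image is what you'd hand to bhyve.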

bhyve, unlike other "famous" hypervisors, is pretty stable, has good-enough virtualized drivers (although I'm sure Oxide has made them better), and can boot a VM with 1.5 TB of RAM and 240 vCPUs[1]. Something I was not able to do with anything other than bhyve.

I know this is Hacker News, so I have to say it: marketing != engineering. Just because the FreeBSD project's marketing sucks doesn't mean the engineering is bad; usually it's better than the mainstream ones.

1: https://antranigv.am/posts/2023/10/bhyve-cpu-allocation-256/

epistasis

ZFS is native on illumos, and the containerization equivalent (zones), etc., is pretty great.

There's a good argument that your servers in the cloud don't need to be on the same OS, as long as you can hire enough talent to work on them.

GrilledChips

You'd have no idea that it isn't Linux. You don't run code on this OS, you run code on VMs that it provides.

lifeisstillgood

I would be interested in how you first heard of Oxide.

I somehow landed on their podcast because it covered <whatever the hell I thought was interesting at that moment>.

The podcast is, for me, amazeballs marketing: it does everything but sell their product (might be a good idea to add a pitch in each outro!)

I mean, they talk about it, like "we had such a tough time getting the compiler to do something something", and then veer off into back-in-the-day stories.

Ah never mind. Keep talking guys hope it works out

zengid

If you listen to their original podcast, 'On The Metal', it was infamous for its overly repeated use of 2 or 3 pre-recorded self-promotions, so much so that a fan recorded their own commercial for them to air.

'Oxide and Friends' however isn't really what I would consider a podcast, but a recording of live "spaces" or group calls, beginning on Twitter and now happening in Discord. IMO it's not really best consumed as a podcast, but rather to participate in live. If you tune in live you'll pick up on the vibe of the recordings a lot better.

https://oxide.computer/podcasts/oxide-and-friends

Hackbraten

Was following @jessfraz on Twitter back then, so I got word of Oxide when they first announced it there.

__d

Between Jess, Bryan, and Adam, it was hard to miss :-)

ahmedfromtunis

For me, it was when Pentagram showcased their branding when Oxide was first announced.

nubinetwork

I was hoping for this since they announced the server rack... nobody wants a paperweight if (God forbid) oxide were to go out of business.

steveklabnik

To be clear about it, the "paperweight problem" is very important to us as well. It's worth remembering that the MPL doesn't care whether a copy is posted openly on GitHub or not, and (I am not a lawyer!) we have obligations to our customers under it regardless of whether non-customers can browse the code.

segmondy

I really hope Oxide succeeds. I thought they were crazy when they announced what they were going to do. It's not the kind of play you see from a startup, but they were determined. Most folks that started the race at that time are out. I hope their computer is ready to deploy GPUs. Deploying multiple GPUs today is a freaking pain.
