tanvach
Was at Oculus post acquisition and can say that the whole XROS was an annoyance and distraction the core technology teams didn’t need. There were so many issues with multiple tech stacks that needed fixing first.
Mind you, this XROS idea came after Oculus was reorged into FB proper. It felt to me like there were FB teams (or individuals) that wanted to get on the AR/VR train. Carmack was absolutely right, and after the reorg his influence slowly waned, for the worse.
dedup-com
Just a small bunch of XROS people came from FB proper (mostly managers), because the average FB SWE doesn't have the required skills. Most folks were hired from the industry at E5/E6, and I think we only ever took one or two bootcampers, who ultimately were not successful and quickly moved elsewhere in FB.
globnomulous
What were the required skills that bootcampers lacked? Has anybody without a university degree succeeded there?
dedup-com
I just realized that you might not know what a "bootcamper" is. Facebook's hiring process generally goes like this:
- you're interviewed by a random team and evaluated on whether you'd be a good fit for the company.
- you are hired and go through a multi-week "bootcamp" to learn FB's vocabulary, processes, and tech stack, fixing some real bugs and implementing some real (but minor) features in the process.
- upon completing the bootcamp you seek a team that is of interest to you and if interest is mutual, you join the team. If you can't find a team after X weeks, you part ways with the company.
dedup-com
Knowledge of C (XROS was written in C, and, rather uncommonly, candidates weren't given a choice of programming language during interviews) and a general understanding of how a computer works at a low level: knowing the purpose of "volatile", understanding cache lines, mapping virtual memory to physical memory, DMA, this kind of thing.
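As a concrete example of the kind of question that filtered people out, here's a minimal sketch of why "volatile" matters when poking memory-mapped hardware. The device, base address, and bit layout below are invented for illustration, not from XROS:

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART; the base address and bit layout
 * below are invented for illustration, not from any real chip. */
#define UART_BASE 0x4000A000u
#define TX_READY  (1u << 5)

static volatile uint32_t *const uart_status =
    (volatile uint32_t *)UART_BASE;
static volatile uint32_t *const uart_data =
    (volatile uint32_t *)(UART_BASE + 4u);

void uart_putc(char c)
{
    /* Without 'volatile' the compiler may hoist this load out of the
     * loop and spin forever on a stale value; 'volatile' forces a
     * fresh read of the hardware register on every iteration. */
    while ((*uart_status & TX_READY) == 0)
        ;                      /* busy-wait until the TX FIFO has room */
    *uart_data = (uint32_t)c;  /* this store is a device side effect */
}
```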
I think everyone had a degree, but looking at my own (applied math) in particular, nothing I learned at uni was immediately useful, and I don't think there's really anything that would prevent a smart person with a GED and some history of, say, Linux kernel contributions from succeeding on a team like this. Except maybe that a degree is needed for an H-1B visa, for those who need it.
ezoe
It looks to me like Meta was a victim of ivory-tower researchers who just wanted to pursue impractical theoretical research at the company's expense.
There is some value in a huge company funding such research, as long as it doesn't affect the practical, for-profit projects.
thrown-0825
[flagged]
dang
You can't attack other people like this on HN. Since you've been breaking the site guidelines in other threads as well, I've banned this account.
Please don't create accounts to break HN's rules with.
frognumber
John describes exactly what I'd like someone to build:
"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."
As a thought experiment:
* Pick a place where cost-of-living is $200/month
* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.
* Drop a load of computers with little to no software, and little to no internet
* Try reinventing the computing universe from scratch.
Patience is the key. It'd take decades.
ksec
Love this idea, and I wonder where that low-cost-of-living place would be. But genuinely asking:
What problem are we trying to solve that is not possible right now? Do we start from hardware, at the CPU?
I remember an ex-Intel engineer once said: you could learn all the decisions behind modern ISA and CPU uArch design, along with GPUs and how it all works together, but by the time you have done all that and could implement a truly better version from a clean sheet, you are already close to retiring.
And that is assuming you have the professional opportunity to learn about all of it: to implement, fail, make mistakes, relearn, etc.
frognumber
> Love this idea and wondering where that low cost of living place would be
Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects)
> What problem are we trying to solve that is not possible right now?
The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.
Continuing the thought experiment -- to be much more abstract now -- if we placed an independent colony of humans on Venus 150 years ago, it's likely computing would be very different. If the transistor weren't invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.
Sharing technology back-and-forth a century later would be amazing.
Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more-or-less, which in turn allowed for various types of work with remote access unlike those we have with the cloud.
And there were tech trees built around Lisp-based computers and operating systems, Smalltalk, and systems where literally everything was modifiable.
More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)
You can't predict the future, and having two independent futures seems like a great way to have progress.
Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.
> Do we start from hardware, at the CPU?
For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.
Zero software, except what's needed to bootstrap.
killerstorm
Software is bloated and unreliable. It's clearly a "local minimum".
01HNNWZ0MV43FF
If it's so bloated, then just start cutting.
Whatever expertise you need to prune a working system is less than the expertise you'd need to create a whole new one and then prune it too as it grows old.
bcrl
Perverse incentives are everywhere...
sim7c00
I've been writing an OS for over 10 years to try.
It's seriously not something you want to do if you want to get anywhere.
Then again, it's a lot of fun, imagining where it could be some day if you had an army of slave programmers (because it still won't make money, lol).
827a
Continuing the thought experiment: There's an interesting sort-of contradiction in this desire: I, being dissatisfied with some aspect of the existing software solutions on the market, want to create an isolated monastic order of software engineers to ignore all existing solutions and build something that solves my problems; presumably, without any contact from me.
It's a contradiction very much at the core of the idea: should I expect that the operating system my monastic order produces will be able to play Overwatch or open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So, they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.
Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.
ksec
>maybe Microsoft should just stop using React for the Start menu, that might be a good start.
Agreed, but again, worth pointing out the obvious: I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU usage, and little memory usage, all of that at least 99.99% of the time. Right now it isn't obvious to me that this is possible without significant investment.
frognumber
An isolated monastic order in the hills around the Himalayas should ideally be completely isolated from Overwatch and .docx files.
ronald_petty
Not saying these are perfect, but consider reviewing the work of groups like the Internet Society or even IEEE sectors. Boots on the ground to some extent such as providing gear and training. Other efforts like One Laptop Per Child also leaned into this kind of thinking.
What could it mean for a "tech" town to be born, especially with the techniques and tools we have today? While the dream hasn't really been borne out yet (especially at the village level), I would argue we could do even better in middle America with this thinking: small college towns. While that's a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA to solve regional problems could be a start).
Yes it takes decades. My only thought on that is, many (dare say most) people don't even have short term plans much less long term plans. It takes visionaries with nerves and will of steel to stay on paths to make things happen.
Love the experiment idea.
frognumber
Pick a university, and give them $1B to never use Windows, macOS, Android, Linux, or anything other than homebrew?
To kick-start, give them machines with Plan 9, ITS, or an OS based on Lisp, Smalltalk, or similar? Or just microcontrollers? Or replicate 1970s-era university computing infrastructure (where everything was homebrew)?
Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?
ezoe
They will just face the same problems we solved decades ago and reinvent mostly the same solutions we had decades ago.
In a few decades, they will reach our current level; but the rest of the world won't have been idle for those decades, and we no longer need to solve the old problems.
mastermage
Honestly sounds like a very cool Science fiction concept.
Zeebrommer
A bit like Anathem.
percentcer
Not quite the same but check out A Canticle for Leibowitz
mkoubaa
Who needs good schools? Make it "The Summer of code in Sardinia"
JKCalhoun
I'd rather drop a load of musical instruments into said village but I guess I'm completely missing the point.
jnwatson
I've written a lot of low level software, BSPs, and most of an OS, and the main reason to not write your own OS these days is silicon vendors. Back in the day, they would provide you a spec detailed enough that you could feasibly write your own drivers.
These days, you get a medium-level description and a Linux driver of questionable quality. Part of this is just laziness, but mostly this is a function of complexity. Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
dist1ll
Intel still does it. As far as I can see, they're the only player in town that provides open, detailed documentation for their high-speed NICs [0]. You can actually write a driver for their 100Gb cards from scratch using their datasheet. Most other vendors will either (1) ignore you, (2) make you sign an NDA, or (3) refer you to their poorly documented Linux/BSD driver.
Not sure what the situation is for other hardware like NVMe SSDs.
[0] 2750 page datasheet for the e810 Ethernet controller https://www.intel.com/content/www/us/en/content-details/6138...
throwaway2037
Wow... that PDF is 2,750 pages! There must be an army of technical writers behind it. That is an incredible technical achievement.
Real question: Why do you think Intel does this? Does it guarantee a very strong foothold into data center NICs? I am sure competitors would argue two different angles: (1) this PDF shares too much info; some should be hidden behind an NDA, (2) it's too hard to write (and maintain) this PDF.
WalterBright
I may be the only person who ever understood every detail of C++, starting with the preprocessor. I can make that claim because I'm the only person who ever implemented all of it. (You cannot really know a language until you've implemented it.) I gave up on that in the 2000's. Modern C++ is simply terrifying in its complexity.
(I'm not including the C++ Standard Library, as I didn't implement it.)
awjlogan
This is a pretty standard document length. Modern microcontrollers have datasheets of similar length (e.g. the ATSAMD51's is ~2000 pages). Some of it is not software-related: things like pinouts and electrical and mechanical descriptions.
It does take a huge amount of work to write and maintain. Typically the authors are not technical, so it also relies on the designers being available to answer questions. Then there's a choice of how it's written: narrative and potentially imprecise but readable, or terse and precise but hard to read. Both styles appear in the same document, with terse used for register descriptions.
jovial_cavalier
Look up the Texas Instruments AM3358. It's a tiny SoC; it was used in the BeagleBone Black. Its technical reference manual [1] is over 5000 pages, and it details all peripherals, all of the interconnects, and every single register in the system. The Intel datasheet, by contrast, is really just an overview.
Regarding (1): if you don't publish this information, you're not selling a CPU, you're selling a very expensive chunk of sand. There is simply no way a customer can guess at what your implementation looks like. Additionally, Intel barely has IP in the traditional sense. They hold patents, but their only real competitor in making x86 processors, AMD, has a long-standing mutual non-enforcement agreement with respect to patents.
Regarding (2): I'm guessing a majority of this PDF can be generated, sort of like you generate API documentation from doxygen comments.
[1]: https://www.ti.com/lit/ug/spruh73q/spruh73q.pdf?ts=175651560...
mrandish
> Real question: Why do you think Intel does this?
I'm not sure large traditional silicon vendors like Intel, TI, et al. re-evaluate the documentation requirements (and costs) on a chip-by-chip basis. It's probably done by chip class, and for companies who've been selling chips by the millions over many decades, to industries as diverse as defense, aerospace, and automotive, there are classes of chips where robust, complete documentation is not only expected but often a required part of the RFP, compliance, or conformance processes.
While this level of effort probably isn't needed for every chip in that class, it could be hard to reliably predict when a general purpose chip is still in the design phase which customers may be interested in it during its life (which for some of these chips might be decades). Many chips which conform to MIL-SPEC or other similar standards which can require extensive documentation are simply enhanced versions of standard chips, so the docs exist anyway. Finally, there's the organizational capabilities and culture aspect. Once the org needs to maintain the systemic ability to generate serious documentation at scale, you end up with a lot of managers and staff who think this way.
lelanthran
For datasheets that's normal. Might even be leaning towards smaller than average for the device in question.
For comparison, a data sheet for a single transistor can be around 12 to 30 pages. A data sheet for a tiny microcontroller is probably a few hundred pages.
I once wrote a driver for a flash chip and that had a data sheet of around 80 pages.
pjjpo
In terms of (2), I wonder if it's even possible to write a driver without such a document. In the end, the vendor is on the hook for the driver for major platforms (let's assume Linux). If they can write a Linux driver without a spec similar to this doc, then the doc probably doesn't need to exist, since the business wins from hobbyist drivers will be low. If they can't, then it's just a matter of formatting an internal document for public consumption; the doc has to be maintained anyway, so the cost seems lower and maybe reasonable. I have a feeling the doc is necessary, but I am not specialized in the field.
Assumptions, fair or not, about (1) seem more likely somehow.
miki123211
Probably CPU vendor culture? I forget how large Intel's manual set is, but ARM's was ~11k pages the last time I checked. Intel's was smaller, but not that much smaller, certainly within an order of magnitude.
wtallis
The NVMe spec is freely downloadable and sufficient to write a driver with, if your OS already has PCIe support (which doesn't have open specifications). You don't need any vendor-specific features for ordinary everyday use, so it's a bit of a different situation from NICs. (Also, NVMe was in very large part an Intel creation, though it's maintained by an industry consortium.)
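For a flavor of what "sufficient to write a driver" looks like, here is a rough sketch of the controller enable sequence, using register offsets from the NVMe base spec. It assumes bar0 is already mapped through whatever PCIe support the host OS provides, that the admin queue memory is allocated and page-aligned, and it omits timeout/error handling:

```c
#include <stdint.h>

/* Register offsets from the NVMe base specification. */
#define NVME_REG_CAP  0x00   /* controller capabilities */
#define NVME_REG_CC   0x14   /* controller configuration */
#define NVME_REG_CSTS 0x1C   /* controller status */
#define NVME_REG_AQA  0x24   /* admin queue attributes */
#define NVME_REG_ASQ  0x28   /* admin submission queue base */
#define NVME_REG_ACQ  0x30   /* admin completion queue base */

#define CC_EN    (1u << 0)
#define CSTS_RDY (1u << 0)

static inline uint32_t rd32(volatile uint8_t *bar0, uint32_t off)
{ return *(volatile uint32_t *)(bar0 + off); }
static inline void wr32(volatile uint8_t *bar0, uint32_t off, uint32_t v)
{ *(volatile uint32_t *)(bar0 + off) = v; }
static inline void wr64(volatile uint8_t *bar0, uint32_t off, uint64_t v)
{ *(volatile uint64_t *)(bar0 + off) = v; }

/* Bring the controller up far enough to issue admin commands.
 * asq/acq are physical addresses of page-aligned queue memory. */
int nvme_enable(volatile uint8_t *bar0, uint64_t asq, uint64_t acq)
{
    wr32(bar0, NVME_REG_CC, rd32(bar0, NVME_REG_CC) & ~CC_EN);
    while (rd32(bar0, NVME_REG_CSTS) & CSTS_RDY)
        ;                           /* wait for reset to complete */

    /* 64-entry admin queues; AQA encodes sizes as (entries - 1). */
    wr32(bar0, NVME_REG_AQA, (63u << 16) | 63u);
    wr64(bar0, NVME_REG_ASQ, asq);
    wr64(bar0, NVME_REG_ACQ, acq);

    /* IOSQES=6 (64-byte SQ entries), IOCQES=4 (16-byte CQ entries), EN=1 */
    wr32(bar0, NVME_REG_CC, (6u << 16) | (4u << 20) | CC_EN);
    while (!(rd32(bar0, NVME_REG_CSTS) & CSTS_RDY))
        ;                           /* wait for controller ready */
    return 0;
}
```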
the-rc
On the other hand, see the complete mess that are the IPU6/7 camera chipsets and their Linux support.
XorNot
Good Christ, this is my current work laptop. It... mostly doesn't work. Plug in a USB camera and it'll just go. The built-in one, though: several drivers, userspace utilities, and other daemons, and sometimes gstreamer works, but does Zoom work? Who knows!
theideaofcoffee
It's interesting that it's that short. I remember, a long while ago, I had aspirations of implementing a custom board for Prestonia/Gallatin-era Xeons, and the datasheets and specs for those were around 3000 pages, IIRC. The supporting infrastructure docs were about that long as well. So I'm surprised to see a modern Ethernet controller fit into the same space. I appreciated all of the docs; because it was so open, I felt like I could actually achieve that project, but other things took priority.
mbac32768
Yeah, this. I tried to modify a hobby OS recently so it would process the "soft reboot" button (to speed up being rebooted in GCP), and it was so unbelievably hard to figure out how to support it. I tried following the instructions on the OSDev Wiki and straight-up reading what both Linux and FreeBSD do, and still couldn't make progress. Yes, the thing that happens when you tell Windows or Linux to "restart". I gave up after spending days on it.
The people who develop OSes are cut from a different cloth and are not under the usual economic pressures.
gmueckl
I also think that they have access to more helpful resources than people outside the field do, e.g. being able to contact people working on the lower layers to get the missing info. These channels exist in the professional world, but they are hard to access.
toast0
To clarify, are you having trouble getting the signal to reboot from the gcp console into your OS? Or are you having trouble rebooting on gcp?
mbac32768
I mean when the hobby OS wants to shut down, it can power the machine it's running on down. Not unlike what would happen if you clicked power off on your desktop OS menu.
Getting it to work on GCP meant properly driving something called the Intel PIIX4 controller which was emulated into the VM.
Separately from the OS being able to turn itself off, the OS needs to process a signal sent by the hypervisor through this controller to support the hypervisor gracefully shutting it down. Otherwise GCP waits 90 seconds after sending the shutdown signal, then gives up and terminates the VM itself.
The problems I was trying to solve were: (a) the OS can shut itself down in GCP; (b) restarts from the GCP console are instant, rather than taking 90+ seconds.
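For the curious, the power-button path is small once you know where the registers live. A rough sketch in x86 port I/O; on real firmware the block addresses come from the FADT and the S5 sleep type from parsing the DSDT's \_S5 package, so the constants below are assumptions matching QEMU's emulated PIIX4:

```c
#include <stdint.h>

/* Port I/O helpers (x86). */
static inline uint16_t inw(uint16_t port)
{ uint16_t v; __asm__ volatile("inw %1, %0" : "=a"(v) : "Nd"(port)); return v; }
static inline void outw(uint16_t port, uint16_t v)
{ __asm__ volatile("outw %0, %1" : : "a"(v), "Nd"(port)); }

/* Assumed constants for QEMU's PIIX4; real values come from ACPI tables. */
#define PM1A_EVT   0x600         /* PM1a event block: STS at +0, EN at +2 */
#define PM1A_CNT   0x604         /* PM1a control block */
#define PWRBTN_STS (1u << 8)     /* "power button pressed" status bit */
#define PWRBTN_EN  (1u << 8)
#define SLP_EN     (1u << 13)
#define SLP_TYP_S5 0             /* QEMU's PIIX4 uses SLP_TYP=0 for S5 */

void acpi_arm_power_button(void)
{
    outw(PM1A_EVT + 2, PWRBTN_EN);   /* enable the SCI for the button */
}

/* Called from the SCI interrupt handler (or a poll loop). */
void acpi_handle_sci(void)
{
    uint16_t sts = inw(PM1A_EVT);
    if (sts & PWRBTN_STS) {
        outw(PM1A_EVT, PWRBTN_STS);  /* ack: status bits are write-1-to-clear */
        /* ...flush state, then enter S5 (power off): */
        outw(PM1A_CNT, (SLP_TYP_S5 << 10) | SLP_EN);
    }
}
```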
sitkack
The VMM on GCP has only really been tested with Linux. You are kinda wasting your time, the only way to make it work is to make the hobby OS Linux.
toast0
> You are kinda wasting your time, the only way to make it work is to make the hobby OS Linux.
Not the parent, but of course they're wasting their time... That's the point of a hobby OS.
I'm working on a hobby OS, and I have no illusions: most likely fewer than 10 people will ever run it, and fewer than 100 will hear about it. But it lets me explore some ideas that are interesting (to me) and forces me to learn a little more about random pieces of computing. If I ran on GCP, I'd want the reboot button to work. That sounds useful.
On the topic, I don't see why anyone would want to build a general purpose OS. There's enough already and even with the shrinking of hardware variety, there's a lot of stuff to support to make a general purpose OS work on enough hardware for people to consider using it. You can take Linux or a BSD and hack it up pretty good to explore a lot of OS ideas. Chances are you're going to borrow some of their drivers anyway, and then you'll end up with at least some similarity... may as well start there and save a lot of time. (My hobby OS has a custom kernel and custom drivers, but I only support a bare minimum of devices... (pc) console i/o, one real NIC, and virtio-net... that's all I need; I might add support for more NICs and more consoles later)
b112
> Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
That's what's claimed. That's what people say, yet it's just an excuse. I've heard the same sort of excuse from people who write a massive codebase and then say, "Oops, sorry, didn't get around to documenting it."
And no, hardware is not more difficult to document than software.
If the system is complex, there's all the more need to document it, just as with a huge codebase. On their end, they have new employees to train up, and they have to manage testing. So any excuse that silicon vendors make about dealing with such immense complexity? My violin plays for them.
makeitdouble
> "Oops, sorry, didn't get around to documenting it".
That's obviously the wrong message. They should say "Go ask the engineering VP to get us off any other projects for another cycle while we're writing 'satisfying' documentation".
Extensive documentation comes at a price few companies are willing to pay (and that's not just a matter of resources. Look at Apple's documentation)
MathMonkeyMan
I write documentation as I'm writing the code. In my opinion, the code is only as good as its documentation -- they're two parts of the same thing. It's mostly comments at the top of files, and sometimes a markdown file in the same directory.
This way, good documentation is priced into my estimate for the project. I don't have a work item "spend a few days documenting." Nope, if I'm doing a foo then that includes documenting a foo at the same time.
PeterStuer
With documentation, one of the major hurdles is maintenance. It means caring for a set of documents, created by people with different specializations, that describe the artifact from specific perspectives but need to be kept in sync with the active creation and evolution of the artifact itself.
This is not impossible, but the effort and costs required are substantial and often lose out on a priority basis to just fixing or improving the product itself.
throwaway2037
> Look at Apple's documentation
To clarify for me: is this good or bad?
supermatt
> If the system is complex, there's more need to document
It's not first-party documentation that's the problem. The problem is that they don't share that documentation, so in order to get documentation for an "unsupported" OS, a third party needs to reverse engineer it.
WalterBright
I find myself largely unable to document code as I write it. It all seems obvious at the time. It's when I go back to it later, and I re-figure it out, that the documentation then can be written.
leoc
My hunch is that for nearly anyone who is serious about it these days, the way forward is either to have unusually tight control over the underlying platform, or to include a servant Linux installation with your OS. If Windows is a buggy set of device drivers, then Linux is a free set of buggy device drivers. If you're happy with your OS running as a client of a Linux hypervisor indefinitely then you could go for that; otherwise you'd have to try to gradually move bits of the hardware support into your OS over time—ideally faster than new Linux dependencies arise...
andreww591
At least for certain types of OSes, it should be relatively easy to get most of Linux's hardware support by porting LKL (https://github.com/lkl/linux) and adding appropriate hooks to access hardware.
Of course, your custom kernel will still have to have some of its own code to support core platform/chipset devices, but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
Also, it probably wouldn't work so well for typical monolithic kernels, but it should work decently on something that has user-mode driver support.
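A minimal sketch of what hosting Linux as a library looks like, assuming LKL's public API (lkl.h / lkl_host.h). The signatures below are from memory of older LKL releases and may differ in current versions, so treat the names as assumptions rather than the definitive API:

```c
#include <lkl.h>
#include <lkl_host.h>

int main(void)
{
    /* lkl_host_ops is the set of hooks (threads, memory, time, ...)
     * your kernel/environment must provide; liblkl ships a POSIX one.
     * Signature as in older LKL; check the headers for your version. */
    if (lkl_start_kernel(&lkl_host_ops, "mem=32M loglevel=7") < 0)
        return 1;

    /* From here on, Linux subsystems are callable as a library: the
     * generated lkl_sys_* wrappers expose the syscall surface, so you
     * get filesystems, the network stack, and (with device backends
     * wired into the host ops) Linux's drivers. */
    long fd = lkl_sys_open("/proc/version", LKL_O_RDONLY, 0);
    if (fd >= 0) {
        char buf[128];
        long n = lkl_sys_read(fd, buf, sizeof(buf) - 1);
        (void)n;
        lkl_sys_close(fd);
    }

    lkl_sys_halt();   /* shut the embedded kernel down */
    return 0;
}
```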
snickerbockers
>but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
thus calling into question why you ever bothered writing a new kernel in the first place if you were just going to piggyback Linux's device drivers onto some userspace wrapper thingy.
I'm not necessarily indoctrinated to the point where I can't conceive of Linux being suboptimal in a way so fundamental that it requires no less than a completely new OS from scratch. But you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.
eru
You make a good argument, but let me take the other side:
What you describe is probably necessary for getting _fast_ Linux compatibility. However, if you are willing to take the overhead of a few layers of indirection, you can probably sandbox the Linux land somewhere, and not have it impact the rest of your design much.
Most hardware access doesn't have to be particularly efficient. And, yes, for the few pieces of hardware that you do want to support efficiently (eg your storage devices or networking, whatever you want to concentrate on in your design) these you can handle natively.
Btw, I would suggest that most people these days should write their toy operating systems to run as a VM on a hypervisor like Xen or similar. The surface to the outside world is smaller that way.
lelanthran
If you're going this route, I have found NetBSD a better option for this sort of thing.
It has a rump kernel architecture, which makes reusing its drivers almost trivial compared to reusing Linux drivers with a new kernel.
andrekandre
> you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.
that's an interesting point, and makes me wonder if some kind of open interface for drivers to write to (and OSes could implement) wouldn't be worthwhile? Probably it would have to be very general in design, but something along the lines of DriverKit or IOKit might work?
PeterStuer
Is this the old 'an OS is just a bag of buggy device drivers' argument?
boredatoms
Presumably if you’re meta you could pay the vendors enough to write drivers for any arbitrary OS
eklitzke
Writing drivers is easy; getting vendors to write *correct* drivers is difficult. At work right now we are working with a Chinese OEM on a custom WiFi board built around a chipset whose firmware and drivers are supplied by the vendor. It's not even a new WiFi chipset; they've used it in other products for years without issues. In conditions that are difficult to reproduce, the chipset sometimes gets "stuck" and basically stops responding or doing any WiFi things. This appears to be a firmware problem, because unloading and reloading the kernel module doesn't fix the issue. We've supplied loads of pcap dumps to the vendor, but they're kind of useless because (a) pcap can only capture what the kernel sees, not what the WiFi chipset sees, (b) it's infeasible for the WiFi chipset to log all its internal state, and (c) even if all this were possible, trying to debug the driver just by looking at gigabytes of low-level protocol dumps would be impossible.
Realistically for the OEM to debug the issue they're going to need a way to reliably repro which we don't have for them, so we're kind of stuck.
This type of problem generalizes to the development of drivers and firmware for many complex pieces of modern hardware.
throwaway2037
> custom Wifi board
Why didn't you use something more mainstream? Cost?
rwmj
But is that a good use of Meta's money? Compared to making a few patches to Linux to fix any performance problems they find.
(And I feel bad saying this since Meta obviously did waste eleventy billion on their ridiculous Second Life recreation project ...)
b112
I don't like Meta, but there used to be a time when big corps spent around 30% of their budgets on R&D. It's how we got all the toys we have now: the R&D labs of Bell and others.
So please don't mock the spend. Big spends fail sometimes, and at least people were paid to do the work.
dedup-com
XROS had a completely new and rapidly evolving system call surface. No vendor would've been able to even start working on a driver for their device, let alone hand off a stable, complete result. It wasn't a case of "just rename a few symbols in a FreeBSD implementation and run a bunch of tests".
baq
Things you can’t buy: vendor who cares enough to replicate your exact use cases in their lab
silvestrov
Vendors might say that they don't have the resources (man hours) and don't want to hand over documentation to external developers.
lelele
>> These days, you get a medium-level description and a Linux driver of questionable quality.
Then how do devices end up having drivers for major OSes? Is it all guesswork?
tanvach
Yeah reverse engineering all the drivers is going to be a huge headache.
markus_zhang
Sounds like super fun if I could be paid a bit for it.
What is an easy entry task to get into "reverse engineering some drivers for some OS"?
Second thought: I don’t even know how to write a driver or a kernel, so I better start from there.
toast0
I don't know how you get paid for it, but if you want to write your own kernel, I'd start with an osdev tutorial. I started with this one [1], but this one [2] has a promising name... and I haven't really looked around.
It helps to have a concept to guide you too, but you can certainly make some progress on the basics before you figure out what you really want to do.
wmf
Asahi Linux.
mastermage
Isn't that what Low Level does on his YouTube channel? Teach people to reverse engineer stuff?
swiftcoder
The problem that is kind of glossed over here is that Meta hired a bunch of folks from Microsoft who were primarily interested in writing operating systems and set them to work on XR. Obviously they wanted to write a custom operating system.
Aurornis
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
I've only seen John Carmack's public interactions, but they've all been professional and kind.
It's depressing to imagine HR getting involved because someone's feelings had been hurt by an objective discussion from a person like John Carmack.
I'm having flashbacks to the times in my career when coworkers tried to weaponize HR to push their agenda. Every effort was eventually dismissed by HR, but there is a chilling effect on everyone when you realize that someone at the company is trying to put your job at stake because they didn't like something you said. The next time around, the people targeted are much more hesitant to speak up.
jamra
I followed his posts internally before he left. He was strict about resource waste. Hand tracking would break constantly and he brought metrics to his posts. His whole point was that Apple has hardware nailed down and it’ll be efficient software that will be the differentiator. The bloat at Meta was the result of empire building.
Fade_Dance
I remember watching Carmack at a convention 15 years ago. He took a short sabbatical and came back with id Tech 3 on an iPhone, and it still looks amazing well over a decade later.
https://www.youtube.com/watch?v=52hMWMWKAMk&t=1s
This is a guy who figures that what he wants to do most with his 3 free weekends is to port his latest, greatest engine to a Cortex-A8. Leading corporate strategy? Maybe not. But Carmack on efficiency? Just do it.
markus_zhang
Impressive. JC has always been one of the engineers I look up to, and read about when depressed.
John Carmack, David Cutler, Tom West, Cameron Zwarich, etc. There are maybe 50 of them.
torginus
The quality you can achieve with simple painted textures and computed lightmaps never ceases to impress.
ezoe
At that time, Rage had been delayed so long that I considered it vaporware, in the same category as Half-Life 3 or Duke Nukem Forever.
Still, I saw this demo at the time and felt it was impressive, considering the toy-level performance of a 2010 smartphone.
influx
I followed his posts internally too. It's amazing how many people were arguing against fucking John Carmack. What a waste of talent.
ignoramous
> were arguing against fucking John Carmack
I am sure Carmack himself encourages debates and discussions. Lionizing one person can't be expected of every employee (unless that person is also the founder or the company is tiny).
flr03
Damn, that's medieval. Anyone should be able to challenge anyone regardless of status.
kelipso
Ugh. Can we as an industry stop blowing people up like this? It’s a clear sign that the community is filled with people with very little experience.
I remember this guy wanted $20 million to build AGI a year ago (did he get that money?), and people here thought he would go into isolation for a few weeks and come out with AGI because he made some games like that. It’s just embarrassing as a community.
monkeyelite
I disagree that you should just defer, but it's sad that politics was so obviously consuming him and inhibiting his ability to help the product.
Aeolun
Can’t really imagine a better person to argue against?
terribleperson
The software for the Quest 3 is unreliable and breaks often. A team that attacks attempts to hold them accountable makes a lot of sense.
alpaca128
In my experience, the one big problem with the Quest 3 is the user interface. I am still puzzled why they made a floating taskbar with tiny buttons that you have to hit with VR controllers. I have good eyes, decent hand-eye coordination, and no shaky hands, yet I manage to hit a button on the first try maybe 40% of the time. They made a cut-down 2D desktop interface that takes up a small fraction of the field of view of a VR device and called it a day, and then put the user into some virtual room with zero interactable elements.
Meta Quest 3 feels like sci-fi tech with badly executed UI design from the 90s.
NBJack
I saw a few of those. He really leaned in on just how much waste was in the UI rendering, with some nasty looking call times to critical components. I think it was close to when he left.
Dude just seemed frustrated with the lack of attention to things that mattered.
But...that honestly tracks with Meta's past and present.
osullivj
Would love to hear Carmack's thoughts on render cost...
dagmx
John can be quite blunt and harsh in person, from everyone I know who’s interacted with him.
If he doesn't believe in something, he can sometimes be overcritical, and it's hard to push back given that kind of power imbalance.
stephc_int13
Carmack is a legend and I admire his work, but he seems to believe his own legend these days (like a few other big-ego gamedevs), and that can lead to arbitrary preferences being sold as gospel.
spydum
I'm sure that's true, but I've worked with a lot of engineers of this caliber, and as long as you can form a coherent, logical argument, they will bend; they're way more open than you expect. But you've got to put in the work to make that argument. They won't take it on faith.
rvba
If you want to build 3D glasses, you should probably focus on the 3D glasses and the software for them,
not on some new OS that means you only start making the glasses 2 years later.
Just like when you want to bake a cake, you don't start by designing an oven, or creating a universe from scratch.
anikom15
What was your experience working with him?
daseiner1
[flagged]
Uehreka
Seriously? Have you never had a person more powerful than you tell you that you’re wrong when they in fact are wrong? Often in corporate environments the answer to a “what to do next” question isn’t easily provable, and people who take advantage of this can make life really suck.
nipponese
Of course he has more power, but at this point, he's earned it.
And even that wasn't enough to "win" in a den-of-wolves place filled with power players like Meta.
kid64
Are you arguing that everyone's power is equal? What an asinine position.
WD-42
Which makes sense when you are one of 3 developers at id Software. There's absolutely no room for waste.
This is Meta. Let the kids build their operating system ffs. Is he now more concerned with protecting shareholder value? Who cares.
leoc
Meta's AR/VR division has burned a huge amount of money and years of time, with relatively little to show for it. Now it seems to be on the verge of being cancelled or slashed back, and in response people are saying that this proves VR, something Carmack champions, is commercially untenable or even that Carmack himself is partly responsible for the failed initiative. I don't even entirely agree with him on the question of whether anyone should try developing a new OS, but he's been proven absolutely right that there was no room for him to be that complacent about the use of Meta's resources.
dedup-com
There were almost no kids on the XROS team. The bulk of the team were E6s with graying hair, multiple kids, and very impressive histories of work on other well-known operating systems, and most of them wrote a lot of code. This was the senior-est team I've ever been a member of. Also, the most enjoyable interview process I've ever been through: no bullshit whatsoever, and a rare case where I actually had to implement the exact thing I was asked about during the interview (took me 3 weeks compared to 20 minutes during the loop, go figure).
XROS was an org that hired for specific specialist positions (as opposed to the usual "get hired into FB, go through the bootcamp, and find your place within the company"). At one point we got two separate requests from the recruiting execs:
- Your tech screen pass rate is way too low compared to other teams at FB. Please consider making your tech screen easier to expand the pool of candidates.
- Your interview-to-offer rate is way too low compared to other teams at FB. Please consider making your tech screen more difficult to reduce the time engineers spend on interviewing and writing feedback.
Anyway, IMO it was a very strong team in a very wrong environment. Most of the folks on the team hated the Facebook culture, despised the PSC process (despite having no problems with delivering impact in a greenfield project), had very little respect for non-technical managers coming from FB proper (the XROS team saw themselves as part of Oculus), and the majority I believe fled to other companies as soon as the project was scrapped. The pay was good however, and the work was very interesting. My overall impression was that most people on the team saw XROS as a journey, not a destination, and it was one of the reasons why it was destined to never ship.
Aeolun
No, no. If you want your VR apps running in two years on something that looks like an OS, just build an app that runs on an existing one.
If you want to be the dominant player in the market in 10-15 years, build the OS and keep funding it.
Fade_Dance
>Is he now more concerned with protecting shareholder value? Who cares.
It doesn't sound like he's concerned with waste. It sounds like it's a typical Carmack argument - distilled and hyper logical, and his conclusion is more to do with the pointlessness of it. He actually concedes the point that the project may have been highly efficient (which it may or may not have been, he was steelmanning).
His main points seemed to be:
If every cycle matters and efficiency is paramount, just make the project a monolithic C++ embedded application; if every cycle matters, that is somewhat incompatible with a general-purpose OS, and if it doesn't, the existing landscape is more than good enough. Presumably, he's calling out the absurdity of counter-arguments that are unrealistic about the objectives of creating a new general-purpose OS while also focusing on extreme efficiency. He states that fully achieving those objectives would require a "monastic coding enclave" like Plan 9's, and that this wasn't realistic even with the high talent at Meta.
And that plays into the second point, which is essentially that new OSes aren't a draw for developers, they are a burden. This is painfully obvious from the history of OSes and software, and it's the obvious reason why "let the kids build their operating system ffs" should trigger a reflexive "noooo..." from the greybeards. The deeper point is that if (A) is achieved, then (B), the burden on devs, becomes even more onerous. Therefore, unless the entire project is committed to truly moving crowds to new paradigms (good luck; literally billions have been lost here), just use the proven, fairly high-performance options that have widespread support.
The conclusion is "on balance, it's a bad idea." He's arguing it sharply (although I understand a Carmack steelman is intimidating to attack), but in essence it's a fairly banal and conservative conclusion, backed with strong precedent.
baq
This is how megacorps die. You’re describing Intel-level complacency.
anikom15
Professional engineers cannot be made immune to criticism.
KaiserPro
> This is Meta. Let the kids build their operating system ffs.
The problem was, it was holding back products. If you're going to make your own OS, it changes what chips you put in. If you don't know what chipset you're going to have, you don't know what your pixel budget is, and you can't plan features.
It takes about 2 years to get hardware out the door, and another 1.5 years to iron out the bugs and get a "finished" product.
chem83
This is what got Lucovsky pushed out. He wanted to build an OS from scratch and couldn't see past the technical argument to acknowledge the product team's urgency to actually land something in the hands of customers. Meanwhile, he left a trail of toxicity that he doesn't even realize was there [0].
Interestingly, he was pulling the same bs at Google until reason prevailed and he got pushed out (but allowed to save face and claim he resigned willingly[1]).
[0] https://x.com/yewnyx/status/1793684535307284948 [1] https://x.com/marklucovsky/status/1678465552988381185
lokar
I saw the same thing at Google. A distinguished engineer tried gently at first to get a Jr engineer to stop trying to do something that was a bad idea. They persisted so he told them very bluntly to stop. HR got involved.
I even found myself letting really bad things go by, because it was just going to take way too much of my time to spoon-feed people and get them to stop.
LPisGood
What kind of thing is bad enough that it warrants multiple discussions without the junior engineer getting the hint that it’s a bad idea?
knorker
A junior engineer can make an API that can basically live forever as tech debt.
(Especially if it's an "API" persisted to disk)
lokar
I can’t say much without it being clear who it was, but a critical low level thing.
And the Sr was generally a very nice person who did not give much weight to levels, willing to engage with anyone.
markus_zhang
I have mixed feelings about this. On one hand, JC is someone I look up to, at least from an engineering perspective. On the other hand, putting myself in the shoes of someone who got a once-in-a-lifetime chance to build a new OS, with corp support, for a shiny new device... I sure as hell would want to do it.
leoc
Look at the outcome of Meta's performance in AR/VR over the past few years: a fortune has been spent; relatively little has been achieved; the whole thing is likely about to be slashed back; VR, something Carmack believes in, remains a bit commercially marginal and easily dismissed; and Carmack's own reputation has taken a hit from association with it all. You can understand perfectly well why he doesn't feel that it would have been harmless to just let other people have whatever fun they wanted with the AR/VR Zuckbucks.
(Mind you, Carmack himself was responsible for Oculus' Scheme-based VRScript exploratory-programming environment, another Meta-funded passion project that didn't end up going far. It surely didn't cost remotely as much as XROS though.)
torginus
It's insane how VR succeeded beyond most people's wildest dreams on the hardware front (the hardware that goes into a VR headset either sounded like science fiction or seemed like exotic stuff that would cost tens of thousands of dollars), and the software had standout successes too, but it kinda just petered out in both the entertainment and professional realms.
ux266478
Reading on from that he says:
> If the platform really needs to watch every cycle that tightly, you aren't going to be a general purpose platform, and you might as well just make a monolithic C++ embedded application, rather than a whole new platform that is very likely to have a low shelf life as the hardware platform evolves.
Which I think is agreeable, up to a certain point, because I think it's potentially naive. That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
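To make that concrete, here's a toy cooperative scheduler of the sort such an application inevitably grows. Names and structure are purely illustrative, not from any real XROS code:

```c
#include <stddef.h>
#include <stdbool.h>

/* A toy cooperative scheduler: the kernel-shaped core that a
 * "monolithic embedded application" ends up containing whether or
 * not anyone calls it an OS. Tasks are functions polled round-robin. */
typedef struct {
    void (*step)(void *state);   /* run one non-blocking slice of work */
    void *state;
    bool  runnable;
} task_t;

#define MAX_TASKS 8
static task_t tasks[MAX_TASKS];
static size_t ntasks;

void task_spawn(void (*step)(void *), void *state)
{
    if (ntasks < MAX_TASKS)
        tasks[ntasks++] = (task_t){ step, state, true };
}

/* The "scheduler": loop forever, giving each runnable task a slice.
 * Real systems add priorities, deadlines, and an idle/sleep path. */
void scheduler_run(void)
{
    for (;;)
        for (size_t i = 0; i < ntasks; i++)
            if (tasks[i].runnable)
                tasks[i].step(tasks[i].state);
}
```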
balamatom
>That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
Exactly! I picture the choice as being between grandfathering in compatibility with existing OSes (having the promised performance of their product indirectly modulated by the output of all the other teams of the world's smartest, throughout computing history and the present day), and wringing another OS-sized piece of C++ tech debt upon unsuspecting humanity. In which case, I am thankful to Carmack for making the call.
I can understand how "what you're doing is fundamentally pointless" is something they can only afford to hear from someone who already has that magnitude of fuck-you money. Furthermore, in a VC-shaped culture, it can also be a statement that is fundamentally incomprehensible to many people.
WD-42
Exactly! It seems very narc-y. Just let me build my cool waste of company resources, it's not like Zucky is going to notice, he's too busy building his 11 homes.
Imagine being able to build an operating system, basically the end-game of being a programmer, and get PAID for it. Then some nerd tells on you.
markus_zhang
I'm not sure if you are trying to be /s, but yeah that's basically what I'm trying to say. Definitely better than working on those recommendation systems.
Damn, I'd pay to work on a serious OS/compiler team, but hey, why would they hire me? Oh well... Yeah, I'm doing a few projects on the side, but man, I'm so burnt out by my 9am-5pm $$ job plus the 5pm-10pm kid job that I barely have any large chunks of time to work on them.
kranke155
Carmack saw it as a waste of time. Is this really what we are doing now? Justifying that my waste of company resources is no less inefficient than the others?
com2kid
I got the chance to do this at Microsoft, it is indeed awesome! Thankfully the (multiple!) legendary programmers on the team were all behind the effort.
Anyway, if anyone reading this gets a chance to build a custom OS for bespoke HW, and get paid FAANG salary to do so, go for it! :-D
kranke155
If you want to do it you should be able to defend it against contrarian arguments that it’s a waste of time and company resources.
dmitrygr
Yup. This is how bloat is created.
randall
meta was a weird place for a while. because psc (the performance rating stuff) was so important, a public post could totally demoralize a team: if a legend like carmack thinks your project is a waste of resources, how is that going to look on your performance review?
impact is facebook-speak for "how useful is this to the company", and it's an explicit axis of judgement.
this_user
How large is their headcount these days? And how many actually useful products have they launched in the last decade? You could probably go full Twitter and fire 90% of the people, and it would make no difference from a user perspective.
aprilthird2021
But... That's not an HR violation. If something a team is working on is a waste of resources, it's a waste. You can either realize that and pivot to something more useful (like an effort to take the improvements of the current OS project and apply them to existing OSes), or stubbornly insist on your value.
Why is complaining to HR even an option on the table?
firesteelrain
One could argue that if it's not in your swim lane, you just let it fail. And if you aren't that person's manager but you are reviewing the code or design (and are thus the gatekeeper), you tell them it's not adequate. Politely. You've said your piece; no need to get yourself in trouble. Document and move on. If the company won't listen, then you move on. No need to turn it into an HR issue.
fluoridation
Complaining is always an option. The problem is that HR actually takes the complaint seriously.
bongodongobob
Just because something isn't an HR violation doesn't mean it's not wrong, rude, or unprofessional. Society is not a computer program. Being tactful is important to well adjusted people.
kranke155
Facebook has literally done very little in terms of new breakthrough products in at least a decade, and ByteDance has apparently just beaten them on revenue.
izacus
Yeah, people getting really angry if you say anything bad about a product (!) is a depressing commonality in certain places these days.
I got angry emails from people because I wrote "replacing a primary page of UI with this feature I never use doesn't give me a lot of value", since statements like that make "the team feel bad". It was an internal beta test whose purpose was to find issues before they went public.
Not surprisingly, once this culture takes root, the products start going down the drain too.
But who cares about good products in this great age of AI, right?
anal_reactor
When I compare the workplace dynamics at the American company I work for with those at the local company a friend of mine works for, I feel like I sold my soul to the devil.
shortrounddev2
Masters of Doom portrays Carmack as a total dictator of a boss. Doom Guy by John Romero seems to back this up.
leoc
Masters of Doom does seem to want to set Carmack up, however accurately or not, as the antagonist of its story, with Romero as the hero. I think readers largely didn't notice, since Carmack's heroic image was already so firmly established. In fact, some of the early-id stuff really does seem to raise questions. (Was Tim Willits mostly Carmack's protégé, for instance?)
shortrounddev2
Yeah, and Doom Guy takes a lot of issue with Masters of Doom. You get the impression that MoD was looking to create a McCartney-vs-Lennon story and stretched the truth to do so (there are several factual errors in the book).
In Doom Guy, though, Romero says that after he left id, he heard from others still working there that the company had become something of a dictatorship under Carmack, and that within X months (I forget how many), half the company had quit. Romero also qualifies several times that they were all in their early/mid-20s and didn't have the requisite life experience to handle business situations well.
pinoy420
[dead]
gorset
Mechanisms for getting the Linux kernel out of the way are pretty decent these days, and CPUs with a lot of cores are common. That means you can isolate a bunch of cores and pin threads the way you want, then use some kernel bypass to access hardware directly. Communicate between cores using ring buffers.
This gives you the best of both worlds: a carefully designed system for the hardware with near-optimal performance, and still the ability to take advantage of the full Linux kernel for management, monitoring, debugging, etc.
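A minimal sketch of the isolate-and-pin half of that recipe, assuming the kernel was booted with something like isolcpus=2,3 nohz_full=2,3 so those cores see no scheduler noise (the kernel-bypass and ring-buffer pieces, e.g. DPDK/VFIO and an SPSC queue, are separate):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* The hot thread: pinned to an isolated core, spinning on its work. */
static void *hot_loop(void *arg)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);  /* pin this thread to isolated core 2 */
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        perror("pthread_setaffinity_np");

    for (;;) {
        /* poll the NIC rx ring / consume from an SPSC ring buffer here */
    }
    return arg;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, hot_loop, NULL);
    pthread_join(t, NULL);  /* never returns in this sketch */
    return 0;
}
```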
ronsor
> use some kernel-bypass to access hardware directly
You can always mmap /dev/mem to get at physical memory.
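A sketch of that route; PHYS_ADDR is a hypothetical MMIO address, and note that on most distro kernels CONFIG_STRICT_DEVMEM restricts this, which is part of why UIO/VFIO became the blessed paths:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0xfebf0000UL   /* hypothetical device BAR address */
#define MAP_LEN   0x1000UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of physical address space into this process. */
    volatile uint32_t *regs = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PHYS_ADDR);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    printf("reg[0] = 0x%08x\n", regs[0]);  /* raw register read */

    munmap((void *)regs, MAP_LEN);
    close(fd);
    return 0;
}
```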
fooker
No, that's not really what kernel bypass means.
oasisaimlessly
Accessing hardware directly via /dev/mem is literally the original kernel-bypass strategy, from before we got the UIO and VFIO APIs to do it in a blessed way.
dmazzoni
I was at Google when the Flutter team started building Fuchsia.
They had amazing talent. Seriously, some of the most brilliant engineers I've worked with.
They had a huge team. Hundreds of people.
It was so ambitious.
But it seemed like such a terrible idea from the start. Nobody was ever able to articulate who would ever use it.
Technically, it was brilliant. But there was no business plan.
If they wanted to build a new kernel that could replace Linux on Android and/or Chrome OS, that would have been worth exploring - it would have had at least a chance at success.
But no, they wanted to build a new OS from scratch, including not just the kernel but the UI libraries and window manager too, all from scratch.
That's why the only platform they were able to target was Google's Home Hub - one of the few Google products that had a UI but wasn't a complete platform (no third-party apps, for example). And even there, I don't think they had a compelling story for why their OS was worth the added complexity.
It boggles my mind that Fuchsia is still going on. They should have killed it years ago. It's so depressing that they did across-the-board layoffs, including taking away resources from critically underfunded teams, while leaving projects like Fuchsia around wasting time and effort on a worthless endeavor. Instead they just kept reducing Fuchsia while still keeping it going. For what?
cmrdporcupine
Not only did they target the Home Hub, they basically forced a rewrite on it (on us; I worked on the team), after we had already launched, and turned our existing, workable software stack into legacy. And then they were late. Then late again. And late again. With no consequences.
100% agree with your points. Watching it, I thought: yeah, hell yeah, working on an OS from scratch sounds awesome, those guys have an awesome job. Too bad they're making everyone else's job suck.
raggi
By "forced" I guess you're referring to the room full of leads who all said yes, but then reported otherwise back down to their ICs to avoid retribution. I caught early wind of this from folks being super rude in early on-the-ground discussions and tried to raise it with Linus. One of the directors got his knickers in a twist and accused me of making a mountain out of a molehill. Clearly not, I guess, as the sentiment and division still stand.
cmrdporcupine
I don't care who agreed to what; it's bad engineering practice to take a working, successfully launched product and throw out its entire working software stack, no matter how inelegant it seems. To what end? What did Fuchsia offer? When it finally shipped (what, 2 or 3 years late?), customers couldn't even tell it had happened.
And to make it happen, it also required rewriting the already-launched HTML-based UI in Flutter/Dart. Again... why? What for? There wasn't even a working "native" Flutter at the time, despite promises, and there certainly wasn't a working accessibility stack (no screen reader, no magnification, nothing), so all of that had to be kludged in. It was everything "rewrites considered harmful" warns about, distilled.
Not to mention terrible for morale, execution, planning, budget, customer satisfaction.
I was just a lowly SWE 3 "IC" in the trenches, not nearly as "important" as all that, so my opinion mattered not at all. But to me it violated every sound engineering and project-planning principle I'd learned in the 15 years of my career up to that point. Just another event that led to me becoming quite cynical about the ability of leadership at Google to actually manage anything of significant complexity that wasn't ads/search related.
Again, Fuchsia .. very neat. But it didn't belong there.
jppittma
Other teams decommitting is just how it goes.
surajrmal
It's a lot of work and hard to justify if you're looking for short-term improvements. But if you're really committed to long-term improvements, it absolutely makes sense, and Google is actually willing to make long-term investments. Publicly justifying the investment has never been a goal of the project, which is why most folks probably don't understand it. Honestly, I'm not sure why folks care enough to even comment on it. If you find it useful, you can participate; if not, just ignore it.
Fwiw, inventing a new application ecosystem has never been a goal, so its absence is not a limitation on the project's viability. The hard part is just catching up to all the various technologies everyone takes for granted on typical systems. But it's not insurmountable.
I'm also not sold on the idea that having more options is ever a bad thing. People always talk about web-browser monoculture and cheer on new entrants, yet no one seems to mind the OS monoculture. We will all come out ahead if there are more viable OSes out there to use.
touristtam
> People always talk about web-browser monoculture and cheer on new entrants, yet no one seems to mind the OS monoculture. We will all come out ahead if there are more viable OSes out there to use.
3 main OSes vs 2 main browser engines for consumers to choose from?
Anyway, the main issue with browser-engine consolidation is that whoever owns the engine can make or break what goes into it. Just think about VS Code's current status, with all the AI companies wanting to use it and make it their own product while MSFT attempts to curtail it. At some point either MSFT decides to commit to FOSS on this one, or the multiple forks will have to reimplement some functionality.
jppittma
I think the hope is that you just start there. They might have migrated the meeting room devices. Why would you set out to replace *everything* at once? Do something, get some revenue/experience, then try to fan out.
nashashmi
Wasn't Fuchsia supposed to be a platform where different OSes could run in a virtual environment and software packages would be complete containers? Wasn't that a new way of tackling the ancient OS problem?
These were my imaginations. I thought maybe an OS that could run on the web. Or an OS that could be virtualized to run on several machines. Or an OS that could run as several instances on the same machine, each catering to a different user.
surajrmal
That doesn't sound anything like what Fuchsia is or ever was. Fuchsia takes a different set of tradeoffs with respect to baseline primitives and builds a new stack of low-level user space on top of those new primitives. This gives the software fundamentally different properties, which might be better or worse for your use case. For consumer hardware products I think it comes out ahead, but only time will tell.
raggi
I think what OP was thinking of was early HarmonyOS; I've seen people confuse the two a lot. Harmony now, of course, isn't what
CyberDildonics
Reinventing QNX will be cutting edge for decades to come.
diego_sandoval
Yeah, those were definitely your imaginations.
phendrenad2
I always felt that Fuchsia was a make-work program to keep talented kernel engineers away from other companies. Sort of a war of attrition.
surajrmal
That's a weird rumor that I'm not sure I understand. Things are not that complicated.
phendrenad2
If it's even a rumor, then I started it; I just can't imagine Fuchsia serving any other purpose. I don't usually give Google a lot of credit, but I can't imagine they made something this useless and misjudged the feasibility of such an OS this badly. That would be early-2000s Hewlett-Packard levels of incompetence.
com2kid
Microsoft used to legit do this in the 90s: recruit bus-factor-1 employees from competitors by offering them large salaries.
It was much easier to cripple your competition back when there were several orders of magnitude fewer software engineers in the world.
aprilthird2021
And the crazy thing is there was arguably much more reason for Meta/Oculus to have its own operating system, because it's meant for a specific configuration of hardware and puts those hardware resources toward a quite different goal than most other OSes out there. Even in that environment it was still a waste.
yard2010
I guess it's just a political shitshow at this point. Ideas go nowhere if the people behind them aren't playing the game well enough, no matter their value.
cmrdporcupine
There's few things worse for the long-term health of a software project than people who have hammers and are hunting for nails for them.
surajrmal
Isn't this how folks use Linux today? It's the only tool they know, and they don't understand the tradeoffs, which hurts the product.
sulam
My understanding is that people are working on Fuchsia in name only at this point. Of course some people are passionate enough to try to keep it alive, but it's only useful to the degree that it can help the Android team move faster.
danielodievich
Back in, mmm, 2002 or 2003 or 2004, while at Microsoft, I read an internal paper from a few OS guys who hackathoned something for Bill Gates's Think Week (which is when he used to go to some island in the San Juans, or somewhere similar, and just read curated papers and think; it was a huge prestige to get such a paper to him). That something was an OS written from scratch, with GC and memory management, on top of something very .NET Framework-y (which had been released a couple of years earlier). They had it booting on all kinds of hardware and doing various neato things. One of the explicitly called-out design principles was zero compatibility with anything Windows that came before, which is why it didn't go anywhere, I assume. I remember it was just a handful of engineers (presumably OS folks) hacking for like a month. It was awesome to read about.
scrlk
Was it Singularity?
https://en.wikipedia.org/wiki/Singularity_(operating_system)
https://www.microsoft.com/en-us/research/project/singularity...
modeless
Singularity was cool. I'm sad that it was abandoned. The concept of using software isolation instead of hardware memory protection was really interesting.
adastra22
It was a multi-year project at Microsoft Research with a team of >100 developers.
https://www.zdnet.com/article/whatever-happened-to-microsoft...
danielodievich
I am quite certain in my recollection that what I read about started much earlier than that, as a hackathon skunkworks, before something like this happened at MSR. It didn't do anything beyond a kernel and a command line; there was no browser. I don't know if the two shared roots either. Anyhow, both were intellectual feats!
labrador
> my old internal posts... got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
That jibes with my sense that Meta is a mediocre company
gmueckl
It matters who you communicate concerns to. Something as fundamental as "I think that your team shouldn't even exist" should go to the team leads and their managers exclusively at first. Writing that to the entire affected team is counterproductive in any organization because it unnecessarily raises anxiety and reduces team productivity and focus. Comments like this from influential people can have big mental and physical health impacts on people.
sesm
This entire situation looks very suspicious. Was Carmack even responsible for triaging research projects and allocating resources for them? If yes, then he should have fought that battle earlier. If no, then the best he could do is to refuse to use that OS in projects he controls.
cma
It should be fine to give your opinion on efforts.
gmueckl
Carmack had no direct say over research AFAIK.
monkeyelite
That’s not how big companies work.
1718627440
Not when it's his personal opinion, from which he thought nothing should follow.
"I think that your team shouldn't even exist" doesn't mean "I want your team to no longer exist.".
gmueckl
But the name Carmack carries some clout and people listen to him (too) closely because of his reputation alone. This is soft power that automatically comes with responsibility.
labrador
If I were on that team, I'd welcome the opportunity to tell John Carmack why he was wrong, or, if I agreed, to start looking for another project to work on.
When I was on nuclear submarines we'd call what you are advocating "keep us in the dark and feed us bullshit."
gmueckl
This assumes that you would be sincerely listened to, which you wouldn't be in a case like this. Higher-ups in large organizations don't have the bandwidth to listen to everybody.
Your sub's officers also need to constantly be aware of what to communicate to whom and in which language. Your superiors certainly kept you in the dark about a ton of concerns that were on their plate because simply mentioning them to subordinates would have been too distracting.
jonas21
Maybe on a mediocre team. But that was the parent comment's point.
On well-functioning teams, product feedback shouldn't have to be filtered through layers of management. In fact, it would be dishonest to discuss something like this with managers while hiding it from the rest of the team.
aprilthird2021
> Comments like this from influential people can have big mental and physical health impacts on people.
So what are we supposed to do? Just let waste continue? The entire point of engineering is to understand the tradeoffs of each decision and to be able to communicate them to others...
tejohnso
I'm sure that kind of crap helped nudge JC out of there. He mentions (accurate and relevant) reasons why something is probably a bad idea, and the person in charge of doing it complains that JC brought up the critiques, rather than addressing the critiques themselves. What a pathetic, whiny thing to do.
crote
You've got to remember that context is critical with stuff like this.
There's nothing wrong with well-founded and thoughtful criticism. On the other hand, it is very easy for this to turn into personal attacks or bullying - even if it wasn't intended to be.
If you're not careful, you'll end up with juniors copying the style and phrasing of the less-carefully-worded messages of their tech demigod, and you end up with a hugely hostile workplace cesspit.
It's the same reason why Linus Torvalds took a break to reflect on his communication style: no matter how strongly you feel about a topic, you can't let your emotions end up harming the community.
So yes, I can totally see poorly worded critiques leading to HR complaints. Having to think twice about the impact of the things you write is an essential part of being at a high level in a company; you simply can't afford to be careless anymore.
It's of course impossible to conclude that this is what happened in this specific case without further details, but it definitely wouldn't be the first time something like this happened with a tech legend.
pklausler
Ugly people like to blame the mirrors.
armchairhacker
What would be the real advantage of a custom OS over a Linux distribution?
The OS does process scheduling, program management, etc. OK, you don't want a VR headset to run certain things slowly or crash. But some Linux distributions are battle-tested, stable, and fast, so can't you write ordinary programs that are fast and reliable (e.g. the camera movement and passthrough use RTLinux and have a failsafe that has been formally verified or extensively tested), and isn't that enough?
mikepurvis
I think the proper comparison point here is what game consoles have done since the Xbox 360, which is basically to run a hypervisor on the metal with the app/game and management planes in separate VMs. That gives the game a bare-metal-ish experience and doesn't throw away resources on true multitasking where it isn't really needed, while still letting the console run a dashboard plus background tasks like downloading.
ksec
Hold on a sec, is that the same on the PS5? I'm pretty sure that wasn't the case two generations ago. Is that the norm now, running on a hypervisor?
mikepurvis
It's been the case since the PS3: https://www.psdevwiki.com/ps5/Hypervisor
raggi
For this use case, a major one would be better models for carved-up shared memory with safe/secure mappings into and out of specialized hardware like the GPU. Android uses binder for this, and there are a good number of practical pains with it being shoved into that shape. Some other teams at Google doing similar stuff at least briefly had a path, via another kernel module, to expose a lot more, and it apparently enabled them to fix a lot of problems with contention and so on. So it's possible to solve this kind of stuff, just painful to be missing the primitives.
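For readers unfamiliar with the mechanics, here is a minimal Linux-side sketch of the kind of shared-memory primitive under discussion; memfd_create/mmap are stand-ins chosen for illustration, not binder's or Fuchsia's actual API:

    /* Illustrative only: carve out a buffer that another process (or a
     * driver) could map too, by handing the fd across a unix socket via
     * SCM_RIGHTS. The pains described above are about doing this safely
     * and at scale with GPUs and other specialized hardware. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 4096;

        int fd = memfd_create("frame-buffer", MFD_CLOEXEC);
        if (fd < 0) { perror("memfd_create"); return 1; }
        if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

        /* Every mapper of this fd sees the same pages; no copies. */
        uint8_t *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memset(buf, 0xAB, len);                 /* producer writes...  */
        printf("first byte: 0x%02x\n", buf[0]); /* ...consumer reads   */

        munmap(buf, len);
        close(fd);
        return 0;
    }

The mapping itself is the easy part; revoking it safely, policing who may map what, and keeping the hardware's view coherent is where purpose-built primitives would earn their keep.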
Nuthen
Based on the later tweet in the chain, I'm wondering if Carmack is hinting that foveated rendering (where more processing power is devoted to the specific part of the screen you're looking at) was one advantage envisioned for it. But perhaps he's saying he's not sure the performance gains from it actually justify building a custom OS instead of just overclocking the GPU under an existing OS?
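For anyone unfamiliar with the technique, the core computation is tiny; the hard part is latency. A toy sketch, with made-up thresholds and rate names, of picking a shading rate per screen tile from gaze distance:

    /* Toy sketch of foveated rendering's core idea: full shading rate
     * near the gaze point, coarser in the periphery. The thresholds
     * and the rate enum are invented for illustration. */
    #include <math.h>
    #include <stdio.h>

    typedef enum { RATE_1X1 = 1, RATE_2X2 = 2, RATE_4X4 = 4 } ShadingRate;

    /* Tile centers and gaze point in normalized [0,1] screen coords. */
    static ShadingRate tile_rate(float tx, float ty, float gx, float gy) {
        float d = hypotf(tx - gx, ty - gy);
        if (d < 0.10f) return RATE_1X1;   /* fovea: full detail   */
        if (d < 0.30f) return RATE_2X2;   /* near periphery       */
        return RATE_4X4;                  /* far periphery: cheap */
    }

    int main(void) {
        float gx = 0.4f, gy = 0.5f;  /* pretend eye-tracker sample */
        printf("center tile: %dx\n", tile_rate(0.50f, 0.50f, gx, gy));
        printf("corner tile: %dx\n", tile_rate(0.95f, 0.95f, gx, gy));
        return 0;
    }

The OS angle is that the gaze sample has to reach the renderer within a few milliseconds to be worth anything, which is the latency argument raised below.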
mook
Wouldn't that be an application (or at most a system-library) concern, though? The OS is just there to sling pixels; it wouldn't have any idea whether those pixels are blurry… well, for VR it would all be OpenGL or equivalent, so the OS would just handle hardware access permissions.
hedgehog
I think the context is that foveated rendering ties sensor input (measuring gaze direction) to the rendering pipeline in a way that requires very low latency. Past a certain point, reducing latency requires optimizations that break the normal abstractions presented to userland, so you end up with something more custom. I'm not sure why that would require a whole new OS; the obvious path would be to put the latency-sensitive code onto dedicated hardware and leave the rest managed by Linux. If a bunch of smart people thought XROS was a good idea, though, there's probably something there, even if it didn't pan out.
raggi
Just overclock (more) a system that's already in a severe struggle to meet power, thermal, and fidelity budgets?
v9v
Maybe not applicable to the XR platform here, but you could add introspection capabilities not present in Linux, à la Genera letting the developer hotpatch driver-level code, or have all processes run in a shared address space, which lets them pass pointers around instead of following the Unix model of serializing/deserializing data for communication (http://metamodular.com/Common-Lisp/lispos.html)
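To make the contrast concrete, here's a toy C sketch; threads stand in for the shared-address-space model (since they already share memory), while a pipe plays the Unix serialize-and-copy role:

    /* Illustrative only: the same record crossing a boundary two ways.
     * Unix model: copy the bytes through a pipe.
     * Shared address space: hand over a pointer (threads, here). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct { int id; char name[32]; } Record;

    static void *consumer(void *arg) {
        Record *r = arg;  /* no copy: the pointer itself is shared */
        printf("thread sees: %d %s\n", r->id, r->name);
        return NULL;
    }

    int main(void) {
        Record rec = { 42, "widget" };

        /* Unix model: every byte is copied through the kernel. */
        int fds[2];
        if (pipe(fds) != 0) return 1;
        ssize_t n = write(fds[1], &rec, sizeof rec);
        Record copy;
        n = read(fds[0], &copy, sizeof copy);
        (void)n;
        printf("pipe sees:   %d %s\n", copy.id, copy.name);

        /* Shared-address-space model: just pass the pointer. */
        pthread_t t;
        pthread_create(&t, NULL, consumer, &rec);
        pthread_join(t, NULL);

        close(fds[0]);
        close(fds[1]);
        return 0;
    }

Of course, the pointer version only works because both sides fully trust each other, which is precisely the isolation tradeoff the Lisp-machine lineage chose to make differently.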
nolist_policy
You can do that on Linux today with vfork.
sulam
I stated this elsewhere, but at least six years ago a major justification was a better security model. At least that’s what Michael Abrash told me when I asked.
jamboca
Think you answered your own question. No real differences except more articles, $, and hype
const_cast
And, let's be real here: engineering prestige.
Everyone wants to make an OS because that's super cool and technical and hard. I mean, that's just resume gold.
Using Linux is boring and easy. Yawwwwn. But nobody makes an OS from scratch; only crazy greybeard developers do that!
The problem is, you're not crazy greybeard developers working out of your basement for the advancement of humanity. No. You're paid employees of a megacorporation. You have no principles, no vision. You're not Linus Torvalds.
GeekyBear
My objection is that there is no universe in which Meta can be trusted with direct access to your raw gaze tracking data.
The only thing I can imagine that would be more invasive would require a brain implant.
mrpippy
My understanding is that this is a key tenet of visionOS's design: apps don't get access to gaze data (I think unless they're taking over the full screen?).
qiine
sadly they are working on it
agsnu
Huawei seems pretty committed to building its own OS and decoupling entirely from the Western technology stack.
https://en.wikipedia.org/wiki/HarmonyOS_NEXT
https://www.usenix.org/conference/osdi24/presentation/chen-h...
ronsor
The only reason Chinese companies can even get away with big projects like these is state backing and state objectives. By itself, the market doesn't support a new general-purpose OS at this point.
betaby
> because of state backing and state objectives
MS is a state-backed company. It's very natural that China went down the same path.
howdyhowdy123
No it isn't
torginus
'China only succeeds for evil reasons'
Besides, the statement is completely nonsensical: multiple OSes were developed by for-profit corporations in the West (Microsoft, Apple, Nintendo, QNX, Be, etc.).
It's kind of an extraordinary claim that an OS couldn't be developed by a for-profit organization, especially if the hardware is somewhat fixed and you don't need to support every piece of equipment under the sun.
balder1991
Actually, the “market” won't prioritize anything that doesn't promise returns as soon as possible (except in the odd situation of VC money being poured in).
const_cast
You're downvoted but you're 100% correct.
It makes absolutely zero financial sense to create a new general purpose operating system.
That's billions of lines of code. With a B. And that's just the code - getting it to work with hardware?
Do YOU want to talk to 10,000 hardware vendors and get them on board? No! Nobody does! That's just money burning!
But there are valid political reasons for creating a new general-purpose OS.
com2kid
It is a lot less if you are aiming to support a small set of platforms, don't need general driver support for every possible accessory and peripheral under the sun, and have limited file-system usage.
If you are building for a single abstraction, the code gets much simpler than when you are building a platform that multiple abstractions can then be built on top of.
baq
If you are China, the vendors are you, and money is treated differently than in the West. The balance sheet will accommodate a project like that easily, especially if it decouples them from the US. They've already got their own software ecosystem, which most people outside China have heard of at most once or twice, and it's running their tech scene.
maxglute
lol, the market has tons of support for an OS that can't be sanctioned, especially for Huawei, who, you know, is.
jambutters
They actually reuse the Linux driver stack for hardware compatibility.
laserbeam
Geopolitical reasons for making your own OS are actually reasonable and understandable. Not saying they're good, because I would much prefer a planet where we collaborate on these things… but they're not dumb. They make sense in a similar way that the space race made sense.
https://xcancel.com/ID_AA_Carmack/status/1961172409920491849