andix
RandomBK
Back when I ran Windows in a KVM VM for gaming, a lot of anti-cheat systems didn't take kindly to running in a virtualized environment.
Turning on HyperV to go KVM->HyperV->Windows effectively 'laundered' my VM signature enough to satisfy the anticheats, though the overall perf hit was ~10-15%.
beebeepka
Very interesting. I wonder what sort of (available) CPU would be ideal for such a setup. A 7800x3D or 7950x. Also, was there any hit on the GPU side?
RandomBK
More cache never hurts. I'd imagine there were GPU perf gaps, though they were hard to distinguish from CPU-based performance hits. The most notable issues were random latency spikes caused by the multiple layers of hypervisors, which interfered with some games and occasionally caused audio/video desync on Youtube.
I ultimately tore down that setup and just swapped to dual-boot. The steps needed to set up high-performance VFIO (i.e. clearing enough contiguous RAM for 1GB Hugepages) meant most of the benefits of VFIO never really materialized for me.
declaredapple
Yeah, I'm very curious as to how this affected 99th-percentile framerates and frame pacing.
I suspect only a modest hit to average framerate, but I can only imagine it hurt the worst-case frametimes, which make it "feel choppy" even if the framerate is still higher than your monitor's refresh rate.
neilalexander
You are right that Windows itself runs under Hyper-V as a guest when virtualisation-based security is enabled and it even has paravirtual devices that are not massively different to VirtIO.
I think your statement about VMware Workstation is right as of today too with recent versions, although for a long time older versions would simply refuse to start if they detected that Hyper-V was enabled, presumably because they made assumptions about the host virtualisation support.
andix
It's not just security features that need Hyper-V. WSL (Linux on Windows) and the Android Subsystem (which runs sideloaded apps or anything from the Amazon App Store) need it too. Both are super useful for me; more and more things are iOS/Android-app-only. Linux should speak for itself.
ComputerGuru
Only WSLv2 needs (or uses) Hyper-V.
rdedev
Is it possible to use Hyper-V directly? Like, could I boot into Linux but switch over to Windows with just a key press? I'm guessing no, since it's not in Microsoft's interest to do so.
andix
That's an interesting idea, to run Hyper-V completely without Windows. I think it's not possible, at least not without some substantial amount of hacking.
But it's no problem to run Linux on Hyper-V. It's a hypervisor; of course you can start nearly any operating system as a VM. You can also give the VM access to some hardware components. But I don't think it's possible to get a full native Linux desktop experience, with GPU/screen, keyboard, and mouse connected to the host system.
Edit: this post seems to answer your question, not sure if it's correct: https://superuser.com/a/1531799
als0
You can soon run Linux on Hyper-V without Windows: https://www.theregister.com/2021/02/17/linux_as_root_partiti...
ComputerGuru
Not with Hyper-V, but the thing to be aware of is that there is no difference which OS you initially "boot into", since each essentially runs at the same level.
You can install ESXi (free) to do what you are asking, though.
andix
ESXi is a completely headless system; aside from a minimal management UI/CLI, there is no way to interact directly with the VMs from the host. At least that's my understanding.
And I think a very similar thing can be achieved with Windows Server Core: running Hyper-V with just a minimal Windows installation for management, without the full Windows UI.
edude03
> A lot of Windows features depend on Hyper-V, once enabled Windows is not booted directly any more, Hyper-V is started and the main Windows system runs in a privileged VM.
Got a source for this? Not that I don't believe you but other than for the Xbox I haven't seen/can't find any details about this.
dgellow
Surprised you didn't find the information; it's covered in detail in Microsoft's own docs: https://learn.microsoft.com/en-us/virtualization/hyper-v-on-...
quote:
“ In addition, if you have Hyper-V enabled, those latency-sensitive, high-precision applications may also have issues running in the host. This is because with virtualization enabled, the host OS also runs on top of the Hyper-V virtualization layer, just as guest operating systems do. However, unlike guests, the host OS is special in that it has direct access to all the hardware, which means that applications with special hardware requirements can still run without issues in the host OS.”
From https://learn.microsoft.com/en-us/virtualization/hyper-v-on-...
marshray
"Virtualization-Based Security: Enabled by Default"
https://techcommunity.microsoft.com/t5/virtualization/virtua...
josephg
> Hyper-V is started and the main Windows system runs in a privileged VM.
What are the performance implications of that?
abhinavk
Minor performance loss. 5% fps on average. MS recommends turning it off if gaming is your primary use.
overstay8930
Even then it's really not much of a hit if you have half-decent hardware. I've kept it on, and I think the only issue I saw was launch-day BG3, which drew much more power from the wall than when I turned it off.
moffkalast
> Hyper-V is started and the main Windows system runs in a privileged VM
Wait, it's all VMs? Always has been?! That is actual one-sentence horror.
deaddodo
It hasn't always been, nor is it necessarily now. If you enable Hyper-V, it will act as the hypervisor for your machine and boot Windows by default. Applications that use it (VMware, for instance, or Microsoft ones like WSL2) will add their own guests to the hypervisor.
It is not the default configuration. And it wasn't even installed before Windows 8.
andix
Isn't virtualization based security the default for Windows 11? I only have upgraded Win 11 systems, so no idea what's the default on a fresh installation.
lodovic
Sometimes it's hard to tell how many VMs there are between my code and the actual hardware. It seems to be VMs all the way down.
tbenst
Does anyone know the state of running Windows / Linux x86-64 virtualization on Apple Silicon? This article is super interesting but dances around the most important application for VMs on Mac.
tecleandor
For Linux, and if you only need to run CLI tools, I've been very happy with Lima [0]. It runs x86-64 and ARM VMs using QEMU, but can also run ARM VMs using vz [1] (the Apple Virtualization framework [2]), which is very performant. Along with the companion project colima [3], you can easily start Docker/Podman/Kubernetes instances; it has completely replaced Docker Desktop for me.
For desktop environments (Linux/Windows) I've used UTM [4] with mixed success, although it's been almost a year since I last used it, so maybe it runs better now.
There's also Parallels, and people say it's a good product, but it's around USD/EUR 100, and I haven't tested it as I don't have that need.
And there's VMWare Fusion but... who likes VMWare? ;)
[0] - https://lima-vm.io
[1] - https://lima-vm.io/docs/config/vmtype/#vz
[2] - https://developer.apple.com/documentation/virtualization?language=objc
[3] - https://lima-vm.io/docs/faq/colima/
[4] - https://mac.getutm.app/
[5] - https://www.parallels.com/products/desktop/vaxman
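The basic Lima/colima workflow is only a few commands. A sketch from memory of the current Lima docs; flag names and the default context name may have changed:

```shell
# ARM VM using Apple's Virtualization.framework (vz) backend
limactl start --vm-type=vz template://default

# x86-64 VM using QEMU emulation (slower, but sometimes necessary)
limactl start --arch=x86_64 --name=amd64 template://default

# get a shell inside the running VM
limactl shell default

# Docker via colima, replacing Docker Desktop
colima start
docker context use colima
```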
A correct solution is to remote into instances on dedicated (bare metal) servers (use ECC memory and SSH with a good cipher for your transport, even across your local or VPN/WireGuard network!), perhaps using KVM/QEMU for macOS VMs (yep, requires a Mac Pro to be legal) and KVM/Firecracker for Linux VMs. You could do Windows VMs in KVM/QEMU, but you'll have less friction remoting into a separate Hyper-V box for that (using Windows-specific security products). RDP-over-SSH for Windows, MPEG-VNC-over-SSH for macOS (and Wayland).
Why? Did you checkout the Privacy Policy for Parallels? The last time I checked, it allowed them to remotely take anything from your systems that they want. If I wanted that, I would just use a VPS running on someone else's machine in a cage somewhere.
VMware, by the way, is now Broadcom, as in they reportedly replaced the staff and ripped up the perpetual licensing model (subscription only now). Even before that, Fusion product development had been shifted overseas, presumably to avoid paying higher-wage software engineers in Silicon Valley (what a brilliant way for a software company to innovate). Now a company in Singapore is wearing their skin, and the C-suite are out of jobs too.
cangeroo
Parallels has a bad desktop user experience with Linux because of poor support for continuous scrolling. Lots of users have complained on their forums for years, but they refuse to do anything about it. I bought it for one year and regretted the experience. It works well with Windows though.
Generally, the experience with MacOS is mediocre thanks to Apple and their Virtualization Framework, with many basic features missing for years.
deaddodo
This is ironic, considering Parallels was originally a Mac-first product designed specifically for virtualizing Windows and running its apps "seamlessly" alongside native Mac ones.
a_vanderbilt
Can you elaborate on the continuous scrolling? I've actually never noticed anything off about the scrolling.
jiveturkey
> who likes VMWare?
I do!
I abandoned Parallels when they crippled the perpetually-licensed version. "Pro" has been subscription-only for a few years now. Even before that, their store was disgusting, with forced bundling of additional hostile products; later those became optional but were still added to your cart by default.
dada78641
My personal experience is that Windows 11 for ARM runs extremely well on Parallels. It includes an emulation layer for x86 apps that's completely invisible and just works. I can even still run Cakewalk, a program originally from the 90s, on my M1 Mac to edit midi files.
With that being said, this is just my view as someone who uses simple consumer oriented programs, and I'm not sure how well it'll work for more serious purposes.
sydbarrett74
Have you tried any Windows games on Apple Silicon? What kinds of Windows apps do you tend to run? I've used the macOS version of World of Warcraft on my '20 Mac Mini (16GB RAM) and even with utilities that adjust the mouse acceleration curve, I still find game play clunky. I was hoping I could run WoW under a VM and have it be somewhat performant.
solardev
For gaming, you want to use Crossover or the FOSS Whisky app. Parallels only runs Arm Windows which then emulates x86. This is much much slower than using Wine to translate system calls and Apple's Game Porting Toolkit to handle the Vulkan or DirectX graphics. Crossover and Whisky take care of the internals of those for you. Give those a shot, I think you'll find it much better than a full VM. In my experience some games do run better this way than the MacOS versions, though that's usually because the Mac client wasn't compiled for Apple Silicon and so Rosetta is emulating. Unfortunately, I'm pretty sure WOW is already Apple Silicon native, so you probably won't get better performance this way.
Crossover is paid but has better compatibility: https://www.codeweavers.com/crossover/ (or see https://www.codeweavers.com/compatibility for compatible games)
Whisky is free, and will work just as well for games it supports, but has compatibility with fewer games (no official list, so you just have to download it and try yourself): https://github.com/Whisky-App/Whisky
For the mouse stuff, try a USB mouse if you're not already using one, combined with https://github.com/ther0n/UnnaturalScrollWheels to disable acceleration and fix the scroll wheel.
That works really well for me to get a Windows-like mouse curve.
TLDR skip the emulation and go for translation layers via Crossover, Whisky, and GPT. It'll be much faster. The mouse thing is separate and has nothing to do with the graphics layer.
------
Personally though, I'd just pay $20 a month for Geforce Now. It is much much faster than even the highest end Mac. I don't think WOW is on there, but for supported games, it's a phenomenal experience... sold my 3080 desktop and replaced it with GFN on my Macbook. It's fantastic.
Supported games: https://www.nvidia.com/en-us/geforce-now/games/
swozey
When I first got it I tested a few games on my 2022 M1 Max 64GB 16" MBP both natively and in Windows ARM.
The only one that I remember is Crusader Kings II. It has a native MacOS version which I tried and it ran pretty rough. Very, very choppy on the map. I didn't tweak any graphics settings from the defaults and put no effort into making it run better, FWIW.
Next, I ran it via Windows ARM in Parallels. Now that I'm writing this I have no idea what I did to test it. I feel like it just ran but I don't think I did anything specific to make an x86 process run on ARM. Maybe Windows ARM does that for you, I forget.
Anyway, it ran really well. Absolutely much, much better than the native app. It felt completely smooth navigating the map, etc. I did NOT play it in a big game that lasted hundreds of years. I probably did 5 turns, mostly checking to see how smooth scrolling the map and the UI/UX stuff was.
I have a 4090'd gaming desktop so it wasn't a big deal to me to be able to game on the mac which is why I put as much effort into this as you can see. lmao.
It's amazing at everything else!
rogual
Not OP, but I use Parallels on M2 and gaming is a bit hit-or-miss. I'd say maybe 80% of games work flawlessly, and 20% have some sort of issue ranging from the annoying to the unplayable.
For non-gaming, Parallels is extremely solid. I use Visual Studio and various productivity apps and they all work perfectly -- although Parallels is enshittified scumware that pops up ads at every available opportunity, so if that kind of thing bothers you, it's worth considering it before buying.
timenova
YMMV, but from my own experiments, on an M1 Macbook Air, it did not work well for me. I was trying to compile an Elixir codebase on x86-64 Alpine Linux. Elixir does not have cross-compiling. I tried it in a Docker container, and in a Linux VM using OrbStack. Both approaches fail, as it just segfaults, even on the first `mix compile` of a blank project.
This problem does not exist in ARM containers or VMs, as the same project compiles perfectly in an ARM Alpine Linux container/VM.
It's definitely not plug-and-play for all scenarios. If anyone knows workarounds, let me know.
cschmatzler
That’s an underlying QEMU bug, which is used by Lima [1]. Add `ENV ERL_FLAGS="+JPperf true"` to your Dockerfile and it will build just fine cross platform. The flag just changes some things during build time and won’t affect runtime performance.
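For context, a minimal Dockerfile applying that flag might look like this (a sketch; the base image tag and project layout are placeholders, not from your setup):

```dockerfile
# Build an x86-64 Elixir release on an ARM host (QEMU emulation via buildx)
FROM --platform=linux/amd64 elixir:1.16-alpine AS build

# Work around the QEMU segfault: +JPperf changes how the BEAM JIT emits
# code (perf support), which sidesteps the bug; it doesn't affect runtime
ENV ERL_FLAGS="+JPperf true"

WORKDIR /app
COPY mix.exs mix.lock ./
RUN mix deps.get
COPY . .
RUN MIX_ENV=prod mix release
```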
timenova
Thanks. I can confirm that this works. Compiling a new project no longer segfaults, and `Mix.install()` works in `iex` too.
thejosh
For anything that doesn't need a UI, you're FAR better off having some remote server than trying to emulate; it's far too slow for ARM64<->x86-64 in both directions.
Many things are just so much easier with a remote server/workstation somewhere than trying to deal with VM shenanigans.
ARM64 virtualised, on the other hand, is pretty great with UTM (Linux works great, macOS seems good(?), haven't tried Windows).
timenova
I absolutely agree! I finally went in that direction. The only reason I was trying this whole ordeal was because I was trying to get some private dependencies included in the build without going through the whole hassle of git submodules. Now I just include those deps as a path include in mix.exs. Not a great solution I know...
travisgriggs
I’ve been able to do this (build x86/ubuntu targeted elixir) with UTM on my M1 Mac. It ain’t fast, that’s for sure. But it works. Which is interesting because sibling responses to your Lima experience claim it’s because of a qemu “bug”, but utm runs qemu as well.
ramchip
The bug is triggered by the JIT - maybe you didn’t have it enabled?
toast0
> Elixir does not have cross-compiling.
Elixir compiles to beam files, like Erlang, right?
I was pretty sure beam files are bytecode and not platform specific?
timenova
You're right that Elixir source code compiles to BEAM bytecode, however, if you run `mix release`, you need to ensure that the release runs on the same target OS and OpenSSL version. My aim was to build a `mix release` on my M1 Mac to run it on an x86-64 server.
From the docs [0]:
> Once a release is assembled, it can be packaged and deployed to a target, as long as the target runs on the same operating system (OS) distribution and version as the machine running the mix release command.
The `mix release` command outputs a directory containing your compiled Elixir bytecode files, along with the ERTS (Erlang Runtime System). The ERTS it bundles is only for your host machine's architecture. Another point to remember is that some dependencies use native NIFs, which means they need to be cross-compiled too. Hence it's not as easy as replacing the ERTS folder with one for another architecture in most circumstances.
There's a project that aims to alleviate these issues called Burrito [1], but when I tried it, I had mixed success with it, and decided not to use it for my deployment approach. It looks like Burrito has matured since then, so it would be worth taking a look into if you need to cross-compile.
The gist is, while possible, it's significantly harder to get an Elixir release running on another architecture than is the case for, say, Go.
[0] https://hexdocs.pm/mix/1.16.0/Mix.Tasks.Release.html [1] https://github.com/burrito-elixir/burrito
kamilner
I regularly use OrbStack to develop for x64 Linux (including kernel development). It works transparently as an x64 Linux command line that uses Rosetta under the hood, so performance is reasonably good.
It can also run docker containers, apparently faster than the normal docker client, although I haven't used that feature much so I'm not sure.
deergomoo
You can use Rosetta to run x86 Linux binaries with good performance under a virtualised ARM Linux [0], but if you want to run fully x86 Windows or Linux you’ll need to emulate, not virtualise. It’s possible, but there’s a big performance hit as you might expect.
[0] https://developer.apple.com/documentation/virtualization/run...
kamilner
I'm not sure how OrbStack does it, but it can run a fully x64 Linux using Rosetta with quite good performance.
AkshitGarg
IIRC that runs an x86_64 userland (using Rosetta) on an arm64 kernel.
outcoldman
I do my work on Apple Silicon laptops since the first M1 came out.
I use Docker Desktop, which can run amd64 images for me as well.
I run Splunk in it (a very enterprise product, written mostly in C++). I was shocked to see that it ran on Rosetta pretty much from day 1. Splunk worked on macOS with Rosetta from day 1 but had some issues in Docker running under QEMU; now Docker uses Rosetta for Linux, which allows me to run Splunk for Linux in Docker as well.
I use RedHat CodeReady Containers (local OpenShift), which works great as well.
And I use Parallels to run mostly headless Linux to run Kubernetes. And sometimes Windows just to look at it.
In the first two years of the Apple Silicon architecture I definitely had to find some workarounds to make things work. Right now I rely 100% on Apple Silicon, and deliver my software to large enterprise companies who use it on amd64/arm64 architectures.
donatj
Your mileage may vary, but I've been quite happy running x86-64 software in an ARM build of Windows 11 in UTM.
Nothing graphical or all that intensive though, just some productivity tools I can't live without.
avidphantasm
I run full AMD64 containers using Docker Desktop, which uses Rosetta under the hood. On my M1 Pro they were a bit slow (maybe 25% slower than my work laptop, which is a 12th gen. i9), but good enough in general. I have since upgraded to an M3 Max and AMD64 VMs seem to be a lot faster, maybe even faster than my 12th gen. i9. I really hope Apple doesn’t get rid of Rosetta support in VMs, ever. It’s just too useful.
Erratic6576
I wish every OS user logged into their own isolated VM of the OS. That way, Adobe could install all their bloatware and take control of their user's VM, and I could keep ownership of my Apple's computer.
jdewerd
What's sad is that processes are already virtual machines, they just need to have a better permissions system. What's really sad is that for the most part those better permissions systems have been built (namespaces/cgroups on linux, gatekeeper on Mac OS) but nobody figured out how to expose that to end users before the business people figured out that there were trillions of dollars available if you charged rent to centrally manage it.
We were so close. Sigh.
lox
Is this not essentially what Docker did with cgroups? Securing containers is incredibly tricky; I'm not at all confident process-only sandboxes would be adequate.
theossuary
There's a big difference between securing containers and using them to prevent Adobe from polluting the entire system. Containers are an excellent way to provide lower guarantees of security (though still more than is there currently), with higher usability. Microvms also fit into the model very cleanly and could be used transparently when higher security was required.
The fact that VMs are necessary has shown how much OSes have failed. That we need to take an OS and package it into multiple VMs to get any real isolation is a problem that OSes should solve for.
xorcist
Docker makes it really hard to do anything with cgroups. Unless you mean letting Docker manage everything about them, in which case you can configure nothing.
Systemd did the cgroups thing right. Apart from the v1/v2 thing, but if you can use only v2 then you do not need to think about it.
GuB-42
Essentially shipping an entire OS with every app looks horribly inefficient to me. Especially if the only thing you need is sandboxing.
Containers would be a more appropriate solution, and even containers would be somewhat overkill. Simply using UNIX-style permissions and an application-specific UID could do. I think it is how it is done in Android.
curt15
Isn't that roughly what Qubes OS provides?
deusum
Qubes does allow creating a VM for just about any program or service. But, in my experience, it suffers from latency. So, while fine for web browsing, it wasn't too keen on playing videos. YMMV of course, but Adobe products are already hogs without the emu layer.
rustcleaner
I daily drive Qubes and will never go back to a normie system again if I can help it!!
fulafel
Does it support macOS VMs?
jacquesm
> I could keep ownership of my Apple’s computer
That's a funny slip...
Erratic6576
Pun intended
svdr
I wanted to use a MacOS VM with Parallels for development. It is very easy to install and runs fast, but it's impossible to sign in with an Apple ID, which severely limits its use.
naikrovek
That’s Apple’s decision. It was intentional.
Apple are very weird about MacOS VMs.
sneak
Severely? I use macOS directly on hardware without an Apple ID as my daily driver.
It works fine.
justinclift
One major point which this article failed to mention, is that macOS only lets you run two VMs at the same time.
So if you were thinking of getting a mac mini to use as a build server, just buy the absolutely cheapest model.
Unlike on Linux systems, you're not allowed to have enough VMs to actually fully utilise the hardware. So buying headroom (resource wise) is a complete waste.
stephenr
Two macOS VMs. If you want dozens of Linux VMs for each distro/release and, e.g., the two most recent macOS releases, you're fine.
justinclift
Thanks, didn't know that.
Sounds like we could potentially get some Windows ARM64 builds happening as well then. Might be a project for a future weekend. :)
gnatolf
What's the progress on, and who's behind, a virtio layer for Windows? Any hope that this will work in the foreseeable future?
virtioliker
There's mature VirtIO drivers for just about everything already, under the virtio-win umbrella: https://github.com/virtio-win/kvm-guest-drivers-windows
My desktop PC is using libvirt+qemu (on an Arch host. I use Arch, btw) to PCI passthru my RTX 4090 GPU to a Windows guest. I installed the guest initially with emulated SATA for the main drive. Once Windows was up and running, I installed virtio-win and the guest is now using virtIO accelerated drivers for the network interface + main disk. I'm also sharing some filesystems using virtio-fs.
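For reference, the relevant pieces of the libvirt domain XML look roughly like this (paths, image names, and the share tag are placeholders, not my actual config):

```xml
<!-- main disk: installed via emulated SATA first, then switched the
     bus to virtio after installing the virtio-win drivers in the guest -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/win11.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- paravirtual network interface -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>

<!-- host directory shared into the guest via virtio-fs -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/home/user/shared'/>
  <target dir='hostshare'/>
</filesystem>
```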
ComputerGuru
Did you have to use any hacks to get a regular GTX/RTX card to pass through? Last time I tried this with ESXi, it was insanely difficult and poorly documented to get non-Quadro cards to do pass thru (admittedly on a Windows guest).
my123
NVIDIA changed this in 2021: https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/g...
diffeomorphism
Do you mean Windows using virtio? Then the answer would be Red Hat, since many years ago:
virtioliker
(oh and to answer the other part of your question: I believe Red Hat contribute a lot to virtio-win)
gnatolf
Thanks. I'm sorry if my question wasn't particularly complex to answer ; - )
mschuster91
> Running older versions of macOS in a VM enables users to run Intel-only apps long after Rosetta 2 support is dropped from the current macOS
Now if they'd offer that for x86 Windows guests... I mean, games are the obvious thing but I guess the architectural differences between Apple's PowerVR-family GPU and NV/AMD are just too large, but there's a ton of software that only has Windows binaries available and which I still need either an Intel macOS device or an outright Windows device to run.
Yes I know UTM exists but it's unusably slow and the Windows virtio drivers it ships are outright broken.
mort96
Even if you could get Windows working, what good would ARM Windows do?
Honestly, running virtualized x86_64 Steam (using something like FEX) under Asahi Linux and using Proton seems like the most fruitful way to play Windows games on Apple Silicon hardware (at least once the GPU drivers mature).
zamadatix
ARM Windows probably already does better than a future Asahi+Proton+FEX stack, in that it includes a Rosetta 2/FEX-like layer of its own, is otherwise the native Windows without needing to fake that interface, and e.g. Parallels already has DX11 working through Metal without needing a future version of the Asahi drivers combined with the layer in Proton.
The downside to either approach is anticheats. Games without them can run great today, games with them can't run at all because they are kernel level x86 code and emulating the kernel architecture is too slow for games. It looks like Windows is doing another ARM push with higher end chips and less vendor exclusivity this time around - maybe that'll finally get enough market penetration to make this less of an issue going forward, at which point virtualized ARM Windows could be nearly fully viable.
nxobject
There's one obscure use case that won't work, sadly: people who have to use proprietary, binary-only drivers! I've been through hell trying to get Oculus Link to work.
mschuster91
I meant x86 Windows of course. No other way to flash Samsung or Mediatek phones, for example - the tools are all proprietary and only run on Windows.
nottorp
> > Running older versions of macOS in a VM enables users to run Intel-only apps long after Rosetta 2 support is dropped from the current macOS
> Now if they'd offer that for x86 Windows guests...
Hmm the way i read it they're running older ARM versions of Mac OS in the VMs. Not x86 versions. The virtualization infrastructure doesn't do architecture translation, that is done in software by the OS running inside the VM.
As for x86 games... they run pretty well with x86 crossover emulating x86 windows that is then translated by rosetta 2 to arm... is your head spinning yet?
ngcc_hk
If the OP's premise is partly about when Rosetta 2 is no longer supported: at least an older ARM-based macOS VM could still run Apple Intel apps using the now-current, then-obsolete Rosetta 2.
Never thought of that, but it's what happened to PowerPC apps …
Tbh they should keep it: unlike PowerPC, which not many people still used, Intel-based apps are all around. With both Intel and ARM covered, only one upcoming platform is missing. But supporting a translator, as other posts imply, is hard: new Intel/AMD CPU instructions may keep appearing, to say nothing of all the AMD and NVIDIA GPU code, which is mostly not supported anyway.
WanderPanda
My great confusion is why docker --platform linux/amd64 is so much faster (almost native performance) than x86 UTM VMs. Can Docker somehow leverage Rosetta?
cpuguy83
Yes, Docker can leverage Rosetta. I haven't used Docker Desktop in a bit (b/c I end up doing my work in a VM on Azure since I work on Azure), but not too long ago there was an option to enable it in the settings panel, not sure if it's default or not these days.
Any Linux VM can use Rosetta[1] you just need to enable it when booting the vm. This creates a shared directory in the vm that you need to mount and then register Rosetta with binfmt_misc (same way Docker uses qemu).
[1] https://developer.apple.com/documentation/virtualization/run...
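Inside the guest, the setup looks roughly like this (adapted from Apple's documentation linked above; the mount point is arbitrary and the update-binfmts flags may vary by distro):

```shell
# mount the Rosetta share exposed by Virtualization.framework
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# register Rosetta as the binfmt_misc handler for x86-64 ELF binaries
sudo /usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
    --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
    --mask  "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --credentials yes --preserve no --fix-binary yes
```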
lloeki
> Any Linux VM hosted under Virtualization.framework can use Apple's Rosetta 2 for Linux translation binary via binfmt_misc
FTFY
You don't need the mount, it just exposes the Linux binary stored on macOS, which you can copy over. The binary does check for it being run under virt.fw, although some folks managed to hack that away (IIRC running it on AWS whatever)
MBCook
I remember seeing it was out of beta in the release notes of Docker Desktop not too long ago.
koenigdavidmj
Docker runs an ARM kernel and uses qemu in user mode on the individual binary level. Anything CPU-bound is emulated, but as soon as you do a system call, you’re back in native land, so I/O bound stuff should run decently.
arianvanp
Note that UTM also supports rosetta. Boot up an aarch64 image with Rosetta support and then load the mounted binfmt handler. Now you can run x86 binaries on your aarch64 UTM VM. Works flawlessly.
If you use NixOS you can simply enable https://search.nixos.org/options?channel=23.11&show=virtuali...
steeve
It does yes, Apple provides Rosetta for Linux: https://developer.apple.com/documentation/virtualization/run...
naikrovek
macOS on Apple Silicon does not allow whole-VM Rosetta.
You must run arm64 MacOS or Linux VMs and those VMs can run x86_64 binaries via Rosetta. Apple documented this.
Running an x86_64 virtual machine on MacOS requires software emulation, which is why it is so slow. Docker sets it up correctly so that the Linux VM it uses is arm64 but the binaries in the containers are x86_64, so that Rosetta can be used on those binaries.
jbverschoor
Ditch Docker… OrbStack is fast…
chaxor
Man is this the case.
I have been trying to figure out how to have a single command to make a Qemu VM on an M2 Apple silicon chip for like a year without much luck.
All I want is to run something like Alpine Linux + Sway WM on QEMU, while on macOS or Asahi Linux, with one command on the CLI.
On x86-64 it's fairly simple :(
r-bar
Lima (1) is a project that packages Linux distros for macOS and executes them via QEMU in the backend. Maybe you could solve your problem by launching one of their VMs and inspecting the command line it generates. You might find an option you were missing.
chaxor
I'll check this out. There are many different systems out there like UTM and such, but I want the most basic/minimal set of dependencies, which will work basically anywhere: just QEMU. Not the "UTM or maybe Parallels or sometimes Lima for Mac, then VirtualBox for Windows, and QEMU on Linux" type of nonsense. Just QEMU should suffice everywhere, and it's much more secure that way.
hinkley
I think this is basically what Colima is doing, if you’re willing to run docker containers to get it
chaxor
It would be silly to install Colima for this though.
If the argument is that Colima --calls--> Lima --calls--> {a ton of different things including kubernetes and docker and ...} --calls--> a QEMU command somewhere deep in the code, then the only thing that is required here is QEMU. Not kubernetes or any other junk on top that just adds complexity and potential insecurity.
One QEMU command should be all that's required.
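For what it's worth, here's a sketch of such a command (assumes an aarch64 Alpine image, HVF acceleration on macOS, and a Homebrew QEMU install for the firmware path; all file paths are guesses):

```shell
qemu-system-aarch64 \
  -machine virt,accel=hvf \
  -cpu host -smp 4 -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=/opt/homebrew/share/qemu/edk2-aarch64-code.fd \
  -drive file=alpine.qcow2,if=virtio \
  -nic user,model=virtio-net-pci \
  -device virtio-gpu-pci \
  -device qemu-xhci -device usb-kbd -device usb-tablet \
  -display default,show-cursor=on
```

On Asahi Linux the same command should work with accel=kvm instead of accel=hvf, pointing the pflash drive at the distro's AAVMF/edk2 firmware path.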
janandonly
Oh wow. I see this article mentions drivers written by Rusty Russell, whom I encourage everyone to follow on Twitter (he is @rusty_twit) for his deep insights into software development.
caycep
Do all commercial desktop VMs (VMware Fusion/Parallels/UTM/Vimy) now use this virtio model?
In theory Windows ARM64 should run roughly the same on all of them?
naikrovek
If they are using Virtualization.Framework they are all going to have the same performance. Apple made it very easy to create and use VMs with this framework, so I would expect most tools to use it.
cactusplant7374
Is it possible to virtualize 32 bit?
zamadatix
Virtualize no, there is no hardware support for 32 bit ARM on Apple Silicon. You can emulate it (32 bit ARM or x86) just fine though. Emulating the whole OS will be relatively slow compared to emulating just a userspace binary.
Doesn't Windows do it more or less the same?
A lot of Windows features depend on Hyper-V, once enabled Windows is not booted directly any more, Hyper-V is started and the main Windows system runs in a privileged VM.
All other VMs need to utilize the Hyper-V hypervisor, because nested virtualization is not that well supported. So even VMware is then just a front-end for Hyper-V.