Ampere’s Product List: 80 Cores, up to 3.3 GHz at 250 W; 128 Core in Q4 - Hacker News

vxNsr

Sooo... is Intel, like, crying in a corner right now? On one side we have AMD eating their lunch in the consumer space while they still haven’t launched a full gamut of 10nm CPUs. Apple just announced that they’re dropping them within basically the next 5 years. And now ARM really is encroaching on their core server business.

I feel like 20 years from now we’re gonna be using Intel as a cautionary tale of hubris and mismanagement, or whatever it is that caused them to fail so spectacularly.

raxxorrax

Honestly, I think the proclaimed death of Intel is vastly exaggerated. AMD came back from worse places, and Intel does still have the manufacturing edge. Intel CPUs for desktop still use less power, which is a big plus. How many people do you know who bought the fastest CPU available recently? Glad AMD is back on track; they were in a rough place, far worse than Intel's current situation.

blattimwind

For what it's worth, Intel is still faster in most applications, simply by virtue of having a clock speed advantage that by far exceeds any IPC difference, and also by having much lower memory latencies. AMD has basically a 20-30 ns extra latency over Intel; so with good memory you can do ~45 ns on current Intels, but that will give you ~65 ns on a Ryzen. That's significant for a lot of code (e.g. pointer chasing, complex logic etc.).
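
(To make the pointer-chasing point concrete, here is a minimal C++ sketch of a dependent-load benchmark; the array size and hop count are illustrative assumptions, not numbers from the thread. Each load depends on the previous one, so the loop runs at roughly one full memory latency per iteration, which is where a ~45 ns vs ~65 ns difference would show up.)

    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        // Build one big cycle through a 128 MiB array (Sattolo's algorithm),
        // so every load depends on the previous one and prefetching can't help.
        const size_t N = 1 << 24;  // 16M entries x 8 bytes, well past any cache
        std::vector<size_t> next(N);
        for (size_t i = 0; i < N; ++i) next[i] = i;
        std::mt19937_64 rng{42};
        for (size_t k = N - 1; k > 0; --k) {
            std::uniform_int_distribution<size_t> pick(0, k - 1);
            std::swap(next[k], next[pick(rng)]);
        }

        const size_t hops = 20000000;
        size_t i = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t h = 0; h < hops; ++h) i = next[i];  // serialized dependent loads
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
        std::printf("~%.1f ns per dependent load (%zu)\n", ns, i);  // print i so the loop isn't optimized out
    }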

On the other hand, few applications scale efficiently to more than just four cores. Yes, of course, AMD delivers more Cinebenchpoints-per-Dollar and usually more Cinebenchpoints overall, but that's not necessarily an interesting metric.

Personally, I find that when I'm waiting on something to complete, the application in question tends to use only a tiny number of cores for the task at hand. Usually one.

Another significant weakness of AMD's current platform is idle power consumption.

These factors leave me with a much more nuanced impression than "Intel is ded" or "HOW IS INTEL GOING TO CATCH UP TO THIS????"; CPU reviews these days are just pure clickbait.

jchw

The problem is that a lot of the tasks people want their CPU to be fast at are exactly the stuff that parallelizes almost embarrassingly well: compiling code, video rendering, compressing files. People buying CPUs for this are not as concerned about how many cycles it takes to jump through a vtable, as long as it’s not slow.
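
(A sketch of why such workloads scale: the chunks are fully independent, so fanning them out with std::async involves essentially no coordination. The checksum here is a stand-in for per-chunk compression or encoding work; all sizes are made up for illustration.)

    #include <cstdint>
    #include <cstdio>
    #include <future>
    #include <vector>

    // Toy stand-in for per-chunk work (compression, encoding, ...): FNV-1a.
    static uint64_t checksum(const std::vector<uint8_t>& chunk) {
        uint64_t h = 1469598103934665603ULL;
        for (uint8_t b : chunk) { h ^= b; h *= 1099511628211ULL; }
        return h;
    }

    int main() {
        // 64 independent 1 MiB chunks -- no shared state between tasks.
        std::vector<std::vector<uint8_t>> chunks(64, std::vector<uint8_t>(1 << 20, 7));

        std::vector<std::future<uint64_t>> results;
        for (auto& c : chunks)  // one task per chunk, spread across all cores
            results.push_back(std::async(std::launch::async, checksum, std::cref(c)));

        uint64_t combined = 0;
        for (auto& r : results) combined ^= r.get();
        std::printf("combined checksum: %016llx\n", (unsigned long long)combined);
    }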

Meanwhile, pointing at memory latency as the flaw in Ryzen has been a popular misdirection for a while now. People warned me about it being a performance pitfall since before I bought my first Ryzen processor. In practice it doesn’t show up as a serious issue in even the most complexity-intensive workloads. For example, Zen 2 performs very well on hardware emulation. This is possibly because where it takes a hit in memory latency it makes up for it in caching and prefetching, but honestly I don’t know and I am not sure how to measure it. In any case it’s certainly favorably comparable to Intel’s best chips in single-core workloads, even if not on top. Factor in price and multicore workloads and you have the exact reasons why people like me have been singing its praises... Intel’s single-core lead may exist in some form, but it is not what it once was; it is not an unconditional lead where an Intel core beats an AMD core. Not even close.

None of this means Intel’s dead, of course, but IMO that’s mostly because they have a lot more going on than just being the best at CPUs. They’ve got their dedicated GPU coming out, and plenty of ancillary technology as well. It does seem like, for a company like Intel, having to take a back seat in CPUs for a while will be painful; unlike AMD, this is a new position for Intel, and maybe not one they will handle well.

dralley

>Intel is still faster in most applications, simply by virtue of having a clock speed advantage that by far exceeds any IPC difference

This is already only marginally true: the difference is only about 5%, depending on the application, and in some applications AMD comes out ahead anyway. Expect the remaining difference to disappear when Zen 3 releases in a few months.

>Another significant weakness of AMD's current platform is idle power consumption.

AMD seems to have caught up here almost entirely. They've done a lot of work to improve idle power consumption lately and the node advantage probably helps, too.

highfrequency

Hi, this is very informative. To clarify a couple of points:

- By memory latency, do you mean the time to access an uncached portion of RAM?

- Re: the clock speed advantage, are you referring to the fact that AMD's turbo boost doesn't hit 5 GHz?

klelatti

AMD had to take radical action to survive and it was never in Intel's interests for it to vanish.

Intel had the twin defensive 'moats' of x86 and the leading process technology.

But now Intel has stalled relative to TSMC on the process lead (and possibly lost it), and the last few days have shown that the x86 moat is crumbling. The world will not move to TSMC-manufactured ARM overnight, but a significant shift could happen quite quickly, I think. Intel will defend (and already has) with lower prices, but that will ultimately mean a big shift in business model.

but ....

Intel is the last firm with leading edge technology manufacturing in the US. If it starts to falter I can see a concerted effort to maintain that from the US government.

api

We still have somewhat of a lead in areas like composites, exotic materials, jet engines, turbines, and aerospace, but they are eroding.

We can make good cars, though the majority of the good cars made in the US are made under the management of Japanese car companies. Tesla has some great technology and product design but on the manufacturing front they are behind the majors.

smolder

Intel desktop CPUs do not use less power. The only metric where Intel desktop CPUs win right now is maximum per-core performance, and not by much. They're worse at performance per dollar, performance per watt, maximum multithreaded performance, and overall power efficiency. (edit: perhaps not idle power efficiency...)

blattimwind

As a desktop user, my CPU tends to be mostly idle, so overall power efficiency is impacted a lot by idle power consumption; my AMD Ryzen CPU alone draws significantly more power at idle than my previous, several-years-old Intel system. In fact, just the IO die alone draws almost as much power as some office PCs.

bcrosby95

Yes, AMD as a power hog or a space heater is just a weird assumption people make based upon some old chips. It hasn't really been true for any of the Zen architecture chips.

pedrocr

All I've read says Intel has a manufacturing disadvantage to TSMC (where AMD, Apple and many others get their chips) and their CPUs are less power efficient, not more, compared to AMD and particularly these kinds of upcoming ARM parts. Is that not the case?

ksec

> Is that not the case?

If simply having a better and cheaper product equalled immediate success, then mid-to-high-end fashion, along with a gazillion other products in many other industries, would not exist, because there are always competitors offering something better at a cheaper price.

Marketing/discovery and channels/distribution all matter. And that's excluding the software advantage Intel has.

AWS only just reached GA on Zen 2, nearly a year after they first made the announcement, a much slower ramp than Intel typically gets. I don't have any insider information, but I'd guess AMD has a lot to learn about dealing with hyperscalers.

And as you may have noticed, Intel has had way more leaks than usual in the past 12-18 months. That is part of the PR play to keep people from buying AMD while they try to catch up.

Intel as of today is still operating at 100% capacity, with backlog orders to fulfil and record revenue in the last quarter. So yes, technically Intel is inferior, but until all of those disadvantages materialise into financial numbers it is far too early to call the death of Intel.

I don't hold any Intel stock; I'm speaking as an AMD shareholder.

ianai

From what I’ve heard, Intel management was taken over by marketing “professionals.” It’s an awful place to work and probably devoid of tech leadership.

Aka yes, it’s a cautionary tale and time to run from that ship.

jakeinspace

I suspect that for national security reasons, the US federal government and DoD would not allow Intel to fail. Still, the military and government aren't big enough buyers of microprocessors to keep Intel competitive at its recent position, and I suppose that TSMC's planned fab in the US could be seen as an alternative.

jillesvangurp

I think the writing has been on the wall for a few years. IMHO, ARM on servers is something that's been an option for quite a while, and the only thing that's surprising is how long x86 has managed to stay popular/relevant there.

Also, open-source instruction sets like RISC-V are going to be interesting.

What went wrong at Intel is that they forgot to take the appropriate steps 10 years ago to avoid running out of options right now.

Ten years ago it was already obvious that mobile CPUs were a thing and Intel's attempts to penetrate that market failed around that time. From that moment they were living on borrowed time.

klodolph

It’s not quite that obvious. Server space is still huge. Growth in mobile usually corresponds to people using more servers. ARM servers are an option but I’d hesitate to lay bets on it—they have to beat Intel on TCO and it’s just not there yet. AMD is great but doesn’t have volume like Intel does, and while the I/O is better on AMD, dealing with NUMA on AMD is a bit more of a beast.

I’m not saying that Intel’s not in trouble, just that the conclusions here are far from obvious. I have some skepticism for people who say that they saw this coming. AMD laid off a lot of top engineers before its recent resurgence. Intel’s failure to ship its 10nm node in volume was a surprise to a lot of people.

Everyone knew that the new process nodes were more difficult, but outside a few experts, hardly anyone was in a position to predict when the move to smaller nodes would slow down.

Not that long ago, people were praising Intel for their superior SSD controllers, or talking about how they would be making 5G modems.

cma

AMD has 64 core processors without any NUMA issues now. They use a dedicated IO die to provide uniform memory access to all chiplets.

ARM has a much weaker memory model with significant performance implications for multithreading as well.

deelowe

Generally, Intel is still best for TCO. AMD is better on specific high core count workloads. Arm isn't really competitive in the HPC/Cloud space, or at least hasn't been historically. Maybe that's starting to change?

api

Ice Lake 10nm / 10th gen has a super weird crashing bug too:

https://youtrack.jetbrains.com/issue/JBR-2310

https://bugs.openjdk.java.net/browse/JDK-8248315

No, no, no it is not an OS bug, a hypervisor bug, or a JVM bug. Read the whole thing if you have an hour to kill. It's confirmed to be a CPU bug, and Intel knows about it.

I am really looking forward to the postmortem. The behavior reminds me of the old F00F bug.

https://en.wikipedia.org/wiki/Pentium_F00F_bug

Google234

Most of those reports seem to be on 14nm parts. The post-mortem will probably be a microcode update.

api

It's happening on Ice Lake cores. Process node probably doesn't matter.

justapassenger

> Apple just announced that they’re dropping them in basically the next 5 years

That's a big PR hit, but in terms of sales, Apple isn't really a significant customer for x86. If it triggered Microsoft to double down on Windows on ARM, though, that could become a threat. But MS has been playing with ARM for years, and nothing significant has come out of it yet.

numpad0

They still have 7nm and 14nm manufacturing businesses...

yjftsjthsd-h

> They still have 7nm and 14nm manufacturing businesses...

I'm pretty sure they have their 14nm business, and are working really hard to get a 7nm manufacturing business? A quick search gives me news articles about Intel hoping to have 7nm working by 2021.

PaulHoule

They have milked 14 nm for all it is worth, sacrificing the long term in the interest of the short term.

They bent over backwards for cloud providers and offered them special deals that helped finance the cloud providers' transition to their own silicon. They fused off features to create false product "differentiation", like the IBM of old, and failed to deliver technology after technology in working form (SGX, TSX, 10nm, ...). They held the performance of the PC platform back by trying to capture the entire BoM of a PC (e.g. trying to kill off NVIDIA and ATI with integrated 'graphics').

Customers are angry now; that's their problem. Intel is like that Fatboy Slim album: "We're #1, why try harder?" They still think they are the #1 chipmaker in the world, but now it is more like #2 or #3.

sjwright

...And who wants to take bets on how many years before Intel starts being a contract manufacturer of chips for Apple and others? Shall we open the bidding at five years?

Aaronstotle

I think Intel has learned a painful lesson about resting on its laurels. That being said, Jim Keller was there for two years (he resigned June 12th), so I'd bet they have some big things on the horizon, namely GPUs.

regularfry

They don't have a particularly good track record here. Have we got reason to think that there's going to be better news for them this time round?

DCKing

It's worth noting that this is based on ARM's Neoverse N1 IP, which is also used in the AWS Graviton2. The Graviton2 benchmarks damn close to the best AMD and Intel stuff, so this chip looks very promising [1]. It's really looking to be a breakthrough year for ARM outside of the mobile market.

[1]: https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...

Refefer

Phoronix paints a very different picture, especially in non-synthetic workloads [1]. Graviton2 looks like a nice speedup over the first generation, but either the optimization isn't there yet or there are areas which need additional work to become more developer/HPC-competitive. That said, I'm thrilled we have competition in the architecture space for general-purpose compute again.

[1] https://www.phoronix.com/scan.php?page=article&item=epyc-vs-...

_msw_

Disclosure: I work for AWS on cloud infrastructure

My personal opinion is that the Phoronix way places quantity over quality. Measuring performance is an important part of shining a light on where we can improve the product, but I get little practical information from those numbers, even when they are reported as non-synthetic. There are HPC workloads that are showing significant cost advantages when run on C6g, like computational fluid dynamics simulations. See [1].

I expect the scalability of HPC clustering to improve on C6g in the future, like C5n improved cluster scalability compared to C5 with the introduction of the Elastic Fabric Adapter. The Phoronix and Openbenchmarking.org approach doesn't give much insight into workloads like this.

My advice for an audience like folks on HN is to test it for yourself. For me, being able to run my own experiments is how I come to understand infrastructure better. And the cloud lowers the barrier to running those experiments significantly by being available on demand, just an API call away. I'd love to hear what you think, either in a thread here or via the addresses in my user profile.

[1] https://aws.amazon.com/blogs/compute/c6g-openfoam-better-pri...

DCKing

Interesting data. Curious whether there's a logical explanation for these discrepancies in their setups.

karkisuni

Didn't go too deep into it, but the AMD cpus being compared are different. Anandtech has an AWS-only EPYC 7571 (2 socket, 32 cores each, 2.5ghz), Phoronix has EPYC 7742 (1 socket, 64 cores, 2.2ghz). On top of that, Anandtech is using another AWS ec2 instance and Phoronix is testing on a local machine on bare metal.

Still would be interesting to know what differences caused the gap in results, but their setups were pretty different.

embrassingstuff

How different are these ARM server implementations from each other?

Will we need to recompile? Will it be almost-100%-binary-equivalent-with-some-hidden-bugs?

yjftsjthsd-h

Ugh, yes; one of the perks of an Intel monoculture was that at least you only had one target to worry about, and inter-generational quirks were mostly limited to minor things. Now we have to deal with "this was optimized for (Intel|AMD) and doesn't work on (AMD|Intel)" and "the devs tested this on their x86 laptops and then it got weird when we went to run it on ARM" and "ARM is less of a platform and more of a collection of kinda-similar-looking systems that are mostly compatible". Don't get me wrong, I'll take this over a monoculture, especially an Intel monoculture, but there are some bumps on the road to a more diverse ecosystem.

_msw_

In my experience, the Arm ecosystem has an excellent track record regarding compatibility across conforming implementations of the architectures (e.g., Armv7-A, ARMv8-A). I can draw a practical comparison to MIPS, where I had to deal with a lot of variability based on various vendor extensions. This is reflected in the "-march=" documentation for GCC:

MIPS: https://gcc.gnu.org/onlinedocs/gcc/MIPS-Options.html

AArch64: https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html

Arm: https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html

mulmen

Yeah but we get to go out back and invent some new wheels for the portability tractor so that will be fun.

jeffbee

Does anyone have an evaluation board for these things? Their marketing materials scream "scam" to me. For one thing, they compare to competing x86 parts by arbitrarily downrating them to 85% of their actual SPECrate scores. Why? Then they switch baseline x86 chips when making claims about power efficiency: for performance claims they use the AMD EPYC 7742, then for performance/TDP they use the 7702, which tends to make the AMD look worse because it spends the same amount of power driving its uncore but is 11% slower than the 7742.

Also, without pricing, all these efficiency claims are totally meaningless.

IanCutress

We're working with Ampere to get access when they're ready to let us test.

fomine3

I hope AnandTech gets a many-core EPYC Rome, like the 7702 or 7502, for review.

sitkack

jeffbee

That's the eMAG, not the Altra.

jzwinck

This reminds me of Tilera, which had a 64-core mesh-connected CPU about ten years ago. The problems seemed to be that it was harder to optimize for due to the mesh connectivity (like NUMA but multidimensional), low clock speeds, and a lack of improvement after an initially promising launch.

Will this be the same? It seems possible. Does it really get more work done per watt than x86?

And why does the article say "These Altra CPUs have no turbo mechanism" right below a graphic saying "3.0 Ghz Turbo"?

jillesvangurp

It depends a bit on how you utilize these CPUs. A lot of server software is optimized for just a few cores. Even products optimized for using more than one thread tend to be tested and used mostly with 4/8-core configurations. And then of course there are a few popular server-side languages that are effectively single-threaded (e.g. Python) and use multiple processes to leverage multiple cores. Launching 80 Python processes on an 80-core machine may not be the best way to utilize available resources compared to, say, a Java process with a few hundred threads.

With non-blocking IO and async processing that can be good enough, but to fully utilize dozens or hundreds of CPU cores from a single process you basically want something that can do both threading and async. But assuming each core performs at a reasonable percentage of, say, a Xeon core (let's say 40%) and doesn't slow down when all cores are fully loaded, you would expect a CPU with 80 cores to more than keep up with a 16- or even 32-core Xeon. Of course the picture gets murkier if you throw in specialized instructions for vector processing, GPUs, etc.

ddorian43

Yes, most software is limited to a few cores (example: encoding videos).

The best (most efficient) way to utilize that many cores IS to have 1 pinned process/thread per core (see the sketch below): https://www.scylladb.com/ https://github.com/scylladb/seastar/

That would be the same in Python too. A problem is that you can't share the kernel pages for the code, and you need to have a shared cache. Probably zero-memory-copy with no deserialization, for example: LMDB + FlatBuffers.

Nicer is to have half the cores, but each core being 2x faster ;)
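
(A minimal Linux-only sketch of that shard-per-core idea, one pinned thread per hardware core; pthread_setaffinity_np is a GNU extension, and the per-shard work is left as a stub.)

    #include <pthread.h>  // pthread_setaffinity_np (GNU extension)
    #include <sched.h>    // cpu_set_t
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Pin the calling thread to one core so it never migrates and its
    // cache/NUMA locality stays stable.
    static void pin_to_core(unsigned core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main() {
        unsigned cores = std::thread::hardware_concurrency();
        std::vector<std::thread> shards;
        for (unsigned c = 0; c < cores; ++c) {
            shards.emplace_back([c] {
                pin_to_core(c);
                std::printf("shard %u pinned\n", c);
                // ...per-core event loop, no cross-core locks, would run here...
            });
        }
        for (auto& s : shards) s.join();
    }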

rbanffy

You need a lot of memory bandwidth and large caches, or else the cores will starve. That's also why IBM mainframes have up to 4.5 GB of L4 cache.

O_H_E

Ok, just wow. More L4 cache than my laptop's RAM. Thanks for that awesome titbit.

PS: don't worry, my upgrade is on its way :p

yjftsjthsd-h

:D A bit like the moment when I realized that on-CPU cache could now hold a complete DOS system, with programs included...

zozbot234

That's true of all high-frequency/high-core-count hardware, which is why running Java or Python code on this hardware makes very little sense. Rust is more like it. Golang in a pinch.

imtringued

It's the opposite. Running lots of poorly optimized processes allows you to amortize memory latency. If your software suffers from cache misses then it's not going to run out of memory bandwidth any time soon. Adding more threads will increase memory bandwidth utilization. Meanwhile hyper optimized AVX512 code is going to max out memory bandwidth with a dozen cores or less.

blackoil

Noob question: is there any fundamental limitation in Java, or is it more that the JVM will need to evolve to optimally use such an architecture?

rbanffy

You will have to tune your code to need as little shared state across threads as you can. It's not fun, but tuning code at this level rarely is.

wtallis

> And why does the article say "These Altra CPUs have no turbo mechanism" right below a graphic saying "3.0 Ghz Turbo"?

These chips obviously have variable clock speed, but apparently nothing like the complicated boost mechanisms on recent x86 processors. My guess is that Turbo speed here is simply full speed, and doesn't depend significantly on how many cores are active, and doesn't let the chip exceed its nominal TDP for short (or not so short) bursts the way x86 processors do.

rbanffy

> and doesn't depend significantly on how many cores are active, and doesn't let the chip exceed its nominal TDP for short (or not so short) bursts the way x86 processors do

Either that, or 3 GHz always exceeds the envelope and the chip is throttling clocks down all the time to keep itself inside the allowed power envelope.

zozbot234

> doesn't let the chip exceed its nominal TDP for short (or not so short) bursts the way x86 processors do.

That's more of an artifact of how TDP is defined than anything else. I doubt that this could peg even a single core at 3.0 GHz given a reasonable cooling setup, let alone run all cores @ 3.0 GHz.

greggyb

Why do you think this won't keep a single core at 3.0GHz?

You can get ~3.4GHz average sustained all-core speed on a 3990x (64-core, nominal 280W TDP). This is with an off-the-shelf AIO cooler.[0]

Note: top-end air coolers are often competitive with AIOs, and can be had for $80-$100.[1]

If you're buying a several thousand dollar CPU, dropping even $500 (much higher than you'd need for closed loop liquid or high-end air cooling) on cooling doesn't seem unreasonable.

[0] https://www.anandtech.com/show/15483/amd-threadripper-3990x-...

[1] https://www.youtube.com/watch?v=7VzXHUTqE7E

PaulHoule

These chips are practical and can go into servers that are similar in performance to x86 servers.

ARM has well-thought-out NUMA support; probably a system this size or larger should be divided into logical partitions anyway (e.g. out of 128 cores, maybe you pick 4 to be management processors to begin with).

samcat116

Products like this show that Apple could have an ARM-based Mac Pro in two years relatively easily. They already have PCIe Gen 4. The TDP and memory capacity are already more than Intel provides in the Xeon workstation line that Apple uses.

jagger27

It would be weird (and cool) if Apple ends up being the company to provide easy off the shelf access to a powerful Arm workstation.

klelatti

More of a case of "skating to where the puck is going".

I know it's a bit of a cliche but it feels to me like Apple might have got its timing right on this one.

ed25519FUUU

Timing is something Apple does really well. It's almost never the "first" to anything, but it waits until all of the stars align and then invests heavily.

adrianmonk

If they do that, I wonder whether it would make sense for Apple to get into the ARM server CPU business while they're at it.

Currently, the Intel Xeon is used in both high-end workstations and servers. If one x86 design can be suitable for both of those, presumably one ARM design could do the same.

If they could sell server CPUs at a profit, then Apple could get more return on its design investment by getting into two markets. And they'd get more volume. Though apparently they'd be facing competition from Ampere and Amazon's Graviton.

why_only_15

I've wondered for a long time if it would make sense for Apple to sell the A13 etc. to smart home device makers, on the theory that Apple can offer great HomeKit integration as well as a superior chip to anything else on the market (for e.g. video processing).

ed25519FUUU

I think it’s a good time to invest in a Mac Pro. While working from home, I’m asking myself what the benefit of a laptop is when a desktop could give me so much more performance.

coder543

The other person who replied to you probably paid half or a third of what you would pay for an equivalent Mac Pro.

The profit margins on the Mac Pro are just incredible. (Yes, I'm sure that equivalent professional workstation brands also have huge profit margins... no, that doesn't make me want to pay those lofty prices more.)

The only real value the Mac Pro provides is that it's the most powerful computer you're allowed to run macOS on legitimately. If you can do your work from Windows (with WSL) or Linux, you can save upwards of tens of thousands of dollars by building your own workstation, and that workstation can be significantly more powerful than any current Mac Pro at the same time.

For video professionals who rely on FCPX or similar macOS-only software, they don't really have a choice, and they get the opportunity to essentially pay $10k to $20k just for a license of macOS, which is fun.

systemvoltage

I have a hackintosh (i7-8700K) and it feels about 2x faster than the top-spec $4000 MacBook Pro, latest 2019 model (subjective opinion, of course). It is such a huge difference, especially when using PyCharm and Adobe apps.

It is pricey, but if it is something you want to keep for 5 years, it works out to about $100/month. Some people might want to buy it.

jeff_vader

"The other person who replied" here. While it definitely costs a lot less - you also need to factor in the time you spend on selecting components, building and tweaking thermals. It's almost a small side hobby for a month or two.

jeff_vader

I just did that when the whole lockdown happened: built myself an AMD 3970X workstation. It's so good I do not want to go back to the work laptop in the office.

ed25519FUUU

How are the thermals? I'm concerned about it making my small office hot.

emmanueloga_

Is anybody else confused by the "Ampere" brand name? I was trying to figure out what Ampere is...

* There's one "Ampere Computing" [1], but I guess I'm not "in the know", since this is the first time I've heard about it :-/

* There's one Ampere [2], "codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia".

Are both things related? Is "Nvidia's Ampere" developed by "Ampere" the company?

Also, I think Ampere is kind of a bad name for a processor line... it just makes me think of high current, power-hungry, low efficiency, etc. :-)

1: https://en.wikipedia.org/wiki/Ampere_Computing

2: https://en.wikipedia.org/wiki/Ampere_(microarchitecture)

why_only_15

They are not related as far as I can tell other than being named "Ampere".

shadykiller

Most logical naming of processors I’ve ever seen. E.g.:

Q80-33: 80 cores, 3.3 GHz

Q32-17: 32 cores, 1.7 GHz

sradman

> Where Graviton2 is designed to suit Amazon’s needs for Arm-based instances, Ampere’s goal is essentially to supply a better-than-Graviton2 solution to the rest of the big cloud service providers (CSPs).

So the question is whether they can land Google, Microsoft, and/or Alibaba as customers for an alternative to AWS M6g instances.

klelatti

Oracle is an investor ($40M), and TechCrunch reports that they have been working with Microsoft, so it sounds like they are making progress on getting into the major cloud providers.

cesaref

I'm interested to know what applications really scale to these core counts. When I was working with large datasets (for finance), other bottlenecks tended to dominate, not computation: memory pressure and throughput from the SAN were more important.

These high-density configurations were key when rack space was at a premium, but these days power is the limitation, so it's interesting to see more low-power cores. I'm just not sure who is going to get the most benefit from them, though...

regularfry

With 80 cores I can get 40 2-core VMs all pegging their CPUs on a single processor without any core contention. Multiply up by the number of sockets. That might be the more interesting application for cloud providers than going for a single use case for the entire box.

Where this might get interesting, depending on how the pricing stacks up, is that if you're in the cloud function business, this will increase the number of function instances you can afford to keep warmed up and ready to fire. In those situations you're not bottlenecked on the total bandwidth for the function itself (usually), your constraint is getting from zero to having the executable in VM it's going to run in, and from there getting it into the core past whatever it's contending with. If there's nothing to contend with and it's just waiting for a (probably fairly small) trigger signal, execution time from the point of view of whatever's downstream could easily be dominated by network transit times.

tyingq

Plain old io-bound multiprocess work would be a good match. Like static content and php sites, for example. I imagine there's quite a lot of that out there.

ed25519FUUU

I'd wager to say the bulk of the web is CPU bound.

ambicapter

Insofar as webservers go, more cores equal more simultaneous connections, no? I doubt network links are saturated yet.

rbanffy

As cool as it is, these server announcements are somewhat disheartening.

I want a workstation with one of these.

gpm

It has PCIe lanes; what, other than price, stops you from buying a rack server, sticking a graphics card in it, and calling it a workstation?

rbanffy

Two reasons, mostly.

Aesthetics is a big thing: rackmount servers are ugly and, unless there are panels covering them, they make horrendous deskside workstations.

Another one is noise. These boxes are designed for environments where sounding like a vacuum cleaner is not an issue. Because of that, they sound like vacuum cleaners, with tiny whiny fans running at high speeds instead of more sedated larger fans and bigger heat exchangers.

HP sold, for some time, the ZX6000 workstation, which was mostly a rack server with a nice-looking mount. If someone decided to sell that mount, it'd solve reason #1, at least.

greggyb

Shove it in a full tower case. You can mount most server hardware easily in such a case. At that point, you can cool with big slow fans.

sjwright

Probably a lack of time resources for unbounded experimentation with unsupported configurations of expensive, non-mainstream hardware. Not all of us have the luxury to be a recreational sysadmin in our spare time.

gpm

I struggle to imagine what you expect from running a desktop OS on an 80 core ARM cpu if it doesn't involve becoming a recreational sysadmin. That's definitely bleeding edge territory no matter the form factor the hardware ships in.

ed25519FUUU

Racks are horribly loud! Do they even have a fan speed other than "insane"?

asguy

Older specs, but the eMAG is available as a workstation: https://www.avantek.co.uk/ampere-emag-arm-workstation/

rbanffy

Yes, but it's a previous generation. And it only does 32 threads.

asguy

Have you tried calling them to ask if they’ll build you the newer one?

zanny

You can get a Threadripper 3990x with 64 cores in a "regular" workstation.

rbanffy

If I wanted an x86 workstation, I'd get the EPYC counterpart for the extra memory bandwidth.

nine_k

BTW, I wonder why one might need a workstation with many less beefy cores as opposed to several more powerful cores. What kind of interactive tasks require that?

E.g. I suppose computer animation takes a GPU rather than 32-64 general-purpose cores, and compilers are still not so massively parallel.

spott

I'm kind of curious: what is the selling point of an ARM server? Why would I use an ARM instance on AWS or similar instead of an x86?

Are they significantly cheaper per GHz*core? If so, how hard is it to make use of that power? Will a simple recompile work?

lowmemcpu

Yes. Here's what AWS' page says:

> deliver significant cost savings over other general-purpose instances for scale-out applications such as web servers, containerized microservices, data/log processing, and other workloads that can run on smaller cores and fit within the available memory footprint.

> provide up to 40% better price performance over comparable current generation x86-based instances for a wide variety of workloads,

From what I read, it's not terribly hard to tell your compiler to compile for a particular instruction set; you just need to do it (see the sketch below). Cost savings and better performance are great incentives, and Apple moving their Mac platform to ARM will drive more market share for developers to take the time to recompile.

Edit: Forgot to add the source of those quotes: https://aws.amazon.com/ec2/graviton/
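
(For the "just recompile" part, a hedged sketch of what that looks like with GCC; the cross-compiler name assumes the stock Debian/Ubuntu aarch64-linux-gnu toolchain, and the file name is made up.)

    // hello.cpp -- the same portable source builds for either ISA:
    //   g++ -O2 hello.cpp -o hello-x86                      (native x86-64 build)
    //   aarch64-linux-gnu-g++ -O2 hello.cpp -o hello-arm64  (cross build for ARM servers)
    #include <cstdio>

    int main() {
    #if defined(__aarch64__)
        std::puts("built for AArch64");  // what you'd run on Graviton2/Altra
    #elif defined(__x86_64__)
        std::puts("built for x86-64");
    #else
        std::puts("built for some other target");
    #endif
    }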

bluGill

It might or might not be hard to compile for a different CPU. Intel lets you play fast and loose with multi-threaded code without as many race conditions. As a result, code that works fine on Intel often randomly gives wrong results on ARM. Fixing this can be very hard.

Once it is fixed you are fine. Most of the big programs you might use are already fixed. Some languages give you guarantees that make it just work.
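
(An illustrative C++ sketch of the usual failure mode: x86's strong TSO ordering keeps the two stores below visible in order even if the atomics are too weak, so buggy code happens to pass there; ARM's weaker model may reorder them. The release/acquire pair shown is the portable fix.)

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int>  data{0};
    std::atomic<bool> ready{false};

    void writer() {
        data.store(42, std::memory_order_relaxed);
        // With 'relaxed' here instead of 'release', x86 still publishes the
        // stores in order, but ARM need not -- the classic "works on Intel" bug.
        ready.store(true, std::memory_order_release);
    }

    void reader() {
        while (!ready.load(std::memory_order_acquire)) {}   // pairs with the release store
        assert(data.load(std::memory_order_relaxed) == 42); // guaranteed on every ISA
    }

    int main() {
        std::thread w(writer), r(reader);
        w.join();
        r.join();
    }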

FnuGk

What is different on Intel that lets you play fast and loose with multithreading? Two threads reading and writing the same memory area without any locking would cause problems regardless of the ISA, or am I missing something?

jfkebwjsbx

Amazon marketing claims are not something you should trust.

_msw_

Disclosure: I work at AWS on cloud infrastructure

It's good to be skeptical. I always encourage folks to do experiments using their own trusted methodology. I believe that the methodology engineering used to support this overall benefit claim (40% price/performance improvement) is sound. It is not the "benchmarketing" that I personally find troubling in the industry.

ksec

> What is the selling point of an ARM server? ... Are they significantly cheaper per GHz*core?

In the context of AWS:

They are cheaper for some specific workloads on AWS.

Especially since each vCPU on ARM Graviton2 instances is an actual CPU core, while a vCPU on Intel/AMD instances is a CPU thread.

And in general, AWS offers the Graviton2 instances at a 20% discount per vCPU compared to the AMD/Intel instances.

lsofzz

> Especially since each vCPU on ARM Graviton2 instances is an actual CPU core, while a vCPU on Intel/AMD instances is a CPU thread.

Thank you for that information. Is there a reference that documents this somewhere?

ksec

It is clearly listed on the AWS Instance Types page [1]:

> Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for M6g instances, A1 instances, T2 instances, and m3.medium.

> Each vCPU on M6g instances is a core of the AWS Graviton2 processor.

> Each vCPU on A1 instances is a core of an AWS Graviton Processor.

[1] https://aws.amazon.com/ec2/instance-types/

bluGill

Less electricity used. Air conditioning is a big cost in large data centers. Lower-power CPUs mean less heat, which means less AC needed, which drives down total costs.

Of course different CPUs can do different amounts of work per amount of electricity used, but ARM generally works out better on a watts-per-unit-of-work basis.

bluedino

In the past, Google said they would switch to POWER if they could get a 10% energy savings by doing so.

bluGill

Facebook has hinted (they won't give real numbers) that adding a new compiler optimization lowered their electric bill by a few hundred thousand dollars per year.

nullifidian

How come there isn't a trademark issue with NVidia? I was very confused for a moment.

dbancajas

"Ampere" can't be trade marked since it's a name of a scientist? Unless they are operating on the same market/segment and can prove there is willful intent to defraud customers? probably a hard sell.

nullifidian

So is Tesla. And Ford is the name of an entrepreneur. Are these also not trademark-protected?

> they are operating on the same market/segment

They are. It's called computation.

> willful intent to defraud customers

Is that a requirement? I doubt it.

BTW, I only clicked the link because I thought of Nvidia's product, so they are definitely getting eyeball traffic due to the name.

UPD: I recognize that I'm unlearned in trademark law, so I'm not insisting on anything.

klelatti

Ampere was founded (and presumably the name registered) in 2017; Nvidia's Ampere was announced in 2020.

Ampere had products on sale in 2019.

If there is a case, I can't see Nvidia winning it.

the_hoser

The name of the company is Ampere. The name of the product is Altra. Trademarks don't automatically apply to all usages of the word.
