Hacker News
8 days ago by dathinab

I see Intel not being dominant, having to take a back seat, shrinking a bit.

But dying??

Nup, I'm not seeing this at all.

I mean, AMD survived for years in a situation much worse than the one Intel is in now.

Intel's production nodes might be behind TSMC's, but they are still quite valuable; not all chips need 5nm processes. Heck, we could say most chips don't need it.

Intel is still innovating; just because they currently lag behind doesn't mean it will stay that way.

Even while lagging behind on the production node, Intel still managed to produce chips which are often not much worse than the competition's.

Intel is not just about CPUs.

Etc.

So no, Intel is not dying at all.

It's taking damage and has lost dominance, but it still has a fairly good shot at survival and might even take back dominance (in 2-3 years). Or it might not, but still stay competitive.

And if we look at the world situation, TSMC might not be a thing anymore in a few years. It would make me super angry, but things are heading in that direction (China-Taiwan conflict).

8 days ago by dylan604

But not being hyperbolic doesn't drive clicks. Sort of like all of the headlines about Musk no longer being the richest man because the stock prices fell. Whoopee friggin boohoo. He still has more money than >99% of all humans on the planet.

In sports it's fun to say that 2nd place is just the first loser, but that's all in jest and friendly banter. When making things like this overly dramatic, it serves nobody except those trying to short the stock.

7 days ago by yboris

Side note: we, here on HN, all likely have more money than 99% of all humans on this planet [0] (the link is about income; wealth is even more unequally distributed).

Earning more than $60k/year puts you in the 1% already. [1]

[0] https://income-inequality.info/

[1] https://howrichami.givingwhatwecan.org/how-rich-am-i

7 days ago by perl4ever

Some people on HN in fact have salaries under $60K. Not everyone here is an American or works in the private sector, either.

I would be fairly surprised if $60K is top 1%, even if you have the entire world population in the denominator. But when I checked your link, it's not using the entire population for the denominator (which is correct, IMO), so I don't think it's even close.

In the US, there are about 110M full-time wage earners (out of a 330M population), with the median being ~$50K and the top quartile ~$80K. My very rough estimate based on that might be 45M people >$60K. That is a lot more than 110/330 * 1% * 7.8B = 26M, before adding in Europe (lower wages but twice the population) or anywhere else.

Source: https://www.bls.gov/news.release/pdf/wkyeng.pdf
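
For what it's worth, the arithmetic holds up. A quick back-of-envelope sketch (Python; all inputs are the rough figures quoted above, and the ~41% share of earners above $60K is my own interpolation from the median/top-quartile numbers, not a BLS statistic):

    # Back-of-envelope check of the estimate above. All figures are the
    # rough numbers quoted in this comment, not authoritative statistics.
    US_POPULATION = 330e6        # ~330M people in the US
    US_FULLTIME_EARNERS = 110e6  # ~110M full-time wage earners (BLS)
    WORLD_POPULATION = 7.8e9

    # If the whole world had the US ratio of full-time earners to
    # population, a global "top 1% of earners" would number:
    global_top_1pct = US_FULLTIME_EARNERS / US_POPULATION * 0.01 * WORLD_POPULATION
    print(f"global top 1% of earners: ~{global_top_1pct / 1e6:.0f}M")  # ~26M

    # Rough US count of earners above $60K, assuming ~41% of full-time
    # earners clear that bar (median ~$50K, top quartile ~$80K):
    us_above_60k = US_FULLTIME_EARNERS * 0.41
    print(f"US earners above $60K: ~{us_above_60k / 1e6:.0f}M")        # ~45M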

7 days ago by tasuki

> He still has more money than >99% of all humans on the planet.

Understatement much? You could also say Elon has more money than 50% of humans on the planet.

7 days ago by dylan604

I don't understand your point. 99% > 50%. My original statement is more exclusive. Want to elaborate on what your point is?

7 days ago by Torkel

Fast forward a few years. Look at the time it takes to design chips. Look at where the market and tech are going (M1 and AMD especially). Compare with RIM and Nokia. There is a "roadrunner" effect where a colossal company will continue for a few years on pure momentum. There are so many things speaking against Intel, and the facts you bring up as speaking for Intel seem more like short-term momentum to me. I would guess at the following conversation in 5-10 years:

- "How did Intel die?"

- "Well, first slowly. Then quickly."

7 days ago by totalZero

Semiconductor demand is so high right now that any company that makes processors is going to print money in the short term. It's a "right place, right time" situation. Intel's long-term problem is loss of market share, but if you're going to lose market share, you're more likely to survive if it happens when the whole market is growing.

There's not much that a company so heavily invested in x86 can do in the face of Apple Silicon, aside from executing well on 7nm and bringing it to market ahead of schedule.

I don't see much of a similarity between Intel CPUs and Blackberry phones, considering that many flagship notebooks (even Apple's) have Intel inside. I for one am heralding the new era of voracious semiconductor demand, and am rooting for Intel to get its game face on and turn things around before the opportunity passes.

7 days ago by kllrnohj

Intel's chip designs are still fine, though. Tiger Lake's IPC gains are right in line with Zen 3's IPC gains over Zen 2. And Xe looks to be a great iGPU.

Which means Intel is more or less on equal design footing with AMD, they're just having trouble getting it built.

And Intel has multiple huge revenue streams that are not in question at all. All their networking stuff, for example (everyone, even on AMD, still goes for motherboards with Intel LAN instead of anyone else's, and same with wifi). And things like Optane. And all their tooling & software suites. Intel's CPU division is struggling to get wins, but Intel overall as a company isn't.

7 days ago by bcrl

Throw in power consumption and Intel's stagnation starts to show. Look at their latest chips having a 250W power limit when turboing -- almost double AMD's! That's a pretty hard barrier to overcome without catching up on process technology.

7 days ago by incrudible

The idea here must be that somehow Apple or AMD have figured out the magic sauce to making faster processors, while Intel of all companies is oblivious to it and won't be able to catch up.

In reality, there is nothing that Apple or AMD are doing that Intel could not also do, isn't already doing, or isn't in a hurry to also do.

Consider that at 14nm+++++, Intel still manages to outpace AMD with its 7nm CPUs and latest architecture. AMD competes mainly on price, which is not a great position to be in. Intel's profit margins are massively better.

It's possible that x86 will become less relevant in the future, but in that case Intel will still be one of the best chip designers in the industry. They will once again produce ARM chips, perhaps borrow one or two insights from Apple's architecture. They will continue to outsource manufacturing to other fabs, perhaps even become fabless.

7 days ago by willis936

>In reality, there is nothing that Apple or AMD are doing that Intel could not also do, isn't already doing, or isn't in a hurry to also do.

On the contrary, Intel has refused to do any architectural innovation for nearly 20 years. They are capable, but overly conservative. They should have been in a hurry 10 years ago. It's well documented that they weren't and were resented for it. AMD came up with a winning strategy (multi-die packaging) and Intel still doesn't seem in a hurry to catch up. You could swap "AMD" in that sentence with "Apple" and maintain the meaning.

When superscalar execution or simultaneous multithreading came out, you didn't see competitors milling about, waiting for customers to forget about the advantages.

7 days ago by Tuna-Fish

> Consider that at 14nm+++++, Intel still manages to outpace AMD with its 7nm CPUs and latest architecture. AMD competes mainly on price, which is not a great position to be in. Intel's profit margins are massively better.

Have you been living under a rock for this past year? This is not true. AMD is beating Intel in performance in every market segment right now.

8 days ago by icedchai

Indeed, a similar situation happened in the mid 2000's before Intel moved to the Core architecture. I wouldn't be surprised if something like this happened again.

8 days ago by thisisnico

The Core architecture in the mid 2000's was such an incredible shot back to the top. I remember seeing the benchmarks absolutely dominate AMD on every level: power, performance, TDP, cost, etc. This was almost immediately after the Athlon 64 and Athlon 64 X2 started to gain market share, spanking Intel's Pentium 4 at the time. IIRC Intel ended up dropping the Prescott architecture, moving to the Pentium M architecture for development of the Core CPUs, moving away from chasing max frequencies and instead looking at IPC. Then Nehalem dropped (the first Core i CPUs), making the gap even larger. Q6600: quad core, low power, decent cost. That was the CPU to have at the time. I miss the days reading about all of this.

8 days ago by nwmcsween

This just puts into perspective how stagnant CPUs have been: ~14 years and we are just starting to move to more than 4-core processors on prosumer systems.

7 days ago by xattt

At that time, Intel had a lead with the Tualatin microarchitecture on the PIII. This was developed into the Pentium M and Core. As far as I can tell, they don’t really have anything of that sort on the market right now that they could “evolve” into something new.

7 days ago by l33tman

I retired my Q6600 a few weeks ago, because CyberPunk 2077 couldn't run on it (lacking AVX instructions). Worked fine up until then in my gaming rig :)

I still have my Core2 Duo in my HTPC... it's starting to feel doggy though.

8 days ago by dis-sys

I still have my Q6600, what an incredible processor at its time.

8 days ago by AmVess

Intel has been bleeding engineers to other firms for a while now, and new talent has been skipping them altogether. These are not good developments for a company that depends on brain power to make their products.

7 days ago by icedchai

Intel is an enormous company with over 100,000 employees and an enormous stockpile of cash. I think they'll be okay.

8 days ago by gtirloni

> new talent has been skipping them altogether

I'd be interested in hearing more about this. Are new graduates actually not applying?

8 days ago by ardy42

> Indeed, a similar situation happened in the mid 2000's before Intel moved to the Core architecture. I wouldn't be surprised if something like this happened again.

Though that may have been a bit easier for them to solve, because the main thing they had to do was scrap NetBurst (Pentium 4) and return to a derivative of their old architecture (which they were still using for the notebook market).

8 days ago by raverbashing

Correct, they only had to quit being led by their own marketing BS about "higher clock rates are faster"

8 days ago by chrisseaton

I’m almost positive Intel have tech ready to deploy to be on top again. They will have been developing it... but won't have needed to use it until they came under competitive pressure.

8 days ago by monocasa

Jim Keller's apparent rage quit has me questioning that.

I was bullish on Intel until quite recently. I was able to convince myself that they were taking the L on 10nm to focus completely on a full, true EUV transition, so as to be at the top of the game just as everyone else was hitting that wall, and to dominate again. That turned out to be overly optimistic.

8 days ago by imtringued

They can do it but will they? Intel isn't known for being a well managed company internally.

8 days ago by karmasimida

> might even take back dominance (in a 2-3 years)

Well, they won't even have that EUV capacity in 2-3 years. ASML is almost fully booked at this point.

The way the industry is moving to custom chips won't wait for Intel. And realistically, the trend is already there: the next stage of cloud evolution will be application-specific chips for cloud-native services, like storage/DB/ML, etc. There is zero chance it will scale back.

And I am not sure why everyone here is taking 'dying' so literally. Losing influence is another form of dying. It will take a long time for a company to cease to exist after it becomes irrelevant.

8 days ago by tw04

Let's all just come to a common agreement: there's no planet on which the US military allows Intel to die. Period. End of story.

Having a US based company with fabrication facilities on US soil is vital to national security and will be for the entirety of my lifetime and likely my grandchildren's lifetime (if I ever have any). For all the bluster of the investors wanting to chop Intel up: they will find a swift and full rejection from the government unless their plan includes selling the assets to another US firm.

TSMC building a fab in the US won't be "good enough" - the US military will not allow the security of our country to be in the hands of a foreign corporation.

Now whether Intel remains a dominant player? Who knows, but they aren't going anywhere. I wouldn't be even a little bit surprised if, for instance, the 3-letter agencies' cloud contracts suddenly included a clause that all of their infrastructure must run on Intel CPUs.

I'm not saying it's right, I'm not saying it's fair, but I don't see any other outcome.

8 days ago by filereaper

I'd agree with you, but this has already happened when IBM sold its fab [1] to GlobalFoundries, which is owned by Mubadala Investment Company from the U.A.E.

IBM's fab was on the DoD trusted foundry list ([2], slide 13); Intel's is not.

There's an article that already rang this alarm around national-security when IBM divested itself of its Fab. [3]

1. https://www.computerworld.com/article/2837426/ibms-chip-busi...

2. http://jteg.ncms.org/wp-content/files/documents/DoD%20Truste...

3. https://semiengineering.com/a-crisis-in-dods-trusted-foundry...

7 days ago by throwawayp123

Intel does have government contracts today[1][2], however, so some aspects of that situation may have changed with time.

1. https://www.reuters.com/article/us-intel-manufacturing/intel...

2. https://www.wsj.com/articles/trump-and-chip-makers-including...

8 days ago by aemreunal

I agree with your assessment of Intel's role in US national security, though Texas Instruments also owns and operates chip fabs in the US [1]. They most likely aren't as sophisticated as Intel or able to create similar chips, but Intel most likely isn't the only source of chips in the US.

[1]: https://en.wikipedia.org/wiki/List_of_semiconductor_fabricat...

8 days ago by 0xy

If they're propping Intel up as their last bastion of chip superiority they've already lost. The US knows they need to defend Taiwan with the full weight of their military resources. This means war in the event of an invasion or takeover.

If the US loses influence over Taiwan and subsequently TSMC, they've lost. In this phase of global geopolitics, whoever controls the chips has the influence. You can't do anything without compute resources.

8 days ago by ahepp

How important is it, strategically, to have the most advanced process tech? As far as I know, military tech tends to lag commercial significantly.

8 days ago by imtringued

Old military tech also has a longer shelf life. There are lots of military installations that were built before the internet.

8 days ago by abacadaba

Only so much as it leads to faster AI.

8 days ago by thisisnico

Interestingly, there was a time when Intel's Fabs were the industry's absolute best. They were far ahead.

7 days ago by oatmeal_croc

I don't think the military alone can sustain Intel's R&D division, which has proven to be lacking compared to the competition.

8 days ago by exintelwithop

I worked at Intel years ago.

At that time, the ratio of excellent to useless people I experienced there was 1:40. That is way out of whack. I am certain it has only gotten worse, because all the 1-in-40s I knew left.

They are going to have to go and get people who have modern skillsets and mindsets and those people don’t want to live in Hillsboro. And it’s going to be expensive.

And those people don’t tolerate dead weight.

My take is that Bob Swan was only ever a placeholder. The board needed to be put in the worst possible negotiating position: Literally on their knees begging because they are getting their asses kicked in every single competitor category.

A smart guy like Pat wouldn’t take the CEO job until they (a) are crawling on all fours and (b) give him the latitude to clean house as vigorously as he wants and (c) a preposterous paycheck.

No one intelligent would want that job without all three. That’s why Bob got to babysit that role for two years.

8 days ago by MichaelApproved

> The ratio of excellent to useless people I experienced there was 1:40

This has to be hyperbole but the rest of your comment makes me think you're being serious.

I have only worked at small companies (<50 people), so I don't have first-hand experience with how waste can hide in large companies, but 1:40 excellent to useless people? I can't believe it'd even be half that.

My guess is you thought those people were useless because you didn't understand the role they played in the company.

However, I'll admit again that I don't have first-hand experience. I'd genuinely love to hear from others who have experience with large companies.

8 days ago by nurspouse

Disclaimer: I worked at Intel.

Calling them useless may be a bit of a stretch. If you said "mediocre", I'd believe it. Also, in my experience there, the ratio is not uniform: you could go to a department with lots of top-notch folks, and then to another department where everyone is mediocre.

Some data points:

Asked a person in a SW interview to write a function to calculate the factorial. The candidate was repeatedly told he could use whatever language he wanted. He insisted on using a language he had little experience in (C++), and not the one he was experienced in (Python). He was given latitude to use any method he wanted: recursive, iterative, etc.

He didn't even come close to solving it.

He was still hired.

He is not an outlier when it comes to SW folks at Intel. Yes, Intel definitely has some really good SW engineers, but the 1:40 ratio easily applies in that sphere.
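
For reference, here's the sort of answer the question was presumably after (a minimal sketch in Python rather than the C++ the candidate chose; either form would have been acceptable):

    def factorial_iterative(n: int) -> int:
        """n! computed with a simple loop."""
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def factorial_recursive(n: int) -> int:
        """n! via the recurrence n! = n * (n-1)!, with 0! = 1."""
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    assert factorial_iterative(5) == factorial_recursive(5) == 120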

Up till 2014, my department insisted on using cvs for version control. They said they saw no benefit to git (or even SVN).

In another team I was in, we were stuck with SVN. The senior manager wouldn't allow us to use git because he was sure it would be over the heads of many of the employees (sadly, he may have been right). Then in 2017, when IT announced that SVN was being EOL'd, they switched to MS TFS to avoid git (even though TFS itself recommends git!). I still hear from folks there about the resistance to switching to git. And I've heard the following comment from multiple folks:

"I don't want to use git because it comes from Microsoft!" (conflating github with git, and upon further discussion realizing they have no idea MS bought Github - they think Git and Github both originated from MS).

BTW, I'm not a git fan boy - I much prefer mercurial.

If you're good at SW and somewhat up to speed with current technology, Intel is a very frustrating place to work.

But then again, their compensation for SW is simply not competitive:

https://www.levels.fyi/company/Intel/salaries/Software-Engin...

8 days ago by elihu

I work at Intel. Maybe my experiences are not representative, but everyone I've worked with is competent at what they do, and most people are helpful and friendly. But the trouble is that isn't enough. The really effective programmers aren't just good at writing code, they're also good at designing elegant interfaces, and at understanding customer problems, and strategic thinking, and so on. Competent programmers aren't useless if they don't have those skills, but you need some people that do if you want to make great products rather than just check off boxes in feature lists.

Intel doesn't have as many of those great programmers as I would like, but I think their bigger problems are organization rather than individual. There just isn't enough high-level coordination and sharing of information. Every group just kind of does their own thing, and if you want to know how something works you have to know who to ask. Too much tribal knowledge (which often has a short expiration date) and too little writing stuff down in one place where it's easy to find.

It's funny you mention git, because I joined Intel from a startup that used SVN, and the one bright spot about Intel's technical culture was that they used git pretty extensively. I don't think I've ever used a non-git source repo at Intel. I assume that there was some major internal struggle to get to that point, but it was before my time.

8 days ago by schmookeeg

Oh it's reality. My experience at Sony was about 1:25, but other big dumb companies I have worked for have met and exceeded the 1:40 rule. It is amazing how process generates dead weight and worthless humans and nobody seems to care.

At one point I posited that Sony USA were merely a political pawn for Sony Japan. Hire a few tens of thousands of Americans, gain political capital -- and care not what they actually do all day. In our case, it was death by scrum and sticky notes.

8 days ago by dpe82

I worked for a tiny tiny unit of Sony USA that was itself once a startup that Sony had acquired. We were 25 or so engineers with our own little profitable(ish) product in our own office in Madison, WI. I once went to HQ in San Diego and couldn't believe how many people there were doing... well, I never could quite figure out what.

8 days ago by TwoBit

Wow I worked at Oculus and the ratio was at least 30/40, if not higher.

8 days ago by dpe82

Large companies can hide inefficiencies and incompetence of staggering proportion. Often there's little incentive to really improve things, but tons of bureaucracy that helps maintain the status quo or let it slide ever worse. People can only judge how good something is against their experience, so if you've only ever worked at that bigco and only known incompetence then you might not even realize anything is wrong. Meanwhile the proverbial frog slowly boils...

8 days ago by systemvoltage

1:20 is pretty realistic. Seen with my own eyes and ears. I wasn't the 1, I was part of the 20. Didn't do shit all day, mainly because the incentives were not aligned to motivate me. I'd get the same salary whether or not I put in the extra effort. So why bother. Knew one guy who had a vision and the stamina to push through with PowerPoint precision. He was the 1.

8 days ago by ZephyrBlu

Well if you think about it, it's not that hard to imagine how it happens.

First, you have Price's Law [0] which states that half the work is done by the square root of the number of people. This in itself already implies a lot of useless people, especially at a company the size of Intel.
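
To put rough numbers on that (a tiny illustrative sketch; the headcounts are arbitrary except the ~100,000 figure mentioned elsewhere in the thread):

    import math

    # Price's Law, as stated above: half the work is done by the
    # square root of the number of participants.
    for headcount in (50, 1_000, 100_000):  # small shop, mid-size, Intel-scale
        contributors = math.isqrt(headcount)
        print(f"{headcount:>7} employees -> ~{contributors} people do half the work")

    # 100,000 employees -> ~316 people doing half the work (~0.3% of staff),
    # which is how ratios like 1:20 or 1:40 stop sounding implausible.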

You might also have whole orgs/departments where pretty much everyone is phoning it in, so that would drive it up as well.

The way I see it, the more layers there are and the more removed the people at the top are from the work, the greater the chance of useless people being able to fly under the radar.

[0] https://dariusforoux.com/prices-law/

8 days ago by imtringued

Interesting. I am pretty sure that it's not a case of individual failure. It has more to do with organizational structure.

Most of the low-hanging fruit can be picked by one individual. Instead of trying to pick more apples from the same tree, people should make sure that there are not too many people assigned to the same tree.

The article confirms my idea:

>Look at your current profession. Are you in a position to create substantial value? If the answer is no, move on to a different place where you CAN.

8 days ago by jrumbut

It's odd that when people from Intel talk about Intel on the Internet or with me in person it's so frequently about high level strategy and C level/Board of Directors politics. The way they're spoken of exhibits a lot of careful examination.

I can't think of another company like that, maybe Microsoft years ago or GE. Is this something I'm imagining or is that a frequent topic of conversation at all levels of Intel?

8 days ago by ativzzz

Probably because nobody working at Intel was doing any actual work and were just reading/talking about Intel politics.

8 days ago by robk

Read Mini-Microsoft.

8 days ago by chx

Agreed on serious house cleaning. There were rumors the single, absolutely botched CPU in the Cannon Lake family was released because certain Intel managers had bonuses tied to a 10nm launch. All those people need to be fired, that's for sure.

8 days ago by SoSoRoCoCo

Intel innovates in process. Everything else is ruled by backwards compatibility and frenetic management scared to stay the course. (The vast majority of projects are killed if they don't tape out in ~2 years.)

Intel will shift to a TSMC model. They have the best fabs on the planet, and the best fab engineers. I believe it is something like 3 million dollars lost per minute if they are idle. They already started moving in that direction a few years ago; I suspect this will be their final form.

IMHO: The only thing holding them back from the transition is the hundreds of small boondoggle-groups staffed by old-timers too scared to retire and too scared to do something daring, yet who somehow still hang on to their hidey-holes. They lost a ton of key architects to Apple a few years ago, which I also suspect is the reason why the M1 is so badass.

...and if you really want to get sentimental, here's an AMD poster we had in our cubes back in ~1991:

https://linustechtips.com/uploads/monthly_2016_03/Szg2Ppo.jp...

The Sanders-as-Indiana bit was both funny and infuriating...

(The Farrah Fawcett-looking woman was Sanders's bombshell wife, compared to Grove at the time, who drove a beat-up old car.)

8 days ago by supernova87a

>I believe it is something like a 3 millions dollars lost per minute if they are idle.

I think that's overestimating, although you are right it is damn expensive.

Order of magnitude, I would say it's more like: a fab has a lifetime of 2-3(?) years and costs $10B to build and amortize. So every minute of idled factory capital = $6,000 in pure cost of the facility.

(although, if you think of it in potential lost revenue terms, then you may be more correct.)

(Actually, we can also do that arithmetic. According to https://www.forbes.com/sites/willyshih/2020/05/15/tsmcs-anno...

240,000 wafers per year * 500 chips per wafer * $100? = $12B per year revenue. That also works out to something like $22k per minute.)
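
Spelling that arithmetic out (a sketch; every input is a rough figure from this comment or the linked article, including the guessed $100/chip):

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    # Capital-cost view: a $10B fab amortized over ~3 years.
    fab_cost, amortization_years = 10e9, 3
    idle_cost_per_minute = fab_cost / (amortization_years * MINUTES_PER_YEAR)
    print(f"capital cost per idle minute: ${idle_cost_per_minute:,.0f}")  # ~$6,300

    # Lost-revenue view: 240k wafers/year * 500 chips/wafer * $100/chip.
    annual_revenue = 240_000 * 500 * 100  # $12B/year
    revenue_per_minute = annual_revenue / MINUTES_PER_YEAR
    print(f"revenue per minute: ${revenue_per_minute:,.0f}")              # ~$22,800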

8 days ago by chrisseaton

> Fab has lifetime of 2-3(?) years

Many Intel fabs are from the early 2000s. Some are from the 90s. What do you mean 2-3 years?

https://en.m.wikipedia.org/wiki/List_of_Intel_manufacturing_...

8 days ago by monocasa

I think they're referring to each line as a new fab, rather than the complex of buildings they're in, which is pretty fair.

8 days ago by systemvoltage

3 million dollars might be the cost of one FOSB (carries some 18-24 wafers) if you dropped it :)

8 days ago by mhh__

> Intel will shift to a TSMC model.

I think fabbing for outsiders would probably be a good idea, but splitting the fabs out from the designers seems like a bad idea.

8 days ago by SoSoRoCoCo

Yeah, agreed. The designers / integration will probably get the newest nodes--and the headaches with getting their yields up! I suspect the older high-yield nodes will be filled with tenants pretty quickly. I don't have much knowledge of how this is going, at least from the inside.

8 days ago by ChrisIsTaken

"Higher yield nodes" are full, that's why Intel is outsourcing to TSMC. Intel has already sold every 14nm, 22nm, 32nm and 45nm wafer it can make. They have zero capacity, which is an amazing problem for a "dying" company to have.

Even if they axed all process R&D and returned the cash to shareholders, due to the eye-watering costs of designing at 10nm and lower, I expect there will be a lot of business to keep their fabs turning over for the next decade.

8 days ago by mlyle

Intel has fabbed for (larger) outsiders for a few years (2014):

https://www.intel.com/content/www/us/en/foundry/overview.htm...

In late 2018, it was rumored they were exiting the business because of low uptake and because of constrained supply of their leading edge process, but it doesn't look like that happened.

8 days ago by userbinator

> Everything else is ruled by backwards compatibility

That's one of the biggest reasons to choose them. If Intel's "x86" stops being x86, they'll lose one of their biggest competitive advantages.

See the first comment here, for example: http://www.os2museum.com/wp/vme-broken-on-amd-ryzen/

7 days ago by SoSoRoCoCo

This question has raged since the 90's. I worked on the Itanium (Madison and McKinley), and the VLIW architecture was brilliant. This was during the time of the Power4 and the DEC Alpha, two competing non-x86 architectures that were dominating the "workstation" market (remember that term?). It looked like the server world was going to have three architectural options (Sun was dying, and Motorola's 68000 line wasn't up to the task.)

Microsoft even had a version of NT3.5 for The Itanic. It seemed we were just about to achieve critical mass in the server world to switch to a new architecture with huge address space, ECC up the wazoo and massive integer performance.

Then the PC revolution took off with Win95, and the second war with AMD happened (and NexGen sorta). This couldn't be solved with legal battles. This put all hands on deck because there was SO much money to be made with x86 compatibility. The presence of x86 "up and down" AMD & Intel's roadmap took over the server market as well: it was x86 all over the place.

And that, chil'ren, is why x86 was reborn in the 90's just as it was close to being wiped out.

Now Apple has proven you can seamlessly sneak in a brand-new architecture, get hyooj gainz, and we are none the wiser. This is fantastic news. I think we are truly on the cusp of x86 losing its hold on the consumer space after almost 35 years of dominance.

7 days ago by gautamcgoel

> The Itanic

Lmao, never heard that one before

7 days ago by fvv

Thanks for your comment

8 days ago by Ericson2314

Yes, but losing that security blanket will make them better.

8 days ago by fomine3

Not much, as you can see on Zen3.

8 days ago by yjftsjthsd-h

In fairness, Intel probably has enough people left who remember Itanium and are scared of trying big new innovation for an honestly legitimate reason.

7 days ago by api

Lately I've started to wonder if Itanium wasn't a good idea badly executed. I wonder, if you went back in time and invested more in compilers and ecosystem, whether it could have succeeded? VLIW could really reduce complexity by dumping a lot of instruction re-ordering machinery and eliminating the need for tons of baroque vector instructions for specific purposes.

The biggest thing Intel didn't do with Itanium was release affordable AT/ATX form factor boards. They priced it way, way too high, chasing early margins in "enterprise" without realizing that market share is everything in CPUs. This is the same mistake that Sun, DEC, and HP made with their server/workstation CPUs in the previous era. With a new architecture you've got to push hard for market share, wide support, and scale.

If I'd been in charge I would have priced the first iterations of Itanium only a little above fab cost and invested a lot more in compiler support and software.

Edit: also whatever happened to the Mill? The idea sounded tenable but I have to admit that I am not a CPU engineer so my armchair take is dubious.

Anyway, the ship has sailed. Later research in out-of-order execution has yielded at least similar performance gains, and post-x86 the momentum is behind ARM and RISC-V.

7 days ago by jsmith45

I'm not sure that pushing the complexity to the compiler makes as much sense.

One good side of x86-style instruction sets is that there is a lot you can do in the CPU to optimize existing programs further. While some really advanced compiler optimizations may make use of the internal details of the implementation to choose what sequence to output, these details are not part of the ISA, and thus you can change them without breaking backwards compatibility. Changing them could slow down certain code optimized with those details in mind, but the code will still function. And I'm not even talking just about things like out-of-order execution. Some ISAs leak enough details that just moving from multi-cycle in-order execution to pipelined execution was awkward.

This ability of the implementation to abstract away from the ISA is very handy, and some RISC processors that exposed implementation details like branch-delay slots ended up learning this lesson the hard way. Now, the Itanium ISA does largely avoid leaking implementation details like the number of scalar execution units or similar, but its design does make certain kinds of potential chip-side optimizations more complicated.

In the Itanium ISA the compiler can specify groups of instructions that can run in parallel, specify speculative and advanced loads, and set up loop pipelining. But this is still more limited than what x86 cores can do behind the scenes. For an Itanium style design, adding new types of optimizations generally requires new instructions and teaching the compilers how to use them, since many potential optimizations could only be added to the chip if you add back the very circuitry that you were trying to remove by placing the burden on the compiler.

Even some of the types of optimizations Itanium compilers can do that mimic optimizations x86 processors do behind the scenes can result in needing to write additional code, reducing the effectiveness of the instruction cache. This is not surprising. The benefits of static scheduling are that you pre-compute things that are possible to pre-compute like which instructions can run in parallel, and where you can speculate etc. And thus you don't need to compute that stuff on-die, and don't need to compute it each and every time you run a code fragment. But obviously that information still needs to make it to the CPU, so you are trading that runtime computation for additional instruction storage cost. (I won't deny that the result could still end up more I-cache efficient than x86 is, because x86 is not by any means the most efficient instruction encoding, especially since some rarely used anymore opcodes hog some prime encoding real-estate.)

Basically I'm not sold on static scheduling for high-performance general-purpose CPUs, and am especially not sold on the sort of pseudo-static scheduling used by Itanium, where you are scheduling for instructions with unknown latency that can differ from model to model. Complete static scheduling, where you must target the exact CPU you will run on and thus know all the timings (like the Mill promised), feels better to me. (But I'm not entirely sure about install-time specialization like they mention.)

But I'm also no expert on CPU design, a hobbyist at best.

8 days ago by monocasa

Any analysis that says that Intel's death is being precipitated by AMD and Apple rather than by TSMC is pretty much worth ignoring.

The fact that they're almost equivalent to those two, who are on 5nm, while Intel is at 14nm+++++++++ shows how killer the RTL and tape-out folk at Intel are.

The real fight has always been at the fab layer.

8 days ago by ChrisIsTaken

Apple did poach a few of Intel's better architects and killed Intel's smartphone ambitions. But the damage occurred two decades before that.

Intel is dying because they should have entered the foundry business in the early 90's. Process was always their forte; they were never good at CPU architecture design anyway, as iAPX/i860/Itanium amply demonstrated.

Instead they blew $100B getting into antivirus, network security, Mobileye, Infineon, and a dozen other failed businesses; $50B in illegal kickbacks keeping AMD out of the unprofitable low-end laptop market; and another $50B of "contra revenue" subsidizing their inferior me-too products against the mobile SoCs they should have been fabbing in the first place.

8 days ago by usefulcat

I'm curious about how Apple "killed Intel's smartphone ambitions"?

Otellini on the iPhone:

"We ended up not winning it or passing on it, depending on how you want to view it."

"At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

8 days ago by AtlasBarfed

"And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

And in hindsight, he was a moron, unqualified to lead an engineering company.

When people point to MBAs ruining Intel, point to that statement.

You could invest in emerging markets.

Or rearrange the financial-statement deck chairs to hit your stock options and roll your eyes at the nerds.

8 days ago by mardiyah

Nice insight...

8 days ago by passive

Yeah, exactly this.

Intel has really stumbled on fabrication the last 2-3 years, but the idea that they won't correct that ignores decades of precedent.

I'm an AMD fanboy from the K6 days, and I love what Lisa Su has been able to deliver for them, but if it wasn't for Intel's stumbles, they would still be single digits at best in most markets.

Look at their progress from Pentium 4 to Core 2 Duo (makes me feel real old to see that was 15 years ago). We probably won't see quite as dramatic an improvement over the next 3 years, but I'll be stunned if AMD still has the performance lead at that point.

Apple's a wildcard, but they aren't going to sell their chips, and they represent a fairly small part of the PC market. No other non-x86 chips have made even a tiny dent in the PC market, and even in the datacenter, it's still mostly x86.

The absolute worst I can imagine happening to Intel over the next decade is that they no longer define the CPU market. But that's a long, long, way from dying, and will probably be good for the market and good for Intel in the long run.

8 days ago by systemvoltage

Curious, why were you a fanboy? Is it like rooting for team red and all the fun that goes into it? Honestly wondering since I have no feelings for any company even if their products are amazing.

8 days ago by calvano915

I've been a fan for much the same time period yet never owned an AMD powered device, except maybe a game console? AMD has been useful to keep Intel on top of their game, but Intel has been on top for most of the last 15 years give or take.

8 days ago by passive

It was really about them being an underdog, more than anything. Challenging the big dog and occasionally forcing them into more consumer-friendly behaviors. That said, I still ended up owning mostly Intel CPUs (a lapped Celeron 300A running at ~514MHz in the K6 days).

8 days ago by AtlasBarfed

It's not like 2000, where, if you're behind, a good run of fab research will rocket you forward with some sweet increases in clock frequency.

We're nearing the end of the road on node shrinks; Intel can at best hope to be marginally ahead on process.

What Apple's M1 shows is that a fundamental redesign is needed to get gains. Intel's problems appear to be in design, software, and business execution.

But doing a new ISA, or muscling into ARM, or other radical things requires precisely that design/software/business execution, or it will utterly fail.

M1 is going to open the floodgates for ARM on PCs. That's what will kill Intel.

AMD can probably pivot, I have faith in their designers.

8 days ago by passive

The Pentium 4 to Core 2 Duo leap was based on architecture, not fab tech. It was a very significant shift in chip design philosophy. Intel being ahead in fab tech didn't hurt, but it's far from their only strength.

There's no reason to believe M1 will open any sort of gates for Arm on PC. It's not like other Arm CPUs will be running Mac OS, or M1s will be running Windows. The whole pre-Intel period for Mac OS didn't do anything to diminish x86 on the desktop.

Now, Chromebooks are something of an argument for ARM on the desktop, as they at least seem to provide identical behavior regardless of ISA, but they aren't putting up the necessary numbers quite yet. x86 still owns PC gaming, and has a pretty solid hold on console gaming. I'm not going to be worried at all for Intel until that shifts too.

8 days ago by jabl

The rise of TSMC, or more generally the merchant fab model, is really the big semiconductor story of the past few decades.

Traditional wisdom used to be that if you wanted cutting edge performance, you really needed your own cutting edge in-house fab. The vertical integration advantages of tuning your ASIC design to your process were unassailable. If you couldn't afford your own foundry, tough luck, you suffered a severe penalty in performance and/or cost.

But the merchant foundries, first and foremost TSMC, have really turned that traditional wisdom around. Both in terms of matching, and then surpassing Intel on deploying the latest process nodes, but also in terms of making it possible for 3rd party ASIC developers to use that process as efficiently as an in-house design team would be able to.

8 days ago by trhway

>The fact that they're almost equivalent to those two at 5nm when Intel is at 14nm+++++++++

Only when it comes to single core. And even then Intel has to spend 2x the silicon: the Zen 3 CCD at 7nm has a die size of 80.7 mm², while Intel's 10th-gen 8C die is 170 mm². Of course it isn't the silicon itself that is costly, it is the processing of each wafer, and getting half as many cores per wafer is bound to affect the margin/profit/etc.
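
A crude way to see the wafer economics (a sketch; naive area-only math that ignores die aspect ratio, scribe lines, edge loss and yield, so the absolute counts are illustrative):

    import math

    WAFER_DIAMETER_MM = 300
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

    # Die sizes and core counts quoted above.
    for name, die_mm2, cores in (("Zen 3 CCD, 7nm", 80.7, 8),
                                 ("Intel 10th-gen 8C, 14nm", 170.0, 8)):
        dies = wafer_area // die_mm2
        print(f"{name}: ~{dies:.0f} dies/wafer, ~{dies * cores:.0f} cores/wafer")

    # Zen 3: ~875 dies -> ~7,000 cores per wafer; Intel: ~415 dies -> ~3,300.
    # Roughly the 2x silicon-per-core gap described above.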

And of course Intel has no chance of putting a 16C chip into a normal desktop like AMD does with the 3950X and 5950X (those chips successfully go into the territory of 2x Xeon workstations costing an arm and a leg).

8 days ago by ChrisIsTaken

I'd reckon that, per unit area, Intel's cost on 14nm wafers is going to run to less than 25% of TSMC's asking price at 7nm.

Do keep in mind that a fair amount of a desktop chip is pad space, IO hasn't shrunk since 22nm nodes, DUV steppers are dirt cheap, Intel's foundries are fully amortized, and yields at 14nm are going to be approaching 100%.

8 days ago by CyberRabbi

Intel has at least 20 more years and probably dozens of real chances to reclaim its past market position, and maybe even surpass it.

Intel is a large company with a massive market cap. The bigger they are, the longer they fall. They just have so much runway and market gravity.

For comparison, Apple shot itself in the foot repeatedly for at least 10 years before Steve Jobs rejoined, and it took 10 more years after that before the iPhone came out, which no layman could have predicted.

8 days ago by ChrisIsTaken

Intel has a very impressive balance sheet for a failing company, it has almost no debt and is generating double-digit billions in free cash flow from almost state of the art fabs. They are still number three on process technology, and number one on volume.

The fact that Intel's management have chosen to spend two decades flushing that amazing free cashflow down the M&A toilet instead of fixing their workplace culture and getting their design house in order is what Pat Gelsinger needs to fix.

8 days ago by exmadscientist

Just an extended +1 here. I think you're exactly on point.

Intel is definitely in a hole (that they watched being dug below them and did nothing...), but they can get out. They have plenty of time to do so. The only real problems to overcome are their own internal culture and the sheer difficulty of single-digit-nanometer manufacturing. And the second one is easier.

Don't count them out yet. (But do laugh at them. Because they definitely deserve it, and it just might motivate them.)

8 days ago by arp242

Yeah, exactly; if Intel is a "failing business" then I don't know what a successful business looks like. It's still one of the most profitable companies on the planet.

Look at IBM; while Big Blue was once seemingly omnipotent and they've since fallen from that position, it's still a double-digit billion company with over 300k employees. It's not as big (relatively speaking) as it once was, but it sure didn't die either.

Other companies have died, such as Digital Equipment, but that was the result of 1) a massive paradigm shift in computing and computing equipment (introduction of "personal computing"), and 2) some pretty bad business decisions (not just failing to get a hold of the PC market). Thus far, I'm not really seeing 1) happen, and 2) alone probably won't be enough to completely kill intel, just cripple it temporarily.

8 days ago by m-p-3

And they're still the dominant CPU vendor in most end-user corporate systems. I'm happy to see some Ryzen laptops finally coming out and breaking that dominance, but let's not kid ourselves: Intel is still in a great position, and that won't change overnight.

6 days ago by Shorel

True. At this point both companies, Intel and AMD, are basically selling every single chip they can make.

7 days ago by otherme123

I recently read "The Innovator's Dilemma", and Intel came to my mind immediately. They are big, so they don't bother with low-margin markets. That allows small companies to start stealing market share on the lowest-margin products, while Intel keeps showing awesome balance sheets as it flees from those markets.

Mobile? No, thanks, those are peanuts. Cheap/home PCs? Nah, not those either. PC builders very sensitive to prices? They don't want them. And slowly they are being sweetly cornered in the top market of very expensive servers with very juicy margins, all while feeling the small dogs keep biting upward. Suddenly ARM wants to build for cheap servers, and AMD has a chip that also fights for that top market.

8 days ago by fiddlerwoaroof

I don’t think this is right: it completely ignores the iPod, for one. And the iMac plays a role here too.

8 days ago by jhbadger

And OSX (which came from Jobs via NeXT).

8 days ago by softwaredoug

Feels like Microsoft during the Ballmer years. Instead of Linux and Mac ascendant, you have AMD and ARM... it doesn't seem impossible to think a Satya Nadella could turn things around.

8 days ago by SoSoRoCoCo

Good observation. The only thing that kept MS going was the massive inertia of Win & Office. It bought them time to pivot to the cloud. And now we have things like VSCode, an ElectronJS app, which is the first Microsoft product I've adored in two decades. And TypeScript, which has made the world a better place.

8 days ago by mhh__

Realistically, Intel only have to get one process node right to either be right back in the game or buy themselves years more time to play with. They are currently somewhere between 3-5 times behind TSMC lithographically, but nowhere near that far behind in raw performance, which suggests that they could at the very least knock AMD off their perch in single-threaded work.

Obviously process nodes are much harder than pretty much anything in the cloud, but this is a test, pure and simple, of whether their leadership has the (gender-neutral) balls to not back down.

8 days ago by kllrnohj

> They are currently somewhere between 3-5 times behind TSMC lithographically

No they aren't. The nanometer names are only kinda comparable within a single company, but they are not comparable beyond that.

It's like saying that Chrome is 6x further ahead than Safari because Chrome is on 87 and Safari is only on 14. Same thing with fabs here, it's just a version number that counts down and has been for a long while.

In this case specifically, Intel's 10nm is more or less the same density as TSMC's 7nm. TSMC's 5nm is then a significant 1.8x the density of their 7nm process, so right now Intel's fabs are ~identical to what AMD is using (7nm), but Apple's use of TSMC's 5nm puts them a generation ahead. TSMC is leading, but by a single generation, not by 3-5x or whatever. Or to put it in year terms, TSMC is about 2 years ahead of Intel right now: Intel is just now getting "volume" 10nm up and running, while TSMC was at a comparable node in volume 2 years ago.

As far as "performance" goes, though, so far TSMC, even their 5nm process, cannot quite match Intel. Remember that as far as the fab goes, performance is just what clock speeds can be hit, and Intel's clocks are the highest - they have the fastest transistors. 14nm+++++ is a meme/joke, but the actual transistor speed of it is still downright impressive. How fast a CPU actually runs depends on the architecture, though, which is influenced by density but in terms of pure single-core performance that's really almost entirely just architecture.

8 days ago by jabl

> Remember that as far as the fab goes, performance is just what clock speeds can be hit, and Intel's clocks are the highest - they have the fastest transistors. 14nm+++++ is a meme/joke, but the actual transistor speed of it is still downright impressive.

My understanding is that Intel's fast transistors come at a cost of not being particularly dense for the process node they are implemented on. So it's a question of design philosophy as well. Intel has fast, but somewhat sparse transistors, they use a lot of hand layout etc. to really optimize their designs so they can clock high.

Other players use a different philosophy. Slower, but denser transistors and more automated layout may not clock as high, but the improved density gives you more cores (e.g. AMD vs Intel) and/or allows a more "brainiac" core design (wider, deeper ROB, better branch prediction, more cache, etc. like Apple M1), and also gets you a better turnaround time.

8 days ago by mhh__

Intel still can't make 10nm in any decent volume.

TSMC are shipping chips at something like 170 MTr/mm^2; the bulk of Intel's production (even their new chips) is still closer to 40-50 MTr/mm^2.
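
Taking those figures at face value (a quick sketch; the MTr/mm^2 numbers are the public estimates quoted in this thread, not official foundry data):

    # Density gap implied by the transistor-density figures above.
    TSMC_N5 = 170              # MTr/mm^2, TSMC 5nm-class
    INTEL_SHIPPING = (40, 50)  # MTr/mm^2, bulk of Intel's 14nm-class output

    for d in INTEL_SHIPPING:
        print(f"TSMC/Intel density ratio: {TSMC_N5 / d:.1f}x")

    # ~3.4x to ~4.2x, consistent with the "3-5 times behind" claim upthread;
    # against Intel's own 10nm (~100 MTr/mm^2) the gap narrows to ~1.7x.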

8 days ago by SoSoRoCoCo

> 3-5 times behind TSMC lithographically,

I'm not entirely sure that is true. TSMC plays fast and loose with their definition of node. Any fab folks want to speak up?

What I mean by that:

https://www.pcgamesn.com/amd/tsmc-7nm-5nm-and-3nm-are-just-n...

"And also goes some way to explaining why, despite TSMC offering a nominally 7nm process, the general consensus has been that Intel’s 10nm design is pretty much analogous. But what’s 3nm between fabs? At that level, probably quite a lot. But if the 7nm node is more of a branding exercise than genuinely denoting the physical properties of that production process then you can understand why there’s supposedly not a lot in it."

8 days ago by monocasa

Every fab plays fast and loose with the terms, including Intel these days.

More or less TSMC 7nm := Intel 10nm, and TSMC 5nm := Intel 7nm. It's more complex than that, one has denser logic while the other has denser SRAM and what have you, but it's a good baseline.

Since Intel is struggling with 10nm but is shipping, that puts TSMC about a node and a half ahead.

If you want to dig in deeper, wikichip has most of the public specifics on the process nodes (which are heavily shrouded in secrecy). For instance: https://en.wikichip.org/wiki/7_nm_lithography_process

8 days ago by mhh__

https://en.wikipedia.org/wiki/Transistor_count#MOSFET_nodes

Look it up yourself. Intel being on 14nm vs (say) 5nm is actually quite flattering to Intel.

8 days ago by ekianjo

How is Intel dying if AMD can't even produce enough chips to avoid being out of stock the whole time? Production bottlenecks with TSMC are a thing.

8 days ago by josephg

That'll sort itself out pretty quickly. TSMC is flush with cash from the premium pricing they've been able to charge for their 5nm fabs. Every day TSMC doesn't have enough capacity is a day they funnel easy money straight to their competitors at Samsung and Intel.

Investing in increasing their fab capacity is the easiest business decision they'll ever make. And if they need financing, they could get investors lining up around the block throwing money their way.

8 days ago by ChrisIsTaken

TSMC's capacity problems have been directly caused by still-ongoing COVID shutdowns at ASML.

6 days ago by alimbada

Do you have any sources for this? From what I've been reading the last couple of weeks, the shortages are due to ABF substrate shortages[1].

https://www.tomshardware.com/news/amd-chip-shortage-packagin...

8 days ago by gchamonlive

This is not limited to AMD. GPU manufacturers are also experiencing supply shortages. The problem is that even if supply normalises, Intel has to work harder than usual to regain both consumer trust and its innovation lead.

8 days ago by ekianjo

> Intel has to work harder than usual to regain both consumer trust and innovation lead

Not in the enterprise market where AMD is almost completely absent so far.

8 days ago by gchamonlive

What do you mean by "almost completely absent"?

A quick search showed that AMD's server market share was 1% in 2017 and was expected to have reached nearly 10% by 2020 [1]. This is still not the same as 15 years ago, but the market is also not the same, with cloud computing, and this growth is very significant.

They are bringing IPC improvements with Epyc Milan and are pushing the same innovations into the server market.

[1] https://www.hardwaretimes.com/amd-server-market-share-grows-...

8 days ago by etrabroline

Yeah, but not being able to keep up with demand is a good problem to have, and much easier to solve than having an inferior product.
