
pclmulqdq

This is a CGRA. It's like an FPGA but with bigger cells. It's not a VLIW core.

I assume that, like all past attempts at this, it's about 20x more efficient when the code fits in the array (FPGAs get this ratio), but once your code size grows past something very trivial, the grid config needs to switch, and that costs tons of time and power.

torginus

Yeah, I worked with FPGAs a while ago and still casually follow the space.

There have been many attempts at mapping general-purpose/GPU programming languages to FPGAs, and none of them worked out.

The leading claim they make - that this is a general-purpose CPU, capable of executing anything - I suspect is false.

CPUs are hard because they have to interact with memory: basically 95% of CPU design complexity comes from having to interact with memory and handle other data hazards.

If this complexity were reducible, someone would have reduced it already.

VonTum

In academia, people use general-purpose languages (notably C++ dialects) for FPGA design quite a lot, and there's definitely a glut of published papers on the development of such "High Level Synthesis" (HLS) tools.

In order to use the FPGA efficiently you need to first pipeline the logic deeply, but then also be able to fill that pipeline.

But in my opinion there is just too much of an impedance mismatch between the "do one thing and then the next" style of imperative code, and the "everything everywhere all at once" way in which FPGAs actually work.

And well, here's my plug for my language, which gets close to HLS in terms of syntactic ease while still retaining full control over the generated hardware: https://github.com/pc2/sus-compiler

By explicitly keeping track of pipelining in submodules, the compiler automatically balances the pipelines you create; in addition, you can write modules that respond to the pipeline distance between their ports and infer their parameters based on it. That fixes one of the most error-prone activities when designing hardware in SV or other languages.

torginus

I worked in industry, trying to fit video processing algorithms and filter banks to FPGAs, about a decade ago, so my memories are a bit fuzzy.

The thing about FPGAs is that they are wildly expensive, getting exponentially more so the further you move up the product palette. They also have a ton of special-function blocks (DPRAM, multipliers, shift registers, etc.) that work differently from vendor to vendor, and even between generations.

Even with HDLs you had two options: either read the whole datasheet, learn the layout and mix of components, and plan your design to fit the hardware, or just wing it, write down what you wanted, and hope for the best. In the latter scenario, the synthesis tool had to figure out how best to fit your design to the hardware, which, even if you were mindful of how the chip worked, was still hit and miss.

When you work in industry, where either every cent counts or you have to design for a fixed target, and you're somehow dependent on the good graces of the tooling, finding that changing a small thing suddenly makes your design 3x as big and half as fast is not acceptable.

HLS was this, but on steroids - you never knew what you were going to get, sometimes it worked well, sometimes not at all.

I'm sure tooling has evolved since then, but I'd still guess spending the extra engineering effort is often well worth it when your chips cost five figures.

actionfromafar

Then I have a thought experiment: replace the execution cores on a state-of-the-art GPU and CPU with FPGAs.

DesiLurker

I recall MathStar's FPOAs (field-programmable object arrays) had a similar architecture. It seems they did a mixture of a stack computer, this, and some async programming to get this level of optimization. The other one I'd seen with a pretty good on-chip fabric was Tilera, who used something like a packet switch to interconnect tonnes of on-chip cores.

My first reaction watching this video was that they are just shifting the problem to the compiler, which is actually worse, and also does not work for dynamic code with tonnes of branches. Also, didn't Intel burn a lot of money trying to do this with Itanium?

Overall, an interesting idea, but I filed it under "solution looking for a problem".

rf15

I agree this is very "FPGA-shaped" and I wonder if they have further switching optimisations on hand.

RossBencina

My understanding is that they have a grid configuration cache, and are certainly trying to reduce the time/power cost of changing the grid connectivity.

pclmulqdq

An FPGA startup called Tabula had the same thesis and it didn't work out well for them. Their configurable blocks had 16 configurations that they would let you cycle through. Reportedly, the chips were hell to program and the default tools were terrible.

reactordev

Is that a design flaw or a tooling flaw? The dev experience is usually left until the very end of a proof of concept like this.

gamache

Sounds a lot like GreenArray GA144 (https://www.greenarraychips.com/home/documents/greg/GA144.ht...)! Sadly, without a bizarre and proprietary FORTH dialect to call its own, I fear the E1 will not have the market traction of its predecessor.

jnpnj

That was my first thought too. I really like the idea of an interconnected node array. There's something biological about thinking in topology and neighbour diffusion that I find appealing.

londons_explore

One day someone will get it working...

Data transfer is slow and power hungry - it's obvious that putting a little bit of compute next to every bit of memory is the way to minimize data transfer distance.

The laws of physics can't be broken, yet people demand more and more performance, so eventually solving this issue will be worth the difficulty.

AnimalMuppet

That minimizes the data transfer distance from that bit of memory to that bit of compute. But it increases the distance between that bit of (memory and compute) and all the other bits of (memory and compute). If your problem is bigger than one bit of memory, such a configuration is probably a net loss, because of the increased data transfer distance between all the bits.

Your last paragraph... you're right that, sooner or later, something will have to give. There will be some scale such that, if you create clumps either larger or smaller than that scale, things will only get worse. (But that scale may be problem-dependent...) I agree that sooner or later we will have to do something about it.

Imustaskforhelp

Pardon me, but could somebody here explain this to me like I'm 15? It's late at night and I can't go down another rabbit hole, but I would appreciate it. Cheers and good night, fellow HN users.

elseless

Sure. You can think of a (simple) traditional CPU as executing instructions in time, one-at-a-time[1] — it fetches an instruction, decodes it, performs an arithmetic/logical operation, or maybe a memory operation, and then the instruction is considered to be complete.

The Efficient architecture is a CGRA (coarse-grained reconfigurable array), which means that it executes instructions in space instead of time. At compile time, the Efficient compiler looks at a graph made up of all the “unrolled” instructions (and data) in the program, and decides how to map it all spatially onto the hardware units. Of course, the graph may not all fit onto the hardware at once, in which case it must also be split up to run in batches over time. But the key difference is that there’s this sort of spatial unrolling that goes on.

This means that a lot of the work of fetching and decoding instructions and data can be eliminated, which is good. However, it also means that the program must be mostly, if not completely, static, meaning there’s a very limited ability for data-dependent branching, looping, etc. to occur compared to a CPU. So even if the compiler claims to support C++/Rust/etc., it probably does not support, e.g., pointers or dynamically-allocated objects as we usually think of them.

[1] Most modern CPUs don’t actually execute instructions one-at-a-time — that’s just an abstraction to make programming them easier. Under the hood, even in a single-core CPU, there is all sorts of reordering and concurrent execution going on, mostly to hide the fact that memory is much slower to access than on-chip registers and caches.
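
The "fires when all operands are ready" model described above can be sketched in a few lines. This is purely an illustrative toy (a hypothetical graph format and scheduler, not Efficient's actual toolchain or ISA): each operation is pinned to a tile, and a tile computes as soon as all of its operand registers are filled.

```python
# Toy dataflow execution: out = (a + b) * (a - b).
# Each graph node stands in for one tile; "deps" are its operand sources.
graph = {
    "add": {"op": lambda x, y: x + y, "deps": ["a", "b"]},
    "sub": {"op": lambda x, y: x - y, "deps": ["a", "b"]},
    "mul": {"op": lambda x, y: x * y, "deps": ["add", "sub"]},
}

def run(graph, inputs):
    """Fire any node whose operands are all available, until none remain.

    Assumes an acyclic graph whose inputs are all provided up front.
    """
    values = dict(inputs)      # each tile's output "latch"
    pending = set(graph)
    while pending:
        for name in sorted(pending):
            deps = graph[name]["deps"]
            if all(d in values for d in deps):   # operand registers filled
                values[name] = graph[name]["op"](*[values[d] for d in deps])
                pending.discard(name)
                break
    return values

print(run(graph, {"a": 7, "b": 3})["mul"])   # (7+3)*(7-3) = 40
```

Note there is no program counter anywhere: execution order falls out of data availability, which is the dataflow property the explanation above is getting at.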

pclmulqdq

Pointers and dynamic objects are probably fine given the ability to do indirect loads, which I assume they have (side note: I have built B-trees on FPGAs before, and these kinds of data structures are smaller than you think). It's actually pure code size that is the problem here, rather than specific capabilities, as long as the hardware supports those instructions.

Instead of assembly instructions taking time on these architectures, they take space. You have a capacity of 1,000-100,000 instructions (including all the branches you might take), and then the chip is full. To get past that limit, you have to store state to RAM and then reconfigure the array to continue computing.
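
That capacity cliff is easy to put numbers on. A back-of-envelope sketch (the tile budget and reconfiguration cost here are made-up assumptions, not E1 figures): once the unrolled program exceeds what fits on the grid, every additional configuration adds a fixed spill-and-reconfigure cost.

```python
# Illustrative only: hypothetical tile budget and reconfiguration cost.
TILE_CAPACITY = 1000              # instructions that fit on the grid at once
RECONFIG_COST_CYCLES = 50_000     # spill state to RAM + load a new grid config

def reconfig_overhead(program_size_ops: int) -> int:
    """Cycles lost to reconfiguration for a program of the given size."""
    configs = -(-program_size_ops // TILE_CAPACITY)   # ceiling division
    return (configs - 1) * RECONFIG_COST_CYCLES

print(reconfig_overhead(800))    # fits in one configuration: 0 overhead
print(reconfig_overhead(5000))   # needs 5 configurations: 200000 cycles lost
```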

elseless

Agree that code size is a significant potential issue, and that going out to memory to reprogram the fabric will be costly.

Re: pointers, I should clarify that it’s not the indirection per se that causes problems — it’s the fact that, with (traditional) dynamic memory allocation, the data’s physical location isn’t known ahead of time. It could be cached nearby, or way off in main memory. That makes dataflow operator latencies unpredictable, so you either have to 1. leave a lot more slack in your schedule to tolerate misses, or 2. build some more-complicated logic into each CGRA core to handle the asynchronicity. And with 2., you run the risk that the small, lightweight CGRA slices will effectively just turn into CPU cores.

kannanvijayan

Hmm. You'd be able to trade off time for that space by using more general configurations that you can dynamically map instruction-sequences onto, no?

The mapping wouldn't be as efficient as a bespoke compilation, but it should be able to avoid the configuration swap-outs.

Basically a set of configurations that can be used as an interpreter.

markhahn

I think that footnote is close to the heart of it: on a modern OoO superscalar processor there are hundreds of instructions in flight, which means a lot of work to maintain their state and ensure that they "fire" when their operands are satisfied. I think that's what this new system is about: a distributed, scalable dataflow-orchestration engine.

I think this still depends very much on the compiler: whether it can assemble "patches" of direct dependencies to put into each of the little processing units. The edges between patches are either high-latency operations (memory) or inter-patch links resulting from partitioning the overall dataflow graph. I suspect it's the NoC addressing that will be most interesting.

esperent

> it executes instructions in space instead of time. At compile time, the Efficient compiler looks at a graph made up of all the “unrolled” instructions (and data) in the program, and decides how to map it all spatially onto the hardware units.

Naively that sounds similar to a GPU. Is it?

3836293648

No? GPUs are just extremely parallel much wider SIMD cores

drcongo

You managed to explain that in a way that even I could understand. Magnificent, thank you.

majkinetor

> meaning there’s a very limited ability for data-dependent branching, looping, etc. to occur compared to a CPU

Not very useful then if I can't do this very basic thing?

hencoappel

I found this video to be a good explanation: https://youtu.be/xuUM84dvxcY?si=VPBEsu8wz70vWbX4

Tempest1981

Thanks. (Why does he keep pointing at me?)

Nevermark

Instead of large cores operating mostly independently in parallel (with a few standardized, hardwired pipeline steps per core), …

You have many more, much smaller ALU cores, configurable into longer custom pipelines, with each step as wide/parallel or as narrow as it needs to be.

Instead of streaming instructions over and over to large cores, you use them to set up those custom pipeline circuits, each running until it has used up its data.

And you also have some opportunity for multiple such pipelines to operate in parallel, depending on how many operations (tiles) each pipeline needs.

wmf

Probably not. This is graduate-level computer architecture.

artemonster

As a person who is heavily invested and interested in the CPU space, especially embedded, I am HIGHLY skeptical of such claims. Somebody played TIS-100, remembered that the GA144 failed, and decided to try their own. You know what would be a simple proof of your claims? No, not a press release. No, not a pitch deck or a YouTube video. And NO, not even working silicon, you silly. A SIMPLE FUCKING ISA EMULATOR WITH A PROFILER. Instead we got a bunch of whitepapers. Yeah, I call it a 90% chance of total BS and vaporware.

wmf

There are >20 years of academic research behind dataflow architectures, going back to TRIPS and MIT RAW. It's not literally a scam, but the previous versions weren't practical, and it's unlikely this version succeeds either. I agree that if the compiler were good they would release it, and if they don't release it, that's probably because it isn't good.

jecel

The 2022 PhD thesis linked from their website includes a picture of what they claim is an actual chip made on a 22nm process. I understand that the commercial chip might be different, but it is possible that the measurements made for the thesis will hold for their future products as well.

bmenrigh

I like Ian, but he's rapidly losing credibility by posting so much sponsored content. Many of his videos and articles now are basically just press releases.

IanCutress

This content wasn't sponsored. I spent time with the CEO and listened to his explanations, and did some digging of my own. I reported on the announcement and added in some of my own thoughts and opinions. I spent a decade doing exactly this at AnandTech, but now it's in video form (or on my substack).

So I'm not really sure where you're getting that feeling from. I've always done this. If I do sponsored content, it's listed as such.

JJJollyjim

Especially the fact that he says the toolchain is now available for download (which would lend credibility, if they're willing to share it so people can see the quality of the output it produces), when in fact the website has no download links.

IanCutress

I was under the impression it was going to be available to download without registration, but the CEO pinged me to say it will require registration. They've debated internally, and this is the direction they want to go.

kendalf89

This grid-based architecture reminds me of TIS-100, a programming game from Zachtronics.

mcphage

I thought the same thing :-)

pedalpete

Though I'm sure this is valuable in certain instances: thinking about many embedded designs today, is the CPU/microcontroller really the energy hog in these systems?

We're building an EEG headband with a bone-conduction speaker, so, in order of power draw, our speaker/sounder and LEDs are orders of magnitude more expensive than our microcontroller.

In anything with a screen, that screen is going to suck all the juice, then your radios, etc. etc.

I'm sure there are very specific use cases where a more energy-efficient CPU will make a difference, but I struggle to think of anything with a human interface where the CPU is the bottleneck, though I could be completely wrong.

montymintypie

Human interfaces, sure, but there's a good chunk of industrial sensing IoT that might do some non-trivial edge processing to decide if firing up the radio is even worth it. I can see this being useful there. Potentially also in smart watches with low power LCD/epaper displays, where the processor starts to become more visible in power charts.

Wonder if it could also be a coprocessor, if the fabric has a limited cell count? Do your DSP work on the optimised chip and hand off to the expensive radio softdevice when your code size is known to be large.

schobi

I would not expect this to become competitive with a low-power controller that is sleeping most of the time, as in a typical wristwatch wearable.

However, the examples indicate that if you have a loop that executes over and over, the setup cost of configuring the fabric could be worth paying: a continuous audio stream in wake-word detection, a hearing aid, or continuous signals from an EEG.

Instead of running a general-purpose CPU at 1 MHz, the fabric would be used to unroll the loop; you would use (up to) 100 building blocks for all the individual operations. Instead of one instruction after another, you have a pipeline that can execute one operation in each building block every cycle. The compute thus only needs to run at 1/100th the clock, e.g. the 10 kHz sampling rate of the incoming data. Each tick of the clock moves data through the pipeline, one step at a time.
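
The arithmetic in that example can be written out explicitly. A minimal sketch using the same illustrative numbers (100 operations per sample, 10 kHz input); these are the example's assumptions, not measured figures:

```python
# Illustrative numbers from the wake-word example above.
ops_per_sample = 100        # operations in the unrolled loop body
sample_rate_hz = 10_000     # incoming audio/EEG sample rate

# Sequential CPU: one operation per cycle, so the clock must cover
# every operation for every sample.
cpu_clock_hz = ops_per_sample * sample_rate_hz    # 1 MHz

# Spatial pipeline: 100 blocks each do one operation per cycle; once the
# pipeline is full, one finished sample exits every cycle, so the fabric
# clock only needs to match the sample rate.
fabric_clock_hz = sample_rate_hz                  # 10 kHz

print(cpu_clock_hz // fabric_clock_hz)            # 100x clock reduction
```

This is where the "up to 100x" framing comes from: the ratio is exactly the number of building blocks you manage to keep busy.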

I have no insights, but I can imagine how marketing thinks: "Let's build a 10x10 grid of building blocks; if they are all used, the clock can be 1/100... Boom, claim up to 100x more efficient!" I hope their savings estimate is more elaborate than that, though.

dlcarrier

The question for any given application is: how slow does the processor need to be before its power consumption is no longer a factor?

Increasing the processing power at that near-marginal power consumption broadens the range of battery-powered applications that are possible.

kldg

Particularly solar/outdoor/remote applications. I run ESP32s from super-cheap USB-C solar panels with an integrated battery/BMS for various measurements; they consume so little that they keep running 24/7 regardless of weather. They pop on every 4 minutes, power the sensors, connect to the WiFi network, beep out their data, then go back to sleep. Seeed's ~$6 ESP32-C3/C6 boards even have an onboard BMS for a 3.7V cell, if I didn't prefer the integrated panel/battery method.
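
The duty-cycle math behind a node like that is worth spelling out. A rough sketch with assumed currents and timings (illustrative figures, not ESP32 datasheet numbers):

```python
# All figures are illustrative assumptions for a sleep/wake sensor node.
sleep_current_ma = 0.01     # deep-sleep draw
active_current_ma = 80.0    # radio + sensors powered
wake_period_s = 240         # wakes every 4 minutes
active_time_s = 2           # sense, connect, transmit, back to sleep

duty = active_time_s / wake_period_s
avg_current_ma = (active_current_ma * duty
                  + sleep_current_ma * (1 - duty))
print(round(avg_current_ma, 2))   # ~0.68 mA average draw
```

At sub-milliamp average draw, even a small panel and cell can keep the node net-positive on energy, which is consistent with the 24/7 behaviour described above.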

archipelago123

It's a dataflow architecture. I assume the hardware implementation is very similar to what is described here: https://csg.csail.mit.edu/pubs/memos/Memo-229/Memo-229.pdf. The problem is that it becomes difficult to exploit data locality, and there is only so much optimization you can perform at compile time. Also, the motivations for these types of architectures (e.g. the lack of ILP in von Neumann-style architectures) are non-existent in modern OoO cores.

timschmidt

Out-of-order cores spend an order of magnitude more logic and energy than in-order cores handling invalidation, pipeline flushes, branch prediction, etc., all with the goal of increasing performance. This architecture attempts to lower joules per instruction at the cost of performance, not to increase energy use in exchange for performance.

ZiiS

Percentage chance this is 100x more efficient at the general-purpose computing ARM is optimized for: 1/100%

trhway

> spatial data flow model. Instead of instructions flowing through a centralized pipeline, the E1 pins instructions to specific compute nodes called tiles and then lets the data flow between them. A node, such as a multiply, processes its operands when all the operand registers for that tile are filled. The result then travels to the next tile where it is needed. There's no program counter, no global scheduler. This native data-flow execution model supposedly cuts a huge amount of the energy overhead typical CPUs waste just moving data.

Should work great for NNs.

Grosvenor

Is this the return of Itanium? Static scheduling and pushing everything to the compiler: it sounds like it.

wood_spirit

The Mill videos are worth watching again; there are variations on NaT handling, looping, branching, etc. that make DSPs much more general-purpose.

I don’t know how similar this Electron is, but the Mill explained how it could be done.

Edit: aha, found them! https://m.youtube.com/playlist?list=PLFls3Q5bBInj_FfNLrV7gGd...

smlacy

I love these videos and his enthusiasm for the problem space. Unfortunately, it seems to me that the progress/ideas have floundered because of concerns around monetizing intellectual property, which is a shame. If he had gone down a more RISC-V like route, I wonder if we would see more real-world prototypes and actual use cases. This type of thing seems great for microprocessor workloads.

darksaints

It kinda sounds like it, though the article explicitly said it's not VLIW.

I've always felt like Itanium was a great idea that came too soon and was too poorly executed. It seemed like the majority of the commercial failure came down to the friction of switching architectures and the inane pricing, rather than the merits of the architecture itself. Basically, Intel being Intel.

bri3d

I disagree; Itanium was fundamentally flawed for general-purpose computing, and especially for time-shared general-purpose computing. VLIW is not practical in time-sharing systems without completely rethinking the way cache works, and Itanium didn't really do that.

As soon as a system has variable instruction latency, VLIW completely stops working; the entire concept is predicated on the compiler knowing how many cycles each instruction will take to retire ahead of time. With memory access hierarchy and a nondeterministic workload, the system inherently cannot know how many cycles an instruction will take to retire because it doesn't know what tier of memory its data dependencies live in up front.

The advantage of out-of-order execution is that it dynamically adapts to data availability.

This is also why VLIW works well where data availability is _not_ dynamic, for example in DSP applications.

As for this Electron thing, the linked article is too puffed up to tell what it's actually doing. The first paragraph says something about "no caches", but the block diagram has a bunch of caches in it. It sort of sounds like an FPGA with bigger primitives (configurable instruction tiles rather than gates), which means that synchronization is going to continue to be the problem, and I don't know how they'll solve for variable latency.
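
The variable-latency point above is easy to see with a toy model. This sketch (hypothetical latencies, not any real ISA) shows why a statically scheduled machine stalls when a load takes longer than the compiler assumed:

```python
# The compiler statically scheduled 3 cycles of independent work
# to hide an assumed 3-cycle load latency.
LOAD_LATENCY_ASSUMED = 3

def cycles_needed(actual_load_latency: int) -> int:
    """Cycles to execute: load x; <independent bundle>; use x."""
    independent_work = LOAD_LATENCY_ASSUMED   # work hoisted in by the compiler
    # With no hardware scoreboard to reorder around the miss,
    # any latency beyond the assumption is a dead stall.
    stall = max(0, actual_load_latency - LOAD_LATENCY_ASSUMED)
    return 1 + independent_work + stall + 1   # load + work + stall + use

print(cycles_needed(3))     # cache hit: 5 cycles, exactly as scheduled
print(cycles_needed(200))   # memory miss: 202 cycles, stall-dominated
```

An out-of-order core would fill that stall window with whatever other instructions were ready, which is the dynamic adaptation to data availability described above.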

hawflakes

Not to detract from your point, but Itanium's design was meant to address code compatibility between generations. You could have code optimized for a wider chip run on a narrower chip because of the stop bits. The compiler still needs to know how to schedule for a specific microarchitecture, but the code would still run, albeit not as efficiently.

As an aside, I never looked into the perf numbers but having adjustable register windows while cool probably made for terrible context switching and/or spilling performance.

als0

> VLIW is not practical in time-sharing systems without completely rethinking the way cache works

Just curious as to how you would rethink the design of caches to solve this problem. Would you need a dedicated cache per execution context?

cmrdporcupine

It does feel like the world has changed a bit now that LLVM is ubiquitous, with its intermediate representation available for specialized purposes. Translation from IR to a VLIW plan should be easier now than it was with the state of compiler tech in the 90s.

But "this is a good idea just poorly executed" seems to be the perennial curse of VLIW, and how Itanium ended up shoved onto people in the first place.

bobmcnamara

Itanic did exactly what it was supposed to do - kill off most of the RISCs.

markhahn

haha! very droll.

dlcarrier

No, but multiple GPU shader architectures have used VLIW instructions. It's totally doable with modern compilers, but it's only advantageous for parallelizable tasks, hence the use in GPUs.

mochomocha

On the other hand, Groq seems pretty successful.

variadix

Pretty interesting concept, though as other commenters have pointed out, the efficiency gains likely break down once your program doesn't fit onto the mesh all at once. Also, this looks like it requires a "sufficiently smart compiler", which isn't a good sign either. The need to do routing etc. reminds me of the problems FPGAs have during place and route (effectively the minimum cut problem on a graph, i.e. NP); hopefully compilation doesn't take as long as FPGA synthesis does.

kyboren

> The need to do routing etc. reminds me of the problems FPGAs have during place and route (effectively the minimum cut problem on a graph, i.e. NP)

I'd like to take this opportunity to plug the FlowMap paper, which describes the polynomial-time delay-optimal FPGA LUT-mapping algorithm that cemented Jason Cong's 31337 reputation: https://limsk.ece.gatech.edu/book/papers/flowmap.pdf

Very few people even thought that optimal depth LUT mapping would be in P. Then, like manna from heaven, this paper dropped... It's well worth a read.

almostgotcaught

I don't see what this has to do with what you're responding to: tech mapping and routing are two completely different things, and routing is known to be NP-complete.

ethan_smith

This is essentially a CGRA (Coarse-Grained Reconfigurable Array) architecture, which historically has shown impressive efficiency in academic research but struggled with compilation complexity and commercial adoption precisely because of the NP-hard routing problems you've identified.

rpiguy

The architecture diagram in the article resembles the approach Apple took in the design of their neural engine.

https://www.patentlyapple.com/2021/04/apple-reveals-a-multi-...

Typically these architectures are great for compute. How will it do on scalar tasks with a lot of branching? I doubt well.


Efficient Computer's Electron E1 CPU – 100x more efficient than Arm? - Hacker News