Hacker News
8 days ago by the8472

> We use a logic primitive called the adiabatic quantum-flux-parametron (AQFP), which has a switching energy of 1.4 zJ per JJ when driven by a four-phase 5-GHz sinusoidal ac-clock at 4.2 K.

The Landauer limit at 4.2 K is 4.019×10^-23 J, so this is only a factor of 38x away from the Landauer limit.
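
A quick back-of-envelope check in Python (constants from CODATA; with these numbers the ratio actually comes out closer to ~35x, same ballpark):

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 4.2              # operating temperature, K
    E_switch = 1.4e-21   # quoted AQFP switching energy, 1.4 zJ

    E_landauer = k_B * T * math.log(2)
    print(f"Landauer limit at {T} K: {E_landauer:.3e} J")              # ~4.02e-23 J
    print(f"Switching energy / limit: {E_switch / E_landauer:.1f}x")   # ~34.8x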

8 days ago by pgt

I'm curious about how the Landauer limit relates to Bremermann's Limit: https://en.wikipedia.org/wiki/Bremermann%27s_limit

Admittedly, I haven't done much reading, but I see it is a linked page from Bremermann's Limit: https://en.wikipedia.org/wiki/Landauer%27s_principle

8 days ago by freeqaz

Mind expanding on this a bit more? What is that limit and how does it relate to the clock speed?

8 days ago by Enginerrrd

It's not about clock speed per se. It's about the lowest possible energy expenditure to erase one bit of information (or irreversibly destroy it by performing a logical operation). The principle comes about from reasoning about entropy loss in said situations. There's a hypothesized fundamental connection between information and entropy manifest in physical law. The idea is that if you destroy one possible state of a system, you have reduced the entropy of that system, so the 2nd law of thermodynamics implies that you must increase the entropy of the universe somewhere else by at least that amount. This can be used to say how much energy the process must take as soon as you choose a particular temperature.

This applies to any irreversible computation.

IMO, The fact that it's only 38x the minimum is MIND BLOWING.

8 days ago by ajuc

> IMO, The fact that it's only 38x the minimum is MIND BLOWING.

It's like if someone made a car that drives at 1/38th the speed of light.

8 days ago by jcims

Is there an idea of entropic potential energy/gradient/pressure? Could you differentiate encrypted data from noise by testing how much energy it requires to flip a bit?

8 days ago by centimeter

> The fact that it's only 38x the minimum is MIND BLOWING.

It doesn't run at room temp. The difference is much larger than 38x. It's closer to 1000x.

8 days ago by the8472

https://en.wikipedia.org/wiki/Landauer%27s_principle

Note that the gates themselves used here are reversible, so the limit shouldn't apply to them. But the circuits built from them aren't reversible as far as I can see in the paper, so it would still apply to the overall computation.

7 days ago by beefok

This book goes into some great detail about this. :) https://www.amazon.com/dp/B004ZLS3TU/ref=dp-kindle-redirect?... I will always recommend it!

8 days ago by ginko

The interesting thing about this is that if we get close to the Landauer limit we may have to seriously start thinking of using reversible computing[1] paradigms and languages to get optimal performance.

[1] https://en.wikipedia.org/wiki/Reversible_computing

8 days ago by dcposch

> adiabatic quantum-flux-parametron

https://youtube.com/watch?v=BBqIlBs51M8

8 days ago by TheSpiceIsLife

Reminded me of the Rockwell Retro Encabulator

https://www.youtube.com/watch?v=RXJKdh1KZ0w

8 days ago by b0rsuk

Given the cooling requirements, I suppose it would create a completely impassable rift between datacenter computing and other kinds. Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.

Considering that "data centers alone consume 2% of the world's energy", I think it's worth it.

8 days ago by xvedejas

It seems likely that the more efficient our processors become, the larger share of the world's energy we'll devote to them [0]. Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors, but I worry about that too [1].

[0] https://en.wikipedia.org/wiki/Jevons_paradox

[1] https://en.wikipedia.org/wiki/Wirth%27s_law

8 days ago by carlmr

>Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors

The trend seems to be that we get only a little bit of extra utility out of a lot of extra hardware performance.

When the developer upgrades their PC it's easier for them to not notice performance issues. This creates the situation where every few years you need to buy a new PC to do the things you always did.

8 days ago by philsnow

> The trend seems to be that we get only a little bit of extra utility out of a lot of extra hardware performance.

"hardware giveth, and software taketh away"

8 days ago by akiselev

Is that the case for our highly centralized clouds? No one's putting a liquid-nitrogen-cooled desktop in their office, so this type of hardware would be owned by companies who are financially incentivized to drive down the overhead costs of commoditized functionality like networking, data replication and storage, etc., leaving just inefficient developer logic, which I assume is high-value enough to justify it.

8 days ago by avmich

I'm sure a lot of developers upgrade their PCs (they were called workstations at one time) because of material problems: keyboards wearing out mechanically (and laptop keyboards aren't easily replaced), screens getting dead pixels, sockets getting loose, hard-to-find batteries holding less charge, and maybe some electronics degradation.

Another reason is upgrades to software, which maintain the general bloat and which are hard to control; buying new hardware is easier. That, however, is very noticeable.

On top of that, there's just "better" hardware: in a decade one can get a significantly better screen, more cores and memory, and faster storage, which makes large software tasks easier (video transcoding, big rebuilds of whole toolchains and apps, compute-hungry apps like ML...).

8 days ago by ganzuul

So by dropping support for old CPUs, the Linux kernel burns a bridge. That conversation makes more sense now.

8 days ago by winter_blue

> Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors, but I worry about that too

I have two points to comment on this matter.

Point 1: The only reason I would worry or be concerned about it is if we are using terribly inefficient programming languages. There are languages (that need not be named) which are either 3x, 4x, 5x, 10x, or even 40x more inefficient than a language that has a performant JIT, or that targets native code. (Even JIT languages like JavaScript are still a lot less efficient because of dynamic typing. Also, in some popular compiled-to-native languages, programmers tend to use less efficient data structures, which results in lower performance as well.)

Point 2: If the inefficiency arises out of more actual computation being done, that's a different story, and I AM TOTALLY A-OK with it. For instance, if Adobe Creative Suite uses a lot more CPU (and GPU) in general even though it's written in C++, that is likely because it's providing more functionality. I think even a 10% improvement in overall user experience and general functionality is worth increased computation. (For example, using ML to augment everything is wonderful, and we should be happy to expend more processing power for it.)

8 days ago by john_minsk

Both are important, and #1 can be even more useful. Use #1 to more easily build very complex systems. Once they are working and you are selling them, optimize. Without #1 you can't get #2.

8 days ago by moosebear847

In the future, I don't see why there's anything holding us back from splitting a bunch of atoms and having tons of cheap energy.

8 days ago by tsimionescu

Nuclear fission has well-known drawbacks and risks that are fundamental; they will never be engineered away (risk of catastrophic explosion, huge operating costs, risk of nuclear waste leakage, risk of nuclear weapon proliferation). Why do you think the future will be significantly different from the present in this regard?

8 days ago by layoutIfNeeded

>Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.

So like 2009 compared to 2021? Based on that, I'd say even more inefficient webshit.

8 days ago by AnIdiotOnTheNet

Considering that many modern interfaces are somehow less responsive than ones written over 20 years ago even when running those programs on period hardware, I feel certain that you are right.

8 days ago by qayxc

This is a very complex topic and there's a bunch of reasons for that.

And no, software efficiency isn't even the main factor. Not even close.

Just a few pointers: polling on peripherals instead of interrupts (i.e. USB vs. PS/2 and DIN) introducing input lag, software no longer running in ring-0 while being the sole process that owns all the hardware, concurrent processes and context switches, portability (and the required layers of abstraction and indirection), etc.

It's a bit cheap to blame developers while at the same time taking for granted that you can even do what you can do with modern hard- and software.

Everything comes at a price, and even MenuetOS [1] will have worse input lag and be less responsive than an Apple II, simply because you'll likely have a USB keyboard and mouse and an LCD monitor connected to it.

[1] http://menuetos.net

8 days ago by nicoburns

Some things have improved a lot. Remember how long it took to boot a 2009 PC. I suspect that if hardware performance stagnates then software optimisation will develop again.

8 days ago by jayd16

20 years ago, Windows 98 dominated. That was already well into the slow, time sliced, modern era.

8 days ago by xxpor

But it probably took 80x less time to develop said software.

8 days ago by Jweb_Guru

Processing power is not 80x cheaper now than it was in 2009 unless you can do all your computation on a GPU.

8 days ago by jnsie

Anyone remember Java 3D? 'cause I'm imagining Java 3D!

8 days ago by vidanay

Java 11D™ It goes to eleven!

8 days ago by sago

I don't understand your reference. It seems negative, but it's hard imho to go down on the success of Minecraft. Or am I misunderstanding you?

8 days ago by systemvoltage

Javascript emulator in Javascript!

8 days ago by lkbm

Gary Bernhardt's presentation on the "history" of Javascript from 1995 to 2035 is hilarious and seems like something you'd enjoy: https://www.destroyallsoftware.com/talks/the-birth-and-death...

It takes things way beyond simply "emulating Javascript in Javascript", yet is presented so well that you barely notice the transition from current (2014) reality to a comically absurd future.

8 days ago by dmingod666

Do you mean the "eval()" function?

8 days ago by whatshisface

Solid state physics begets both cryogenic technology and cryocooling technology. I wouldn't write off the possibility of making an extremely small cryocooler quite yet. Maybe a pile of solid state heat pumps could do it.

8 days ago by jessriedel

This is true, but the fact that heat absorption scales with the surface area is pretty brutal for tiny cooled objects.

8 days ago by Calloutman

Not really. You just have the whole package inside a vacuum casing.

8 days ago by whatshisface

Heat conduction also scales with thermal conductivity, which is another thing that advances in solid state can bring us.

8 days ago by api

I don't see an impassable rift. Probably at first, but supercooling something very small is something that could certainly be productized if there is demand for it.

I can see demand in areas like graphics. Imagine real-time raytracing at 8K at 100+ FPS with <10ms latency.

8 days ago by adamredwoods

Cryptocurrency demands.

8 days ago by inglor_cz

Nice, but requires 10 K temperature - not very practical.

Once this can be done at the temperature of liquid nitrogen, that will be a true revolution. The difference in cost of producing liquid nitrogen and liquid helium is enormous.

Alternatively, such servers could be theoretically stored in the permanently shaded craters of the lunar South Pole, but at the cost of massive ping.

8 days ago by gnulinux

If the throughput is high enough, 3+3=6 seconds of latency doesn't really sound that bad. There are websites with that kind of lag. You can't use it to build a chat app, but you can use it as a cloud for general computing.

8 days ago by mindvirus

Fun aside I learned about recently: we don't actually know if the speed of light is the same in all directions. So it could be 5+1=6 seconds or some other split.

https://en.m.wikipedia.org/wiki/One-way_speed_of_light

8 days ago by faeyanpiraat

There is a Veritasium video, really fun to watch, which explains with examples why you cannot measure the one-way speed of light: https://www.youtube.com/watch?v=pTn6Ewhb27k

8 days ago by inglor_cz

Yes, for general computing, that would be feasible.

8 days ago by juancampa

> The difference in cost of producing liquid nitrogen and liquid helium is enormous.

Quick google search yields: $3.50 for 1L of He vs $0.30 for 1L of H2. So roughly 10 times more expensive.

8 days ago by tedsanders

That price is more than a decade out of date. Helium has been about 10x that for the past half decade. I used to pay about $3,000 per 100L dewar a few years ago. Sounds like that price was still common in 2020: https://physicstoday.scitation.org/do/10.1063/PT.6.2.2020060...

Plus, liquid helium is produced as a byproduct of some natural gas extraction. If you needed volumes beyond that production, which seems likely if you wanted to switch the world's data centers to it, you'd be stuck condensing it from the atmosphere, which is far more expensive than collecting it from natural gas. I haven't done the math. I'm curious if someone else has.

8 days ago by mdturnerphys

That's the cost for one-time use of the helium. If you're running a liquefier the cost is much lower, since you're recycling the helium, but it still "costs" ~400W to cool 1W at 4K.
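
That ~400 W/W figure is roughly what you'd expect from Carnot plus real-machine inefficiency. A rough sketch in Python (the "fraction of Carnot" efficiency is an assumed ballpark, not a measured number):

    # Rough cooling-overhead estimate at 4.2 K; the 20%-of-Carnot efficiency
    # is an assumed ballpark for a large cryoplant, not a measurement.
    T_cold = 4.2    # K
    T_hot = 300.0   # K, ambient

    carnot_w_per_w = (T_hot - T_cold) / T_cold   # ideal work per watt removed
    assumed_fraction_of_carnot = 0.20
    real_w_per_w = carnot_w_per_w / assumed_fraction_of_carnot

    print(f"Carnot minimum: {carnot_w_per_w:.0f} W per W at 4.2 K")   # ~70
    print(f"At ~20% of Carnot: {real_w_per_w:.0f} W per W")           # ~350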

8 days ago by inglor_cz

Nitrogen is N2, though. Liquid nitrogen is cheaper than liquid hydrogen.

I was able to find "1 gallon of liquid nitrogen costs just $0.5 in the storage tank". That would be about $0.12 per 1L of N2.

8 days ago by monopoledance

Edit: Didn't read the OP carefully... Am idiot. Anyway, maybe someone reads something new to them.

Nitrogen won't get you below 10 K, though. It's solid below 63 K (-210 °C).

You know things are getting expensive when superconductors count as "high temperature" as soon as they can be cooled with LN2...

Helium (He) is practically finite, as we can't get it from the atmosphere in significant amounts (I think fusion reactors may be a source in the future), and it's critically important for medical imaging (each MRI ~$35k/year) and research. You also can't really store it long term, which means there are limits to retrieval/recycling, too. I sincerely hope we won't start blowing it away for porn and Instagram.

8 days ago by faeyanpiraat

I'm no physicist, but wouldn't you need some kind of medium to efficiently transfer the heat away?

On the moon you have no atmosphere to do it with radiators with fans, so I guess you would have to make huge radiators which simply emit the heat away as infrared radiation?

8 days ago by qayxc

> On the moon you have no atmosphere to do it with radiators with fans, so I guess you would have to make huge radiators which simply emit the heat away as infrared radiation?

Exactly. You can still transport the heat efficiently away from the computer using heat exchangers with some medium, but in the end radiators with a large enough surface area will be required.

Works well enough on the ISS, so I imagine it'd work just as well on the Moon.

8 days ago by ubitaco

I'm also not a physicist, but for the fun of the discussion...

Radiative heat loss scales with the fourth power of temperature. I don't know what temperature the ISS radiators are but suppose they are around 300K. Then I think the radiative surface to keep something cool at 10K would need to be 30^4, or 810000 times larger per unit heat loss. So realistically I think you would need some kind of wacky very low temperature refrigeration to raise the temperature at the radiator, and then maybe radiate the heat into the lunar surface.
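
A rough Stefan-Boltzmann sketch of the radiator area needed per watt (ideal black body, ignoring heat leaking back in; just to show the scale):

    # Radiator area per watt rejected, ideal black-body radiator (sketch only).
    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    for T in (300.0, 10.0):  # radiator temperature, K
        area_per_watt = 1.0 / (SIGMA * T**4)
        print(f"{T:5.0f} K: {area_per_watt:.3g} m^2 per W")

    # 300 K: ~2.2e-3 m^2/W; 10 K: ~1.8e+3 m^2/W -- the (300/10)^4 = 810000x factor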

8 days ago by woah

Maybe the stone that the moon is made from

8 days ago by rini17

I doubt an 80x difference would make it attractive. If it were 8000x, then maybe.

And that's only if you use the soil for cooling, which is a non-renewable resource. If you use radiators, then you can put them on a satellite instead, with much lower ping.

8 days ago by px43

It'll be interesting to see if the cryptocurrency mining industry will help subsidize this work, since their primary edge is power/performance.

During stable price periods, the power/performance of cryptocurrency miners runs right up to the edge of profitability, so someone who can come in at 20% under that would have a SIGNIFICANT advantage.
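
A toy margin calculation to illustrate the point (all numbers made up purely for illustration):

    # Hypothetical: if electricity is ~95% of a marginal miner's revenue,
    # a 20% power-efficiency edge multiplies the margin several times over.
    revenue = 100.0     # per day, arbitrary units
    power_cost = 95.0   # baseline miner near break-even

    baseline_margin = revenue - power_cost           # 5.0
    improved_margin = revenue - 0.80 * power_cost    # 24.0

    print(baseline_margin, improved_margin)   # ~5x the margin from 20% less power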

8 days ago by wmf

In this paper, we study the use of superconducting technology to build an accelerator for SHA-256 engines commonly used in Bitcoin mining applications. We show that merely porting existing CMOS-based accelerator to superconducting technology provides 10.6X improvement in energy efficiency. https://arxiv.org/abs/1902.04641

8 days ago by reasonabl_human

Looks like they admit it's not scalable and that it only applies to workloads that are compute-heavy, but they get a 46x improvement over CMOS when redesigning with an eye for superconducting-environment optimizations.

8 days ago by Badfood

Cost / hash. If power is free, they don't care about power.

8 days ago by gwern

Even if power is free, you still get only a limited amount of it to turn into hashes. Power sources like plants or dams only produce so much. Even if you have bribed a corrupt grid operator to give you 100% of the output from an X-megawatt coal plant for free, you're still limited to X megawatts of hashing. If you can turn that X megawatts into 10*X megawatts' worth of your competitors' hashing, well, you just 10xed your profit.

8 days ago by adolph

/ time. Time is an immutable cost.

8 days ago by agumonkey

If something like that happens it will have far-reaching consequences IMO. I'm not pro-blockchain... but the energy cost is important, and if it goes away significantly, people will just pile on 10x harder.

8 days ago by woah

How efficiently bitcoin can be mined has absolutely no impact on the power consumption of bitcoin mining. The difficulty adjusts.

8 days ago by andrelaszlo

Not a physicist so I'm probably getting different concepts mixed up, but maybe someone could explain:

> in principle, energy is not gained or lost from the system during the computing process

Landauer's principle (from Wikipedia):

> any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment

Where is this information going, inside of the processor, if it's not turned into heat?

8 days ago by abdullahkhalids

If your computational gates are reversible [1], then in principle, energy is not converted to heat during the computational process, only interconverted between other forms. So, in principle, when you reverse the computation, you recover the entire energy you input into the system.

However, in order to read out the output of computation, or to clear your register to prepare for new computation, you do generate heat energy and that is Landauer's principle.

In other words, you can run a reversible computer back and forth and do as many computations as you want (imagine a perfect ball bouncing in a frictionless environment), as long as you don't read out the results of your computation.

[1] NOT gate is reversible, and you can create reversible versions of AND and OR by adding some wires to store the input.
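
A tiny illustration of that footnote in Python: a Toffoli (CCNOT) gate computes AND onto a spare wire while keeping both inputs, and applying it twice restores the original state. (Just a sketch of the idea, not how AQFP gates are actually built.)

    # Reversible AND via a Toffoli (CCNOT) gate: inputs a, b pass through
    # unchanged, and the target wire c is XORed with (a AND b). Applying the
    # gate twice undoes it, so no information is erased.
    def toffoli(a, b, c):
        return a, b, c ^ (a & b)

    state = (1, 1, 0)
    forward = toffoli(*state)    # (1, 1, 1): third wire now holds a AND b
    back = toffoli(*forward)     # (1, 1, 0): original state recovered
    assert back == state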

8 days ago by aqme28

I was curious about this too. This chip is using adiabatic computing, which means your computations are reversible and therefore don't necessarily generate heat.

I'm having trouble interpreting what exactly that means though.

8 days ago by patagurbon

But you have to save lots of undesirable info in order to maintain the reversibility, right? Once you delete that, don't you lose the efficiency gains?

8 days ago by jlokier

You don't need to delete the extra info.

After reading the output, you run the entire computation backwards (it's reversible after all) until the only state in the system is the original input and initial state.

Then you can change the input to do another calculation.

If implemented "perfectly", heat is produced when changing the input and to read the output.

Reading the output will disturb it, so the computation won't perfectly reverse in practice; however, if the disturbance is small, most of the energy temporarily held in the circuit is supposed to bounce back toward the initial state, which can then be reset properly at an appropriately small cost.

Quantum computation is very similar to all of this. It too is reversible, and costs energy to set inputs, read outputs and clear accumulated noise/errors. The connection between reversible computation and quantum computation is quite deep.

8 days ago by ladberg

It's still getting turned into heat, just much less of it. The theoretical entropy increase required to run a computer is WAY less than what current computers (and probably even the one in the article) generate, so there is a lot of room to improve.
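
For a sense of scale (the ~1 fJ/switch figure for conventional CMOS is an assumed order of magnitude, not from the article):

    # Compare an assumed ~1 fJ CMOS switching energy to the room-temperature
    # Landauer limit. The CMOS number is a rough ballpark, not a measurement.
    import math

    k_B = 1.380649e-23
    landauer_300K = k_B * 300 * math.log(2)   # ~2.9e-21 J

    cmos_assumed = 1e-15   # ~1 fJ per logic transition (assumption)
    print(f"CMOS / Landauer at 300 K: ~{cmos_assumed / landauer_300K:.0e}x")   # ~3e5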

8 days ago by VanillaCafe

If it ever gets to home computing, it will get to data center computing far sooner. What does a world look like where data center computing is roughly 100x cheaper than home computing?

8 days ago by whatshisface

It will look like the 1970s.

8 days ago by valine

Not much would change, I imagine. For most tasks consumers care about, low latency trumps raw compute power.

8 days ago by gpm

Flexible dumb terminals everywhere. But we already have this with things like Google Stadia. Fast internet becomes more important. Tricks like VS Code remote extensions, which do real-time rendering locally but bulk compute (compiling in this case) on the server, become more common. I don't think any of this results in radical changes from current technology.

8 days ago by tiborsaas

You could play video games on server farms and stream the output to your TV. You just need a $15 controller instead of a $1500 gaming PC.

:)

8 days ago by colinmhayes

You're describing a service that already exists, Google Stadia.

8 days ago by tiborsaas

And GeForce Now, it was a joke ;)

8 days ago by KirillPanov

The problem is the capital cost of the cryocooler.

The upfront costs of a cryocooler, spread out over the usable lifetime of the cryocooler (they're mechanical, they wear out), vastly exceed the cost of electricity you save by switching from CMOS to JJs. Yes, I did the math on this. And cryocoolers are not following Moore's Law. Incredibly, they're actually becoming slightly more expensive over time after accounting for inflation. There was a LANL report about this which I'm trying to find, will edit when I find it. The report speculated that it had to do with raw materials depletion.

All of the above I'm quite certain of. I suspect (but am in no way certain) that the energy expended to manufacture a cryocooler also vastly exceeds the energy saved over its expected lifetime as a result of its use. That's just conjecture however, but nobody ever seems to address that point.

8 days ago by peter_d_sherman

"Everything becomes a superconductor -- when you put enough voltage through it..." [1] <g>

Footnotes:

[1] Including non-conductors... but you need a lot of voltage! <g>
