pgt
I'm curious about how the Landauer limit relates to Bremermann's Limit: https://en.wikipedia.org/wiki/Bremermann%27s_limit
Admittedly, I haven't done much reading, but I see Landauer's principle is linked from the Bremermann's limit page: https://en.wikipedia.org/wiki/Landauer%27s_principle
freeqaz
Mind expanding on this a bit more? What is that limit and how does it relate to the clock speed?
Enginerrrd
It's not about clock speed per se. It's about the lowest possible energy expenditure to erase one bit of information (or irreversibly destroy it by performing a logical operation). The principle comes about from reasoning about entropy loss in said situations. There's a hypothesized fundamental connection between information and entropy manifest in physical law. The idea is that if you destroy one possible state of a system, you have reduced the entropy of that system, so the 2nd law of thermodynamics implies that you must increase the entropy of the universe somewhere else by at least that amount. This can be used to say how much energy the process must take as soon as you choose a particular temperature.
This applies to any irreversible computation.
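Concretely, the bound is E = kT·ln(2) per erased bit. A minimal sketch of the numbers (the helper name here is just for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

print(landauer_limit(293))  # ~2.8e-21 J at room temperature
print(landauer_limit(4.2))  # ~4.0e-23 J at liquid-helium temperature
```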
IMO, The fact that it's only 38x the minimum is MIND BLOWING.
ajuc
> IMO, The fact that it's only 38x the minimum is MIND BLOWING.
It's like if someone made a car that drives at 1/38th the light speed.
jcims
Is there an idea of entropic potential energy/gradient/pressure? Could you differentiate encrypted data from noise by testing how much energy it requires to flip a bit?
centimeter
> The fact that it's only 38x the minimum is MIND BLOWING.
It doesn't run at room temp. The difference is much larger than 38x. It's closer to 1000x.
the8472
https://en.wikipedia.org/wiki/Landauer%27s_principle
Note that the gates themselves used here are reversible, so the limit shouldn't apply to them. But the circuits built from them aren't reversible as far as I can see in the paper, so it would still apply to the overall computation.
beefok
This book goes into some great detail about this. :) https://www.amazon.com/dp/B004ZLS3TU/ref=dp-kindle-redirect?... I will always recommend it!
ginko
The interesting thing about this is that if we get close to the Landauer limit we may have to seriously start thinking of using reversible computing[1] paradigms and languages to get optimal performance.
dcposch
> adiabatic quantum-flux-parametron
TheSpiceIsLife
Reminded me of the Rockwell Retro Encabulator
centimeter
> The Landauer limit at 4.2 K is 4.019×10^-23 J (joules).
At room temperature.
nicoburns
What does the 4.2K mean then? I assumed that was the temperature in kelvin.
Arnavion
Not sure what you're trying to say. Room temperature is 293 K, and at that temperature the Landauer limit is 2.8E-21 J. The chips in the article were cooled to 4.2 K, and at that temperature the Landauer limit is 4.0E-23 J.
namibj
That number checks out to be accurate to just over 4 significant digits. Divide the 4.019×10^-23 J by 4.2 K and ln(2), and you get a result just a hair smaller than the Boltzmann constant.
Unless the formula at [0] is wrong, in which case my calculations would also be wrong.
[0]: https://en.wikipedia.org/wiki/Landauer%27s_principle#Equatio...
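For anyone who wants to reproduce the check, a two-line sketch:

```python
import math

print(4.019e-23 / (4.2 * math.log(2)))  # ~1.3805e-23, just under k_B = 1.380649e-23 J/K
```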
6nf
Hmm that's interesting. At lower temperatures, would the limit also be lower? Or what's the relationship between temp and Landauer?
marcosdumay
It's linear in temperature, so going from room temperature (293 K) down to 4.2 K gives roughly a 70-fold decrease.
b0rsuk
Given the cooling requirements, I suppose it would create a completely impassable rift between datacenter computing and other kinds. Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.
Considering that "data centers alone consume 2% of the world's energy", I think it's worth it.
xvedejas
It seems likely that the more efficient our processors become, the larger share of the world's energy we'll devote to them [0]. Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors, but I worry about that too [1].
carlmr
>Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors
The trend seems to be that we get only a little bit of extra utility out of a lot of extra hardware performance.
When the developer upgrades their PC it's easier for them to not notice performance issues. This creates the situation where every few years you need to buy a new PC to do the things you always did.
philsnow
> The trend seems to be that we get only a little bit of extra utility out of a lot of extra hardware performance.
"hardware giveth, and software taketh away"
akiselev
Is that the case for our highly centralized clouds? No one's putting a liquid-nitrogen-cooled desktop in their office, so this type of hardware would be owned by companies who are financially incentivized to drive down the overhead costs of commoditized functionality like networking, data replication and storage, etc., leaving just inefficient developer logic, which I assume is high-value enough to justify it.
avmich
I'm sure a lot of developers upgrade their PCs (they were called workstations at one time) because of material problems - keyboards getting mechanically worse (and laptop keyboards can't easily be fixed), screens getting dead pixels, sockets getting loose, hard-to-find batteries holding less charge, and maybe some electronics degradation.
Another reason is upgrades to software, which maintain general bloat and are hard to control; new hardware is easier. That, however, is very noticeable.
On top of that, just "better" hardware - say, in a decade one can have a significantly better screen, more cores and memory, faster storage - makes large software tasks easier (video transcoding, big rebuilds of whole toolchains and apps, compute-hungry apps like ML...)
ganzuul
So by dropping support for old CPUs, the Linux kernel burns a bridge. That conversation makes more sense now.
thebean11
If the developer doesn't notice the performance issues, maybe they move on to the next thing more quickly and get more done overall?
I'm not sure if that's the case, but it may be we aren't looking for utility in the right places.
winter_blue
> Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors, but I worry about that too
I have two points to comment on this matter.
Point 1: The only reason I would worry or be concerned about it is if we are using terribly inefficient programming languages. There are languages (that need not be named) which are either 3x, 4x, 5x, 10x, or even 40x less efficient than a language that has a performant JIT, or that targets native code. (Even JIT languages like JavaScript are still a lot less efficient because of dynamic typing. Also, in some popular compiled-to-native languages, programmers tend to use less efficient data structures, which results in lower performance as well.)
Point 2: If the inefficiency arises out of more actual computation being done, that's a different story, and I AM TOTALLY A-OK with it. For instance, if Adobe Creative Suite uses a lot more CPU (and GPU) in general even though it's written in C++, that is likely because it's providing more functionality. I think even a 10% improvement in overall user experience and general functionality is worth increased computation. (For example, using ML to augment everything is wonderful, and we should be happy to expend more processing power for it.)
john_minsk
Both are important, and #1 can be even more useful. Use #1 to more easily build very complex systems. Once they are working and you are selling them - optimize. Without #1 you can't get #2.
moosebear847
In the future, I don't see why there's anything holding us back from splitting a bunch of atoms and having tons of cheap energy.
tsimionescu
Nuclear fission has well-known drawbacks and risks that are fundamental and will never be engineered away (risk of catastrophic explosion, huge operating costs, risk of nuclear waste leakage, risk of nuclear weapon proliferation). Why do you think the future will be significantly different from the present in this regard?
layoutIfNeeded
>Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.
So like 2009 compared to 2021? Based on that, I'd say even more inefficient webshit.
AnIdiotOnTheNet
Considering that many modern interfaces are somehow less responsive than ones written over 20 years ago even when running those programs on period hardware, I feel certain that you are right.
qayxc
This is a very complex topic and there's a bunch of reasons for that.
And no, software efficiency isn't even the main factor. Not even close.
Just a few pointers: polling on peripherals instead of interrupts (i.e. USB vs. PS/2 and DIN) introducing input lag, software no longer running in ring-0 while being the sole process that owns all the hardware, concurrent processes and context switches, portability (and the required layers of abstraction and indirection), etc.
It's a bit cheap to blame developers while at the same time taking for granted that you can even do what you can do with modern hard- and software.
Everything comes at a price and even MenuetOS [1] will have worse input lag and be less responsive than an Apple II, simply because you'll likely have USB keyboard and mouse and an LCD monitor connected to it.
nicoburns
Some things have improved a lot. Remember how long it took to boot a 2009 PC. I suspect that if hardware performance stagnates then software optimisation will develop again.
jayd16
20 years ago, Windows 98 dominated. That was already well into the slow, time sliced, modern era.
xxpor
But it probably took 80x less time to develop said software.
Jweb_Guru
Processing power is not 80x cheaper now than it was in 2009 unless you can do all your computation on a GPU.
jnsie
Anyone remember Java 3D? 'cause I'm imagining Java 3D!
vidanay
Java 11D™ It goes to eleven!
sago
I don't understand your reference. It seems negative, but it's hard imho to put down the success of Minecraft. Or am I misunderstanding you?
systemvoltage
Javascript emulator in Javascript!
lkbm
Gary Bernhardt's presentation on the "history" of Javascript from 1995 to 2035 is hilarious and seems like something you'd enjoy: https://www.destroyallsoftware.com/talks/the-birth-and-death...
It takes things way beyond simply "emulating Javascript in Javascript", yet is presented so well that you barely notice the transition from current (2014) reality to a comically absurd future.
dmingod666
Do you mean the "eval()" function?
whatshisface
Solid state physics begets both cryogenic technology and cryocooling technology. I wouldn't write off the possibility of making an extremely small cryocooler quite yet. Maybe a pile of solid state heat pumps could do it.
jessriedel
This is true, but the fact that heat absorption scales with the surface area is pretty brutal for tiny cooled objects.
Calloutman
Not really. You just put the whole package inside a vacuum casing.
whatshisface
Heat conduction also scales with thermal conductivity, which is another thing that advances in solid state can bring us.
api
I don't see an impassible rift. Probably at first, but supercooling something very small is something that could certainly be productized if there is demand for it.
I can see demand in areas like graphics. Imagine real-time raytracing at 8K at 100+ FPS with <10ms latency.
adamredwoods
Cryptocurrency demands.
hacknat
> Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.
Just wait 10 years?
Jweb_Guru
Not sure if you noticed, but Moore's Law died quite a while ago now.
ipsum2
Moore's law is fine: https://en.wikipedia.org/wiki/Moore's_law#/media/File:Moore'...
mcosta
For single thread performance.
twobitshifter
Jevons Paradox would predict that we’ll end up using even more energy on computing.
mailslot
It’ll all be wasted. When gasoline prices plummet, everyone buys 8mpg SUVs. If power & performance gets cheaper, it’ll be wasted. Blockchain in your refrigerator.
ffhhj
> Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.
UI's will have physically based rendering and interaction.
inglor_cz
Nice, but requires 10 K temperature - not very practical.
Once this can be done at the temperature of liquid nitrogen, that will be a true revolution. The difference in cost of producing liquid nitrogen and liquid helium is enormous.
Alternatively, such servers could be theoretically stored in the permanently shaded craters of the lunar South Pole, but at the cost of massive ping.
gnulinux
If the throughput is fast enough, 3+3=6 seconds of latency doesn't really sound that bad. There are websites with that kind of lag. You can't use it to build a chat app, but you can use it as a cloud for general computing.
mindvirus
Fun aside I learned about recently: we don't actually know if the speed of light is the same in all directions. So it could be 5+1=6 seconds or some other split.
faeyanpiraat
There is a really fun-to-watch Veritasium video which explains with examples why you cannot measure the one-way speed of light: https://www.youtube.com/watch?v=pTn6Ewhb27k
inglor_cz
Yes, for general computing, that would be feasible.
juancampa
> The difference in cost of producing liquid nitrogen and liquid helium is enormous.
Quick google search yields: $3.50 for 1L of He vs $0.30 for 1L of H2. So roughly 10 times more expensive.
tedsanders
That price is more than a decade out of date. Helium has been about 10x that for the past half decade. I used to pay about $3,000 per 100L dewar a few years ago. Sounds like that price was still common in 2020: https://physicstoday.scitation.org/do/10.1063/PT.6.2.2020060...
Plus, liquid helium is produced as a byproduct of some natural gas extraction. If you needed volumes beyond that production, which seems likely if you wanted to switch the world's data centers to it, you'd be stuck condensing it from the atmosphere, which is far more expensive than collecting it from natural gas. I haven't done the math. I'm curious if someone else has.
mdturnerphys
That's the cost for one-time use of the helium. If you're running a liquefier the cost is much lower, since you're recycling the helium, but it still "costs" ~400W to cool 1W at 4K.
inglor_cz
Nitrogen is N2, though. Liquid nitrogen is cheaper than liquid hydrogen.
I was able to find "1 gallon of liquid nitrogen costs just $0.5 in the storage tank". That would be about $0.12 per 1L of N2.
monopoledance
Edit: Didn't read the OP carefully... Am idiot. Anyway, maybe someone reads something new to them.
Nitrogen won't get you below 10 K, tho. It's solid below 63 K (-210°C).
You know things are getting expensive when superconductors are rated "high temperature" when they can be cooled with LN2...
Helium (He) is practically a finite resource, as we can't get it from the atmosphere in significant amounts (I think fusion reactors may be a source in the future), and it's critically important for medical imaging (each MRI: ~$35k/year) and research. You also can't really store it long term, which means there are limits to retrieval/recycling, too. I sincerely hope we won't start blowing it away for porn and Instagram.
faeyanpiraat
I'm no physicist, but wouldn't you need some kind of medium to efficiently transfer the heat away?
On the moon you have no atmosphere to do it with radiators with fans, so I guess you would have to make huge radiators which simply emit the heat away as infrared radiation?
qayxc
> On the moon you have no atmosphere to do it with radiators with fans, so I guess you would have to make huge radiators which simply emit the heat away as infrared radiation?
Exactly. You can still transport the heat efficiently away from the computer using heat exchangers with some medium, but in the end radiators with a large enough surface area will be required.
Works well enough on the ISS, so I imagine it'd work just as well on the Moon.
ubitaco
I'm also not a physicist, but for the fun of the discussion...
Radiative heat loss scales with the fourth power of temperature. I don't know what temperature the ISS radiators are but suppose they are around 300K. Then I think the radiative surface to keep something cool at 10K would need to be 30^4, or 810000 times larger per unit heat loss. So realistically I think you would need some kind of wacky very low temperature refrigeration to raise the temperature at the radiator, and then maybe radiate the heat into the lunar surface.
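A minimal sketch of that scaling argument, assuming Stefan-Boltzmann T^4 radiation and a 300 K reference radiator:

```python
# Radiated power per unit area scales as T^4 (Stefan-Boltzmann law), so the
# radiator area needed to shed a fixed heat load scales as (T_ref / T)^4.
T_RADIATOR = 300.0  # assumed ISS-like radiator temperature, K
T_TARGET = 10.0     # target operating temperature, K

print((T_RADIATOR / T_TARGET) ** 4)  # 810000.0, the factor quoted above
```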
woah
Maybe the stone that the moon is made from
rini17
I doubt an 80x difference would make it attractive. If it were 8000x, then maybe.
And that's only if you use the soil for cooling, which is a non-renewable resource. If you use radiators, then you can put them on a satellite instead, with much lower ping.
reasonabl_human
I wouldn’t want to be on call when something breaks on the moon....
Astronaut DRIs?
inglor_cz
"Oh no, we bricked a lunar computer! Go grab your pressure suit, Mike! Back in a week, darling... Tell your mother I won't be attending her birthday party."
gpderetta
Obviously we will use a replaceable army of unknowing clones to service the machines. Cheaper to just defrost a new one than sending someone up.
Who volunteers as the original?
px43
It'll be interesting to see if the cryptocurrency mining industry will help subsidize this work, since their primary edge is power/performance.
During stable price periods, the power/performance of cryptocurrency miners runs right up to the edge of profitability, so someone who can come in at 20% under that would have a SIGNIFICANT advantage.
wmf
In this paper, we study the use of superconducting technology to build an accelerator for SHA-256 engines commonly used in Bitcoin mining applications. We show that merely porting existing CMOS-based accelerator to superconducting technology provides 10.6X improvement in energy efficiency. https://arxiv.org/abs/1902.04641
reasonabl_human
Looks like they admit it's not scalable and that it only applies to workloads that are compute-heavy, but it's a 46x increase over CMOS when redesigning with an eye for superconducting-environment optimizations.
Badfood
Cost / hash. If power is free they don't care about power
gwern
Even if power is free, you still get only a limited amount of it to turn into hashes. Power sources like plants or dams only produce so much. Even if you have bribed a corrupt grid operator to give you 100% of the output from a X megawatt coal plant for free, you're still limited to X megawatts of hashing. If you can turn that X megawatts into 10*X megawatts' worth of your competitors' hashing, well, you just 10xed your profit.
adolph
/ time. Time is immutable cost.
agumonkey
If something like that happens it will have far-reaching consequences IMO. I'm not pro-blockchain... but the energy cost is important, and if it goes away significantly, people will just pile 10x harder on it.
woah
How efficiently bitcoin can be mined has absolutely no impact on the power consumption of bitcoin mining. The difficulty adjusts.
andrelaszlo
Not a physicist so I'm probably getting different concepts mixed up, but maybe someone could explain:
> in principle, energy is not gained or lost from the system during the computing process
Landauer's principle (from Wikipedia):
> any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment
Where is this information going, inside of the processor, if it's not turned into heat?
abdullahkhalids
If your computational gates are reversible [1], then in principle, energy is not converted to heat during the computational process, only interconverted between other forms. So, in principle, when you reverse the computation, you recover the entire energy you input into the system.
However, in order to read out the output of computation, or to clear your register to prepare for new computation, you do generate heat energy and that is Landauer's principle.
In other words, you can run a reversible computer back and forth and do as many computations as you want (imagine a perfect ball bouncing in a frictionless environment), as long as you don't read out the results of your computation.
[1] NOT gate is reversible, and you can create reversible versions of AND and OR by adding some wires to store the input.
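To make footnote [1] concrete, here's a toy sketch of a reversible AND (the classical Toffoli gate): the inputs are kept on extra wires, and applying the gate twice uncomputes the result, so no information is erased:

```python
def toffoli(a, b, c):
    """Reversible gate: flips c iff a AND b; inputs a and b pass through unchanged."""
    return a, b, c ^ (a & b)

# With the target wire c initialized to 0, it ends up holding a AND b:
state = toffoli(1, 1, 0)
print(state)            # (1, 1, 1) -- the third wire is 1 AND 1

# Applying the same gate again uncomputes the result; nothing is erased:
print(toffoli(*state))  # (1, 1, 0), the original state
```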
aqme28
I was curious about this too. This chip is using adiabatic computing, which means your computations are reversible and therefore don't necessarily generate heat.
I'm having trouble interpreting what exactly that means though.
patagurbon
But you have to save lots of undesirable info in order to maintain the reversibility right? Once you delete that don't you lose the efficiency gains?
jlokier
You don't need to delete the extra info.
After reading the output, you run the entire computation backwards (it's reversible after all) until the only state in the system is the original input and initial state.
Then you can change the input to do another calculation.
If implemented "perfectly", heat is produced when changing the input and to read the output.
Reading the output will disturb it so the computation won't perfectly reverse in practice, however if the disturbance is small, most of the energy temporarily held in circuit is supposed to bounce back to the initial state, which can then be reset properly at an appropriately small cost.
Quantum computation is very similar to all of this. It too is reversible, and costs energy to set inputs, read outputs and clear accumulated noise/errors. The connection between reversible computation and quantum computation is quite deep.
ladberg
It's still getting turned into heat, just much less of it. The theoretical entropy increase required to run a computer is WAY less than current computers (and probably even the one in the article) generate so there is a lot of room to improve.
VanillaCafe
If it ever gets to home computing, it will get to data center computing far sooner. What does a world look like where data center computing is roughly 100x cheaper than home computing?
whatshisface
It will look like the 1970s.
valine
Not much would change, I imagine. For most tasks consumers care about, low latency trumps raw compute power.
gpm
Flexible dumb terminals everywhere. But we already have this with things like Google Stadia. Fast internet becomes more important. Tricks like VS Code remote extensions to do realtime rendering locally but bulk compute (compiling, in this case) on the server become more common. I don't think any of this results in radical changes from current technology.
tiborsaas
You could play video games on server farms and stream the output to your TV. You just need a $15 controller instead of a $1500 gaming PC.
:)
colinmhayes
You're describing a service that already exists, Google Stadia.
tiborsaas
And GeForce Now, it was a joke ;)
cbozeman
Dumb terminals everywhere. A huge upgrade of high-speed infrastructure across the US since everyone will need high throughput and low latency. Subscriptions will arise first, as people fucking love predictable monthly revenue - and by people I mean vulture capitalists, and to a lesser degree, risk-averse entrepreneurs (which is almost an oxymoron...), both of whom you can see I hold in low regard. Get ready for a "$39.99/mo. Office Productivity / Streaming / Web Browsing" package, a "$59.99 PrO gAmEr" package, and God knows what other kinds of disgusting segmentation.
Someone, somewhere, will adopt a Ting-type model where you pay for your compute per cycle, or per trillion cycles or whatever, with a small connection fee per month. It'll be broken down into some kind of easy-to-understand gibberish bullshit for the normies.
In short, it'll create another circle of Hell for everyone - at least initially.
f1refly
I really appreciate your pessimistic worldview, keep it up!
cbozeman
I basically just base my worldview on the fact that everyone is ultimately self-serving and selfish. Hasn't failed me yet. :)
colinmhayes
The great thing about capitalism is that it replaces bad services with good ones. Only exceptions are natural monopolies that the state fails to regulate, which datacenters are not.
KirillPanov
The problem is the capital cost of the cryocooler.
The upfront costs of a cryocooler, spread out over its usable lifetime (they're mechanical, they wear out), vastly exceed the cost of electricity you save by switching from CMOS to JJs. Yes, I did the math on this. And cryocoolers are not following Moore's Law. Incredibly, they're actually becoming slightly more expensive over time after accounting for inflation. There was a LANL report about this which I'm trying to find, will edit when I find it. The report speculated that it had to do with raw materials depletion.
All of the above I'm quite certain of. I suspect (but am in no way certain) that the energy expended to manufacture a cryocooler also vastly exceeds the energy saved over its expected lifetime as a result of its use. That's just conjecture however, but nobody ever seems to address that point.
aqfp
I'm one of the authors of the published paper that IEEE Spectrum referred to in the post. First off, thanks for posting! We're so delighted to see our work garner general interest! A few friends and relatives of mine mentioned that they came across my work by chance on Hacker News. I've already noticed the excellent questions and excellent responses provided by the community.
This comment might get buried but I'd just like to mention a few things:
- Indeed, we took into account the additional energy cost of cooling in the "80x" advantage quoted in the article. This is based on a cryocooling efficiency of 1000 W at room temperature per watt dissipated at cryotemps (4.2 Kelvin). This 1000 W/W coefficient is commonly used in the superconductor electronics field. The switching energy of 1.4 zJ per device is quite close to the Landauer limit as mentioned in the comments, but this assumes a 4.2 K environment. With cryocooling, the 1000x factor brings it to 1.4 aJ per device (a quick sketch of this arithmetic follows at the end of this comment). Still not bad compared to SOTA FinFETs (~80x advantage), and we believe we can go even lower with improvements in our technology as well as cryocooling technology. The tables in Section VI of the published paper (open-access btw) go on to estimate what a supercomputer using our devices might look like using helium refrigeration systems commercially available today (which have an even more efficient ~400 W/W cooling efficiency). The conclusion: we may easily surpass the US Department of Energy's exascale computing initiative goal of 1 exaFLOPS within a 20-MW power budget, something that's been difficult using current tech (although HP/AMD's El Capitan may finally get there, we may be 1-2 orders of magnitude better assuming a similar architecture).
- Quantum computers require very very low temps (0.015 K for IBM vs the 9.3 K for niobium in our devices). With the surge in superconductor-based quantum computing research, we expect new developments in cryocooling tech which would be very helpful for us to reduce the "plug-in" power.
- Our circuits are adiabatic but they're not ideal devices hence we still dissipate a tiny bit of energy. We have ideas to reduce the energy even further through logically and physically reversible computation. The trade-off is more circuit area overhead and generation of "garbage" bits that we have to deal with.
- The study featured only a prototype microprocessor, and the main goal was to demonstrate that these AQFP devices can indeed do computation (processing and storage). The experience of developing this chip helped reveal the practical challenges in scaling up, and our new research directions are aggressively targeting them.
- The circuits are also suitable for the "classical" portion of quantum computing as the controller electronics. The advantage here is we can do classical processing close to the quantum computer chip which can help reduce the cable clutter going in/out of the cryocooling system. The very low-energy dissipation makes it less likely to disturb the qubits as well.
- We also have ideas on how to use the devices to build artificial neurons for AI hardware, and how we can implement hashing accelerators for cryptoprocessing/blockchain. (all in the very early stages)
- Other superconductor electronics showed super fast 700+ GHz gates but the power consumption is through the roof even before taking into account cooling. There are other "SOTA" superconductor chips showing more Josephson junction devices on a chip... many of those are just really long shift-registers that don't do any meaningful computation (useful for yield evaluation though) and don't have the labyrinth of interconnects that a microprocessor has.
- There are many pieces to think about: physics, IC fabrication, analog/digital design, architecture, etc. to make this commercially viable. At the end of the day, we're still working on the tech and trying to improve it, and we hope this study is just the beginning of something exciting.
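A quick sketch of the cooling-overhead arithmetic from the first bullet above (the FinFET figure is simply backed out of the ~80x ratio, not an independent measurement):

```python
E_AQFP = 1.4e-21   # switching energy per device at 4.2 K, in J (1.4 zJ)
COOLING_WW = 1000  # watts at room temperature per watt dissipated at 4.2 K

wall_plug_energy = E_AQFP * COOLING_WW
print(wall_plug_energy)       # 1.4e-18 J = 1.4 aJ per device, as quoted

# Implied 7-nm FinFET switching energy, backed out of the ~80x advantage:
print(wall_plug_energy * 80)  # ~1.1e-16 J
```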
peter_d_sherman
"Everything becomes a superconductor -- when you put enough voltage through it..." [1] <g>
Footnotes:
[1] Including non-conductors... but you need a lot of voltage! <g>
tromp
This microprocessor composed of some 20k Josephson junctions appears to be pure computational logic.
In practice it will need to interface to external memory in order to perform (more) useful work.
Would there be any problems fashioning memory cells out of Josephson junctions, so that the power savings can carry over to the system as a whole?
qayxc
This is an area that's still in its early stage, but yes, it would seem so.
Modules that are a few kilobytes in size have already been tested.
Even taking cryogenic operation into account, memory of this type consumes 10-100x less power than CMOS technology at roughly the same clock speed [1]
This is a very active field of research and there's a plethora of different approaches.
My guess is that 20 years from now there could be three types of computing:
• cryogenic quantum-computing for specialised tasks
• cryogenic ultra-high-performance computing
• high temperature computing (traditional CMOS-based)
with the first two not being available to consumers or small companies. Maybe it's going to be like in the 1970s and early 1980s when mainframes ruled supreme and you would rent these machines or compute time thereon.
People are accustomed to "the cloud" already, so it's not really a regression going back to centralised computing for a bit.
[1] https://www.researchgate.net/publication/320891331_Experimen...
faeyanpiraat
If you compare CPU power usage with RAM power usage, you'll see RAM is already quite efficient, so even if traditional RAM connected to said microprocessor can't be brought under the magic of this method, it might still work.
(I haven't read the article and don't have any expertise in this field, so I might be wrong.)
qayxc
Cryogenic memory can be orders of magnitude more efficient than traditional RAM, even taking the cooling into account.
nicoburns
Huh, this seemed a bit too good to be true on first reading. But given that the limits on computing power tend to be thermal, and that a superconducting computer presumably wouldn't produce any heat at all, it does kind of make sense.
ska
The system as a whole will produce heat, but less.
nicoburns
True, but usually the problem is removing the heat from the chip, not the total amount of heat produced. If the heat is mostly produced by the cooling system then that problem all but goes away.
ska
Not really, at data centre scale. Heat directly in the CPU is a limiting factor on how fast an individual chip can go, and at the board level it's an issue of getting heat away from the CPU somehow.
But that heat has to go somewhere. When you have rooms full of them the power and cooling issues become key in a way that doesn't matter when it's just a PC in your room.
mensetmanusman
Any entropy reduction activity in one area automatically means a lot more heat is added somewhere else :) (2nd law)
superkuh
Sure. But how efficient are they once you include the power used to keep them cold enough to superconduct? I doubt that they're even as efficient as a normal microprocessor would be.
SirYandi
“But even when taking this cooling overhead into account,” says Ayala, “The AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today.”
the8472
> We use a logic primitive called the adiabatic quantum-flux-parametron (AQFP), which has a switching energy of 1.4 zJ per JJ when driven by a four-phase 5-GHz sinusoidal ac-clock at 4.2 K.
The Landauer limit at 4.2 K is 4.019×10^-23 J (joules). So this is only a factor of 38x away from the Landauer limit.