
Artificial synapses 10k times faster than real thing


God I wish people would stop using the term "artificial" when they mean "metaphorical". An artificial organ is artificial if you can put it in the place where the original thing used to be, and it works. That's to say, it's equivalent in function.

These 'synapses' aren't actually synapses and they don't emulate even a fraction of the processes you find in their biological namesakes.


> they don't emulate even a fraction of the processes you find in their biological namesakes

That is correct. But they are not supposed to: natural nervous systems are just an inspiration. That perspective is only valid for _nervous system emulations_ - which are just a subset of what we do.

> using the term

'Artificial', in use, stands for "non-natural, non-spontaneous"; it means "done with an aim" (cf. Sanskrit arth, artha, arthin, arya; Greek aretē, àristos). Those items are. They are called - metaphorically, but evidently so - 'synapses', which means "joinings, interfastenings" (see in engineering 'haptic', "touching"), because they join the elements of a "neural network", where 'neuron' remains "a string" - a "connection significant of a relation".

So: at some point someone thought: "What if we obtain something by joining similar elements in a network... Yes, similarly to the other one." It fits, because the model is this model, a model of the natural thing. You have natural neural networks, their model ("just a network"), and that model (of the natural thing) is used as a model (for designing further things).


In the article they proceed to compare the "artificial" size and performance to the "natural" one. I think OP had more of an issue with that.


Tell that to IKEA

Artificial just means made by humans and in some part a copy of the original, not a 100% replacement


I have some of those and they have, in fact, replaced the originals


Wow, the brand name is FEJKA: “to fake”…


IKEA doesn't claim they live 10x longer than the real thing.


I bet they "live" even longer


Artificial just means made by humans.


I think we agree on that it’s made by humans. The discussion is about what is being made. Just because a tennis ball is round, we don’t call it an artificial sun.


> by humans

Not necessarily, though primarily. The core point is in "crafting with an aim".

It is interesting that the first recognized use of the term is for "artificial day", meaning "dawn to dusk", which is the period in the "natural day" (here the term already starts to suggest an opposition - "artificial vs natural") in which you can "purposefully craft" (light allows).


The point is that these are inspired by human synapses, but nowhere near equivalent to them. They do something much simpler.

Comparing them with human synapses is like comparing cellphones to servers. Not exactly apples to apples.




As the core part of some software I could make a game engine.

Or I could do the same thing but name it an “artificial heart”, noting that it’s 10,000 times faster than a human heart.

That makes sense because artificial just means made by humans?


It's more like calling an articulated robot with a manipulator a 'robotic arm', despite being infinitely simpler than a human arm and with much more limited functionality.

I agree that the speed comparison seems like a very misguided metric though.


I had the same initial reaction, but after digging in a little I don’t think “artificial” is the worst description in the world here. I do think the innovation here is much more like an artificial neuron than an artificial synapse though. Fair warning, I’m no neuroscientist and I basically don’t know what neural networks are, but here’s what I’ve gathered.

neuron1 -> neuron2

Neuron1 receives an input signal across a synapse, “processes” that signal, and then either does or does not pass along an output signal to neuron2. I’m sure this is an incredibly deep field of research with a lot of nuance, but I think it remains a reasonable approximation to say that neurons “fire” or not in a binary manner. A lot of the magic takes place within the neuron itself, where unimaginably complex biochemistry dictates how likely a neuron is to fire in response to an input signal. As far as I understand it, this is analogous to the application of a weight to an input in a neural network.

A decent example along these lines is how opiates influence breathing. Neurons exist at a resting negative electrical state, which can be shifted to a sufficiently positive state in response to an input that the signal propagates down the neuron, resulting in the passing on of that signal to the next neuron. Opiates drive that resting negative electrical state to be even more negative, and so in response to a normal "we're running low on oxygen here!" input, a neuron will fail to become sufficiently positive to pass that signal along the chain. In NN parlance, its weight has been changed.
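The firing model described above can be sketched in a few lines. This is a minimal illustration, not anything from the article: the function name, weights, and threshold values are all made up, and the "opiate" effect is modeled simply as a raised firing threshold.

```python
def fires(inputs, weights, threshold):
    """Binary firing rule: the neuron 'fires' only if the weighted
    sum of its synaptic inputs crosses the threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) > threshold

inputs = [1.0, 0.5]    # signals arriving across two synapses
weights = [0.8, 0.6]   # per-synapse weights (the neuron's biochemistry)

# Normal resting state: 0.8 + 0.3 = 1.1 crosses the threshold
print(fires(inputs, weights, threshold=1.0))   # prints True

# The "opiate" case: a more negative resting potential, here modeled
# as a raised threshold, suppresses firing for the very same input
print(fires(inputs, weights, threshold=1.5))   # prints False
```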

This piece describes a memristor that replicates this weighting of inputs to produce outputs, using a material that stores the weights and can be adjusted electrically rather than through biochemistry. There was actually a paper[0] (released just two days ago!) that uses memristors to meaningfully create an artificial neuron with biochemical synapses. Of course, there's a lot of extra machinery involved to actually be biologically useful, but nonetheless this tech can be used as a very simplified drop-in. And as you say, it's like step one in a 10-billion-step process, but I don't think it's totally dishonest to call it an "artificial" neuron, or at least a component of one.

Of course, bragging about how fast and small it is compared to a neuron and synapse is a bit like an elementary school teacher setting up a cool grow-lamp garden to teach kids about sunlight and photosynthesis and then bragging about how they produced an ultra-miniature sun that's so efficient it runs off an outlet :)



There are at least two problems with this model.

For one, some neurons, when activated, don't just send a signal to specific other neurons, but instead release a chemical into an area that affects the activation chances of other nearby neurons. I believe there are also other modes of activation, and other consequences of neuron activation, that make the brain far more complex. It should be remembered that the brain can also activate other glands in the body, which in turn change how the brain works - e.g. when releasing adrenaline, testosterone, oxytocin etc.

For another, as far as we know right now, each neuron decides whether to fire based on much more sophisticated logic than "sum(input*weight) > threshold". In fact, it seems that quite a bit of computation happens within individual neurons, not only at the network level. At the very least, the neuron's activation function is not fixed the way it is in an ANN after training; it changes constantly for various reasons.


Oh, the number of ways this model doesn't match reality couldn't even be counted. I suppose my standard for achieving an "artificial something" in biology is whether it reflects reality accurately enough to learn from, and I only meant to imply that this might.

I will say that my mental model does hinge on the idea that the action of a single neuron at a single point in time in a single context can actually be equated to "sum(input*weight) > threshold". Doing the actual computation to figure out a principled measure of weight (and input, context, and maybe even time for that matter) is way outside our ability, but it seems like something that could be approximated in a simple experimental model!


> don't emulate even a fraction of the processes you find in their biological namesakes.

It isn't expected that e.g. an artificial heart would emulate all processes of the heart. It just needs to cover the function of the heart to some degree, sometimes (usually?) they don't even have a beat.


What do you mean?

> made by people, often as a copy of something natural

There is absolutely no requirement for the "copy" to be functional. See: artificial flowers. Things like "artificial hearts" are the exception and not the norm.


I was thinking about this nit for a while. Artificial flowers do fulfill one purpose of flowers, i.e. decoration. And the parent is right that most other "artificial" things serve to fill in for some original purpose of the natural thing. But the real point is they don't have to fill all the purposes, or even most of them. An artificial eye could be made of glass or it could be a camera a thousand miles away... in the first case, it's just there in someone's head for show, fulfilling the aesthetic purpose of an eye. In the second case it's there for the purpose of seeing something and fills no aesthetic need at all. So an artificial neuron could fill any one of many functions... it could even just be a big plastic model of a neuron in a doctor's office.


Artificial hearts pump blood similarly to how submarines swim: typically with some form of rotor.


Artificial neural networks have been called that since the 1940s. So that's just what they're called. I assure you that people in the field know the difference very well.


Yes, "synthetic analogy" or something similar would be more appropriate. I guess "artificial synapse" always means an approximation (e.g. if it's similar, it's only in some partial way: the device uses ionic transport etc). I think "artificial synapse" has become the nomenclature in the academic literature and they are just going along with that. The article makes a bit of a mess of it though.


Headline: New "artificial car" travels 30 times faster than the real thing and can fly!

Thing they're describing: a bullet fired from a gun


sorry for the downvote, but an essential function of a car is transportation - bullets don't transport anything (not even small things).


Yes, and similarly, a memristor is not a neuron - it only bears a passing resemblance, probably worse than the resemblance between a car and a bullet.


It doesn't have to be, but both are, at some level of abstraction, switches that integrate multiple weighted inputs and set an output level that's used as input for other switches.

If this is what we expect, the analogy holds and the implementation details don't matter.


You're talking about essence, the parent is talking about function.

A horse is not a car, it does not even bear a passing resemblance, and yet both can be used as a means of transportation.


They do transport you to the other side tho




There is no other side.

This is it.


Some bullets actually do transport something, e.g. explosives or ropes.


Technically, they transport their kinetic payload. The engine stays behind.


These things in the article may not have functions essential to synapses though. I’m no biologist. If a subset of functions is ok then …


I read other research recently showing that real synapses store additional dimensions of data than previously thought. Whereas before they were thought to be a simple binary fire/no-fire, they also transfer data through the width and height of the firing. Because of this, neural network designs may be missing a key feature.


That has never been thought by anyone but computer scientists who never looked at a biology textbook.

To begin approximating what a lone spherical synapse would actually do, you'd need to solve 2^n coupled second-order differential equations, where n is the number of ions involved.

That is before you throw in things like neurotransmitters and the physical volume of a cell. Simulating a single neuron accurately is beyond any supercomputer today. The question is how inaccurately we can simulate one and still get meaningful answers.

Then figure out how to do it 100e9 more times.


We are way too stupid to solve this riddle, but I'm rather optimistic we could build something that solves it, or at least build something that can build something that solves it.

I'm looking forward to all the "easy" things it will figure out, sticking us in a loop of "why didn't anyone think of that?" Something like the nth-generation ML offspring solving the building of viable neurons at scale by breeding some single-cell organism.


We didn't solve flight by building a bird. We solved it by building a plane. The problems we care about might not be solved by neurons at all. But right now using ANNs as a model of the brain is like saying that a bunch of kites have the same behavior as a flock of birds.


And nevermind microtubules


It's not exactly a "new" finding that neurons communicate signals to each other by means other than purely electrical signalling. The existence of well over 100 different neurotransmitters has been known for some time, and these are used to create unique signal cascades within the receiving neuron depending on the exact concentration sent. There is nothing equivalent to this in artificial neural networks.

Artificial neural networks are not necessarily using binary neurons. Each activation unit may receive and pass on a continuous signal, but this tends to be a single floating-point number, usually normalized to between -1 and 1 or between 0 and 1. Biological neurons send at minimum hundreds of these continuous-valued signals to each other.

Likely, the closest equivalent artificial neural network would have each neuron make hundreds of connections to every other neuron it connects to, rather than just one. But even that isn't necessarily equivalent: what actually happens inside the cells in response to these signal cascades isn't all that well understood, and it involves quite a bit more than just determining what sort of signal to pass on to the next synapse.
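The "hundreds of signals per connection" idea above can be pictured as each connection carrying a vector of transmitter channels rather than a single float. This is purely a hypothetical sketch: the function name, channel counts, and the simple clamp to [0, 1] are all my own illustration, not an existing architecture.

```python
def multi_channel_activation(signals, weights):
    """Hypothetical neuron whose every connection carries a vector of
    'transmitter channel' values, each with its own weight.
    signals: one vector per incoming connection; weights: matching vectors.
    All channels of all connections are summed into one activation."""
    total = 0.0
    for sig, w in zip(signals, weights):
        total += sum(s * c for s, c in zip(sig, w))
    # clamp to [0, 1], mirroring the normalization mentioned above
    return max(0.0, min(1.0, total))

# two incoming connections, three "transmitter channels" each
signals = [[0.2, 0.9, 0.1], [0.5, 0.0, 0.4]]
weights = [[1.0, 0.5, 0.2], [0.3, 0.8, 0.1]]
print(multi_channel_activation(signals, weights))
```

A standard ANN is the special case where every vector has length one.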


Fascinating! Any idea where you came across this?


A poor facsimile runs faster due to being greatly simplified? Colour me pineapples


It's probably also because it didn't have to be made out of meat and goo by evolution. But ya, that also helps.

> Colour me pineapples

That's a nice phrase, I like it.


Breaking news: website which displays "Hello World" runs faster than Google.


These are ... not the synapses you're looking for. These are resistors. It doesn't seem they can be used to replace weights in hypothetical deep learning circuits. They certainly can't replace real synapses.


> The new programmable resistors are similar to memristors, or memory resistors. Both kinds of devices are essentially electric switches that can remember which state they were toggled to after their power is turned off. As such, they resemble synapses, whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.
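The quoted mechanism - conductance that strengthens or weakens with the charge that has passed through, and persists with the power off - can be sketched as a toy model. Everything here (class name, rate constant, bounds) is invented for illustration; it is not the device from the article.

```python
class ToyMemristor:
    """Toy model of the quoted behavior: conductance (the stored 'weight')
    drifts with the charge that has passed through the device, and the
    resulting state persists while the device is idle."""

    def __init__(self, g=1.0, g_min=0.1, g_max=10.0, rate=0.01):
        self.g = g                  # conductance, the remembered state
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate            # how strongly charge reshapes conductance

    def pulse(self, v, dt=1.0):
        i = self.g * v                          # Ohm's law: current through device
        self.g += self.rate * i * dt            # conductance tracks charge q = i*dt
        self.g = max(self.g_min, min(self.g_max, self.g))
        return i

m = ToyMemristor()
for _ in range(100):
    m.pulse(1.0)        # repeated positive pulses "strengthen the synapse"
print(m.g > 1.0)        # prints True: conductance grew, and stays put unpowered
```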


... In the same way I resemble Brad Pitt

If synapses worked like that we'd all be epileptic, or dead


Wouldn't the real comparison be against common, existing, digital implementations?


Yes, but then you don’t get to put “10k times” in your headline.


Ya, headlines rule all, kind of annoying. I don't see any better comparisons even in the article text; maybe I didn't look closely enough though. I should probably stop being surprised by this stuff someday.


Is IEEE really so pathetically desperate for money they need to show layers and layers of ads on their articles? Sad.


Whenever someone comes up with a new resistor type, they seem to ignore all the research and development done on NAND memory and regular CPU architectures. As with the memristor, all of the R&D behind NAND flash and 5nm technology outweighs all these new types of gates.


What are the main differences between programmable resistors and varistors or photoresistors? I guess the presence of volatile/non-volatile memory, smaller size, and maybe lower-voltage operation (e.g. compared to varistors)?


Last I heard they are still discovering new things about how human synapses work.


But is that the point of artificial "neurons", to mimic the functioning of actual neurons? I'm certain we don't have a complete model of how the natural neurons function.