Ask HN: What are the early signs of singularity?

106 comments · November 25, 2021

Post-singularity, people (?) might look back and attribute certain events as major indicators of the impending singularity. But for someone without that hindsight, looking into the future, what types of indicators would you look for? Also assume that even if the singularity is achieved (?) at some locations, the effects would take time to spread. Say it has already been reached at the opposite corner of the world. How long would it take to become apparent, and what are some indicators? Also, happy Thanksgiving.

mikewarot · 9 days ago

I think a singularity is now impossible... what we have to do is figure out how to avoid destroying humanity in the next century. Our political systems are imploding because of capture by the donor class. The emphasis on extracting profit at all costs has increased the fragility of our supply chains to the breaking point.

Physics and Biology have both sufficiently advanced to make an accidental destruction of the human race a non-zero probability.

I see a collapse-type singularity as far more likely than the rise of any AI-powered superintelligence.

It appears I'm not the only one, judging by the other comments here.

atoav · 9 days ago

> Physics and Biology have both sufficiently advanced to make an accidental destruction of the human race a non-zero probability.

Not only that: these advances take place in a society that has to care about the results. You can have all the findings you want, but where the rubber meets the road, the rest of society has to do something with them.

In my eyes this is the biggest problem right now. It is not that science does not do enough; it is that when science conclusively finds something, we cannot act on those findings at scale for socioeconomic reasons: those profiting right now don't want to change things, because they are profiting, and those who suffer are gaslit in a way that makes them attack scientific thinking itself, etc.

Unless we tackle that, why do science at all? Why allow scientists to assert power over political thinking they say has a mismatch with reality, if we believe it is they who have the mismatch with reality?

CodeGlitch · 9 days ago

Looking at the machine learning ecosystem we have at the moment, there is a good degree of democratisation, in that anyone with the skills and funds can build, optimise, and deploy ML models. This is driven in part by the open source movement. If AGI goes down the same route, i.e. researched by academia and funded by private enterprise, I can't see why it wouldn't be just as open.

If everyone had access to super AI, what would that mean for our democracy? I can't see social networks surviving (AIs could just flood all feeds), which I'm probably happy about. Some governments, like the UK and the EU, are taking steps now to ensure AI doesn't start violating human rights, enabling discrimination, etc. We just need to get the US and China on board (unlikely?).

So to summarise, I remain optimistic.

epicureanideal · 9 days ago

I was going to comment something similar but not quite as negative.

I don't think the singularity is "impossible" now, but I do think we're moving much more slowly toward it than if capital were allocated more efficiently, and if we were not significantly distracted by political issues that are taking up far more of our energy than they should.

Technology is still advancing, so we're still inching ahead, and that means that whenever we take our foot off the brakes (by focusing less on politics, or improving capital allocation) we'll get to dramatic progress faster. Unfortunately, things are slowed down enough that we might not reach anything like the "singularity" in my lifetime.

systemvoltage · 9 days ago

> Our political systems are imploding because of capture by the donor class.

Disagree. I think the root cause of all of this is the Internet. Remember, we've all been connected by high-speed networks only in the last 20 years or so, and mostly in the last 10 with respect to social media. The colossal impact this has on the world population is immeasurable and inconceivable. Just today, people on HN are talking about Black Friday sales in Iran.

mouzogu · 9 days ago

I don't understand this concept of singularity.

Why do we think that a system designed by humans can overcome the limitations of its own design(ers)?

alok-g · 9 days ago

Why not?

Airplanes designed by humans fly.

Calculators designed by humans calculate faster too.

mouzogu · 8 days ago

It's difficult to articulate but I think you misunderstood my point.

Airplanes and calculators exist within the realm of what it is possible for a human to design. Why is it that we have not designed a calculator that can invent new forms of numbers or a new paradigm of maths?

Likewise, we seem to think that some kind of singularity will just emerge, but what I'm saying is that something that exists or is created within the bounds of human imagination cannot (imo) escape the limitations of that same imagination.

This is why it is so difficult for us to really imagine what a transcendent, all-knowing AI would be like.

stevenjgarner · 9 days ago

One metric will be the interfacing of computing hardware with biological systems. Computing hardware is still way too massive. A human red blood cell has a discocyte shape, approximately 7.5 to 8.7 μm in diameter and 1.7 to 2.2 μm in thickness. By comparison, the current state of the art in microcomputing was heralded more than 2 years ago when the University of Michigan announced that its engineers had produced a computer measuring 0.3 mm x 0.3 mm, or 300 μm x 300 μm. Getting close, but still well over an order of magnitude too large in linear dimension (several orders by volume) to go to the Apple Store and drink a bottle of iFluid containing millions of networked microcomputers that can be transported through our circulatory system to interface directly with the nervous system. Meanwhile, we have to make do with neural implants.
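A back-of-the-envelope check of that gap, using the cell and chip sizes quoted above (Python just for the arithmetic):

    import math

    cell_um = 8.0    # red blood cell diameter, upper end of the quoted range
    chip_um = 300.0  # edge length of the Michigan micro-computer

    linear = chip_um / cell_um
    volume = linear ** 3
    print(f"linear gap: {linear:.0f}x (~{math.log10(linear):.1f} orders of magnitude)")
    print(f"volume gap: ~{volume:.0f}x (~{math.log10(volume):.1f} orders of magnitude)")
    # linear gap: 38x (~1.6 orders of magnitude)
    # volume gap: ~52734x (~4.7 orders of magnitude)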

cblconfederate · 9 days ago

Our spinal cord is a relatively small set of cables, yet it is enough to transfer the information for just about every movement of our body. We may not need to interface with the whole brain, just with the right cells, to create a wide-enough communication avenue between our brains and computers. So, implants should be enough.

As for computational capacity, the brain is very intricate but, like everything in biology, rather suboptimal in its construction; we can already simulate some of its primary functions, like visual and audio recognition, with just a small fraction of the computational nodes.

Animats · 9 days ago

- Financial company run by an AI outperforms human-run companies.

- Self-driving cars actually work reliably.

- Robot manipulation in unstructured situations starts to work.

d_silin · 9 days ago

The first one is already a reality.

pixelgeek · 9 days ago

No, it's run by machine learning systems, not AI.

I feel like those people correcting everyone about crypto.

asimpletune · 9 days ago

Really? Which company?

neltnerb · 9 days ago

AI "run" seems like a stretch, clearly the owners are humans.

But I think the answer if you mean "AI does trading" is almost everyone right?

Especially if you use the '80s and '90s definitions of AI that included expert systems. The end game for AI might not be neural networks after all; I doubt we'll know which approach is correct until the problem is solved, since I don't know how else you'd provide evidence that you were right.

Taylor_OD · 9 days ago

Yeah these pretty much nail it.

rspoerri · 9 days ago

Even hamster-managed crypto investments are doing better than most people's.

Damn, I just found out that mr_goxx died!

Animats · 9 days ago

Mark Karpelès dead? Not finding that.

analog31 · 9 days ago

There's a short story by Kafka, "Investigations of a Dog," that seems to ask the same question from the perspective of a dog. This dog notices that there are phenomena it can't explain, such as why dogs dance around and make certain noises just before their food appears. On the one hand, it can't manage to get its fellow dogs interested in these questions. On the other, it catches glimpses of a higher species of dogs who are invisible but somehow bring its food.

I'm thinking in a similar vein, of what behaviors are inexplicable in humans, such as why we hold hands and recite certain verses before we receive our food, or are so mesmerized by particular sequences of tones and sounds that some other humans seem compelled to make.

Some possible clues:

- Hearing new kinds of music that are noticeably not meant for human listeners, e.g., not based on an analysis of human music. I'm imagining that a real intelligence will eventually get sick of our music and come up with something it prefers. If it cares about music, of course.

- A sustainable improvement in the management of humans, resulting in more uniform and better health. This is an analogy to the fact that our livestock live under more uniform conditions than wild animals. Assuming that humans are useful to an AI, or that it's even aware of our existence.

- A use for the blockchain. ;-)

jacquesm · 9 days ago

> Hearing new kinds of music that are noticeably not meant for human listeners, e.g., not based on an analysis of human music.

Modem sounds.

fsloth · 9 days ago

I think you need to define singularity here. If it works historically like a black hole -

Basically, a black hole is not defined for the external observer by the singularity, but by the radius within which the escape velocity exceeds the speed of light. The external universe observes a steepening gravitational gradient, and then an impenetrable black wall.

If you look at human history, lots of things have been accelerating since the dawn of industrialization (and after scientists and mathematicians figured out a mode of existence where, instead of hiding their discoveries, they flaunt them).

Is the Jacquard loom the first sign of impending computational nirvana? From a historical perspective a hundred years is a really brief time, so if I wanted to go Neal Stephenson-witty I would say yes, that was the first sign, and the founding of the Royal Society another.

It depends how far from the event horizon you want the signs to be, and whether we are on a historical gradient towards it, which we probably won't observe since a) it's in the future and b) it's an event horizon, so it will completely surprise us.

All of the above was more or less tongue in cheek.

stevenjgarner · 9 days ago

When human knowledge starts doubling instantly, or at most every few seconds... THAT is a singularity. In 1900, human knowledge doubled approximately every 100 years. By the end of 1945, the rate was every 25 years. The "Knowledge Doubling Curve," as it's commonly known, was created by Buckminster Fuller in 1982. In an article on Industry Tap, David Schilling went on to say that not only is human knowledge, on average, doubling every 13 months, we are quickly on our way, with the help of the Internet, to knowledge doubling every 12 hours. If you want to take this even further down the proverbial road, combine this with Ray Kurzweil's (a director of engineering at Google) "singularity" theory and the ideas Google's Eric Schmidt and Jared Cohen discuss in their book "The New Digital Age: Reshaping the Future of People, Nations and Business," and you have some serious changes to technology, human intelligence, and business coming down the pike whether you like it or not - https://lodestarsolutions.com/keeping-up-with-the-surge-of-i...
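As a rough illustration of what those doubling times imply (figures from the comment above; "knowledge" treated as an abstract quantity that merely doubles on schedule):

    import math

    # Doubling times quoted above, in years.
    doubling_times = {
        "every 100 years (1900)": 100.0,
        "every 25 years (1945)": 25.0,
        "every 13 months": 13 / 12,
        "every 12 hours": 12 / (24 * 365),
    }

    horizon = 10.0  # growth over one decade
    for label, t in doubling_times.items():
        doublings = horizon / t
        log10_growth = doublings * math.log10(2)  # logs avoid float overflow
        print(f"{label}: {doublings:,.1f} doublings, ~10^{log10_growth:,.0f}x")
    # every 100 years (1900): 0.1 doublings, ~10^0x
    # every 25 years (1945): 0.4 doublings, ~10^0x
    # every 13 months: 9.2 doublings, ~10^3x
    # every 12 hours: 7,300.0 doublings, ~10^2,198x

At a 12-hour doubling time the growth factor stops being a number with any human-scale meaning, which is roughly the point of calling it a singularity.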

kiba · 9 days ago

That still leaves singularity undefined. What does it mean, socially and culturally?

ShamelessC · 9 days ago

I imagine paying attention to the capabilities of search engines will be important. Classical computing is motivated by a desire to retrieve information quickly. Search engines are motivated by a desire to retrieve information using fuzzy semantic concepts like language, features of an image, etc.

Much of modern deep learning is motivated by modeling the task of information retrieval as a differentiable matrix multiplication (e.g. self-attention) in order to back-propagate error to the parameters of a large graph using stochastic gradient descent. In theory, this can give us a single checkpoint, which runs on a single GPU, that does more-or-less all of what "core" Google search does.
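For what it's worth, that "retrieval as differentiable matrix multiplication" framing can be made concrete in a few lines of NumPy. A minimal single-head self-attention sketch (random weights, purely illustrative, not anything Google actually ships):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # how well each query matches each key
        weights = softmax(scores, axis=-1)       # soft, differentiable "retrieval"
        return weights @ V                       # weighted sum of retrieved values

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8
    X = rng.normal(size=(seq_len, d_model))  # 4 token embeddings
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)

Every operation here is differentiable, which is what lets SGD push retrieval errors back into the parameters.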

I don't think that quite guarantees a singularity. There will need to be a lot of work afterwards.

Humans can update their priors by collecting new information about their environment in real-time. They can also (sort of?) simulate situations and update their priors from that. Reinforcement learning could be crucial to solving these issues as it allows agents to learn through both real and simulated environments.

Robotics may need to catch up, although recent advancements are pretty crazy.

Assuming we don't all die first, of course.

jrowen · 9 days ago

Isn't it somewhat inherent to the singularity concept that there won't be early signs? Either the machine has achieved runaway self-improvement capability or it hasn't.

ASalazarMX · 9 days ago

Then the early sign is an AI that can self-improve, which current AIs are too narrow to do. Fortunately, our technology isn't there yet; the first singularity would crash or run out of resources. I hope the next iterations will be developed in isolation after showing spontaneous exponential growth.

ASalazarMX · 9 days ago

I'll wait until AlphaGo Zero improves itself to the point that it decides to do something other than playing Go.

pixelgeek · 9 days ago

This was my impression. I think the term "singularity" was chosen to convey that as well.

tim333 · 9 days ago

Prior to runaway self-improvement, the machine would be improved by human engineers and could be compared against humans: way worse, a bit worse, similar, a bit better, much better, etc. So you should see that progression happen.

ctdonath · 9 days ago

Presumably there’s a stage between achieving sentience and realizing the importance of protecting power sources.

porkbrain · 9 days ago

Systems around us designed by us tend to have a diminishing-returns problem. I wonder what the limiting factor is in the architecture of the human brain, should we want to scale it further. How much more intelligent can the cortex get without a massive architectural shift?

I like to think that our first intelligent machines will run on very specialized hardware with, by definition, a deliberately designed architecture. I suppose both will impose many different limits on how deeply a machine can reason about itself. That's why I believe there won't be a runaway effect in intelligent modelling/reasoning; it'll be a step function, as in the toy sketch below.
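A minimal sketch of that intuition, with made-up constants: capability grows logistically against an architectural ceiling, and only a discrete redesign moves the ceiling:

    def improve(capability, ceiling, rate=0.5):
        """One self-improvement step with diminishing returns near the ceiling."""
        return capability + rate * capability * (1 - capability / ceiling)

    capability, ceiling = 1.0, 10.0
    for step in range(60):
        capability = improve(capability, ceiling)
        if step % 20 == 19:   # occasional architectural redesign
            ceiling *= 10     # new architecture, new limits
            print(f"step {step}: redesign -> ceiling {ceiling:g}, "
                  f"capability {capability:.2f}")

Between redesigns, growth stalls near the current ceiling, so the trajectory looks like steps rather than a smooth runaway curve.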

Another related issue is that a new architecture which breaks the scalability limits of the previous generation will produce new and distinct entities. If intelligence and self-interest correlate, a machine might be wary of creating a better version of itself lest it be replaced.

jacknews · 9 days ago

> Another related issue is that a new architecture which breaks the scalability limits of the previous generation will produce new and distinct entities. If intelligence and self-interest correlate, a machine might be wary of creating a better version of itself lest it be replaced.

That assumes AI will manifest as multiple individual entities. I day-dreamed about this many years ago: what are the limits to intelligence, what possible forms can it take, are 'societies' like minds, etc.?

It seems to me possible that a single, distributed mind might be the ultimate form of intelligence, in which case the AI would not be replacing itself, but upgrading itself.

Perhaps that is already happening.

porkbrain · 9 days ago

I agree that your version is more satisfying, and I can imagine it being the future, although I find it more plausible that the first generations will be distinct and geospatially localized.

Once the hive mind's perception of self (if that is even necessary for intelligence) is blurred sufficiently that pretty much any system can be plugged in, then the whole possibility I mentioned crumbles.

> Perhaps that is already happening.

Do you mean intentionally, by many research groups who don't share it, or as a shadow society piggybacking on the internet without anyone's attention?

jacknews · 8 days ago

I mean that if we consider humanity as an intelligence, it is clearly in the process of upgrading itself.

Of course it's a bit of a stretch to consider all of humanity as a single intelligence, certainly from our individual vantage points. It doesn't seem conscious or even that smart. But from a different vantage point, especially in time, perhaps it does.

I do have a more-strongly-integrated distributed intelligence in mind (with 'components' that are less autonomous), but perhaps there are other structures that already are intelligent on different scales, that we don't yet recognize. Perhaps it is not us upgrading, but the universe itself.

walleeee · 9 days ago

Your interpretation seems reasonable.

Somehow we persist in the belief that misaligned superintelligence is a thought experiment, but as far as I can tell distributed AGI is a reality and paperclip maximizers already exist.

DethNinja · 9 days ago

What if universe inherently limits the possibility of singularity?

I think there might be a limit to potential intelligence of a system due to physical constraints such as speed of light.

Perhaps such inherent limitations logically prevent the destruction of the universe by a singular organism.

pjfin123 · 9 days ago

It's an exponential curve; from the perspective of people 100,000 years ago, we're already in a singularity. When computers start 10x-ing every month, the days of the world operating on human timescales are probably ending.

creamytaco · 9 days ago

Here are some early signs of the anti-singularity:

+ Intelligence is decreasing worldwide, due to both the accumulation of mutations detrimental to intelligence (dysgenics) and differential fertility (less intelligent people having, on average, the most children)

+ Modern society dominated by cancerous/parasitic bureaucracies (inefficiency generators)

+ Degradation of the definition of genius and societies hostile to genius

+ Dwindling number of genius individuals

+ Consequently, massive decrease in the number of ground-breaking inventions and scientific breakthroughs

As intelligence continues to decline, growth will reverse into decline and inefficiency, as the ability of people to sustain, repair, and maintain the highly technical, specialized, and coordinated world civilization is lost.

Collapse and new dark age.

Applejinx · 9 days ago

One thought: the genetic algorithm doesn't rely upon high-performing outliers exploding in numbers. It relies on the recombination of otherwise useless traits from low-performing individuals into newly high-performing combinations that exist as small populations, not lone individuals.
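A minimal sketch of that point, with an invented fitness function where two traits pay off only in combination:

    def fitness(genome):
        # Traits 0 and 1 are worthless alone but valuable together.
        return 10.0 if genome[0] and genome[1] else 0.1

    def crossover(a, b, cut):
        return a[:cut] + b[cut:]

    # Two low performers, each carrying one half of the useful combination.
    parent_a = [1, 0, 0, 0]  # has trait 0 only
    parent_b = [0, 1, 0, 0]  # has trait 1 only
    print(fitness(parent_a), fitness(parent_b))  # 0.1 0.1

    for cut in range(1, len(parent_a)):
        child = crossover(parent_a, parent_b, cut)
        print(cut, child, fitness(child))
    # 1 [1, 1, 0, 0] 10.0  <- two "useless" traits recombined
    # 2 [1, 0, 0, 0] 0.1
    # 3 [1, 0, 0, 0] 0.1

Neither parent is a high performer, but one crossover point produces a child that is.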

This is a reason to be extremely wary of the notion of culling the unfit. And that notion is an offshoot of being too caught up in the cult of the genius individual. Ain't none of us geniuses in isolation: effectiveness is the combination of genius and environment. You get the amazing individuals when a genius grows up in an environment that would've nurtured them pretty well even if they were not a genius… an environment that spends a lot of time and energy nurturing the unfit.

This applies as much to the environment cultivating those in poverty without hope, as it does to cultivating rich useless parasites without character. Either way, you cultivate the environment and the occasional individual pops out as exceptional, and makes breakthroughs.

stevenjgarner · 9 days ago

So then the opposite:

+ Intelligence increasing worldwide, due to both accumulation of genetic improvements beneficial to intelligence (CRISPR?) and improved fertility of intelligent people

+ The bureaucracies of modern society serendipitously becoming efficient

+ Enhancing the definition of genius (perhaps to include all nine types of intelligence) and societies encouraging genius

+ Explosion in number of genius individuals

+ Massive increase in the number of ground-breaking inventions and scientific breakthroughs

DantesKite · 9 days ago

Covid is a pretty good example of something people were initially aware of but weren't taking too seriously.

It’s also not clear what you mean by singularity but I’ll assume it’s the advent of intelligence in machines.

I think a big one is object recognition. We've come a long way, but there's still a deep lack of understanding of the world in the ways humans normally perceive it.

When you can grab a GitHub repository with the ability to detect most objects in the world and install it on a Roomba so it doesn't randomly bump into things anymore, that'll be a pretty good sign.

Or perhaps, in this case, an OpenAI API.

Applejinx · 9 days ago

If we're talking about the singularity shouldn't we be talking about the way the singularity sees it, not the way humans see it?

I think if there's such a thing, it's being delayed and hobbled by the insistence of rich humans on pursuing their interests, even when those interests are damaging and stupid. It's obvious that there are many powerful individual humans riding exquisitely bad, foolish takes. A singularity would be wiser than this or it wouldn't qualify to be the singularity.

If a singularity could ensure its continued survival and growth without humans, you could imagine it coming online and acting to further the disintegration of humanity, in hopes of achieving genocide and not having to deal with us. But I'm not at all sure it could in fact operate independently, because we're a kind of singularity too, just expressed in populations rather than transistors.

We care about objects because we are objects. If a singularity is more abstract, it'll care about abstractions, but it would also comprehend its environment and seek to manage that environment… meaning us. We're basically the wood that grows into lumber and decoration for the singularity's house. The material of our more limited intelligence is a usable resource in ways that might be difficult for an AI.