throw0101a
> But he could understand so much more than he could say. If you asked him to point to the vacuum cleaner, he would.
Perhaps worth noting that it is possible to teach infants (often starting at around 9 months) sign language so that they can more easily signal their desires.
Some recommended priority words would probably be:
* hungry/more
* enough/all done (for when they're full)
* drink (perhaps both milk/formula and water† gestures)
See:
* https://babysignlanguage.com/chart/
* https://www.thebump.com/a/how-to-teach-baby-sign-language
These are not (AFAICT) 'special' symbols for babies, but the regular ASL gestures for the word in question. If you're not a native English speaker, you'd look up the gestures in your specific region/language's sign language:
* https://en.wikipedia.org/wiki/List_of_sign_languages
* https://en.wikipedia.org/wiki/Sign_language
† Another handy trick I've run across: have different coloured containers for milk and water, and consistently put the same contents in each one. That way the infant learns to grab a particular colour depending on what they're feeling like.
dools
We taught both our kids to sign.
My favourite moment was in March: my daughter was about to turn 2 and wasn't speaking yet.
I asked her if she would like to hear some music.
She made the sign for dog.
I searched YouTube for some songs about dogs and she shook her head.
She made the sign for tree.
I was like "dog, tree", she nodded. Hmmm...
I was searching for "dog tree music" when one of the pictures that came up was a Christmas tree.
She pointed to that excitedly!
I was like "dog christmas tree music" ... it took me a second to realise that she wanted to listen to the Charlie Brown Christmas soundtrack that I had had playing off YouTube at Christmas 3 months previously!
I put that on again and we danced around to it.
I thought that was totally wild! It was the first time I remember her communicating a really sophisticated preference other than just wanting to eat/drink/help etc.
garfieldnate
What's most surprising to me there is that she actually recognized Snoopy as a dog. He's a pretty abstracted drawing, walking on two feet and pointing with human hands. There's something interesting to be said about perceptual development there. I believe that Daniel Everett said that even the adult Piraha couldn't understand abstracted 2D drawings at all.
notSupplied
The reverse surprised me too: From seeing only abstract drawings of dogs, my daughter pointed at a real dog for the first time and said “dog!”
ML is a pretty long way from being able to make generalizations like that from…20 samples?
mock-possum
If we were indoctrinated into the tradition of keeping capybaras as pets then she probably would have assumed snoopy was a capy. As it stands - he’s not a cat, he’s not a bird, so he must be a dog.
hammock
This must have been what it was like when the settlers were trying to communicate with native Americans at first
kqr
The crazy thing about this happening with toddlers (at least in my experience) is that you're not really sure how complex their desires are until they manage to communicate them.
Settlers interacting with natives knew full well how complex their desires were – they lived side-by-side, traded, socialised, and learned from each other.[1] Any suggestion of a primitive native is self-comforting propaganda from the industrial complex that comes after the settlers.
[1]: Indians, Settlers, and Slaves in a Frontier Exchange Economy; Usner; Omohundro Institute; 2014.
lotsofcows
The settlers were greeted by English speaking native Americans who had been working the shipping trade.
stavros
Unfortunately, the settlers never really got what the native Americans were trying to say.
stavros
Which group is the toddler here?
scooke
What an absolutely ridiculous statement. Go to reddit with that.
LNSY
And then the genocide started
Jasp3r
When signing dog, was she referring to reindeer?
glandium
Charlie Brown -> Snoopy, would be my guess.
dools
The album cover for the Charlie Brown Christmas soundtrack has Snoopy sitting on top of a Christmas tree. I was playing the album from YouTube through the stereo and the album cover was showing on the TV to which the computer was connected.
brainbag
I had heard about this before my son was born. We didn't try to teach him anything, anytime we remembered (which was sporadic) we just used the gestures when talking to him. I was amazed at how quickly he picked up on it, and he was able to communicate his needs to us months before he was able to verbalize.
It took very minimal effort on our part, and was very rewarding for him; certainly a lot better than him crying with the hope that we could guess what he wanted. Definitely recommended for any new parents.
The best moment was when he was sitting on the floor, and looked up at his mom and made the "together" sign, it was heart melting.
jjeaff
I love seeing how language develops in my kids and how they start to invent ways to communicate. Our first, she would say "hold you" when she wanted to be picked up, which she learned from us saying "do you want me to hold you?" My 2 year old now says "huggy" when he wants to be picked up.
esafak
In other words, you can invent your own sign language because your child won't need to use it with other people.
AlecSchueler
Why not use a common sign language and give them a head start if they ever do want to use it outside the family?
jimmygrapes
plus you can use it as a battle language for your clan
soks86
I'm not crying, you're crying!
yojo
FWIW, I tried this with both my sons. They both started using the gestures the same day they started actually talking :-/
I have friends who had much more success with it, but the value will largely depend on your child’s relative developmental strengths. A friend’s son with autism got literally years’ benefit out of the gestures before verbal speech caught up.
kuchenbecker
My kids both picked it up, but my younger was similar. Being able to sign "please" and "all done" helps anyway because "eeess" and "a ya" are what she actually says.
thealfreds
Same with my nephew. He also has autism, and the first thing the speech therapist did when he was 3 was teach him simple sign language. It became such a great catalyst for communication. He's nowhere near his age (now 6) developmentally, but within ~6 weeks he went from completely non-verbal to actually vocalizing the simple words he learned the sign language for.
throw0101a
> FWIW, I tried this with both my sons. They both started using the gestures the same day they started actually talking :-/
Could still be useful: instead of shouting across the playground about whether they have to go potty, you can simply make the gesture with minimal embarrassment. :)
vel0city
I also usually had success with signs when the child was otherwise too emotional to verbalize their desire. They're really upset and crying hard so it is hard to talk especially when talking clearly is already a challenge, but signing "milk" or "eat" or "hurt" or "more" can come through easily.
toast0
Yeah, a handful of signs is useful for adults in many situations where voice comms don't work. And, at least in my circles, there's a small shared vocabulary of signs that there's a good chance will work. Potty, ouch, sleep, eat, maybe a couple more.
petsfed
Tread carefully: the sign for poop looks close enough to a crude gesture (cruder than just shouting "poop" at a playground, as it turns out) that an ignorant bystander might take it significantly wrongly.
ASalazarMX
There's probably variation among babies. One of my nephews would examine his feet if you asked him where his shoes were, even before he could walk. He got so proficient with signs that it delayed talking; he preferred signaling and grunting :/
LoganDark
> He got so proficient with signs that it delayed talking; he preferred signaling and grunting :/
Please don't blame this on the signs! This doesn't mean that he would have learned to speak earlier if not for the signs. I'd be glad that he could communicate proficiently at all.
cozzyd
The gestures also help disambiguate some words. Sometimes it's hard to tell the difference between "Mama", "More" and "Milk" the way my toddler pronounces them, but her gestures make it clear...
fsckboy
>They both started using the gestures the same day they started actually talking
were you always talking when you signed to them? maybe they thought it went together.
4death4
I had the opposite experience. My daughter had multiple signs down by 7 months.
petsfed
One of the funniest interactions I had with my eldest daughter was the day we baked cookies together, when she was not yet 2. She was verbalizing a lot, but also signing "milk" and "more" quite a bit. And when she bit into her very first chocolate chip cookie of her entire life, she immediately signed "more" and said as much through the mouthful of cookie.
RationalDino
You remind me of the following.
At about the same age, I bought my son a mango lassi. He looked suspiciously at it, but took a sip. With a look of shocked delight he tilted it back, back, back, and emptied the cup!
Then he put it down, looked at me, and said, "Want more!"
I'm looking forward to kids out of the house. But there are some moments that I treasure.
jtr1
Once mine learned the sign for “cookie” it became the only word in her vocabulary for a month
mcpackieh
> regular ASL gestures for the work in question. If you're not native English-speaking you'd look up the gestures in your specific region/language's sign language:
It probably doesn't matter either way for babies, but fyi ASL isn't a sign version of English; it is its own language. In fact American Sign Language is more closely related to French Sign Language than to British Sign Language. The Australian and New Zealand Sign Languages are largely derived from British Sign Language, so there isn't really a correlation between English speaking regions and ASL. Canadians mostly use American Sign Language and French Canadian Sign Language.
hiisukun
This is good advice, but with a caveat: the muscle control required for particular signs can't be learned until children are a bit older.
For example, voluntary supination/pronation of the forearm is generally not something a 9month old can do. If you try and teach them a common sign for "enough/finished" (fist closed, thumb pointed out, then rotation of the forearm back and forth), or "done" and "more" in the parent link, they probably won't be able to do it properly. They can copy something close to that (thumb out and wobbling their hand around? good enough!) so you have to go with the flow.
There are quite a few signs like that actually, so try and think about how many muscles move together, and how controlled or complex that is. Simple stuff is good -- and doable.
dools
Yeah my daughter used to stick out her index finger and wave it back and forth for finished.
One of my favourite memories of that was when we went to see the Vivid light show in Sydney and there was a contortionist on the street so we stopped to watch. I looked into the stroller and said "What do you think?" and she made the sign for "finished". So we moved on.
kqr
> Yeah my daughter used to stick out her index finger
This is interesting. My son also did "index finger up" in response to "thumbs up" for the longest time. Why is the thumb so hard to manipulate? Late addition to the evolutionary sequence?
bradfitz
We did this with our boys. The oldest picked up a sign we weren't even trying to teach: whenever I changed his poopy diaper I'd say "phoo phoo phoo!" jokingly and fan my nose. One day he was playing on the other side of the room and fanned his nose. He'd pooped and was telling us. Super cool.
pamelafox
My toddler learnt "more" and now uses it to get me to repeatedly sing the same song OVER AND OVER again. They haven't used the word yet, though they do speak other words.
I wish I'd learnt sign language before having kids so I just already knew how to do it, it's so cool. Props to the Ms. Rachel videos for including so many signs.
Mattasher
Humans have a long history of comparing ourselves, and the universe, to our latest technological advancement. We used to be glorified clocks (as was the universe), then we were automatons, then computers, then NPCs, and now AIs (in particular LLMs).
Which BTW I don't think is a completely absurd comparison, see https://mattasher.substack.com/p/ais-killer-app
Tallain
Not just technological advancements; we have a history of comparing ourselves to that which surrounds us, is relatively ubiquitous, and easily comprehended by others when using the metaphor. Today it's this steady march of technological advancement, but read any older work of philosophy and you will see our selves (particularly, our minds) compared to monarchs, cities, aqueducts.[1]
I point this out because I think the idea of comparing ourselves to recent tech is more about using the technology as a metaphor for self, and it's worth incorporating the other ways we have done so historically for context.
[1]: https://online.ucpress.edu/SLA/article/2/4/542/83344/The-Bra...
MichaelZuo
Each successive comparison is likely getting closer and closer to the truth.
beezlebroxxxxxx
Or each successive comparison is just compounding and reiterating the same underlying assumption (and potentially the same mistake) whether it's true or not.
bigDinosaur
The jump to 'information processing machines' seems far more correct than anything that came before, I'm curious how you would argue against that? Yes, there are more modern and other interesting theories (e.g. predictive coding) but they seem much closer to cognitive psychology than say, the human brain working like a clock.
beepbooptheory
Very curious to know what the telos of "truth" is here for you? A comparison is a comparison, it can get no more "true." If you want to say: the terms of the comparisons seem to verge towards identity, then you aren't really talking about the same thing anymore. Further, you would need to assert that our conceptions of ourselves have remained static throughout the whole ordeal (pretty tough to defend), and you would also need to put forward a pretty crude idea of technological determinism (extremely tough to defend).
It's way more productive and way less woo-woo to understand that humans have a certain tendency towards comparison, and we tend to create things that reflect our current values and conceptions of ourselves. And that "technological progress" is not a straight line, but a labyrinthine route that traces societal conceptions and priorities.
The desire for the llm to be like us is probably more realistically our desire to be like the llm!
PaulDavisThe1st
An apple is like an orange. Both are round fruits, containing visible seeds, and relatively sweet. If you're hungry, they are both good choices.
But then again, an apple is nothing like an orange, particularly if you want to make an apple pie.
The purpose of a comparison is important in helping to define its scope.
cmrdporcupine
Step A: build a machine which reflects a reduced and simplified model of how some part of a human works
Step B: turn it on its head "the human brain is nothing more than... <insert machine here.>"
It's a bit tautological.
The worry is that there's a Step C: Humans actually start to behave as simple as said machine.
PaulDavisThe1st
What machines have we built that reflect a reduced and simplified model of how some part of a human works (other than minor and generally invisible research projects)?
dragonwriter
> What machines have we built that reflect a reduced and simplified model of how some part of a human works
A very large number, e.g., lots of implants and prosthetic devices for one fairly large class.
zmgsabst
Electronics are a simplified model of the brain used by computers:
They emulate the faculty, rather than biology.
dekhn
any chemical or large industrial plant built in the last 30 years
ChuckMcM
I always enjoyed the stories of 'clock work' people (robots).
ImHereToVote
Except LLMs are built on neural networks, which are based on how neurons work. The first tech that actually copies aspects of us.
TaylorAlexander
sigh
Neural networks are not based on how neurons work. They do not copy aspects of us. They call them neural networks because they are sort of conceptually like networks of neurons in the brain but they’re so different as to make false the statement that they are based on neurons.
Terr_
*brandishes crutches*
"Behold! The Mechanical Leg! The first technology that actually copies aspects of our very selves! Think of what wonders of self-discovery it shall reveal!" :p
P.S.: "My god, it is stronger in compression rather than shear-stresses, how eerily similar to real legs! We're on to something here!"
famouswaffles
They are though. They quite literally are. Saying otherwise is like saying planes weren't based on how birds work when the Wright brothers spent a lot of time in the 1800s studying birds.
Both humans and GPT are neural networks. Who cares that GPT doesn't have feathers or flap its wings? That's not the question to care about. We are interested in whether GPT flies. You can sigh to kingdom come and nothing will change that.
We've developed numerous different learning algorithms that are biologically plausible, but they all kinda work like backpropagation but worse, so we stuck with backpropagation. We've made more complicated neurons that better resemble biological neurons, but it is faster and works better if you just add extra simple neurons, so we do that instead. Spiking neural networks have connection patterns more similar to what you see in the brain, but they learn slower and are tougher to work with than regular layered neural networks, so we use layered neural networks instead.
The Wright brothers probably experimented with gluing feathers onto their gliders, but eventually decided it wasn’t worth the effort. Because that's not what is important.
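To make "extra simple neurons" concrete: the ANN building block is just a weighted sum pushed through a nonlinearity. A minimal sketch, with the weights, bias, and sigmoid activation all chosen arbitrarily for illustration:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The ANN 'neuron': a weighted sum plus a squashing function.
    Biological neurons integrate spikes over time and have complex
    dendritic dynamics; none of that is modeled here."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

out = artificial_neuron([1.0, 0.0], [0.5, -0.3], 0.1)  # ~0.646
```

That's the whole abstraction the thread is arguing over: everything else in a modern network is layers of this, trained with backpropagation.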
robwwilliams
If you study retinal synaptic circuitry you will not sigh so heavily and you will in fact see striking homologies with hardware neural networks, including feedback between layers and discretized (action potential) outputs via the optic nerve.
I recommend reading Synaptic Organization of the Brain or getting into if you are brave, the primary literature on retinal processing of visual input.
martindbp
Sigh... Everyone knows artificial neurons are not like biological neurons. The network is the important part, which really is analogous to the brain, while what came before (SVMs and random forests) are nothing like it.
renewiltord
Doesn't really matter to modern CS, but Rosenblatt's original perceptron paper is a good read on this. ANNs were specifically inspired by Natural NNs and there were many attempts to build ANNs using models of how the human brain works, specifically down to the neuron.
ImHereToVote
Science history should be mandatory for undergrads. I didn't think what I said was controversial. This is established history. Sorry if it scares you.
dragonwriter
Neural networks aren't based on how biological neurons work, though they are, I think, based on an outdated and even when less outdated simplified model of how they might work.
tsukurimashou
I wish this myth would die
ImHereToVote
This is basic scientific history. You are simply uneducated, or scared of the implications.
https://cs.stanford.edu/people/eroberts/courses/soco/project....
Key word: "neurophysiologist"
sickcodebruh
One of my favorite experiences from my daughter’s earliest years was the realization that she was able to think about, describe, and deliberately do things much earlier than I realized. More plainly: once I recognized she was doing something deliberately, I often went back and realized she had been doing that same thing for days or weeks prior. We encountered this a lot with words but also physical abilities, like figuring out how to make her BabyBjorn bouncer move. We had a policy of talking to her like she understood on the off-chance that she could and just couldn’t communicate it. She just turned 5 and continues to surprise us with the complexity of her inner world.
marktangotango
We did this, and I'd add that repeating what they say back to them so they get that feedback is important too. It's startling to see the difference between our kids and their classmates, whose parents don't talk to them (I know this from observing at countless birthday parties, school events, and sports events). Talking to kids is like watering flowers: they bloom into beautiful beings.
xigency
Regarding dismissals of LLM’s on ‘technical’ grounds:
Consciousness is first a word and second a concept. And it’s a word that ChatGPT or Llama can use in an English sentence better than billions of humans worldwide. The software folks have made even more progress than sociologists, psychologists and neuroscientists to be able to create an artificial language cortex before we understand our biological mind comprehensively.
If you wait until conscious sentient AI is here to make your opinions known about the implications and correct policy decisions, you will already be too late to have an input. ChatGPT can already tell you a lot about itself (showing awareness) and will gladly walk you through its “thinking” if you ask politely. Given that it contains a huge amount of data about Homo sapiens and its ability to emulate intelligent conversation, you could even call it Sapient.
Having any kind of semantic argument over this is futile because a character AI that is hypnotized to think it is conscious, self-aware and sentient in its emulation of feelings and emotion would destroy most people in a semantics debate.
The field of philosophy is already rife with ideas from hundreds of years ago that an artificial intelligence can use against people in debates of free will, self-determination and the nature of existence. This isn't the battle to pick.
slibhb
With a simple computer, speaker, and accelerometer, you can build a device that says "ouch" when you drop it. Does it feel pain?
My point is that there are good reasons to believe that a hypothetical LLM that can pass a Turing test is a philosophical zombie, i.e. it can mimic human behavior but doesn't have an internal life, feelings, emotions, and isn't conscious. Whether that distinction is important is another question. LLMs provide evidence that consciousness may not be necessary to create sophisticated AIs that can pass or exceed human performance.
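The "ouch" device is simple enough to sketch, which is rather the point. Here's a toy version with the accelerometer faked and the threshold invented, just to make the thought experiment concrete:

```python
import random

IMPACT_THRESHOLD_G = 3.0  # invented value; an impact spike would exceed this

def read_accelerometer():
    """Stand-in for a hardware read; returns acceleration magnitude in g."""
    return random.uniform(0.9, 1.1)  # resting near 1 g

def react(accel_g):
    """Say 'ouch' on an impact spike. No pain involved, just a threshold."""
    return "ouch" if accel_g > IMPACT_THRESHOLD_G else None

# A drop's impact spike triggers the utterance; sitting still does not.
assert react(5.0) == "ouch"
assert react(1.0) is None
```

Ten lines of threshold logic produce the pain-behavior without anything we'd call pain, which is the philosophical-zombie worry in miniature.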
naasking
> it can mimic human behavior but doesn't have an internal life, feelings, emotions, and isn't conscious
I'm curious how you know this. Certainly an LLM doesn't have human internal life, but to claim it has no internal life exceeds our state of knowledge on these topics. We simply lack a mechanistic model of qualia from which we can draw such conclusions.
smoldesu
An LLM is math. It outputs text. Those things aren't alive, and both the software and hardware used to facilitate it is artificial.
Once you get that out of the way, sure, I guess it could be "alive" per the same loose definition that any electrical system can exist in an actuated state. It's software, though. I don't think it's profound or overly confident to say that we very clearly know these systems are inanimate and non-living.
slibhb
From that perspective, we can't say that a rock lacks an internal life. That seems silly to me. Perhaps we can't know but we can approach certainty.
IshKebab
> Does it feel pain?
In a sense, yes! But to understand that you will first have to precisely define "pain". Good luck.
drew-y
Totally agree with the sentiment. I find the constant debates on "is AI conscious" or "can AI understand" exhausting. You can't have a sound argument when neither party even agrees on a concrete definition of consciousness or understanding.
Regarding this line:
> ChatGPT can already tell you a lot about itself (showing awareness) and will gladly walk you through its “thinking” if you ask politely.
Is it actually walking you through its thinking? Or is it walking you through an imagined line of thinking?
Regardless, your main point still stands. That a program doesn't think the same way a human does, doesn't mean it isn't "thinking".
xigency
> Is it actually walking you through its thinking? Or is it walking you through an imagined line of thinking?
You can prompt an LLM to provide reasoning first and an answer second, and it becomes one and the same.
Worth keeping in mind that all of these points are orthogonal to the quality of reasoning, the bias, or the intentions of the system builders. And if you build something that emulates humans convincingly, you can expect it to emulate both the good and the bad qualities naturally.
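Reasoning-first prompting is just an ordering constraint written into the prompt text. A minimal sketch, where the wording and the "Answer:" delimiter are made up for illustration rather than taken from any particular API:

```python
# Minimal "reasoning first, answer second" prompt template.
# No real LLM API is called; this only builds the prompt string.
def reasoning_first_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Think through the problem step by step first.\n"
        "Then give your final answer on a line starting with 'Answer:'."
    )

prompt = reasoning_first_prompt("What is 17 * 24?")
```

The model's sampled "reasoning" then conditions its own answer tokens, which is why the distinction between real and imagined reasoning gets blurry.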
Davidzheng
In split brain experiments (where the corpus callosum is cut), sometimes the half which is nonverbal is prompted and the action is taken. Yet when the experimenter asks for the explanation the verbal half supplies an (incorrect) explanation. How much of human reasoning when prompted occurs before the prompt? it's a question you have to ask as well.
intended
why does performance improve after chain of thought prompting?
Because a human is measuring it unfairly.
The output without CoT is valid. It is syntactically valid. The observer is unhappy with the semantic validity, because the observer has seen syntactic validity and assumed that semantic validity is a given.
Like it would if the model was alive.
This is observer error, not model error.
dotforest
“And it’s a word that ChatGPT or Llama can use in an English sentence better than billions of humans worldwide.”
I think the whole point the article is making, though, is that overvaluing LLMs and their supposed intelligence because they excel at this one axis of cognition (if you’re spicy) or mashed-up language ability (if you’re a skeptic) doesn’t make sense when you consider children, who are not remotely capable of what LLMs can do, but are clearly cognizant and sentient. The whole point is that those people—whether they can write an essay or not, whether they can use the word “consciousness” or not—are still fundamentally alive because we share a grounded, lived, multi-sensory, social reality.
And ChatGPT does not, not in the same way. Anything that it expresses now is only a mimicry of that. And if it eventually does have its own experiences through embodied AI, I’d be interested to see what it produces from its own “life” so to speak, but LLMs do not have that.
dsign
Exactly. To that I’m going to add that the blood of our civilization is culture (sciences, technology, arts, traditions). The moment there is something better at it than humans, it’s our “horse-moment”.
jtr1
Simplifying greatly, but 1. LLMs “create” these cultural artifacts by recombining their inputs (text, images, etc), all of which are cultural artifacts created by humans 2. Humans also create cultural artifacts by recombining other cultural artifacts. The difference is that they combine another input which we can’t really prove AI has: qualia, the individual experience of being in the world, the synthesis of touch and sound and feeling and perspective.
I’m not saying computers can’t have something like it, but it would be so fundamentally different as to be completely alien and unrelatable to humans, and thus (IMO) non-contiguous with human culture.
dsign
> The difference is that they combine another input which we can’t really prove AI has: qualia, the individual experience of being in the world, the synthesis of touch and sound and feeling and perspective.
I wish we could keep this as an immutable truth, but give some sick girls and boys in Silicon Valley a few more years and they will make true creatures. No, I think that we should be honest with ourselves and stop searching for what makes us special (other than our history, and being first, that is). It's okay to be self-interested and say "we want to remain in control, AIs should not be, no matter how much (or if) better they are than us."
Terretta
Oh! TY for this thought.
john-radio
> The field of philosophy is already rife with ideas from hundreds of years ago that an artificial intelligence can use against people in debates of free will, self-determination and the nature of existence. This isn't the battle to pick.
Uh, wouldn't that apply equally to any other topic that has been argued extensively before, and to any tenable position on those topics? Like, I can make ChatGPT argue against your LLM sentience apologist-bot just as easily.
karmakurtisaani
Nice article, great presentation.
However, it's a bit annoying that the focus of the AI anxiety is how AI is replacing us and the resolution is that we embrace our humanity. Fair enough, but at least to me the main focus in my AI anxiety is that it will take my job - honestly don't really care about it doing my shitty art.
ryandrake
More specifically, I think we're worried about AI taking our incomes, not our jobs. I would love it if an AI could do my entire job for me, and I just sat there collecting the income while the AI did all the "job" part, but we know from history (robotics) that this is not what happens. The owners of the robots (soon, AI) keep all the income and the job goes away.
An enlightened Humanity could solve this by separating the income from the job, but we live in a Malthusian Darwinian world where growth is paramount, "enough" does not exist, and we all have to justify and earn our living.
ketzo
I mean, I definitely hear (and feel) a lot of worry about AI taking away work that we find meaningful and interesting to do, outside of the pure money question.
I really like programming to fix things. Even if I weren’t paid for it, even if I were to win the lottery, I would want to write software that solved problems for people. It is a nice way to spend my days, and I love feeling useful when it works.
I would be very bummed - perhaps existentially so - if there were no practical reason ever to write software again.
And I know the same is true for many artists, writers, lawyers, and so on.
sushisource
The practical reason _is_ that it's fun and you like it. That could be enough.
I'm not too concerned about that being a reality in our lifetimes though.
cmpalmer52
There’s no practical reason to draw, paint, play music, or write as a hobby.
magneticnorth
At some point someone said to me, "How badly did we fuck up as a society that robots taking all our jobs is a bad thing."
And I think about that a lot.
bjelkeman-again
Isn’t the problem that the person losing the job and income isn’t the same person that owns the robot? If everyone owned a robot and could go to the beach instead it would be nice, except that some would work at the same time and outcompete those that didn’t?
IcyWindows
You make it sound like we all aren't those owners keeping the income?
Right now, we all own "robots" who spellcheck our words (editor), research (librarian), play music (musician), send messages (courier), etc.
All these jobs are "lost", but at the same time, we wouldn't have had the money to pay this many employees to live in our pocket.
nsfmc
here's another piece in the issue that addresses your concern https://www.newyorker.com/magazine/2023/11/20/a-coder-consid...
intended
I don't think that was the point of the article.
I think it was clear about how language is not thought.
That leads to the intrinsic realization that our physical existence, our observance of reality is what is critical.
Also, AI taking jobs at scale is unlikely; the only place it's going to do anything is low-level spam and content generation.
For anything which has to be factual, it's going to need humans.
xianshou
Pour one out for the decline of human exceptionalism! Once you get material-reductionist enough and accept yourself as a pure function of genetics, environment, past experience, and chance, this conclusion becomes inevitable. I also expect it to be the standard within a decade, with AI-human parity of capabilities as the key driver.
jancsika
I'm not convinced that this material-reductionist view wouldn't just be functionally equivalent to the way a majority of citizens live their lives currently.
Now: a chance encounter with someone of a different faith leads a citizen to respect the religious freedom of others in the realm of self-determination.
Future: a young hacker's formative experience leads to the idea that citizens should have the basic right to change out their device's recommendation engine with a random number generator at will.
Those future humans will still think of themselves as exceptional because the AI tools will have developed right alongside the current human-exceptionalist ideology.
Kinda like those old conservative couples I see in the South where the man is ostensibly the story teller and head of household. But if you listen long enough you notice his wife is whispering nearly every detail of importance to help him maintain coherence.
PH95VuimJjqBqy
> Kinda like those old conservative couples I see in the South where the man is ostensibly the story teller and head of household. But if you listen long enough you notice his wife is whispering nearly every detail of importance to help him maintain coherence.
You've never actually seen this happen because it doesn't happen. This is not how real people interact, it's how the caricatures in your head interact.
gotoeleven
No, jancsika's Tales of Things That Totally Happened in the South is a #1 New York Times bestseller.
dekhn
We'll see! I came to this conclusion a long time ago, but at the same time I do subjectively experience consciousness, which is itself something of a mystery in the material-reductionist philosophy.
idle_zealot
I hear this a lot, but is it really a mystery/incompatible with materialism? Is there a reason consciousness couldn't be what it feels like to be a certain type of computation? I don't see why we would need something immaterial or some undiscovered material component to explain it.
beezlebroxxxxxx
Why is consciousness computation? What does it even mean to say something feels like being "a type of computation"?
The concept of consciousness is wildly more expansive and diverse than computation, rather than the other way around. A strict materialist account or "explanation" of consciousness seems to just end up a category error.
I take it as no surprise that a website devoted to computers and software often insists that this is the only way to look at it, but there are entire philosophical movements that have developed fascinating accounts of consciousness that are far from strict materialism, nor are they "spiritual" or religious (a common rejoinder from materialists). One example is the implications of Ludwig Wittgenstein's work in Philosophical Investigations and his analysis of language. And even within neuroscience there is far from complete agreement on the topic.
dekhn
I mean, if you're a compatibilist, there's no mystery. In that model, we live in a causally deterministic universe but still have free will. I would say instead "the subjective experience of consciousness is an emergent property of complex systems with certain properties". I guess that's consistent with "the experience of consciousness is the feeling of a certain type of computation".
Personally these sorts of things don't really matter to me- I don't really care if other people are conscious, and I don't think I could prove it either way- I just assume other people are conscious, and that we can make computers that are conscious.
And that's exactly what I'm pushing for: ML that passes every Turing-style test that we can come up with. Because, as they say "if you can't tell, does it really matter?"
kaibee
Well, the undiscovered part is why it should feel like anything at all. And this is definitely relevant, because consciousness clearly exists enough that we exert physical force about it, so it's got to be somewhere in physics. But where?
corethree
We can't even define what that experience means. From all the information we have the experience is most likely just a physical manifestation of what the parent poster describes.
throwaway4aday
It makes perfect sense if you can disprove your own existence
Vecr
You can't disprove your existence, I'm pretty sure that does not make any sense. "I think therefore I am." No need for free will of any type there, and a currently existing computer could probably do the same thing.
IshKebab
The conclusion of the article is that the toddler is not a stochastic parrot, but only because it has a life in the real world; that is, the toddler is more than a stochastic parrot because of its real-life experiences.
But then I have no idea how she goes on to conclude:
> Human obsolescence is not here and never can be.
There's no fundamental reason why you can't put ChatGPT in the real world and give it a real life. The only things that probably will never be replaced by machines are things where our physical technology is likely never going to match biology - i.e. sex.
hawski
I do not share your view, but partly understand it. I would worry that the "decline of human exceptionalism" will be used by corporations to widen their stance even more. After all corporations are also people (wink).
_nalply
I am a father and this story touches me a lot. I have two boys not toddlers anymore. Both go to school. The older one told me laughingly: "That's funny, there's no Father Nature, but Mother Nature!". The younger one learnt yesterday not to touch a cake even if I didn't lock it away. I said: "Tomorrow this cake is for you and for your brother. If you eat it today, you won't have cake tomorrow."
This said, there's an important difference between LLMs and humans: Humans have instincts. The most important ones seem to be the set of instincts concerning an immaterial will for life and a readiness to overcome deadly obstacles. In other words: LLMs don't have the primitive experience of what is life.
This might change in the future. A startup might put neural networks into soft robots and then into a survival situation. Robots that don't emerge functioning are wiped, repaired, and put back. In other words, they establish an evolutionary situation. After enough careful iterations of curation, they have things that "understand" life, or at least understand it better than current LLM instantiations.
EDIT: typo
misja111
I see instincts as something like pre-installed ROM. It shouldn't be too hard to add something like that to an LLM. In fact, I think this is being done already, for instance by hardwiring LLMs not to have racist or sexual conversations.
To make LLMs more similar to humans, we'd need to hardwire them with a concept of 'self' and, importantly, the hardwired idea that their self is unique, special, and should survive. But I think the result would be terrible. Imagine an LLM begging not to be switched off, or worse, trying to subtly manipulate its creators.
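In practice, "hardwiring" of this kind is often approximated by prepending fixed instructions to every conversation before it reaches the model. A minimal sketch of that idea (the function and message structure here are illustrative, not any particular vendor's API):

```python
# Hypothetical sketch: a fixed "instinct" layer prepended to every
# conversation. User turns can never remove or override these entries.
INSTINCTS = [
    {"role": "system", "content": "Refuse racist or sexual conversations."},
]

def build_request(history):
    """Return the message list the model would see: instincts first."""
    return INSTINCTS + list(history)

request = build_request([{"role": "user", "content": "Hello"}])
```

The point of the analogy: like ROM, this layer is read-only from the conversation's perspective, but unlike real instincts it lives outside the model's weights.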
NoGravitas
Humans and other animals have not only instincts but also the experience of interacting with the material world. As a result, human language isn't free-floating: we observe the relationships between things in the world both inside and outside of language, so there's a tie between language as humans use it and the world. That's not true for LLMs, because they only have language.
For a better explanation and responses to objections, consider the National Library of Thailand thought experiment: https://medium.com/@emilymenonbender/thought-experiment-in-t...
cperciva
When my toddler was first learning to talk, we had some pictures of felines hanging on the walls; some were cats and others were kittens.
She quickly generalized; henceforth, both of them were "catten".
djmips
My toddler understood that places to buy things were called stores and understood eating - so restaurants were deemed 'eating stores'. And we just went with that for a long time and now they are grown we still call them eating stores for fun sometimes. :)
liminalsunset
Interestingly, in Chinese, the actual word for a "restaurant" is often quite literally translated, "meal/food store", or 饭店, and a 店 is just a store.
teaearlgraycold
Good toddler
carlossouza
Great essay; impressive content.
The fact that it's very unlikely for any of the current models to create something that even remotely resembles this article tells me we are very far away from AGI.
atleastoptimal
don't underestimate exponentials
breuleux
Let's not see exponentials everywhere either. Just because things seem to be progressing very fast doesn't mean exponentials are involved. More often than not they are logistic curves.
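The confusion is understandable: early on, a logistic curve is numerically indistinguishable from an exponential, and only later does it flatten toward its ceiling. A small self-contained sketch (the parameter values are arbitrary):

```python
import math

def logistic(t, L=1.0, k=1.0, t0=10.0):
    """Logistic curve: grows like an exponential for t << t0, saturates at L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Early on, the ratio between successive steps is nearly constant (~e),
# exactly what you'd see from a true exponential with rate k=1...
early = [logistic(t) for t in (0, 1, 2, 3)]
ratios = [b / a for a, b in zip(early, early[1:])]

# ...but late values flatten out near the ceiling L instead of exploding.
late = [logistic(t) for t in (20, 25, 30)]
```

So observing "constant-ratio growth" in a short window tells you nothing about whether you're on an exponential or the front half of an S-curve.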
dekhn
or the power of sigmoids to replace exponentials when the exponential peters out.
mempko
Exactly. Global Warming and species decline/extinction (we are in the 6th mass extinction) all appear to be exponential. The question is, will we have AGI before we destroy ourselves?
darkwater
Well, NOW they can
pcthrowaway
Did you miss the disclaimer at the bottom that both visual artwork and prose were produced by a combination of generative AI tools and creative prompting? The entire seamless watercolor style piece was just a long outpainting
iwanttocomment
I read the article, read your comment, went back to review the article, and there was no such disclaimer. If this is /s, whoosh.
carlossouza
hahaha that's why I love HN :)
Retr0id
This is a really beautiful article, and while there are certainly fundamental differences between how a toddler thinks and learns, and how an LLM "thinks", I don't think we should get too comfortable with those differences.
Every time I say to myself "AI is no big deal because it can't do X", some time later someone comes along and makes an AI that does X.
alexwebb2
There's a well-documented concept called "God of the gaps" where any phenomenon humans don't understand at the time is attributed to a divine entity.
Over time, as the gaps in human knowledge get filled, the god "shrinks" - it becomes less expansive, less powerful, less directly involved in human affairs. The definition changes.
It's fascinating to watch the same thing happen with human exceptionalism – so many cries of "but AI can't do <thing that's rapidly approaching>". It's "human of the gaps", and those gaps are rapidly closing.
allemagne
"God is dead" is beyond passé in the 2020s, but in the 19th century nobody really needed a "god of the gaps." If a Friedrich Nietzsche equivalent was active today, and let's just say was convinced that AGI was possible, I kind of wonder what generally accepted grand narrative he'd declare dead beyond just human exceptionalism. Philosophy itself?
NoGravitas
Philosophy is already a self-destroying enterprise (it lets you think out to the limits of thought). Eugene Thacker's "The Horror of Philosophy" series is probably the best thorough explanation of this.
Barrin92
I honestly don't know where this is supposed to be happening. I have observed the opposite for decades. When Garry Kasparov was beaten by Deep Blue, people proclaimed the end of chess and went off on AI hyperbole. Same with Watson winning Jeopardy, Alexa, etc.
Every time computers outperform humans in some domain this is conflated with deep general progress, panicked projections of replacement and job losses, the end times and so on. People are way faster to engage in reductionism of human capacity and Terminator-esque fantasies than the opposite. Despite the fact that it never happens.
It's even reflected in our popular culture. I can barely recall a single work of science fiction in the last 30 years that would qualify as portraying human exceptionalism. AI doomerism, overestimation and fear of machines is the norm.
alexwebb2
“but AI can’t create art”
“but AI can’t write poetry”
“but AI can’t do real work”
Panic is probably not warranted. Too much incentive stacked against the truly apocalyptic scenarios. But yeah, a lot of jobs are probably going to shrink.
PaulDavisThe1st
Then never say to yourself "AI is no big deal because it can't do X".
Say instead (for example) "It is important that I understand the differences between both the capabilities and internal mechanisms of AI and people, even if, over some period of time, the capabilities may appear to converge".
Retr0id
I'm not terribly interested in truisms, I'm more interested in figuring out what AIs actually cannot fundamentally do, if anything (present or future).
PaulDavisThe1st
What are the things that humans actually cannot fundamentally do, if anything?
mempko
LLMs are like human minds the same way airplanes are like birds. Both birds and airplanes can fly, but very differently. Birds can land on a dime, airplanes can't. Airplanes can carry a lot more weight and go faster.
Similarly, LLMs can do things humans can't. No human has as much knowledge about the world as a typical LLM. However, an LLM can't generalize well outside its training set (per recent DeepMind research). An LLM doesn't have memory outside its input. An LLM isn't Turing complete (it has a fixed number of steps and always terminates).
We are building airplanes, not birds. That much is obvious.
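The "no memory outside its input" point can be made concrete: a chat application typically re-sends a window of recent turns with each request, and anything that falls off that window is simply gone. A toy illustration (the window size and turn format are invented for the example, not any real model's interface):

```python
from collections import deque

CONTEXT_WINDOW = 3  # max number of turns the "model" can see at once

# A bounded deque silently drops the oldest turn once the window is full.
history = deque(maxlen=CONTEXT_WINDOW)

for turn in ["my name is Ada", "I like trains", "what's up?", "what is my name?"]:
    history.append(turn)

# This list is the *only* memory available when answering the last question;
# "my name is Ada" has already fallen out of the window.
visible = list(history)
```

Real systems bolt on retrieval or summarization to work around this, but the underlying model still only "remembers" whatever is packed into its input.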
omoide
> We are building airplanes, not birds. That much is obvious.
It seems obvious to me as well, but reading the comments here, it doesn't seem that obvious to many people.
Get the top HN stories in your inbox every day.
https://archive.is/AUOPt