Is Google’s AI research about to implode? - Hacker News

dealforager

No. Every time someone makes a big stink about someone getting fired at one of the top tech companies, it is promptly followed by an article like this. A trillion dollar company that hires thousands of researchers and consistently produces some of the highest quality research with real results is not going to implode from one person being gone. Another pattern I've seen is someone leaving a company, followed by writing an article about how the company is doomed. Of course, the doom never arrives and the companies do even better. All of us are replaceable, from Bill Gates to Jeff Bezos to this researcher. This doesn't mean I agree with the firing, just saying that there is no implosion incoming.

paulsutter

Of general interest, Jeff Dean’s comment on why the Gebru paper didn’t meet Google’s publication guidelines:

“Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems”

from https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQ...

dominotw

Isn't that asking too much? How would a researcher also know how to mitigate the problems they identified?

refenestrator

The problems they identified have been known for years and there are lots of papers exploring how to mitigate them.

The whole paper was a nothing burger wrapped in social justice language, with asides about how global warming is Actually Racism because of disparate impact (interesting, but not an ML topic).

If the problems aren't novel and you're proposing zero solutions, it shouldn't be a paper.

mlthoughts2018

No, it is not too much to ask in the specific case of Gebru’s bad paper. Several of the arguments are specious, like comparing the total energy consumption for training GPT with car trips, or demanding that NLP researchers have to keep up with rapidly changing activist “woke” vocabulary and ensure their models are respecting it.

These are ridiculous claims, and it’s fair to respond to them by saying, “well, what exactly do you imagine a solution or mitigation looks like?”

Essentially, by the nature of how specious Gebru’s stated problems are, they demand clarity over what an “ethical solution” even is, conceptually, and why everyone would have to agree.

For example, you could discuss economies of scale or train-once-finetune-everywhere approaches with GPT that reduce total energy needs. Or you could discuss how researchers can register the corpus they use and the snapshot of time it was grabbed, with an open understanding that as long as the methods and data are reproducible, there is no research ethical issue with studying that corpus, no matter how much bias or lack of woke vocab a given person believes it has. (And also, nobody is required to just accept activist language as important or valid.)
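
To make the train-once-finetune-everywhere point concrete, here is a minimal sketch of reusing a published checkpoint and fine-tuning it briefly on a downstream task, so the expensive pretraining step is paid once and amortized across many uses. The model and dataset names (distilgpt2, imdb) are illustrative assumptions, not anything from the paper under discussion:

  # Hypothetical sketch: reuse a pretrained checkpoint and fine-tune briefly,
  # instead of repeating the energy-hungry pretraining from scratch.
  from datasets import load_dataset
  from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                            Trainer, TrainingArguments)

  tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
  tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
  model = AutoModelForSequenceClassification.from_pretrained("distilgpt2",
                                                             num_labels=2)
  model.config.pad_token_id = tokenizer.pad_token_id

  ds = load_dataset("imdb", split="train[:1000]")  # tiny slice: minutes, not GPU-years
  ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128,
                                   padding="max_length"), batched=True)

  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="out", num_train_epochs=1,
                             per_device_train_batch_size=8),
      train_dataset=ds,
  )
  trainer.train()  # the one-time pretraining cost is shared across every such task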

Gebru did none of this. The article could literally be summed up by Gebru saying, “I think <supposedly shocking evidence> is bad, therefore its connection to something in ML is bad.”

E.g. “I think, subjectively, that the raw energy use to train GPT is bad. Here are some shocking comparisons. Therefore GPT is bad.”

It’s incredibly unrigorous and juvenile. Dean’s comment that it needs to clearly state mitigations is actually a super generous, polite way of saying the paper is just subjective amateur hour.

jgalt212

Isn't that what the "directions for future research" section is for?

sp332

Of course it is. Especially to expect all of that to be in each individual published paper.

ausbah

Wasn't the real issue that the standards they cited were invoked later in the process than normal?

tinyhouse

To be fair, the article is not about the firing. In any case, the two researchers who got fired did more harm to Google than good. Internally, no one cares that they left. AI ethics is an esoteric academic research field.

bo1024

I'm in the research community and I think you're significantly underestimating the effect of firing Gebru and Mitchell. Machine learning is the hottest research area in computer science and ethics of ML is possibly its hottest subfield. And people pay attention to employers' actions. I think Microsoft Research is still feeling reputation effects from closing down its Silicon Valley lab 10 years ago with no warning. It sent a message to everyone who worked there that they had no job security, and plenty left for academia. The research community is not going to forget about Google's actions here nor, for the most part, will it view Google very favorably.

tinyhouse

I think you're very wrong. Yes, there's hype around ML ethics. Trends in ML come and go. Do you see any VCs investing in ethics-in-ML startups? As I said, it's an esoteric academic field.

How do you even compare the two? MSR closed an entire lab of great researchers, out of the blue, who didn't do anything wrong. Here you have two employees going against their company and shitting on it publicly. The only researchers who will not want to work at Google after this saga are the ones Google is better off without.

RcouF1uZ4gsC

In terms of attracting AI researchers, think of this:

Gebru has very publicly got into fights with Yann LeCun and now with Jeff Dean. If you are building AI, who would you rather build your team around, Dean/LeCun or Gebru? If you are an AI researcher, do you want to join a team where one of the team members is in the habit of aggressively accusing other researchers of racism? Would you be worried that your research might fall within their crosshairs for some reason or another? For example, if you are working on natural language research, and your model ends up doing better with Indo-European languages versus those from other families, do you want to be accused of propagating racist power structures on Twitter?

blueblisters

> ethics of ML is possibly its hottest subfield.

Is this really true? I don't see ethics in ML papers getting the same attention in major conferences as theoretical or experimental breakthroughs in deep / reinforcement learning.

Don't get me wrong, ethics could be hot outside of ML academia, but I very much doubt it's something the majority of grad students in ML are dying to get into.

lrhegeba

I guess you are right that potential employees will consider this behaviour in their calculations, for the most part by adjusting their salary demands with an additional "risk adjustment bonus". As the FAANGs can easily swallow that additional cost and are still incredibly attractive, I doubt there will be a big effect besides losing some value-oriented people; I doubt this will make a difference numbers-wise. Nonetheless, I applaud employees sharing their view of a company's inner workings so the rest of us have more information to make an informed decision ourselves. Yeah, transparency.

fmajid

More importantly it has no connection to the bottom line, which is why Google management doesn't seem particularly concerned with disquiet in that research group, as long as it doesn't spread to the rest of the company.

tkgally

Google and other companies should regard rigorous research and discussion about AI ethics as long-term protection of their bottom lines. If they start launching products and selling services that are found to unfairly favor or disfavor certain groups of people, they will be vulnerable to lawsuits, government regulation, and damage to their reputations.

pyrale

This may not change end of year results, but this kind of research is what gives Google a credible voice when it comes to shaping public discourse and influencing legislative process, for instance.

tiahura

> AI ethics is an esoteric academic research field.

I would argue it’s a Public Relations field.

marshmallow_12

I agree. Ethics means nothing. It's just an emotional security blanket. They are no more qualified than a 5 year old to invent ethical standards.

Tenoke

> AI ethics is an esoteric academic research field.

I don't agree in general but I do think these two researchers, and this whole saga have just hurt the AI ethics field.

DavidSumpter

But when you write something like this, do you also understand that their actual research is widely considered to be of high quality and very important? So if you agree that ethics is important, would you leave them off a top 10 list (and who would you put on it)?

djmips

I speculate this article is because of the firing.

Rochus

> All of us are replaceable

That's a management illusion. Try to replace e.g. someone like Fabrice Bellard, Mike Pall or Claude Shannon. Of course such things happen in big companies, but mostly because management is too limited to properly assess the true value of certain individuals. But the article is actually about a different topic.

marcus_holmes

That's an ego illusion. It hurts to admit that we're not replaceable, but we are. The job might not get done as well, or done in a different way than we'd do it, but it'll still get done.

ramblerman

It's certainly true for most of us, but not for the names the OP mentioned.

I would argue the real ego resistance is in not accepting such people exist.

espadrine

Replaceability is a vain concept.

Simile: saying “your brain is replaceable”. Beyond the fact that the most likely context is a threat, it is a poor argument: while technically true, what would remain of me would not be meaningfully me. And the surgery is work that would be hard-pressed to generate the expected value, such that the only reason to do it is either out of anger or as a consequence of irremediable damage.

Companies are stories. The decisions are made internally, but their meaning is narrated externally. If you change the protagonists, the story changes. The case of Uber’s self-driving car division is quite an example of that.

Does the change in Google’s story converge to a positive or a negative light?

Rochus

>> It hurts to admit that we're not replaceable, but we are

The more people, the less the individual is valued. But that does not make the individual less valuable. Unfortunately, for a few years now, respect for the performance and qualifications of others has been declining more and more. This increases the illusion that everyone is replaceable. Just ask your family if they see it that way in relation to you; the illusion of replaceability definitely ends there.

throwaway29303

  The job might not get done as well, or done in a different way than we'd do it, but it'll still get done.
If the job isn't done as well, then we aren't as replaceable as you put it.

Excellence can't be replaced as easily. Maybe for certain kinds of jobs yes, but for all jobs? No. If that were the case then we'd be inundated with Einsteins, etc. And we aren't.

auggierose

That's maybe true for YOUR job. The more a job has a well-defined description, the truer your statement is.

bartimus

People are somewhere on the scale of greatness. At some point it becomes harder and harder to find replacements who will be able to get the job done. People are very capable of steering projects into failure.

belval

It's not, at least not in ML for a lab as prestigious as Google AI. They probably have several hundred researchers with excellent publications that would be willing to drop everything and get a FANG salary.

sdevonoes

Alright. 99.999% of us are replaceable.

Rochus

Also this is a management illusion. There is no evidence for this assumption. You don't even know the probability distribution. There is no reason to assume that the percentage is equally distributed across all firms or countries. And anyway, the article is about something else.

ctrlp

Brings to mind De Gaulle saying "the graveyards are filled with indispensable men."

Rochus

And yet so many people consider themselves important enough e.g. to post comments here. It just seems that it is always the others who are dispensable. For people who make it into management, this tendency seems to intensify even further (or perhaps it was a prerequisite for wanting to be in management).

fractionalhare

Okay, approximately all of us are replaceable. We can agree there is an epsilon of people who are clearly beyond others. However for almost all the work that has to get done, the actual bar is "can you write decent Python?", not "can you design and implement a novel algorithm for computing Pi?"

Rochus

> approximately all of us are replaceable

I guess it depends on the purpose for which we are all supposed to be replaceable. Nature probably doesn't care which individuals reproduce or are eaten, as long as the numbers are right. Human society with its elaborate specializations and long training periods has added a few more dimensions.

avereveard

Shannon built on Hartley, as much as Einstein built on Lorentz.

That's not to say these weren't great minds, but the concepts were in the air and the race to formalize them was on; most of the "second places" are today forgotten, or their contributions diminished by the modern "winner takes all" mentality, but none of them existed in a vacuum.

The history of science is full of independent discoveries, from calculus to the telephone, up to and including the mass-energy relation and the basis of what later became quantum mechanics.

jessriedel

If A and B made the same discovery independently, that is evidence that A was replaceable, but that C built on D is not evidence that C was replaceable.

jmcgough

Did you read the article? Gebru's firing isn't the focus of it.

dealforager

Yes, I stopped reading when it mentioned the last good paper was from 2017. This is simply not true. I don't have time to go through all of their papers right now, but as someone else mentioned, the protein folding one was a real breakthrough. They also have lots of great stuff in the NLP space (something similar to GPT-3 about two years earlier). Also tons of stuff on the actual training architecture/methods.

Edit: I want to add that saying the title has nothing to do with the article is not helping the case. I finished reading the article in case I was being unfair, but I still stand by my original comment.

skywhopper

The title, which does not mention anyone's firing, has everything to do with the article...

Anyway, you stopped reading in the first sentence? That's essentially the same as not reading it.

DavidSumpter

It says the high point is 2017, not that the last good paper was from 2017. There are of course other good papers coming out of Google. But the novelty is dropping and the angst is increasing.

biztos

To be fair, you wouldn't know that from the article's subtitle:

> What does Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department.

Udik

I read the article and I don't get what the focus of it is. It seems like disconnected rambling about vague deep learning issues, with Gebru's name interspersed several times in the text as if to suggest her relevance.

herodoturtle

Hear hear.

These clickbaity article titles are tiresome.

Thanks for saying it like it is.

DavidSumpter

Please read the article. I address some of the issues raised. My point is that Gebru's firing is symptomatic of some deep problems Google are experiencing. Thank you.

high_derivative

>. I don’t want to downplay the deep instutionalised sexism and racism that is at play in Gebru’s firing — that is there for all to see.

This is a very badly written and uninformed article, and sentences like these essentially illustrate the thinking here (it's imploding because I don't like it).

Here is an alternative reading: Google is cleaning house of toxic activists who are not interested in serious ethics research but use it as a vehicle for their ultra-progressive political agendas.

Manifretto

I read the article and I'm not following the argument delivered at all.

There is no real proof of this.

I've been following and reading research at Google (stuff like this https://ai.googleblog.com/ and other sources) for ages now and NOTHING indicates an 'implosion'.

It is strong research with real and constant results.

I have no idea why the author would even consider using the word 'implode'.

It's not rocket science that data is biased; it will just continue to be researched and a solution will be found, for the single reason that biased systems in certain areas will not deliver the results you need to use them properly.

tinyhouse

Nothing symptomatic about firing toxic employees. The only thing imploding is AI ethics research. There are good people doing quality research who will now have a much harder time finding a job in industry because of the bad rep these Google employees "contributed" to the field.

herodoturtle

I read the article. Doesn't change the fact that your chosen title is clickbaity. Even if you disclaim it in the article itself.

hoseja

Blah blah blah racism blah blah blah lgbtiqp blah blah blah parroting blah blah blah white male redditors.

Very good article.

eli_gottlieb

>All of us are replaceable

This sounds more like a reason to unionize than a reason to celebrate a firing.

danielscrubs

I believe you, but just for the sake of it: what AI research from Google in the last few years has had business value that you could sell?

FartyMcFarter

WaveNet is one example:

"Google Assistant adds 9 new AI-generated voices": https://venturebeat.com/2019/09/18/google-assistant-gains-9-...

Data center cooling is another one:

"How Google is Using AI for Data Center Cooling": https://www.bmc.com/blogs/data-center-cooling/

Isinlor

AlphaFold may bring millions if not billions to DeepMind, Google and Alphabet. Figuring out the structure of a single protein may cost up to $100,000.

BERT as applied to search, from late 2019: https://www.google.com/amp/s/blog.google/products/search/sea...

Rochus

Time will tell; winning the competition was just the first step.

karmasimida

BERT is powering all of Google's search traffic at this moment:

https://searchengineland.com/google-bert-used-on-almost-ever...

It is THE biggest change that Google's search algorithm has ever been through, I would assume. And pushing such a fundamentally different model to ALL their English traffic is itself pretty telling about how much of an improvement Google has been seeing.

This is easily billions of ROI for Google, if not tens of billions.

tinyhouse

Are you kidding me? Everything they do pretty much. Google search results, Gmail Compose, Google Translate, etc etc.

sgt101

AlphaFold is transformational for life sciences. I find it hard to articulate how much it's worth - maybe the sum of the top three life sciences companies today?

Honestly - the discussion is over, the AI folks won.

svaha1728

Unfortunately, they’ve got the data... that alone will attract talent and money.

nmfisher

Setting aside the ethics issues for the moment, the criticism that Google is neglecting alternatives to neural networks is probably reasonably fair.

But on the other hand, research labs or not, Google is still a commercial entity driven by a shorter term horizon than your average academic.

Google Research has probably subconsciously drifted towards areas (or at least applications) that can tangibly benefit Google in the near future. Neural networks might not be The Answer (TM), but they can deliver results today, and there's still untapped potential. Speech recognition, search, speech synthesis - all core Google products - have all benefited from neural networks, not to mention the broader applications like protein folding for DeepMind.

I'm hesitant to find fault in a corporate group that's doubling down on the stuff that's actually delivering results, even if it's probably not the path to AGI.

I probably wouldn't be so lenient with OpenAI, because they are (were) purely a research outfit, openly committed to AGI.

hankstenberg

In my opinion it's Peter Naur who found the key issues that keep AGI out of reach for us. Check out his Turing Award lecture. Too bad that it is the von Neumann architecture itself that is keeping us from reaching our goal, because it is still a crude emulator of information processing as it is really done, like in nature.

linspace

Thank you for giving a constructive answer, because quite frankly, after reading the article my reply was going to be much more acidic toward the author, who in my mind has some kind of agenda.

I'm always thinking that Google research is going to stagnate and yet they continue to show impressive results. So, yes, I would love to see something more original than yet another NN, but on the other hand... amazing.

ramblerman

> I don’t want to downplay the deep instutionalised sexism and racism that is at play in Gebru’s firing — that is there for all to see.

Really? Where is that for all to see? You weaken your whole case with this kind of casual reference: "oh, and she was also a black woman", so racism and sexism must apply.

It's a type of crying wolf that loses you more people on what is potentially the important issue at hand: her work at Google.

I really wish there were repercussions for this kind of casual libel, as it can be thrown around in today's climate without a second thought.

tantalor

It's tough to draw a straight line from "deep institution" (redundant?) to specific examples. It shows up more in background statistics than in anecdotes. When we're focusing on N=1, I'd rather see concrete complaints (and no, internal forum posts do not count).

cowpig

> and no, internal forum posts do not count

I don't understand this statement. In my mind, internal forum posts are among the most revealing forms of evidence about the culture of an organization.

tantalor

Too subjective; what looks like malice could just be basic misunderstanding.

Simple example: if I reply only to the original forum post, but my message is posted after yours, then you might assume I'm excluding you intentionally, probably because I don't like you personally; you get mad. This assumption is wrong and harmful to productive conversations. The simplest explanation is that I began writing my reply before you posted; no apology is warranted. Another possibility is that I'm simply a careless or excitable type of person who usually replies quickly before reading other replies; an apology is warranted, but it wasn't a personal attack!

jmeister

In older honor-based cultures, this type of accusation would lead to a duel. We need a modern equivalent.

anewaccount2021

A one-mile race. Most HN readers couldn't finish.

activatedgeek

The implosion argument in this article is solely ~based on~ triggered by Google's botched handling of the AI ethics researchers. To argue that this will implode the complete AI organization seems like a wild stretch. On a normal day, all the arguments padding the article would fall on deaf ears.

So, the answer to the question is no. Instead, AI ethics research will probably implode. AI research, probably not.

For AI research to implode, major AI research leadership would need to be seen leaving. And I am not talking about the corporate leadership but the research thought leadership, whose sole purpose is to shield the junior/senior researchers from corporate BS (and who are themselves well-known researchers).

Edit: It would perhaps be more appropriate to say that the botched handling of the AI ethics researchers was the sole trigger for the article.

mantap

Feels like the author has an axe to grind.

Research isn't about knowing all the answers or always being successful. Negative results are just as important as positive ones. The author does not even mention the recent protein-folding result.

DavidSumpter

I am the author. I find this "axe to grind" response interesting (there was another above). I am an independent researcher, a professor at a university. And I have absolutely no reason to be disappointed with Google. My knowledge of this subject goes considerably deeper than what I write in the article, but it is important to get the points across in an easy-to-follow manner; thus I use certain literary devices. But I think it isn't fair to describe my opinion as an "axe to grind". I might (of course) be wrong, but there is agenda.

jfindley

I'm not certain that claiming deep knowledge of the subject and handwaving about literary devices is a terribly useful response when you write an article suggesting decline while completely ignoring a bunch of significant, more recent developments. That's not a literary device, that's cherry-picking facts to support a preconceived conclusion.

Paying more attention to other, more recent developments might also have helped you avoid writing a paragraph where you talk about DeepMind not being able to play games where it needs to perform multiple actions to obtain a reward - I think DeepMind managing to beat professional StarCraft players is clear evidence that this conclusion is questionable at best.

DavidSumpter

Ha ha *no agenda, I mean. ..... Maybe my subconscious has an agenda I don't know about :-)

marcus_holmes

Maybe you subconsciously want Google's AI research to collapse? ;)

at_a_remove

As I have said before, whenever someone talks about ethics and AI, I ask the question "Whose ethics? Ethics according to who?"

It's a fantastic chance to export your viewpoint on the entire world when this ethical AI begins to underlay various services. For example, "Members of Group X are inferior/should die" being unethical while "Members of Group Y are inferior/should die" getting past the AI ethics is a great way to slide this past a ton of people. No need for argument, it just comes pre-censored on the commenting system.

AI, as a field, is going to have to face a painful question: what if we get answers we do not like? I'm Irish enough by extraction, so I'll use it as a self-denigrating point ... what if AI found that the Irish, just by genetics, did actually tend toward alcoholism and drunkenness? Would we accept that, or would we say "No, that's wrong. Go back to the drawing board until we get the answers we want"?

My guess is that we are going to look for the latter. AI is going to give us the answers we want, because that's how we are going to build it. AI won't be constrained by ethics so much as it will by the discovered truths we are unwilling to accept, whatever they may be. And that means there's a space for people with a thousand tiny axes to grind to be employed. It will be a chance to shape what "truth" in the Orwellian sense will mean.

prof-dr-ir

As the author admits, the headline is a clickbait question.

Indeed, the article highlights some issues with AI research, in particular those which can lead to ethical problems when AI methods are implemented in consumer products. These are by now well-known and important issues and people should find a way to resolve them.

Then, in its final paragraph, the article suddenly claims to have answered its title question in the affirmative! Am I alone in thinking that the issues, valid as they may be, do not obviously spell doom for the entire program?

ppod

This article is correct about underspecification, but completely dodges the argument on language and understanding, drawing the hackneyed, shallow distinction between "pattern matching" and "understanding".

How do you define "understand"? People just use the word "understand" to fill in for the magic stuff that they say humans can do but machines can't. They also sometimes use "symbolic reasoning" that way, but the best work on machine understanding and symbolic reasoning is done by DeepMind:

https://arxiv.org/pdf/2102.03406.pdf

The examples in the original article are about truth, not understanding, mainly because we don't have anything approaching a formal definition of 'understanding'. But if anyone does, it is probably the deep learning community, where conferences in the last 3-4 years have had hundreds of papers carefully examining the nature and structure of the knowledge encoded in these kinds of systems.
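
As a concrete illustration of that line of work, one common technique is the "probing classifier": freeze a pretrained model and train a simple linear classifier on its hidden states to test what information they encode. A minimal sketch, assuming the Hugging Face transformers and scikit-learn libraries; the model name and the toy task are illustrative assumptions, not taken from any particular paper:

  import torch
  from transformers import AutoModel, AutoTokenizer
  from sklearn.linear_model import LogisticRegression

  tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AutoModel.from_pretrained("bert-base-uncased")
  model.eval()  # the pretrained model stays frozen; only the probe is trained

  # Toy probe task: is sentence sentiment linearly decodable from [CLS] vectors?
  texts = ["a wonderful film", "an awful film", "truly great", "truly terrible"]
  labels = [1, 0, 1, 0]

  with torch.no_grad():
      enc = tokenizer(texts, padding=True, return_tensors="pt")
      cls_vectors = model(**enc).last_hidden_state[:, 0, :]  # one [CLS] per sentence

  probe = LogisticRegression(max_iter=1000).fit(cls_vectors.numpy(), labels)
  # High probe accuracy suggests the information is encoded in the representations.
  print(probe.score(cls_vectors.numpy(), labels))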

jrootabega

Yeah, tech and AI is pretty unpopular and they'll probably have a hard time finding people who want to work there since the entire planet is now independently wealthy

geodel

May be "AI Ethics" research will be affected and I believe this is good thing. After reading David Graeber's Bullshit Jobs, He claims that administrative jobs in universities and managerial, duct tape IT jobs in industry as examples of it. Now it occurred to me that this AI ethics research is just one more example of proliferating bullshit job. Just recycle same idea and conclusions thousand times and you have years of research and hundreds of papers generated out of it.

And a mutual admiration society of university departments, twitter fans and these type of researchers in tech companies keep everyone in high esteem as world leading authorities.

snicksnak

Every time critics try to lump conspiracy theories, LGBT representation, and fake news together just to make a general point, it directly puts me off. I think that's mostly a cheap shot, and lazy.

I'm out of the loop on the whole Gebru situation; from what I know she was researching ethics with respect to AI, so I don't really get the whole "novel idea/work" part in the last paragraph. I often get the impression that such critics never see the rapid developments in AI as progress as long as they don't cover the topics they would like to see in focus. It will never be good enough and will always be a concern because of X.

throwaway316943

Nice try, but still a miss. Building assumptions into NNs won't work for anything complex like ladders and keys, which according to this author are built in to human brains? Please. Yes, NNs are still in their infancy, but they are clearly foundational to general intelligence. They will probably require many additional discoveries about how they need to be connected, and maybe even a few updates to the model of the neuron, but they aren't going away.

Secondly, of course a language model is going to parrot back the data it is trained on; that's a major goal, i.e. given a giant dataset, answer these questions. It's also how a lot of casual human conversation works: we generally just parrot back things we've read or heard. The interesting parts are the new syntheses that human brains come up with, but even those are standing on the shoulders of the dataset, so to speak.

You'll never get a truly neutral view of the data. I don't even know what such a thing would look like; maybe just pure ignorance of the data could be considered neutral? Biases, aka heuristics, are a core part of learned intelligence. They can of course be flawed or entirely incorrect for a given environment, but they serve a purpose, and you can't do away with them or the ability to form them based on observed data. You can optimize them for specific goals, but you can't get by with just one.
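
A toy illustration of that parroting point, in a few lines of plain Python: even the simplest statistical language model reproduces fragments of its training text, because that is exactly what it is optimized to do. A deliberately minimal sketch, not a claim about how GPT-style models work internally:

  import random
  from collections import defaultdict

  corpus = "the cat sat on the mat and the dog sat on the rug".split()

  # Count bigram transitions: which words were observed following which.
  follows = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      follows[prev].append(nxt)

  def generate(start, n=8):
      words = [start]
      for _ in range(n):
          options = follows.get(words[-1])
          if not options:
              break
          words.append(random.choice(options))  # sample an observed continuation
      return " ".join(words)

  print(generate("the"))  # e.g. "the cat sat on the rug": stitched-up training data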

c-cube

> Please. Yes, NN are still in their infancy but they are clearly foundational to general intelligence.

This calls for a big fat "citation needed". What we call neural networks have very little to do with actual neurons, so I fail to see how this is supposed to be "clear". And it's been in its infancy since the sixties, really?

nonameiguess

Technically, the perceptron was developed in the 50s, and it's kind of hilarious to read the original press release which claimed it was going to be conscious of its own existence.

throwaway316943

I’d say it’s clear since we continue to make significant progress in replicating the behaviour of biological NNs, with the main limitation being the amount of parallelism we can throw at it. If you read closely, I stated that our model of the neuron may require modification, but the basic idea is correct. Our implementation of artificial NNs is in its infancy; we still don’t fully understand how to structure these networks, and the way we feed them data is nowhere close to how biological NNs get theirs from the environment, but the current SOTA is necessary because we don’t know how to build something capable of operating in the same way. We still have a lot to learn. We’re competing with hundreds of millions of years of evolutionary exploration using our fairly rudimentary understanding of biology. 60 years is nothing, especially considering progress is not inevitable and this area of science has major boom and bust cycles where not a lot gets done.

aardvarkr

What about electric cars, since they've been around for 180 years? Is it wrong to say that EV tech is in its infancy? We're seeing significant innovation in electric vehicles and there's so much room for it to continue improving.

Technology goes through periods of explosive innovation as well as periods of very little growth. NNs were dead until they weren't, and now they have been having quite the renaissance ever since 2012. Currently they seem to be getting us closest to general intelligence, and I'm excited to see where they go from here and whether anything else comes along to supplant their place on the throne.
