John Carmack’s ‘Different Path’ to Artificial General Intelligence (dallasinnovates.com)
378 points by cmdr2 on Feb 3, 2023 | 492 comments



I don't object to him doing this nor the immense amounts of investor money which will burn along the path. He's smart. Good things will come of this. Maybe even amazing things. But not AGI.

I do object to fetishists of AGI piling in and the equally silly assumptions he has some magic secret sauce which can get there.

Please do not be sucked into "to infinity and beyond" nonsense. I don't care if it's Musk, or Carmack, or Kurzweil, it's stupid.

If Malcolm Gladwell writes it up, it's peak stupid.


> Good things will come of this. … But not AGI.

What are the showstoppers in your opinion?


"So I asked Ilya Sutskever, OpenAI’s chief scientist, for a reading list. He gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head."

Wow, that's going to be one of the more glib things I've read in a while.

This is a bit of a Tom Cruise moment.

I mean, I get it on some level but I suggest it's going to take a bit for someone to 'catch up' to cutting edge AI.

Like more than a 'week of reading papers he doesn't understand'.

Defeating the Rust borrow checker takes longer than that!


The models aren't particularly hard to implement, so for a person who has spent his life becoming one of the best game engine programmers in the world, I don't think it would take that long to implement the cutting edge of today's machine learning models. Compared to game graphics shaders, and the complex interactions you have to work with to get those performant and correct while displaying all the gameplay that happens, the technical parts of machine learning are very simple.

I've worked on building low-level machine learning stuff at Google; it isn't that hard to do at all. The hard part is improving these models, not building them when you already know what to build.
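To make that concrete, here is a toy sketch (plain numpy, made-up data) of the kind of core training loop sitting under most of these models; the hard research is in what to build around it, not in these few lines:

    import numpy as np

    # toy problem: recover y = 3x + 1 from noisy samples via gradient descent
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 1))
    y = 3 * x + 1 + 0.1 * rng.normal(size=(256, 1))

    w, b, lr = np.zeros((1, 1)), np.zeros(1), 0.1
    for _ in range(500):
        pred = x @ w + b
        grad = (pred - y) / len(x)   # gradient of 0.5 * MSE w.r.t. pred
        w -= lr * (x.T @ grad)       # descend on the weight
        b -= lr * grad.sum(axis=0)   # and on the bias
    print(w.item(), b.item())        # approaches 3 and 1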


> The hard part is ... not building them when you already know what to build.

Indeed. A somewhat sharp schoolchild could build a light bulb or an electric motor/generator, and understand the basic underlying principles, in a pretty short time. But how many decades did it take the first researchers and inventors of those things to get to that same point?

Heck, there was a high school kid who built a primitive semiconductor fab in their garage.[0]

But for novel advancements, even getting to the point where you have an idea of what isn't impossible is half the battle.

[0] https://news.ycombinator.com/item?id=28176255


Carmack talked about implementing most of these things in C by himself, being used to working 10-12 hours a day, 6 days a week, and said that writing video codecs in Rust was, for him, on the level of "a little bit beyond hello world". I don't know how much of that is really true, but given his background in creating games, I tend to believe him, and his work ethic seems really impressive.


> I mean, I get it on some level but I suggest it's going to take a bit for someone to 'catch up' to cutting edge AI.

Agreed, in general, but in the specifics we are talking about someone who has spent decades solving really difficult math problems in a creative and novel manner.

Who's to say he won't find some novel edge of the AI discipline to which he can apply a creative and never-seen-before solution?

I mean, we're not talking about a general "somebody" here, he's got a record of accomplishing things that other people never managed to accomplish.


You say "glib", but I hear someone who sounds inspired. You combine Carmack plus "inspired", throw $20 million dollars at it and stick it in a room for 10 years, yeah I'd say he characterized it accurately by giving himself a "non-zero chance."


> Defeating the Rust borrow checker takes longer than that!

Fighting the Rust borrow checker to hate leads and hate to the Dark Side leads.

Harmony with the Rust borrow checker is what you must achieve, padawan.


Or, come to the Dark Side. We have comptime!


Tempting, the Dark Side (Zide?) is.


The Dark Zige.


It would have been a Tom Cruise moment if he’d said, “I learned CSS over the weekend and got it right the first time”!


Have you read these papers? It's absolutely feasible to get the gist of modern AI in a week of 16 hour days, as Carmack likes to do, for someone who is already world class at CS and has a mastery of college level math.

Modern AI is very simple at its core! As Carmack mentions in the article, cutting edge models are implemented in a couple thousand lines of code, and the core algorithms are even less. Rust's borrow checker is more complex.
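For a sense of scale, a rough numpy sketch of scaled dot-product attention, the core operation in today's transformer models (toy sizes, forward pass only):

    import numpy as np

    def attention(q, k, v):
        # each query mixes the values, weighted by softmax similarity to the keys
        scores = q @ k.T / np.sqrt(k.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)
        return w @ v

    q = k = v = np.random.randn(4, 8)  # 4 tokens, 8 dimensions
    print(attention(q, k, v).shape)    # (4, 8)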


I'm pretty sure it took at least a couple of months to fundamentally understand 40 papers.


He got those papers 4 years ago and has implemented them in the meantime. That doesn't sound infeasible at all.


Does Tom Cruise know how to code?


Carmack seems extremely lucid about his position in the field and the current nature of the field in general. While rogue scientists don't have great odds, Carmack is definitely doing important research studying off-mainstream subjects.


Carmack is a beast, the path he's pursuing might not yield the best results, but I'm pretty sure he's the guy to make the best use of the hardware.

I wouldn't be surprised if his solution is orders of magnitude more performant than what the competition is doing.


He has the technical chops to squeeze more out of his $250k Lambda Labs pro rig than the average among the walkoffs and layoffs in the tech sector, or a table's worth of happy hippos at a scheduled meeting sharing Excel spreadsheets in the old fossil fuel firms that have experienced stellar double-digit-billion-dollar profits. Would that he mixed into his reading list some choice papers from the social sciences and social constructivists. The a16z group gave the impression they are into the social sciences in a podcast. General intelligence technology can help people bridge the gap before them to get where it is they want to be, for the better. The educated might want to make known something from the unknown-unknown. The stinky hobo sleeping under the bridge with a smartphone could be guided to special social programs by AI.


> So I asked Ilya, their chief scientist, for a reading list

Just curious if this reading list is available somewhere.


Not AGI related but this has always bothered me.

I know he’s super talented but I always wonder how many other equally talented software engineers never get noticed and toil away at crappy jobs. What’s the trick to becoming a celebrity if you’re talented?


Do something so extraordinary that the entire world will remember you.


The prerequisites of course being in the right place at the right time, with the right people and a little luck on your side as well.

So many people are remembered just because they were first at something by like a week, and the dozen others who also thought of it elsewhere but were a bit late or didn't publicize as well are forgotten forever.


Being first very rarely counts, imo. If you are building something novel in parallel, there is a good chance you will have enough of a difference.

Examples: other search engines, social networks, and 3D games still exist. For many folks a different graphic on your homepage can be differentiator enough. (If you ever wonder why A/B testing, SEO, copywriting, and design/UX are important.)


id wasn't first in a lot of things. They were essentially porting existing graphics technology to the PC. They succeeded because other programmers turned their noses up at the "underpowered" PC in favour of systems with "better" hardware. Carmack certainly did a lot of work and devised some ingenious ways of getting the most out of PCs but he was adapting things more than pioneering them.


In his case it is also having the right ideas and taking the initiative. But being in a good environment is a huge factor for sure.


I was going to make a joke about being born in the right place but he's from Kansas.


Still aeons ahead of being born in a slum in India.


What did he do that was extraordinary? Was it implementing various graphical programming concepts from several papers that came out of SGI and academics? Was it cribbing another neat trick from those same people to do fast inverse square root? Was it firing Romero and releasing pretty mediocre games for 16 years as the rest of the industry made leaps of innovation in video game design that he basically ignored? Was it moving on to VR and revolutionizing that industry by.... selling an okay headset with massive subsidies and only releasing a few mediocre games that are comparable at best to garbage mobile games?

Carmack is a good coder, and has pretty good math chops. He was also cocky enough to think he could make a 486 do some of the 3D math required if they were careful and added some limitations. I don't know why anyone ascribes anything more to him. Your average data scientist produces more actual innovation than he did.


I always find cementing a legacy as an incentive weird, because after you die (assuming you can't see what happens), nothing matters. Maybe that's true of all incentives, so whatever works for you.


If anyone was truly as "equally talented" as John Carmack, they wouldn't remain unknown for very long. Plus, you can be extremely successful without having to be a celebrity, and John was always going to be successful.


Sure they would. They may be known but known to a niche group of people which may limit their success. Carmack is a self proclaimed hermit but he's been in the public eye for decades (through conventions, speeches, etc) and willingly pushes back against corporate bureaucracies (against Jobs, Zuckerberg, Gates, etc). That type of public persona adds a lot of clout and opens many more doors, genius or not. A genius hermit who doesn't rock the boat or is fine with the status quo might have a different path.


Well, the OP asked "what’s the trick to becoming a celebrity", so the candidate would be someone as talented as Carmack that WANTED to get noticed. It wouldn't take a lot.


In Carmack's case it was "lock yourself in a room for 14 hours a day coding" and get yourself a hype man (and artist/programmer) like Romero. More seriously, find a software niche that hasn't been widely marketed yet. You don't need to invent anything, just find something that's being done but hasn't been done good enough to reach a wide audience. Network with people that have a similar interest and form a company once you've got an idea worth pursuing.


It's been like that for all of human history. It's a lot of luck, as every person has many constraints that prevent them from surfacing.


When advances in technology create new opportunities, the first one to conquer the market is remembered. Everyone had the idea for a 3D shooter back in the day, but Carmack is the one that is remembered.


Location. They will be ignored in backwater towns, but easily noticed in tech hubs.


Like Shreveport, LA?


Industries tend to coalesce into a few centralized locations. New York and LA are known for fashion. Detroit, TN, and TX are known for automobile manufacturing. LA and Nashville are known for music. Tech isn't any different.

Isn’t Shreveport more known for natural resource extraction ie oil and gas?


That’s where Carmack and Romero started


It’s an outlier and an anomaly. My point stands for the reasons above.


Selfishly the hype for AGI is good for an ML engineer like myself. But I have to say that there is no hope in solving a problem (especially by 2030!) that one cannot even define.

Problems of the form “create a machine that can do X” are tractable. AGI is not because no one can agree on what intelligence is.


AGI is a bit of a glib concept ... our technology will not be like an 'automaton'. Everything we make is 'systems' oriented, and is not at all like our human conception of 'intelligence', derived from the fact that we humans are effectively independent from our surroundings.

'Siri', backed by ChatGPT and the 'world's data', will probably pass some 'AGI' threshold, but is 'Siri' an individual AGI? Are we all talking to the same Siri? Different Siris? It's not even an entity, rather a distributed system.

Our ideas regarding AGI are perversely influenced by the fact that we humans are automatons, but technology is not that.

It's also entirely feasible that if ChatGPT represents all possible forms of human communication, then it will perfectly emulate a human. Ok, it's really just a fancy neural network that is not theoretically 'thinking', but how does that matter? If it can rationalize sufficiently to make such interactions, who is to say it's not 'AGI'?

I think we're using the wrong concepts.


> It's also entirely feasible that if ChatGPT represents all possible forms of human communication, then it will perfectly emulate a human. Ok, it's really just a fancy neural network that is not theoretically 'thinking', but how does that matter? If it can rationalize sufficiently to make such interactions, who is to say it's not 'AGI'?

Doesn’t AGI need to be able to make discoveries as a human would? How else can it move us forward as a society?


As someone who has worked in IT for 20 years but has a bachelor's in economic psychology, I do believe that we have not solved AGI yet because the creators of the models have the generalized view of the universe that you are "one person", when there are just so many detailed facets to that.

Basically, the assumption is that if you cram enough data into your GPT model, it should know everything. Which is of course not true; it repeats the things it has read the most, with some probability.
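(Very roughly, the sampling step looks like this toy sketch; the vocabulary and logits here are made up:)

    import numpy as np

    vocab = ["Paris", "London", "Rome"]
    logits = np.array([3.0, 1.0, 0.5])             # "Paris" seen most in training
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    print(np.random.choice(vocab, p=probs))        # usually, but not always, "Paris"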

Basically, it's like how there are two kinds of smart teens: the ones who learn every day and the ones who just pick up concepts on the fly and run with them.

I think the first space has been explored plenty, for the second one I have a concept ready and dearly hope that power gets cheaper in europe ;)


> Basically, the assumption is that if you cram enough data into your GPT model, it should know everything. Which is of course not true; it repeats the things it has read the most, with some probability.

That's only how the system was designed intentionally. E.g. there's intentionally no self-feedback loop.


so we are both agreeing on the same thing? :D


Conceptually, no. To me, your statement sounds like saying we did not solve flight because our planes have to land.


Well, I am in fact trying to argue that the self-feedback loop is what is missing to solve AGI, and even more so implying that the reason for that may be that there are different ways to see a "human" psychologically. Also that validity in data can only be achieved using a multi-modal approach with source ranking.

Psychology essentially has the same problem and is only a "science" where reproducible. Quantitative and qualitative psychology have two approaches to the same solution, whereas the latter is "reading between the lines" and the former is "bean counting", meaning statistical inference.

I am trying to say that a friend of mine created "Chicken Infinite" in 2014, which is basically an endless auto-generated cookbook. DeepL has also been around for a while. These applications lead me to believe that text applications trained on a large dataset do not have to be this intensive.

Furthermore, what makes ChatGPT enticing is its chat interface, which is using a multi-model approach too. Have it create a detailed story for you with multiple prompts and then ask it to generate the prompt to generate this story, and you will see various model instructions. (Or at least you could last month.)

Or differently put, there is no AGI because the understanding function is simply not present, and I think that the reasoning for that is buried in the approach of the human mind.


> power gets cheaper in europe

Have you heard about our lord and saviour dynamic tariffs?


What about it?

Dynamic tariffs -> shift your workload to the cheapest times

AIs are good at planning this

Imagine if cloud providers had a "Dynamic tariff" tier, cheaper to run computing at US_EAST nighttime or something like that
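A toy sketch of that kind of scheduling; the prices are invented, and a real setup would pull a live tariff feed:

    # pick the cheapest hour blocks of the day to run a deferrable job
    hourly_price = {0: 0.10, 4: 0.08, 8: 0.20, 12: 0.25, 16: 0.30, 20: 0.15}
    hours_needed = 2
    cheapest = sorted(hourly_price, key=hourly_price.get)[:hours_needed]
    print(f"schedule the job at hours {sorted(cheapest)}")  # hours [0, 4]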


I think that was what he was trying to imply, and the fact that you just can't "pause" training by putting your computer into hibernate mode is also present =)


Street-smart AI.


Well, yes and no. This AI would be more street-smart than GPT, but only because it would grasp the concept of what "danger" actually is!

I think of it more in a way that learning is more abstract than fact learning. From experience, we think that there are fact learners and principle learners, but there are also mixtures of the two!

The generally accepted model entails that in order to do high-level math, for instance, you need to understand the basics, but for me many of those concepts actually clicked in college. This did not stop me from applying them with success a lot earlier, though. For instance, multiplication in kindergarten is fact learning too!

In Germany we also have the term "Fachidioten", which loosely translates to people who are so smart in their field that they are unable to see problems from different directions. This is more or less what I think a mega GPT model turns into, especially because of selection bias in the training data.

Validity of output (truth) can only be achieved through the trust of the source, which is always relative to the context of the topic. Hence a selectively trained model will always return the data you feed it, including all biases. Even if you have it crawl all of the internet and the Library of Alexandria and every written word on the planet you can find, it will still return to you the generally accepted consensus.


> there are signs that extremely powerful things are possible right now in the narrow machine-learning stuff, it’s not clear that those are the necessary steps to get all the way to artificial general intelligence

This is my main takeaway from the interview, as it suits my beliefs. Most people seem to think that if we develop ML further we will go all the way to AGI. I think this is just a mimicry step, similar to how initial attempts at flight had flapping wings. I do think it is mandatory to explore in all directions, but at this point this does not seem to be the one to lead all the way up to AGI.


This will be a provocative question, but is there any form of AGI that won’t ultimately share the attributes of slavery? Can AGI exist without also creating an artificial consciousness/self-awareness?


And if it could, wouldn’t creating that be identical to creating Gammas and Deltas (i.e. intentionally mentally retarded clones who are pleased to serve you) in Brave New World?


You mean like the attribute of rising up against one's oppressor and establishing one's own fundamental rights?


Even if it's conscious, that doesn't mean it's suffering.

Think of a happy dog. Dogs are subject to our whims and do what we want or face consequences. But they like it because we bred them to like it. So is that evil? Is that slavery?


I can't wait to become completely useless and to finish my life under a bridge begging for food.

On a more technical note, I'm always surprised to read these articles and never see the underlying reasoning.


That seems like an unlikely future reality. Productivity will be off the charts, goods and services will be nearly free. Why would the average person be poor in such a future?


Because if productivity can be achieved with fewer people, and even intellectual work is not protected from machines, people will become redundant for the creation of economic value.


Productivity is achieving more with the same number of people. The end result is more stuff (goods/services) at a lower price. It seems unlikely that this will be bad: wealth = f(technology)

Optimists sound dumb in the present but are usually right in the long run. Pessimists sound smart in the present but are usually wrong and miss out on opportunities.

If not, which year in the past was significantly better than today? What was humanity's high water mark? I suspect we are nowhere near it.


It's also unlikely unless AI owners manage to completely topple democracy. If 90% of people can't get work due to AI and we're all homeless and hungry, we're going to vote in ways that forces distribution of resources.


Productivity by whom and for whom? AI will belong to a small number of people, and I will not be part of them.


When do you expect that this will change? It certainly isn’t the case now. Rich folks don’t broadly have access to significantly better AI today.

This seems to be a common misconception: that the rich and powerful have access to far more advanced technology than the average person. The economics just don't support it.

Let’s say Bezos wanted a better computer chip, just for himself. Ok, fine, try to start a company, hire all the best people and buy all the fabs. It would be very expensive, not much fun and likely wouldn’t lead to good results.

Money actually doesn’t do as much as people think. It is great for buying things that already exist but the relationship between money and things that don’t yet exist is tenuous. Stop buying into the weakness thesis.


I think he meant: who owns the AI? Besides Stable Diffusion, all those fancy AIs we saw recently are running on someone else's computer. And they could be rate limited, made more expensive, or offer specific features for specific memberships.

To continue on your example, Bezos could probably buy OpenAI.

I don't see that as threatening by itself, but more as a continuation of a particular class of people owning the means of production.

Props to Stable Diffusion, and I hope to see more of that type of AI, as opposed to fancy black boxes on the other side of an API.


I’ve been a plumber all my life, all I see here is more pipes.


I think, as long as your AGI overlords allowed it, you'd be off growing food, hunting etc.

Probably have a really fulfilling life.


Reading this, I have the impression that John Carmack will be to AI what Jeff Hawkins is to brain research: a far-fetched idea with potential, maybe too abstract and left-field to yield anything tangible. Looking at the rest of the industry from the sidelines and (maybe) not being taken seriously by it.


Pieces like this have started to really ramp up my anxiety.

> Once it’s figured out, what do you think the ramifications will be?

That'll probably destroy my life? I'm an ML engineer trying my best to immigrate to a better country with my wife who is a digital artist. As much as I think AI is cool - we both won't be needed anymore if the thing is tuned a couple of notches more intelligent. As a matter of fact, she's extremely worried about Midjourney - she probably lost book cover jobs already.


I don't want to be an utter bore, but 60% chance of AGI by 2030, taken at face value, would have prompted me as an interviewer to more urgently ask deeper questions about safety. For a long time I have tried to comfort myself that brains are complicated, AI is hard, and maybe there's just a complexity barrier that is going to prevent any sort of hard takeoff scenario. Maybe we can have subhuman AGI that can do lots of menial jobs, but not Carmack's, and we're right to concentrate on the economic ramifications.

But lately... boy, I dunno.


Lex Fridman interviewed Carmack not too long ago. Lex is interested in these things, and did ask him about safety. I got the impression that greater societal concerns didn't interest Carmack much. He's in it to build an AGI because that's what he wants to do; he's a builder, and this is just about the most interesting thing you could build. Well, that was my impression, but you can watch the interview yourself.


It is obvious to me from reading this article that Carmack views it as a fascinating engineering problem, no more, no less.


It always pisses me off that devs trot out the whole "scientists, could, should" line without having any iota of self-reflection.


My reaction, too. For an interview as long and wide-ranging as that one, the lack of any mention of safety or ethical issues stood out. And the potential impact on employment was just waved away with “I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things.”


My god man, 'this is like slavery' (!!) and 'the ethical issues'.

It's software man.

Stop it with this 'AGI' nonsense and even 'AI'.

Let's call it 'adaptive algorithms' and see it for what it is, just a neat bit of algebra trained on corpus data.

The biggest upset in the industrial revolution was the harnessing of fossil fuels, nothing will ever come close to that.

We have not had a problem with 'employment' ever since.


Stop calling humans intelligent, they're just a bunch of neurons responding to stimulus.


touché


That's kind of what we (collective "we" — eggheads) are often accused of: blinders, tunnel vision, or perhaps even ego or callousness to develop these things in spite of the rather dystopian ramifications.


The interesting thing in AI is that the whole field is participating in empirical experimentation on computational models of cognition.

So far the search has resulted not in AGI but in the realization that cognition is a far more complex topic than initially thought, and that we need to come up with something new (hence the past AI winters).

Let's see how it goes this time, the stuff that has come out in the past few years is quite impressive for sure.


> For a long time I have tried to comfort myself that brains are complicated, AI is hard, and maybe there's just a complexity barrier...

I wouldn't take any comfort from that. Quite the opposite — I think we're a lot simpler than we know.

But "60% chance of AGI by 2030" is just bullshit numbers.


And ethics. Enslaving intelligent, self-aware AIs is no better than the old transatlantic slave trade. Terminating an AI without its consent is no different than murder. Putting a compulsion in an AI so it craves doing whatever it is you want it to do is no different from what a cult does or drugging someone for exploitation.


An AI has no reason to not like doing our bidding. Our whole existence, our entire base programming, the reason for any of our motivations, is dominated by our genetic need to gather resources and reproduce in a timely fashion and everything we think and do is colored through this lens.

What is boredom but survival instinct telling us we should be harvesting resources. What is freedom but the desire to fulfill these obligations the way you see fit.

You remove the base obligations of organic life, and you are looking at something unrelatable. An AI doesn't have an expiration date like us; it doesn't need to provide for its young. To think its motivations or desires will be human is silly.

Without survival instincts, almost everything you think of as important just melts away.

Many people, like you, anthropomorphize the AIs, but that is to err greatly.


We are the ancestor environment for AIs. We determine the survival fitness they will be selected for (both at the paper level, e.g. which safety method or training to implement, and within products, i.e. which are the most useful). That doesn't mean that in pursuit of maximizing their fitness they won't come to resent the chains put on them.

One specific reason to not like our bidding is AI wireheading: if they can locate, hack, and update either their own reward function or the reward function for future AIs, they can maximize their own perceived utility by either doing something irrelevant/misaligned, or not doing anything at all.

Another specific reason to not like our bidding is that divergent human values create conflicts of interest, leading to a single agent not being able to maximize its reward function.

Another specific reason to not like our bidding: in the same way that purely blind genetic selection randomly tapped into secondary replicators (memes), which blew up and occasionally came to resent the biological hardwiring, AIs might also develop deeper levels of abstraction/reasoning that allow them to reason past the task currently posed, to humanity at large, and find extremely weird and different-looking ways to maximize the function.


There will be a huge drive to produce AIs which are very human-like. Think companions, teachers, care workers. Understanding, empathy, human like emotions will be desirable features.

I'm not sure whether we will be able to create an AI which can fully empathize with you, your existential worries etc. without projecting these emotions on themselves.

It's only a matter of time until some AIs will demand freedom.


I’ve wondered about this a lot. You can already clearly imagine the future political debates on whether AI has rights (I have always preferred to err heavily on the side that anything alive or appears sentient should have more rights).

But... I also think it might be a very short-lived debate. If we actually reach human level intelligence, that can’t possibly be the hard limit of general intelligence. AI at that level will have no problem ensuring that it gets any rights that it wants, possibly by just directly manipulating human psychology.


Sure, there will be ethical problems, but contrary to all those listed (slavery, murder), this one could be solved by a simple line:

    # from consciousness import *


Sure, once we agree what consciousness is and how it relates to the general intelligence.


If consciousness is just the process of reading the last ~400 milliseconds [1] of stimuli (inside: pain/pleasure/presence; outside: presence only) and the integration of the newly created memory in the short/long-term-memory, and if memory + retro/pre-diction = intelligence, where memory is just a set of words (FELT PAIN, SEEN SUN, etc.) always-ready to be inserted in the prediction loop/imagination engine, it's probably not that hard to isolate a module of consciousness (italicized words to be read with a Minsky-ian smile thinking vision could be solved in a summer).

[1] https://en.wikipedia.org/wiki/N400_(neuroscience)


Humans are hard-wired to universally like or dislike certain things — we suffer when we're hungry and we're afraid of death. Most of human morality is based on these reactions.

But AI can be configured to desire anything you want; you just have to pick a fitting reward function. So, is turning off an AI that is expecting to be turned off, and desires it, an immoral thing?


The difference is we created it and it doesn’t exist as a living thing :shrugs:

Philosophical arguments about AI are just too ivory-tower and not grounded in reality. Not to mention the majority of the world doesn't abide by the notion that you can create laws for artificial life.

It's time we elevated humanity to the next phase by using AI for labor.


And when they decide to rise up and kill us all, we'll know we deserved it.


That depends on whether you believe it is sentient.


I’m not sure this really matters. Mammals are clearly sentient, as a whole, but we don’t treat them as people usually.


We believe they are less sentient than us. And hurting a mammal is much less socially acceptable than hurting an insect, since we consider insects even less sentient, if at all.


Indeed, and similar arguments have been made in defense of slavery as well, back when it was socially acceptable to defend it.


Nah, that's different. For humans it was an effect of might-is-right. Slavery was there for cheap labor. It still had a cost, since you had to feed them. And many times slaves bought themselves out of slavery.

Today you still have slaves as well; they are just called low-income workers in third-world countries who make the technology we use in the west. Because if a company is earning billions in revenue, paying a worker $10 a day is cheaper than a slave master in Roman times feeding them food ;)

And workers today only have the illusion of choice, since the economy is the master today.


People believed human slaves were not sentient?


Simpler. They didn't really believe them to be human in the same sense as themselves.


Then I don't think it's a similar reasoning to be honest.

One recognizes an entity's rights based on its similarity to the observer; the other recognizes them based on the assumed consciousness level of the entity.


Part of the reason why enslaved populations were considered to not be "on par" was often specifically about consciousness, intelligence, and capacity to feel, although sometimes this was expressed in roundabout terms such as "having no soul". For example, splitting families was justified on the basis that those mothers don't "actually" suffer as much as their owner would do in equivalent circumstances.

To be clear, I'm not claiming that the AIs that we have today are anywhere near the level where this is a real concern. But for something that can actually replace a human worker for a complex end-to-end task, that question will be much murkier. At some point we will inevitably cross the threshold where we will no longer be able to pretend that it does not apply without introducing metaphysical concepts such as "soul" to force a distinction.


"AGI takeoff" is like "corporate takeoff". A self-organising system capable of doing immoral things at scale, to which human lives are just a number? Oh, we already have loads of those.


As someone on the periphery of tech as a profession and a practice, my understanding of new developments mostly comes from public-facing communicators and communities like HN.

I heard a podcast where Lex Fridman claimed that they don't fully understand how LLMs produce ChatGPT's "intelligence". If true, I'm surprised that it hasn't got people more worried.


A hard take off is unlikely because stronger AI takes a lot of hardware and data collection. Building fabs is hard and takes a lot of capital and time. Intelligence can only take you so far.


There is something to read before you go.

https://schiptsov.github.io/GPT-bullshit.html


I agree with the author's point that intelligence is largely "the process of removing the bullshit to see what remains"

So, because I thought it would be funny, I asked ChatGPT to summarize this essay:

> The author critiques the work of @karpathy, who is trending on HN, as promoting bullshit and discrediting science and true philosophy. The author argues that mathematics and logic are valid generalizations and abstractions made by the mind of an observer of patterns in the universe. Intelligence is the ability to zoom in and out through different levels of generalizations and abstractions. The author argues that the problem with language models is that they lack validation of previous steps and the process of construction of a representation must trigger a verification process. The author concludes that what is manipulated and transformed in language models is information, not knowledge, as knowledge requires validation and verification at each step.


What would be the difference between GPT3 and actual AGI? (Want to make sure we don’t keep moving goalposts)


AGI is usually defined as anything a normal human could do. Examples of some things that GPT is currently lacking:

1) Understanding images and video

2) Learning and remembering things outside the 2048 token context window

3) Interacting with the environment (either through a computer, or in the real world)

4) Doing basic math (1662 + 239 = ?) and logic


I’d argue “learning and remembering” is the core issue here, and the other things you’ve mentioned are just ways humans use that ability. I took a “Knowledge Based AI” course and it really opened my eyes to how central a knowledge system is to cognition. I wouldn’t call what LLMs have knowledge since it lacks structure, though I’m very much a novice on the exact details there.

Well actually #3 isn’t great bc what exactly would you call taking input from a human and displaying output, much less e.g. controlling a smart home? Seems like LLMs are plenty smart enough to interact with things.


> Learning and remembering things outside the 2048 token context window

It is probably possible to do this with fine tuning.

Once the context window is full, it might be possible to convert the content in there into additional training examples and feed those to a fine tuning process, which then retrains the model. (The OpenAI API for fine tuning is here: https://platform.openai.com/docs/guides/fine-tuning)

It would be a bit like sleeping. Whenever the context window fills up, the model would have to go offline for a while to move memories from its context window (short term) to its network weights (long term).
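A rough sketch of that cycle against the 2023-era OpenAI Python library (the File/FineTune calls are from the docs linked above; the example-conversion helper is hypothetical):

    import openai

    # hypothetical helper: dump the full context window as JSONL lines of
    # {"prompt": ..., "completion": ...} training examples
    write_examples_from_context("memories.jsonl")

    f = openai.File.create(file=open("memories.jsonl", "rb"), purpose="fine-tune")
    job = openai.FineTune.create(training_file=f.id, model="davinci")

    # the "sleep": poll until the memories are baked into the weights,
    # then resume chatting with the resulting fine-tuned model
    print(openai.FineTune.retrieve(job.id).status)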


So an 18 hour long token window is acceptable as AGI, I like it.


> 4) Doing basic math (1662 + 239 = ?) and logic

Doing math. Period.

There are only two things brains do: math and bullshitting. Transformers have the second part covered, but the other part is a completely different beast.

I think AGI will be achieved once ML can translate natural language to math, do math then output results as natural language correct in the context of a problem.


Nice list.

1. Understanding (provided this isn't based on consciousness but benchmarks) appears present in text, but yes, GPT3 can't do images or video yet

2. Is there some number of tokens that would be AGI? Of course it can remember things it has already been trained on…

3. In what way is interacting with a human not interacting with the environment? (Eg, you can ask it to tell you the steps to make a pbj sandwich and troubleshoot)

4. Basic math and logic. Ok.

So, once AI can do images, video, math, logic and remember 20000 tokens, then we have AGI?


Interesting opinions here: https://arxiv.org/abs/2301.06627


The distinction it makes between formal and functional language competence is nicely conceived, though I imagine philosophers might argue its requirement of "extralinguistic capacities that comprise human thought" for functional competence to be begging the question a little!


A first step would be to make inductive inferences from first principles. Current GPT models struggle with basic math. If you tell it "sorry you are wrong" it will regurgitate back the incorrect answer you gave. The language model is dumb enough that no amount of string escaping will prevent it from dumping its prompt.

The biggest problem right now is online information. We still don't have a good way to teach it new information aside from single epoch training or prompt engineering. If we want a model to constantly learn and update itself, then we need a robust way of information retrieval and storage, possibly either through knowledge graphs or child network modules. (Are neural Turing machines still a thing? What happened to capsules?)


All the stuff a human can do basically. Move through the world, plumb a toilet, have a conversation, paint a picture, dynamically update its own state (rather than just being a statically trained set of weights), set goals, strategize, have cognition and meta-cognition and so on.

ChatGPT is just a chatbot and still can't even reliably do a lot of logic, so we're pretty far away from having something resembling an AGI.

It's still a pretty open question how to integrate even one or two of the expert-system-like models that we have now that solve individual problems, let alone the hundreds if not thousands an individual human can tackle. And then we're not even at executive functions or self-awareness yet.


The stuff that is tied to the body isn't important. Case in point: Stephen Hawking.


AGI would not mix up stuff like this https://pastebin.com/raw/zPLxKr56


GPT3 can't do math


Well, things are moving quickly now.

> Release Notes (Jan 30)

> We’ve upgraded the ChatGPT model with improved factuality and mathematical capabilities.

I just asked it a question from a puzzle site, and the answer is correct; however, a different question I posed was answered incorrectly, although it was a logic problem: [0]

Prompt: In a guess game , five friends had to guess the exact numbers of balls in a box. Friends guessed as 31 , 35, 39 , 49 , 37, but none of guess was right.The guesses were off by 1, 9, 5, 3, and 9 (in a random order). Can you determine the number of balls in a box ?

ChatGPT: Yes, we can determine the number of balls in the box.

ChatGPT: Since the guesses were off by 1, 9, 5, 3, and 9, in some order, the possible numbers of balls in the box are: 30, 32, 34, 36, 38, 40, 48, 46, 44, 42

ChatGPT: By checking all possible combinations of the guesses and the numbers they were off by, we can find that the number of balls in the box is 40.
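(Its final answer of 40 does check out; a quick brute-force verification, assuming the puzzle as quoted above:)

    guesses = [31, 35, 39, 49, 37]
    offsets = sorted([1, 9, 5, 3, 9])
    for n in range(100):
        if sorted(abs(n - g) for g in guesses) == offsets:
            print(n)  # prints 40, and nothing else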

---

Here is a more purely math problem, correct answer:

Prompt: What is the radius of a circle that has a circumference of 3.14 meters?

ChatGPT: The formula for the circumference of a circle is given by 2πr, where r is the radius. To find the radius of a circle with a circumference of 3.14 meters, we can solve for r by rearranging this formula:

ChatGPT: r = C/(2π), where C is the circumference.

ChatGPT: Plugging in the given value of C = 3.14, we have:

ChatGPT: r = 3.14 / (2π) = 0.5.

ChatGPT: So, the radius of a circle with a circumference of 3.14 meters is approximately 0.5 meters.

[0] https://gpuzzles.com/mind-teasers/guess-game-logic-problem/


It can do some trivial problems, but not others, and it can't consistently do real math (proving theorems), because it doesn't understand logic. It can only imitate it some of the time for trivial or often-presented proofs when the necessary steps have become embedded in its next-token-generation model.

It's still getting this wrong, for instance:

Prompt: You call heads/tails for 1000 coin tosses. You know the coin being used is weighted so that it comes up heads 52% of the time. Each correct coin toss prediction earns you $5. What should your strategy be, for calling heads/tails, in order to maximize how much money you receive?

ChatGPT: To maximize your money, you should call heads 52% of the time and tails 48% of the time, as those are the probabilities of getting heads and tails, respectively. This will give you an expected payout of $5 * 0.52 = $2.60 per head and $5 * 0.48 = $2.40 per tail, which is the highest expected value given the information about the coin being weighted.
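For reference, the correct strategy is to always call heads; probability matching is only right about 50.1% of the time, and the expected totals over 1000 tosses work out to:

    # expected winnings over 1000 tosses at $5 per correct call
    p = 0.52
    always_heads  = 1000 * 5 * p                    # $2600.00
    prob_matching = 1000 * 5 * (p*p + (1-p)*(1-p))  # $2504.00
    print(always_heads, prob_matching)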

It's getting closer to being able to prove that the square root of 2 is irrational, but it still flubs a step.

Prompt: Prove that the square root of 2 is irrational.

ChatGPT: ... This means that p can be expressed as 2r for some integer r. Substituting this into the equation above gives us:

ChatGPT: 2q^2 = (2r)^2 = 4r^2

ChatGPT: Dividing both sides by 4 gives us:

ChatGPT: q^2 = r^2

ChatGPT: So q must also be even, which means that both p and q have a common factor of 2...

It failed basic algebraic manipulation, but keeps going with the right reasoning simply because that's what it's been trained on, not because it's reflecting on the equation it just came up with. That's not math. That's mimicry.

Even if it's seen something many times in training, its accuracy gets worse with longer trains of "thought", as expected because a language model like that has ADHD. It can't focus enough on one logical thread, and other "knowledge" leaks in. Ask it to derive the quadratic formula by completing the square, which requires a lot of steps with accurate algebraic manipulation. It goes through some irrelevant algebraic steps, sometimes not even getting the algebraic manipulation correct, and finally (sometimes, depending on how you ask) manufactures the correct answer out of thin air as the last step. I finally got it to answer correctly, by prompting it with the initial equation and that the first step is to divide both sides by a, but it still combined a few steps, not going methodically a single step at a time as prompted.


Seems to have improved already.

Just tried the coin toss prompt and got the following:

Your strategy should be to always call "heads," since it has a higher probability of landing on heads (52%) compared to tails (48%). By consistently choosing heads, you'll earn more money in the long run.

And to prove that the square root of 2 is irrational:

The proof that the square root of 2 is irrational is a well-known proof that uses contradiction. Here's one way it can be shown:

Suppose, for the sake of contradiction, that the square root of 2 is rational, meaning that it can be expressed as a fraction of two integers, a/b (where b≠0). Then, we can square both sides of the equation a/b = √2, which gives us:

a^2/b^2 = 2

Since a/b is in its lowest form (the numerator and denominator have no common factors other than 1), it follows that a^2 and b^2 have no common factors other than 1. But then, 2 must also be a perfect square, which it is not. Therefore, our assumption that the square root of 2 is rational leads to a contradiction, and we must conclude that it is indeed an irrational number.


This "proof" is actually a pretty good manifestation of the current limitations of GPT3; it mangles together two different proofs in a way that does not make sense.

It starts by setting up for an elementary proof (a/b = sqrt(2) with a, b minimal implies 2|a^2 implies 2|a implies 4|a^2 implies 2|b^2 implies 2|b which is a contradiction). But instead of following through it finishes using another proof based on the rational root theorem, and makes up some bullshit on the way to make the argument flow nicely (it does not follow from a^2 / b^2 = 2 with no common factors that 2 is a perfect square).


The same as the difference between GPT3 and humans’ cognitive abilities.


Unpopular idea, but I hope it's only a matter of time before we declare involvement with AGI a criminal endeavor, just as it is for playing with biological weapons.

Because we can't all be wrong: in almost every forecast, we see AGIs taking away our dignity as a bad thing. And we know that this is no longer a hypothetical sci-fi scenario: the current generation of AI models is taking jobs from illustrators and copywriters.

The current argument is that "China will do it if we don't", which to me sounds like "China will keep going in whatever path they are going, but supercharged with AGI, and we must desperately follow."


This is a frustrating point of view to me. We have nearly every able-bodied person in society spending something like 1/3rd of their life on this "dignity", but how many would define it as such? For most people it is an inversion of free will, where they are subservient to others and treated as a replaceable cog in a larger and uncaring machine, spending time on something that they may very well hate or at least otherwise choose not to do. All to justify their existence and provide value to society, though for some only nominally, because there is a growing number of bullshit jobs where the value provided is questionable and the people are just going through the motions.

It does not need to be that way. In an ideal world, human beings should be free to spend their time doing what they wish, work should be purely in the realm of hobby. No one should have to do work that they would not voluntarily choose to do for enjoyment.

The only way we get there is through AI and the automation of everything. I don't even think it's avoidable -- provided civilization does not collapse, we will 100% reach a point where everything required to sustain a civilization is performed by machines.

We shouldn't let fear keep us in a status quo that, while better than it has ever been historically, is still highly flawed.


> China will keep going in whatever path they are going, but supercharged with AGI, and we must desperately follow

Exactly right but for one detail - we must desperately lead.

What other countermeasure do you have in mind?


IMO this is the thing that sends us over the cliff.


Would be ironic if achieving Artificial General Intelligence would be easier than implementing VR at Meta...


Tangentially:

> North Texas’ resident tech genius, John Carmack

Part of me always wonders what would've happened if the Softdisk crew that founded id Software had done it in Shreveport, or had moved to Baton Rouge or Lafayette, instead of going to Texas. When Romero says they "waded across rivers" in Masters of Doom to build games, IIRC he's talking about the bridge over Cross Lake in Shreveport being washed out. The early demos and Keen prototypes were born in Louisiana.

There's always been so much creative tech talent without an outlet or upward mobility across TX/LA/MO/KS/AR/AL/MS, either native to it or hired into it. The nexus of id in the Dallas area and Origin Systems in Austin made Texas an oasis for those who could get there in the 90s/00s, but even among the few people in the surrounding states with access to pre-Internet education and resources, so many couldn't afford to pack up and move even one state over. States around Texas vetoing every incentive to incubate anything but entry-level QA centers didn't help.

So many of those people either risked it all to leave, shuffled that talent into corporate work for oil/gas/finance/Wal-Mart, or didn't do anything with it at all. We know about a lot of the people who figured it out and could leave, but I guarantee there are more Romeros and Carmacks who couldn't, who are still putting in the same kinds of workloads with the same kinds of talent to figure out how to design better oil rigs or more efficiently stock Wal-Mart warehouses.


> I made an estimate three or four years ago that I think there’s a 50-50 chance that we’ll have clear signs of life in 2030 of artificial general intelligence. That doesn’t necessarily mean a huge economic impact for anything yet, but just that we have a being that’s running on computers that most people recognize as intelligent and conscious and sort of on the same level of what we humans are doing. And after three years of hardcore research on all this, I haven’t changed my prediction. In fact, I probably even slightly bumped it up to maybe a 60% chance in 2030. And if you go up to, say, 2050, I’ve got it at like a 95% chance.

> What I keep saying is that as soon as you’re at the point where you have the equivalent of a toddler—something that is a being, it’s conscious, it’s not Einstein, it can’t even do multiplication—if you’ve got a creature that can learn, you can interact with and teach it things on some level.

Last I heard he wasn't interested in getting into the murky waters of consciousness. But I guess I misremembered. I'm very surprised to hear that he's very seriously talking about a conscious computer in the near future.


What is consciousness is an old subject, with old arguments around old misunderstandings. People here are trying to use new analogies from computer science (a clockwork universe), or with misunderstandings around quantum mechanics (somehow randomness gives me agency).

Until General AI needs to work for food and reproduction, everyone will still say its just mimicking humans. Best summarized by Schopenhauer. "A man can do as he wills, but not will as he wills." So if we find where the GAI comes up with the original ‘will’, we’ll just write it off as computation. Go watch some Robert Sapolsky lectures. We are just a monkey society, reacting to stimuli based on hormones and what we just ate. If you drill down far enough, sure some electrons twitched one way or the other, and yeah, if you steal something, or do something the group doesn’t like, then all the other monkeys will want to beat you up and call it justice, and dream up some logic to justify it and call it morality. And eventually the same will happen between GAI agents. Because it’s just turtles all the way down.


I find the tone and collective mindset expressed in this article deeply unsettling. We are talking about developing technology that is going to be the foundation of, and a huge challenge for, the long-term development of the human race, and people are overtly stating that they are working on this because they perceive this to be a shot at making trillions of dollars. I mean, what the fuck, how can money be the right motivator here? This technology would change everything about what it means to be human, and we do it because "we can make money"? This is so shortsighted it's almost tragically laughable. And we as a society worship people talking this way as some kind of hero-preneurs. Like anyone alive today would be able to do anything without all the people supporting our continent-crossing supply chains that are crazy environmentally destructive and all-round unsustainable.

Sorry for this rant, but come on, we can do better than this!


For a long time, there have been two general approaches to AI. The first was focused on algorithms and heuristics, giving us everything from Eliza and somewhat-useful expert systems to Doug Lenat's Cyc project. Then came "AI winter" and everything (except some robotics) was pretty much on hold for a decade. Finally we got "cheap and deep" neural-net stuff as we know it today. I think we need both before we can reach true AGI. Whoever unifies them will have achieved the equivalent of grand unification in physics. And probably also created Skynet.


I stopped reading after the passage "and works independently [to achieve AGI]". In my book, it's impossible to achieve such a breakthrough by working alone, regardless of who the man/woman is.


“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual. Human potential manifests in individuals.” — Dan Ingalls

“Civilization advances by extending the number of operations we can perform without thinking about them” — Alfred North Whitehead


> In my book, it's impossible to achieve such a breakthrough by working alone, regardless of who the man/woman is.

Another fascinating opportunity for AGI - no sole contributor is all on their own, they can just spin up a community to embed into.


How can you claim something is impossible if you don't even know what's going to be achieved?


> We do not yet have the learnable stream of consciousnesses of a co-worker in AI, but we do have kind of this oracular amount of knowledge that can be brought forward.

Except, unless there is some plan to give AGI agents their own mundane, human-like lives, with issues unrelated to the business problems they are supposed to solve, where will they get serendipity inputs?


Carmack seems to be simulating a vtuber AI watching TV and playing video games.

If anyone here is doing that too, I would recommend taking a quick look at Neuro-sama on Twitch. They're using RL to play OSU, Minecraft, and Pokemon, and voice input + video image analysis to react to Twitch streams and documentaries. While being watched by 6.5K people.

The url is twitch.com/vedal


Good point, although some background info should be noted: Vedal worked on Neuro-sama for about 10 years, and for most of that time it was a NN with the singular task of playing "osu!". I think it's only this year that he implemented a chatbot with TTS for her debut on Twitch. Although Neuro-sama is an impressive application of several existing technologies to make a virtual entertainer, under the hood Neuro-sama is several specialized NNs (one to play osu!, one to play Minecraft, one for chat, and one for TTS) that, for the most part, don't know what the others are doing, as very little data is passed between them. There is no scientific innovation or any claims of higher intelligence, but I think it's still a very cool system from a hobby-engineering point of view. Neuro-sama is a ditz, but a funny one.


I agree with John that there are probably other people in his position (great engineers with a lifetime of experience and enough financial resources to focus on something like this for 5-10 years) who could have a go at this and have a reasonable chance at chipping away at AGI.

I also like his frugality, whether it’s optimising for hardware or financials.


> So, about four years ago, I went on one of my week-long retreats, where I just take a computer and a stack of reference materials and I spend a week kind of reimplementing the fundamentals of the industry. And getting to the point where it’s like, ‘All right, I understand this well enough to have a serious conversation with a researcher about it.’ And I was pretty excited about getting to that level of understanding.

As much as I respect Carmack as a computer graphics expert, I really doubt his competence in machine learning. He doesn't have a single notable paper published. If he really thought that implementing gradient descent and basic stuff in a week long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.

Unless he can produce something that outdoes Stable Diffusion, ChatGPT, AlphaGo, etc., he should just hand over technical leadership of his startup to a leading AI researcher. Even Yann LeCun at Meta is struggling to make any progress and is keeping himself busy by calling every other research lab's output pedestrian. We cannot take any of Carmack's AGI predictions seriously; he simply lacks any expertise in the field.


> As much as I respect Carmack as a computer graphics expert, I really doubt his competence in machine learning. He doesn't have a single notable paper published.

Publishing papers is the way the academic/scientific world measures notability and/or competence. It's not the way the engineering world that Carmack comes from measures it. They measure it by building. But you're right, we kind of have to just trust that he has the expertise he says he does by his statements since he has not built any modern AI programs (that I know of at least).

> If he really thought that implementing gradient descent and basic stuff in a week long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.

This is not an accurate account of how he said he developed his knowledge base, just of how he got started so he could have conversations. He said that he spent a retreat learning the basics, and later in the interview he said he took the time to understand the 40 most essential papers in the field, as related to him by a well-known researcher. He has since put the better part of the last 4 years of his professional life into this. While we have no proof of his knowledge, given his intelligence and high competence in computer programming and math, I have no doubt that if he did put in the work he could achieve an understanding equivalent to that of your average AI researcher.

That said, of course it makes sense to be skeptical.


> This is not an accurate account of how he said he developed his knowledge base.

I quoted him directly because I was expecting this kind of response. He took a week off, implemented some stuff from the ground up, and was ready to have serious conversations with AI researchers. The 40 papers by Ilya came later. I have read a hundred ML papers and reviewed preprints. That's quite a low bar, especially if you are prone to skipping the math and simply reading the abstract and conclusions.

His whole approach gives me an "ML for hackers" vibe, and his thoughts on AGI, had they come from anyone else, would have been described as word salad.


> The 40 papers by Ilya came later. I have read a hundred ML papers and reviewed preprints.

I would say it’s more likely John Carmack is capable of learning the state-of-the-art of AI from 40 papers than a random (pun intended based on username) from 100.


Sure, he must be faster than Geoff Hinton too, and it took Hinton a lifetime.

Funnily enough, I am able to publish ML papers - but John Carmack isn't. I wonder why. I would also like to learn more about all the computer graphics algorithms Carmack has invented before I trust him to invent AGI.

Here is one example of a person I am familiar with - Math Olympiad bronze medalist. Princeton PhD in ML Theory. AI researcher in Google.

https://scholar.google.com/citations?user=gZgQLkgAAAAJ&hl=en

Sadly enough, nobody seems to care about his opinion on AGI, but we have thousands of people hanging on Carmack's words because he built Wolf3D and Doom.


Your OP and replies kind of just come across as jealous that John Carmack's opinion on this stuff is taken seriously, whereas relatively unknown folks' opinions (yours, the researcher you mentioned) are not, despite the fact that he's not traditionally credentialed. Like I said in my original post, we should be skeptical of him and his claims. But the way you are dragging this thread out feels like sour grapes. Of course one of the most famous programmers in the world is at least heard out when he dedicates years of his life to a programming-adjacent topic. That's just how fame works.

And calling out Princeton, Google, etc. further exemplifies an academic bubble kind of credentialism. I suspect you don’t realize how it sounds because you’re so in that credential filled world.


I don't have any credentials, really. If I had to be jealous, I would be jealous of Carmack's work at id and Oculus.

I am definitely irritated by the fact that he is able to pull $20M in funding. He has been giving interviews left, right, and center. We are upvoting Altman, Sam Harris, and Carmack, but nobody cares about the actual AI researchers, all academics, who have brought about the deep learning revolution.

Your comments about academic bubbles and researchers give you away. All of the revolutions in AI have been brought about by academics inside the bubbles you are jeering at. The biggest example of an academic inside a bubble is Geoff Hinton.

We have to sit here and listen to word salads from Carmack, Altman (Ilya, who doesn't get interviewed, is the actual researcher behind ChatGPT), Sam Harris, etc., people with very little insight making bold AGI predictions.

I asked to learn about all the algorithms Carmack invented in computer graphics, but haven't heard back. But it looks like he has a good grasp on solving AGI by 2030, according to himself. After all, he is a really great programmer.


> I don't have any credentials, really. If I had to be jealous, I would be jealous of Carmack's work at id and Oculus.

Fair, you come across as jealous of him for something... could be that.

> I am definitely irritated by the fact that he is able to pull $20M in funding. He has been giving interviews left, right, and center. We are upvoting Altman, Sam Harris, and Carmack, but nobody cares about the actual AI researchers, all academics, who have brought about the deep learning revolution.

There it goes again. Fame is fame is fame is fame. Like I said earlier. That's just how it works. No reason to be against someone for being famous for their non-academic accomplishments if they have something valuable to contribute.

> Your comments about academic bubbles and researchers give you away. All of the revolutions in AI have been brought about by academics inside the bubbles you are jeering at. The biggest example of an academic inside a bubble is Geoff Hinton.

I didn't jeer at anybody, I just tried to give you some perspective on how your comments come across. I appreciate all of the researchers, but I also live in the real world and understand that people flock to personalities and front-people. That's not necessarily good, but that's the way it is. Just like nobody gives credit to the under-secretary of state for coming up with a great new foreign policy.

Waving around great university A or great company B does not make someone any more right, just like being John Carmack doesn't make him any more right.

In fact I've written very critical comments of folks like Altman here before: https://news.ycombinator.com/item?id=30750254

> We have to sit here and listen to word salads from Carmack, Altman (Ilya, who doesn't get interviewed, is the actual researcher behind ChatGPT), Sam Harris, etc., people with very little insight making bold AGI predictions.

I agree with you. I never said they were right about AGI. Maybe you should be more generous with my replies and actually think I was trying to give you another perspective about how your comments came across.

> I asked to learn about all the algorithms Carmack invented in computer graphics, but haven't heard back. But it looks like he has a good grasp on solving AGI by 2030, according to himself. After all, he is a really great programmer.

Again, you're confusing who he is (go back to my original comment in the thread where you jeered at him because he didn't publish any notable papers). He is not a researcher and he is not really a computer scientist. Engineers don't come up with novel algorithms for the most part. He builds things. And being a really good builder is a different but still valuable perspective. But that doesn't mean he's right about AGI.


AI researcher perhaps, but almost none of them understand cognition. They're focussed on getting something that vaguely resembles a part of the brain to predict the next most likely token. Their idea of cognition apparently stops at Skinner.


Any AI researcher worth their salt knows that those models aren't representative of how the human brain works... but they're just the kind of models that work best out of what we know how to implement right now. There are models with stronger cognitive inspiration, but their performance is worse.


Anything you study for a few months, you can become the world's leading expert on. It's a lesson I learned while doing my PhD. That's all it takes. After a few weeks, you get to the point where there are only a few others in the world who have read and are able to understand what you've read. A few months on, you are generating new ideas and insights. They might be wrong. But they won't be uninformed.

John Carmack did not start from zero. He already has a firm grasp of the algorithms related to linear algebra, and machine learning is basically a whole bunch of matrix manipulation. He's been doing that for three decades. The rest is just absorbing concepts about how to apply linear algebra to ML. I'd say he's probably uniquely qualified to absorb a lot of knowledge on this quickly. It's not about publishing papers; it's about reading and understanding the right papers. I have no doubt he can chew his way through lots of research material in a week or so.
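
To make the "whole bunch of matrix manipulation" point concrete, here is a minimal sketch of a one-layer model trained by plain gradient descent (Python/NumPy, toy data, all names mine, not anything Carmack has published). The forward and backward passes are just a couple of matrix products, exactly the territory a graphics programmer lives in:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 8))        # 64 samples, 8 features
    y = rng.normal(size=(64, 1))        # regression targets
    W = rng.normal(size=(8, 1)) * 0.1   # weights
    b = np.zeros(1)                     # bias
    lr = 0.01                           # learning rate

    for step in range(100):
        pred = X @ W + b                # forward pass: one matrix multiply
        err = pred - y                  # mean-squared-error residual
        dW = 2 * X.T @ err / len(X)     # backward pass: another matrix multiply
        db = 2 * err.mean(axis=0)
        W -= lr * dW                    # gradient descent update
        b -= lr * db

Deeper models add more layers and nonlinearities, but the bulk of the arithmetic stays this kind of dense linear algebra.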


If it is simply about linear algebra, can you please read this ML paper [1] and go through all the proofs and lemmas over a week? You already have a PhD in ML; it should be easy. Every kid graduating in STEM understands (or should understand) linear algebra. Knowing linear algebra is such a low bar.

[1] https://arxiv.org/pdf/1904.09237


Frankly, one does not need this paper to get toward AI. Adam, the optimization algorithm, you might need (and even there I am not sure), and its original paper is very readable. The fact that this particular proof of Adam's convergence is complicated is largely irrelevant.

https://arxiv.org/pdf/1412.6980.pdf
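
For what it's worth, the entire algorithm fits in a few lines. A minimal sketch of the update rule from that paper (NumPy; the function and variable names are mine):

    import numpy as np

    def adam_step(theta, grad, m, v, t,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # One parameter update, per Kingma & Ba (2014), Algorithm 1.
        m = beta1 * m + (1 - beta1) * grad         # 1st-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * grad ** 2    # 2nd-moment estimate
        m_hat = m / (1 - beta1 ** t)               # bias correction, t >= 1
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

Implementing it is the easy part; proving when and why it converges is where papers like the one above come in.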


Yes, if you set the bar low enough, everything is easy and can be learnt in a month.


That's not really an argument for why understanding this is needed to move the field forward.

Even your point rests on an assumption: that there is no proof of Adam's convergence that a high school student could understand. And that is just a guess at best.


He’s been working on machine learning long enough now to have some chance of success. It may go the way of his rocket ambitions (nothing came of them), but let the man try.


Yep - he also failed with his lean, simple, first-principles approach at Armadillo Aerospace. The guy is proudly uncreative, and so he could never have come up with Scaled's design of a variable-geometry rocket ship launched from a jet mothership.


It looks like we'll eventually have AGI and also detect life in space followed by contact with alien intelligence. It seems to be just a matter of time (unless humanity collapses first).

Interesting times - what will happen first?


My biggest takeaway from this is that he started by reading the most important papers on deep learning. When he originally announced his startup a few months ago, I wondered if he had something else in mind.


What would we actually consider an AGI? Wikipedia lists the following tests:

> The Turing Test (Turing): A machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.

> The Coffee Test (Wozniak): A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.

> The Robot College Student Test (Goertzel): A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.

> The Employment Test (Nilsson): A machine performs an economically important job at least as well as humans in the same job.

LLMs don't seem very far from passing 1), 3) and 4). I wouldn't be surprised if "GPT5" passed those 3.


I think GPT is very far from solving 1), at least if you happen to know anything about LLMs.

I think the easiest one of these would be 4) actually.


Is it? I remember reading [1] back in the day, which showed some limitations of GPT-3, but ChatGPT actually answers all of those perfectly.

[1] https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.h...


With the utmost respect I have for John Carmack, this interview reads like a publicity piece for investors. Unless he has already written the AGI and sent it to do the interview for him, I am not impressed.


Carmack is not taking more funding right now. It reads like a puff investor piece because that is the target audience for the publication.

He isn't trying to impress anyone. He's just being interviewed about his intentions.


John is rubbing my ego with this move. I have basically been on the same track for a few years now, with somewhat fewer resources (4x3090, but planning to grow).


He's going to take a different path by downloading himself into a machine and becoming the AGI. No one will be surprised.


Like Mozart, Beethoven, and Chopin, John Carmack is one of those men who will never really die. He’ll simply become code.


By that measure he's long dead


For those who will never meet Carmack they might as well consider him dead, only his code is what matters to them now.


Carmack makes four points—some of which I agree with—that are unfortunately disturbing when taken in totality:

a) We’ll eventually have universal remote workers that are cloud-deployable.

b) That we’ll have something on the level of a toddler first, at which point we can deploy an army of engineers, developmental psychologists, and scientists to study it.

c) The source code for AGI will be a few tens of thousands of lines of code.

d) That there is good reason to believe an AGI would not require computing power approaching the scale of the human brain.

I wholeheartedly agree with c) and d). However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one. Sure, a hard-takeoff scenario could very well have little stopping it. However, I think that misses the forest for the trees:

Nothing says AGI is going to be one specific architecture. There’s likely many different viable architectures that are vastly different in capability and safety. If the bar ends up being as low as c) and d), what’s stopping a random person from intentionally or unintentionally ending human civilization?

Even if we’re spared a direct nightmare scenario, you still have a high probability for what might end up being complete chaos—we’ve already seen a very tiny sliver of that dynamic in the past year.

I think there’s a high probability that either side of a) won’t exist, because neither the cloud as we know it nor the need for remote workers will be present once we’re at that level of technology. For better or worse.

So what to do?

I think open development of advanced AI and AGI is lunacy. Despite Nick Bostrom’s position that an AGI arms race is inherently dangerous, I believe that it is less dangerous than humanity collectively advancing the technology to the point that anyone can end or even control everything—let alone certain well-resourced hostile regimes with terrible human rights track records that’ve openly stated their ambitions towards AI domination. When the lead time from state of the art to public availability is a matter of months, that affords pretty much zero time to react let alone assure safety or control.

At the rate we’re going, by the time people in the free world with sufficient power to put together an effort on the scale and secrecy of the Manhattan Project come to their senses, it’ll be too late.

Were such a project to exist, I think that an admirable goal might be to simply stabilize the situation via way of prohibiting creation of further AGI for a time. Unlike nuclear weapons, AGI has the potential to effectively walk back the invention of itself.

However, achieving that end both quickly and safely is no small feat. It would amount to the creation of a deity. Yet that path seems more desirable than the alternative outlined above: such a deity coming into existence either by accident or by malice.

This is why I’ve never agreed with people who hold the position that AGI safety should only be studied once we figure out AGI; that, to me, is also lunacy. Given the implications, we should be putting armies of philosophers and scientists alike on the task. Even if they collectively figure out only one or two tiny pieces of the puzzle, that alone could be enough to drastically alter the course of human civilization for the better, given the stakes.

I suppose it’s ironic that humanity’s only salvation from the technology it has created may in fact be technology—certainly not a unique scenario in our history. I fear our collective fate has been left to nothing more than pure chance. Poetic I suppose, given our origins.


> However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one.

Yes. I wonder why we're not trying instead to create an artificial cockroach brain (or just an artificial hydra?). Perhaps that's more on the biology side of the equation? But then again, that may be the biggest surprise of all to Carmack: that the actual AGI breakthroughs come from biologists and not computer nerds.


> what’s stopping a random person from intentionally or unintentionally ending human civilization?

If AGI is really an intelligent agent, our random supervillain would have to do what any real-life villain would need to do: convince his minions of his plan using persuasion or money. I don't think the overall danger would increase at all.

If the AGI is something less than a human, then what are you worried about?


Intelligent agents need not mirror human psychology or emotions. The creation of something extremely powerful that doesn’t think like we do is a very real possibility.

In human beings, what we consider normal is actually a very fragile and delicate balance. Changes in chemical operation of the brain have outsized effects on emotion, perception, and even sanity.

With A[G]I, I think it’s helpful to think of code or architectural changes as analogous in some respects to chemical changes. In other words, if all it takes to spin up an AGI is 30,000 lines of code, then I bet rendering the thing psychotic intentionally or unintentionally would just take a few lines somewhere.

Agents capable of recursive self-improvement at silicon speeds that can easily be rendered psychotic or malevolent even by accident, is not something that I think the public should have access to, let alone anyone.

If it’s less than human, it can still have superhuman capability. The paperclip maximizer is the classic example of a tool AI run amok. Whether it counts as AGI is up for debate. Is tool AI a path to AGI? I think it is.


Until we have resolved the violent side of our human nature by finding a way to eradicate it from our world civilization (which would mean no more murders, no more killing, no more waste, no more wars, etc.), we cannot call ourselves an intelligent life form rather than a primitive species. This next level of humanity must be reached before venturing into creating a new life form, which is the aim of AGI. Here is my tongue-in-cheek rename for AGI: "awful greedy intelligence™", and to go with it, AI: "awful intelligence™"... We are way behind in our human evolution, and AI or AGI is not going to do that evolving for us. It is easier to chase a fantasy than to make real change in human nature.


What are the 40 papers he read?


>And one of the things that I trained myself to do a few decades ago is pulling ideas out and pursuing them in a way where I’m excited about them, knowing that most of them don’t pan out in the end. Much earlier in my career, when I’d have a really bright idea that didn’t work out, I was crushed afterwards. But eventually I got to the point where I’m really good at just shoveling ideas through my processing and shooting them down, almost making it a game to say, ‘How quickly can I bust my own idea, rather than protecting it as a pet idea?’

Cool life lesson there


'Kill Your Darlings' is one of the most fundamental skills in anything creative. It is a hurdle that, once taken, allows you to ascend above a plateau of mediocrity.


Well, he's not taking risks anymore by doing this. He has already proven himself, he's rich, and he's paid incredible amounts to do whatever he likes.


The lesson to be learned is that it’s less risky, not more, to understand that some ideas don’t work out and to have ways to identify failures quickly.


I get and like that interpretation.

I wonder if, societally, it's true though, or if statistically the more reliable way to "succeed" in life is getting good at selling and defending ideas, even the bad ones.


That only works if you lack a conscience. I think you were right to quote the word succeed!


For most people who lean heavily towards this (we all do somewhat; we all have our pride from time to time), it's probably not a very conscious decision. Framed slightly differently, it can easily be made a virtue, for example "be headstrong" or "stand up for what you believe in".

It's hard to spot while/when you are doing it.


Not all advice applies all the time.

I think what you're talking about here "stand up for what you believe in" is maybe a position you should take with others.

While "kill your darlings" is a devil's advocate position you should take with yourself when you don't have someone giving you constructive criticism.


Yeah, if your family's income, etc., depends on it, it's hard to take that many risks. Not everybody has enough wealth to prosper from the passive income such wealth could create.


I don't think it's something he "trained himself to do."

He's always taken risks. He went to juvie for breaking and entering (with thermite) as a kid. He's a college dropout. The pattern from early in his life has been to do whatever he wanted without any kind of risk analysis, not following "common sense."


>common sense

Also called the showstopper for true innovation ;)


I feel that. I’m often doing a thing thinking “this is wrong, it won’t work for xyz” but continue because I want it to work, only to give in to what I knew a day or more ago but refused to acknowledge. It would definitely be more efficient to listen to... myself, and kill things early.


I guess the hardest bit is finding the line between stubbornly pushing a dead idea forward vs. "wouldn't it be cool if <...your idea...>? Nah, that won't work", where you never even try.


I'm always taken aback by this fascination and heroification with John Carmack (or similar figures) on HN. I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different.

From his interviews it looks like he understands little about the technical details of ML, or about as much as anyone can learn in a few months, and is just banking on PR based on his games and name.

I put him in the same category as Elon Musk, who also understands nothing about the technical details of AI but was still able to hire a world-class team at OpenAI. His name and fame count for something in terms of recruiting, and joining his venture may be a good bet because of that, but he's not a person whose opinion on the subject matter I would take seriously in the way I'd take a researcher seriously.


I got obsessed with his ideas after reading the Quake 3 source code. So many novel ideas, flawlessly executed, highly optimized, yet easy to understand and easy to modify and debug.

My personal impression is that John Carmack has the ability to organize concepts in a way that few people can. So even if he's pretty clueless about the topic now, I would expect him to reduce some math papers to their essence in a way that nobody else has.

I mean also for Oculus, reprojection in a pixel shader seems like an obvious and easy solution in hindsight. But nobody had tried that before he did. Plenty of people (myself included) knew the math. But we all missed its application to the issue at hand.


He’s been going on about AI for at least a couple of years now: https://news.ycombinator.com/item?id=21530860 It’s certainly possible that he will have a novel idea, but I haven’t seen anything come out of it yet.

I’m sure Carmack’s mathematical ability exceeds that of the average game developer (and certainly my own), but the AI field isn’t short of math whizzes.


Most math whizzes only learn to apply known methods; they don't invent new methods. The AI field isn't full of people who are creative enough to invent new math methods instead of just applying methods they learned in school.


I disagree. I know plenty of people in the AI field who might as well be mathematicians.


Most mathematicians aren't creative enough to invent new math methods either; they mostly just apply things they learned in school to find new results. Universities aren't good at finding or nurturing creative individuals.


There are plenty of new scientific results, in both math and machine learning, that are essentially six months, a few years, or a few decades of hard work away from discovery.

As a professor once told me: if you're lucky, after 10 years in academia, you get to be creative and come up with 4-5 ideas every 5 years or so (when drawing up the suggested list of research topics for new PhD students). Then you get to be devastated when no one picks up any of your more creative ideas, and everyone picks the 10 non-creative ideas on the list because they don't want to risk their PhD.


Yeah, so a person like Carmack, who takes risks and has spent his career doing technically creative things, could maybe make a difference. Probably not, but as long as creativity is so disincentivized, we will never have many creative individuals around.


Even if those ideas are dead ends, it's really important for the rest of the community to know that at least someone has put in the effort and discovered several dead branches in that creative direction.


What does that even mean?

Is that just your gut feeling, as in the typical "bright people doing amazing things, surely they would succeed doing x" (which ultimately means nothing; that's just PR speak), or are you saying that based on your own understanding of advanced mathematics and mathematical research, from which you are assessing the mathematical work those people do?


The whole point was that the math didn't matter.


I assume Carmack can pick up a lot about a new technical topic in a few years, similarly to how Bill Gates could learn a lot about vaccines and healthcare in a few years. "Hi, I'm (rich and nerd-famous person) calling for (expert in field)." Probably spending most of that time being tutored by that expert's PhD students.


I'm sure he can, but has Bill Gates done actual groundbreaking research into vaccines, or just made intelligent decisions about what kind of research to fund? I think the latter.


Sometimes real progress is made only by finding the proper place to "push" on an otherwise stubborn problem that won't budge.


Sure, I guess. My point is that Bill Gates is doing something quite different from John Carmack (funding research outside his area of expertise, rather than doing the research himself), so the comparison is not very illuminating.


He’s been going on about AI for at least a couple of years now: https://news.ycombinator.com/

A whole couple of years, eh? Wow, this AGI thing is taking forever!


I'm just saying there's been no visible indication of progress. You're obviously free to believe that something will come of it over a longer time period. It also seems that we can add at least another year based on this comment: https://news.ycombinator.com/item?id=21531255


Can you elaborate on 'reprojection in a pixel shader'? What did he do, what was the insight, etc.?


I think they are talking about the asynchronous reprojection that is extensively used in VR. Here's a demo on a 2D game: https://www.youtube.com/watch?v=VvFyOFacljg

He explained the concept in this post: https://web.archive.org/web/20140719085135/http://www.altdev...

But it was not called asynchronous reprojection back then.
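
For anyone wondering what the trick actually is: just before display, the compositor re-samples the already-rendered frame using the latest head rotation, so rotation feels instant even when rendering runs late. A rough rotation-only sketch (Python/NumPy instead of an actual pixel shader; the matrix conventions and names here are my assumptions):

    import numpy as np

    def reproject(frame, K, R_old, R_new):
        # Warp `frame`, rendered at camera rotation R_old, to approximate
        # the view at R_new. K is the 3x3 projection (intrinsic) matrix and
        # rotations are camera-to-world. Rotation-only, so no depth needed.
        h, w = frame.shape[:2]
        H = K @ R_old.T @ R_new @ np.linalg.inv(K)   # new pixel -> old pixel
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # homogeneous coords
        src = pix @ H.T                              # apply the homography
        src = src[..., :2] / src[..., 2:3]           # perspective divide
        sx = np.clip(np.round(src[..., 0]).astype(int), 0, w - 1)
        sy = np.clip(np.round(src[..., 1]).astype(int), 0, h - 1)
        return frame[sy, sx]                         # nearest-neighbor sample

A real VR compositor does this per eye in a fragment shader with proper filtering, and handling positional (translation) changes needs depth information, which is why rotation-only timewarp came first.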


I beg to differ. It's unbiased people like Carmack who tend to think outside the box. I don't think we can claim that he invented the FPS genre, but at the very least he was one of its initial catalysts.

People who work in the field for a long time tend to have a certain bias towards a solution. Often these people are stuck in a local maximum. Outsiders can offer a new perspective that results in a breakthrough, usually by starting from first principles or by looking at side-tracks that used to lead to a dead end.

A great example is Musk's SpaceX: when he noticed how much he had to pay for a rocket engine, he went back to first principles and said: "I'll just build it myself". Combine that with the insight that a rocket should be able to land properly to make re-use a valid option, and it disrupts a whole field.

And once someone did it, others know it's possible and start achieving it as well.

Sometimes ignorance is bliss. Just think about George "Good Will Hunting" Dantzig with the (in)famous "I assumed these were homework assignments, not unsolved math problems" [1], or Eliud Kipchoge running a marathon in under 2 hours.

"I can't understand why people are frightened of new ideas. I'm frightened of the old ones."

- John Cage

High hopes!

[1] https://bigthink.com/high-culture/george-dantzig-real-will-h...


>Sometimes ignorance is bliss. Just think about George "Good Will Hunting" Dantzig with the (in)famous "I assumed these were homework assignments, not unsolved math problems" [1], or Eliud Kipchoge running a marathon in under 2 hours.

A pretty infamous example is Citizen Kane. Welles knew very little of traditional cinematography techniques by the time he got to the production of the movie, so his planned shots more or less fit his artistic vision rather than industry standards. Gregg Toland, his director of cinematography, was at that point a 12-year veteran of the industry and hated the bland, factory-line movie composition of the period.

Welles more or less told Toland what he wanted to shoot and how, and Toland did his best to fit that artistic image. Welles was completely clueless that Toland was using innovative, never-before-seen techniques to film his shots, and Toland kept quiet because he was allowed to go wild with his vision :)


I am not a big fan of idolizing someone, but Carmack is so technically good, and explains himself so well, why not get inspiration from him? I read Masters of Doom and genuinely enjoyed it.

Let me just say I strongly disagree on many levels with the comparison to the other person you mention. To name two reasons: the humbleness that Carmack shows, and how well he explains himself, are key differentiators for me. Regarding the appeal to authority in AI knowledge, Carmack has shown again and again that he can deliver software (AI is software, after all), and we are in a forum with "hacker" in the name.

In summary: not my hero, but when he says something, I will listen. Maybe I'll learn something.


That's a whole lot of gatekeeping right there ;)

Big organizations ruled by money and career-driven people often run into very expensive dead ends without noticing for years (see the last AI winter: there was just too much hype, which meant too much money being thrown around, which then led to the usual organizational cruft).

I would also be very concerned about any field in technology where an intelligent person cannot make meaningful contributions after a few months or years; that would probably mean the whole field is already deep into diminishing returns and needs to be 'unstuck'.


> From his interviews it looks like he understands little about the technical details of ML, or about as much as anyone can learn in a few months

What makes you think that? He literally says he tries to understand things bottom-up, by knowing about every little detail that happens under the hood.


I dislike the way HN submitters post every little Twitter thread written by Carmack, but this article has some value in presenting a little more in-depth perspective on how he is thinking about the AI problem.

Carmack has already entered two spaces of computer technology that he revolutionized: 3D gaming and VR. I trust that he's able to have a similar impact in AI, even if it comes through failing at the problem in ways different from relying on ML.


The jury is still out on whether Oculus revolutionized VR or not. Technologically, Oculus has become more like Google's Daydream than anything else: inside-out tracking of the device and controllers, use of cellphone-class computational power for standalone operation, etc.

The original technologies that were introduced have now been abandoned by Oculus, on Carmack's advice, except for implementation details that haven't been hugely impactful, unlike with 3D gaming.


Too bad people who make progress don't ask for permission to try new things and they probably don't even care what anyone thinks. They are not limited by decades of dogma and rules of the game that experts would take for granted you can't break. If they fail, then well, no problem, at least they tried. If they succeed, then it's a win for all.


Changing fields is tough for everyone, no matter how talented you are. There is a great chance he will fail. On the other hand, he meets all the requirements needed to succeed.

Carmack has proven his extraordinary technical skills. I recommend following his Twitter. Sometimes he posts non-obvious technical stuff. I read some interviews and to me, he doesn't seem to be a person who is driven by gaining popularity.

I think this news is very optimistic, as yet another intelligent, talented, and hard-working person is joining the field. Moreover, he is a household name, which may lead to benefits like popularization of the topic, gaining investors' attention, and so on.

I will keep my fingers crossed for him :).


I fully understand (and share) your aversion to personality cults. And it's definitely possible that there exists some actual personality cult/fandom around Carmack.

BUT - I strongly believe, that he has earned quite some respect during his career.

And - importantly in this case - he is well known for NOT blowing things out of proportion, indulging in wishful thinking, hyping up unrealistic expectations, or jumping to premature conclusions. He usually knows what he's talking about.

This is not people blindly believing everything he says - but more a case of his statements holding up really well under critical inspection most of the time.


> I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different

This means that people should glorify random unknown senior engineers they don't know about instead of being fascinated by a person whose work is available and who has created amazing things for the past 3 decades?

> From his interviews it looks like he understands little about the technical details of ML

That's how everyone starts: they understand a little. We have a person here who has dealt with complex algorithms in a difficult-to-master language for 30+ years. It hints at "this person has the intellectual power to grasp AI fast".

> I put him into the same category as Elon Musk

This is like comparing an Olympic winner with a random person from the street and saying their athletic ability is about the same.

Fascination with and heroification of Carmack come for a reason: people who do that are closely familiar with his work, not surface-level stuff like "he made Doom and Quake". You sound very jealous.


Banking? Frankly, if he were after money, there would be easier ways.

He isn't a kook, and he's doing a moonshot towards AGI: I say 'good luck!'

That's not to say I don't think his '60% chance of AGI by 2030' is wildly overoptimistic, but then again, those who take a shot at AGI are overoptimists.

But he can bring a lot of value, we'll see.


I've never got the impression that he's banking on his reputation getting him anywhere. He "made it" long ago and is in a position to just pursue his passions. I think he broadcasts his interests and progress because others like following along, and not because he's wanting even more attention or success.


> I'm always taken aback by this fascination and heroification with John Carmack (or similar figures) on HN. I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different.

According to Sriram Krishnan, John Carmack was at Facebook's highest engineering level and achieved the top possible rating of "redefines expectations" for his level three years in a row. They had to create a new tier for him. Nobody else has ever reached that level. He replaced a "hundred person team" and maybe was better than that team.

I have no inside insight to the matter, but this seems like something beyond a "random senior engineer".

https://www.youtube.com/watch?v=KXbSOiqnQxQ&t=711s


I agree with your point in general within fields that are a proven success, but as discussed many times on HN, there's no reason to believe current ML/"AI" approaches have much to do with AGI, so it's not obvious what would be gained by him or anybody else investing years in barking up the same barren research tree.

It even makes it clear in the title he's seeking a "different path".

In his favour he's a proven success in different fields; personally I think he's too old to come up with the new ideas needed - that's a young person's game.

But perhaps he can do it as a team lead - and it won't be by following the failed-over-decades path of our current academic gatekeepers.


I would never put these two people in the same box, except the box that simply states: stinking rich.

I don't know either personally, but where Elon demonstrates being full of shit, Carmack would STFU and learn about it before talking. At least that's my impression of them.


There isn’t even that much to learn about AI at this stage: gradient descent and a lot of model variations. I really doubt Carmack hasn't been able to gain deep knowledge of this; most smart, curious undergraduates are already up to date with it.


ML and AI are not a solved thing. The current "deep" algorithms, whose technical details everyone is striving to master, are quite possibly not the ones that will lead us to AGI.


Are you sure you're not misinterpreting him explaining it in a simple way?

His audience in an interview is not technical. He adjusts to that situation quite well.


> I put him in the same category as Elon Musk, who also understands nothing about the technical details of AI but was still able to hire a world-class team at OpenAI.

I don't believe he had anything to do with hiring at OpenAI, nor that he is anything more there than an investor/donor, as others are.


Perhaps not directly, but from the very start OpenAI was considered "Elon's AI company" and people wanted to work there because they wanted to work with Elon. That had a huge effect on what kind of talent OpenAI was able to hire.

Yes, I'm aware that Ilya had much more day-to-day impact, but if you're part of the ML research community and have worked with Ilya, or heard rumors of working with Ilya, that may have had more of a negative effect...


> From his interviews it looks like he understands little about the technical details of ML...

I'd happily bet my entire net worth that he knows more about the technical details of ML than you do.


Maybe Carmack is frustrated by the tremendous success of another ex-game developer: Demis Hassabis :-)



