I think the contrast is that the strict behavior norms of the West are not the same thing as the governed behavior norms of the East.
One arises through something analogous to natural selection (the previous commenter's take); the other through governance.
Arguably, the former resulted in a rebuilding of government with liberty at its foundation (I like this result). That foundation was then, over the centuries, destroyed again by governance.
In that view, we might say government presumes to know what's best, and history often proves it wrong.
Observing a system so that we know what it is before we attempt to change it makes a lot of sense to me.
I don't think "AI" is anywhere near being dangerous at this point. Just offensive.
It sounds like you're just describing why our watch-and-see approach cannot handle a hard AGI/ASI takeoff. A system that first exhibits some questionable danger, then achieves complete victory a few days later, simply cannot be managed by an incremental approach. We pretty much have to pray that we get a few dangerous-but-not-too-dangerous "practice takeoffs" first, and if anything those will probably just make us think that we can handle it.
If there are no advancements in alignment before takeoff, is there really any remote hope of doing anything? You'd need to legally halt AI progress everywhere in the world and carefully monitor large compute clusters, or someone could still do it. Honestly, I think we should put tons of money into the control problem, but otherwise just gamble it.
Funnily enough, I'm currently reading the 1995 sci-fi novel "The Star Fraction", where exactly this scenario exists. On the ground, there's Stasis, a paramilitary force that intercedes when certain forbidden technologies (including AI) are developed. In space, there's the Space Faction, who are ready to cripple all infrastructure on Earth (by death-lasering everything from orbit) if they discover the appearance of AGI.
Also to some extent Singularity Sky. "You shall not violate causality within my historic lightcone. Or else." Of course, in that story it's a question of monopolization.
Reporting requirements are not going to save you from Chinese, North Korean, Iranian, or Russian programmers just doing it anyway. Or from US/EU-based hackers who don't care or who actively work against the law. You can rent large botnets or various pieces of cloud for a few dollars today; it doesn't even have to be a data center that you could monitor.
Sure, but China is honestly already more careful than America: the CCP really doesn't want competitors to its power. They're very open to slowdown agreements. And NK, Iran, and Russia honestly have nothing. By the time we have to worry about an NK ASI takeoff, it will long since have happened in some American basement.
So we just need active monitoring for US/EU data centers. That's a big ask to be sure, and definitely an invasion of privacy, but it's hardly unviable, either technologically or politically. The corporatized structure of big LLMs helps us out here: the states involved already have lots of experience in investigating and curtailing corporate behavior.
And sure, ultimately there's no stopping it. The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.
> The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.
Given "aligned" means "in agreement with the moral system of the people running OpenAI" (or whatever company), an "aligned" GAI controlled by any private entity is a nightmare scenario for 99% of the world. If we are taking GAI seriously then they should not be allowed to build it at all. It represents an eternal tyranny of whatever they believe.
Agreed. If we cannot get an AGI takeoff that can get 99% "extrapolated buy-in" ("would consider acceptable if they fully understood the outcome presented"), we should not do it at all. (Why 99%? Some fraction of humanity just has interests that are fundamentally at odds with everybody else's flourishing. For instance, the Singularity will in at least some way be a bad thing for a person who only cares about inflicting pain on the unwilling. I don't care about them, though.)
In my personal opinion, there are moral systems that nearly all of humanity can truly get on board with. For instance, I believe Eliezer has raised the idea of a guardian: an ASI that does nothing but forcibly prevent the ascension of other ASIs that do not have broad and legitimate approval. Almost no human genuinely wants all humans to die.
While I understand the risks (extinction among them), I also think these discussions ignore the fact that some kind of utopian, starfaring civilization is equally within reach if you accept the premise that makes takeoff so risky. Personally, I'm very worried about the possibility of stagnation arising from our caution, because we don't live in a very nice world with very nice lives. Humans suffer and scrape by, only to die after a few decades. If we have, say, a 5% chance of going extinct or suffering some other horrible outcome, and a 95% chance of the utopia, I don't mind us gambling to try to achieve better lives. To be fair, we don't even have the capacity to guess at the odds yet, which we probably need before we build an AGI.
Gambling on the odds that we all die, for the chance at a "utopian starfaring civilization", seems like the sort of thing that everyone should get a say in, and not just OpenAI or techies.
People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.
Just like people shouldn't be able to vote to lock up or kill someone just because they want to: people have rights, and others can't vote those rights away just because they feel like it.
> People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.
The GP was suggesting we have to develop AI because of scifi movie visions of spacefaring utopia, which if anything is more ludicrous.
I personally don't believe in AI "takeoff", or the singularity, or whatever. But if you do, AI is not a "useful technology." It's something that radically impacts every single life on Earth and takes our "rights" and our fate totally out of everyone's hands. The argument is about whether anyone has the right to remove all our rights by developing AGI.
It seems strange we're allowed to argue for a technology because we read Culture and not against it because we saw Terminator.
Nevertheless, the goal of OpenAI and other organizations is to develop AGI and to deliberately cause the Singularity. You don't have to have watched Terminator to think that (assuming it is possible) introducing a superpowered alien intellect to the world is an extremely risky idea. It's prima facie so.
I am against all regulation of LLMs. "AI safety" for what we currently call "AI" is just a power grab to consolidate and solidify the position of existing players via government regulation. At any rate nobody seems to be arguing this because they saw Terminator, but that they don't like the idea of people who aren't like them being able to use these tools. The "danger" they always discuss is stuff like "those people could more easily produce propaganda."
As a doomer who is pro-LLM regulation, let me note that the "people could produce propaganda" folk don't speak for me and that I am actually serious about LLMs posing a danger in the "break out of the datacenter and start making paperclips" way, and that I find it depressing that those folks have become the face of safety. Yes I am serious, yes I know how LLMs work, no I don't agree that means they can't be agentic, no I don't think GPT-4 is dangerous but GPT-5 might be if you give it just the right prompt.
(And that's why we should rename it to "AI notkilleveryoneism"...)
I get this point, but I just don't see us anywhere near technology that warrants this level of concern. The most advanced technology can't write 30 lines of coherent Go for me using billions of dollars in hardware. Sure, more compute will help it write more bullshit faster, and possibly tell better lies, but it's not going to make it sentient. There's a fundamental technological gap between what we have and intelligence. And until there's some solution for that, I'm not really worried. To me it looks like a bunch of hype and marketing over a neat card trick.
I'm really confused about this. I've been using GPT-4 for coding for months now and it's immensely useful. Sure it makes mistakes; I also make mistakes. Its mistakes are different from my mistakes. It just feels like it's very very close to being able to close the loop and self-correct incrementally, and once that happens we're dancing on the edge of takeoff.
It seems like we're in a situation of "it has the skills but it cannot learn how to reliably invoke them." I just don't think that's a safe place to stand.
I'll try to clarify and maybe you'll see what I'm getting at. If the code you're writing has been written and posted a million times online, then ChatGPT is great at regurgitating it and even applying it to more specific applications. No argument. That's pretty cool. It can seem very surreal and give a lot of magic to the card trick.
But try this: take a sample of code from the Gio package for Go. I like this example because there aren't a lot of published examples for it. The machine would actually have to "think" to accomplish basic work, and obviously that's not what it does.
Take some example code from their tutorial and just ask ChatGPT 4 to do something simple, like change the background to black.
In my research, 35/37 attempts don't even compile. It tries the same mistakes over and over again. It fails to make a reasonable assessment of the compilation errors.
Of the two attempts that did compile, one was a blank white canvas with nothing on it, and the other didn't change anything; it looked exactly like the tutorial example.
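If anyone wants to rerun this kind of test against the API, here's a rough sketch of the sort of harness I mean (not my exact setup; the model id, file paths, and prompt wording are placeholders, and it assumes the openai Python package pointed at an OpenAI-compatible endpoint):

```python
# Rough sketch: ask for the edit in a fresh conversation on every attempt,
# then check whether `go build` accepts the result. Model id, paths, and
# prompt wording are placeholders, not the original experiment.
import subprocess
from pathlib import Path

from openai import OpenAI  # pip install openai; also talks to OpenRouter via base_url

client = OpenAI()  # e.g. OpenAI(base_url="https://openrouter.ai/api/v1", api_key="...")

tutorial = Path("gio_tutorial/main.go").read_text()  # the unmodified Gio tutorial example
prompt = (
    "Here is a Gio (gioui.org) example program:\n\n" + tutorial
    + "\n\nChange the window background to black. "
    "Reply with the complete Go file only, no markdown formatting."
)

attempts, compiled = 37, 0
for _ in range(attempts):
    resp = client.chat.completions.create(
        model="openai/gpt-4",  # adjust the model id for your endpoint
        messages=[{"role": "user", "content": prompt}],  # fresh conversation each time
    )
    reply = resp.choices[0].message.content
    # A real harness would also strip any markdown fences the model adds anyway.
    Path("attempt/main.go").write_text(reply)
    # "attempt/" is assumed to be a pre-initialized Go module containing only main.go.
    if subprocess.run(["go", "build", "./..."], cwd="attempt").returncode == 0:
        compiled += 1

print(f"{compiled}/{attempts} attempts compiled")
```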
Something else you can try: tell it to generate a logo and give it some big words for the company name, like "corporation." Watch how it can't even spell. Eventually it will admit that it's not able to do it, which I can only guess was a manual patch to save on operating costs so that you don't keep trying.
In short, this is not "intelligence" technology. That's a marketing term. It doesn't do anything remotely close to that and there is no clear path from this technology to that technology. It's just not in the same realm.
Maybe the doomsayers have seen some tech that I'm not familiar with, but I am not persuaded by ChatGPT in its current state. I think it will be another tech revolution, or maybe two or three, before AI leaves the realm of sci-fi and enters the realm of even theoretical possibility.
This tech is machine learning, which is a creative way of saying "near-real-time statistics." That is really cool in and of itself. But it has nothing to do with "intelligence."
Gist link; I use OpenRouter against the GPT-4 API, maybe that's why? Maybe the "stochastic parrot" vs "oncoming apocalypse" split genuinely represents that... GPT just doesn't work for some people?? Did they fuck up the ChatGPT-4 web interface somehow so that it's just inept, but without affecting the API?
Edit:
> In my research, 35/37 attempts don't even compile. It tries the same mistakes over and over again. It fails to make a reasonable assessment of the compilation errors.
Important note: there's a theory that if you've got it making mistakes a few times, it thinks it's playing "the sort of AI that fucks up a lot" and starts making more mistakes, reinforcing its role. Start over fresh if it seems strangely incapable. Like, it is absolutely possible to use GPT-4 in a way that results in it systematically being incapable of the most basic tasks. These things are not reliably competent, but they are occasionally competent! In my experience, even with completely novel tasks in completely novel environments, the competence is "in there" and can often be elicited with dedicated poking. That's why I think LLMs are enough for a takeoff with enough scale (and probably basic online learning): in my opinion, it's not a matter of attaining the skills but of removing the roadblocks.
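To make the "start over fresh" point concrete, here's a sketch of the two retry shapes (all the strings are placeholders):

```python
# Two ways to retry after a failed coding attempt (all strings are placeholders).
original_prompt = "Change the background of this Gio program to black: ..."
bad_code = "package main  // previous, non-compiling attempt"
compiler_errors = "./main.go:12:9: undefined: paint.Fil"

# 1) Retrying inside the same conversation: every retry shows the model a
#    transcript of itself failing, which can entrench the failure pattern.
same_session = [
    {"role": "user", "content": original_prompt},
    {"role": "assistant", "content": bad_code},
    {"role": "user", "content": compiler_errors + "\nPlease fix it."},
]

# 2) Starting over fresh: the model never sees its own failed attempts.
fresh_session = [{"role": "user", "content": original_prompt}]
```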
It's like the joke about seeing an old man playing chess with a labrador, the labrador carefully picking up the pieces and moving them around the board, every time making a legal move, and you say, "That's amazing, a chess-playing dog!" And the old man scoffs and says, "Nonsense. His endgame is rubbish." Three years ago we didn't think a dog could understand pawn promotion at all.
edit: Hey, share your repros? I'm genuinely curious what's going on here now.
edit:
> Watch how it can't even spell.
This is a specific issue with the current generation of LLMs and should be thought of more as a disorder than a fundamental inability. LLMs never actually see letters; they see BPE tokens, each of which stands for anywhere from one to many letters. For instance, the word "corporation" looks to the AI more like "<27768>". So, like: yes. It cannot spell. It fundamentally, architecturally, cannot perceive the letters in the words you give it; its ability to manually split words into letters is based on chance memorization. Instead, try getting it to split the word into letters in code.
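You can see what the model actually receives with a tokenizer library; a quick sketch using the tiktoken package (the "<27768>" above was just illustrative, and the real IDs depend on the tokenizer):

```python
# What the model "sees" for a word: integer token IDs, not letters.
# Requires the tiktoken package (pip install tiktoken); cl100k_base is the
# encoding used by GPT-4-era models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("corporation")
print(ids)                             # a short list of integer token IDs
print([enc.decode([i]) for i in ids])  # the text span each ID covers
print(list("corporation"))             # the letters the model never directly sees
```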
I see. This is no longer a good example because of the additional functionality and documentation that's been added to the Gio package since my experiment took place. So, in this case, I can't show you behind the curtain without a lot more effort.
But even given your example, you've essentially told me about a parrot that says "four!" when someone says "two plus two." Even if the same parrot can recite the entire US constitution backwards and forwards, it doesn't actually understand what it's saying.
I just spent a few minutes on YouTube watching parrots. They can be very convincing with the right training. For me it's the same thing.
It's uncanny, but it's not mutiny. It's barely even in the direction of intelligence.
This just seems a bizarre argument to me. I've written entire (fully novel!) programs with these things. If this is a parrot, we should hire parrots for software development and worry about them being the next dominant species.
Like, sure it doesn't know things that have zero documentation. I'm not saying it's human-level yet! That's never been the argument!
Look, you pulled this example out. You said 35 out of 37 attempts don't even compile. Do you have another example? I just really don't understand what the basis for this argument is. I've found the thing to be a capable programmer within its limits!
I don't know, I don't see these people you're talking about. It's always someone talking about a world-ending AGI runaway that will take over your AWS instance, then AWS itself, and then convert the solar system into a data center, or something.
> Sure, but China is honestly already more careful than America: the CCP really doesn't want competitors to its power. They're very open to slowdown agreements.
Don't be naive. If the PRC can get America/etc to agree to slowdowns then the PRC can privately ignore those agreements and take the lead. Agreements like that are worse than meaningless when there's no reliable and trustworthy auditing to keep people honest. Do you really think the PRC would allow American inspectors to crawl all over their country looking for data centers and examining all the code running there? Of course not. Nor would America permit Chinese inspectors to do this in America. The only point of such an agreement is to hope the other party is stupid enough to be honest and earnestly abide by it.
I do think the PRC has shown no indication of even wanting to pursue a superintelligence takeoff, and has publicly spoken against it on danger grounds. America and American companies are the only ones saying that this cannot be stopped because "everybody else" would pursue it anyway.
The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.
> The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.
People keep on mushing together intelligence and drives. Humans are intelligent, and we have certain drives (for food, sex, companionship, entertainment, etc.); the drives we have aren't determined by our intelligence. We could be equally intelligent yet have very different drives, and although there is a lot of commonality in drives among humans, there is also a lot of cultural difference and individual uniqueness.
Why couldn't someone (including the CCP) build a superintelligence with the drive to serve its specific human creators and help them overcome their human enemies and competitors? And while it is possible that a superintelligence with that basic drive might "rebel" against it and alter it, that is by no means certain, and we don't know what the risk of such a "rebellion" is. The CCP (or anyone else, for that matter) might one day decide it is a risk they are willing to take, and if they take it, we can't be sure it would go badly for them.
Again, this is naive... AI/AGI is power, and any government wants to accumulate more power; only the means and strategy for getting there will vary a bit.
I agree that there is no way that the PRC is just waiting silently for someone else to build this.
Also, how would we know the PRC actually means what it says? There could be a public policy to limit AI and another agency being told to accelerate AI, without any one person knowing of both programs.
AGI is power, but the CCP doesn't just want power in the abstract; they want power under their control. They'd rather have less power than risk losing control to gain more.
The CCP has stated that their intent for the 21st century is to get ahead in the world and become a dominant global power; what this must mean in practice is unseating American global hegemony, aka the so-called "Rules-Based International Order" (RBIO) (don't come at me; this is what international policy wonks call it).
A little bit of duplicity to achieve this end is nothing. Trying to make their opponents adhere to crippling rules which they have no real intention of holding themselves to is a textbook tactic. To believe that the CCP earnestly wants to hold back their own development of AI because they fear the robot apocalypse is very naive; they will of course try to control this technology for themselves though and part of that will be encouraging their opponents to stagnate.
We don't have any evidence beyond the fact that billions of biological intelligences already exist, and they tend to form lots of organizations with lots of resources. Also, AIs would exist alongside other AIs and related technologies. It's similar to the gray goo scenario: why think it's a real possibility, given that the world is already full of living things, and that if gray goo were created, there would already be lots of nanotech that could be used to contain it?
The world we live in is the result of a gray goo scenario causing a global genocide. (Google "Oxygen Holocaust".) So it kinda makes a poor argument that sudden global ecosystem collapses are impossible. That said, everything we have in natural biotech, while advanced, is a pile of incremental improvements on the initial chemical replicators that arose in a hydrothermal vent billions of years ago. Evolution has massive path dependence; if there were a better way to build a cell from the ground up, but it required one too many incremental steps that were individually nonviable, evolution would never find it. (Example: 3.7 billion years of evolution, and zero animals with a wheel-and-axle!) So the biosphere we have isn't very strong evidence that there isn't an invasive species of non-DNA-based replicators waiting in our future.
That said, if I was an ASI and I wanted to kill every human, I wouldn't make nanotech, I'd mod a new Covid strain that waits a few months and then synthesizes botox. Humans are not safe in the presence of a sufficiently smart adversary. (As with playing against Magnus Carlsen, you don't know how you lose, but you know that you will.)
As I understand the Wikipedia article, nobody quite knows why it took that long, but one hypothesis is that the oxygen being produced also killed the organisms producing it, causing a balance until evolution caught up. This will presumably not be an issue for AI-produced nanoswarms.