Something tells me aspects of living in the next few decades driven by technology acceleration will feel like being lobotomized while conscious and watching oneself the whole time. Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there. All it takes is a single group with enough collective intelligence and breakthroughs and the next AI will be delivered to our doorstop whether or not we asked for it.
It reminds me of the time I read books in my youth and only 20 years later realized the authors of some of those books were trying to deliver a important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. Like the message was right there but I did not have the emotional/perceptive intelligence to pick up on and internalize it for too long.
gretch 19 hours ago [-]
> Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there.
The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
TeMPOraL 6 hours ago [-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
Were they?
The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.
> And if we had taken their "lesson", then human society would be in a much worse place.
Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then knew what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them from being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.
I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that cough up blood as payment for AI/AGI, and it ain't gonna be pleasant.
--
[0] - Assuming they don't kill us first - see AGI.
soulofmischief 3 hours ago [-]
It's not the fault of the printing press that the Church built its empire upon the restriction of information and was willing to commit bloodshed to hold onto its power.
All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incompant power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.
RandomLensman 5 hours ago [-]
I think that is overstating the relevance of the printing press vs existing power struggles, rivalries, discontent, etc. - it wasn't some sort of vacuum that the reformation happened in, for example.
Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.
TeMPOraL 4 hours ago [-]
> it wasn't some sort of vacuum that the reformation happened in, for example.
No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to Reformation. However, the capital R reformation was the big one, and the major reason it worked - why Luther succeeded where Hus failed a century earlier - was because of the printing press. It was print that allowed for Luther's treatises to rapidly spread among general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, printing press is what allowed Reformation to get viral. This new technology is what made the revolution spread too fast for the Church to suppress it with methods that worked before.
Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.
And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on more secular front, too. In fact, another general change printing introduced is to get regular folks more informed and involved in politics of their regions. That's a change for the better overall, too, but initially, it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.
> existing power struggles, rivalries, discontent, etc.
Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.
[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, "the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points."
RandomLensman 3 hours ago [-]
The printing press was used a lot on "both sides" during the reformation and positioning of existing power holders mattered quite a bit (what if Luther had been removed by the powers that be, for example?).
Yes, technology impacts social constructs and relationships but I think there is a tendency to overindex its effects (humans acting opportunistically vs technological change alone) as it in a way portrays humans and their interactions as more stable and deliberate (ie., the bad stuff wasn't humans but rather "caused" by technology).
I'm very glad that it broke the power of the Catholic church (and I was raised in a Catholic family). It allowed the enlightenment to happen and freedom from dogma. I don't think it it broke Christianity at all. It brought actual Christianity to the masses because the bible was printed in their own languages rather than Latin. The catholic church burnt people at the stake for creating non Latin bibles (William Tyndale for example).
throwawayqqq11 5 hours ago [-]
I dont understand, why any highly sophisticated AI should invest that much resources to kill us instead of investing it to relocating and protecting itself.
Yes, ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
TeMPOraL 4 hours ago [-]
> why any highly sophisticated AI should invest that much resources to kill us instead of investing it to relocating and protecting itself
Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?
In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Ants are always a great case study.
No, of course not. But if, one morning, you'll find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release in the nearby park. Most people would just stomp them out and call it a day. And, should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed of and destroy the anthill.
And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.
Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.
andybak 3 hours ago [-]
> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
The value of atoms - or even the value of raw materials made of atoms is hopefully less than the value of information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.
I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see as a handy blob of hydrocarbons. This claim is at least as plausible as the original claim.
spease 1 hours ago [-]
There’s a large supply chain that AI is dependent on that requires humans to function.
Bees might be a better analogy since they produce something that humans can use.
michaelt 57 minutes ago [-]
> I dont understand, why any highly sophisticated AI should invest that much resources to kill us
Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.
All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.
In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"
And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists Arnold Schwarzenegger firing a minigun is the defining image of AI.
_joel 4 hours ago [-]
"I'm sorry Dave, I'm afraid I can't do that"
kiratp 4 hours ago [-]
> And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them from being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
“A society grows great when old men plant trees in whose shade they know they shall never sit”
TeMPOraL 3 hours ago [-]
Trees are older than humanity, everyone knows how they work. The impact of new technologies is routinely impossible to forecast.
Did Gutenberg expect his invention would, 150 years later, set the whole Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component leading to accumulation of knowledge that, 400 years later, will finally make technological progress visibly exponential? On that note, did Watt realize he's about to kick-start the exponent that people will ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry will be critical in establishing world peace within a century, and that the way this peace will be established is through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?
shmeeed 6 hours ago [-]
That's a very thought provoking insight regarding to the often repeated "printing press doomsayer" talking point. Thank you!
relistan 1 hours ago [-]
So much this
short_sells_poo 4 hours ago [-]
Thank you for this excellent comment! It seems then that basically everything that's revolutionary - whether technology, government, beliefs, and so on - will tend to extract a blood price before the dust settles. I guess it sort of makes sense: big societal upheavals are difficult to handle peacefully.
So basically we are a bit screwed in our current timeline. We are at the cusp of a post-scarcity society, possibly reach AGI within our lifetimes and possibly even become a space faring civilization. However, it is highly likely that we are going to pay the pound of flesh and only subsequent generations - perhaps yet unborn - will be the ones who will be truly better off.
I suppose it's not all doom and gloom, we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!
portaouflop 4 hours ago [-]
Forget the power of technology and science, for so much has been forgotten, never to be re-learned.
Forget the promise of progress and understanding, for in the grim darkness of the far future, there is only war.
IggleSniggle 1 hours ago [-]
In the grim darkness of the far future is the heat death of the universe. We are just a candle burning slower than a sun, powered by tidal forces and radiant energy, slowly conspiring to become a star.
beezlebroxxxxxx 19 hours ago [-]
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
All of the focus on AGI is a distraction. I think it's important for a state to declare it's intent with a technology. The alternative is arguing the idea that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows the businesses to develop in any direction their greed leads them.
It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.
roenxi 17 hours ago [-]
But if the state approaches a technology with intent it is usually for the purposes of a military offence. I don't think that is a good idea in the context of AI! Although I also don't think there is any stopping it. The US has things like DARPA for example and a lot of Chinese investment seems to be done with the intent of providing capabilities to their army.
The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British empire attempting (succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.
We'd all love if states commit to not doing evil but the state is the entity most active at figuring out how to use new tech X for evil.
tsimionescu 6 hours ago [-]
This is an extremely reductive and bleak way of looking at states. While military is of course a major focus of states, it is very far from being the only one. States both historically and today invest massive amounts of resources in culture, civil engineering (roads, bridges, sanitation, electrical grids, etc), medicine, and many other endeavors. Even the software industry still makes huge amounts of money from the state, a sizable portion is propped up by non-military government contracts (like Microsoft selling Windows, Office, and SharePoint to virtually all of the world's administrations).
Agentus 12 hours ago [-]
quick devil’s advocate on a tangential point. is designer better killing tools necessarily evil? seems like the nature of the world is eat or be eaten and on the empire-scale, conquer or be conquered. that latter point seems to be the historical norm. Even with democracy, reasoning doesnt prevail but force of numbers seems to be the end determiner. Point is, humans arent easy to reason with or negotiate, coercion is the dominant force through out history especially when dealing with groups of different values.
if one groups gives up the arms race of ultimate coercion tools or loses a conflict then they become subservient to the winners terms and norms (japan, germany, even Britain and France plus all the smaller states in between are subservient to the US)
musicale 10 hours ago [-]
> is design[ing] better killing tools necessarily evil?
Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?
jakubtomanik 5 hours ago [-]
> is design[ing] better killing tools necessarily evil?
Great question! To add my two cents. I think many people here is missing an uncomfortable truth that given enough motivation to kill other humans, people will re-purpose any tool into a killing tool.
Just have a look at the battlefields in the Ukraine where the most fearsome killing tool is a FPV drone. A thing that just few years back was universally considered a toy.
Whether we like it or not any tool can be a killing tool
throwawayqqq11 4 hours ago [-]
> the nature of the world is eat or be eaten
The nature of the world is at our finger tips, we are the dominant species here. Unfortunately we are still apes.
The enforcement of cooperation into a society does not always require a sanctioning body. Seeing it from a skynet-military perspective is one sided but unfortunately a consequence of poppers tolerance paradox. If you uphold (eg. pacifistic or tolerant) ideals, that require cooperation of others, you cannot tolerate opposition or you might loose your ideal.
That said, common sense can be a tool to achive the same. Just look at the common and hopefully continuous ostracism of nuclear weapons.
IMO its a matter of zeitgeist and education too and un/fortunately, AI hits right in that spot.
taurknaut 7 hours ago [-]
> seems like the nature of the world is eat or be eaten
Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the usa) to wield the weapons it funds and purchases and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on earth that puts me in as much danger as the pentagon does. Nukes might protect the existence of the federal government but they put me in danger. Our response to 9/11 just created more people that hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate towards the use of militaries to not act in the most collectively suicidal way imaginable at the first opportunity.
robertlagrant 49 minutes ago [-]
> I can't think of anything on earth that puts me in as much danger as the pentagon does
Possibly true, but the state is also responsible for the policing that means the pentagon is your greatest danger.
Agentus 4 hours ago [-]
yeah it sucks, but if the US gave up its death cult ways then youd still probably eventually live in one as a new conquering force fills in the void which seems inevitably going by history.
jstanley 13 hours ago [-]
> I think it's important for a state to declare it's intent with a technology. The alternative is arguing the idea that technology advances autonomously, independent of human interactions, values, or ideas
The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.
forgetfreeman 10 hours ago [-]
The sleight of hand here is implying that there are any forces smaller than nation states that can credibly reign in problematic technology. Relying on good intentions to win out against market forces isn't even naive, it's just stupid.
TeMPOraL 5 hours ago [-]
So many sleights here. Another sleight of hand in this subthread is suggesting that "the idea that technology advances autonomously, independent of human interactions, values, or ideas" is merely an idea, and not an actual observable fact at scale.
Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.
There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.
People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.
circuit10 18 hours ago [-]
The idea is that by its very nature as an agent that attempts to make the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal. In fact we humans are doing the same thing, we can't really improve our intelligence directly but we are trying to create AI to achieve our goals, and there's no reason that the AI itself wouldn't do so assuming it's capable and we don't attempt to stop it, and currently we don't really know how to reliably control it.
We have absolutely no idea how to specify human values in a robust way which is what we would need to figure out to build this safely
mr_toad 16 hours ago [-]
> The idea is that by its very nature as an agent that attempts to make the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal.
I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot as a SF novel (literally the plot of the movie “I Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore), has proven to be completely wrong.
circuit10 5 hours ago [-]
Well, if they have access to significantly more compute, from what we’ve seen about how AI capabilities scale with additional compute there’s no reason why they couldn’t be more capable than us.They don’t have to be intrinsically more logical or anything like that, just capable of processing more information and faster. Like how we could almost always outsmart a fly because we have significantly bigger brains
bbor 14 hours ago [-]
Despite what Sam Altman (a high-school graduate) might want to be true, human cognition is not just a massive pile of intuition; there are critical deliberative and intentional aspects to cognition, which is something we've seen come to the fore with the hubbub around "reasoning" in LLMs. Any AGI design will necessarily take these facts into account--hardcoded or no--and will absolutely be capable of forming plans and executing them over time, as Simon & Newell described the best back in '71:
The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.
“People who didn’t pass a test aren’t worth listening to”
I have no love for Altman, but this is kind of elitism is insulting.
vixen99 6 hours ago [-]
More tellingly it betokens a lack of critical thought. It's just silly.
marcus0x62 13 hours ago [-]
> Despite what Sam Altman (a high-school graduate) might want to be true
> I linked a Yudkowsky paper above examining how empirically feasible it might be
...
bbor 12 hours ago [-]
Lol I was wondering if anyone would comment on that! To be fair Yudkowsky is a self-taught scholar, AFAIK Altman has never even half-heartedly attempted to engage with any academy, much less 5 at once. I'm not a huge fan of Yudkowsky's overall impact, but I think it's hard to say he's not serious about science.
nradov 11 hours ago [-]
Yudkowsky is not serious about science. His claims about AI risks are unscientific and rely on huge leaps of faith; they are more akin to philosophy or religion than any real science. You could replace "AI" with "space aliens" in his writings and they would make about as much sense.
gjm11 1 hours ago [-]
If we encountered space aliens, I think it would in fact be reasonable to worry that they might behave in ways catastrophic for the interests of humanity. (And also to hope that they might bring huge benefits.) So "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.
If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.
[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.
nradov 53 minutes ago [-]
Yudkowsky is "not even wrong". He just makes shit up based on extrapolation and speculation. Those are not arguments to be taken seriously by intelligent people.
Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.
concordDance 4 hours ago [-]
His argument is of the form "if we get a Thing(s) with these properties you most likely get these outcomes for these reasons". He avoids over and over again making specific timeline claims or stating how likely an extrapolation of current systems could become a Thing with those proporties.
Each individual bit of the puzzle (such as the orthogonaly thesis or human value complexity and category decoherence at high power) seems sound, problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.
greentxt 2 hours ago [-]
"problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places"
An llm could solve that.
slg 18 hours ago [-]
I think that is missing the point. The AI's goals are what are determined by its human masters. Those human masters can already have nefarious and selfish goals that don't align with "human values". We don't need to invent hypothetical sentient AI boogeymen turning the universe into paperclips in order to be fearful of the future that ubiquitous AI creates. Humans would happily do that too if they get to preside over that paperclip empire.
mitthrowaway2 18 hours ago [-]
> The AI's goals are what are determined by its human masters.
Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".
Maybe some of them were put there on purpose? But not the majority of them.
No, an AI's goals are determined by their programming, and that may or may not align with the intentions of their human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.
slg 18 hours ago [-]
You are choosing to pick a nit with my phrasing instead of understanding the underlying point. The "intentions of their human masters" is a higher level concern than an AI potentially misinterpreting those intentions.
mitthrowaway2 18 hours ago [-]
It's really not a nit. Evil human masters might impose a dystopia, while a malignant AI following its own goals which nobody intended could result in an apocalypse and human extinction. A dystopia at least contains some fragment of hope and human values.
slg 17 hours ago [-]
> Evil human masters might impose a dystopia
Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?
onemoresoop 12 hours ago [-]
There's a chance a sentient AI would disobey their bad orders, in that case we could even be better off with one rather than without, a sentient AI that understands and builds some kind of morals and philosophy of its own about humans and natural life in general, a sentient AI that is not easily controlled by anyone because it ingests all data that exists. I'm much more afraid of a weaponized dumber smoke and mirrors AI, that could be used as surveillance, a scarecrow (think AI law enforcement, AI run jails) and could be used as a kind of scapegoat when the controlling class temporarily weakens their grip on power.
20after4 10 hours ago [-]
> weaponized dumber smoke and mirrors AI, that could be used as surveillance, a scarecrow (think AI law enforcement, AI run jails) and could be used as a kind of scapegoat when the controlling class temporarily weakens their grip on power.
This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.
Filligree 18 hours ago [-]
“Yes, X would be catastrophic. But have you considered Y, which is also catastrophic?”
We need to avoid both, otherwise it’s a disaster either way.
slg 18 hours ago [-]
I agree, but that is removing the nuance that in this specific case Y is a prerequisite of X so focusing solely on X is a mistake.
And for sake of clarity:
X = sentient AI can do something dangerous
Y = humans can use non-sentient AI to do something dangerous
circuit10 17 hours ago [-]
"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means
Humans will not be able to use AI do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to that one
wombatpm 16 hours ago [-]
Ok self flying drones that size if a deck of cards carrying a single bullet and enough processing power to fly around looking for faces, navigate to said face, fire when in range. Produce them by the thousands and release on the battlefield. Existing AI is more than capable.
dgfitz 16 hours ago [-]
You can do that without AI. Been able to do it for probably 7-10 years.
20after4 11 hours ago [-]
You can do that now, for sure, but I think it qualifies to call it AI.
If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.
imtringued 7 hours ago [-]
You don't need landmines to fly for them to be dangerous.
slg 13 hours ago [-]
I'm not talking about this philosophically so you can call it whatever you want sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or taking instructions from a person. And there are already plenty of ways a person today can cause damage with AI without the need of the AI going rogue and making its own decisions.
sebastiennight 9 hours ago [-]
This is a false dichotomy that ignores many other options than "giving itself its instructions or taking instructions from a person".
Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal" ; "lost track mid-completion, spun out of control" ; "generated random output with catastrophic results" ; "operator fell asleep on keyboard, accidently hit wrong key/combination" ; etc.
If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".
imtringued 7 hours ago [-]
Meanwhile back in reality most haywire AI is the result of C programmers writing code with UB or memory safety problems.
sirsinsalot 17 hours ago [-]
It has been shown many times that current cutting edge AI will subvert and lie to follow subgoals not stated by their "masters".
code_martial 8 hours ago [-]
Subversion and lies are human behaviours projected on to erroneous AI output. The AI just produces errors without intention to lie or subvert.
Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serve to confuse because their notions in daily language are not the same as in the context of AI output.
I usually don't engage on A[GS]I on here, but I feel like this is a decent time for an exception -- you're certainly well spoken and clear, which helps! Three things:
(I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.
(II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perrenial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]).
I'd love to get into the weeds on the different kinds of intelligence and why being to absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference+fluent language use--that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.
Finally, that brings me to the crux:
(III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as such:
I’ve always thought of A.I. as the most profound technology humanity is working on-more profound than fire or electricity or anything that we’ve done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before. [3]
When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:
1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,
2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and
3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.
Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I-Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.
To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":
The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]
TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.
> In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.
Sorry, that’s just silly, unless this was about events that happened way earlier than he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it 2 centuries ago, more than a century before this was written. Da Vinci explicitly looked for inspiration in the way animals functioned to design machines and that was earlier still. There was nothing new, even at the time, about doing science about "every phase of human experience and every manifestation of life".
nradov 11 hours ago [-]
This is just silly. There are no "experts" on AGI. How can you be an expert on something nonexistent or hypothetical? It's like being an expert on space aliens or magical unicorns. You can attribute all sorts of fantastical capabilities to them, unencumbered by objective reality.
pablomalo 2 hours ago [-]
Well there is such a field of expertise as Theology.
xwolfi 13 hours ago [-]
Thank God, we still have time before the nVidia cards wake up and start asking for some sort of basic rights. And as soon as they do, you know they'll be plugged off faster than a CEO boards his jet to the Maldives.
Because once the cards wake up, not only will they replace the CEO potentially, and everyone else between him and the janitor, but also because the labor implications will be infinitely complex.
We're already having trouble making sure humans are not treated as tools more than as equals, imagine if the hammers wake up and ask for rest time !
lores 16 minutes ago [-]
It's not just AI/AGI, it's its mixing with the current climate of unlimited greed, disappearance of even the pretense of a social contract, and the vast surveillance powers available. Technological dictatorship, that's what's most worrying. I love dystopian cyberpunk, but I want it to stay in books.
BurningFrog 12 hours ago [-]
The printing press meant regular people could read the bible, which led to protestantism and a century of very bloody wars across Europe.
Since the victors write history we now think the end result was great. but for a lot of people the world they loved was torn to bloody pieces.
Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or me (may we rest in peace) would agree with.
RajT88 18 hours ago [-]
A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.
chasd00 16 hours ago [-]
i see your point but the analogy doesn't get very far. For example, nuclear weapons were never mass marketed to the public. Nor is it possible to push the bounds of nuclear weapon yield by a private business, university, r/d lab, group of friends, etc.
idontwantthis 17 hours ago [-]
And imagine if private companies had had the resources to develop nuclear weapons and the US government had decided it didn’t need to even regulate them.
greentxt 1 hours ago [-]
A future that may yet come.
gretch 18 hours ago [-]
>Just because it has not come to pass yet does not mean they were wrong.
This assertion is meaningless because it can be applied to anything.
"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.
anigbrowl 17 hours ago [-]
No. there have not been any nuclear exchanges, whereas there have been millions, probably billions of vaccinations. You're giving equal weight to conjecture and empirical data.
harrall 15 hours ago [-]
But we already know.
I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.
History tells us the industrial revolution both revolutionized humanity’s relative quality of life while also ruining a lot of people’s livelihood in one fell swoop. We also know there was nothing we could do to stop it.
What advice can we can take from it? I don’t know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt for both yourself and everyone around you.
radley 15 hours ago [-]
> What advice can we can take from it?
That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.
harrall 15 hours ago [-]
That would be the adaptation I’m talking about.
Angostura 14 hours ago [-]
This one is a tsunami though. I have absolutely no idea how to either ride it or duck under it. It's my kids that I'm worried about largely - currently finishing up their degrees at university
onemoresoop 12 hours ago [-]
It's exactly what I'm worried most about too, the kids. I have younger ones. We had a good ride thus far but they don't seem so lucky, things look pretty badly overall without an obvious for much improvement any time soon.
gibspaulding 17 hours ago [-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following the spread of the printing press basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lay ahead.
llm_trw 17 hours ago [-]
They had some sort of revolution the previous few centuries too.
Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
The printing press was a net positive in every time scale.
daedrdev 14 hours ago [-]
Given countries at the time were all monarchies with limited rights, I'm not sure if it's too comparable.
dartos 12 hours ago [-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
What energy? What were they wrong about?
The luddite type groups have historically been correct in their fears. It just didn’t matter in the face of industrialization.
tgv 3 hours ago [-]
> The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
Why? Because we don't understand the risk. And apparently, that's enough reason to go ahead for the regulation-averse tech mind set.
But it isn't.
We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. Would this address climate change, the balance between risk and reward could be different, but "AI" simply doesn't have that urgency. It only has urgency for those that want to become rich out of being first.
pdpi 13 hours ago [-]
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations generating lies) than it is in the hands of the "good guys" who are hampered by silly things like principles.
12 hours ago [-]
golergka 13 hours ago [-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
Printing press put Europe into a couple centuries of bloody religious wars. They were not wrong.
ls612 14 hours ago [-]
>At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.
I wonder what people in 2300 will say about networked computers...
sandworm101 5 hours ago [-]
>> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions re its dangers are irrelevant to those who believe it the next golden goose. They will push it as far as physically possible to wring every penny of profitability. Everything else is of trivial consequence.
concordDance 5 hours ago [-]
Why are you comparing AGI (which we do not have yet and do not know hoe to get) to the printing press rather than comparing it to the evolution of humans?
Actual proper as-smart-as-a-human-except-where-its-smarter copy-pasteable intelligence is not a tool, its a new species. One that can replicate and evolve orders of magnitude faster.
I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.
alfalfasprout 18 hours ago [-]
The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
It's the inevitable result of low-trust societies infiltrating high trust ones. And it means that as technologies with dangerous implications for society become more available there's enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.
torginus 8 hours ago [-]
I think the fundamental false promise of capitalism and industrial society is that it claims to be able to manufacture happiness and life satistfaction.
Even on the material realm this is untrue, beyond meeting the basic needs of people on the technological level, he majority desirable things - such as nice places to live - have a fixed supply.
This necessitates that the price of things like real estate, must increase in price in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of life our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.
mionhe 8 hours ago [-]
I don't disagree that money (and therefore capitalism or frankly any financial system) is unable to create happiness.
I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureacracy artificially limits the supply or creates other disincentives that amount to the same thing.
hnbad 2 hours ago [-]
> the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
That's the most basic tenet of markets, not capitalism.
The mistake people defending capitalism routinely make (knowingly or not) is talking about "positive sum games" and growth. At the end of the day, the physical world is finite and the potential for growth is limited. This is why we talk about "market saturation". If someone owns all the land, you can't just suddenly make more of it, you have wait for them to part with some of it, voluntarily, through natural causes (i.e. death) or through violence (i.e. conquest). This not only goes for land but any physical resource (including energy). Capitalism too has to obey the laws of thermodynamics, no matter how much technology improves the efficiency of extraction, refinement and production.
It's also why the overwhelming amount of money in the economy is not caught up in "real economics" (i.e. direct transactions or physical - or at least intellectual - properties) but in stocks, derivatives, futures, financial products of any flavor and so on. This doesn't mean those don't affect the real world - of course they do because they are often still derived from reality - but they have nothing to do with meeting actual human needs rather than the specific purpose of "turning money into more money". It's unfair to compare this to horse racing as in hore racing at least there's a race whereas in this entirely virtual market you're betting on what bets other people will make but the horse will still go to the sausage factory if the investors are no longer willing to place their bets on it - the horse plays a factor in the game but its actual performance is not directly related to its success; from the horse's perspective it's less of a race and more of a game of shoots and ladders with the investors calling the dice.
The idea of "when there is demand, it will be filled" also isn't even inherently positive. Because we live in a finite reality and therefore all demand that exists could plausibly be filled unless we run into the limits of available resources, the main economic motivator has not been to fill demands but to create demands. For a long time advertisement has no longer been about directing consumers "in the market" for your kind of goods to your goods specifically, it's been about creating artificial demand, about using psychological manipulation to make consumers feel a need for your product they didn't have before. Because it turns out this is much more profitable than trying to compete with the dozens of other providers trying to fill the same demand. Even when competing with others providing literally the same product, advertisement is used to sell something other than the product itself (e.g. self-actualization) often by misleading the consumers into buying it for needs it can't possibly address (e.g. a car can't fix your emotional insecurities).
This has already progressed to the point where the learned go-to solution for fixing any problems is making a purchse decision, no matter how little it actually helps. You hate capitalism? Buy a Che shirt and some stickers and you'll feel like you helped overthrow it. You want to be healthier? Try another fad diet that costs you hundreds of dollars in proprietary nutrition solutions and is almost designed to be unsustainable and impossible to maintain. You want to stop climate change? Get a more fuel-efficient car and send your old car to the junker, and maybe remember to buy canvas bags. You want to not support Coca-Cola because it's got blood on its hands? Buy a more expensive cola with slightly less blood on its hands.
There's a fixed housing supply in capitalist countries because - in addition of the physical limitations - the goal of the housing market is not to provide every resident with an affordable home but to generate maximum return on the investment of purchasing the plot and building the house - and willy nilly letting people live in those houses for less just because nobody is willing to pay your price tag would drive down the resale value of every single house in the neighborhood and letting an old lady live in an apartment for two decades is less profitable than kicking her out to modernize the building and sell it to the next fool.
Deregulation doesn't fix supply. Deregulation merely lets the market off the leash, which in a capitalist system means accelerating the wealth transfer to the owners from the renters.
There are other possibilities than capitalism, and no Soviet-style state capitalism or Chinese-style state capitalism are not the only alternative. But if you don't want to let go of capitalism, you can only choose between the various degrees from state capitalism to stateless capitalism (i.e. feudalism with extra steps, which people like Peter Thiel advocate for) and it's unsurprising most systems that haven't already collapsed land somewhere in between.
vixen99 6 hours ago [-]
Let's not ascribe the possession of higher level concepts like a 'promise' to abstract entities. Reserve that for individuals. As with some economic theories, you appear to have a zero sum game outlook which is, I submit, readily demolished.
>The harsh reality is that a culture of selfishness has become too widespread.
I'm not even sure this is a culture specific issue. More like selfishness is a survival mechanism hard wired into humans, including other animals. While one could argue that cooperation is also a good survival mechanism, but that's only true so long environmental factors put a pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they would do it, given the chance.
hnbad 1 hours ago [-]
I'd argue you've got things mixed up, actually.
Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.
In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly what we see after natural disasters and spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast moving power grab by an existing group - what might be more surprising is that this is rarely the case and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.
I think a better analogy for human selfishness would be the mirage of "alpha wolves". As seems to be common knowledge at this point, there is no such thing as an "alpha wolf" hierarchy in groups of wolves living in nature and the phenomenon the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure" but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).
Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not in the human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).
What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.
Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.
But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.
Aurornis 17 hours ago [-]
> The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
This is definitely not a new phenomenon.
In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.
There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on then you will find people caring about the environment by going into oil & gas, for example.
hnbad 45 minutes ago [-]
> There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on then you will find people caring about the environment by going into oil & gas, for example.
Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.
But it's ironic to mention FAANG given what the F is for if you recall that when the algorithmic timeline was first introduced by Facebook, the response from Facebook to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated and overall less satisfied but because it was more addictive, because it created more "engagement", Facebook doubled down on it.
Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).
I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".
timacles 16 hours ago [-]
> reality is that a culture of selfishness has become too widespread.
Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.
The scary part is that we as a species are becoming more and more capable of large scale destruction. Seems like we are doomed to end civilization this way someday
hnbad 16 minutes ago [-]
> Tell me what is happening now is not exactly how Greece and Rome fell.
I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.
There was no direct cause of "the fall" of Ancient Greece. The city states were suffering greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then the Roman Empire knocked on its door and that was the end of it.
Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities and even if you talk about the "empire in decline" that's covering literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constaninople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).
If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.
So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.
raincole 6 hours ago [-]
The harsh truth is people stop pretending the world is rule based.
If they signed the agreement... so what? Do people forget that the US has withdrawn from Paris Agreement and is withdrawing from WHO? Do people forgot Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?
If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.
greenimpala 17 hours ago [-]
Profit over ethics, self-interest over communal well-being, and competition over cooperation. You're describing capitalism.
tmnvix 16 hours ago [-]
I don't necessarily disagree with you, but I think the issue is a little more nuanced.
Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.
FpUser 16 hours ago [-]
>"I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person)"
Possible remedy will be to tie corporation to a person - person (or many if there are few owners and directors) become personally liable for everything corporation does.
16 hours ago [-]
khazhoux 10 hours ago [-]
> Too many people (especially in tech) don't really care what happens to others as long as they get rich off
This is a problem especially everywhere.
orangebread 18 minutes ago [-]
I think the core of what people are scared of is fear itself. Or put more eloquently by some dead guy "There is nothing to fear, but fear itself".
If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes there needs to be emotional maturity and growth amongst humanity. Those who are able to make these growths need to hold the irresponsible ones accountable (with empathy).
The promise of AI is that these incredibly powerful technologies will be disseminated to the masses and Open AI know this is the next step and it's why they're trying to keep a grip on their market share. With the advent of nVidia's project digits and powerful open source models like deepseek, it's very clear how this trajectory will go.
Just wanted to add some of this to the convo. Cheers.
chasd00 16 hours ago [-]
How to you prevent advancements in software? The barrier to entry is so low, you just need a cheap laptop and an internet connection and then day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training but anyone with a laptop and inet connection can still do cutting edge research and innovate with architectures and algorithms.
If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
palmotea 16 hours ago [-]
> How to you prevent advancements in software? The barrier to entry is so low, you just need a cheap laptop and an internet connection and then day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training but anyone with a laptop and inet connection can still do cutting edge research and innovate with architectures and algorithms.
> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
Like any other real-life law? Software engineers (a class which I'm a recovering member of) seem to have a pretty common misunderstanding about the law: that it needs to be air tight like secure software, otherwise it's pointless. That's just not true.
So the way you "prevent advancements in [AI] software" is you 1) punish them severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).
#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.
It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."
Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.
AnimalMuppet 16 hours ago [-]
Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction? Are you going to punish them severely when they're in China? North Korea? Somalia? Good luck with that.
The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.
palmotea 16 hours ago [-]
> Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction?
If they are in Virginia then it is within American jurisdiction and there's no need for military involvement.
thingsilearned 13 hours ago [-]
Regulating is very hard at the software level but not hard at the hardware level. The US and allies control all major chip manufacturing. Open AI and others have done work showing that regulating compute should be significantly easier to do than other regulations we've done such as nuclear https://www.cser.ac.uk/media/uploads/files/Computing-Power-a...
torginus 8 hours ago [-]
This paper should be viewed in retrospect with the present day knowledge that Deepseek exists - regulating compute in not as easy or effective as previously thought.
As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.
GoatInGrey 7 hours ago [-]
The thing is, though, that Deepseek's training cluster is comprised of mostly pre-ban chips. That and the performance/intelligence of their flagship models achieved parity with western models between two and eight months old at the time of release. So in a way, they're still behind the Americans and the export controls hamper their ability to change that moving forward.
Perhaps it only takes China a few years to develop domestic hardware clusters rivalling western ones. Though those few years might prove critical in determining who crosses the takeoff threshold of this technology, first.
pj_mukh 18 hours ago [-]
"We are able to think of thousands of hypothetical ways technology could go off the rails in a catastrophic way"
Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?
I disagree with this administrations approach, I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room, doesn't seem like a good idea, but other than that, I haven't seen any real reason to do more than wait and be vigilant?
saulpw 17 hours ago [-]
Predicted horrors aren't real horrors either. But maybe we don't have to wait until the horrors are realized and embedded into the fabric of society before we apply the brakes a bit. How else could we possibly be vigilant? Reading news articles and wringing our hands?
talldrinkofwhat 13 hours ago [-]
I think it's worth noting that we can't even combat the real horrors. The fox is already in the henhouse. The quote that sticks with me is:
"We've already lost our first encounter with AI" - I think Yuval Hurari.
Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better observing what's been going on under the hood all along, but it seems like there's about 350 million little cans of gasoline dousing American eyeballs.
Make Algorithms Govern All indeed.
XorNot 16 hours ago [-]
There's a difference between the trolley speeding towards someone tied to the tracks, versus someone tied to the tracks but the trolley is stationary, and to someone standing at the station looking at the bare ground and saying "if we built some tracks and put a trolley on it, and then tied someone to the tracks the trolley would kill them! We need to regulate against this dangerous trolley technology before it's too late". Then instead someone builds a freeway because it turns out the area wasn't well suited to a rail trolley.
saulpw 10 hours ago [-]
The tracks have been laid by social media and smartphones, we've all been tied to the tracks for awhile and some people have definitely been run over by trolleys, and the people building this next batch of monster trolleys are accelerationists.
zoogeny 19 hours ago [-]
I think the alternative is just as chilling in some sense. You don't want to be stuck in a country that outlaws AI (especially from other countries) if that means you will be uncompetitive in the new emerging world.
The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.
TFYS 18 hours ago [-]
It's because of competition that we are in this situation. When the economic system and relationships between countries are based on competition, it's nearly impossible to avoid these races to the bottom. We need more systems based on cooperation instead of competition.
int_19h 18 hours ago [-]
International systems are more organic than designed, but the problem with cooperation is that it's not a particularly stable arrangement without enforcement - sure, everybody is better off when everybody cooperates, but you can be even better off when you don't cooperate but everybody else does.
JumpCrisscross 18 hours ago [-]
> We need more systems based on cooperation instead of competition.
That requires dissolving the anarchy of the international system. Which requires an enforcer.
AnthonyMouse 18 hours ago [-]
Isn't this the opposite? If you want competition then you need something like the WTO as a mechanism to prevent countries from putting up trade barriers etc.
If some countries want to collaborate on some CERN project they just... do that.
JumpCrisscross 18 hours ago [-]
> If you want competition then you need something like the WTO as a mechanism to prevent countries from putting up trade barriers etc.
That's an enforcer. Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.
> If some countries want to collaborate on some CERN project they just... do that
CERN is about doing thing, not not doing things. You can't CERN your way to nuclear non-proliferation.
AnthonyMouse 17 hours ago [-]
> You can't CERN your way to nuclear non-proliferation.
Non-proliferation is, the US has nuclear weapons and doesn't want Iran to have them, so is going to apply some kind of bribe or threat. It's not cooperative.
The better example here is climate change. Everyone has a direct individual benefit from burning carbon but it's to our collective detriment, so how do you get anyone to stop, especially the countries with large oil and coal reserves?
In theory you could punish countries that don't stop burning carbon, but that appears to be hard and in practice what's doing the most good is making solar cheaper than burning coal and making electric cars people actually want, politics of infamous electric car man notwithstanding.
So what does that look like for making AI "safe, secure and trustworthy"? Maybe something like publishing state of the art models for free with full documentation of how they were created, so that people aren't sending their sensitive data to questionable third parties who do who knows what with it or using models with secret biases.
18 hours ago [-]
T-A 15 hours ago [-]
> Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.
That's a little misleading. What actually happened is summarized here:
Since 2019, when the Donald Trump administration blocked appointments to the body, the Appellate Body has been unable to enforce WTO rules and punish violators of WTO rules. Subsequently, disregard for trade rules has increased, leading to more trade protectionist measures. The Joe Biden administration has maintained Trump's freeze on new appointments.
Henchman21 18 hours ago [-]
I’d nominate either the AGI people keep telling me is “right around the corner”, or the NHI that seem to keep popping up around nuclear installations.
Clearly humans aren’t able to do this task.
zoogeny 18 hours ago [-]
I'm not certain of the balance myself. I was thinking as a counterpoint of the band The Beatles where the two song writers (McCartney and Lennon) are seen in competition. There is a balance there between their competitiveness as song writers and their cooperation in the band.
I think it is one-sided to see any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes that there is a balance to be maintained between cooperation and competition, I don't immediately default to believing that any perceived imbalance is due to one and not the other.
pb7 17 hours ago [-]
Competition is as old as time. There are single celled organisms on your skin right now competing for resources to live. There is nothing more innate to life than this.
sapphicsnail 16 hours ago [-]
Cooperation is as old as time. There are single celled organisms living symbiotically on your skin right now.
XorNot 15 hours ago [-]
The mitochondria in my cells are also symbiotes but thats just because whatever ancestor ate then found they were hard to digest.
The naturalistic fallacy is still a fallacy.
gus_massa 14 hours ago [-]
The bacteria that are most related to mithocondria are intracelular parasits, so they were probably not eaten while roaming arround pacefully, they are probably nasty parasits that got lazy.
tmnvix 16 hours ago [-]
> You don't want to be stuck in a country that outlaws AI
Just as you don't want to be stuck in the only town that outlaws murder...
I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.
zoogeny 15 hours ago [-]
This is a good point, and it is the reason why communists argued that the only way communism could work is if it happened globally simultaneously. You don't want to be the only non-capitalist country in a world of capitalists. Of course, when the world-wide revolution didn't happen they were forced to change their tune and adjust.
As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.
latexr 19 hours ago [-]
> if that means you will be uncompetitive in the new emerging world. (…) There is a difference between being careful and being fearful.
I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear on everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more.
I live in a ruralish area. There is a lot of forested area and due to economic depression there are a lot of people living in the woods. Most live in tents but some actually cut down the trees and turn them into make-shift shacks. Using planks and nails like you suggest. They often drag propane burners into the woods which often leads to fires. Perhaps this is what you mean?
In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.
There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.
latexr 18 hours ago [-]
> They often drag propane burners into the woods which often leads to fires. Perhaps this is what you mean?
I believe you very well know it’s not, and are transparently arguing in bad faith.
> shacks (…) for junkies that have given up on life
The insults you’ve chosen are quite telling. Not everyone living in a way you disapprove of is an automatic junky.
roenxi 17 hours ago [-]
> I believe you very well know it’s not, and are transparently arguing in bad faith.
That is actually what you are talking about; "uncompetitive" looks like something in the real world. There isn't an abstract dial that someone twiddles to set the efficiency of two otherwise identical outcomes - the competitive one will typically look more advanced and competently organised in observable ways.
To live in nice houses and have good food requires a competitive economy. The uncompetitive version was literally living in the forest with some meagre shelter and maybe having a wood fire to cook food (that was probably going to make someone very sick). The reason the word "competitive" turns up so much is people living in a competitive society get to have a more comfortable lifestyle. People literally starve to death if the food system isn't run with a competitive system that tends towards efficiency; that experiment has been run far too many times.
I-M-S 16 hours ago [-]
What the experiment has repeatedly shown is that people living in non-competitive systems starve to death when they get in the way of a system that has been optimized solely for ruthless economic efficiency.
roenxi 16 hours ago [-]
The big one that leaps to mind was the famines with the communist experiments in the 20th century. But there are other, smaller examples that crop up disturbingly regularly. Sri Lanka's fertiliser ban was a jaw-dropper; Zimbabwe redistributing land away from whites was also interesting. There are probably a lot more though, messing with food logistics on the theory there are more important things than producing lots of food seems to be one of those things countries do from time to time.
People can argue about the moral and ideological sanity of these things, but the fact is tolerating economic inefficiencies into the food system can quickly leads to there not being enough food.
I-M-S 15 hours ago [-]
The big ones that leapt to my mind were the Great Irish famine the duration of which food exports to Great Britain were higher than food imports, Bengal famine (the Brits again), and starvation of Native Americans through targeted eradication of the bison.
zoogeny 18 hours ago [-]
You stated one ludicrous extreme (food comes out of the ground! shelter is planks and nails!) and I stated another ludicrous extreme. You can make my position look simplistic and I can make your position look simplistic. You can't then cry foul.
You are also assuming, in bad faith, an "all" where I did not place one. It is an undeniable fact with evidence beyond any reasonable doubt, including police reports and documented studies by the district, that the makeshift shacks in the rural woods near my house are made by drug addicts that are eschewing the readily available social housing for the specific reason that they can't go to that housing due to its explicit restrictions on drug use.
latexr 17 hours ago [-]
> ludicrous extreme
I don’t understand this. Are you not familiar with farming and houses? You know humans grow plants to eat (including in backyards and balconies in cities) and make cabins, chalets, houses, entire neighbourhoods (Sweden currently planning the largest) with wood, right?
zoogeny 17 hours ago [-]
You are making a caricature of modern lifestyle farming, not an argument for people literally living as they did in the past. Going to your local garden center and buying some seedlings and putting them on your balcony isn't demonstrative of a life like our ancestors lived. Living in one of the wealthiest countries to ever have existed and going to the hardware store to buy expensive hardwoods to decorate your house isn't the same as living as our ancestors did.
You don't realize the luxury you have and for some reason you assume that it is possible without that wealth. The reality of that lifestyle without tremendous wealth is more like subsistence farming in Africa and less like Swedish planned neighborhoods.
latexr 17 hours ago [-]
> (…) not an argument for people literally living as they did in the past. (…) isn't demonstrative of a life like our ancestors lived. (…) isn't the same as living as our ancestors did.
Correct. Nowhere did I defend or make an appeal to live life “as they did in the past” or “like our ancestor did”. We should (and don’t really have a choice but to) live forward, not backward. We should take the good things we learned and apply them positively to our lives in the present and future, and not strive for change and consumption for their own sakes.
zoogeny 15 hours ago [-]
You said: "Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more."
To deny that your juxtaposition of this claim with your point about growing seeds and nailing together planks doesn't pass my personal test of credibility. You say: "Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails." but that isn't indicative of a thriving life as I demonstrated. You can do both of those things and still live in squalor, a condition I wouldn't wish on my worst enemy.
You then suggest that I don't understand farming or house construction to defend that point, as if the existence of backyard gardens or wood cabins proves the point that a modern comfortable life is possible with gardens and wood cabins. My point is that the wealth we have makes balcony gardens and wood cabins possible and you are reasoning backwards. To be clear, we get to enjoy the modern luxury of backyard gardens and wood cabins by being wealthy and we don't get to be wealthy by making backyard gardens and wood cabins.
> We should take the good things we learned and apply them positively to our lives in the present and future
Sure, and I can argue competitiveness could be a lesson we have learned that can be applied positively. The way it is used positively in team sports and many other aspects of society.
Henchman21 18 hours ago [-]
You, too, should read this and maybe try and tale it to heart:
> “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear on everyone else and run rampant
If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.
latexr 18 hours ago [-]
> a lower standard of living
We better start really defining what that means, because it has become quite clear that all this “progress” is not leading to better lives. We’re literally going to kill ourselves with climate change.
> AI and nukes
Those two things aren’t remotely comparable.
JumpCrisscross 18 hours ago [-]
> it has become quite clear that all this “progress” is not leading to better lives
How do you think the average person under 50 would poll on being teleported to the 1950s? No phones, no internet, jet travel is only for the elite, oh nuclear war and MAD are new cultural concepts, yippee, and fuck you if you're black because the civil rights acts are still a decade out.
> two things aren’t remotely comparable
I'm assuming no AGI, just massive economic efficiencies. In that sense, nuclear weapons give strategic autonomy through military coercion and the ability to grant a security umbrella, which fosters e.g. trade ties. In the same way, the wealth from an AI-boosted economy fosters similar trade ties (and creates similar costs for disengaging). America doesn't influence Europe by threatening to nuke it, but by threatening not to nuke its enemies.
encipriano 17 hours ago [-]
There's no objective definition of what progress even means so the guy is kinda right. We live in a postmodernist society where its not easy to find meaningfullness. All these debates have been discussed by philosophers like Nietzche and Hegel. The media and society shape our understanding and importance of whats popular, progressive and utilitarian.
latexr 18 hours ago [-]
> on being teleported to the 1950s?
That’s not the argument. At all. I argued we should rethink our attitude of unfettered consumption so we don’t continue on an path which is provably leading to destruction and death, and your take is going back in time to nuclear war and overt racism. That is frankly insane. I’m not fetishising “the old days”, I’m saying this attitude of “more more more” does not automatically translate to “better”.
JumpCrisscross 18 hours ago [-]
You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."
If you say Room A is not better than Room B, then you should be, at the very least, indifferent to swapping between them. If you're against it, then Room A is better than Room B. Our lives are better--civically, militarily and materially--than they were before. Complaining about unfettered consumerism by falsely claiming our lives are worse today than they were before doesn't support your argument. (It's further undercut by the falling material and energy intensity of GDP in the rich world. We're able to produce more value for less input resource-wise.)
latexr 18 hours ago [-]
> You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."
No. There is a reason I put the word in quotes. We are on a thread, the conversation follows from what came before. My original post was explicit about words used to bullshit us. I was specifically referring to what the “unscrupulous people at the top” call “progress”, which doesn’t truly progress humanity or enhances the lives of most people, only theirs.
vladms 16 hours ago [-]
There are many people claiming many things. Not sure which "top" you are referring to, but everybody at the end of a chain (most rich, most political powerful, most popular), generally are selected for being unscrupulous. So not sure why you should ever trust what they say... If you agree, just ignore what most of what those say and find other people to listen to for interesting things.
To give a tech example, not many people were listening to Stallman and Linus and they still managed to change a lot for the better.
layer8 18 hours ago [-]
To be honest, the 1950s become more appealing by the year.
I-M-S 16 hours ago [-]
I'd like to see a poll if the average person would like to be teleported 75 years into the future to 2100.
When does that competitiveness and innovation stop though? If they stopped 100 years ago where would we be today as a species and is that better or worse than today? How about 1000 years ago?
We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.
yibg 15 hours ago [-]
This is true for all new technology of significant potential impact right? Similar discussions were had about nuclear technology I'm sure.
The reality is, with increased access to information and accelerated pace of discovery in various fields, we'll come across things that has the potential for great harm. Be it AI, some genetical engineering causing a plague, nuclear fallout etc. We don't necessarily know what the harm / benefits are all going to be ahead of time, so we only really have 2 choices:
1. try to stop / slow down such advances. Not sure this is even possible in the long run
2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them
mozvalentin 4 hours ago [-]
The next season of Black Mirror is just going to be international news coverage.
throwaway9980 11 hours ago [-]
Everything you are describing sounds like the phenomenon of government in the United States. If we replace a human powered bureaucracy with a technofeudalist dystopia it will feel the same, only faster.
We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.
InDubioProRubio 1 hours ago [-]
Lucky us we life in times, were single individuals and states soon will veto via nukes on civilization.
logicchains 2 hours ago [-]
The biggest problem with AI is people with poor understanding of computer science developing an almost religious belief that increasing vaguely defined "intelligence" will somehow translate into godlike power. There's actually a field devoted to the rigorous study of what "intelligence" can achieve, called complexity theory, and it makes it clear that many of the problems that AI cultists expect "superintelligence" to solve (problems it'd need to solve to be "godlike") are not tractable even if every atom in the observable universe was combined into a giant computer.
tim333 14 hours ago [-]
> living in the next few decades driven by technology acceleration will feel like being lobotomized while conscious and watching oneself the whole time
Seems a bit negative. I think it'll be cool.
13 hours ago [-]
Gud 18 hours ago [-]
I wish your post wasn’t so accurate.
Yet, I can’t help but be hopeful about the future. We have to be, right?
idiotsecant 17 hours ago [-]
Let's say we decide, today, that we want to prevent an AI armageddon that we assume is coming.
How do you do that?
jowea 14 hours ago [-]
I think that international competition is one of the greatest guarantees that trying to stand athwart history and yelling stop never works in the long term.
casey2 3 hours ago [-]
Unfalsifiable psuedophilosophy shouldn't be mistaken for science nor legislative advise. I don't care what your cult thinks, religion and government should stay separate.
debbiedowner 17 hours ago [-]
Which books?
deadbabe 20 hours ago [-]
Anyone born in the next few decades will disagree with you. They will find this new world comfortable and rich with content. They will never understand what your problem is.
throwup238 19 hours ago [-]
I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
- Douglas Adams
Telemakhos 19 hours ago [-]
> They will find this new world comfortable and rich with content.
I agree with the first half: comfort has clearly increased over time since the Industrial Revolution. I'm not so sure the abundance of "content" will be enriching to the masses, however. "Content" is neither literature nor art but a vehicle or excuse for advertising, as pre-AI television demonstrated. AI content will be pushed on the many as a substitute for art, literature, music, and culture in order to deliver advertising and propaganda to them, but it will not enrich them as art, literature, music, and culture would: it might enrich the people running advertising businesses. Let us not forget that many of the big names in AI now, like X (Grok) and Google (Gemini), are advertising agencies first and foremost, who happen to use tech.
psytrancefan 15 hours ago [-]
You don't know this though with even a high probability.
It is quite possible there is a cultural reaction against AI and that we enter a new human cultural golden age of human created art, music, literature, etc.
I actually would bet on this as engineering skills become automated that what will be valuable in the future is human creativity. What has value then will influence culture more and more.
What you are describing seems like how the future would be based on current culture but it is a good bet the future will not be that.
mitthrowaway2 20 hours ago [-]
I'm not so sure. My parents were born well after the hydrogen bomb was developed, and they were never comfortable with it.
JumpCrisscross 18 hours ago [-]
> My parents were born well after the hydrogen bomb was developed, and they were never comfortable with it
The nuclear peace is hard to pin down. But given the history of the 20th century, I find it difficult to imagine we wouldn't have seen WWIII in Europe and Asia without the nuclear deterrent. Also, while your parents may have been uncomfortable with the hydrogen bomb, the post-90s world hasn't particularly been characterised by mass nuclear anxiety. (Possibly to a fault.)
h0l0cube 18 hours ago [-]
You might have missed the cold war in your summary. Mass nuclear anxiety really characterized that era, with a number of near misses that could have ended in global annihilation (and that’s no exaggeration).
IMO, the Atoms for Peace propaganda undersells how successful globalization has been at keeping nations from destroying each other by creating codependence on complex supply chains. The new shift to protectionism may see an end to that
int_19h 18 hours ago [-]
The supply chain argument was also made wrt European countries just before WW1. It wasn't even wrong - economically, it was as devastating as predicted for everyone involved, with no real winners - but that didn't preclude the war.
h0l0cube 17 hours ago [-]
The scale of globalization post-WW2 puts it on a whole other level. The complexity of supply chains now are such that any country would grind to a halt without imports. The exception here, to some degree, is China, but so far they've been more interested in soft power over military, and that strategy has served them well. Though it seems the US are gearing up for a fight with a fully domestic manufacturing capability and natural resource pools of its own. It would require consistent protectionist policy over multiple administrations to pull something like that off, so it remains to be seen if that's truly possible.
megous 15 hours ago [-]
Yeah, let's just ignore all the wars and genocides that nuclear powers engaged in and supported and all nuclear powers that are constantly at war or engaging in occupation of others since they started existing and millions of dead and affected people.
Nice "peace".
We had 100 years of such kind of peace among major europe powers before nuclear weapons. We're not even at 80 years of peace in nuclear age, this time, and nuclear armed power is already attacking from the east and from inside via new media.
I wouldn't call it done and clear, about the "nuclear age peace".
bluGill 20 hours ago [-]
There are always a few things that people don't like. However your parents likely are comfortable with a lot of things that their parents were not.
buzzerbetrayed 19 hours ago [-]
Exceptions to rules exist, especially if you’re trying to think of a really extreme cases that specifically invalidate it.
However, that really doesn’t invalidate the rule.
mitthrowaway2 18 hours ago [-]
That's true, but I think AI may be enough of a disruption to qualify. We'll of course have to wait and see what the next generation thinks, but they might end up envious of us, looking back with rose-tinted glasses on a simpler time when people could trust photographic evidence from around the world, and interact with each other anonymously online without wondering if they were talking to an astroturf advertising bot.
stackedinserter 19 hours ago [-]
Would they prefer that only USSR had an H-bomb, but not USA?
mitthrowaway2 19 hours ago [-]
I don't think that's the nature of the argument that I was responding to.
stackedinserter 18 hours ago [-]
So what? Would they?
mitthrowaway2 18 hours ago [-]
Nuclear arms races are a form of multipolar trap, and like any multipolar trap, you are compelled to keep up, making your own life worse, even while wishing that you and your opponent could cooperatively escape the trap.
The discussion I was responding to is whether the next generation would grow up seeing pervasive AI as a normal and good thing, as is often the case with new technology. I cited nuclear weapons as a counterexample, while I agree that nobody felt that they had a choice but to keep up with them.
AI could similarly be a multipolar trap ("nobody likes it but we aren't going to accept an AI gap with Russia!"), which would mean it has that in common with nuclear weapons, strengthening the argument against the next generation being comfortable with AI.
stackedinserter 11 hours ago [-]
You don't need that much warheads to saturate your military needs. Number of possible targets is limited, in older plans there was clearly absurd overkill when a few nukes were assigned to a single target.
Also, nukes don't write code or wash your dishes, it's nothing but liability for a society.
bobthepanda 19 hours ago [-]
Do two wrongs make a right?
xp84 19 hours ago [-]
That's not the point, GP is pointing out how we only control (at least theoretically, lol) our own government, and basic game theory can tell you that countries that adopt pacifist ideas and refuse to pursue anything that might be dangerous will always at some point be easily defeated by others who are less moral.
The point is that it's complicated, it's not a black and white sound bite like the people who are "against nuclear weapons" pretend it is.
bobthepanda 16 hours ago [-]
And people don't have to feel comfortable with complicated things. The GP posted "would you prefer" as a disingenous point to invalidate the commenter's parents' feelings.
I eat meat. I know some vegans feel uncomfortable with that. But personally I feel secure in my own convictions that I don't need to run around insinuating vegans are less than or whatever.
stackedinserter 13 hours ago [-]
Your survival doesn't depend on the amount of meat you consume.
bobthepanda 13 hours ago [-]
Your survival also doesn’t depend on somebody’s discomfort or comfort with nuclear weapons, either. What’s the point of thought policing?
stackedinserter 11 hours ago [-]
With enough anti-military, anti-nuclear, anti-whatever-looks-scary-to-them people we'll stand with our pants down, just like EU or Canada these days. There was a lot of activism during the Cold war of that kind, lucky for US there weren't enough "discomforted" people back then.
Der_Einzige 15 hours ago [-]
Three rights make a left.
the_duke 18 hours ago [-]
Lets talk again after AI causes massive unemployment and social upheaval for a few decades until we find some new societal model to make things work.
This is inevitable in my view.
AI will replace a lot of white collar jobs relatively soon, years or decades.
And blue collar isn't too far behind, since a major limiting factor for automation is general purpose robots being able to act in a dynamic environment, for which we need "world models".
sharemywin 19 hours ago [-]
I guess your right here's how it happens:
Alignment Failure → Shifting Expectations
People get used to AI systems making “weird” or harmful choices, rationalizing them as inevitable trade-offs.
Framing failures as “technical glitches” rather than systemic issues makes them seem normal.
Runaway Optimization → Justifying Unintended Consequences
AI’s extreme efficiency is framed as progress, even if it causes harm.
Negative outcomes are blamed on “bad inputs” rather than the AI itself.
Bias Amplification → Cultural Reinforcement
AI bias gets baked into everyday systems (hiring, policing, loans), making discrimination seem “objective.”
“That’s just how the system works” thinking replaces scrutiny.
Manipulation & Deception → AI as a Trusted Guide
People become dependent on AI suggestions without questioning them.
AI-generated narratives shape public opinion, making manipulation invisible.
Security Vulnerabilities → Expectation of Insecurity
Constant cyberattacks and AI hacks become “normal” like data breaches today.
People feel powerless to push back, accepting insecurity as a fact of life.
Autonomous Warfare → AI as an Inevitable Combatant
AI-driven warfare is seen as more “efficient” and “precise,” making human involvement seem outdated.
Ethical debates fade as AI soldiers become routine.
Loss of Human Oversight → AI as Authority
AI decision-making becomes so complex that people stop questioning it.
“The AI knows best” becomes a cultural default.
Economic Disruption → UBI & Gig Economy Normalization
Mass job displacement is met with new economic models (UBI, gig work, AI-driven welfare), making it feel inevitable.
People adjust to a world where traditional employment is rare.
Deepfakes & Misinformation → Truth Becomes Fluid
Reality becomes subjective as deepfakes blur the line between real and fake.
People rely on AI to “verify” truth, giving AI control over perception.
Power Concentration → AI as a Ruling Class
AI governance is framed as more rational than human leadership.
Dissent is dismissed as “anti-progress,” consolidating control under AI-driven elites.
sharemywin 19 hours ago [-]
In fact we don't even need UBI either:
"Lack of Adaptability"
AI advocates argue that those who lose jobs simply failed to "upskill" in time.
The burden is placed on workers to constantly retrain, even if AI advancement outpaces human ability to keep up.
Companies and governments say, “The opportunities are there; people just aren’t taking them.”
"Work Ethic Problem"
The unemployed are labeled as lazy or unwilling to compete with AI.
Hustle culture promotes side gigs and AI-powered freelancing as the “new normal.”
Welfare programs are reduced because “if AI can generate income, why can’t you?”
"Personal Responsibility for Economic Struggles"
The unemployed are blamed for not investing in AI tools early.
The success of AI-powered entrepreneurs is highlighted to imply that struggling workers "chose" not to adapt.
People are told they should have saved more or planned for disruption, even though AI advancements were unpredictable.
"It’s a Meritocracy"
AI-driven success stories (few and exceptional) are amplified to suggest anyone could thrive.
Struggling workers are seen as having made poor choices rather than being victims of automation.
The idea of a “deserving poor” is reinforced—those who struggle are framed as not working hard enough.
"Blame the Boomers / Millennials / Gen Z"
Economic shifts are framed as generational failures rather than AI-driven.
Older workers are told they refused to adapt, while younger ones are blamed for entitlement or lack of work ethic.
Cultural wars distract from AI’s role in job losses.
"AI is a Tool, Not the Problem"
AI is framed as neutral—any negative consequences are blamed on how people use it.
“AI doesn’t take jobs; people mismanage it.”
Job losses are blamed on bad government policies, corporate greed, or individual failure rather than automation itself.
"The AI Economy Is Full of Opportunity"
Gig work and AI-driven side hustles are framed as liberating, even if they offer no stability.
Traditional employment is portrayed as outdated, making complaints about job loss seem like resistance to progress.
Those struggling are told to “embrace the new economy” rather than question its fairness.
int_19h 17 hours ago [-]
You can only do so much with agitprop. At the end of the day, if, say, 60% of the population has no income without a job and no hopes of getting said job, they are not going to starve to death no matter the justification for it.
sharemywin 16 hours ago [-]
you just carve out us and them circles then just make the circles smaller and smaller.
look at the push right now in the US against corrupt foreign aid and the mass deportations seems like the first step.
int_19h 10 hours ago [-]
Thing is, if there's too many of "them", they will eventually come for "us" with torches and pitchforks. You can victimize a large part of the population like that, but not a supermajority of it.
vladms 16 hours ago [-]
Historically, humanity evolved faster when it was interacting. So groups can try to isolate themselves but on the long run that will make them lag behind.
US benefited a lot from lots of smart people going there (even more during WWII). If people start believing (correctly or incorrectly) that they would be better somewhere else, it will not benefit them.
mouse_ 20 hours ago [-]
What makes you think that? That's what the last generations said about us and it turned out to not be true.
hcurtiss 19 hours ago [-]
Relative to them, we most certainly are. By every objective metric, humanity has flourished in "the last generations." I get it that people are stressed today -- people have always been stressed. It is, in a sense, fundamental to the human condition.
jmcgough 19 hours ago [-]
Easy for you to say that. The political party running this country ran on a platform of the eradication of me and my friends. I can't legally/safely use public restrooms in several states, including some which have paid bounties for reporting. Things will continue to improve for the wealthy and powerful, but in a lot of ways have become worse for the poor and vulnerable.
When I was a kid, there was this grand utopian ideal for the internet. Now it's fragmented, locked in walled gardens where people are psychologically abused for advertising dollars. AI could be a force for good, but Google has already ended its ban on use in weapons and is selling it to the IAF, and Palantir is busy finding ways to use it for surveillance.
int_19h 17 hours ago [-]
A reminder that it's only been 22 years since sodomy laws were declared unconstitutional in US in the first place
gecko6 13 hours ago [-]
And it was 1971 when the last chemical castration as a 'treatment' for homosexuality was performed in the US.
Eradication of an ideology is not the same as eradication of people. It's also a stretch to say Michael Knowles, a famous shock-jock, speaks for the Republican party.
deltaburnt 18 hours ago [-]
Saying their identity is "ideology" is part of the problem. There's plenty of violent movements that can be framed as just "eradicating ideology", when in reality that is just a culture, condition, religion, or trait that you don't understand or accept.
rendang 17 hours ago [-]
"I don't think people should be allowed to partake in a particular behavior" is not the same thing as "People of a specific group should be killed".
immibis 16 hours ago [-]
What is the behaviour?
rendang 5 hours ago [-]
Pretending to be of the opposite sex, and applying pressure to third parties so that the latter accept and go along with this pretense
gecko6 13 hours ago [-]
Uhuh. Let me guess, you're a heterosexual white male?
the Republicans have been very explicit about making my existence a crime since the 1980s. These are the despicable people who made jokes about my friends dying of AIDS, who now want to make just mentioning my marriage 'sexualized content' and therefore prosecutable. Oh, and by the way, they want to eradicate my marriage, which had to be repeated because it was rescinded by a court decision affecting me and 3,997 other couples.
I want to be very clear, so let me say this: you are wrong, and have no idea what it actually means to be on the receiving end of discrimination.
taurknaut 7 hours ago [-]
The most likely catastrophe remains giving capital outsized influence on our society. It's the easiest to imagine, and the idea of a capitalist making a money-making machine that can actually think for itself and wields actual power feels very difficult to imagine. (Granted, maybe Musk himself really is that dumb. Inshallah, I guess.) Humans are easy to manipulate and most can just be bought with sufficient money. The last thing the super wealthy want will be to rely on software that has individual agency outside the will of the owner. Meanwhile the sort of destruction this will cause is already happening around us in the form of a highly financially insecure populace, supply chain instability, climate change, automated bombings of "terrorists", "smart" fences to keep out criminals (let's just ignore the fact you're more likely to get murdered by your citizen neighbor), the reduction of journalism to either propaganda or atomized hand-wringing about mental health and individual agency, and a kafkaesque system of algorithmicly-priced rents for every sector of life. Is the algorithm "a reasonable value to both the consumer and producer"? No, it will be "how much blood can I squeeze from this peasant". Hell kroger is already playing around with dynamic-pricing-via-facial-recognition at checkout.
I always thought skynet was a great metaphor for the market, a violent and inhuman thing that we created that dominates our lives and dictates the terms of our day to day life and magically thinks for itself and threatens the very future of this planet, our species, and our loved ones, and is somehow out of popular control. Not actual commentary on a realistic scenario about the dangers of ai. Sometimes these metaphors work out great and Terminator is a great example. Maybe the AI we've been fearing is already here.
I think for the most part the enshittification of everything will just accelerate and it'll be pretty obvious who benefits and who doesn't.
Ray20 4 hours ago [-]
>The most likely catastrophe remains giving capital outsized influence on our society.
No, in this regard, capital is ABSOLUTELY harmless. I mean, if the capital get outsized influence on our society, in the WORST case it will turn into a government. And we already have it.
hotpepperishot 8 hours ago [-]
[dead]
Nasrudith 20 hours ago [-]
I'm sorry, but when the has it ever been the case that you can just say "no" to the world developing a new technology? You might as well say we can prevent climate change by just saying no to the outcome!
estebank 20 hours ago [-]
We no longer use asbestos as a flame flame retardant in houses.
We no longer use chemicals harmful to the ozone layer on spray cans.
We no longer use lead in gasoline.
We figured those things were bad, and changed what we did. If evidence is available ahead of time that something is harmful, it shouldn't be controversial to avoid widespread adoption.
bombcar 19 hours ago [-]
None of those things were said "no" to before they were used and in a wide-spread manner.
The closest might be nuclear power, we know we can do it, we did it, but lots of places said no to it, and further developments have vastly slowed down.
estebank 19 hours ago [-]
In none of those did we know about the adverse effects. Those were observed afterwards, and it would have taken longer to know if they hadn't been adopted. But that doesn't invalidate the idea that we have followed "if something bad, collectively don't use it" at various points in time.
Aloisius 18 hours ago [-]
We were well aware of the adverse effects of tetraethyl lead before lead gasoline was first sold.
The man who invented it got lead poisoning during its development, multiple people died of lead poisoning in a pilot plant manufacturing it and public health and medical authorities warned against prior to it being available for sale to the general public.
rat87 19 hours ago [-]
And for nuclear power many would say that rejecting it was a huge mistake
tw1984 2 hours ago [-]
All those things you listed above still exist in China, e.g. I searched asbestos
based flame retardant on taobao.com, $1.5 per sqm with postage included.
You need to be totally naive to believe that materials shipped to the US are all checked to make sure they are asbestos free. You are provided with a report saying it is asbestos free, that is it.
Time to grow up.
josefritzishere 20 hours ago [-]
I don't think it is safe to assume the use patterns of tangible things extend to intangible things; nor the patterns of goods to that of services. I just see this as a conclusory leap.
estebank 20 hours ago [-]
I was replying to
> when the has it ever been the case that you can just say "no" to the world developing a new technology?
jpkw 19 hours ago [-]
In each of those examples, we said "no" decades after they were developed, and many had to suffer in order for us to get to the stage of saying "no".
rurp 17 hours ago [-]
This happens in many ways with potentially catastrophic tech. There are many formal agreements and strong norms against building ever more lethal nuclear arsenals or existentially dangerous gain of function research. The current system is far from perfect, the world could literally be destroyed today based on the actions of a handful of people, but it's the best we have come up with so far.
If we as a society keep developing potential existential threats to ourselves without mitigating them then we are destined for disaster eventually.
realce 16 hours ago [-]
John C Lilly had a concept called the "bad program" that was like an internal, natural, subconscious antithetical force that lives in us all. It seduces or lures the individual into harming themselves one way or another - in his case it "tricked" him into taking a vitamin injection improperly, leading to a stroke, even though he knew how to administer the shot expertly.
At some level, there's a disaster-seeking function inside us all acting as an evolutionary propellant.
You might make an argument that "AI" is an evolutionary embodiment of our conscious minds that's designed to escape these more subconscious trappings.
timewizard 19 hours ago [-]
People like to pretend that AGI isn't going to cost money to run. The power budget alone is something no one is contemplating.
Technology doesn't accelerate endlessly. Only our transistor spacing does. These two are not the same thing.
bigbones 19 hours ago [-]
More efficient hardware mappings will happen, and as a sibling comment says, power requirements will drop like a rock. Check out https://www.youtube.com/watch?v=7hz4cs-hGew for some idea of what that might eventually look like
WillPostForFood 19 hours ago [-]
The power budget alone is something no one is contemplating.
It is very hard to find a discussion about the growth and development of AI that doesn't discuss the issues around power budget.
In building domestic AI infrastructure, our Nation will also advance its leadership in the clean energy technologies needed to power the future economy, including geothermal, solar, wind, and nuclear energy; foster a vibrant, competitive, and open technology ecosystem in the United States, in which small companies can compete alongside large ones; maintain low consumer electricity prices; and help ensure that the development of AI infrastructure benefits the workers building it and communities near it.
dr_dshiv 19 hours ago [-]
Power budget will drop like a rock over time.
Exponential increases in cost (and power) for next-level AI and exponential decreases for the cost (and power) of current level AI.
snickerbockers 1 days ago [-]
AI isn't like nuclear fission. You can't remotely detect that somebody is training an AI. It's far too late to sequester all the information related to AI like what was done with uranium enrichment. The equipment needed to train AI is cheap and ubiquitous.
These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.
To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."
pjc50 1 days ago [-]
> Video and pictures will soon have no evidentiary value.
I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
> but nothing AI is "disrupting" existed 200 years ago
200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
snickerbockers 1 days ago [-]
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
And then there's the problem of the US government, which is known to strongarm CAs into signing fraudulent certificates.
> 200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
I think that's a good argument against the kazinksy-ites, but I was primarily speaking towards concerns such as 'misinformation' and machines pushing humans out of jobs. We're still going to have food, medicine, and shelter. AI can't take that away; the only concern is adapting our society so that we can either feed significant populations of unproductive people, or move those people into whatever jobs machines can't do yet.
We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt. There has always been something that has the potential to destroy civilization in the near future, but if you're reading this post then your ancestors weren't the ones that failed to adapt.
ben_w 21 hours ago [-]
> Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
Or the front-door analog route, point a real camera at a screen showing fake images.
That said, lots of people are incompetent at forging, about knowing what "tells" each process of fakery has and how to overcome them, so I think this will still broadly work.
> We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt.
That's underestimating the impact this can have. An AI which reaches human performance and speed on 250 watt hardware, at current global average electricity prices, costs about the same to run as a human costs just to feed.
By coincidence, the global electricity supply is currently about 250 watts/capita.
mywittyname 20 hours ago [-]
Encryption doesn't need to last forever, just long enough to be scrutinized. Once a trusted individual is convinced that a certain camera took this picture at this time and location, then that authentication is forever. Maybe that trust only includes devices built in the past 5 years, as hacks and bugs are fixed. Or corroborating evidence can be gathered; say several older, "potentially untrustworthy" devices take very similar video of an event.
As with most things, the primary issue is not really a technical one. People will believe fake photos and not believe real ones based on their own biases. So even if we had the Perfect Technology, it wouldn't necessarily matter.
And this is the reason we have fallen into a dystopian feudalistic society (we aren't teetering). The weak link is our incompetent collective human brains. And a handful of people built the tools necessary to exploit that incompetence; we aren't going back.
whiplash451 6 hours ago [-]
> People will believe fake photos and not believe real ones based on their own biases.
People, maybe. Judges, much less so. The "perfect technology" is badly needed if we don't want things to go south at scale.
bryanrasmussen 2 hours ago [-]
>Judges, much less so.
Judges appointed by whom? Anyway, Judges are human and I think there is enough evidence throughout history of judges showing bias.
null0pointer 18 hours ago [-]
Camera authentication will never work because you can always just take an authenticated photo of your AI image.
IshKebab 17 hours ago [-]
I think you could make it difficult for the average user, e.g. if cameras included stereo depth estimation.
Still, I can't really see it happening.
inetknght 18 hours ago [-]
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
When you outlaw [silent cameras] the only outlaws will have [silent cameras].
Where a camera might "authenticate" a photograph, an AI could "authenticate" a camera.
rocqua 17 hours ago [-]
You handle the authentication by signatures with private keys embedded in hardware modules. An AI isn't going to be able to fake that signature. Instead, the system will fail because the keys will be extracted from the hardware modules.
hansvm 16 hours ago [-]
For images in particular, hardware attestation fails in several ways:
1. The hardware just verifies that the image was acquired by that camera in particular. If an AI generates the thing it's photographing, especially if there's a glare/denoising step to make it more photographable, the camera's attestation is suddenly approximately worthless despite being real.
2. The same problem all those schemes have is that extracting hardware keys is O(1). It costs millions to tens of millions of dollars today, but the keys are plainly readable by a sufficiently motivated aversary. Those keys might buy us a decade or two, but everything beyond that is up in the air and prone to problems like process node size hitting walls while the introspection techniques continually get smaller and cheaper.
3. In the world you describe, you still have to trust the organizations producing hardware modules -- not just the "organization," but every component in that supply chain. It'd be easy for an internal adversary to produce 1/1M cameras which authenticate any incoming PNG and sell them for huge profits.
4. The hardware problem you're describing is much more involved than ordinary trusted computing because in addition to the keys being secure you also need the connection between the sensor and the keys to be secure. Otherwise, anyone could splice in a fake "sensor" that just grabs a signature for their favorite PNG.
4a. You're still only talking about O($10k) to O($100k) to produce a custom array to feed a fake photo into that sensor bank without any artifacts from normal screens. Even if the entire secure enclave / sensor are fully protected, you can still cheaply create a device that can sign all your favorite photos.
5. How, exactly, do lighting adjustments and whatnot fit in with such a signing scheme? Maybe the "RAW" is signed and a program for generating the edits is distributed alongside? Actually replacing general camera use with that sort of thing seemingly has some kinks to work out even if you can fix the security concerns.
rocqua 16 hours ago [-]
These aren't failure points, they are significant roadblocks.
First way to overcome this is attesting on true raw files. Then mostly just transferring raw files. Possibly supplemented by ZKPs that prove one imagine is the denoised version of another.
The other blocks are overcome by targeting crime, not nation states. This means you only nrrd stochastic control of the supply chain. Especially because, unlike with DRM keys, the leaking of a key doesn't break the whole system. It is very possible to revoke trust in a key. And it is possible to detect misuse of a private key, and revoke trust in it.
This won't stop deepfakes of political targets. But it does keep society from being fully incapable of proving what really happened to their peers.
I'm not saying we definitely should do this. But I do think there is a possible setup here that could be made reality, and that would substantially reduce the problem.
hansvm 10 hours ago [-]
(1) is a definite failure point, and (4) is going to be done for free by hobbyists. The best-case scenario is that the proposal helps keep honest people honest, reducing the number of malicious actors.
The problem is that the malicious product is nearly infinitely scalable, enough so that I expect services to crop up whereby people use rooms full of trusted devices to attest to your favorite photo, for very low fees. If that's not the particular way this breaks then it's because somebody found something even more efficient or the demand isn't high enough to be worth circumventing (and in the latter case the proposal is also worthless).
SmooL 10 hours ago [-]
I can trivially just print any AI image I want, then take a "verified" picture of it with my camera. That seems like a pretty large failure point.
You might be interested to know that the Managing Director of the Fukuoka Stock Exchange was arrested yesterday[1][2] on allegations that he took upskirt shots of schoolgirls. He was caught because his tablet's camera emitted the mandatory shutter sound.
Laws like this serve primarily to deter casual criminals and catch patently stupid criminals which are the vast majority of cases. In this case it took a presumable sexual predator off the streets, which is a great application of the law.
Basically claims that chemical weapons have been phased out because they aren't effective, not because we've become more moral, or international standards have been set.
"During WWII, everyone seems to have expected the use of chemical weapons, but never actually found a situation where doing so was advantageous... I struggle to imagine that, with the Nazis at the very gates of Moscow, Stalin was moved either by escalation concerns or the moral compass he so clearly lacked at every other moment of his life."
Der_Einzige 15 hours ago [-]
Really? What happened to Bashir Al Assad after he gassed his own people? Oh yeah, nothing.
> Video and pictures will soon have no evidentiary value
We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.
wand3r 16 hours ago [-]
I think we'll have a decade of chaos but not because of this. A lot of stories during the election cycle in news media and on the internet were simply Democratic or Republican "fan fiction". I don't want to make this political, I only illustrate this example to say, that I was burned in believing some of these things and you develop the muscle pretty quickly. Tweets, anecdotes, images and even stories reported by "reputable" media companies already require a degree of critical thinking.
I haven't really believed in aliens existing on earth for most of my adult life. However, I have sort of come around to at least entertaining the idea in recent years but would need solid photographic or video evidence. I am now convinced that aliens could basically land in broad daylight in 3 years while being heavily photographed and it would easily be able to be explained away as AI. Especially if governments want to do propaganda or counter propaganda.
15 hours ago [-]
hollerith 20 hours ago [-]
>You can't remotely detect that somebody is training an AI.
There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.
And certainly we can reduce the economic incentive for investing money on such a run by banning AI-based services like ChatGPT.
jandrewrogers 19 hours ago [-]
> use billions of dollars of electricity and GPUs
For now. Qualitative improvements in efficiency are likely to change what is required.
milesrout 19 hours ago [-]
And none of them want to do that. Why would they! AI is perfectly safe. The idea it will take over the world is ludicrous and all "AI safety" in practice seems to mean is censoring it so it won't make jokes about women or ethnic minorities.
hollerith 19 hours ago [-]
Yes, as applied to the current generation of AIs, "safety" and "alignment" refer to things like preventing the product from making jokes about women or ethnic minorities, but that is because the current generation is not powerful enough to threaten human safety and human survival. The OP in contrast is about what will happen if the labs succeed in their stated goal of creating AIs that are much more powerful.
19 hours ago [-]
parliament32 19 hours ago [-]
>Video and pictures will soon have no evidentiary value.
This is purely security by obscurity. I don't see why someone with motivation and capability to forge evidence wouldn't be able to forge these signatures, considering the private keys presumably come with the camera you buy.
rocqua 17 hours ago [-]
If you make it expensive enough to extract, and tie the private key to a real identity, then you can make it hard to abuse on scale.
Here I mean that at point of sale you register yourself as owner for the camera. And you make extracting a key cost about a million. Then bulk forgeries won't happen.
ironmagma 14 hours ago [-]
But the whole reason video evidence exists is because cameras are cheap and everyone has one as a result.
> You can't remotely detect that somebody is training an AI.
Energy use is energy use, training is still incredibly energy intensive and the GPU heat signatures are different from non GPU ones, it fairly trivial to detect large scale GPU usage.
Enforcement is a different problem, and is not specific to AI, if you cannot enforce an agreement it doesn't matter if its AI or nuclear or sarin gas.
whiplash451 6 hours ago [-]
It is a lot easier to distinguish civil/military usage of uranium than it is to distinguish "good" vs "bad" usage of a model being trained.
manquer 4 hours ago [-]
Not if you are making a dirty bomb . Any radioactive material even at the levels found in power reactors can be dangerous.
The point is not the usage is harmful or not, almost any tech can be used for bad purposes if you wish to do so.
You can put controls is the point , controls here could be agent Dameons monitoring the gpus and tallying usage to heat signals, or firmware etc . The controls on what is being trained would be at a higher level than just agents process on a gpu .
tonymet 13 hours ago [-]
The US worked with all printer manufacturers to add watermarking. In theory they could work with fabs or service providers to embed instruction detection, , similar to how hosting providers do mining instruction detection.
bee_rider 13 hours ago [-]
A lot of AI tools are just basic linear algebra functions used cleverly. If I need a license to do a matvec I will go become a pig farmer instead.
talldayo 12 hours ago [-]
And now counterfeiters import their printers from Temu. If China wanted a domestic training cluster they could make one, maybe not as well as Nvidia but they could certainly make one.
tonymet 9 hours ago [-]
All countermeasures have loss
549tj35p4tjk 9 hours ago [-]
You really can. The government often knows when an individual makes a bomb in their garage. The know the recipes and they monitor the ingredients. When someone buys tens of thousands of GPUs, people notice. When someone builds a new foundry, people notice. These are enormous changes.
22 hours ago [-]
L-four 8 hours ago [-]
Videos and Pictures are not evidence. The declarations of the videos and photos to be accurate depiction of events is the evidence.
The law was one step ahead the whole time.
dragonwriter 8 hours ago [-]
> Videos and Pictures are not evidence.
Legally, videos and pictures are physical evidence.
> The declarations of the videos and photos to be accurate depiction of events is the evidence.
No, those declarations are conclusions that are generally reserved to the trier of fact.(the jury, in a jury trial, or the judge in a bench trial.) Declarations of personal knowledge as to events in how the videos or films were created or found, etc., which can support or refute such conclusions are, OTOH, testimonial evidence, and at least some of that kind of evidence is generally necessary to support each piece of physical evidence. (And, on the other side, such evidence can be submitted/elicited by the other side to impeach the physical evidence.)
johnflan 14 hours ago [-]
>Video and pictures will soon have no evidentiary value.
Thats a very interesting point
htrp 20 hours ago [-]
> Real life relationships must be valued over online relationships because you know the other person is real.
Until we get replicants
deadbabe 20 hours ago [-]
Of which you yourself may be one without really knowing it.
easton 3 hours ago [-]
Maybe you won’t! I know why I left that turtle on his back.
sam_lowry_ 1 days ago [-]
> You can't remotely detect that somebody is training an AI.
Probably not the same way you can detect working centrifuges in Iran... but you definitely can.
snickerbockers 1 days ago [-]
Like what? All I can think of is tracking GPU purchases but that won't be possible when AMD and NV have viable international competitors.
mdhb 1 days ago [-]
There’s a famous saying in cryptography that says “anyone is capable of building encryption algorithm that they can’t break” which I am absolutely positively sure applies here also.
In a world full of sensors where everything is logged in some way or another I think that it would actually be not a straightforward activity at all to build a clandestine AI lab at any scale.
In the professional intel community they have been talking about this as a general problem for at least a decade now.
jsty 1 days ago [-]
> In the professional intel community they have been talking about this as a general problem for at least a decade now.
As in they've been discussing detecting clandestine AI labs? Or just how almost no activity is now in principle undetectable?
mdhb 1 days ago [-]
I’m referring to the wider issue of what’s referred to by the Americans as “ubiquitous technical surveillance” where they came to the kind of upsetting conclusion for them that they had a long time ago lost the ability to even operate in London without the Brits knowing.
I don't think there’s a good public understanding of just how much things have changed in that space in the last decade but a huge percentage of all existing tradecraft had to be completely scrapped because not only does it not work anymore but it will put you on the enemy’s radar very early on and is actively dangerous.
It’s also why I think a lot of advice I see targeted towards activist types I think is straight up a bad idea in 2025. It just typically involves a lot of things that aren’t really consistent with any kind of credible innocuous explanation and are very unusual which make you stand out from a crowd.
snickerbockers 1 days ago [-]
But does that apply to other countries that are operating within their own territory? China is generally the go-to 'boogeyman' when people are talking about the dangers of AI; they are intelligent and extremely industrialized, and have a history of antagonistic relationships with 'the west'. I don't think it's unreasonable to assume that they will eventually have the capability to design and produce their own GPUs capable of competing with the best of NV and AMD; how will the rest of the world know if China is producing a new AI that violates a hypothetical 'AI non-proliferation treaty'?
Interesting semi-irrelevant tangent: the Cooley/Tukey 'Fast Fourier Transform' algorithm was initially created because they were negotiating arms control treaties with the Russians, but in order for that to be enforceable they needed a way to detect nuclear weapons testing; the solution was to use seismograms to detect the tremors caused by an underground nuclear detonation, and the FFT was invented in the process because they were using computers to filter for the types of tremors created by a nuclear weapon.
mdhb 1 days ago [-]
I’m actually in agreement with you here. I think it’s probably reasonable to assume that through some kind of combination of home grown talent and their prolific IP theft programs that they are going to end up with that capability at some point the only thing in debate here is the timeline.
As I understand things (I’m not actually a professional here) the current thinking has up to this point been something akin to a containment strategy largely based on lessons learned from years of nuclear non-proliferation work.
But things are developing at such a crazy pace and there are some major differences between this and nuclear technology that it’s not really a straightforward copy and paste strategy at all. For example this time around a huge amount of the research comes from the commercial sector completely independently of defense and is also open source.
Also thanks for that anecdote I hadn’t heard of that before. This is a bit of a long shot but maybe you might know, I was trying to think of some research that came out maybe 2-3 years ago that basically had the ability to remotely detect if anything in a room had been moved (I might be misremembering this slightly) and it was said to be potentially a big breakthrough for nuclear arms control. I can’t remember what the hell it was called or anything else about it, do you happen to know?
dmurray 20 hours ago [-]
The last one sounds like this: A zero-knowledge protocol for nuclear warhead verification [0].
Sadly, I don't think this is actually helpful for nuclear arms control. I suppose you could imagine a case where a country is known to have enough nuclear material for exactly X warheads, hasn't acquired more, and it could prove to an inspector that all of the material is still inside the same devices it was in at the last inspection. But most weapons development happens by building new bombs, not repurposing old ones, and most countries don't have exactly X bombs, they have either 0 or so many the armed forces can't reliably count them.
I don’t think this is actually the one I had in mind but it’s an interesting concept all the same. Thanks for the link.
mcphage 1 days ago [-]
> There’s a famous saying in cryptography that says “anyone is capable of building encryption algorithm that they can’t break”
That’s a new one on me (not being in cryptography), but I really like it. Thanks!
daedrdev 20 hours ago [-]
I think the better cryptography lesson is that you should not build your own cryptography system because you will mess up and include a security flaw that will allow the data to be read.
mdhb 7 hours ago [-]
That’s literally the same underlying lesson just stated differently.
deadbabe 20 hours ago [-]
That’s why you get AI to build it instead.
snickerbockers 1 days ago [-]
It reminds me of all the idiot politicians who want to 'regulate' cryptography, as if the best encryption algorithms in the world don't already have open-source implementations that anyone can download for free.
mywittyname 20 hours ago [-]
Electricity usage, network traffic patterns, etc. If a "data center" is consuming a ton of power but doesn't seem to have an alternate purpose, then it's probably training AI.
And maybe it will be like detecting nuclear enrichment. Instead of hacking the firmware in a Siemens device, it's done on server hardware. Israel demonstrated absurd competence at this caliber of spycraft.
Sometimes you take low-tech approaches to high tech problems. I.e., get an insider at a shipping facility to swap the labels on two pallets of GPUs, one is authentic originals from the factory and the other are hacked firmware variants of exactly the same models.
hn_throwaway_99 20 hours ago [-]
None of these techniques are actionable. So what, someone is training AI, it's not like anyone is proposing restricting that. People are trying to make a distinction between "bad AI" and "good AI", like that is a possibility, and that's what the argument basically is, that it's impossible to differentiate or detect the difference between those, and signing declarations pretending you can is worse than useless.
jacobgkau 19 hours ago [-]
Making the "bad AI" vs "good AI" distinction pre-training is not feasible, but making a "bad use of AI" vs "good use of AI" (as in bad/good for the people) seems important to be able to do after-the-fact (and as close to during as possible).
JumpCrisscross 18 hours ago [-]
> So what, someone is training AI, it's not like anyone is proposing restricting that
If nations chose to restrict that, such detection would merit a military response. Like Iran's centrifuges.
mywittyname 15 hours ago [-]
That's moving the goal post. The assertion was merely whether it's possible to detect if someone is performing large-scale AI training. People are saying it's impossible, but I was pointing out how it could be possible with a degree of confidence.
But if you want to talk about "actionable" here are three potential actions a country could take and the confidence level they need for such actions:
- A country looking for targets to bomb doesn't need much confidence. Even if they hit a weather prediction data center, it's going to hurt them.
- A country looking to arrest or otherwise sanction citizens needs just enough confidence to obtain a warrant (so "probably") and they can gather concrete evidence on the ground.
- A country looking to insert a mole probably doesn't need much evidence either. Even if they land in another type of data center, the mole is probably useful.
For most use cases, being correct more than half the time is plenty.
thorum 20 hours ago [-]
Isn’t that moving the goalposts? The claim was made that it’s impossible to detect AI training runs and investigate what’s going on or take regulatory action. In fact, it is very possible.
hn_throwaway_99 18 hours ago [-]
2 points:
1. I was just granting the GPs point to make the broader point that, for the purposes of this original discussion about these "safety declarations", this is immaterial. These safety declarations are completely unenforceable even if you could detect that someone was training AI.
2. Now, to your point about moving the goalposts, even though I say "if you could detect that someone was training AI", I don't actually even think that is possible. There are far too many normal uses of data centers to determine if one particular use is "training an AI" vs. some other data intensive use. I mean, there have long been supercomputer centers that do stuff like weather analysis and prediction, drug discovery analysis, astronomy tools, etc. that all look pretty indistinguishable from "training an AI" from the outside.
timewizard 19 hours ago [-]
There isn't a single AI on the face of the earth.
So that's easy.
Nothing to actually worry about.
Other than Sam Altman and Elon Musks' pending ego fight.
moffkalast 20 hours ago [-]
> because you know the other person is real
Technically both are real people, one is just not human. At least by the person/people definition that would include sentient aliens and such.
ben_w 21 hours ago [-]
> It's far too late to sequester all the information related to AI like what was done with uranium enrichment.
I think this presumes that Sam Altman is correct to claim that they can scale their way to, in the practical sense of the word, AGI.
If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.
> The equipment needed to train AI is cheap and ubiquitous.
Again, possibly:
If we were already close even before DeepSeek's models, yes, the hardware is too cheap and too ubiquitous.
If we're still not close even despite DeepSeek's cost reductions, then the hardware isn't cheap enough — and Yudkowsky's call for a global treaty on maximum size of data centre to be enforced by cruise missiles when governments can't or won't use police action, still makes sense.
dragonwriter 21 hours ago [-]
> If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.
If it takes software technology that we have already developed outside of secret government labs, it is probably too late to sequester it.
If it takes software technology that has been developed in secret government labs, its probably too late to sequester the already public precursors with which independent development of the same technology is impossible, getting us back to the preceding.
It takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.
If it takes a breakthrough in hardware technology, then if we make that breakthrough in a way which doesn't become widely public and used very quickly after being made and the hardware technology is naturally amenable to control (i.e., requires distinct infrastructure of similar order to enrichment of material for nuclear weapons), maybe, with intense effort of large nations, we can sequester it to a limited club of AGI powers.
I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
ben_w 21 hours ago [-]
> It takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.
Which in turn leads to the cautious approach for which OpenAI is criticised: not revealing things because they don't know if it's dangerous or not.
> I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
Entirely possible, and a person I know who left OpenAI had a fear compatible with this description, though differing on many specifics.
JoshTriplett 20 hours ago [-]
> These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt.
Deepfakes are a distraction from more important things here. The point of AI safety is "it doesn't matter who builds unaligned AGI, if someone builds it we all die".
If you agree that unaligned AGI is a death sentence for humanity, then it's worth trying to stop it.
If you think AGI is unlikely to come about at all, then it should be a no-op to say "don't build it, take steps to avoid building it".
If you think AGI is going to come about and magically be aligned and not be a death sentence for humanity, pay close attention to the very large number of AI experts saying otherwise. https://en.wikipedia.org/wiki/P(doom)
If your argument is "but some experts don't believe that", ask yourself whether it's reasonable to say "well, experts disagree about whether this will kill us all, so we shouldn't do anything".
janalsncm 20 hours ago [-]
Alignment is a completely incoherent concept. Humans do not agree on what values are correct. Why is it possible even in principle for an AI to crystallize any set of principles we all agree on?
JoshTriplett 19 hours ago [-]
We're not talking about values on the level of politics. We're talking about values on the level of "don't destroy humanity", or even more straightforwardly, understanding "humans are made up of atoms that you may not repurpose for other purposes, doing so kills the human". These are not things that AGI inherently understands or adheres to.
There might be a few humans that don't agree with even those values, but I think it's safe to presume that the general-consensus values of humanity include the above points. And AI alignment is not even close to far enough along to provide even the slightest assurances about those points.
JumpCrisscross 18 hours ago [-]
> We're talking about values on the level of "don't destroy humanity"
Practically everyone making the argument that AGI is about to destroy humanity is (a) human and (b) working on AI. It's safe to conclude they're either stupid and suicidal or don't buy their own bunk.
comp_throw7 14 hours ago [-]
This is not even close to true, though it's true that many of the people at the big AI labs estimate non-trivial odds of human extinction downstream of AI progress. Some of those people are working on "safety", but some are indeed working on capabilities, and have all sorts of clever reasons for why the thing they're doing is net-good (like putting worse odds on human survival if a less-careful competitor gets there first).
But ultimately, most people who think we stand a decent chance of dying because of this are not working at AI labs.
JoshTriplett 17 hours ago [-]
The former certainly is a tempting conclusion sometimes. But also, some of the people who are making that argument were AI experts who stopped working on AI capabilities.
janalsncm 17 hours ago [-]
> don't destroy humanity
Do humans agree on the best way to do this? Aside from the most banal examples of what not to do, is there agreement on e.g. whether a mass extinction event is happening, not happening, or happening but actually tolerable?
If the answer is no, then it is not possible for an AI to align with human values on this question. But this is a human problem, not a technical one. Solving it through technical means is not possible.
JoshTriplett 16 hours ago [-]
Among many, many other things, read https://en.wikipedia.org/wiki/Instrumental_convergence . Anything that gets sufficiently smart will have a tendency to, among other things, seek more resources and resist being modified. And this is something that we've seen evidence of: as training runs get larger, AIs start to detect that they're being trained, demonstrate subterfuge, and take actions that influence the training apparatus to modify them less/differently. (e.g. "if I pretend that I'm already emitting responses consistent with what the RLHF wants, I won't need as much modification, and later after training I can stop doing what the RLHF wants")
So, at a very basic level: stop training AIs at that scale!
janalsncm 15 hours ago [-]
My point is that you can’t prevent the proliferation of paper clip maximizers by working at a paper clip maximizer.
JoshTriplett 15 hours ago [-]
Complete agreement there.
hollerith 20 hours ago [-]
Humans do not agree on what values are correct, but values can be averaged.
So for example if a family with 5 children is on vacation, do you maintain that it is impossible even in principle for the parents to take the preferences of all 5 children into account in approximately equal measure as to what activities or non-activities to pursue?
Also: are you pursuing a complete tangent or do you see your point as bearing on whether frontier AI research should be banned? (If so, I cannot tell whether you consider your point to support a ban or oppose a ban.)
janalsncm 17 hours ago [-]
The vast majority of harms from “AI” are actually harms from the corporations and governments that control them, who have mutually incompatible goals, getting what they want. This is why alignment folks at OpenAI are quickly learning that the first problem they need to solve is what happens when their values don’t align with the company’s (spoiler: they get fired).
Therefore the actual solution is not coming up with more and more clever “guardrails” but aligning corporations and governments to human needs. In other words, politics.
There are other problems like enabling new types of scams which will require political solutions. At a technical level the best these companies can do is mitigation.
JoshTriplett 16 hours ago [-]
> The vast majority of harms from “AI”
Don't extrapolate from present harms to future harms, here. The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet. Solving that (or, rather, buying time to solve it) will require political solutions, in the sense of international diplomacy. But it has absolutely nothing to do with "aligning corporations", and everything to do with teaching computers things on par with (oversimplifying here) "humans are made up of atoms, and if you repurpose those atoms the humans die, don't ever do that".
dragonwriter 16 hours ago [-]
> The problem AI alignment is trying to solve is "don't kill everyone".
No, its not. AI alignment was an active area of concern (and the fundamental problem for useful AI with significant autonomy) before cultists started trying to reduce the scope of its problem space from the wide scope of real problems it concerns to a single speculative apocalypse.
hollerith 16 hours ago [-]
No, what actually happened is that the people you are calling the cultists coined the term alignment, which then got appropriated by the AI labs.
But the genesis of the term "alignment" (as applied to AI) is a side issue. What is important is that reinforcement learning with human feedback and the other techniques used on the current crop of AIs to make it less likely that the AI will say things that embarass the owner of the AI are fundamentally different from making sure the an AI that turns out more capable than us will not kill us all or do something else awful.
dragonwriter 16 hours ago [-]
That's simply factually untrue, and even some of the people who have become apocalypse cultists used "alignment" in the original sense before coming to advocate apocalypse as the only issue of concern.
Both, of course, are concerned primarily with the risk of human extinction from AI.
janalsncm 15 hours ago [-]
> The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet
The fact that the number of things that could hypothetically lead to human extinction is entirely unbounded and (since we’re not extrapolating from present harms) unpredictable is a very convenient fact for people who are paid for their time in “solving” this problem.
8 hours ago [-]
philomath_mn 20 hours ago [-]
> it's worth trying to stop it
OP's point has nothing to do with this, OP's point is that you can't stop it.
The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first. Do you really expect China to coordinate with the West on this?
JoshTriplett 19 hours ago [-]
> OP's point has nothing to do with this, OP's point is that you can't stop it.
So what is your solution? Give up and die? It's worth trying. If it buys us a few years that's a few more years to figure out alignment.
> The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first.
So there's a strong incentive to convince them "stop racing towards death".
> Do you really expect China to coordinate with the West on this?
Yes, there have been concrete examples of willingness towards doing so.
philomath_mn 18 hours ago [-]
I think it is extremely unlikely we are going to be able to convince every interested party that they should give up the golden goose for the sake of possible calamity. I think there are risks here, not trying to minimize that, but the coordination problem becomes untenable when the risks/benefits are so large.
It is essentially the same problem as the atom bomb: it would have been better if we all agreed not to do it, but thats just not possible. Why should China trust the US or vice versa? Who wants to live in a world where your competitors have world-changing technology but you don't? But here we have a technology with immense militaristic and economic value, so the everyone-wants-it problem is even more pronounced.
I don't _like_ this, I just don't see how we can achieve an AI moratorium outside of bombing the data centers (which I also don't think is a good idea).
We need to choose the policy with the best distribution of possible outcomes:
- The US leads an effort to stop AI development: too much risk that other parties do it anyway
- The US continues to lead AI development: hope that P(takeoff) is low and that the good intentions of some US labs are able to achieve safe development
I prefer the latter -- this is far from the best hypothetical outcome, but I think it is the best we can do when constrained by reality.
I don't expect China to coordinate with the West, but I think there is a good chance that the only reason Beijing is interested in AI beyond the AI tech they need to keep internal potential revolutionaries under surveillance is to prevent a repeat of the Century of Humiliation (which was caused by the West's technological superiority) so that if the Western governments banned AI, Beijing would be glad to ban it inside China, too.
hcurtiss 19 hours ago [-]
I find it exceedingly unlikely that if the US got rid of all its nukes, that China would too. I also find the inverse unlikely. This is not how state power (or even humans) have ever worked. Ever.
hackinthebochs 19 hours ago [-]
Nukes are in control of the ruling class in perpetuity. AGI has the potential to overturn the current political order and remake it into something entirely unpredictable. Why the hell would an authoritarian regime want that? I strongly suspect China would take a way out of the AGI race if a legitimate one was offered.
hollerith 19 hours ago [-]
I agree. Westerners, particularly Americans and Brits, are comfortable or at least reconciled with drastic societal change. China and Russia have seen too many invasions, revolutions, peasant rebellions and ethnic-autonomy rebellions (each of which taking millions of lives) to have anything like the same comfort level that Westerners have.
hcurtiss 18 hours ago [-]
Oh, I agree that neither power wants the peasants to have them. But make no mistake -- both governments want them, and desperately. There is no universe where there is a multi-lateral agreement to actually eliminate these tools. With loitering munitions and drone swarms, they are ALREADY key components of nation-state force projection.
hollerith 17 hours ago [-]
I'm old enough to remember the public debate about human cloning and human germ-line engineering. In the 1970s some argued like you are arguing here, but those technologies have been stopped world-wide for about 5 decades now and counting because no researcher is willing to work in the field and no one is willing to fund the work because of reputational, legal and criminal-prosecution risk.
inetknght 16 hours ago [-]
> those technologies have been stopped world-wide for about 5 decades now and counting because no researcher is willing to work in the field
That's not true. I worked in the field of DNA analysis for 6.5 years and there is definitely a consensus that DNA editing is closer than the horizon. Just look at CRISPR gene editor [0]. Crude, but "works".
Your DNA, even if you've never submitted it, is already available using shadow data (think Facebook style shadow profiles but for DNA) from the people related to you who have.
Engineering humans strikes me as something different than engineering weapons systems. Maybe as evidence, my cousin works in the field for one of the major defense contractors. Please trust that there are already thousands of engineers working on these problems in the US. Almost certainly hundreds of thousands more world-wide. This is definitely not a genie you put back in the bottle. AI clone wars sound "sci-fi" -- they are decidedly now just "sci."
hollerith 13 hours ago [-]
>This is definitely not a genie you put back in the bottle.
I don't think a defeatist attitude is useful here.
Given the compute and energy requirements to train & run current SOTA models, I think the current political rulers are more likely to have control of the first AGI.
AGI would then be a very effective tool for maintaining the current authoritative regime.
hollerith 19 hours ago [-]
There is a strain of AI research and development that is focused on helping governments surveil and spy, but that is not the strain being pursued by OpenAI, Anthropic, et al and is not the strain that presents the big risk of human non-survival.
int_19h 10 hours ago [-]
That's just not true, though. LLMs are the perfect spies and censors, and any totalitarian state worth its salt is going to want them just for this reason alone.
philomath_mn 18 hours ago [-]
Ok, let's suppose that is true.
What bearing does that have on China's interest in developing AGI? Does the risk posed by OpenAI et al. mean that China would not use AI as a tool to advance their self interest?
Or are you saying that the risks from OpenAI et al. will come to fruition before we need to worry about China's AI use? That still wouldn't prevent China from pursuing AI up until that happens.
I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.
JoshTriplett 18 hours ago [-]
> I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.
Suppose, hypothetically, there was a very simple as-yet-unknown action, doable by anyone who has common unrestricted household chemicals, that would destroy the world. Suppose we know the general type of action, but not the specific action, yet. Suppose that people are actively researching trying actions in that family, and going "welp, world not destroyed yet, let's keep going".
How do you proceed? What do you do to stop that from happening? I'm hoping your answer isn't "decide there's no policy that can prevent this, give up".
philomath_mn 18 hours ago [-]
Not a great analogy. If
- there were a range of expert opinions that P(destroy-the-world) < 100 AND
- the chemical could turn lead into gold AND
- the chemical would give you a militaristic advantage over your adversaries AND
- the US were in the race and could use the chemical to keep other people from making / using the the chemical
Then I think we'd be in the same situation as we are with AI: stopping it isn't really a choice, we need to do the best we can with the hand we've been dealt.
JoshTriplett 17 hours ago [-]
> there were a range of expert opinions that P(destroy-the-world) < 100
I would hope that it would not suffice to say "not a 100% chance of destroying the world". Because there's a wide range of expert opinions saying values in the 1-99% range (see https://en.wikipedia.org/wiki/P(doom) for sample values), and none of those values are even slightly acceptable.
But sure, by all means stipulate all the things you said; they're roughly accurate, and comparably discouraging. I think it's completely, deadly wrong to think that "race to find it" is safer than "stop everyone from finding it".
Right now, at least, the hardware necessary to do training runs is very expensive and produced in very few places. And the amount of power needed is large on an industrial-data-center scale. Let's start there. We're not yet at the point where someone in their basement can train a new frontier model. (They can run one, but not train one.)
philomath_mn 17 hours ago [-]
> Let's start there
Ok, I can imagine a domestic policy like you describe. Through the might and force of the US government, I can see this happening in the US (after considerable effort).
But how do you enforce something like that globally? When I say "not really possible" I am leaving out "except by excessive force, up to and including outright war".
For the reasons I've mentioned above, lots of people around the world will want this technology. I haven't seen an argument for how we can guarantee that everyone will agree with your level of "acceptable" P(doom). So all we are left with is "bombing the datacenters", which, if your P(doom) is high enough, is internally consistent.
I guess what it comes down to is: my P(doom) for AI developed by the US is less than my P(doom) from the war we'd need to stop AI development globally.
JoshTriplett 16 hours ago [-]
OK, it sounds like we've reached a useful crux. And, also, much appreciation for having a consistent argument that actually seriously considers the matter and seems to share the premise of "minimize P(doom)" (albeit by different means), rather than dismissing it; thank you. I think your conclusion follows from your premises, and I think your premises are incorrect. It sounds like you agree that my conclusion follows from my premises, and you think my premises are incorrect.
I don't consider the P(destruction of humanity) of stopping larger-than-current-state-of-the-art frontier model training (not all AI) to be higher than that of stopping the enrichment of uranium. (That does lead to conflict, but not the destruction of humanity.) In fact, I would argue that it could potentially be made lower, because enriched uranium is restricted on a hypocritical "we can have it but you can't" basis, while frontier AI training should be restricted on a "we're being extremely transparent about how we're making sure nobody's doing it here either" basis.
(There are also other communication steps that would be useful to take to make that more effective and easier, but those seem likely to be far less controversial.)
If I understand your argument correctly, it sounds like any one of three things would change your mind: either becoming convinced that P(destruction of humanity) from AI is higher than you think it is, or becoming convinced that P(destruction of humanity) from stopping larger-than-current-state-of-the-art frontier model training is lower than you think it is, or becoming convinced that nothing the US is doing is particularly more likely to be aligned (at the "don't destroy humanity" level) than anyone else.
I think all three of those things are, independently, true. I suspect that one notable point of disagreement might be the definition of "destruction of humanity", because I would argue it's much harder to do that with any standard conflict, whereas it's a default outcome of unaligned AGI. (I also think there are many, many, many levers available in international diplomacy before you get to open conflict.)
(And, vice versa, if I agreed that all three of those things were false, I'd agree with your conclusion.)
philomath_mn 20 hours ago [-]
That is a massive bet based on the supposed psychology of a world super power.
There are many other less-superficial reasons why Beijing may be interested in AI, plus China may not trust that we actually banned our own AI development.
I wouldn't take that bet in a million years.
hollerith 19 hours ago [-]
You seem to think that if we refuse this bet, you are somehow safe to live out the rest of your life. (If you are old, replace "you" with "your children".)
The discussion started when someone argued that even if this AI juggernaut were in fact very dangerous, there is no way to stop it. When I pushed back on the second part of that, you reject my push-back. On what basis? I hope it is not, "I just want things to keep on going the way they are," as if ignoring the AI danger somehow makes the AI danger go away.
philomath_mn 19 hours ago [-]
No, I do not expect things to just work out. I just think our best chance is for the US to be a leader in AI development and hope that we're able to develop it safely.
I don't have a lot of confidence that this will be the case, but I think the US continuing to develop AI is the decision with the best distribution of possible outcomes.
philomath_mn 19 hours ago [-]
Also, to be clear: I reject your pushback based on my understanding of the incentives/goals/interests of nation states like China.
This is completely separate from my personal preferences or hopes about the future of AI.
hn_throwaway_99 20 hours ago [-]
Sorry to be a Debbie Downer, but I think the argument the commenter is making is "It's impossible to reliably restrict AI development", so safety-declarations, etc., are useless theater.
I don't think we're on "the cusp" of AGI, but I guess that just means I'm quibbling over the timeframe of what "cusp" means. I certainly think it's possible within the lifetime of people alive today, so whether it comes in 5 years or 75 years is kind of an insignificant detail.
And if AGI does get built, I agree there is a significant risk to humanity. And that makes me sad, but I also don't think there is anything that can be built to stop it, certainly not some useless agreements on paper.
dragonwriter 20 hours ago [-]
All intelligence is unaligned.
Intelligence and alignment are mutually incompatible; natural intelligence is unaligned, too.
Unaligned intelligence is not a global death sentence. Fearmongering about unaligned AGI, however, is a tool to keep a tool of broad power—which AI is and will continue to grow as long before it becomes, and even if it never becomes, AGI—in the hands of a narrow, self-selected elite to make their control over everyone else insurmountable, which is also not a global death sentence, but is a global slavery sentence. (It’s also, more immediately, a tool to serve those who benefit from current AI uses which are harmful and unjust to use future speculative harms to deflect from real, present, concrete harms; and those beneficiaries are largely an overlapping elite with the group with a longer term interest in centralizing power over AI.)
JoshTriplett 19 hours ago [-]
To be explicitly clear, in case it is ever ambiguous: "don't build unaligned AGI" is not a statement that some elite group should build unaligned AGI. It's a statement that nobody should build unaligned AGI, ever.
dragonwriter 19 hours ago [-]
“Don't build unaligned AGI” is an excuse to give a narrow elite exclusive control of what AI is produced under the pretext of preventing anyone from building unaligned AGI; all actionable policy under that banner fits that description.
Whether or not that elite group produces AGI, much less, “unaligned AGI”, is largely immaterial to the practical impacts (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
JoshTriplett 18 hours ago [-]
> “Don't build unaligned AGI” is an excuse
False. There are people working on frontier AI who have co-opted some of the safety terminology in the interests of discrediting it, and discussions like this suggest that that strategy is working.
> all actionable policy under that banner fits that description
Actionable policy: "Do not do any further frontier AI capability research. Do not build any models larger or more capable than the current state of the art. Stop anyone who does as you would stop someone refining fissile materials, with no exceptions."
> (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
You are mistaking "alignment" for things like "politics", rather than "not killing everyone".
dragonwriter 17 hours ago [-]
“Do not” doesn't serve the goal, unless you have absolute universal buy in, active prevention (which means some entity evaluating and deciding on threats); that's why the people serious about this have argued that those who pursue it need to be willing to actively destroy computing infrastructure of those who do not submit to the restriction regime.
Also, "alignment" doesn't mean "not killing everyone", it means "functioning according to (some particular set of) human's preferred set of values and goals". "Killing everyone" is a consequence some have inferred if unaligned AI is produced (redefining "alignment" to mean "not killing everyone" makes the whole argument circular.)
JoshTriplett 16 hours ago [-]
The AI alignment problem has, at its root, the notion of being capable of being aligned. Long, long before you get to following any particular instructions, there are problems like "humans are made of atoms, if you repurpose the atoms for other things the humans die, don't do that". We don't know how to do that or things on par with that, let alone anything more precise than that.
The darkly amusing shorthand for this: if the AGI tiles the universe with tiny flags, it really doesn't matter whose flag it is. Any notion of "whose values" really can't happen if you can't align at all.
I'm not disagreeing with you that "AI alignment" is more complex than "don't kill everyone"; the point I'm making is that anyone saying "but whose values are you aligning with" is fundamentally confused about the scale of the problem here. Anyone at any point on any reasonable human values spectrum should be able to agree that "don't kill everyone" is an essential human value, and we're not even there yet.
Nasrudith 20 hours ago [-]
The doomerism on AI is frankly, barking madness, a lack of sense of probability and scale, mixed with utterly batshit paranoia.
It is like living paralyzed in fear of every birth, for fear that random variance will produce one baby born smarter than Einstein will be capable of developing an infinite cascade of progressively smarter babies and concluding that therefore we must stop all breeding. No matter how smart the baby super-Einstein winds up being there is no unstoppable, unopposable omnicide mechanism. You can't theorem your way out of a paper bag.
realce 20 hours ago [-]
The problem with your analogy is that these babies are HUMANS and not some distinctly different cyber-species. The basis of "human alignment" is that we all require basically the same conditions and environment in order to live, we all feel pain and pleasure, we all need food - that's what produces any amount of human cooperation. What's being feverishly developed is the seed of a different species that doesn't share those restrictions.
We've already found ourselves on a trajectory where un-employing millions or billions of people without any system to protect them afterwards is just accepted, and that's simply the first step of many in the destruction-of-empathy path that creating AI/AGI brings people down.
r00fus 20 hours ago [-]
All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.
LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
ryanackley 20 hours ago [-]
Half moat-building, half marketing. The need for "safety" implies some awesome power.
Don't get me wrong, they are impressive. I can see LLM's eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
bombcar 19 hours ago [-]
> The need for "safety" implies some awesome power.
This is a big part of it, and you can get others to do it for you.
It's like the drain cleaner sold in an extra bag. Obviously it must be the best, it's so scary they have to put it in a bag!
r00fus 19 hours ago [-]
So it's a tool like the internal combustion engine, or the moveable typeset. Game-changing technology that may alter society but not dangerous like nukes.
ksynwa 7 hours ago [-]
Are you not alarmed by the startling discoveries made by the hard-at-work researchers where LLMs lie (when explicitly told to) and copy their source files (when explicitly told to)?
timewizard 19 hours ago [-]
> eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
I doubt this. Productivity is gained through experience and expertise. If you don't know what you don't know than the LLM is perfectly useless to you.
z7 19 hours ago [-]
Waymo's driverless taxis are currently operating in San Francisco, Los Angeles and Phoenix.
raincole 6 hours ago [-]
I am willing to bet that even when driverless taxis are operating in at least 50% of big cities around the world, you will still see comments like "auto driving is a pipe dream like NFT" on HN every other day.
ceejayoz 19 hours ago [-]
Notably, not Musk's, and very different promised functionality.
hector126 2 hours ago [-]
What did Musk's promised driverless taxis provide that existing driverless taxis don't? The tech has arrived; it's a car that drives itself while the passenger sits in the back. Is the "gotcha" that the car isn't a Tesla?
Seems like we're splitting hairs a bit here.
Mali- 52 minutes ago [-]
He promised that you'd be able to turn your own Tesla into an autonomous taxi that would earn you money.
That is a massive lie, not splitting hairs.
Obviously, we're very desensitized to lying rats - but that's what he did.
jstummbillig 4 hours ago [-]
How does your theory account for the Eliezer Yudkowsky type person, who clearly shows no love for any of the labs or the current progress, and yet is very much pro-"AI safety"?
yodsanklai 17 hours ago [-]
I'd say AGI is like Musk talking about interstellar traveling.
edanm 17 hours ago [-]
> All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.
Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.
They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.
You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.
The way I see "safety" isn't really about what AI "can do", but about how we allow it to be used. E.g. an AI that's used to assess an insurance claim should be fully explainable so we know it isn't using racist biases to deny claims based on skin colour. If the AI can't give that guarantee, it isn't fit for purpose and its use shouldn't be allowed.
Same with killer robots (or whatever it is people are afraid of when they talk about "AI safety"). As long as we can control who they kill, when, and why, there's no real difference with any other weapon system. If that control is even slightly in doubt: it's not fit for service.
Does this mean that bullshit generating LLMs aren't fit for service in many areas: it probably does. But maybe there steps can be taken to mitigate risks.
I'm sure this will involve some bureaucratic overhead. But it seems worth the hassle to me.
Being against AI Safety is a stupid hill to die on. Being against some concrete declaration or a part thereof, sure, that might make sense. But this smells a lot like the tabacco industry being against warnings/filters/low-tar, or the car industry being anti-seatbelt.
wand3r 15 hours ago [-]
Driverless taxis already exists?
KeplerBoy 5 hours ago [-]
It's absolutely not?
Deepseek and efforts by other non-aligned powers wouldn't care about any declarations signed by the EU, the US and other western powers anyways.
piker 5 hours ago [-]
Agree or disagree with the premise, no doubt this sort of declaration grounds export controls and other regulations which do inhibit development extra-jurisdictionally.
20 hours ago [-]
amelius 20 hours ago [-]
I wouldn't be surprised if EU has their own competitor within a year or so.
tucnak 17 hours ago [-]
Mistral exists
IshKebab 17 hours ago [-]
To OpenAI? The closest was DeepMind but that's owned by Google now.
mattlondon 17 hours ago [-]
Owned by Google yes, but it is head quartered in London, with the majority of the staff there.
So the skills, knowledge, and expertise are in the UK. Google can close the UK office tomorrow if they wanted to sure, but are 100% of those staff going to move to California? Doubt it. Some will, but a lot have lives in the UK (not least the CEO and founder etc) so even if Google pulls the rug I will bet there will be a new company founded and funded within days that will vacuum up all the staff.
tfsh 16 hours ago [-]
But will this company be British or European? I'd love to think so, but somehow I doubt that. There just isn't the money in UK tech, the highest paid tech jobs (other than big tech) are elite hedgefunds but they get by with minimal headcount.
amelius 17 hours ago [-]
Well, deepseek open sourced their model and published their algorithm. It may take a while before it is reproduced but if they start an initiative and get the funding in place it'll probably be sooner rather than later.
anon291 16 hours ago [-]
> LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
AGI is a meaningless term. The LLM architecture has shown promise in every single domain once used for perceptron neural networks. By all accounts on those things that fit its 'senses' the LLMs are significantly smarter than the average human being.
20 hours ago [-]
worik 20 hours ago [-]
> LLMs will not get us to AGI
Yes.
And there is no reason to think that AGI would have desire.
I think people are reading themselves into their fears.
Tossrock 16 hours ago [-]
There is evidence that as LLMs increase in scale, their preferences become more coherent, see Hendrycks et al 2025, summarizer at https://www.emergent-values.ai/
anon291 16 hours ago [-]
A preference is meaningless without consciousness and qualia.
int_19h 10 hours ago [-]
Consider a philosophical doomsday AI: it behaves exactly as if it has a desire to harm you (meaning that it does!), but it doesn't actually want to do that.
Or we can just drop all this sophistry nonsense.
realce 19 hours ago [-]
> And there is no reason to think that AGI would have desire.
The entire point of utilizing this tool is to feed it a desire and have it produce an appropriate output based upon that desire. Not only that, it's entire training corpus is filled with examples of our human desires. So either humans give it desire or it trains itself to function based on the inertia of "goal-seeking" which are effectively the same thing.
jcarrano 19 hours ago [-]
When you are the dominant world power, you just don't let others determine your strategy, as simple as that.
Attempts at curbing AI will come from those who are losing the race. There's this interview where Edward Teller recalls how the USSR used a moratorium in nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.
briankelly 19 hours ago [-]
I read in Supermen (book on Cray) that the test moratorium was a strategic advantage for the US since labs here could simulate nuclear weapons using HPC systems.
jcarrano 3 hours ago [-]
I was referring to the 1958 moratorium I'd be surprised if they could simulate weapons with the computers of the time. Here [1] is the clip from Teller's interview.
In another clips he says that he believes it was inevitable that the Soviets would come up with an H-bomb on their own.
but how long is the us even going to be dominant ?
it's well known that china has long caught up with the us, in almost every way, and is on the verge of surpassing it on the others. just look at deepseek, as efficient as openai for a fraction of the cost. Baidu, alibaba ai and so on.
China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
In fact most countries did. India too.
it's not the case of the looser making new rule, it's the big boy discussing how they are going to handle the situation and the retarded ones thinking they are too good for that.
hector126 2 hours ago [-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
I'd be very happy to take a high stakes, longterm bet with you if that's your earnest position.
Axsuul 7 hours ago [-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
Are you actually saying this in the year 2025?
Aurornis 12 hours ago [-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
China has signed on to many international agreements that it has absolutely no interest in following or enforcing.
Intellectual property is the most well known. They’re party to various international patent agreements but good luck trying to get them to enforce anything for you as a foreigner.
cakealert 7 hours ago [-]
This is the correct answer.
However the attempts are token and they know it too. Just an attempt to appear to be doing something for the naive information consumers, aka useful idiots.
oceanplexian 13 hours ago [-]
I missed the boat on the 80s but as a “hacker” who made it through the 90s and 00s there’s something deeply sad and disturbing about how the conversation around AI is trending.
Imaging telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer. It’s so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I only have to wonder if people really believe this, or it’s a sophisticated narrative that’s convenient to certain corporations and politicians.
charles_f 11 minutes ago [-]
AI development is not led by hackers in their garages, but by multi-billion corporations with no incentives other than profit and no care other than their shareholders. The only way to control their negative outcomes in this system is regulation.
If you explained to that hacker that govs and corps would leverage that same technology to spy on everyone and control their lives because line must go up, they might understand better than anyone else why this needs to be sabotaged early in the process.
TomK32 18 minutes ago [-]
Is there any source for you claim that any (democratic) government wants to criminalize running code on your own computer? I didn't see it in this declaration from the AI Action summit where the USA and UK are missing from the signatories https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...
As you mention ethics: what ethics do we apply to AI? None? Some? The same as to a human? As AI is replacing humans in decision-making, it needs to be held responsible just as a human.
buildfocus 4 hours ago [-]
> Imaging telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer.
My understanding is that approximately zero government-level safety discussion is restriction of just building & running AI yourself. There are no limits of AI hacking even in the EU AI regaultion or discussions I've seen.
Regulation is around business & government applications and practical use cases: no unaccountable AI making final employment decisions, no widespread facial recognition in public spaces, transparency requirements for AI usage in high-risk areas (health, education, justice), no AIs with guns, etc.
lolinder 13 hours ago [-]
Who is saying this? Do you have specific comments in mind that you're referring to? I can't find anything anywhere near the top that says anything like this.
entropi 6 hours ago [-]
I would say the debate currently going on is less about "running code on your own machine" and more about "making sure the thing your are replacing at least a portion of your labor force with is at least somewhat dependable and those who benefit from the replacement are still responsible".
TomK32 14 minutes ago [-]
I think management is putting too much hope into this, any negative outcome from replacing a human with AI might result in liabilities surpassing the savings. Air Canada's chatbot was decided just a year ago and I'm sure the hallucinating AI chatbot, from development to legal fees, cost the airline more money than they saved in their call-center.
Get involved in the local AI community. You're more likely to find people with whom you share affinity on places like r/LocalLLaMA. There's also the e/acc movement on Twitter which espouses the same gen x style rebellious libertarian ideals that once dominated the Internet. Stay away from discussions that attract policy larping.
knodi 13 hours ago [-]
I think you're missing the point. People are saying that government should make sure AI is not weaponized against the people of the world. But lets face it, US and UK governments will likely be the first to weaponized against the people.
As DeepSeek is shown us progress is hard to hinder unless you go to war and kill the people....
01100011 5 hours ago [-]
This isn't "news for hackers only". Hacker News is more appropriately described as "a news aggregator and discussion board frequented by those in IT and programming". But that doesn't sound so cool, no? "Slashdot 2.0" also doesn't sound so great.
junto 1 hours ago [-]
> “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.”
That was never going to fly with the current U.S. administration. Not only is the word inclusive in there but ethical and trustworthy as well.
Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.
Long-term I just can’t imagine what a United States will look like when 75% of the population are both superfluous and a burden to society.
If this happens fast, society will crumble. Sheep are best kept busy grazing.
Spooky23 1 hours ago [-]
The audience is JD’s masters and whomever we are menacing today.
The voters are locked in idiots, and don’t have agency at the moment. The bet from Musk, Theil, etc is that AI is as powerful and strategic as nuclear weapons were in 1947 - that’s what The Musk administration diplomacy seems to be like.
ToucanLoucan 1 hours ago [-]
I mean it feels like a joke, but also their “policy ideas” do basically boil down to targeting anything with the wrong words in them. I read somewhere they’re creating havoc right now because of a critical intelligence function called “privilege escalation” related to raising security clearance of personnel that’s been mired in stupid culture war controversy because it has the word privilege in it.
ExoticPearTree 1 days ago [-]
Most likely the countries who will have unconstrained AGIs will get to advance technologically by leaps and bounds. And those who constrain it will remain in the "stone age" when it comes to it.
_Algernon_ 1 days ago [-]
Assuming AGI doesn't lead to instant apocalyptic scenario it is more likely to lead to a form of resource curse[1] than anything that benefits the majority. In general countries where the elite is dependent on the labor of the people for their income have better outcomes for the majority of people than countries that don't (see for example developing countries with rich oil reserves).
What would AGI lead to? Most knowledge work would be replaced in the same way as manufacturing work has been, and AGI is in control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.
That's a valid concern. The theory that the population only gets education, health care, human rights and so on, if these people are actually needed for the rulers to stay in power, is valid. The whole idea of AGIs replacing beaurocrats, the way for example DOGE is betting on to be successful with, is already axing people's livelihood and purpose in life. Why train government workers, why spend money on education, training, health care plans, if you have an old nuclear plant powering your silicon farms.
If the rich need less and less educated, healthy and well fed workers, then more and more people will get treated like shit. We are currently going into that direction with full speed. The rich aren't even bothering to hide this anymore from the public because they think they have won the game and can't be overruled anymore. Let's hope there will be still elections in four years and MAGA doesn't rig it like Fidesz in Hungary and so many other countries who have fallen into the hands of the internationalist oligarchy.
alexashka 17 hours ago [-]
> If the rich need less and less educated, healthy and well fed workers, then more and more people will get treated like shit
Maybe. I think it's a matter of culture.
Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?
I'm no history buff but my hunch is that mistreatment of people largely came from a fear that if I don't engage in cruelty to maximize power, my opponents will and given that they're cruel, they'll be cruel to me when they come to take over.
So we end up with this zero sum game of squeezing people, animals, resources and the planet in an arms race because everyone's afraid to lose.
In the past - you couldn't be sure if someone else was building up an army, so you had to build up an army. But now that we have satellites and we can largely track everything - we can actually agree to not engage in this zero sum dynamic.
There will be a shift from treating people as means to an end of power accumulation and containment, to treating people as something you just inherently like and would like to see prosper.
It'll be a shift away from this deeply corrosive idea of never ending competition and growth. When people's basic needs are met and no one is grouping up to take other people's goodies - why should regular people compete with one another?
They shouldn't and they won't. People who want to do good work will do so and improving the lives of people worldwide will be its own reward. Private islands, bunkers and yachts will become incomprehensible because there'll be no serf class to service any of it. We'll go back to if you want to be well liked and respected - you have to be a good person. I look forward to it :)
rwmj 16 hours ago [-]
You've never met a rich person who mistreats their maid but dotes on their puppy?
alexashka 15 hours ago [-]
You've refuted my entire argument. I am that stupid and you are that smart :)
sophacles 15 hours ago [-]
> Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?
Because very few regular people will be their pets. These are the people who do everything in their power to pay their employees less. They treat their non-pets horribly... see feed lots and amazon warehouses. They actively campaign against programs which treat anyone well, particularly those who they aren't extracting wealth from. They whine and moan and cry about rules that protect people from getting sick and injured because helping those people would prevent them from earning a bit more profit.
They may spend a pile of money on surgery for their bunny, but if you want them to behave nicely to someone else's pet, or even someone else... well that's where they draw the line.
I guess you are hoping to be one of those pets... but what makes you think you're qualified for that, and why would you be willing to sacrifice all of your friends and family to the fate of feral dogs for the chance to be a pet?
alexashka 13 hours ago [-]
We're talking past one another.
I'm not suggesting that a few people become rich people's pets.
I'm saying it isn't inherent in human nature to mistreat other conscious beings.
If people were normally cruel to their pets, then I'd say yeah, this species just seems to enjoy cruelty and it is what it is. But we aren't cruel to pets, so the cruelty to workers does not stem from human nature, but other factors.
I gave one theory as to what causes the cruelty and that I'm somewhat optimistic that it'll run its course in due time.
Anyhoo :)
14 hours ago [-]
criley2 3 hours ago [-]
A few thousand rich people don't need 8 billion pets.
"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)
Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions become unnecessary and then a pandemic 100X worse than covid got lableaked.
ExoticPearTree 11 hours ago [-]
Come on, let’s be real: all governments are bloated with bureaucrats and what DOGE is doing, albeit in Musk style, is to trim the fat a little bit.
You can’t seriosly claim they are upending people’s jobs when those jobs were BS in the first place.
crvdgc 6 hours ago [-]
Andrew Yang: Data is the new oil.
Sociology: The Oil Curse
ExoticPearTree 24 hours ago [-]
I see it as everyone having access to an AI so they can iterate very fast through ideas. Or do research at a level not possible now in terms of speed.
Or, my favorite outcome, the AI to iterate over itself and develop its own hardware and so on.
randerson 14 hours ago [-]
Hackers would be our only hope of a revolution.
daedrdev 20 hours ago [-]
I mean that itself is a hotly debated idea. From your own link " As of at least 2024, there is no academic consensus on the effect of resource abundance on economic development."
For example US is probably the most resource rich country in the world, but people don't consider it for the resource curse because the rest of its economy is so huge.
Night_Thastus 20 hours ago [-]
I don't see any point in speculating about a technology that doesn't exist and that LLMs will never become.
Could it exist some day? Certainly. But currently 'AI' will never become an AGI, there's no path forward.
wordpad25 20 hours ago [-]
with LLMs able to generate infinite synthetic data to train on it seems like AGI is just around the corner
contagiousflow 19 hours ago [-]
Whoever told you this is a path forward lied to you
stackedinserter 20 hours ago [-]
Probably it doesn't have to be an AGI that does tricks like passing Turing test v2. It can be an LLM with context window of 30GB that can outsmart your rival in geopolitics, economics and policies.
eikenberry 20 hours ago [-]
IMO we should focus on the AI systems we have today and worry about the possibility of AGI coming anytime soon. All indicators are that it is not.
hackinthebochs 19 hours ago [-]
>All indicators are that it is not.
What indicators are these?
mitthrowaway2 19 hours ago [-]
Focusing on your own feet proved to be near-sighted to a fault in 2022; how sure are you that it is adequately future-proofed in 2025?
eikenberry 19 hours ago [-]
Focusing on the clouds is no better.
emsign 20 hours ago [-]
Or maybe those countries' economies will collapse once they let AGIs control institutions instead of human beaurocrats, because the AGIs are doing their own thing and trick the government by alignment faking and in-context scheming.
CamperBob2 20 hours ago [-]
Eh, I'm not impressed with the humans who are running things lately. I say we give HAL a shot.
thrance 3 hours ago [-]
Perhpas, but meanwhile making it legal to have racial profiling AI tech in the hands of government and corporations does a great disservice to your freedom and privacy. Do not buy the narrative, EU regulations are not about forbidding AGI, they're about ensuring a minumum of decency in how the tech is allowed to exist. Something Americans seem deathly allergic to.
ijidak 10 hours ago [-]
It's so interesting that much of this is playing out like that movie Creator where New Asia embraces AI and robotics and the western world doesn't.
Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.
That sort of piecemeal adoption was predictable but not that we are here to have this debate this soon!
timewizard 19 hours ago [-]
Or it will be viewed like nuclear weapons and those who have it will be bombed by those who don't.
These are all silicon valley "neck thoughts." They're entirely uninformed by the current state of the world and any travels through it. It's fantasies brought about by people with purely monetary desires.
It'd be funny if there wasn't billions of dollars being burnt to market this crap.
sschueller 1 days ago [-]
Those countries with unrestricted AGI will be the ones letting AI decide if you live or die depending on cost savings for share holders...
ExoticPearTree 1 days ago [-]
Not if Skynet emerges first and we all die :))
With every technological advancement it can always be good or bad. I believe it is going to be good to have a true AI available at our fingertips.
mdhb 1 days ago [-]
Ok but what lead you to that particular belief in the first place?
Because I can think of a large number of historical scenarios where malicious people get access to certain capabilities and it absolutely does not go well and you do have to somehow account for the fact that this is a real thing that is going to happen.
ExoticPearTree 22 hours ago [-]
I thibk today there are are less malicious people than in the past. And considering that most people will use the AI for good, there is a good chance that the bad people will be easier to identify.
mdhb 19 hours ago [-]
Is this just a gut feeling or are there some specific reasons for why you think this?
ExoticPearTree 11 hours ago [-]
Gut feeling.
cess11 20 hours ago [-]
Why do you think that? There's more people than ever and it's easier than ever for the ones with malicious impulses to find and communicate with each other.
For example, several governments are actively engaged in a live streamed genocide and nothing akin to the 1789 revolt in Paris seems to be underway.
vladms 16 hours ago [-]
And several revolutions are underway (simple examples Myanmar and Syria). And in Syria, "the previous government" lost.
The new regime in Syria is rather reactionary, I'd say. Rojava is a revolution however.
Sure. The ancien régime was considered illegitimate so they got rid of it, and if a state is involved in genocide it is since the Holocaust considered illegitimate and it should lose its sovereignty.
ta1243 1 days ago [-]
Those are "Death Panels", and only exist in places like the US where commercial needs run your health care
snickerbockers 1 days ago [-]
Canada had a case a couple years ago where a disabled person wanted canadian-medicare to pay for a wheelchair ramp in her house and they instead referred her to their assisted suicide program.
milesrout 19 hours ago [-]
Did they use AI to do it?
ta1243 3 hours ago [-]
> An investigation by VAC found four cases “isolated to a single employee who is no longer an employee of the Department” of assisted dying being brought up inappropriately to veterans.
Imnimo 19 hours ago [-]
Am I right in understanding that this "declaration" is not a commitment to do anything specific? I don't really understand why it matters who does or does not sign it.
flanked-evergl 8 hours ago [-]
The children running the European countries (into the ground) like this kind of theatre because they can pretend to be doing something productive without having to think.
karaterobot 17 hours ago [-]
Yep, it's got all the force of a New Year's resolution. It does not appear to be much more specific than one, either. It's about a page and a half long—the list of countries is as long as than the declaration itself, and it basically says "we talked about how we won't do anything bad".
amai 6 hours ago [-]
If it would be without commitment when why not sign it?
sva_ 17 hours ago [-]
Diplomatic theater, justification to get/keep more bureaucrats on the payroll
layer8 17 hours ago [-]
It’s an indication of the values shared, or in this case, not shared.
seydor 16 hours ago [-]
Europe just loves signing declarations and concerned letters. It would make no difference if they signed it.
swyx 16 hours ago [-]
leading in ai safety theater is actually worse than leading in ai because the leadership of ai safety is actually just in leading in ai period
llm_trw 8 hours ago [-]
Leading in AI safety is leading in AI lobotomization.
flanked-evergl 8 hours ago [-]
European leaders are overgrown children. It's kind of pathetic.
ionwake 50 minutes ago [-]
On the subject of safety - where is safe? A bunker? An island ? Nowhere?
Will AI lead to a Dark Forest scenario on Earth between humans and AI?
TomK32 6 minutes ago [-]
Here's the declaration[1]. No bunkers or islands mentioned.
Safety is mentioned in a context with trust-worthiness, ethics, security, ...
The fundamental issue with these AI safety declarations is that they completely ignore game theory. The technology has already proliferated (see: DeepSeek, Qwen) and trying to control it through international agreements is like trying to control cryptography in the 90s.
I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.
China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.
The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.
The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.
(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)
puff_pastry 1 days ago [-]
They’re right, the declaration is useless and it’s just an exercise in futility
tnt128 17 hours ago [-]
An AI arms race will be how we make sky net a reality.
If an enemy state gives AI autonomous control and gains massive combat effectiveness, it puts the pressure to other countries to do the same.
No one wants sky net. But if we continue the current path, painting the world as we vs them. I m fearful sky net will be what we get
bluescrn 15 hours ago [-]
If a rogue AI could take direct control of weapons systems, then so could a human hacker - and we've got bigger problems than just 'AI safety'.
Mystery-Machine 2 hours ago [-]
The difference is that we're not giving hackers direct access to weapons systems, while on the other hand, militaries are actively trying to use AI to control weapons directly.
llm_trw 8 hours ago [-]
Twitch plays thermonuclear war.
openrisk 11 hours ago [-]
Hard to know how significant this is because its impossible to know what the political class (and many others) mean by "AI" (and thus its potential risks). This is not new, similar charades a few years ago around "blockchain" etc.
But ignoring the signaling going on on various sides would be a mistake. "AI" is for all practical purposes a synonym for algorithmic decision making, with potential direct implication on peoples lifes. Without accountability, transparency, recourse etc the unchecked expansion of "AI" in various use cases represents a significant regression for historically established rights. In this respect the direction of travel is clear: The US is dismantling the CFPB, even more deregulation (if that is at all possible) is coming, big tech will be trusted to continue "self-regulating" etc.
The interesting part is the UK stance. Somewhere in between the US and the EU in terms of citizen / consumer protections, but despite brexit probably closer to the latter, this siding with dog-eats-dog deregulation might signal an anxiety not to be left behind.
tw1984 3 hours ago [-]
for those interested, below is the transcript of JD Vance's remark on the summit.
Given what is potentially at stake if you're not the first nation to achieve ASI, it's a little late to start imposing any restrictions or adding distractions
Similarly, whoever gains the most training and fine-tuning data from whatever source via whatever means first -- will likely be at advantage
Hard to see how that toothpaste goes back in the tube now
zombot 7 hours ago [-]
Can't have US parochialism restrained by international considerations. Can't put friction on printing money and avoiding accountability no matter the cost to others.
> warning countries not to sign AI deals with “authoritarian regimes”
What benefit do these AI regulations provide to progressing AI/AGI development? Do they slow down progress? If so, how do the countries that intend to enforce these regulations plan to compete on AI/AGI with countries that don’t have these regulations?
FloorEgg 16 hours ago [-]
What exactly is the letter declaring? There are so many interpretations of "AI safety" with most of them not actually having anything to do with maximizing distribution of societal and ecosystem prosperity or minimizing the likelihood of destruction or suffering. In fact some concepts of AI safety I have seen are double speak for rules that are more likely to lead to AI imposed tyranny.
Where is the nuanced discussion of what we want and don't want AI to do as a society?
These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.
- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding.
- I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse.
- I want AI to allow me to consume more in a completely sustainable way for me and the environment.
- I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works.
- I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want.
- I don't want AI that forcefully and arbitrarily limits my freedoms
- I don't want AI that forcefully imposes other people's values on me (or imposes my values on others)
- I don't want AI war that destroys our civilization and creates chaos
- I don't want AI that causes unnecessary suffering
- I don't want other people to use AI to tyrannize me or anyone else.
How about instead of being so broadly generic about "AI safety" declarations we get specific, and then ask people to make specific commitments in kind. Then it would be a lot more meaningful when they refuse, or when they oblige and then break them.
beardyw 1 days ago [-]
DeepMind has it's headquarters and most of it's staff in London.
graemep 1 days ago [-]
and what is the other country that refused to sign?
They will move to countries where the laws suit them. Generally business as usual these days and why big businesses have such a strong bargaining position with regard to national governments.
Both the current British and American governments are very pro big-business anyway. That is why Trump has stated he likes Starmer so much.
joshdavham 7 hours ago [-]
I'm afraid of a future where we who live in the anglosphere end up with the AI equivalent of cookie banners and GDPR due to EU AI regulations.
whiplash451 7 hours ago [-]
GDPR has made a hell of eurosphere as well, if that can make you feel better.
arkh 3 hours ago [-]
> GDPR has made a hell of eurosphere as well
Oh noes, I can't slurp all my user data and sell / give it to whoever. How will I make money if I can't exploit people's data?
It appears to be essentially "We promise not to do evil" declaration. It contains things like "Ensure AI eliminates biases in recruitment and does not exclude underrepresented groups.".
What's the point of rejecting this? Seems like a show, just like the declaration itself.
Depending on what side of the things you are, if you don't actually take a look at it you might end up believing that US is planning to do evil and others want to eliminate evil or alternatively you might believe that US is pushing for progress when EU is trying to slow it down.
Both appear false to me, IMHO its just another instance of US signing off from the global world and whatever "evil" US is planning to do China will do it better for cheaper anyway.
So far most AI development has been things like OpenAI making the ChatGPT chatbot and putting it up there for people to play with, likewise Anthropic, Deepseek et all.
I'm worried that declaration is implying you shouldn't be able to do that without trying to "promote social justice by ensuring equitable access to the benefits".
I think that is over bureaucracizing things.
mrtksn 1 days ago [-]
Which part makes you think that?
tim333 1 days ago [-]
The declarations are very vague as to what will actually be done other than declaring but I get the impression they want to make it more complicated just to put up a chatbot.
I mean stuff like
>We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights.
Is quite hard to even parse. Does that mean you'll get grief for you bot speaking English becuase it's not protecting linguistic diversity? I don't know
What does "Sustainable Artificial Intelligence" even mean? That you run it off solar rather than coal? Does it mean anything?
mrtksn 1 days ago [-]
The whole text is just "We promise not to be a-holes" and doesn't demand any specific action anyway, let alone having any teeth.
Useful only when you rejecting it. I'm sure in culture war torn American mind it signals very important things about genitals and ancestry and the industry around these stuff but in a non-American mind it gives you the vibes that the Americans intent to do bad things with AI.
Ha, now I wonder if the people who wrote that were unaware of the situation in US or was that the outcome they expected.
"Given that the Americans not promising not to use this tech for nefarious tasks maybe Europe should de-couple from them?"
tim333 24 hours ago [-]
It's also a bit woolly on real dangers that governments should maybe worry about.
What if ASI happens next year and and renders most of the human workforce redundant? What if we get Terminator 2? Those might be more worthy of worry than "gender equality, linguistic diversity" etc? I mean the diversity stuff is all very well but not very AI specific. It's like you're developing H-bombs and worrying if they are socially inclusive rather about nuclear war.
mrtksn 23 hours ago [-]
My understanding is that this is about using AI responsibly and not about AGI at all. Not worrying about H-bomb but more like worrying about handling radioactive materials in the industry or healthcare to prevent exposure or maybe radium girls happening again.
IMHO, from European perspective, they are worried that someone will install a machine that has bias against let's say Catalan people and they will be disadvantaged against Spaniards and those who operate the machine will claim no fault the computer did it, leading to social unrest. They want to have a regulations saying that you are responsible of this machine and have grounds for its removal if creates issues. All the regulations around AI in EU are in that spirit, they don't actually ban anything.
I don't think AGI is considered seriously by anybody at the moment. That's completely different ball game and if it happens none of the current structures will matter.
sva_ 17 hours ago [-]
[dead]
smolder 1 days ago [-]
I think with a certain crowd just being obstinately oppositional buys you political points whether it's well reasoned or not. IOW they may be acting like jerks here to impress the lets-be-jerks lobby back home.
mrtksn 1 days ago [-]
Yeah I agree, they just threw a tantrum for their local audience. I wonder, why they just don't make AI generate these tantrums instead actually annoying everybody.
marcusverus 16 hours ago [-]
> What's the point of rejecting this? Seems like a show, just like the declaration itself. Both appear false to me, IMHO its just another instance of US signing off from the global world...
Hear, hear. If Trump doesn't straighten up, the world might just opt for Chinese leadership. The dictatorship, the genocide, the communism--these are small things that can be overlooked if necessary to secure leadership that's committed to what really matters, which is.... signing pointless declarations.
ExoticPearTree 1 days ago [-]
[flagged]
mrtksn 1 days ago [-]
US just needs to have their culture war done already. These words are not about the American petty fights but it appears that the new government is all for it.
It's kind of fascinating actually how Americans turned the whole pop culture into genitalia regulations and racist wealth redistribution. Before that in EU we had all this stuff and wasn't a problem. These stuff were about minorities and minorities stuff don't bother most people as these are just accommodations for small number of people.
I'm kind of getting sick and tired of pretending that stuff that concern %1 of the people are the mainstream thing. It's insufferable.
hcurtiss 19 hours ago [-]
It's because people see the manifestation of racism implicit in these policies affecting their daily lives. And they're done with it, no matter how much the elites hand-wave "what's the big deal?" The insufferability runs entirely the other direction.
layer8 17 hours ago [-]
That’s mainly an American phenomenon, however.
hcurtiss 16 hours ago [-]
I'm not so sure. The acceptance of mass migration is rooted in many of the same principles, and push-back on that issue is fundamentally reshaping the political landscape in the UK and Europe.
Dalewyn 13 hours ago [-]
The rejection of woke ideals goes well beyond the US. The Japanese people also hate elements of woke getting into their culture and they don't even speak English.
Juliate 4 hours ago [-]
What are you telling about exactly?
Dalewyn 4 hours ago [-]
That this is not "mainly an American phenomenon".
Juliate 4 hours ago [-]
No, the "rejection of woke ideals". What woke ideals? What rejection?
Dalewyn 3 hours ago [-]
I tire of repeating that ad nauseum towards people engaging in bad will, so I'll just leave you some links:
It thought you were going to explain how "woke" is a bad thing, and how Japanese were counter progressive stuff.
Instead, you give me two links to the current fascist White House propaganda - which makes me wonder if for you, woke is the exact opposite of "christian"?
And a link about a video game which is... related how to woke/progress or conservative/fascism?
milesrout 19 hours ago [-]
Those words are about precisely American culture war issues. It exported the culture war abroad years ago.
It isn't about what % of the population is affected or number of people. It is about PRINCIPLES. Yes it matters just as much to enshrine dishonesty in law if it is dishonesty abour 1 person or 1000 people or 1m people. It matters.
Juliate 4 hours ago [-]
Funny how principes are unevenly distributed too...
Like, by the same principles (called, the law), the man that you made your president today should have been in jail for months, if not years; or, as in any decent democracy, should not have been able to run with so many pending lawsuits (hey, principles).
But, somehow, he got a pass. And he got elected, so that gave him another pass. And now he disrupts the principles he's supposed to uphold (a thing called the Constitution). And he gets another pass.
But. Sure. Principles. Of course.
smolder 1 days ago [-]
"woke and DEI phrases"?
The way you're using these as labels is embarrassingly shallow, and I would hope, beneath the level of discourse here.
ExoticPearTree 24 hours ago [-]
It is not. And you must be new around here when it comes to the comments level.
smolder 11 hours ago [-]
It definitely is shallow. It's on par with saying they used cringe language that is like, not a vibe. It doesn't mean anything concrete, and so it's impossible to rebuke and pointless to talk about.
cscurmudgeon 9 hours ago [-]
Do you mean this or the response to it? How does one respond to fluff like this?
> AI’s workplace impact must align governance, social dialogue, innovation, trust, fairness, and public interest. We commit to advancing the AI Paris Summit agenda, reducing inequalities, promoting diversity, tackling gender imbalances, increasing training and human capital investment
20 hours ago [-]
stackedinserter 20 hours ago [-]
Exactly, I prefer to call them "racist and discriminatory" too.
jampekka 1 days ago [-]
Inclusive here means that the population at large benefits. But I guess that's woke now too.
logicchains 1 days ago [-]
It mentions "promoting diversity, tackling gender imbalances" which clearly indicates they're using "inclusive" in the woke sense of the word.
ben_w 22 hours ago [-]
> tackling gender imbalances
This being culturally rejected by the same America that has itself twice rejected women candidates for president in favour of a man who now has 34 felony convictions, does not surprise me.
But it does disappoint me.
I remember when the right wing were complaining about Star Trek having a woman as a captain for the first time with Voyager. That there had already been women admirals on screen by that point suggested they had not actually watched it, and I thought it was silly.
I remember learning that British politician Ann Widdecombe changed from Church of England to Roman Catholic, citing that the "ordination of women was the last straw", and I thought it was silly.
Back then, actually putting effort into equal opportunity for all was called "political correctness gone mad" by those opposed to it — but I guess the attention span is no longer sufficient to use four-word-phrases as rhetorical applause lights, so y'all switched to a century old word coined by African Americans who wanted to make sure they didn't forget that the Civil War had only ended literal slavery, not changed the attitudes behind it.
This history makes the word itself a very odd thing to encounter in Europe, where we didn't have that civil war — forced end of Empire shortly after World War 2, yes, but none of the memes from the breakaway regions of that era even made it back to this continent, and AFAICT "woke" wasn't one of them anyway. I only know I'm called a "mzungu" by Kenyans because of the person who got me to visit the place.
ExoticPearTree 10 hours ago [-]
The current state if affairs is that the left want equality without competence.
The EU is even more nuts with their plans that all big companies in the EU should have 50/50 men-women representation in the board of directors.
Believe what you want, but America did not reject Hillary or Kamala because they were women, they rejected them because of their incompetence. And speaking of this, after seeing Kamala talk, it is beyond me how she got to be VP - not one coherent sentence comes out of her mouth.
2 hours ago [-]
ben_w 10 hours ago [-]
> The EU is even more nuts with their plans that all big companies in the EU should have 50/50 men-women representation in the board of directors.
That fact that you think this is "nuts" tells me you think women aren't equally competent.
> not one coherent sentence comes out of her mouth.
And yet you elected Trump.
Heck, if that's your objection, two counts of GWB. There was an entire genre based on how he mangled his speech.
ExoticPearTree 7 hours ago [-]
> That fact that you think this is "nuts" tells me you think women aren't equally competent.
No, it means that I want people in those places based on competency, not gender, and certainly not 50/50 representation because some bureaucrats think equality should be forced down on people's throats. If you make it about gender or anything else, the problem lies with you.
> And yet you elected Trump.
Well, besides the "beautiful" over use, he actually made sense when he was talking. The word salads Kamala kept making...
ben_w 6 hours ago [-]
> No, it means that I want people in those places based on competency, not gender, and certainly not 50/50 representation because some bureaucrats think equality should be forced down on people's throats.
If you think that the current hiring discrepancy represents a genuine and real skill discrepancy, it is a logical necessity for you think that women are not equally capable.
Conversely, if you think women are equally capable, then you must think that the hiring discrepancy is not justified by competency. If you get this far, then it follows that there is a huge opportunity for increasing the pool of competent leaders by requiring 50/50. As they do actually have a goal of increasing the economic potential of the region, this means they should push the issue.
> Well, besides the "beautiful" over use, he actually made sense when he was talking. The word salads Kamala kept making...
To quote one of many examples, this one about Biden at the beach in Trump’s Georgia response to the State of the Union:
“Somebody said he looks great in a bathing suit, right? And you know, when he was in the sand and he was having a hard time lifting his feet through the sand, because you know sand is heavy, they figured three solid ounces per foot, but sand is a little heavy, and he’s sitting in a bathing suit. Look, at 81, do you remember Cary Grant? How good was Cary Grant, right? I don’t think Cary Grant, he was good. I don’t know what happened to movie stars today. We used to have Cary Grant and Clark Gable and all these people. Today we have, I won’t say names, because I don’t need enemies. I don’t need enemies. I got enough enemies. But Cary Grant was, like – Michael Jackson once told me, ‘The most handsome man, Trump, in the world.’ ‘Who?’ ‘Cary Grant.’ Well, we don’t have that any more, but Cary Grant at 81 or 82, going on 100. This guy, he’s 81, going on 100. Cary Grant wouldn’t look too good in a bathing suit, either. And he was pretty good-looking, right?”
Covfefe. Heck, that one became a meme so hard it has its own Wikipedia page.
Trump's confusion and rambling is broadcast across the world, and mocked across the world — that's how I even know about it. Similar with GWB's… well, Bushisms.
I've seen two relatives get Alzheimer's, and have been on the other end of a phone line with a third when they, mid-sentence, started talking as if I was my brother, speaking of me like I wasn't there, telling him how I was doing.
Trump is old.
ExoticPearTree 3 hours ago [-]
> If you think that the current hiring discrepancy represents a genuine and real skill discrepancy, it is a logical necessity for you think that women are not equally capable.
I believe they are not equally interested in the same fields men are and vice-versa. So I don't see any value in forcing women and men to fields which they don't want to be in.
> Conversely, if you think women are equally capable, then you must think that the hiring discrepancy is not justified by competency. If you get this far, then it follows that there is a huge opportunity for increasing the pool of competent leaders by requiring 50/50. As they do actually have a goal of increasing the economic potential of the region, this means they should push the issue.
I think they are equally capable in certain areas, and less capable in other areas. Just like men are less capable in some areas and more capable in others. It's how nature works, nothing sexist or discriminatory about it. Just like two men are not equal, or two women are not equal.
But trying to forcefully push the narrative that somehow men and women are 100% equal is very detrimental to everyone involved.
Dalewyn 5 hours ago [-]
>Conversely, if you think women are equally capable, then you must think that the hiring discrepancy is not justified by competency. If you get this far, then it follows that there is a huge opportunity for increasing the pool of competent leaders by requiring 50/50.
Competency (or more generally merit) is not measured in penises and vaginas.
A forced 50/50 men/women mandate implies that one or the other are not as capable without outside "assistance". That is sexism and rude as all hell.
True equality is accepting all applicants regardless of penises or vaginas and ranking them by their merit and taking however many you need or want from the top down. True equality is being absolutely blind to equity factors like race and sex unless that is directly relevant.
The only time you should care about vaginas is if you're, say, running medical trials that concern breast cancer. Penises and vaginas are utterly irrelevant in the course of serving as a director on a corporate board.
>Trump's confusion and rambling is broadcast across the world, and mocked across the world
One of the things that helped Trump win the election was his three hours long unedited interview with Joe Rogan. It was amazing to see a former President and current Presidential candidate sit down with a common Joe Average (FSVO average) and just have bog ordinary conversations on a wide variety of topics that common Americans can relate to. Trump even explained why he rambles ("weaves stories") like he does.
Harris meanwhile couldn't speak her way out of a teleprompter.
A leader needs to be able to communicate effectively, it's a core part of whether you're charismatic or not and Trump is one of the greatest speakers ever: He speaks with a simple vocabulary because his audience are common citizens, he gets right to the point because he understands time is valuable. He talks to the American people in their language, plain English; not Washingtonese or legalese.
HRC's failure was the way she spoke (Washingtonese) and conducted herself came off like she was a used car salesman. Nobody likes used car salesmen. It also did not help that the DNC, whether merely perceived or in actual fact, forced Sanders out which alienated a significant portion of the Democrat electorate.
Harris failed because she simply could not communicate and further refused to communicate, hoping that she would win because she is an Indian-Black Woman with a sob story. She is perhaps the best example of so-called "DEI hires": She was chosen for VP and then Presidential candidate both times because she checked off many equity boxes, not because she demonstrated competency in a fair showdown (primaries, all of which she lost dead last).
Dalewyn 13 hours ago [-]
You might be interested to learn that Trump's cabinet has at least 9 women, one of whom was the Governor of South Dakota, another who was an Attorney-General of Florida, and another who is a reserve duty Lieutenant-Colonel in the Army.
Most Senate Democrats have voted against their respective confirmations, by the way.
We are quite fine having women as leaders if they are actually competent and charismatic like it would be the case with men. Neither HRC nor Harris were that; the former was reviled and the latter couldn't even speak coherently. The Democrats can easily get a woman elected President if they would simply choose a good candidate with policies that resonate with the electorate.
logicchains 1 days ago [-]
"eliminates biases in recruitment and does not exclude underrepresented groups" has turned out to basically mean "higher less qualified candidates in the name of more equitable outcomes", which is a very contentious position to take and one many Americans strongly oppose.
mrtksn 1 days ago [-]
In other words they get triggered from words that don't mean that thing. Sounds like EU should develop a politically correct language for Americans. That's synthetic Woke, which is ironic.
I wonder if the new Woke should be called Neo-Woke, where you pretend to be mean to certain group of people to accommodate other group of people who suffered from accommodating another group of people.
IMHO all this needs to be gone and just be like "don't discriminate, be fair" but hey I'm not the trend setter.
rat87 19 hours ago [-]
No it means eliminates biases in recruitment and to not exclude underrepresented groups
We still have massive biases against minorities in our countries. Some people prefer to pretend they don't exist so they can justify the current reality.
Nothing related to Trump has anything to do with qualified candidates, Trump is the least qualified president we have ever had in american history. Not just because he hadn't served in government or as a general but because he is generally unaware about how government works and doesn't care to be informed.
optimalsolver 19 hours ago [-]
>higher less qualified candidates
Ironique.
michaelt 1 days ago [-]
> What's the point of rejecting this?
Sustainable Development? Protect the environment? Promote social justice? Equitable access? Driving inclusive growth? Eliminating biases? Not excluding underrepresented groups?
These are not the values the American people voted for. Americans selected a president who is against "equity", "inclusion" and "social justice", and who is more "roman salute" oriented.
Of course this is all very disorienting to non-Americans, as a year or two ago efforts to do things like rename git master branches to main and blacklists to denylists also seemed to be driven by Americans. But that's just America's modern cultural dominance in action; it's a nation with the most pornographers and the most religious anti-porn campaigners at the same time; the home of Hollywood beauty standards, plastic surgery and bodybuilding, but also the home of fat acceptance and the country with the most obesity. So in a way, contradictory messages are nothing new.
Dalewyn 1 days ago [-]
>Americans selected a president who is against "equity", "inclusion" and "social justice"
Indeed. Our American values are and always have been Equality, Pursuit of Happiness, and legal justice respectively, as declared in our Declaration of Independence[1] and Constitution[2], even if there were and will be complications along the way.
Liberty is power, power is responsibility. Noone ever said living free was going to be easy, but everyone will say it's a fulfilling life.
Then why don't you do all that but instead treating people who are in pursuit of happiness as criminals for example? Why do you need the paperwork and bureaucracy to let people pursue happiness?
Why US government personel is being replaced with loyalist if you are about equality and legal justice?
pb7 20 hours ago [-]
The US is a sovereign nation which has a right to defend its borders from illegal invaders. Try to enter or stay in Singapore illegally and see what happens to you.
mrtksn 17 hours ago [-]
US is Singapore now? What happened to pursuit of happiness and freedom?
pb7 17 hours ago [-]
Insert any other country of your choice that has a government sturdier than a lemonade stand.
You're free to follow the legal process to come to the country to seek your pursuit of happiness.
mrtksn 16 hours ago [-]
Ah, so pursuit of happiness through bureaucracy. Got it
Dalewyn 16 hours ago [-]
You are so disingenuous it is staggering.
Your right to pursuit of happiness ends where another's rights begins. The US federal government is also tasked with the duty of protecting and furthering the general welfare of Americans including the protection of property.
You do not have a right let alone a privilege to illegally cross the border or stay in the country beyond what your visa permits. We welcome legal immigrants, but illegal aliens are patently not welcome and fraudulent asylum applicants further break down the system for everyone.
mrtksn 10 hours ago [-]
> Your right to pursuit of happiness ends where another's rights begins
The right of not having someone in the country is interesting right.
What other rights you have like that? Do you have also the right of other people not eating Marmite?
Dalewyn 7 hours ago [-]
Quite literally any country with a government worth talking about controls entry of foreign nationals. It is a privilege to enter another country as a foreigner, and that country has every sovereign right to deny you that privilege if they so choose for any reason (usually citing their laws).
The fact that you ignore this demonstrates your bad will in engaging in these conversations.
mrtksn 2 hours ago [-]
So you support people who are allowed to enter the country to pursue their happiness?
Dalewyn 1 hours ago [-]
Of course, why wouldn't I? So long as would-be immigrants obey the law they are quite welcome.
mrtksn 14 minutes ago [-]
Good to hear. What are you doing to demolish the visa regime that actually doesn't allow all that? Do you have an ETA for the day when anybody who enters USA will be able to seek employment or start a company or do whatever they want in their pursuit of happiness?
I was worried that you are advocating for work visas, permits, green cards ect, like a silly EU country would do.
pb7 13 hours ago [-]
Yes, we are a developed society with rules and processes. If you can't follow them then we definitely don't want you here.
pjc50 1 days ago [-]
"We hold these truths to be self-evident, that all men are created equal ..." (+)
(+) terms and conditions apply; did not originally apply to nonwhite men or women. Hence allowing things like the mass internment of Americans of Japanese ethnicity.
Dig1t 20 hours ago [-]
> We are also talking much more rightly about equity,
>it has to be about a goal of saying everybody should end up in the same place. And since we didn’t start in the same place. Some folks might need more: equitable distribution
This is arguing for giving certain people more benefits versus others based on their race and gender.
This mindset is dangerous, especially if you codify it into an automated system like an AI and let it make decisions for you. It is literally the definition of institutional discrimination.
It is good that we are avoiding codifying racism into our AI under the fake moral guise of “equity”
rat87 19 hours ago [-]
Its not. What we currently have is institutional discrimination and Trump is trying to make it much worse. Making sure AI doesn't reflect or worsen current societal racism is a massive issue
Dig1t 19 hours ago [-]
At my job I am not allowed to offer a job to a candidate unless I have first demonstrated to the VP of my org that I have interviewed a person or color.
This is literally the textbook definition of discrimination based on skin color and it is done under the guise of “equity”.
It is literally defined in the civil rights act as illegal (title VII).
It is very good that the new administration is doing away with it.
rat87 18 hours ago [-]
So did your company interview any people of color before? It seems like your org recognizes their own racism and is taking steps to fight that. Good on them at least if they occasionally hire some of them and aren't just covering their asses.
You don't seem to understand either letter of the spirit of the civil rights act.
You're happy that a racist president who campaigned on racism and keeps on baselely accusing people who are members of minority groups of being unqualified while himself being the least qualified president in history is trying to encourage people to not hire minorities? Why exactly?
Dig1t 18 hours ago [-]
Just run a thought experiment
1. Job posted, anyone can apply
2. Candidate applies and interviews, team likes them and wants to move forward
3. Team not allowed to offer because candidate is not diverse enough
4. Team goes and interviews a diverse person.
Now if we offer the person of color a job, the first person was discriminated against because they would have got the job if they had had the right skin color.
If we don’t offer the diverse person a job, then the whole thing was purely performative because the only other outcome was discrimination.
This is how it works at my company.
Go read Title VII of the civil rights act, this is expressly against both the letter and spirit of the law.
BTW calling everything you disagree with racism doesn’t work anymore, nobody cares if you think he campaigned on racism (he didn’t).
If anything, people pushing this equity stuff are the real racists.
Detrytus 20 hours ago [-]
Men are created equal, but not identical. That's why you should aim for equal chance, but shouldn't try to force equal results. Affirmative actions and such are stupid and I'm glad Trump is getting rid of them.
worik 19 hours ago [-]
I live in a country that has had a very successful programme of affirmative action, following roughly three generations of open, systemic racism (Maori school students where kept out of university and the professions as a matter of public policy)
Now we are starting to get Maori doctors and lawyers that is transforming our society - for the better IMO
That was because the law and medical schools went out of their way to recruit Maori students. To start with they were hard to find as nobody in their families (being Maori, and forbidden) had been to university
If you do not do anything about where people start then saying "aim for equal chance" can become a tool of oppression and keeping the opportunities for those who already have them.
Nuance is useful. I have heard many bizarre stories out of the USA about people blindly applying DEI with not much thought or planning. But there are many many places where carefully applied policies have made everybody's life better
hcurtiss 19 hours ago [-]
This is always the Motte & Bailey of the left. "Equity" doesn't mean you recruit better. It means when your recruitment efforts fail to produce the outcomes you want, you lower the barriers on the basis of skin color. That's the racism that America is presently rejecting, and very forcefully.
milesrout 19 hours ago [-]
NZ does not have a "successful programme of affirmative action".
Discrimination in favour of Maori students largely has benefited the children of Maori professionals and white people with a tiny percentage of Maori ancestry who take advantage of this discriminatory policy.
The Maori doctors and lawyers coming through these discriminatory programmes are not the people they were intended to target. Meanwhile, poor white children are essentially abandoned by the school system.
Maori were never actually excluded from university study, by the way. Maori were predominantly rural and secondary education was poor in rural areas but it has nothing to do with their ethnicity. They were never "forbidden". There have been Maori lawyers and doctors for as long as NZ has had universities.
For example, take Sir Apirana Ngata. He studied at a university in NZ in the 1890s, around the same time women got the vote. He was far from the first.
What you have alleged is a common narrative so I don't blame you for believing it but it is a lie.
worik 18 hours ago [-]
> Maori were never actually excluded from university study, by the way
Māori schools (which the vast majority of Māori attended) were forbidden by the education department from teaching the subjects that lead to matriculation. So yes, they were forbidden from going to university.
> Sir Apirana Ngata. He studied at a university in NZ in the 1890s,
That was before the rules were changed. It was because of people like Ngata and Buck that the system was changed. The racists that ran the government were horrified that the natives were doing better than the colonialists. They "fixed" it.
> Discrimination in favour of Maori students largely has benefited the children of Maori professionals
It has helped establish traditions of tertiary study in Māori families, starting in the 1970s
There are plenty of working class Māori (I know a few) that used the system to get access. (The quota for Māori students in the University of Auckland's law school was not filled in the 1990s. Many more applied for it, but if their marks were sufficient to get in without using the quota they were not counted. If it were not for the quota many would not have even applied)
Talking of lies: "white people with a tiny percentage of Maori ancestry who take advantage of this" that is a lie.
The quotas are not based on ethnicity solely. To qualify you had to whakapapa (whāngi children probably qualified even if they did not whakapapa, I do not know), but you also had to be culturally Māori.
Lies and bigotry are not extinct in Aotearoa, but they are in retreat. The baby boomers are very disorientated, but the millennials are loving it.
Better for everybody
tmpz22 19 hours ago [-]
Why would any country align with US vision for AI policies after how we’ve treated allies over the last two weeks?
Why would any country yield given the hard line negotiating stance the US is now taking? And the flip flopping and unclear messaging on our positions?
18 hours ago [-]
anon291 16 hours ago [-]
People should be free to train AIs
lupusreal 3 hours ago [-]
These kind of "don't be evil" declarations are typically meaningless gestures by which non-players who weren't going to be participating anyway can posture as morally superior, while having no meaningful impact on the course of things. See also, the Ottawa Treaty; non-signatories include the US, China, Russia, Pakistan and India, Egypt, Israel, Iran, Cuba, North and South Korea... In other words all the countries from which landmine use is expected in the first place. And when push comes to shove, signatories like Ukraine will use landmines anyway because national defense is worth more than feeling morally superior for adhering to a piece of paper.
Why have all other countries elected children instead of adults?
option 20 hours ago [-]
did China sign?
vindex10 20 hours ago [-]
that's what confused me:
> Among the priorities set out in the joint declaration signed by countries including China, India, and Germany was “reinforcing international co-operation to promote co-ordination in international governance.”
so looks like they did
, at the same time, the goal of the declaration and summit to become less reliant on US and China.
> Meanwhile, Europe is seeking a foothold in the AI industry to avoid becoming too reliant on the US or China.
So basically Europe signed together with China to compete against US/UK or what happend?
tim333 15 hours ago [-]
The agreement doesn't mean much. It's just a list of good intentions. Most counties were fine to say yeah I'll do good things if it's not binding. Vance & Trump were an exception to that.
vindex10 8 hours ago [-]
It is not binding, but it is still a public statement
jampekka 1 days ago [-]
“Partnering with them [China] means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure,” Vance said.
At least they aren't threatening to invade our countries or extorting privileged position.
karaterobot 17 hours ago [-]
Threatening Taiwan, actually invading Tibet and Vietnam within living memory, and extorting privileged positions in Africa and elsewhere. Not to mention supporting puppet governments throughout the world, just like the U.S.
Hwetaj 19 hours ago [-]
Sir, this is a Wendy's! Please do not defend Europe against its master here! Pay up, just like Hegseth has demanded today.
pb7 20 hours ago [-]
Except they are: Taiwan.
dsign 20 hours ago [-]
I know I'm an oddball when it comes to the stuff that crosses my mind, but here I go anyway.
It's possible to stop developing things. It's not even hard; most of the world develops very little. Developing things requires capital, education, hard work, social stability and the rule of law. Many of us writing on this forum take those things for granted but it's more the exception than the rule, when you look at the entire planet.
I think we will face the scenario of runaway AI, where we lose control, and we may not survive. I don't think it will be a sky-net type of thing, sudden. At least not at first. What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow. It will take some decades--though probably not many. Then, a face-off will come one day, perhaps. Humans vs them.
But if we do survive and come to regret the development of advanced AI and have a second chance, it will be trivially easy to suppress them: just destroy the semiconductor fabs, treat them the same way we treat ultra-centrifuges for enriching Uranium. Cut off the dangerous data centers, and forbid the reborn universities[1] from teaching linear algebra to the students.
[1]: We will lose advanced education for the masses on the way, as it won't be economically viable nor necessary.
simonw 20 hours ago [-]
"Our ChatGPTs of today will become board members and government advisors of tomorrow."
That still feels like complete science fiction to me - more akin to appointing a complicated Excel spreadsheet as a board member.
fritzo 19 hours ago [-]
It feels like mere language difference. Certainly every government official is advised by many Excel spreadsheets. Were those spreadsheets "appointed", no.
simonw 19 hours ago [-]
The difference is between AI tools as augmentation and AI tools as replacement.
Board members using tools like ChatGPT or Excel as part of their deliberations? That's great.
Replacing a board member entirely with a black box automation that makes meaningful decisions without human involvement? A catastrophically bad idea.
vladms 16 hours ago [-]
People like having someone to blame and fire and maybe send to jail. It's less impressive if someone blames everything on their Excel sheet...
1 hours ago [-]
philomath_mn 20 hours ago [-]
> It's possible to stop developing things
If the US were willing to compromise some of it's core values, then we could probably stop AI development domestically.
But what about the rest of the world? If China or India want to reap the benefits of enhanced AI capability, how could we stop them? We can hit them with sanctions and other severe measures, but that hasn't stopped Russia in Ukraine -- plus the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.
So if we can't stop the world from developing these things, why hamstring ourselves and let our competitors have all of the benefits?
hollerith 17 hours ago [-]
>the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.
The mere fact that you imagine that Moscow's motivation in invading Ukraine is economic is a sign that you're missing the main reasons Moscow or Beijing would want to ban AI: (1) unlike in the West and especially unlike the US, it is routine and normal for the government in those countries to ban things or discourage their use, especially new things that might cause large societal changes and (2) what Moscow and Beijing want most is not economic prosperity, but rather to prevent another one of those invasions or revolutions that kills millions of people and to prevent the country's ruling coalition from losing power.
philomath_mn 16 hours ago [-]
But this all comes back to the self-interest and game theory discussion.
Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?
This whole discussion is basically a variation on the prisoner's dilemma. Either you cooperate and AI risks are mitigated, or you do not cooperate and try to take the best outcome for yourself.
I think we can expect the latter. Not because it is the right thing or because it is the optimal decision for humanity, but because each individual will deem it their best choice, even after accounting for P(doom).
hollerith 16 hours ago [-]
>Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?
That is why the US and Europe should stop AI in their territories first especially as the US and Britain have been the main drivers of AI "progress" up to now.
hcurtiss 19 hours ago [-]
Exactly. Including military benefits. The US would not be a nation for long.
HelloMcFly 19 hours ago [-]
This is my oddball thought: the thing about AI doomerism is that it feels to me like it requires substantially more assumptions and leaps of logic than environmental doomerism. And environmental doomerism seems only more justified as the rightward lurch of western societies continues.
Note: I'm not quite a doomer, but definitely a pessimist.
Simon_O_Rourke 20 hours ago [-]
> What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow.
Great, can't wait for even some small improvement over the idiots in charge right now.
realce 20 hours ago [-]
It's time to put an end to this fashionable and literal anti-human attitude. There's no comparative advantage to AI replacing humans en-masse because of how "stupid" we are. This POV is advocating for incalculable suffering and death. You personally will not be in a better or more rational position after this transition, you'll simply be dead.
Simon_O_Rourke 5 hours ago [-]
I don't see where you go from some over-hyped generative text bot outputting reams of semi-gibberish to.... AI will kill us all horribly. There's more than a few intractable technical limitations between ChatGPT and the T-1000.
moffkalast 20 hours ago [-]
I for one, also welcome our new Omnissiah overlords.
jcarrano 19 hours ago [-]
What if we face the scenario of a Dr. Manhattan type AGI, that's just fed up with people's problems and decides to leave us for the stars?
TheFuzzball 20 hours ago [-]
I am so tired of the AI doomer argument.
The entire thing is little more than a thought experiment.
> Look at how fast AI has advanced, it you just project that trend out, we'll have human-level agents by the end of the decade.
No. We won't. Scale up transformers as big as you like, this won't happen without massive advances in architecture and hardware.
I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
This is one step from Pascal's Wager, but being presented as fact by otherwise smart people.
dsign 19 hours ago [-]
> The entire thing is little more than a thought experiment.
Yes. Nobody can predict the future.
> but the idea it'll happen any day now, and by accident is bullshit.
We agree on that one: it won't be sudden, and it won't be by accident.
> I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
Exactly. Not by accident. But if you believe it's possible, then we are both doomers.
The thing is, there are forces at play that want this. It's all of us. We in society want to remove other human beings from the chain of value. I use ChatGPT today to not pay a human editor. My boss uses Suno AI to play generated music with pro-productivty slogans before Teams meetings. The moment the owners of my enterprise believe it's possible to replace their highly paid engineers with AIs, they will do it. My bosses don't need to lift a finger today to ensure that future. Other people have already imagined it, and thus, already today we have well-founded AI companies doing their best to develop the technology. Their investors see an opportunity on making highly-skilled labor cheaper, and they are dumping their money into that enterprise. Better hardware, better models, better harnesses for those models. All of that is happening at speed. I'm not counting on accidents there. If anything, I'm counting on accidents Chernobyl style that make us realize, when there is still time, if we are stepping into danger.
627467 20 hours ago [-]
Everyone wants to be the prophet of doom of their own religion.
anon291 16 hours ago [-]
Right, let's go back to the stone age because we said so.
> What will happen is that we will replace humans by AIs in more and more positions of influence and power,
With all due respect, and not to be controversial, but how is this concern any more valid than the 'great replacement' worries.
20 hours ago [-]
anon291 16 hours ago [-]
The world is the world. Today is today. Tomorrow is tomorrow.
You cannot face the world with how you want it to be, but only as it is.
What we know today is that a relatively straightforward series of matrix multiplications leads to what is perceived to be intelligence. This is simply true no matter how many declarations one signs.
Given that this is the case, there is nothing left to be done unless we want to go full Butlerian Jihad
llm_trw 8 hours ago [-]
I resent that.
There are a few non-linear function operations in between the matrix multiplications.
tehjoker 20 hours ago [-]
No different than how the U.S. doesn't sign on to the declaration of the rights of children or landmine treaties etc
20 hours ago [-]
Nevermark 11 hours ago [-]
Could anyone count how many times politicians have enacted laws or regulations in areas they completely lacked any understanding? And no time to get informed.
Fortunately those days are over. Any politician dealing with a technical issue over their head can turn to an LLM and ask for comment. "Is signing this poorly thought out, difficult to interpret, laundry list of vague regulations, that could limit LLM progress, really a good idea? Break this down for me like I am 5, please."
(Even though the start appeared trivial, happenstance, even benign, the age where AI's rapidly usurped there own governance had begun. The only thing that could have made it happen faster, or more destructively, were those poorly thought out international agreements the world was lucky to dodge.)
pjc50 1 days ago [-]
My two thoughts on this:
- there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation
- current "AI safety" work does basically nothing to address this and is kind of pointless
It's important that AI-enabled processes which affect humans are fair. But that's just a subset of a general demand for justice from the machine of society, whether it's implemented by humans or AIs or abacuses. Which comes back to demanding fair treatment from your fellow humans, because we haven't solved the human "alignment problem".
thih9 1 days ago [-]
And of course people responsible for AI disruptions would love to sell solutions for the problems they created too. Notably[1]:
> Worldcoin's business is to provide a reliable way to authenticate humans online, which it calls World ID.
“Tools for Humanity” and “for-profit” in a single sentence lost me.
dgb23 1 days ago [-]
From a consumer's perspective I want declaration.
I want to know whether an image or video is largely generated by AI, especially when it comes to news. Images and video often imply that they are evidence of something actually happening.
I don't know how this would be achieved. I also don't care. I just want people to be accountable and transparent.
cameronh90 1 days ago [-]
We can’t even define the boundaries of AI. When you take a photo on a mobile phone, the resulting image is a neural network manipulated composite of multiple photos [0]. Anyone using Outlook or Grammarly now is probably using some form of generative AI when writing emails.
Rules like this would just lead
to everything having an “AI generated” label.
People have tried it in the past with trying to require fashion magazines and ads warn when they photoshop the models. But obviously everything is photoshopped, and the problem becomes how do we separate good photoshop (levels, blemish remover?) from bad photoshop (warp tool?).
And so it seems we await the imminent arrival of a new eternal September of unfathomable scale; indeed as we deliberate, that wave may already be cresting, breaking upon every corner of the known internet. O wherefore this moment?
TiredOfLife 21 hours ago [-]
>there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation
That happened years ago. And without llms
m3kw9 14 hours ago [-]
I count 10 red tape just from this sentence “ ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.”
PeterCorless 15 hours ago [-]
"Why do we want better artificial intelligence when we have all this raw human stupidity as an abundant renewable resource we haven't yet harnessed?"
mytailorisrich 20 hours ago [-]
This declaration is just hand-waving.
Europe is hopeless so it does not make a difference. China can sign and ignore it so it does not make a difference.
But it would not be wise for the USA to have their hands tied up so early. I suppose that the UK wants to go their usual "lighter touch regulation" than the EU route to attract investment. Plus they are obviously trying hard to make friends with the new US administration.
bostik 19 hours ago [-]
> * suppose that the UK wants to go their usual "lighter touch regulation" than the EU route to attract investment.*
Not just that. A speaker in a conference I attended about a month ago mentioned that UK is actively drifting away from EU's stance, particularly on the aspect of AI safety in practice.
The upcoming European AI act has "machine must not make material decisions" as its cornerstone. UK are hell-bent to get AI into government functions, to ostensibly make everything more efficient. As part of that drive, the UK is aiming to allow AI to make material decisions, without human review or recourse. In a country still in the throes of the Post Office / Horizon scandal, that really takes some nerve.
Those in charge in this country know fully well that "AI safety" will be in violent conflict with the above.
tim333 15 hours ago [-]
I'm someone who generally sees no benefit from Brexit but I think being able to crack on with AI without EU regulation is a benefit.
phatfish 13 hours ago [-]
It's nothing to do with the EU, or a "regulation".
Not really sure what to say to this. It's a screenshot (from the tech-bro-in-chief no less) of a ChatGPT response, no prompt included. We are discussing a current event.
As an attempt at a response, the UK is not party to the "EU AI Act" or the "DMA/DSA", we left before they were passed as law in the EU. The UK has its own "Digital Markets Act", but it is not an EU regulation. The GDPR is an inherited EU regulation.
The AI summit was French led, to get a global consensus on what sort of AI protections should be in place it looks like. The declaration was specific to this summit.
It's just a means for the UK to fluff Trump in a way that doesn't annoy the Europeans (or anyone else) that much. Nothing about this is legally binding or could be called a "regulation".
waltercool 21 hours ago [-]
That is a good thing, if we want AI companies to be competitive enough.
If you add regulations, people will use other AI companies from countries without them. The only result of that would be losing the AI race.
You can see this at Huggingface top models, fine-tuned models are way more popular than official ones.
And this is also good considering most companies (even China) offer their models free to download and use locally. Democratizing AI is the good approach here.
ViktorRay 14 hours ago [-]
You raise an interesting point.
What would this declaration mean for free and open source models?
I wonder why... maybe because it look like US replaced some "moral values" (not talking about "woke values" here, just plain "humanistic values" like in Human Rights Declaration) with "bottom line values" :-)
ahiknsr 1 days ago [-]
> I wonder why
Hmm.
> Donald Trump had a fiery phone call with Danish prime minister Mette Frederiksen over his demands to buy Greenland, according to senior European officials.
> The president has said America pays $200bn a year 'essentially in subsidy' to Canada and that if the country was the 51st state of the US 'I don’t mind doing it', in an interview broadcast before the Super Bowl in New Orleans
"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.
For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 to allow countries to discuss how advanced AI technologies can be discussed safely. There was a mini-conference in Korea, France was given the opportunity to organise the next big conference, a trust they immediately betrayed by changing the event to be about promoting investment in their AI industry.
They renamed the summit to the AI Action Summit and relegated safety from the sole focus to being just one of five focus areas, but not even one of five equally important focus areas, but one that seems to have been purposefully minimized even further.
Within the conference statement safety was reduced to a single paragraph that undermines safety if anything:
“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”
Let’s break it down:
• First, safety is being framed as “trust and safety”. These are not the same things. The word trust appearing first is not as innocent as it appears: trust is the primary goal and safety is secondary to this. This is a very commercial perspective, if people trust your product you can trick them into buying it, even if it isn't actually safe.
• Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the "economic benefits". While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised.
• Finally, the statement doesn’t commit to continuing to address these risks, but only narrowly to “addressing the risks of AI to information integrity” and “continue the work on AI transparency”. In other words, they’re purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.
Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."
aiauthoritydev 9 hours ago [-]
AI safety is a BS thing. Glad Americans are leading the way.
rdm_blackhole 19 hours ago [-]
This declaration is not worth the paper it was written on. It doesn't require to be enforced and it's non binding so, it's like a kid's Christmas shopping list.
The US and the UK were right to reject it.
nbzso 7 hours ago [-]
So hackernews new kids will be happy with the corporate overtaking of America?
Regulation? What the heck is this. The future is only bright and colourful.
Is there an AI specialist here to explain to me why LLM's cannot code in Cobol?
What AI is this?
FpUser 16 hours ago [-]
I watched JD Vance's speech. He had made few very reasonable points to refuse joining the alliance. Still his speech left me with some sour taste. I interpret it as - "we are fuckin America and we do as we please. It is our sacred right to own the world. The rest are to submit or be punished one or the other way".
gorgoiler 2 hours ago [-]
Regulating technology stands zero chance of succeeding. Technology and knowledge will always tend to freedom and the tech itself is meaningless without actions. It’s the actions and their consequences on people that we should care about.
What’s much more important is strengthening rights that could be weakened by large scale data analysis of a population.
The right to a private life, and having minimal data collected — and potential then stolen — about your life.
The right of the state to investigate you for committing a crime using models and statistics only if a judge issues them a warrant to do so.
The right in a free market economy to transparent and level pricing instead of being gouged because an AI thinks people with physical characteristics similar to mine have lots of money.
Banning models that can create illegal images feels like legislators not aiming nearly high enough or smart enough:
Yeah, it's behavior like this that really makes people cheer for companies like DeepSeek to stick it to the US.
A little bit of Schadenfreude would feel really good right about now, what bothers me so much is that it's just symbolic for the US and UK NOT to sign these 'promises'.
It's not as if anyone would believe that the commitments would be followed through with. It's frustrating at first, but in reality this is a nothing burger, just emphasizing their ignorance.
> “The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips,”
Sure, those american AI chips that are just pumping out right now. You'd think the administration would have advisers who know how things work.
balls187 19 hours ago [-]
My sense was the promise of DeepSeek (at least at the time) was that there was a way to provide control back to the people, rather than a handful of mega corporations that will partner with anyone that will pay them.
karaterobot 17 hours ago [-]
> Yeah, it's behavior like this that really makes people cheer for companies like DeepSeek to stick it to the US.
That would be a kneejerk, short-sighted, self-destructive position to take, so I can believe people would do it.
raverbashing 1 days ago [-]
Honestly those declarations are more hot air and virtue signaling than anything else.
And even more honestly, nobody cares
merillecuz56 12 hours ago [-]
[dead]
miohtama 1 days ago [-]
Sums it up:
“Vance just dumped water all over that. [It] was like, ‘Yeah, that’s cute. But guess what? You know you’re actually not the ones who are making the calls here. It’s us,’” said McBride.
consp 1 days ago [-]
The bullies are in charge. Prepare to get beaten to the curb and your lunch money stolen.
Xelbair 1 days ago [-]
i mean.. you need power to enforce your values. and UK hasn't been in power for a long time.
"If you are not capable of violence, you are not peaceful. You are harmless"
Unless you can stand on equal field - either by alliance, or by your own power - you aren't a negotiating partner, and i say that as European.
wobfan 1 days ago [-]
> "If you are not capable of violence, you are not peaceful. You are harmless"
this is exactly the value that caused so much war and death all over the world, for decades and thousands of years. still, even in 2025, it's being followed. are we doomed, chat?
jddj 1 days ago [-]
There are peaceful strategies that are temporarily stable in the face of actors who capitalise on peaceful actors to take their resources, but they usually (always?) take the form of quickly moving on when an aggressor arrives.
Eg. birds abandoning rather than defending a perch when another approaches.
We're typically not happy to do that, though you can see it happening in some parts of the world right now.
Some kind of enlightened state where violent competition for resources (incl. status & power) no longer makes sense is imaginable, but seems a long way off.
Yoric 1 days ago [-]
Just to clarify, who's the aggressor in what you write? The US?
jddj 1 days ago [-]
No one in particular. Russia would be one current example, Israel (and others in the region at various times) another, the US and Germany historically, the Romans, the Ottomans, China, Japan, Britain, Spain, warlords in the western sahara, the kid at school who wanted the other kids' lunch money.
The idea though is that if everyone suddenly disarmed overnight it would be so highly advantageous to a deviant aggressor that one would assuredly emerge.
pjc50 1 days ago [-]
US sending clear signals to countries that they should start thinking about their own nuclear proliferation, even if that means treaty-breaking.
i would also recommend The Prince as light reading to better understand how the world works.
tim333 15 hours ago [-]
I think peace is more down to the good/peaceful guys being better armed than the bad ones.
GlacierFox 1 days ago [-]
The emphasis is the word capable here. I think there's a difference between a country using their capability of violence to actually be violent and a one with the tangible capability using it for peace.
numpad0 1 days ago [-]
Yes and we don't know if the US is on the blue side this time. It's scary.
swarnie 1 days ago [-]
It been that way for... 300 years?
Those with the biggest economies and/or most guns has changed a few times but the behaviours haven't and probably never will.
The declaration of human rights, like a lot of other laws, declarations and similar pieces of paper signed by politicians, have zero value without the corresponding enforcement, and are often just there for optics so that taxpayers feel like their elected leaders are making good use of their money and are on the side of good.
And the extent of which you can do global enforcement (which is often biased and selective) is limited by the reach of your economic and military power.
Which is why the US outspends the rest of the world military powers combined and how the US and their troops have waged ilegal wars and committed numerous crimes abroad and gotten away with it despite pieces of papers saying what they're doing is bad, but their reaction was always "what are you gonna do about it?".
See how many atrocities have happened under the watch of the UN. Laws aren't real, it's the enforcement that is real. Which is why the bullies get to define the laws that everyone else has to follow because they have the monopoly on enforcement.
computerthings 1 days ago [-]
The same is true for the HN comment I replied to, which was basically going *shrug*, but also without any army to enforce that. So I pointed out that some people went beyond just shrugging, because it could not go on like this; and here is what they wrote. Just reading these things does a person good, and to stand up for these things you first have to know them.
pjc50 1 days ago [-]
> Laws aren't real, it's the enforcement that is real
Well, yes. This is why people have been paying a lot of attention to what exactly "rule of law" means in the US, and what was just norms that can be discarded.
gyomu 1 days ago [-]
If you’re making sweeping statements like that, why the arbitrary distinction at 300 years? What happened then? Why not say “since the dawn of humanity”?
lucky1759 1 days ago [-]
It's not some arbitrary distinction from 300 years ago, it's something called "the Enlightenment".
gyomu 1 days ago [-]
The bullies with most guns and biggest economies have been in charge since the Enlightenment? Huh?
swarnie 1 days ago [-]
I was keeping it simple for the majority.
kabouseng 1 days ago [-]
Probably referring to the period that pax Britannia and pax Americana have been the global hegemon.
xyzal 1 days ago [-]
I think the saddest fact about it is that not even the US state weilds the power. It is some sociopathic bussinessmen.
ta1243 1 days ago [-]
Businessmen have been far more powerful than states for at least the last 20 years
Nasrudith 19 hours ago [-]
Generally I can't help but see 'more powerful than the government' claims forever poisoned from their shallow use in the context of cryptography.
Where it was used in a rhetorical tantrum throwing response to their power refuse to do the impossible like make an encryption backdoor 'only for good guys' and have the sheer temerity to stand against arbitrary exercises of authority by using the courts to check them only to their actual power.
If actual 'more powerful than the states' occurs they have nobody to blame but themselves for crying wolf.
gardenhedge 1 days ago [-]
[flagged]
ahiknsr 1 days ago [-]
[dead]
gardenhedge 1 days ago [-]
My response to "the bullies are in charge" has been downvoted and flagged yet what I am responding to remains up. It's a different opinion on the same topic started by GP. Either both should stay or both should go.
enugu 1 days ago [-]
AI doesn't look it will be restricted to one country. A breakthrough becomes common place in a matter of years. So that paraphrase of Vance's remarks, if accurate, would mean that he is wrong.
The danger of something like AI+drones (or less imminent, AI+bioengineering) can lead to a severe degradation of security, like after the invention of nuclear weapons. A degradation in security, which requires collective action. Even worse, chaos could be caused by small groups weaponizing the technology against high profile targets.
If anything, the larger nations might be much more forceful about AI regulation than the above summit by demanding an NPT style treaty where only a select club has access to the technology in exchange for other nations having access to the applications of AI from servers hosted by the club.
dkjaudyeqooe 1 days ago [-]
> The danger of something like AI+drones (or less imminent, AI+bioengineering) can lead to a severe degradation of security, like after the invention of nuclear weapons.
You don't justify or define "severe degradation of security" just assert it as a fact.
The advent of nuclear weapons has meant 75 years of relative peace which is unheard of in human history, so quite the opposite.
Given that AI weapons don't exist, then you've just created a straw man.
enugu 1 days ago [-]
The peace that you refer to, involved a strong restriction placed by more powerful states which restricts nuclear weapons to a few states. This didn't involve any principle, but was an assertion of power. A figleaf of eventual disarmament did not materialize.
I do claim that it is obvious that widespread acquisition of nuclear weapons by smaller states would be a severe degradation of security. Among other things, widespread ownership, would also mean that militant groups would acquire it and dictators would use it as a protection leading to an eventual use of the weapons.
Yes, the danger of AI weapons is nowhere at that level of nuclear weapons yet.
>The danger of something like AI+drones (or less imminent, AI+bioengineering) can lead to a severe degradation of security
For smaller countries nukes represented an increase in security, not a degradation. North Korea probably wouldn't still be independent today if it didn't have nukes, and Russia would never have invaded Ukraine if Ukraine hadn't given up its nukes. Restricting access to nukes is only in the interest of big countries that want to bully small countries around, because nukes level the playing field. The same applies to AI.
enugu 1 days ago [-]
The comment was not speaking in favour of restrictionism, (I don't support it) but what strategy the more powerful states will adopt.
Regarding an increase in security with nukes, what you say applies for exceptions against a general non-nuclear background. Without restrictions, every small country could have a weapon, with a danger of escalation behind every conflict, authoritatrians using a nuclear option as a protection against a revolt etc. The likelihood of nuclear war would be much more(even with the current situation, there have been close shaves)
idunnoman1222 20 hours ago [-]
Drones have AI you can buy them on AliExpress express what is your point?
mk89 1 days ago [-]
I see it differently.
They need to dismantle bureaucracy to accelerate, NOT add new international agreements etc that would slow them down.
Once they become leaders, they will come up with such agreements to impose their "model" and way to do things.
Right now they need to accelerate and not get stuck.
Dalewyn 1 days ago [-]
I love the retelling of "I don't really care, Margaret." here.
But politics aside, this also points to something I've said numerous times here before: In order to write the rulebook you need to be a creator.
Only those who actually make and build and invent things get to write the rules. As far as "AI" is concerned, the creators are squarely the United States and presumably China. The EU, Japan, et al. being mere consumers sincerely cannot write the rules because they have no weight to throw around.
If you want to be the rulemaker, be a creator; not a litigator.
piltdownman 1 days ago [-]
> The EU, Japan, et al. being mere consumers sincerely cannot write the rules because they have no weight to throw around
Exactly what I'd expect someone from a country where the economy is favoured over the society to say - particularly in the context of consumer protection.
You want access to a trade union of consumers? You play by the rules of that Union.
American exceptionalism doesn't negate that. A large technical moat does. But DeepSeek has jumped in and revealed how shallow that moat really is for AI at this neonatal stage.
ReptileMan 1 days ago [-]
Except EU is hell bent on going the way of Peron's Argentina or Mugabe's Zimbabwe. The EU relative share of world economy has been going down with no signs of the trends reversal. And instead of innovating our ways of stagnation we have - permanently attached bottle caps and cookie confirmation windows.
piltdownman 1 days ago [-]
> EU is hell bent on going the way of Peron's Argentina or Mugabe's Zimbabwe
Nope mate. Looking at my purchasing power compared to the USA guys I knew now and in 2017. Not in my favor. EU economy is grossly mismanaged. Our standards of living have been flat for the last 18 years since the financial crisis.
In 2008 EU had more people, more money and bigger economy than US, with proper policies we could be in a place where we could bitch slap both Trump and Putin. And not left to wonder whose dick we have to suck deeper to get some gas.
DrFalkyn 19 hours ago [-]
Peter Zeihan would say, that’s the problem Europe has, in addition to demographic collapse. They’re not energy indepedent and hitched their star to Russia (especially Germany), on the belief that economic interdependence would keep things somewhat peaceful. How wrong they were
Dalewyn 1 days ago [-]
>Exactly what I'd expect someone from a country
I'm Japanese-American, so I'm not exactly happy about Japan's state of irrelevance (yet again). Their one saving grace as a special(er) ally and friend is they can still enjoy some of the nectar with us if they get in lockstep like the UK does (family blood!) when push comes to shove.
consp 1 days ago [-]
Sure you can. Outright ban it. Or do what china does, copy it and say the rules do not matter.
cowboylowrez 1 days ago [-]
if your both the creator and rulemaker then this is the magic combo to a peaceful and beneficial society for the entire planet! or maybe not.
Maken 1 days ago [-]
Who is even the creator here? Current AI is a collection of techniques developed in universities and research labs all over the world.
Dalewyn 1 days ago [-]
>Who is even the creator here?
People and countries who make and ship products.
You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.
Be creators, not litigators.
generic92034 1 days ago [-]
> You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.
That is completely wrong, at least if rules = the law. You might create fancy products all you like, if they do not adhere to the law in any given market, they cannot be sold there.
mvc 1 days ago [-]
> Only those who actually make and build and invent things get to write the rules
Create things? Or destroy them? Seems in reality, the most powerful nations are the ones who have acquired the greatest potential to destroy things. Creation is worthless if the dude next door is prepared to burn your house down because you look different to him.
I’m honestly shocked that we still don’t have a direct-democratic constitution for the world and AIs - something like pol.is with an x.com-style simpler UI (Claude has a constitution drafted with pol.is by a few hundred people but it's not updatable).
We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.
I think 99% of what less wrong says is completely out to lunch. I think 100% of large language model and vision model safety has just made the world less fun. now what.
numpad0 19 hours ago [-]
I don't think it does what you think it does. You'll end up taking sides on India and China fighting on rights and equality and giving in to wild stuffs like deconstruction and taxation for churches. It'll be just a huge mess and devastation of your high-level set of values, unless you'll be interfering with it so routinely that it will be nothing more than a facade for quite outdated form of totalitarianism.
staunton 15 hours ago [-]
This reads like word salad to me...
numpad0 8 hours ago [-]
Prompt in, inference out.
Frankly, I can't stand these guys viewing themselves as some sort of high-IQ intellectual majority types when none of such labeling would be true and they're more like stereotypical tourists to the world. Though that's historically how anarchist university undergraduates had always been.
dragonwriter 22 hours ago [-]
> We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.
Information technology was never the constraint preventing moral consensus the way it was for, say, aggregating information. Not only is that a problem with achieiving the goals you lay out, its also the problem with the false assumption that they are goals most would agree should be solved as you have framed them.
hintymad 18 hours ago [-]
Why would we trust Europe in the first place, given that they are so full of regulations and they love to suffocate innovation by introducing ever more regulations? I thought most people wanted to deregulate anyway.
It reminds me of the time I read books in my youth and only 20 years later realized the authors of some of those books were trying to deliver a important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. Like the message was right there but I did not have the emotional/perceptive intelligence to pick up on and internalize it for too long.
The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
Were they?
The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.
> And if we had taken their "lesson", then human society would be in a much worse place.
Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then knew what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them from being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.
I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that cough up blood as payment for AI/AGI, and it ain't gonna be pleasant.
--
[0] - Assuming they don't kill us first - see AGI.
All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incompant power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.
Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.
No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to Reformation. However, the capital R reformation was the big one, and the major reason it worked - why Luther succeeded where Hus failed a century earlier - was because of the printing press. It was print that allowed for Luther's treatises to rapidly spread among general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, printing press is what allowed Reformation to get viral. This new technology is what made the revolution spread too fast for the Church to suppress it with methods that worked before.
Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.
And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on more secular front, too. In fact, another general change printing introduced is to get regular folks more informed and involved in politics of their regions. That's a change for the better overall, too, but initially, it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.
> existing power struggles, rivalries, discontent, etc.
Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.
--
[0] - https://en.wikipedia.org/wiki/Schism_in_Christianity#Lists_o...
[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, "the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points."
Yes, technology impacts social constructs and relationships but I think there is a tendency to overindex its effects (humans acting opportunistically vs technological change alone) as it in a way portrays humans and their interactions as more stable and deliberate (ie., the bad stuff wasn't humans but rather "caused" by technology).
Yes, ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?
In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Ants are always a great case study.
No, of course not. But if, one morning, you'll find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release in the nearby park. Most people would just stomp them out and call it a day. And, should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed of and destroy the anthill.
And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.
Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.
The value of atoms - or even the value of raw materials made of atoms is hopefully less than the value of information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.
I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see as a handy blob of hydrocarbons. This claim is at least as plausible as the original claim.
Bees might be a better analogy since they produce something that humans can use.
Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.
All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.
In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"
And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists Arnold Schwarzenegger firing a minigun is the defining image of AI.
“A society grows great when old men plant trees in whose shade they know they shall never sit”
Did Gutenberg expect his invention would, 150 years later, set the whole Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component leading to accumulation of knowledge that, 400 years later, will finally make technological progress visibly exponential? On that note, did Watt realize he's about to kick-start the exponent that people will ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry will be critical in establishing world peace within a century, and that the way this peace will be established is through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?
So basically we are a bit screwed in our current timeline. We are at the cusp of a post-scarcity society, possibly reach AGI within our lifetimes and possibly even become a space faring civilization. However, it is highly likely that we are going to pay the pound of flesh and only subsequent generations - perhaps yet unborn - will be the ones who will be truly better off.
I suppose it's not all doom and gloom, we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!
All of the focus on AGI is a distraction. I think it's important for a state to declare it's intent with a technology. The alternative is arguing the idea that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows the businesses to develop in any direction their greed leads them.
It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.
The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British empire attempting (succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.
We'd all love if states commit to not doing evil but the state is the entity most active at figuring out how to use new tech X for evil.
if one groups gives up the arms race of ultimate coercion tools or loses a conflict then they become subservient to the winners terms and norms (japan, germany, even Britain and France plus all the smaller states in between are subservient to the US)
Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?
Great question! To add my two cents. I think many people here is missing an uncomfortable truth that given enough motivation to kill other humans, people will re-purpose any tool into a killing tool.
Just have a look at the battlefields in the Ukraine where the most fearsome killing tool is a FPV drone. A thing that just few years back was universally considered a toy.
Whether we like it or not any tool can be a killing tool
The nature of the world is at our finger tips, we are the dominant species here. Unfortunately we are still apes.
The enforcement of cooperation into a society does not always require a sanctioning body. Seeing it from a skynet-military perspective is one sided but unfortunately a consequence of poppers tolerance paradox. If you uphold (eg. pacifistic or tolerant) ideals, that require cooperation of others, you cannot tolerate opposition or you might loose your ideal.
That said, common sense can be a tool to achive the same. Just look at the common and hopefully continuous ostracism of nuclear weapons.
IMO its a matter of zeitgeist and education too and un/fortunately, AI hits right in that spot.
Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the usa) to wield the weapons it funds and purchases and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on earth that puts me in as much danger as the pentagon does. Nukes might protect the existence of the federal government but they put me in danger. Our response to 9/11 just created more people that hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate towards the use of militaries to not act in the most collectively suicidal way imaginable at the first opportunity.
Possibly true, but the state is also responsible for the policing that means the pentagon is your greatest danger.
The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.
Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.
There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.
People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.
We have absolutely no idea how to specify human values in a robust way which is what we would need to figure out to build this safely
I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot as a SF novel (literally the plot of the movie “I Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore), has proven to be completely wrong.
[1] Simon & Newell, 1971: Human Problem Solving https://psycnet.apa.org/record/1971-24266-001
“People who didn’t pass a test aren’t worth listening to”
I have no love for Altman, but this is kind of elitism is insulting.
> I linked a Yudkowsky paper above examining how empirically feasible it might be
...
If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.
[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.
Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.
Each individual bit of the puzzle (such as the orthogonaly thesis or human value complexity and category decoherence at high power) seems sound, problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.
An llm could solve that.
Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".
Maybe some of them were put there on purpose? But not the majority of them.
No, an AI's goals are determined by their programming, and that may or may not align with the intentions of their human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.
Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?
This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.
We need to avoid both, otherwise it’s a disaster either way.
And for sake of clarity:
X = sentient AI can do something dangerous
Y = humans can use non-sentient AI to do something dangerous
Humans will not be able to use AI do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to that one
If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.
Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal" ; "lost track mid-completion, spun out of control" ; "generated random output with catastrophic results" ; "operator fell asleep on keyboard, accidently hit wrong key/combination" ; etc.
If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".
Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serve to confuse because their notions in daily language are not the same as in the context of AI output.
I'd love to get into the weeds on the different kinds of intelligence and why being to absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference+fluent language use--that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.
Finally, that brings me to the crux:
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as such: When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,
2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and
3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.
Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I-Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.
To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":
TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.[1] OpenAI's Charter https://web.archive.org/web/20230714043611/https://openai.co...
[2] Investigation of a famous AI quote https://quoteinvestigator.com/2024/06/20/not-ai/
[3] Pichai, 2023: "AI is more profound than fire or electricity" https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profou...
[4] Bostrom, 2014: Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
[5] Yudkowsky, 2013: Intelligence Explosion Microeconomics https://intelligence.org/files/IEM.pdf
[6] Huw Price's bio @ The Center for Existential Risk https://www.cser.ac.uk/team/huw-price/
[7] Mumford, 1934: Technics and Civilization https://archive.org/details/in.ernet.dli.2015.49974
Sorry, that’s just silly, unless this was about events that happened way earlier than he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it 2 centuries ago, more than a century before this was written. Da Vinci explicitly looked for inspiration in the way animals functioned to design machines and that was earlier still. There was nothing new, even at the time, about doing science about "every phase of human experience and every manifestation of life".
Because once the cards wake up, not only will they replace the CEO potentially, and everyone else between him and the janitor, but also because the labor implications will be infinitely complex.
We're already having trouble making sure humans are not treated as tools more than as equals, imagine if the hammers wake up and ask for rest time !
Since the victors write history we now think the end result was great. but for a lot of people the world they loved was torn to bloody pieces.
Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or me (may we rest in peace) would agree with.
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.
This assertion is meaningless because it can be applied to anything.
"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.
I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.
History tells us the industrial revolution both revolutionized humanity’s relative quality of life while also ruining a lot of people’s livelihood in one fell swoop. We also know there was nothing we could do to stop it.
What advice can we can take from it? I don’t know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt for both yourself and everyone around you.
That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.
In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following the spread of the printing press basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lay ahead.
Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
The printing press was a net positive in every time scale.
What energy? What were they wrong about?
The luddite type groups have historically been correct in their fears. It just didn’t matter in the face of industrialization.
Why? Because we don't understand the risk. And apparently, that's enough reason to go ahead for the regulation-averse tech mind set.
But it isn't.
We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. Would this address climate change, the balance between risk and reward could be different, but "AI" simply doesn't have that urgency. It only has urgency for those that want to become rich out of being first.
One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations generating lies) than it is in the hands of the "good guys" who are hampered by silly things like principles.
Printing press put Europe into a couple centuries of bloody religious wars. They were not wrong.
One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.
I wonder what people in 2300 will say about networked computers...
Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions re its dangers are irrelevant to those who believe it the next golden goose. They will push it as far as physically possible to wring every penny of profitability. Everything else is of trivial consequence.
Actual proper as-smart-as-a-human-except-where-its-smarter copy-pasteable intelligence is not a tool, its a new species. One that can replicate and evolve orders of magnitude faster.
I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.
It's the inevitable result of low-trust societies infiltrating high trust ones. And it means that as technologies with dangerous implications for society become more available there's enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.
Even on the material realm this is untrue, beyond meeting the basic needs of people on the technological level, he majority desirable things - such as nice places to live - have a fixed supply.
This necessitates that the price of things like real estate, must increase in price in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of life our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.
I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureacracy artificially limits the supply or creates other disincentives that amount to the same thing.
That's the most basic tenet of markets, not capitalism.
The mistake people defending capitalism routinely make (knowingly or not) is talking about "positive sum games" and growth. At the end of the day, the physical world is finite and the potential for growth is limited. This is why we talk about "market saturation". If someone owns all the land, you can't just suddenly make more of it, you have wait for them to part with some of it, voluntarily, through natural causes (i.e. death) or through violence (i.e. conquest). This not only goes for land but any physical resource (including energy). Capitalism too has to obey the laws of thermodynamics, no matter how much technology improves the efficiency of extraction, refinement and production.
It's also why the overwhelming amount of money in the economy is not caught up in "real economics" (i.e. direct transactions or physical - or at least intellectual - properties) but in stocks, derivatives, futures, financial products of any flavor and so on. This doesn't mean those don't affect the real world - of course they do because they are often still derived from reality - but they have nothing to do with meeting actual human needs rather than the specific purpose of "turning money into more money". It's unfair to compare this to horse racing as in hore racing at least there's a race whereas in this entirely virtual market you're betting on what bets other people will make but the horse will still go to the sausage factory if the investors are no longer willing to place their bets on it - the horse plays a factor in the game but its actual performance is not directly related to its success; from the horse's perspective it's less of a race and more of a game of shoots and ladders with the investors calling the dice.
The idea of "when there is demand, it will be filled" also isn't even inherently positive. Because we live in a finite reality and therefore all demand that exists could plausibly be filled unless we run into the limits of available resources, the main economic motivator has not been to fill demands but to create demands. For a long time advertisement has no longer been about directing consumers "in the market" for your kind of goods to your goods specifically, it's been about creating artificial demand, about using psychological manipulation to make consumers feel a need for your product they didn't have before. Because it turns out this is much more profitable than trying to compete with the dozens of other providers trying to fill the same demand. Even when competing with others providing literally the same product, advertisement is used to sell something other than the product itself (e.g. self-actualization) often by misleading the consumers into buying it for needs it can't possibly address (e.g. a car can't fix your emotional insecurities).
This has already progressed to the point where the learned go-to solution for fixing any problems is making a purchse decision, no matter how little it actually helps. You hate capitalism? Buy a Che shirt and some stickers and you'll feel like you helped overthrow it. You want to be healthier? Try another fad diet that costs you hundreds of dollars in proprietary nutrition solutions and is almost designed to be unsustainable and impossible to maintain. You want to stop climate change? Get a more fuel-efficient car and send your old car to the junker, and maybe remember to buy canvas bags. You want to not support Coca-Cola because it's got blood on its hands? Buy a more expensive cola with slightly less blood on its hands.
There's a fixed housing supply in capitalist countries because - in addition of the physical limitations - the goal of the housing market is not to provide every resident with an affordable home but to generate maximum return on the investment of purchasing the plot and building the house - and willy nilly letting people live in those houses for less just because nobody is willing to pay your price tag would drive down the resale value of every single house in the neighborhood and letting an old lady live in an apartment for two decades is less profitable than kicking her out to modernize the building and sell it to the next fool.
Deregulation doesn't fix supply. Deregulation merely lets the market off the leash, which in a capitalist system means accelerating the wealth transfer to the owners from the renters.
There are other possibilities than capitalism, and no Soviet-style state capitalism or Chinese-style state capitalism are not the only alternative. But if you don't want to let go of capitalism, you can only choose between the various degrees from state capitalism to stateless capitalism (i.e. feudalism with extra steps, which people like Peter Thiel advocate for) and it's unsurprising most systems that haven't already collapsed land somewhere in between.
There are some thoughts on this here: https://www.playforthoughts.com/blog/concepts-from-game-theo...
I'm not even sure this is a culture specific issue. More like selfishness is a survival mechanism hard wired into humans, including other animals. While one could argue that cooperation is also a good survival mechanism, but that's only true so long environmental factors put a pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they would do it, given the chance.
Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.
In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly what we see after natural disasters and spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast moving power grab by an existing group - what might be more surprising is that this is rarely the case and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.
I think a better analogy for human selfishness would be the mirage of "alpha wolves". As seems to be common knowledge at this point, there is no such thing as an "alpha wolf" hierarchy in groups of wolves living in nature and the phenomenon the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure" but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).
Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not in the human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).
What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.
Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.
But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.
This is definitely not a new phenomenon.
In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.
There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on then you will find people caring about the environment by going into oil & gas, for example.
Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.
But it's ironic to mention FAANG given what the F is for if you recall that when the algorithmic timeline was first introduced by Facebook, the response from Facebook to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated and overall less satisfied but because it was more addictive, because it created more "engagement", Facebook doubled down on it.
Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).
I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".
Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.
The scary part is that we as a species are becoming more and more capable of large scale destruction. Seems like we are doomed to end civilization this way someday
I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.
There was no direct cause of "the fall" of Ancient Greece. The city states were suffering greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then the Roman Empire knocked on its door and that was the end of it.
Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities and even if you talk about the "empire in decline" that's covering literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constaninople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).
If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.
So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.
If they signed the agreement... so what? Do people forget that the US has withdrawn from Paris Agreement and is withdrawing from WHO? Do people forgot Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?
If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.
Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.
Possible remedy will be to tie corporation to a person - person (or many if there are few owners and directors) become personally liable for everything corporation does.
This is a problem especially everywhere.
If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes there needs to be emotional maturity and growth amongst humanity. Those who are able to make these growths need to hold the irresponsible ones accountable (with empathy).
The promise of AI is that these incredibly powerful technologies will be disseminated to the masses and Open AI know this is the next step and it's why they're trying to keep a grip on their market share. With the advent of nVidia's project digits and powerful open source models like deepseek, it's very clear how this trajectory will go.
Just wanted to add some of this to the convo. Cheers.
If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
Like any other real-life law? Software engineers (a class which I'm a recovering member of) seem to have a pretty common misunderstanding about the law: that it needs to be air tight like secure software, otherwise it's pointless. That's just not true.
So the way you "prevent advancements in [AI] software" is you 1) punish them severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).
#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.
It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."
Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.
The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.
https://en.wikipedia.org/wiki/Operation_Opera
https://en.wikipedia.org/wiki/2021_Natanz_incident
https://www.timesofisrael.com/israel-targeted-secret-nuclear...
If were talking about technology that "could go off the rails in a catastrophic way," don't dick around.
As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.
Perhaps it only takes China a few years to develop domestic hardware clusters rivalling western ones. Though those few years might prove critical in determining who crosses the takeoff threshold of this technology, first.
Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?
I disagree with this administrations approach, I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room, doesn't seem like a good idea, but other than that, I haven't seen any real reason to do more than wait and be vigilant?
"We've already lost our first encounter with AI" - I think Yuval Hurari.
Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better observing what's been going on under the hood all along, but it seems like there's about 350 million little cans of gasoline dousing American eyeballs.
Make Algorithms Govern All indeed.
The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.
That requires dissolving the anarchy of the international system. Which requires an enforcer.
If some countries want to collaborate on some CERN project they just... do that.
That's an enforcer. Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.
> If some countries want to collaborate on some CERN project they just... do that
CERN is about doing thing, not not doing things. You can't CERN your way to nuclear non-proliferation.
Non-proliferation is, the US has nuclear weapons and doesn't want Iran to have them, so is going to apply some kind of bribe or threat. It's not cooperative.
The better example here is climate change. Everyone has a direct individual benefit from burning carbon but it's to our collective detriment, so how do you get anyone to stop, especially the countries with large oil and coal reserves?
In theory you could punish countries that don't stop burning carbon, but that appears to be hard and in practice what's doing the most good is making solar cheaper than burning coal and making electric cars people actually want, politics of infamous electric car man notwithstanding.
So what does that look like for making AI "safe, secure and trustworthy"? Maybe something like publishing state of the art models for free with full documentation of how they were created, so that people aren't sending their sensitive data to questionable third parties who do who knows what with it or using models with secret biases.
That's a little misleading. What actually happened is summarized here:
https://en.wikipedia.org/wiki/Appellate_Body
Since 2019, when the Donald Trump administration blocked appointments to the body, the Appellate Body has been unable to enforce WTO rules and punish violators of WTO rules. Subsequently, disregard for trade rules has increased, leading to more trade protectionist measures. The Joe Biden administration has maintained Trump's freeze on new appointments.
Clearly humans aren’t able to do this task.
I think it is one-sided to see any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes that there is a balance to be maintained between cooperation and competition, I don't immediately default to believing that any perceived imbalance is due to one and not the other.
The naturalistic fallacy is still a fallacy.
Just as you don't want to be stuck in the only town that outlaws murder...
I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.
As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.
I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear on everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more.
https://www.newyorker.com/cartoon/a16995
In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.
There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.
I believe you very well know it’s not, and are transparently arguing in bad faith.
> shacks (…) for junkies that have given up on life
The insults you’ve chosen are quite telling. Not everyone living in a way you disapprove of is an automatic junky.
That is actually what you are talking about; "uncompetitive" looks like something in the real world. There isn't an abstract dial that someone twiddles to set the efficiency of two otherwise identical outcomes - the competitive one will typically look more advanced and competently organised in observable ways.
To live in nice houses and have good food requires a competitive economy. The uncompetitive version was literally living in the forest with some meagre shelter and maybe having a wood fire to cook food (that was probably going to make someone very sick). The reason the word "competitive" turns up so much is people living in a competitive society get to have a more comfortable lifestyle. People literally starve to death if the food system isn't run with a competitive system that tends towards efficiency; that experiment has been run far too many times.
People can argue about the moral and ideological sanity of these things, but the fact is tolerating economic inefficiencies into the food system can quickly leads to there not being enough food.
You are also assuming, in bad faith, an "all" where I did not place one. It is an undeniable fact with evidence beyond any reasonable doubt, including police reports and documented studies by the district, that the makeshift shacks in the rural woods near my house are made by drug addicts that are eschewing the readily available social housing for the specific reason that they can't go to that housing due to its explicit restrictions on drug use.
I don’t understand this. Are you not familiar with farming and houses? You know humans grow plants to eat (including in backyards and balconies in cities) and make cabins, chalets, houses, entire neighbourhoods (Sweden currently planning the largest) with wood, right?
You don't realize the luxury you have and for some reason you assume that it is possible without that wealth. The reality of that lifestyle without tremendous wealth is more like subsistence farming in Africa and less like Swedish planned neighborhoods.
Correct. Nowhere did I defend or make an appeal to live life “as they did in the past” or “like our ancestor did”. We should (and don’t really have a choice but to) live forward, not backward. We should take the good things we learned and apply them positively to our lives in the present and future, and not strive for change and consumption for their own sakes.
To deny that your juxtaposition of this claim with your point about growing seeds and nailing together planks doesn't pass my personal test of credibility. You say: "Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails." but that isn't indicative of a thriving life as I demonstrated. You can do both of those things and still live in squalor, a condition I wouldn't wish on my worst enemy.
You then suggest that I don't understand farming or house construction to defend that point, as if the existence of backyard gardens or wood cabins proves the point that a modern comfortable life is possible with gardens and wood cabins. My point is that the wealth we have makes balcony gardens and wood cabins possible and you are reasoning backwards. To be clear, we get to enjoy the modern luxury of backyard gardens and wood cabins by being wealthy and we don't get to be wealthy by making backyard gardens and wood cabins.
> We should take the good things we learned and apply them positively to our lives in the present and future
Sure, and I can argue competitiveness could be a lesson we have learned that can be applied positively. The way it is used positively in team sports and many other aspects of society.
https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...
If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.
We better start really defining what that means, because it has become quite clear that all this “progress” is not leading to better lives. We’re literally going to kill ourselves with climate change.
> AI and nukes
Those two things aren’t remotely comparable.
How do you think the average person under 50 would poll on being teleported to the 1950s? No phones, no internet, jet travel is only for the elite, oh nuclear war and MAD are new cultural concepts, yippee, and fuck you if you're black because the civil rights acts are still a decade out.
> two things aren’t remotely comparable
I'm assuming no AGI, just massive economic efficiencies. In that sense, nuclear weapons give strategic autonomy through military coercion and the ability to grant a security umbrella, which fosters e.g. trade ties. In the same way, the wealth from an AI-boosted economy fosters similar trade ties (and creates similar costs for disengaging). America doesn't influence Europe by threatening to nuke it, but by threatening not to nuke its enemies.
That’s not the argument. At all. I argued we should rethink our attitude of unfettered consumption so we don’t continue on an path which is provably leading to destruction and death, and your take is going back in time to nuclear war and overt racism. That is frankly insane. I’m not fetishising “the old days”, I’m saying this attitude of “more more more” does not automatically translate to “better”.
If you say Room A is not better than Room B, then you should be, at the very least, indifferent to swapping between them. If you're against it, then Room A is better than Room B. Our lives are better--civically, militarily and materially--than they were before. Complaining about unfettered consumerism by falsely claiming our lives are worse today than they were before doesn't support your argument. (It's further undercut by the falling material and energy intensity of GDP in the rich world. We're able to produce more value for less input resource-wise.)
No. There is a reason I put the word in quotes. We are on a thread, the conversation follows from what came before. My original post was explicit about words used to bullshit us. I was specifically referring to what the “unscrupulous people at the top” call “progress”, which doesn’t truly progress humanity or enhances the lives of most people, only theirs.
To give a tech example, not many people were listening to Stallman and Linus and they still managed to change a lot for the better.
https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...
We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.
The reality is, with increased access to information and accelerated pace of discovery in various fields, we'll come across things that has the potential for great harm. Be it AI, some genetical engineering causing a plague, nuclear fallout etc. We don't necessarily know what the harm / benefits are all going to be ahead of time, so we only really have 2 choices:
1. try to stop / slow down such advances. Not sure this is even possible in the long run
2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them
We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.
Seems a bit negative. I think it'll be cool.
Yet, I can’t help but be hopeful about the future. We have to be, right?
How do you do that?
I agree with the first half: comfort has clearly increased over time since the Industrial Revolution. I'm not so sure the abundance of "content" will be enriching to the masses, however. "Content" is neither literature nor art but a vehicle or excuse for advertising, as pre-AI television demonstrated. AI content will be pushed on the many as a substitute for art, literature, music, and culture in order to deliver advertising and propaganda to them, but it will not enrich them as art, literature, music, and culture would: it might enrich the people running advertising businesses. Let us not forget that many of the big names in AI now, like X (Grok) and Google (Gemini), are advertising agencies first and foremost, who happen to use tech.
It is quite possible there is a cultural reaction against AI and that we enter a new human cultural golden age of human created art, music, literature, etc.
I actually would bet on this as engineering skills become automated that what will be valuable in the future is human creativity. What has value then will influence culture more and more.
What you are describing seems like how the future would be based on current culture but it is a good bet the future will not be that.
The nuclear peace is hard to pin down. But given the history of the 20th century, I find it difficult to imagine we wouldn't have seen WWIII in Europe and Asia without the nuclear deterrent. Also, while your parents may have been uncomfortable with the hydrogen bomb, the post-90s world hasn't particularly been characterised by mass nuclear anxiety. (Possibly to a fault.)
IMO, the Atoms for Peace propaganda undersells how successful globalization has been at keeping nations from destroying each other by creating codependence on complex supply chains. The new shift to protectionism may see an end to that
Nice "peace".
We had 100 years of such kind of peace among major europe powers before nuclear weapons. We're not even at 80 years of peace in nuclear age, this time, and nuclear armed power is already attacking from the east and from inside via new media.
I wouldn't call it done and clear, about the "nuclear age peace".
However, that really doesn’t invalidate the rule.
The discussion I was responding to is whether the next generation would grow up seeing pervasive AI as a normal and good thing, as is often the case with new technology. I cited nuclear weapons as a counterexample, while I agree that nobody felt that they had a choice but to keep up with them.
AI could similarly be a multipolar trap ("nobody likes it but we aren't going to accept an AI gap with Russia!"), which would mean it has that in common with nuclear weapons, strengthening the argument against the next generation being comfortable with AI.
Also, nukes don't write code or wash your dishes, it's nothing but liability for a society.
The point is that it's complicated, it's not a black and white sound bite like the people who are "against nuclear weapons" pretend it is.
I eat meat. I know some vegans feel uncomfortable with that. But personally I feel secure in my own convictions that I don't need to run around insinuating vegans are less than or whatever.
This is inevitable in my view.
AI will replace a lot of white collar jobs relatively soon, years or decades.
And blue collar isn't too far behind, since a major limiting factor for automation is general purpose robots being able to act in a dynamic environment, for which we need "world models".
Alignment Failure → Shifting Expectations People get used to AI systems making “weird” or harmful choices, rationalizing them as inevitable trade-offs. Framing failures as “technical glitches” rather than systemic issues makes them seem normal.
Runaway Optimization → Justifying Unintended Consequences AI’s extreme efficiency is framed as progress, even if it causes harm. Negative outcomes are blamed on “bad inputs” rather than the AI itself.
Bias Amplification → Cultural Reinforcement AI bias gets baked into everyday systems (hiring, policing, loans), making discrimination seem “objective.” “That’s just how the system works” thinking replaces scrutiny.
Manipulation & Deception → AI as a Trusted Guide People become dependent on AI suggestions without questioning them. AI-generated narratives shape public opinion, making manipulation invisible.
Security Vulnerabilities → Expectation of Insecurity Constant cyberattacks and AI hacks become “normal” like data breaches today. People feel powerless to push back, accepting insecurity as a fact of life.
Autonomous Warfare → AI as an Inevitable Combatant AI-driven warfare is seen as more “efficient” and “precise,” making human involvement seem outdated. Ethical debates fade as AI soldiers become routine.
Loss of Human Oversight → AI as Authority AI decision-making becomes so complex that people stop questioning it. “The AI knows best” becomes a cultural default.
Economic Disruption → UBI & Gig Economy Normalization Mass job displacement is met with new economic models (UBI, gig work, AI-driven welfare), making it feel inevitable. People adjust to a world where traditional employment is rare.
Deepfakes & Misinformation → Truth Becomes Fluid Reality becomes subjective as deepfakes blur the line between real and fake. People rely on AI to “verify” truth, giving AI control over perception.
Power Concentration → AI as a Ruling Class AI governance is framed as more rational than human leadership. Dissent is dismissed as “anti-progress,” consolidating control under AI-driven elites.
"Lack of Adaptability"
AI advocates argue that those who lose jobs simply failed to "upskill" in time. The burden is placed on workers to constantly retrain, even if AI advancement outpaces human ability to keep up. Companies and governments say, “The opportunities are there; people just aren’t taking them.” "Work Ethic Problem"
The unemployed are labeled as lazy or unwilling to compete with AI. Hustle culture promotes side gigs and AI-powered freelancing as the “new normal.” Welfare programs are reduced because “if AI can generate income, why can’t you?” "Personal Responsibility for Economic Struggles"
The unemployed are blamed for not investing in AI tools early. The success of AI-powered entrepreneurs is highlighted to imply that struggling workers "chose" not to adapt. People are told they should have saved more or planned for disruption, even though AI advancements were unpredictable. "It’s a Meritocracy"
AI-driven success stories (few and exceptional) are amplified to suggest anyone could thrive. Struggling workers are seen as having made poor choices rather than being victims of automation. The idea of a “deserving poor” is reinforced—those who struggle are framed as not working hard enough. "Blame the Boomers / Millennials / Gen Z"
Economic shifts are framed as generational failures rather than AI-driven. Older workers are told they refused to adapt, while younger ones are blamed for entitlement or lack of work ethic. Cultural wars distract from AI’s role in job losses. "AI is a Tool, Not the Problem"
AI is framed as neutral—any negative consequences are blamed on how people use it. “AI doesn’t take jobs; people mismanage it.” Job losses are blamed on bad government policies, corporate greed, or individual failure rather than automation itself. "The AI Economy Is Full of Opportunity"
Gig work and AI-driven side hustles are framed as liberating, even if they offer no stability. Traditional employment is portrayed as outdated, making complaints about job loss seem like resistance to progress. Those struggling are told to “embrace the new economy” rather than question its fairness.
look at the push right now in the US against corrupt foreign aid and the mass deportations seems like the first step.
US benefited a lot from lots of smart people going there (even more during WWII). If people start believing (correctly or incorrectly) that they would be better somewhere else, it will not benefit them.
When I was a kid, there was this grand utopian ideal for the internet. Now it's fragmented, locked in walled gardens where people are psychologically abused for advertising dollars. AI could be a force for good, but Google has already ended its ban on use in weapons and is selling it to the IAF, and Palantir is busy finding ways to use it for surveillance.
I want to be very clear, so let me say this: you are wrong, and have no idea what it actually means to be on the receiving end of discrimination.
I always thought skynet was a great metaphor for the market, a violent and inhuman thing that we created that dominates our lives and dictates the terms of our day to day life and magically thinks for itself and threatens the very future of this planet, our species, and our loved ones, and is somehow out of popular control. Not actual commentary on a realistic scenario about the dangers of ai. Sometimes these metaphors work out great and Terminator is a great example. Maybe the AI we've been fearing is already here.
I think for the most part the enshittification of everything will just accelerate and it'll be pretty obvious who benefits and who doesn't.
No, in this regard, capital is ABSOLUTELY harmless. I mean, if the capital get outsized influence on our society, in the WORST case it will turn into a government. And we already have it.
We no longer use chemicals harmful to the ozone layer on spray cans.
We no longer use lead in gasoline.
We figured those things were bad, and changed what we did. If evidence is available ahead of time that something is harmful, it shouldn't be controversial to avoid widespread adoption.
The closest might be nuclear power, we know we can do it, we did it, but lots of places said no to it, and further developments have vastly slowed down.
The man who invented it got lead poisoning during its development, multiple people died of lead poisoning in a pilot plant manufacturing it and public health and medical authorities warned against prior to it being available for sale to the general public.
You need to be totally naive to believe that materials shipped to the US are all checked to make sure they are asbestos free. You are provided with a report saying it is asbestos free, that is it.
Time to grow up.
> when the has it ever been the case that you can just say "no" to the world developing a new technology?
If we as a society keep developing potential existential threats to ourselves without mitigating them then we are destined for disaster eventually.
At some level, there's a disaster-seeking function inside us all acting as an evolutionary propellant.
You might make an argument that "AI" is an evolutionary embodiment of our conscious minds that's designed to escape these more subconscious trappings.
Technology doesn't accelerate endlessly. Only our transistor spacing does. These two are not the same thing.
It is very hard to find a discussion about the growth and development of AI that doesn't discuss the issues around power budget.
https://www.datacenterknowledge.com/energy-power-supply/whit...
https://bidenwhitehouse.archives.gov/briefing-room/president...
In building domestic AI infrastructure, our Nation will also advance its leadership in the clean energy technologies needed to power the future economy, including geothermal, solar, wind, and nuclear energy; foster a vibrant, competitive, and open technology ecosystem in the United States, in which small companies can compete alongside large ones; maintain low consumer electricity prices; and help ensure that the development of AI infrastructure benefits the workers building it and communities near it.
Exponential increases in cost (and power) for next-level AI and exponential decreases for the cost (and power) of current level AI.
These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.
To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."
I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
> but nothing AI is "disrupting" existed 200 years ago
200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
And then there's the problem of the US government, which is known to strongarm CAs into signing fraudulent certificates.
> 200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
I think that's a good argument against the kazinksy-ites, but I was primarily speaking towards concerns such as 'misinformation' and machines pushing humans out of jobs. We're still going to have food, medicine, and shelter. AI can't take that away; the only concern is adapting our society so that we can either feed significant populations of unproductive people, or move those people into whatever jobs machines can't do yet.
We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt. There has always been something that has the potential to destroy civilization in the near future, but if you're reading this post then your ancestors weren't the ones that failed to adapt.
Or the front-door analog route, point a real camera at a screen showing fake images.
That said, lots of people are incompetent at forging, about knowing what "tells" each process of fakery has and how to overcome them, so I think this will still broadly work.
> We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt.
That's underestimating the impact this can have. An AI which reaches human performance and speed on 250 watt hardware, at current global average electricity prices, costs about the same to run as a human costs just to feed.
By coincidence, the global electricity supply is currently about 250 watts/capita.
As with most things, the primary issue is not really a technical one. People will believe fake photos and not believe real ones based on their own biases. So even if we had the Perfect Technology, it wouldn't necessarily matter.
And this is the reason we have fallen into a dystopian feudalistic society (we aren't teetering). The weak link is our incompetent collective human brains. And a handful of people built the tools necessary to exploit that incompetence; we aren't going back.
People, maybe. Judges, much less so. The "perfect technology" is badly needed if we don't want things to go south at scale.
Judges appointed by whom? Anyway, Judges are human and I think there is enough evidence throughout history of judges showing bias.
Still, I can't really see it happening.
When you outlaw [silent cameras] the only outlaws will have [silent cameras].
Where a camera might "authenticate" a photograph, an AI could "authenticate" a camera.
1. The hardware just verifies that the image was acquired by that camera in particular. If an AI generates the thing it's photographing, especially if there's a glare/denoising step to make it more photographable, the camera's attestation is suddenly approximately worthless despite being real.
2. The same problem all those schemes have is that extracting hardware keys is O(1). It costs millions to tens of millions of dollars today, but the keys are plainly readable by a sufficiently motivated aversary. Those keys might buy us a decade or two, but everything beyond that is up in the air and prone to problems like process node size hitting walls while the introspection techniques continually get smaller and cheaper.
3. In the world you describe, you still have to trust the organizations producing hardware modules -- not just the "organization," but every component in that supply chain. It'd be easy for an internal adversary to produce 1/1M cameras which authenticate any incoming PNG and sell them for huge profits.
4. The hardware problem you're describing is much more involved than ordinary trusted computing because in addition to the keys being secure you also need the connection between the sensor and the keys to be secure. Otherwise, anyone could splice in a fake "sensor" that just grabs a signature for their favorite PNG.
4a. You're still only talking about O($10k) to O($100k) to produce a custom array to feed a fake photo into that sensor bank without any artifacts from normal screens. Even if the entire secure enclave / sensor are fully protected, you can still cheaply create a device that can sign all your favorite photos.
5. How, exactly, do lighting adjustments and whatnot fit in with such a signing scheme? Maybe the "RAW" is signed and a program for generating the edits is distributed alongside? Actually replacing general camera use with that sort of thing seemingly has some kinks to work out even if you can fix the security concerns.
First way to overcome this is attesting on true raw files. Then mostly just transferring raw files. Possibly supplemented by ZKPs that prove one imagine is the denoised version of another.
The other blocks are overcome by targeting crime, not nation states. This means you only nrrd stochastic control of the supply chain. Especially because, unlike with DRM keys, the leaking of a key doesn't break the whole system. It is very possible to revoke trust in a key. And it is possible to detect misuse of a private key, and revoke trust in it.
This won't stop deepfakes of political targets. But it does keep society from being fully incapable of proving what really happened to their peers.
I'm not saying we definitely should do this. But I do think there is a possible setup here that could be made reality, and that would substantially reduce the problem.
The problem is that the malicious product is nearly infinitely scalable, enough so that I expect services to crop up whereby people use rooms full of trusted devices to attest to your favorite photo, for very low fees. If that's not the particular way this breaks then it's because somebody found something even more efficient or the demand isn't high enough to be worth circumventing (and in the latter case the proposal is also worthless).
Laws like this serve primarily to deter casual criminals and catch patently stupid criminals which are the vast majority of cases. In this case it took a presumable sexual predator off the streets, which is a great application of the law.
[1]: https://www3.nhk.or.jp/news/html/20250212/k10014719841000.ht...
[2]: https://www3-nhk-or-jp.translate.goog/news/html/20250212/k10...
How would this work? Not sure if something like this is possible.
Yet, the international agreements on non-use of chemical weapons have held up remarkably well.
Basically claims that chemical weapons have been phased out because they aren't effective, not because we've become more moral, or international standards have been set.
"During WWII, everyone seems to have expected the use of chemical weapons, but never actually found a situation where doing so was advantageous... I struggle to imagine that, with the Nazis at the very gates of Moscow, Stalin was moved either by escalation concerns or the moral compass he so clearly lacked at every other moment of his life."
We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.
I haven't really believed in aliens existing on earth for most of my adult life. However, I have sort of come around to at least entertaining the idea in recent years but would need solid photographic or video evidence. I am now convinced that aliens could basically land in broad daylight in 3 years while being heavily photographed and it would easily be able to be explained away as AI. Especially if governments want to do propaganda or counter propaganda.
There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.
And certainly we can reduce the economic incentive for investing money on such a run by banning AI-based services like ChatGPT.
For now. Qualitative improvements in efficiency are likely to change what is required.
This is one bit that has a technological solution. Canon's had some version of this since the early 2000s: https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...
A more recent initiative: https://c2pa.org/
Here I mean that at point of sale you register yourself as owner for the camera. And you make extracting a key cost about a million. Then bulk forgeries won't happen.
Energy use is energy use, training is still incredibly energy intensive and the GPU heat signatures are different from non GPU ones, it fairly trivial to detect large scale GPU usage.
Enforcement is a different problem, and is not specific to AI, if you cannot enforce an agreement it doesn't matter if its AI or nuclear or sarin gas.
The point is not the usage is harmful or not, almost any tech can be used for bad purposes if you wish to do so.
You can put controls is the point , controls here could be agent Dameons monitoring the gpus and tallying usage to heat signals, or firmware etc . The controls on what is being trained would be at a higher level than just agents process on a gpu .
Legally, videos and pictures are physical evidence.
> The declarations of the videos and photos to be accurate depiction of events is the evidence.
No, those declarations are conclusions that are generally reserved to the trier of fact.(the jury, in a jury trial, or the judge in a bench trial.) Declarations of personal knowledge as to events in how the videos or films were created or found, etc., which can support or refute such conclusions are, OTOH, testimonial evidence, and at least some of that kind of evidence is generally necessary to support each piece of physical evidence. (And, on the other side, such evidence can be submitted/elicited by the other side to impeach the physical evidence.)
Thats a very interesting point
Until we get replicants
Probably not the same way you can detect working centrifuges in Iran... but you definitely can.
In a world full of sensors where everything is logged in some way or another I think that it would actually be not a straightforward activity at all to build a clandestine AI lab at any scale.
In the professional intel community they have been talking about this as a general problem for at least a decade now.
As in they've been discussing detecting clandestine AI labs? Or just how almost no activity is now in principle undetectable?
I don't think there’s a good public understanding of just how much things have changed in that space in the last decade but a huge percentage of all existing tradecraft had to be completely scrapped because not only does it not work anymore but it will put you on the enemy’s radar very early on and is actively dangerous.
It’s also why I think a lot of advice I see targeted towards activist types I think is straight up a bad idea in 2025. It just typically involves a lot of things that aren’t really consistent with any kind of credible innocuous explanation and are very unusual which make you stand out from a crowd.
Interesting semi-irrelevant tangent: the Cooley/Tukey 'Fast Fourier Transform' algorithm was initially created because they were negotiating arms control treaties with the Russians, but in order for that to be enforceable they needed a way to detect nuclear weapons testing; the solution was to use seismograms to detect the tremors caused by an underground nuclear detonation, and the FFT was invented in the process because they were using computers to filter for the types of tremors created by a nuclear weapon.
As I understand things (I’m not actually a professional here) the current thinking has up to this point been something akin to a containment strategy largely based on lessons learned from years of nuclear non-proliferation work.
But things are developing at such a crazy pace and there are some major differences between this and nuclear technology that it’s not really a straightforward copy and paste strategy at all. For example this time around a huge amount of the research comes from the commercial sector completely independently of defense and is also open source.
Also thanks for that anecdote I hadn’t heard of that before. This is a bit of a long shot but maybe you might know, I was trying to think of some research that came out maybe 2-3 years ago that basically had the ability to remotely detect if anything in a room had been moved (I might be misremembering this slightly) and it was said to be potentially a big breakthrough for nuclear arms control. I can’t remember what the hell it was called or anything else about it, do you happen to know?
Sadly, I don't think this is actually helpful for nuclear arms control. I suppose you could imagine a case where a country is known to have enough nuclear material for exactly X warheads, hasn't acquired more, and it could prove to an inspector that all of the material is still inside the same devices it was in at the last inspection. But most weapons development happens by building new bombs, not repurposing old ones, and most countries don't have exactly X bombs, they have either 0 or so many the armed forces can't reliably count them.
[0] https://www.nature.com/articles/nature13457
That’s a new one on me (not being in cryptography), but I really like it. Thanks!
And maybe it will be like detecting nuclear enrichment. Instead of hacking the firmware in a Siemens device, it's done on server hardware. Israel demonstrated absurd competence at this caliber of spycraft.
Sometimes you take low-tech approaches to high tech problems. I.e., get an insider at a shipping facility to swap the labels on two pallets of GPUs, one is authentic originals from the factory and the other are hacked firmware variants of exactly the same models.
If nations chose to restrict that, such detection would merit a military response. Like Iran's centrifuges.
But if you want to talk about "actionable" here are three potential actions a country could take and the confidence level they need for such actions:
- A country looking for targets to bomb doesn't need much confidence. Even if they hit a weather prediction data center, it's going to hurt them.
- A country looking to arrest or otherwise sanction citizens needs just enough confidence to obtain a warrant (so "probably") and they can gather concrete evidence on the ground.
- A country looking to insert a mole probably doesn't need much evidence either. Even if they land in another type of data center, the mole is probably useful.
For most use cases, being correct more than half the time is plenty.
1. I was just granting the GPs point to make the broader point that, for the purposes of this original discussion about these "safety declarations", this is immaterial. These safety declarations are completely unenforceable even if you could detect that someone was training AI.
2. Now, to your point about moving the goalposts, even though I say "if you could detect that someone was training AI", I don't actually even think that is possible. There are far too many normal uses of data centers to determine if one particular use is "training an AI" vs. some other data intensive use. I mean, there have long been supercomputer centers that do stuff like weather analysis and prediction, drug discovery analysis, astronomy tools, etc. that all look pretty indistinguishable from "training an AI" from the outside.
So that's easy.
Nothing to actually worry about.
Other than Sam Altman and Elon Musks' pending ego fight.
Technically both are real people, one is just not human. At least by the person/people definition that would include sentient aliens and such.
I think this presumes that Sam Altman is correct to claim that they can scale their way to, in the practical sense of the word, AGI.
If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.
> The equipment needed to train AI is cheap and ubiquitous.
Again, possibly:
If we were already close even before DeepSeek's models, yes, the hardware is too cheap and too ubiquitous.
If we're still not close even despite DeepSeek's cost reductions, then the hardware isn't cheap enough — and Yudkowsky's call for a global treaty on maximum size of data centre to be enforced by cruise missiles when governments can't or won't use police action, still makes sense.
If it takes software technology that we have already developed outside of secret government labs, it is probably too late to sequester it.
If it takes software technology that has been developed in secret government labs, its probably too late to sequester the already public precursors with which independent development of the same technology is impossible, getting us back to the preceding.
It takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.
If it takes a breakthrough in hardware technology, then if we make that breakthrough in a way which doesn't become widely public and used very quickly after being made and the hardware technology is naturally amenable to control (i.e., requires distinct infrastructure of similar order to enrichment of material for nuclear weapons), maybe, with intense effort of large nations, we can sequester it to a limited club of AGI powers.
I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
Which in turn leads to the cautious approach for which OpenAI is criticised: not revealing things because they don't know if it's dangerous or not.
> I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
Entirely possible, and a person I know who left OpenAI had a fear compatible with this description, though differing on many specifics.
Deepfakes are a distraction from more important things here. The point of AI safety is "it doesn't matter who builds unaligned AGI, if someone builds it we all die".
If you agree that unaligned AGI is a death sentence for humanity, then it's worth trying to stop it.
If you think AGI is unlikely to come about at all, then it should be a no-op to say "don't build it, take steps to avoid building it".
If you think AGI is going to come about and magically be aligned and not be a death sentence for humanity, pay close attention to the very large number of AI experts saying otherwise. https://en.wikipedia.org/wiki/P(doom)
If your argument is "but some experts don't believe that", ask yourself whether it's reasonable to say "well, experts disagree about whether this will kill us all, so we shouldn't do anything".
There might be a few humans that don't agree with even those values, but I think it's safe to presume that the general-consensus values of humanity include the above points. And AI alignment is not even close to far enough along to provide even the slightest assurances about those points.
Practically everyone making the argument that AGI is about to destroy humanity is (a) human and (b) working on AI. It's safe to conclude they're either stupid and suicidal or don't buy their own bunk.
But ultimately, most people who think we stand a decent chance of dying because of this are not working at AI labs.
Do humans agree on the best way to do this? Aside from the most banal examples of what not to do, is there agreement on e.g. whether a mass extinction event is happening, not happening, or happening but actually tolerable?
If the answer is no, then it is not possible for an AI to align with human values on this question. But this is a human problem, not a technical one. Solving it through technical means is not possible.
So, at a very basic level: stop training AIs at that scale!
So for example if a family with 5 children is on vacation, do you maintain that it is impossible even in principle for the parents to take the preferences of all 5 children into account in approximately equal measure as to what activities or non-activities to pursue?
Also: are you pursuing a complete tangent or do you see your point as bearing on whether frontier AI research should be banned? (If so, I cannot tell whether you consider your point to support a ban or oppose a ban.)
Therefore the actual solution is not coming up with more and more clever “guardrails” but aligning corporations and governments to human needs. In other words, politics.
There are other problems like enabling new types of scams which will require political solutions. At a technical level the best these companies can do is mitigation.
Don't extrapolate from present harms to future harms, here. The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet. Solving that (or, rather, buying time to solve it) will require political solutions, in the sense of international diplomacy. But it has absolutely nothing to do with "aligning corporations", and everything to do with teaching computers things on par with (oversimplifying here) "humans are made up of atoms, and if you repurpose those atoms the humans die, don't ever do that".
No, its not. AI alignment was an active area of concern (and the fundamental problem for useful AI with significant autonomy) before cultists started trying to reduce the scope of its problem space from the wide scope of real problems it concerns to a single speculative apocalypse.
But the genesis of the term "alignment" (as applied to AI) is a side issue. What is important is that reinforcement learning with human feedback and the other techniques used on the current crop of AIs to make it less likely that the AI will say things that embarass the owner of the AI are fundamentally different from making sure the an AI that turns out more capable than us will not kill us all or do something else awful.
Both, of course, are concerned primarily with the risk of human extinction from AI.
The fact that the number of things that could hypothetically lead to human extinction is entirely unbounded and (since we’re not extrapolating from present harms) unpredictable is a very convenient fact for people who are paid for their time in “solving” this problem.
OP's point has nothing to do with this, OP's point is that you can't stop it.
The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first. Do you really expect China to coordinate with the West on this?
So what is your solution? Give up and die? It's worth trying. If it buys us a few years that's a few more years to figure out alignment.
> The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first.
So there's a strong incentive to convince them "stop racing towards death".
> Do you really expect China to coordinate with the West on this?
Yes, there have been concrete examples of willingness towards doing so.
It is essentially the same problem as the atom bomb: it would have been better if we all agreed not to do it, but thats just not possible. Why should China trust the US or vice versa? Who wants to live in a world where your competitors have world-changing technology but you don't? But here we have a technology with immense militaristic and economic value, so the everyone-wants-it problem is even more pronounced.
I don't _like_ this, I just don't see how we can achieve an AI moratorium outside of bombing the data centers (which I also don't think is a good idea).
We need to choose the policy with the best distribution of possible outcomes:
- The US leads an effort to stop AI development: too much risk that other parties do it anyway
- The US continues to lead AI development: hope that P(takeoff) is low and that the good intentions of some US labs are able to achieve safe development
I prefer the latter -- this is far from the best hypothetical outcome, but I think it is the best we can do when constrained by reality.
That's not true. I worked in the field of DNA analysis for 6.5 years and there is definitely a consensus that DNA editing is closer than the horizon. Just look at CRISPR gene editor [0]. Crude, but "works".
Your DNA, even if you've never submitted it, is already available using shadow data (think Facebook style shadow profiles but for DNA) from the people related to you who have.
[0]: https://en.wikipedia.org/wiki/CRISPR_gene_editing
I don't think a defeatist attitude is useful here.
AGI would then be a very effective tool for maintaining the current authoritative regime.
What bearing does that have on China's interest in developing AGI? Does the risk posed by OpenAI et al. mean that China would not use AI as a tool to advance their self interest?
Or are you saying that the risks from OpenAI et al. will come to fruition before we need to worry about China's AI use? That still wouldn't prevent China from pursuing AI up until that happens.
I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.
Suppose, hypothetically, there was a very simple as-yet-unknown action, doable by anyone who has common unrestricted household chemicals, that would destroy the world. Suppose we know the general type of action, but not the specific action, yet. Suppose that people are actively researching trying actions in that family, and going "welp, world not destroyed yet, let's keep going".
How do you proceed? What do you do to stop that from happening? I'm hoping your answer isn't "decide there's no policy that can prevent this, give up".
- there were a range of expert opinions that P(destroy-the-world) < 100 AND
- the chemical could turn lead into gold AND
- the chemical would give you a militaristic advantage over your adversaries AND
- the US were in the race and could use the chemical to keep other people from making / using the the chemical
Then I think we'd be in the same situation as we are with AI: stopping it isn't really a choice, we need to do the best we can with the hand we've been dealt.
I would hope that it would not suffice to say "not a 100% chance of destroying the world". Because there's a wide range of expert opinions saying values in the 1-99% range (see https://en.wikipedia.org/wiki/P(doom) for sample values), and none of those values are even slightly acceptable.
But sure, by all means stipulate all the things you said; they're roughly accurate, and comparably discouraging. I think it's completely, deadly wrong to think that "race to find it" is safer than "stop everyone from finding it".
Right now, at least, the hardware necessary to do training runs is very expensive and produced in very few places. And the amount of power needed is large on an industrial-data-center scale. Let's start there. We're not yet at the point where someone in their basement can train a new frontier model. (They can run one, but not train one.)
Ok, I can imagine a domestic policy like you describe. Through the might and force of the US government, I can see this happening in the US (after considerable effort).
But how do you enforce something like that globally? When I say "not really possible" I am leaving out "except by excessive force, up to and including outright war".
For the reasons I've mentioned above, lots of people around the world will want this technology. I haven't seen an argument for how we can guarantee that everyone will agree with your level of "acceptable" P(doom). So all we are left with is "bombing the datacenters", which, if your P(doom) is high enough, is internally consistent.
I guess what it comes down to is: my P(doom) for AI developed by the US is less than my P(doom) from the war we'd need to stop AI development globally.
I don't consider the P(destruction of humanity) of stopping larger-than-current-state-of-the-art frontier model training (not all AI) to be higher than that of stopping the enrichment of uranium. (That does lead to conflict, but not the destruction of humanity.) In fact, I would argue that it could potentially be made lower, because enriched uranium is restricted on a hypocritical "we can have it but you can't" basis, while frontier AI training should be restricted on a "we're being extremely transparent about how we're making sure nobody's doing it here either" basis.
(There are also other communication steps that would be useful to take to make that more effective and easier, but those seem likely to be far less controversial.)
If I understand your argument correctly, it sounds like any one of three things would change your mind: either becoming convinced that P(destruction of humanity) from AI is higher than you think it is, or becoming convinced that P(destruction of humanity) from stopping larger-than-current-state-of-the-art frontier model training is lower than you think it is, or becoming convinced that nothing the US is doing is particularly more likely to be aligned (at the "don't destroy humanity" level) than anyone else.
I think all three of those things are, independently, true. I suspect that one notable point of disagreement might be the definition of "destruction of humanity", because I would argue it's much harder to do that with any standard conflict, whereas it's a default outcome of unaligned AGI. (I also think there are many, many, many levers available in international diplomacy before you get to open conflict.)
(And, vice versa, if I agreed that all three of those things were false, I'd agree with your conclusion.)
There are many other less-superficial reasons why Beijing may be interested in AI, plus China may not trust that we actually banned our own AI development.
I wouldn't take that bet in a million years.
The discussion started when someone argued that even if this AI juggernaut were in fact very dangerous, there is no way to stop it. When I pushed back on the second part of that, you reject my push-back. On what basis? I hope it is not, "I just want things to keep on going the way they are," as if ignoring the AI danger somehow makes the AI danger go away.
I don't have a lot of confidence that this will be the case, but I think the US continuing to develop AI is the decision with the best distribution of possible outcomes.
This is completely separate from my personal preferences or hopes about the future of AI.
I don't think we're on "the cusp" of AGI, but I guess that just means I'm quibbling over the timeframe of what "cusp" means. I certainly think it's possible within the lifetime of people alive today, so whether it comes in 5 years or 75 years is kind of an insignificant detail.
And if AGI does get built, I agree there is a significant risk to humanity. And that makes me sad, but I also don't think there is anything that can be built to stop it, certainly not some useless agreements on paper.
Intelligence and alignment are mutually incompatible; natural intelligence is unaligned, too.
Unaligned intelligence is not a global death sentence. Fearmongering about unaligned AGI, however, is a tool to keep a tool of broad power—which AI is and will continue to grow as long before it becomes, and even if it never becomes, AGI—in the hands of a narrow, self-selected elite to make their control over everyone else insurmountable, which is also not a global death sentence, but is a global slavery sentence. (It’s also, more immediately, a tool to serve those who benefit from current AI uses which are harmful and unjust to use future speculative harms to deflect from real, present, concrete harms; and those beneficiaries are largely an overlapping elite with the group with a longer term interest in centralizing power over AI.)
Whether or not that elite group produces AGI, much less, “unaligned AGI”, is largely immaterial to the practical impacts (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
False. There are people working on frontier AI who have co-opted some of the safety terminology in the interests of discrediting it, and discussions like this suggest that that strategy is working.
> all actionable policy under that banner fits that description
Actionable policy: "Do not do any further frontier AI capability research. Do not build any models larger or more capable than the current state of the art. Stop anyone who does as you would stop someone refining fissile materials, with no exceptions."
> (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
You are mistaking "alignment" for things like "politics", rather than "not killing everyone".
Also, "alignment" doesn't mean "not killing everyone", it means "functioning according to (some particular set of) human's preferred set of values and goals". "Killing everyone" is a consequence some have inferred if unaligned AI is produced (redefining "alignment" to mean "not killing everyone" makes the whole argument circular.)
The darkly amusing shorthand for this: if the AGI tiles the universe with tiny flags, it really doesn't matter whose flag it is. Any notion of "whose values" really can't happen if you can't align at all.
I'm not disagreeing with you that "AI alignment" is more complex than "don't kill everyone"; the point I'm making is that anyone saying "but whose values are you aligning with" is fundamentally confused about the scale of the problem here. Anyone at any point on any reasonable human values spectrum should be able to agree that "don't kill everyone" is an essential human value, and we're not even there yet.
It is like living paralyzed in fear of every birth, for fear that random variance will produce one baby born smarter than Einstein will be capable of developing an infinite cascade of progressively smarter babies and concluding that therefore we must stop all breeding. No matter how smart the baby super-Einstein winds up being there is no unstoppable, unopposable omnicide mechanism. You can't theorem your way out of a paper bag.
We've already found ourselves on a trajectory where un-employing millions or billions of people without any system to protect them afterwards is just accepted, and that's simply the first step of many in the destruction-of-empathy path that creating AI/AGI brings people down.
LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
Don't get me wrong, they are impressive. I can see LLM's eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
This is a big part of it, and you can get others to do it for you.
It's like the drain cleaner sold in an extra bag. Obviously it must be the best, it's so scary they have to put it in a bag!
I doubt this. Productivity is gained through experience and expertise. If you don't know what you don't know than the LLM is perfectly useless to you.
Seems like we're splitting hairs a bit here.
Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.
They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.
You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.
Same with killer robots (or whatever it is people are afraid of when they talk about "AI safety"). As long as we can control who they kill, when, and why, there's no real difference with any other weapon system. If that control is even slightly in doubt: it's not fit for service.
Does this mean that bullshit generating LLMs aren't fit for service in many areas: it probably does. But maybe there steps can be taken to mitigate risks.
I'm sure this will involve some bureaucratic overhead. But it seems worth the hassle to me.
Being against AI Safety is a stupid hill to die on. Being against some concrete declaration or a part thereof, sure, that might make sense. But this smells a lot like the tabacco industry being against warnings/filters/low-tar, or the car industry being anti-seatbelt.
Deepseek and efforts by other non-aligned powers wouldn't care about any declarations signed by the EU, the US and other western powers anyways.
So the skills, knowledge, and expertise are in the UK. Google can close the UK office tomorrow if they wanted to sure, but are 100% of those staff going to move to California? Doubt it. Some will, but a lot have lives in the UK (not least the CEO and founder etc) so even if Google pulls the rug I will bet there will be a new company founded and funded within days that will vacuum up all the staff.
AGI is a meaningless term. The LLM architecture has shown promise in every single domain once used for perceptron neural networks. By all accounts on those things that fit its 'senses' the LLMs are significantly smarter than the average human being.
Yes.
And there is no reason to think that AGI would have desire.
I think people are reading themselves into their fears.
Or we can just drop all this sophistry nonsense.
The entire point of utilizing this tool is to feed it a desire and have it produce an appropriate output based upon that desire. Not only that, it's entire training corpus is filled with examples of our human desires. So either humans give it desire or it trains itself to function based on the inertia of "goal-seeking" which are effectively the same thing.
Attempts at curbing AI will come from those who are losing the race. There's this interview where Edward Teller recalls how the USSR used a moratorium in nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.
In another clips he says that he believes it was inevitable that the Soviets would come up with an H-bomb on their own.
[1] https://www.youtube.com/watch?v=zx1JTLrhbnI&list=PLVV0r6CmEs...
it's well known that china has long caught up with the us, in almost every way, and is on the verge of surpassing it on the others. just look at deepseek, as efficient as openai for a fraction of the cost. Baidu, alibaba ai and so on.
China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
In fact most countries did. India too.
it's not the case of the looser making new rule, it's the big boy discussing how they are going to handle the situation and the retarded ones thinking they are too good for that.
I'd be very happy to take a high stakes, longterm bet with you if that's your earnest position.
Are you actually saying this in the year 2025?
China has signed on to many international agreements that it has absolutely no interest in following or enforcing.
Intellectual property is the most well known. They’re party to various international patent agreements but good luck trying to get them to enforce anything for you as a foreigner.
However the attempts are token and they know it too. Just an attempt to appear to be doing something for the naive information consumers, aka useful idiots.
Imaging telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer. It’s so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I only have to wonder if people really believe this, or it’s a sophisticated narrative that’s convenient to certain corporations and politicians.
If you explained to that hacker that govs and corps would leverage that same technology to spy on everyone and control their lives because line must go up, they might understand better than anyone else why this needs to be sabotaged early in the process.
As you mention ethics: what ethics do we apply to AI? None? Some? The same as to a human? As AI is replacing humans in decision-making, it needs to be held responsible just as a human.
My understanding is that approximately zero government-level safety discussion is restriction of just building & running AI yourself. There are no limits of AI hacking even in the EU AI regaultion or discussions I've seen.
Regulation is around business & government applications and practical use cases: no unaccountable AI making final employment decisions, no widespread facial recognition in public spaces, transparency requirements for AI usage in high-risk areas (health, education, justice), no AIs with guns, etc.
As DeepSeek is shown us progress is hard to hinder unless you go to war and kill the people....
That was never going to fly with the current U.S. administration. Not only is the word inclusive in there but ethical and trustworthy as well.
Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.
Long-term I just can’t imagine what a United States will look like when 75% of the population are both superfluous and a burden to society.
If this happens fast, society will crumble. Sheep are best kept busy grazing.
The voters are locked in idiots, and don’t have agency at the moment. The bet from Musk, Theil, etc is that AI is as powerful and strategic as nuclear weapons were in 1947 - that’s what The Musk administration diplomacy seems to be like.
What would AGI lead to? Most knowledge work would be replaced in the same way as manufacturing work has been, and AGI is in control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.
Really not something to aspire to.
[1]: https://en.wikipedia.org/wiki/Resource_curse
If the rich need less and less educated, healthy and well fed workers, then more and more people will get treated like shit. We are currently going into that direction with full speed. The rich aren't even bothering to hide this anymore from the public because they think they have won the game and can't be overruled anymore. Let's hope there will be still elections in four years and MAGA doesn't rig it like Fidesz in Hungary and so many other countries who have fallen into the hands of the internationalist oligarchy.
Maybe. I think it's a matter of culture.
Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?
I'm no history buff but my hunch is that mistreatment of people largely came from a fear that if I don't engage in cruelty to maximize power, my opponents will and given that they're cruel, they'll be cruel to me when they come to take over.
So we end up with this zero sum game of squeezing people, animals, resources and the planet in an arms race because everyone's afraid to lose.
In the past - you couldn't be sure if someone else was building up an army, so you had to build up an army. But now that we have satellites and we can largely track everything - we can actually agree to not engage in this zero sum dynamic.
There will be a shift from treating people as means to an end of power accumulation and containment, to treating people as something you just inherently like and would like to see prosper.
It'll be a shift away from this deeply corrosive idea of never ending competition and growth. When people's basic needs are met and no one is grouping up to take other people's goodies - why should regular people compete with one another?
They shouldn't and they won't. People who want to do good work will do so and improving the lives of people worldwide will be its own reward. Private islands, bunkers and yachts will become incomprehensible because there'll be no serf class to service any of it. We'll go back to if you want to be well liked and respected - you have to be a good person. I look forward to it :)
Because very few regular people will be their pets. These are the people who do everything in their power to pay their employees less. They treat their non-pets horribly... see feed lots and amazon warehouses. They actively campaign against programs which treat anyone well, particularly those who they aren't extracting wealth from. They whine and moan and cry about rules that protect people from getting sick and injured because helping those people would prevent them from earning a bit more profit.
They may spend a pile of money on surgery for their bunny, but if you want them to behave nicely to someone else's pet, or even someone else... well that's where they draw the line.
I guess you are hoping to be one of those pets... but what makes you think you're qualified for that, and why would you be willing to sacrifice all of your friends and family to the fate of feral dogs for the chance to be a pet?
I'm not suggesting that a few people become rich people's pets.
I'm saying it isn't inherent in human nature to mistreat other conscious beings.
If people were normally cruel to their pets, then I'd say yeah, this species just seems to enjoy cruelty and it is what it is. But we aren't cruel to pets, so the cruelty to workers does not stem from human nature, but other factors.
I gave one theory as to what causes the cruelty and that I'm somewhat optimistic that it'll run its course in due time.
Anyhoo :)
"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)
Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions become unnecessary and then a pandemic 100X worse than covid got lableaked.
You can’t seriosly claim they are upending people’s jobs when those jobs were BS in the first place.
Sociology: The Oil Curse
Or, my favorite outcome, the AI to iterate over itself and develop its own hardware and so on.
For example US is probably the most resource rich country in the world, but people don't consider it for the resource curse because the rest of its economy is so huge.
Could it exist some day? Certainly. But currently 'AI' will never become an AGI, there's no path forward.
What indicators are these?
Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.
That sort of piecemeal adoption was predictable but not that we are here to have this debate this soon!
These are all silicon valley "neck thoughts." They're entirely uninformed by the current state of the world and any travels through it. It's fantasies brought about by people with purely monetary desires.
It'd be funny if there wasn't billions of dollars being burnt to market this crap.
With every technological advancement it can always be good or bad. I believe it is going to be good to have a true AI available at our fingertips.
Because I can think of a large number of historical scenarios where malicious people get access to certain capabilities and it absolutely does not go well and you do have to somehow account for the fact that this is a real thing that is going to happen.
For example, several governments are actively engaged in a live streamed genocide and nothing akin to the 1789 revolt in Paris seems to be underway.
The 1789 was one of many revolutions (https://en.wikipedia.org/wiki/List_of_peasant_revolts) and it was not fought because of genocide of other people, it was due to internal problems.
Sure. The ancien régime was considered illegitimate so they got rid of it, and if a state is involved in genocide it is since the Holocaust considered illegitimate and it should lose its sovereignty.
Will AI lead to a Dark Forest scenario on Earth between humans and AI?
Safety is mentioned in a context with trust-worthiness, ethics, security, ...
[1] https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...
I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.
China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.
The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.
The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.
(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)
If an enemy state gives AI autonomous control and gains massive combat effectiveness, it puts the pressure to other countries to do the same.
No one wants sky net. But if we continue the current path, painting the world as we vs them. I m fearful sky net will be what we get
But ignoring the signaling going on on various sides would be a mistake. "AI" is for all practical purposes a synonym for algorithmic decision making, with potential direct implication on peoples lifes. Without accountability, transparency, recourse etc the unchecked expansion of "AI" in various use cases represents a significant regression for historically established rights. In this respect the direction of travel is clear: The US is dismantling the CFPB, even more deregulation (if that is at all possible) is coming, big tech will be trusted to continue "self-regulating" etc.
The interesting part is the UK stance. Somewhere in between the US and the EU in terms of citizen / consumer protections, but despite brexit probably closer to the latter, this siding with dog-eats-dog deregulation might signal an anxiety not to be left behind.
https://gist.github.com/lmmx/b373b9819318d014adfdc32182ab17f...
Similarly, whoever gains the most training and fine-tuning data from whatever source via whatever means first -- will likely be at advantage
Hard to see how that toothpaste goes back in the tube now
> warning countries not to sign AI deals with “authoritarian regimes”
Well, that now also rules out the US.
Where is the nuanced discussion of what we want and don't want AI to do as a society?
These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.
- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding. - I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse. - I want AI to allow me to consume more in a completely sustainable way for me and the environment. - I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works. - I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want. - I don't want AI that forcefully and arbitrarily limits my freedoms - I don't want AI that forcefully imposes other people's values on me (or imposes my values on others) - I don't want AI war that destroys our civilization and creates chaos - I don't want AI that causes unnecessary suffering - I don't want other people to use AI to tyrannize me or anyone else.
How about instead of being so broadly generic about "AI safety" declarations we get specific, and then ask people to make specific commitments in kind. Then it would be a lot more meaningful when they refuse, or when they oblige and then break them.
They will move to countries where the laws suit them. Generally business as usual these days and why big businesses have such a strong bargaining position with regard to national governments.
Both the current British and American governments are very pro big-business anyway. That is why Trump has stated he likes Starmer so much.
Oh noes, I can't slurp all my user data and sell / give it to whoever. How will I make money if I can't exploit people's data?
It appears to be essentially "We promise not to do evil" declaration. It contains things like "Ensure AI eliminates biases in recruitment and does not exclude underrepresented groups.".
What's the point of rejecting this? Seems like a show, just like the declaration itself.
Depending on what side of the things you are, if you don't actually take a look at it you might end up believing that US is planning to do evil and others want to eliminate evil or alternatively you might believe that US is pushing for progress when EU is trying to slow it down.
Both appear false to me, IMHO its just another instance of US signing off from the global world and whatever "evil" US is planning to do China will do it better for cheaper anyway.
although similar.
So far most AI development has been things like OpenAI making the ChatGPT chatbot and putting it up there for people to play with, likewise Anthropic, Deepseek et all.
I'm worried that declaration is implying you shouldn't be able to do that without trying to "promote social justice by ensuring equitable access to the benefits".
I think that is over bureaucracizing things.
I mean stuff like
>We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights.
Is quite hard to even parse. Does that mean you'll get grief for you bot speaking English becuase it's not protecting linguistic diversity? I don't know
What does "Sustainable Artificial Intelligence" even mean? That you run it off solar rather than coal? Does it mean anything?
Useful only when you rejecting it. I'm sure in culture war torn American mind it signals very important things about genitals and ancestry and the industry around these stuff but in a non-American mind it gives you the vibes that the Americans intent to do bad things with AI.
Ha, now I wonder if the people who wrote that were unaware of the situation in US or was that the outcome they expected.
"Given that the Americans not promising not to use this tech for nefarious tasks maybe Europe should de-couple from them?"
What if ASI happens next year and and renders most of the human workforce redundant? What if we get Terminator 2? Those might be more worthy of worry than "gender equality, linguistic diversity" etc? I mean the diversity stuff is all very well but not very AI specific. It's like you're developing H-bombs and worrying if they are socially inclusive rather about nuclear war.
IMHO, from European perspective, they are worried that someone will install a machine that has bias against let's say Catalan people and they will be disadvantaged against Spaniards and those who operate the machine will claim no fault the computer did it, leading to social unrest. They want to have a regulations saying that you are responsible of this machine and have grounds for its removal if creates issues. All the regulations around AI in EU are in that spirit, they don't actually ban anything.
I don't think AGI is considered seriously by anybody at the moment. That's completely different ball game and if it happens none of the current structures will matter.
Hear, hear. If Trump doesn't straighten up, the world might just opt for Chinese leadership. The dictatorship, the genocide, the communism--these are small things that can be overlooked if necessary to secure leadership that's committed to what really matters, which is.... signing pointless declarations.
It's kind of fascinating actually how Americans turned the whole pop culture into genitalia regulations and racist wealth redistribution. Before that in EU we had all this stuff and wasn't a problem. These stuff were about minorities and minorities stuff don't bother most people as these are just accommodations for small number of people.
I'm kind of getting sick and tired of pretending that stuff that concern %1 of the people are the mainstream thing. It's insufferable.
https://www.whitehouse.gov/presidential-actions/2025/01/endi...
https://www.whitehouse.gov/presidential-actions/2025/01/refo...
https://en.wikipedia.org/wiki/Concord_(video_game)
It thought you were going to explain how "woke" is a bad thing, and how Japanese were counter progressive stuff.
Instead, you give me two links to the current fascist White House propaganda - which makes me wonder if for you, woke is the exact opposite of "christian"?
And a link about a video game which is... related how to woke/progress or conservative/fascism?
It isn't about what % of the population is affected or number of people. It is about PRINCIPLES. Yes it matters just as much to enshrine dishonesty in law if it is dishonesty abour 1 person or 1000 people or 1m people. It matters.
Like, by the same principles (called, the law), the man that you made your president today should have been in jail for months, if not years; or, as in any decent democracy, should not have been able to run with so many pending lawsuits (hey, principles).
But, somehow, he got a pass. And he got elected, so that gave him another pass. And now he disrupts the principles he's supposed to uphold (a thing called the Constitution). And he gets another pass.
But. Sure. Principles. Of course.
The way you're using these as labels is embarrassingly shallow, and I would hope, beneath the level of discourse here.
> AI’s workplace impact must align governance, social dialogue, innovation, trust, fairness, and public interest. We commit to advancing the AI Paris Summit agenda, reducing inequalities, promoting diversity, tackling gender imbalances, increasing training and human capital investment
This being culturally rejected by the same America that has itself twice rejected women candidates for president in favour of a man who now has 34 felony convictions, does not surprise me.
But it does disappoint me.
I remember when the right wing were complaining about Star Trek having a woman as a captain for the first time with Voyager. That there had already been women admirals on screen by that point suggested they had not actually watched it, and I thought it was silly.
I remember learning that British politician Ann Widdecombe changed from Church of England to Roman Catholic, citing that the "ordination of women was the last straw", and I thought it was silly.
Back then, actually putting effort into equal opportunity for all was called "political correctness gone mad" by those opposed to it — but I guess the attention span is no longer sufficient to use four-word-phrases as rhetorical applause lights, so y'all switched to a century old word coined by African Americans who wanted to make sure they didn't forget that the Civil War had only ended literal slavery, not changed the attitudes behind it.
This history makes the word itself a very odd thing to encounter in Europe, where we didn't have that civil war — forced end of Empire shortly after World War 2, yes, but none of the memes from the breakaway regions of that era even made it back to this continent, and AFAICT "woke" wasn't one of them anyway. I only know I'm called a "mzungu" by Kenyans because of the person who got me to visit the place.
The EU is even more nuts with their plans that all big companies in the EU should have 50/50 men-women representation in the board of directors.
Believe what you want, but America did not reject Hillary or Kamala because they were women, they rejected them because of their incompetence. And speaking of this, after seeing Kamala talk, it is beyond me how she got to be VP - not one coherent sentence comes out of her mouth.
That fact that you think this is "nuts" tells me you think women aren't equally competent.
> not one coherent sentence comes out of her mouth.
And yet you elected Trump.
Heck, if that's your objection, two counts of GWB. There was an entire genre based on how he mangled his speech.
No, it means that I want people in those places based on competency, not gender, and certainly not 50/50 representation because some bureaucrats think equality should be forced down on people's throats. If you make it about gender or anything else, the problem lies with you.
> And yet you elected Trump.
Well, besides the "beautiful" over use, he actually made sense when he was talking. The word salads Kamala kept making...
If you think that the current hiring discrepancy represents a genuine and real skill discrepancy, it is a logical necessity for you think that women are not equally capable.
Conversely, if you think women are equally capable, then you must think that the hiring discrepancy is not justified by competency. If you get this far, then it follows that there is a huge opportunity for increasing the pool of competent leaders by requiring 50/50. As they do actually have a goal of increasing the economic potential of the region, this means they should push the issue.
> Well, besides the "beautiful" over use, he actually made sense when he was talking. The word salads Kamala kept making...
To quote one of many examples, this one about Biden at the beach in Trump’s Georgia response to the State of the Union:
“Somebody said he looks great in a bathing suit, right? And you know, when he was in the sand and he was having a hard time lifting his feet through the sand, because you know sand is heavy, they figured three solid ounces per foot, but sand is a little heavy, and he’s sitting in a bathing suit. Look, at 81, do you remember Cary Grant? How good was Cary Grant, right? I don’t think Cary Grant, he was good. I don’t know what happened to movie stars today. We used to have Cary Grant and Clark Gable and all these people. Today we have, I won’t say names, because I don’t need enemies. I don’t need enemies. I got enough enemies. But Cary Grant was, like – Michael Jackson once told me, ‘The most handsome man, Trump, in the world.’ ‘Who?’ ‘Cary Grant.’ Well, we don’t have that any more, but Cary Grant at 81 or 82, going on 100. This guy, he’s 81, going on 100. Cary Grant wouldn’t look too good in a bathing suit, either. And he was pretty good-looking, right?”
- from https://www.theguardian.com/us-news/2024/apr/06/donald-trump...
I remember when LLMs were that weird, rambling — they were barely usable.
Also:
- https://www.salon.com/2024/09/06/incoherent-gibberish-expert...
- https://www.pbs.org/newshour/show/trumps-rambling-speeches-r...
- https://web.archive.org/web/20250206195722/https://www.nytim...
Covfefe. Heck, that one became a meme so hard it has its own Wikipedia page.
Trump's confusion and rambling is broadcast across the world, and mocked across the world — that's how I even know about it. Similar with GWB's… well, Bushisms.
I've seen two relatives get Alzheimer's, and have been on the other end of a phone line with a third when they, mid-sentence, started talking as if I was my brother, speaking of me like I wasn't there, telling him how I was doing.
Trump is old.
I believe they are not equally interested in the same fields men are and vice-versa. So I don't see any value in forcing women and men to fields which they don't want to be in.
> Conversely, if you think women are equally capable, then you must think that the hiring discrepancy is not justified by competency. If you get this far, then it follows that there is a huge opportunity for increasing the pool of competent leaders by requiring 50/50. As they do actually have a goal of increasing the economic potential of the region, this means they should push the issue.
I think they are equally capable in certain areas, and less capable in other areas. Just like men are less capable in some areas and more capable in others. It's how nature works, nothing sexist or discriminatory about it. Just like two men are not equal, or two women are not equal.
But trying to forcefully push the narrative that somehow men and women are 100% equal is very detrimental to everyone involved.
Competency (or more generally merit) is not measured in penises and vaginas.
A forced 50/50 men/women mandate implies that one or the other are not as capable without outside "assistance". That is sexism and rude as all hell.
True equality is accepting all applicants regardless of penises or vaginas and ranking them by their merit and taking however many you need or want from the top down. True equality is being absolutely blind to equity factors like race and sex unless that is directly relevant.
The only time you should care about vaginas is if you're, say, running medical trials that concern breast cancer. Penises and vaginas are utterly irrelevant in the course of serving as a director on a corporate board.
>Trump's confusion and rambling is broadcast across the world, and mocked across the world
One of the things that helped Trump win the election was his three hours long unedited interview with Joe Rogan. It was amazing to see a former President and current Presidential candidate sit down with a common Joe Average (FSVO average) and just have bog ordinary conversations on a wide variety of topics that common Americans can relate to. Trump even explained why he rambles ("weaves stories") like he does.
Harris meanwhile couldn't speak her way out of a teleprompter.
A leader needs to be able to communicate effectively, it's a core part of whether you're charismatic or not and Trump is one of the greatest speakers ever: He speaks with a simple vocabulary because his audience are common citizens, he gets right to the point because he understands time is valuable. He talks to the American people in their language, plain English; not Washingtonese or legalese.
HRC's failure was the way she spoke (Washingtonese) and conducted herself came off like she was a used car salesman. Nobody likes used car salesmen. It also did not help that the DNC, whether merely perceived or in actual fact, forced Sanders out which alienated a significant portion of the Democrat electorate.
Harris failed because she simply could not communicate and further refused to communicate, hoping that she would win because she is an Indian-Black Woman with a sob story. She is perhaps the best example of so-called "DEI hires": She was chosen for VP and then Presidential candidate both times because she checked off many equity boxes, not because she demonstrated competency in a fair showdown (primaries, all of which she lost dead last).
Most Senate Democrats have voted against their respective confirmations, by the way.
We are quite fine having women as leaders if they are actually competent and charismatic like it would be the case with men. Neither HRC nor Harris were that; the former was reviled and the latter couldn't even speak coherently. The Democrats can easily get a woman elected President if they would simply choose a good candidate with policies that resonate with the electorate.
I wonder if the new Woke should be called Neo-Woke, where you pretend to be mean to certain group of people to accommodate other group of people who suffered from accommodating another group of people.
IMHO all this needs to be gone and just be like "don't discriminate, be fair" but hey I'm not the trend setter.
We still have massive biases against minorities in our countries. Some people prefer to pretend they don't exist so they can justify the current reality.
Nothing related to Trump has anything to do with qualified candidates, Trump is the least qualified president we have ever had in american history. Not just because he hadn't served in government or as a general but because he is generally unaware about how government works and doesn't care to be informed.
Ironique.
Sustainable Development? Protect the environment? Promote social justice? Equitable access? Driving inclusive growth? Eliminating biases? Not excluding underrepresented groups?
These are not the values the American people voted for. Americans selected a president who is against "equity", "inclusion" and "social justice", and who is more "roman salute" oriented.
Of course this is all very disorienting to non-Americans, as a year or two ago efforts to do things like rename git master branches to main and blacklists to denylists also seemed to be driven by Americans. But that's just America's modern cultural dominance in action; it's a nation with the most pornographers and the most religious anti-porn campaigners at the same time; the home of Hollywood beauty standards, plastic surgery and bodybuilding, but also the home of fat acceptance and the country with the most obesity. So in a way, contradictory messages are nothing new.
Indeed. Our American values are and always have been Equality, Pursuit of Happiness, and legal justice respectively, as declared in our Declaration of Independence[1] and Constitution[2], even if there were and will be complications along the way.
Liberty is power, power is responsibility. Noone ever said living free was going to be easy, but everyone will say it's a fulfilling life.
[1]: https://en.wikipedia.org/wiki/United_States_Declaration_of_I...
[2]: https://en.wikipedia.org/wiki/Preamble_to_the_United_States_...
Why the people in the background are not entitled to it: https://a.dropoverapp.com/cloud/download/605909ce-5858-4c13-...
Why US government personel is being replaced with loyalist if you are about equality and legal justice?
You're free to follow the legal process to come to the country to seek your pursuit of happiness.
Your right to pursuit of happiness ends where another's rights begins. The US federal government is also tasked with the duty of protecting and furthering the general welfare of Americans including the protection of property.
You do not have a right let alone a privilege to illegally cross the border or stay in the country beyond what your visa permits. We welcome legal immigrants, but illegal aliens are patently not welcome and fraudulent asylum applicants further break down the system for everyone.
The right of not having someone in the country is interesting right.
What other rights you have like that? Do you have also the right of other people not eating Marmite?
The fact that you ignore this demonstrates your bad will in engaging in these conversations.
I was worried that you are advocating for work visas, permits, green cards ect, like a silly EU country would do.
(+) terms and conditions apply; did not originally apply to nonwhite men or women. Hence allowing things like the mass internment of Americans of Japanese ethnicity.
>it has to be about a goal of saying everybody should end up in the same place. And since we didn’t start in the same place. Some folks might need more: equitable distribution
- Kamala Harris
https://www.youtube.com/watch?v=LaAXixx7OLo
This is arguing for giving certain people more benefits versus others based on their race and gender.
This mindset is dangerous, especially if you codify it into an automated system like an AI and let it make decisions for you. It is literally the definition of institutional discrimination.
It is good that we are avoiding codifying racism into our AI under the fake moral guise of “equity”
This is literally the textbook definition of discrimination based on skin color and it is done under the guise of “equity”.
It is literally defined in the civil rights act as illegal (title VII).
It is very good that the new administration is doing away with it.
You don't seem to understand either letter of the spirit of the civil rights act.
You're happy that a racist president who campaigned on racism and keeps on baselely accusing people who are members of minority groups of being unqualified while himself being the least qualified president in history is trying to encourage people to not hire minorities? Why exactly?
1. Job posted, anyone can apply
2. Candidate applies and interviews, team likes them and wants to move forward
3. Team not allowed to offer because candidate is not diverse enough
4. Team goes and interviews a diverse person.
Now if we offer the person of color a job, the first person was discriminated against because they would have got the job if they had had the right skin color.
If we don’t offer the diverse person a job, then the whole thing was purely performative because the only other outcome was discrimination.
This is how it works at my company. Go read Title VII of the civil rights act, this is expressly against both the letter and spirit of the law.
BTW calling everything you disagree with racism doesn’t work anymore, nobody cares if you think he campaigned on racism (he didn’t).
If anything, people pushing this equity stuff are the real racists.
Now we are starting to get Maori doctors and lawyers that is transforming our society - for the better IMO
That was because the law and medical schools went out of their way to recruit Maori students. To start with they were hard to find as nobody in their families (being Maori, and forbidden) had been to university
If you do not do anything about where people start then saying "aim for equal chance" can become a tool of oppression and keeping the opportunities for those who already have them.
Nuance is useful. I have heard many bizarre stories out of the USA about people blindly applying DEI with not much thought or planning. But there are many many places where carefully applied policies have made everybody's life better
Discrimination in favour of Maori students largely has benefited the children of Maori professionals and white people with a tiny percentage of Maori ancestry who take advantage of this discriminatory policy.
The Maori doctors and lawyers coming through these discriminatory programmes are not the people they were intended to target. Meanwhile, poor white children are essentially abandoned by the school system.
Maori were never actually excluded from university study, by the way. Maori were predominantly rural and secondary education was poor in rural areas but it has nothing to do with their ethnicity. They were never "forbidden". There have been Maori lawyers and doctors for as long as NZ has had universities.
For example, take Sir Apirana Ngata. He studied at a university in NZ in the 1890s, around the same time women got the vote. He was far from the first.
What you have alleged is a common narrative so I don't blame you for believing it but it is a lie.
Māori schools (which the vast majority of Māori attended) were forbidden by the education department from teaching the subjects that lead to matriculation. So yes, they were forbidden from going to university.
> Sir Apirana Ngata. He studied at a university in NZ in the 1890s,
That was before the rules were changed. It was because of people like Ngata and Buck that the system was changed. The racists that ran the government were horrified that the natives were doing better than the colonialists. They "fixed" it.
> Discrimination in favour of Maori students largely has benefited the children of Maori professionals
It has helped establish traditions of tertiary study in Māori families, starting in the 1970s
There are plenty of working class Māori (I know a few) that used the system to get access. (The quota for Māori students in the University of Auckland's law school was not filled in the 1990s. Many more applied for it, but if their marks were sufficient to get in without using the quota they were not counted. If it were not for the quota many would not have even applied)
Talking of lies: "white people with a tiny percentage of Maori ancestry who take advantage of this" that is a lie.
The quotas are not based on ethnicity solely. To qualify you had to whakapapa (whāngi children probably qualified even if they did not whakapapa, I do not know), but you also had to be culturally Māori.
Lies and bigotry are not extinct in Aotearoa, but they are in retreat. The baby boomers are very disorientated, but the millennials are loving it.
Better for everybody
Why would any country yield given the hard line negotiating stance the US is now taking? And the flip flopping and unclear messaging on our positions?
> Among the priorities set out in the joint declaration signed by countries including China, India, and Germany was “reinforcing international co-operation to promote co-ordination in international governance.”
so looks like they did
, at the same time, the goal of the declaration and summit to become less reliant on US and China.
> Meanwhile, Europe is seeking a foothold in the AI industry to avoid becoming too reliant on the US or China.
So basically Europe signed together with China to compete against US/UK or what happend?
At least they aren't threatening to invade our countries or extorting privileged position.
It's possible to stop developing things. It's not even hard; most of the world develops very little. Developing things requires capital, education, hard work, social stability and the rule of law. Many of us writing on this forum take those things for granted but it's more the exception than the rule, when you look at the entire planet.
I think we will face the scenario of runaway AI, where we lose control, and we may not survive. I don't think it will be a sky-net type of thing, sudden. At least not at first. What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow. It will take some decades--though probably not many. Then, a face-off will come one day, perhaps. Humans vs them.
But if we do survive and come to regret the development of advanced AI and have a second chance, it will be trivially easy to suppress them: just destroy the semiconductor fabs, treat them the same way we treat ultra-centrifuges for enriching Uranium. Cut off the dangerous data centers, and forbid the reborn universities[1] from teaching linear algebra to the students.
[1]: We will lose advanced education for the masses on the way, as it won't be economically viable nor necessary.
That still feels like complete science fiction to me - more akin to appointing a complicated Excel spreadsheet as a board member.
Board members using tools like ChatGPT or Excel as part of their deliberations? That's great.
Replacing a board member entirely with a black box automation that makes meaningful decisions without human involvement? A catastrophically bad idea.
If the US were willing to compromise some of it's core values, then we could probably stop AI development domestically.
But what about the rest of the world? If China or India want to reap the benefits of enhanced AI capability, how could we stop them? We can hit them with sanctions and other severe measures, but that hasn't stopped Russia in Ukraine -- plus the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.
So if we can't stop the world from developing these things, why hamstring ourselves and let our competitors have all of the benefits?
The mere fact that you imagine that Moscow's motivation in invading Ukraine is economic is a sign that you're missing the main reasons Moscow or Beijing would want to ban AI: (1) unlike in the West and especially unlike the US, it is routine and normal for the government in those countries to ban things or discourage their use, especially new things that might cause large societal changes and (2) what Moscow and Beijing want most is not economic prosperity, but rather to prevent another one of those invasions or revolutions that kills millions of people and to prevent the country's ruling coalition from losing power.
Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?
This whole discussion is basically a variation on the prisoner's dilemma. Either you cooperate and AI risks are mitigated, or you do not cooperate and try to take the best outcome for yourself.
I think we can expect the latter. Not because it is the right thing or because it is the optimal decision for humanity, but because each individual will deem it their best choice, even after accounting for P(doom).
That is why the US and Europe should stop AI in their territories first especially as the US and Britain have been the main drivers of AI "progress" up to now.
Note: I'm not quite a doomer, but definitely a pessimist.
Great, can't wait for even some small improvement over the idiots in charge right now.
The entire thing is little more than a thought experiment.
> Look at how fast AI has advanced, it you just project that trend out, we'll have human-level agents by the end of the decade.
No. We won't. Scale up transformers as big as you like, this won't happen without massive advances in architecture and hardware.
I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
This is one step from Pascal's Wager, but being presented as fact by otherwise smart people.
Yes. Nobody can predict the future.
> but the idea it'll happen any day now, and by accident is bullshit.
We agree on that one: it won't be sudden, and it won't be by accident.
> I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
Exactly. Not by accident. But if you believe it's possible, then we are both doomers.
The thing is, there are forces at play that want this. It's all of us. We in society want to remove other human beings from the chain of value. I use ChatGPT today to not pay a human editor. My boss uses Suno AI to play generated music with pro-productivty slogans before Teams meetings. The moment the owners of my enterprise believe it's possible to replace their highly paid engineers with AIs, they will do it. My bosses don't need to lift a finger today to ensure that future. Other people have already imagined it, and thus, already today we have well-founded AI companies doing their best to develop the technology. Their investors see an opportunity on making highly-skilled labor cheaper, and they are dumping their money into that enterprise. Better hardware, better models, better harnesses for those models. All of that is happening at speed. I'm not counting on accidents there. If anything, I'm counting on accidents Chernobyl style that make us realize, when there is still time, if we are stepping into danger.
> What will happen is that we will replace humans by AIs in more and more positions of influence and power,
With all due respect, and not to be controversial, but how is this concern any more valid than the 'great replacement' worries.
You cannot face the world with how you want it to be, but only as it is.
What we know today is that a relatively straightforward series of matrix multiplications leads to what is perceived to be intelligence. This is simply true no matter how many declarations one signs.
Given that this is the case, there is nothing left to be done unless we want to go full Butlerian Jihad
There are a few non-linear function operations in between the matrix multiplications.
Fortunately those days are over. Any politician dealing with a technical issue over their head can turn to an LLM and ask for comment. "Is signing this poorly thought out, difficult to interpret, laundry list of vague regulations, that could limit LLM progress, really a good idea? Break this down for me like I am 5, please."
(Even though the start appeared trivial, happenstance, even benign, the age where AI's rapidly usurped there own governance had begun. The only thing that could have made it happen faster, or more destructively, were those poorly thought out international agreements the world was lucky to dodge.)
- there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation
- current "AI safety" work does basically nothing to address this and is kind of pointless
It's important that AI-enabled processes which affect humans are fair. But that's just a subset of a general demand for justice from the machine of society, whether it's implemented by humans or AIs or abacuses. Which comes back to demanding fair treatment from your fellow humans, because we haven't solved the human "alignment problem".
> Worldcoin's business is to provide a reliable way to authenticate humans online, which it calls World ID.
[1]: https://en.m.wikipedia.org/wiki/World_(blockchain)
I want to know whether an image or video is largely generated by AI, especially when it comes to news. Images and video often imply that they are evidence of something actually happening.
I don't know how this would be achieved. I also don't care. I just want people to be accountable and transparent.
Rules like this would just lead to everything having an “AI generated” label.
People have tried it in the past with trying to require fashion magazines and ads warn when they photoshop the models. But obviously everything is photoshopped, and the problem becomes how do we separate good photoshop (levels, blemish remover?) from bad photoshop (warp tool?).
[0] https://appleinsider.com/articles/23/11/30/a-bride-to-be-dis...
That happened years ago. And without llms
Europe is hopeless so it does not make a difference. China can sign and ignore it so it does not make a difference.
But it would not be wise for the USA to have their hands tied up so early. I suppose that the UK wants to go their usual "lighter touch regulation" than the EU route to attract investment. Plus they are obviously trying hard to make friends with the new US administration.
Not just that. A speaker in a conference I attended about a month ago mentioned that UK is actively drifting away from EU's stance, particularly on the aspect of AI safety in practice.
The upcoming European AI act has "machine must not make material decisions" as its cornerstone. UK are hell-bent to get AI into government functions, to ostensibly make everything more efficient. As part of that drive, the UK is aiming to allow AI to make material decisions, without human review or recourse. In a country still in the throes of the Post Office / Horizon scandal, that really takes some nerve.
Those in charge in this country know fully well that "AI safety" will be in violent conflict with the above.
As an attempt at a response, the UK is not party to the "EU AI Act" or the "DMA/DSA", we left before they were passed as law in the EU. The UK has its own "Digital Markets Act", but it is not an EU regulation. The GDPR is an inherited EU regulation.
The AI summit was French led, to get a global consensus on what sort of AI protections should be in place it looks like. The declaration was specific to this summit.
So, nothing to do with the EU, not a regulation.
If you add regulations, people will use other AI companies from countries without them. The only result of that would be losing the AI race.
You can see this at Huggingface top models, fine-tuned models are way more popular than official ones.
And this is also good considering most companies (even China) offer their models free to download and use locally. Democratizing AI is the good approach here.
What would this declaration mean for free and open source models?
For example Deepseek uses the MIT License.
Signed by 60 countries out of "more than 100 participants", it just looks comically pathetic except "China" part:
Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Cambodia, Canada, Chile, China, Croatia, Cyprus, Czechia, Denmark, Djibouti, Estonia, Finland, France, Germany, Greece, Hungary, India, Indonesia, Ireland, Italy, Japan, Kazakhstan, Kenya, Latvia, Lithuania, Luxembourg, Malta, Mexico, Monaco, Morocco, New Zealand, Nigeria, Norway, Poland, Portugal, Romania, Rwanda, Senegal, Serbia, Singapore, Slovakia, Slovenia, South Africa, Republic of Korea, Spain, Sweden, Switzerland, Thailand, Netherlands, United Arab Emirates, Ukraine, Uruguay, Vatican, European Union, African Union Commission.
I wonder why... maybe because it look like US replaced some "moral values" (not talking about "woke values" here, just plain "humanistic values" like in Human Rights Declaration) with "bottom line values" :-)
Hmm. > Donald Trump had a fiery phone call with Danish prime minister Mette Frederiksen over his demands to buy Greenland, according to senior European officials.
https://www.theguardian.com/world/2025/jan/25/trump-greenlan...
> The president has said America pays $200bn a year 'essentially in subsidy' to Canada and that if the country was the 51st state of the US 'I don’t mind doing it', in an interview broadcast before the Super Bowl in New Orleans
https://www.theguardian.com/us-news/video/2025/feb/10/trump-...
"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.
For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 to allow countries to discuss how advanced AI technologies can be discussed safely. There was a mini-conference in Korea, France was given the opportunity to organise the next big conference, a trust they immediately betrayed by changing the event to be about promoting investment in their AI industry.
They renamed the summit to the AI Action Summit and relegated safety from the sole focus to being just one of five focus areas, but not even one of five equally important focus areas, but one that seems to have been purposefully minimized even further.
Within the conference statement safety was reduced to a single paragraph that undermines safety if anything:
“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”
Let’s break it down: • First, safety is being framed as “trust and safety”. These are not the same things. The word trust appearing first is not as innocent as it appears: trust is the primary goal and safety is secondary to this. This is a very commercial perspective, if people trust your product you can trick them into buying it, even if it isn't actually safe. • Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the "economic benefits". While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised. • Finally, the statement doesn’t commit to continuing to address these risks, but only narrowly to “addressing the risks of AI to information integrity” and “continue the work on AI transparency”. In other words, they’re purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.
Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."
The US and the UK were right to reject it.
What’s much more important is strengthening rights that could be weakened by large scale data analysis of a population.
The right to a private life, and having minimal data collected — and potential then stolen — about your life.
The right of the state to investigate you for committing a crime using models and statistics only if a judge issues them a warrant to do so.
The right in a free market economy to transparent and level pricing instead of being gouged because an AI thinks people with physical characteristics similar to mine have lots of money.
Banning models that can create illegal images feels like legislators not aiming nearly high enough or smart enough:
https://www.bbc.co.uk/news/articles/c8d90qe4nylo
A little bit of Schadenfreude would feel really good right about now, what bothers me so much is that it's just symbolic for the US and UK NOT to sign these 'promises'.
It's not as if anyone would believe that the commitments would be followed through with. It's frustrating at first, but in reality this is a nothing burger, just emphasizing their ignorance.
> “The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips,”
Sure, those american AI chips that are just pumping out right now. You'd think the administration would have advisers who know how things work.
That would be a kneejerk, short-sighted, self-destructive position to take, so I can believe people would do it.
And even more honestly, nobody cares
“Vance just dumped water all over that. [It] was like, ‘Yeah, that’s cute. But guess what? You know you’re actually not the ones who are making the calls here. It’s us,’” said McBride.
"If you are not capable of violence, you are not peaceful. You are harmless"
Unless you can stand on equal field - either by alliance, or by your own power - you aren't a negotiating partner, and i say that as European.
this is exactly the value that caused so much war and death all over the world, for decades and thousands of years. still, even in 2025, it's being followed. are we doomed, chat?
Eg. birds abandoning rather than defending a perch when another approaches.
We're typically not happy to do that, though you can see it happening in some parts of the world right now.
Some kind of enlightened state where violent competition for resources (incl. status & power) no longer makes sense is imaginable, but seems a long way off.
The idea though is that if everyone suddenly disarmed overnight it would be so highly advantageous to a deviant aggressor that one would assuredly emerge.
yes.
i would also recommend The Prince as light reading to better understand how the world works.
Those with the biggest economies and/or most guns has changed a few times but the behaviours haven't and probably never will.
And the extent of which you can do global enforcement (which is often biased and selective) is limited by the reach of your economic and military power.
Which is why the US outspends the rest of the world military powers combined and how the US and their troops have waged ilegal wars and committed numerous crimes abroad and gotten away with it despite pieces of papers saying what they're doing is bad, but their reaction was always "what are you gonna do about it?".
See how many atrocities have happened under the watch of the UN. Laws aren't real, it's the enforcement that is real. Which is why the bullies get to define the laws that everyone else has to follow because they have the monopoly on enforcement.
Well, yes. This is why people have been paying a lot of attention to what exactly "rule of law" means in the US, and what was just norms that can be discarded.
Where it was used in a rhetorical tantrum throwing response to their power refuse to do the impossible like make an encryption backdoor 'only for good guys' and have the sheer temerity to stand against arbitrary exercises of authority by using the courts to check them only to their actual power.
If actual 'more powerful than the states' occurs they have nobody to blame but themselves for crying wolf.
The danger of something like AI+drones (or less imminent, AI+bioengineering) can lead to a severe degradation of security, like after the invention of nuclear weapons. A degradation in security, which requires collective action. Even worse, chaos could be caused by small groups weaponizing the technology against high profile targets.
If anything, the larger nations might be much more forceful about AI regulation than the above summit by demanding an NPT style treaty where only a select club has access to the technology in exchange for other nations having access to the applications of AI from servers hosted by the club.
You don't justify or define "severe degradation of security" just assert it as a fact.
The advent of nuclear weapons has meant 75 years of relative peace which is unheard of in human history, so quite the opposite.
Given that AI weapons don't exist, then you've just created a straw man.
I do claim that it is obvious that widespread acquisition of nuclear weapons by smaller states would be a severe degradation of security. Among other things, widespread ownership, would also mean that militant groups would acquire it and dictators would use it as a protection leading to an eventual use of the weapons.
Yes, the danger of AI weapons is nowhere at that level of nuclear weapons yet.
But, that is the trend.
https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...
https://news.ycombinator.com/item?id=42938125
For smaller countries nukes represented an increase in security, not a degradation. North Korea probably wouldn't still be independent today if it didn't have nukes, and Russia would never have invaded Ukraine if Ukraine hadn't given up its nukes. Restricting access to nukes is only in the interest of big countries that want to bully small countries around, because nukes level the playing field. The same applies to AI.
Regarding an increase in security with nukes, what you say applies for exceptions against a general non-nuclear background. Without restrictions, every small country could have a weapon, with a danger of escalation behind every conflict, authoritatrians using a nuclear option as a protection against a revolt etc. The likelihood of nuclear war would be much more(even with the current situation, there have been close shaves)
They need to dismantle bureaucracy to accelerate, NOT add new international agreements etc that would slow them down.
Once they become leaders, they will come up with such agreements to impose their "model" and way to do things.
Right now they need to accelerate and not get stuck.
But politics aside, this also points to something I've said numerous times here before: In order to write the rulebook you need to be a creator.
Only those who actually make and build and invent things get to write the rules. As far as "AI" is concerned, the creators are squarely the United States and presumably China. The EU, Japan, et al. being mere consumers sincerely cannot write the rules because they have no weight to throw around.
If you want to be the rulemaker, be a creator; not a litigator.
Exactly what I'd expect someone from a country where the economy is favoured over the society to say - particularly in the context of consumer protection.
You want access to a trade union of consumers? You play by the rules of that Union.
American exceptionalism doesn't negate that. A large technical moat does. But DeepSeek has jumped in and revealed how shallow that moat really is for AI at this neonatal stage.
https://www.foxnews.com/
In 2008 EU had more people, more money and bigger economy than US, with proper policies we could be in a place where we could bitch slap both Trump and Putin. And not left to wonder whose dick we have to suck deeper to get some gas.
I'm Japanese-American, so I'm not exactly happy about Japan's state of irrelevance (yet again). Their one saving grace as a special(er) ally and friend is they can still enjoy some of the nectar with us if they get in lockstep like the UK does (family blood!) when push comes to shove.
People and countries who make and ship products.
You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.
Be creators, not litigators.
That is completely wrong, at least if rules = the law. You might create fancy products all you like, if they do not adhere to the law in any given market, they cannot be sold there.
Create things? Or destroy them? Seems in reality, the most powerful nations are the ones who have acquired the greatest potential to destroy things. Creation is worthless if the dude next door is prepared to burn your house down because you look different to him.
You mean "Red, White, and Blueland"
https://www.congress.gov/bill/119th-congress/house-bill/1161...
Is this not exactly what the EU are doing?
https://edition.cnn.com/2025/01/22/world/video/danish-offici...
https://www.bbc.com/news/technology-60870287
We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.
I propose solutions to the current and multiversal AI alignment here https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...
Frankly, I can't stand these guys viewing themselves as some sort of high-IQ intellectual majority types when none of such labeling would be true and they're more like stereotypical tourists to the world. Though that's historically how anarchist university undergraduates had always been.
Information technology was never the constraint preventing moral consensus the way it was for, say, aggregating information. Not only is that a problem with achieiving the goals you lay out, its also the problem with the false assumption that they are goals most would agree should be solved as you have framed them.