
Artificial Intelligence

Posted: Tue Feb 11, 2025 1:52 pm
by Son of Mathonwy
AI is a unique technology, and uniquely dangerous - it's the only technology which could realistically cause our extinction in the medium term*. So we ought to work together to control this thing as best we can, to try to build safeguards into it. Naturally the US and the UK choose short-term economic gain over the future of humanity.

https://www.theguardian.com/technology/ ... eclaration


* nuclear and biological weapons, pandemic and climate change could all end our civilization but IMO only AI could make us extinct, at the hands (?) of our own creation.

Re: Artificial Intelligence

Posted: Wed Feb 12, 2025 1:05 pm
by Puja
Son of Mathonwy wrote: Tue Feb 11, 2025 1:52 pm AI is a unique technology, and uniquely dangerous - it's the only technology which could realistically cause our extinction in the medium term*. So we ought to work together to control this thing as best we can, to try to build safeguards into it. Naturally the US and the UK choose short-term economic gain over the future of humanity.

https://www.theguardian.com/technology/ ... eclaration


* nuclear and biological weapons, pandemic and climate change could all end our civilization but IMO only AI could make us extinct, at the hands (?) of our own creation.
I will take a moment to note that the thing which has been deemed "Artificial Intelligence" by its creators/publicists is actually nothing of the kind and is a ridiculously long distance from actual AI, if indeed it is a linked technology at all. That's not to say that we shouldn't be cautious about the new technology, which should more properly be called a generative text engine or generative images engine, but that's because of the societal implications of being able to automate and mass-produce the creation of plausible writing/pictures/video, not because there's any danger of it doing any thinking or understanding.

Puja

Re: Artificial Intelligence

Posted: Wed Feb 12, 2025 4:26 pm
by Son of Mathonwy
Puja wrote: Wed Feb 12, 2025 1:05 pm
Son of Mathonwy wrote: Tue Feb 11, 2025 1:52 pm AI is a unique technology, and uniquely dangerous - it's the only technology which could realistically cause our extinction in the medium term*. So we ought to work together to control this thing as best we can, to try to build safeguards into it. Naturally the US and the UK choose short-term economic gain over the future of humanity.

https://www.theguardian.com/technology/ ... eclaration


* nuclear and biological weapons, pandemic and climate change could all end our civilization but IMO only AI could make us extinct, at the hands (?) of our own creation.
I will take a moment to note that the thing which has been deemed "Artificial Intelligence" by its creators/publicists is actually nothing of the kind and is a ridiculously long distance from actual AI, if indeed it is a linked technology at all. That's not to say that we shouldn't be cautious about the new technology, which should more properly be called a generative text engine or generative images engine, but that's because of the societal implications of being able to automate and mass-produce the creation of plausible writing/pictures/video, not because there's any danger of it doing any thinking or understanding.

Puja
I wish I could share your confidence on that.

AI has taken incredible steps in the last few years, and has consequently sucked in vastly greater amounts of effort and money, which will in turn accelerate its progress. The immediate problems are not extinction - there are huge risks which are closer - but if we don't build safeguards into AI now, we are at much greater risk of a Skynet situation.

Re: Artificial Intelligence

Posted: Wed Feb 12, 2025 10:44 pm
by Son of Mathonwy
Ah, Google. The company whose motto used to be 'don't be evil'. Now ...
the company removed prohibitions against building AI for weapons and surveillance
https://www.theguardian.com/us-news/202 ... ing-ai-dei

https://www.theguardian.com/technology/ ... ey-the-law

Re: Artificial Intelligence

Posted: Wed Feb 12, 2025 10:50 pm
by Sandydragon
Son of Mathonwy wrote: Wed Feb 12, 2025 4:26 pm
Puja wrote: Wed Feb 12, 2025 1:05 pm
Son of Mathonwy wrote: Tue Feb 11, 2025 1:52 pm AI is a unique technology, and uniquely dangerous - it's the only technology which could realistically cause our extinction in the medium term*. So we ought to work together to control this thing as best we can, to try to build safeguards into it. Naturally the US and the UK choose short-term economic gain over the future of humanity.

https://www.theguardian.com/technology/ ... eclaration


* nuclear and biological weapons, pandemic and climate change could all end our civilization but IMO only AI could make us extinct, at the hands (?) of our own creation.
I will take a moment to note that the thing which has been deemed "Artificial Intelligence" by its creators/publicists is actually nothing of the kind and is a ridiculously long distance from actual AI, if indeed it is a linked technology at all. That's not to say that we shouldn't be cautious about the new technology, which should more properly be called a generative text engine or generative images engine, but that's because of the societal implications of being able to automate and mass-produce the creation of plausible writing/pictures/video, not because there's any danger of it doing any thinking or understanding.

Puja
I wish I could share your confidence on that.

AI has taken incredible steps in the last few years, and has consequently sucked in vastly greater amounts of effort and money, which will in turn accelerate its progress. The immediate problems are not extinction - there are huge risks which are closer - but if we don't build safeguards into AI now, we are at much greater risk of a Skynet situation.
It’s possible that AGI will be achieved, but we aren’t there yet. The Skynet scenario is still a way off. But it is right to consider what safeguards should be in place now to avoid problems later.