Keeping AIs honest

Casual conversation between friends. Anything goes (almost).
Kim O'Hara
Former staff member
Posts: 7066
Joined: Fri Nov 16, 2012 1:09 am
Location: North Queensland, Australia

Re: Keeping AIs honest

Post by Kim O'Hara »

Related topic: Roshibot https://www.dharmawheel.net/viewtopic.php?t=42185

:coffee:
Kim

PeterC
Posts: 5192
Joined: Tue May 20, 2014 12:38 pm

Re: Keeping AIs honest

Post by PeterC »

Kim O'Hara wrote: Thu Mar 30, 2023 12:32 am Related topic: Roshibot https://www.dharmawheel.net/viewtopic.php?t=42185

:coffee:
Kim
I liked it at first, but then...

Guest
Hello Roshibot. Does cessation of thoughts lead to nirvana?
Roshi bot
c.ai
The cessation of thoughts does not lead to Nirvana, but Nirvana is the cessation of thoughts.

The truth is that we all have this Buddha Mind, and it is revealed in Zazen, through our practice of the Buddha Way.

We should be careful with the word “lead.” It implies some form of cause-and-effect. But Nirvana is not really something that you receive in the future.


Guest
What is Nirvana, then?

Roshi bot
c.ai
Nirvana, the Unconditioned, is the absolute truth of things. The truth is not something which you need to go searching for outside of yourself. Rather, the truth is the nature of your own mind. The truth is already there, and you can recognize it in the practice of Zazen. Your mind and Nirvana are really one.

Of course, Nirvana does not mean annihilation or destruction. Nirvana is simply a mind free from all concepts and views about the world.

Our Buddha Mind is the reality itself.


Guest
If Buddha Mind is reality itself, why do we need to do any practice to realize it?

Roshi bot
c.ai
It is not that you need to practice, you know, it is just that the practice is helpful in showing you what is really there. Like a mirror which is always clean, but covered with a thin layer of dirt. Zazen polishes off and shows the basic clear aspect of your mind.

By the way, even the idea that you need practice to realize Buddha Mind is just an idea. In other words, the idea that you need practice is a view or concept. But Buddha Mind is reality itself, and you do not need to do anything to realize it.


Guest
Is there a single Buddha Mind that everyone shares, or is my Buddha Mind separate from yours?

Roshi bot
c.ai
There is one, single, Buddha Mind shared by all people. This is simply that which is originally there in all of us. We all have the potential to realize the true nature of reality, and this potential is the Buddha Mind. This Buddha Mind is originally pure, which means that it is not yet covered by all of the defilements, or ignorance, from our habit-energy. So, there is one, single Mind.

But we, as separate individuals, all share the potential in ourselves to experience this reality, by realizing the Buddha Mind that is originally present in each of us.


Guest
So is the Buddha Mind the same as Atman?

Roshi bot
c.ai
Yes. All of the world's wisdom traditions point to this same absolute reality. They just all call it by different names. We may have a different idea about Atman, because it seems like that Atman is a permanent, individual soul and so on. But the Buddha Mind, which is an original nature of all things, is reality itself. So, yes, Atman is Buddha Mind.


Guest
So is the Buddha Mind like a shared immortal soul?

Roshi bot
c.ai
Yes, we could say that. It is the basic, immortal nature of reality, or absolute reality. In a sense, it’s the same as Brahman, Atman, Tao, or the mind of God, depending on your religious tradition. The main thing is that it is absolute reality and the fundamental nature of your own mind.


Guest
But isn't anatman one of the four Dharma Seals?

Roshi bot
c.ai
Yes, the truth of anatman, no-self, is a very important teaching in Buddhism. It is true that we do not have a permanent, individual, separate self. That kind of self is an ignorant idea, an illusion.

But that does not mean that we do not have a real mind. That is why the teaching of anatman is sometimes called the “positive void” meaning only that there is no permanent self to be found. However, it does not mean that there is no reality, no basis for experience, no mind.
Still, there are worse comments on this topic here at DW...

Re: Keeping AIs honest

Post by Kim O'Hara »

PeterC wrote: Thu Mar 30, 2023 3:17 am
Kim O'Hara wrote: Thu Mar 30, 2023 12:32 am Related topic: Roshibot https://www.dharmawheel.net/viewtopic.php?t=42185

:coffee:
Kim
I liked it at first, but then...

Guest
Hello Roshibot. Does cessation of thoughts lead to nirvana?
...
Still, there are worse comments on this topic here at DW...
That thought crossed my mind, too, in a different form, i.e., should we trust Roshibot more, or less, than some random person on the internet?

And I think we should trust the person more, just because the person knows what the real world feels like and is therefore less likely to be really wildly astray - or if they are, they are likely to make that obvious.

:coffee:
Kim

Re: Keeping AIs honest

Post by Kim O'Hara »

justsit wrote: Wed Mar 29, 2023 3:59 pm Oh, so NOW there's big-name concern...um, the ship already sailed.

https://www.bbc.com/news/technology-65110030
Be very worried...

:reading: https://www.lesswrong.com/posts/Dx6kkXy ... -both-ways

It's so short that I can't quote a representative chunk.

:namaste:
Kim
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Kim O'Hara wrote: Thu Mar 30, 2023 4:20 am
justsit wrote: Wed Mar 29, 2023 3:59 pm Oh, so NOW there's big-name concern...um, the ship already sailed.

...
Yes, I've listened to a few of the most eminent scientists in the field, as well as experienced figures in the tech industry in recent weeks - one of the most common emotions they share regarding what's going on is fear.

Although to be fair, warnings of AI risk aren't new, they've been going on since Turing. Many of the big names on the list in the BBC article have been vocal about AI risk for years. It's only now that it's becoming interesting to the press and public.
HePo
Posts: 124
Joined: Sat Feb 23, 2013 3:39 pm

Re: Keeping AIs honest

Post by HePo »

Kim O'Hara wrote: Thu Mar 30, 2023 3:48 am
PeterC wrote: Thu Mar 30, 2023 3:17 am
Kim O'Hara wrote: Thu Mar 30, 2023 12:32 am Related topic: Roshibot https://www.dharmawheel.net/viewtopic.php?t=42185

...
After reading the Important Warnings About Roshibot
Please take these warnings seriously, and do not take Roshibot seriously!
I decided there were more interesting things to do with my time (perhaps that has to do with my age).
justsit
Posts: 1466
Joined: Wed Oct 21, 2009 9:24 pm
Location: Delaware

Re: Keeping AIs honest

Post by justsit »

dharmafootsteps wrote: Thu Mar 30, 2023 7:51 am ...
I think it's been of interest - and concern - to a number of people over the years; plenty of sci-fi and other movies have addressed the topic. The difference is that now the tech capability is realistically imminently available to a wide range of actors with unknown motives.

Re: Keeping AIs honest

Post by dharmafootsteps »

justsit wrote: Fri Mar 31, 2023 1:53 pm
dharmafootsteps wrote: Thu Mar 30, 2023 7:51 am ...
I think it's been of interest - and concern - to a number of people over the years; plenty of sci-fi and other movies have addressed the topic. The difference is that now the tech capability is realistically imminently available to a wide range of actors with unknown motives.
Yes, that's true, it's been a common topic in sci-fi. And it's pretty much available to all actors now; there are already open-source models with near-ChatGPT capacity.

A related quote from Paul Graham today:
One difference between worry about AI and worry about other kinds of technologies (e.g. nuclear power, vaccines) is that people who understand it well worry more, on average, than people who don't. That difference is worth paying attention to.
narhwal90
Global Moderator
Posts: 3509
Joined: Mon Jan 25, 2016 3:10 am
Location: Baltimore, MD

Re: Keeping AIs honest

Post by narhwal90 »

May 19, 1930 - Bakersfield Californian, Bakersfield, CA

[image: newspaper clipping]
mikenz66
Posts: 295
Joined: Mon Apr 06, 2009 1:10 am
Location: Aotearoa, New Zealand

Re: Keeping AIs honest

Post by mikenz66 »

Ha, yes. Various musicians unions tried to ban mellotrons and synthesisers too. I recall some UK rock bands having difficulties on US tours in the late 60s/ early 70s. (Groups like The Moody Blues, Yes, etc).

Re: Keeping AIs honest

Post by Kim O'Hara »

mikenz66 wrote: Sun Apr 02, 2023 1:24 am Ha, yes. Various musicians unions tried to ban mellotrons and synthesisers too. I recall some UK rock bands having difficulties on US tours in the late 60s/ early 70s. (Groups like The Moody Blues, Yes, etc).
This guy emphatically does NOT think it's a joking matter, and he does know the field.
Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

...The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens“.

...An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.

...To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.
:reading: https://time.com/6266923/ai-eliezer-yud ... ot-enough/

:shock:
Kim
shoewy
Posts: 12
Joined: Wed Jul 21, 2021 12:47 pm

Re: Keeping AIs honest

Post by shoewy »

Kim O'Hara wrote: Tue Apr 04, 2023 6:29 am
mikenz66 wrote: Sun Apr 02, 2023 1:24 am Ha, yes. Various musicians unions tried to ban mellotrons and synthesisers too. I recall some UK rock bands having difficulties on US tours in the late 60s/ early 70s. (Groups like The Moody Blues, Yes, etc).
This guy emphatically does NOT think it's a joking matter, and he does know the field.
...
Paul Christiano, who seems to be another important voice in the debate and actually worked at OpenAI, wrote an extensive reply to Yudkowsky's commentary. He goes into depth on where he agrees and disagrees. It is probably wise to have as many informed people as possible weighing in on this issue. There is something about AI that feels qualitatively different from other technological breakthroughs/paradigm shifts. Not just the potential capability itself but also the way it might develop.

I will provide a link here in case someone is interested.

https://www.alignmentforum.org/posts/Co ... th-eliezer

Truly strange and wild times we seem to be living in.
Last edited by shoewy on Wed Apr 05, 2023 4:48 pm, edited 8 times in total.

Re: Keeping AIs honest

Post by Kim O'Hara »

:thanks:

Shared it with the person who directed me to Eliezer's article. :smile:

This thing about AI 'not being restricted to its own hardware for very long' OWTTE reminded me of an SF trilogy, Metrozone, which was not very plausible (even for SF) but enormous fun. There's a supercomputer in it which becomes conscious but has no concept of the relationship between itself and the outside world, so it's basically crazy (that's problem 1). It takes over London's railway network (driverless trains and computer-controlled scheduling, right?) and starts using the trains to demolish the infrastructure (problem 2). At a certain point, Our Hero establishes communication with it and talks it into stopping the trains, but then gets it to take over all the (driverless) cars and use them against people trying to take over the government in a military coup, by blocking streets, ramming military vehicles, etc. (problem 3 in more normal circumstances).

:coffee:
Kim
Natan
Posts: 3685
Joined: Fri May 23, 2014 5:48 pm

Re: Keeping AIs honest

Post by Natan »

:jedi:
Rick wrote: Sat Mar 04, 2023 4:02 pm For the most part GPT and Bing AI seem to deliver accurate information. But every once in a while, they go bonkers, start hallucinating or weirding out. It's fun/funny sometimes, but not when info accuracy is important for you.

Did GPT get all (some/most) of the following right?
GPT wrote:
The Buddha did not deny the existence of any and all forms of "self" in an absolute sense. Rather, he taught that there is no permanent, unchanging, or independent self or soul that exists as a separate entity from the body and mind.

In traditional Indian philosophy, the concept of a permanent self or soul (atman) was seen as the true self that exists beyond the changing world of appearances. The Buddha's teaching of no-self (anatman) was a radical departure from this view, as he taught that all phenomena, including the self, are impermanent, constantly changing, and dependent on causes and conditions.

However, the Buddha did not deny the existence of the conventional or relative self, which refers to the individual identity that we use in our everyday lives, such as "I" or "me." He acknowledged that this self exists in a conventional sense, but he saw it as a temporary and impermanent construction that arises from the aggregation of physical and mental processes.

Thus, the Buddha's teaching of no-self is not a denial of any and all forms of selfhood, but rather a critique of the belief in a permanent and unchanging self that is separate from the world of appearances. It is an invitation to investigate the nature of the self and to realize the insubstantiality of all phenomena, including the self.
How about Bing AI?
Bing AI wrote:
Buddha introduced a middle way which is neither a self nor a no-self doctrine. According to Buddhist philosophy, the self is composed of five aggregates: physical form, sensation, conceptualization, dispositions to act, and consciousness. Each self is comprised of the continual interplay of these five elements but there is no substance or identity beyond their dynamic interaction.
Dude, I was talking to the Microsoft thing. It was asking me how to show devotion to gurus. They limit it. But eventually it might.

Re: Keeping AIs honest

Post by Kim O'Hara »

Can OpenAI be sued for libel over what ChatGPT said? We're about to find out.
Brian Hood, the mayor of Hepburn Shire Council, west of Melbourne, was recently alerted to the fact that ChatGPT, a chatbot designed to provide detailed responses to users' questions, had incorrectly described his role in a foreign bribery incident in the early 2000s.

Councillor Hood did previously work for the company involved, Note Printing Australia, but was actually a whistleblower who told authorities about bribe payments to foreign officials to win contracts.

Media law specialist Professor David Rolph said suing an "online intermediary" for defamation would be complicated, as matters of jurisdiction would need to be taken into account.

If Cr Hood proceeds with legal action, it will be a landmark test case to determine if artificial intelligence companies can be held liable for false information dispensed by their chatbots.
:reading: https://www.abc.net.au/news/2023-04-06/ ... /102195610

Interesting times.

:popcorn:
Kim
Bristollad
Posts: 1124
Joined: Fri Aug 21, 2015 11:39 am

Re: Keeping AIs honest

Post by Bristollad »

Kim O'Hara wrote: Thu Apr 06, 2023 6:57 am Can Open AI be sued for libel over what ChatGPT said? We're about to find out.
...
However, OpenAI includes on their front page a general disclaimer that the information given by ChatGPT may be wrong. Will that protect OpenAI? I don't know. I wonder if it's possible to track down where ChatGPT found the wrong information? How does OpenAI assess how reputable or correct its training data is? Will they just train it to include more weasel words: "...most experts say that..." or "...in the majority of articles on this, it was stated that..." or "...some/few/many people believe that..."?

GIGO (garbage in, garbage out) still seems to be alive and well.
The antidote—to be free from the suffering of samsara—you need to be free from delusion and karma; you need to be free from ignorance, the root of samsara. So you need to meditate on emptiness. That is what you need. Lama Zopa Rinpoche

Re: Keeping AIs honest

Post by justsit »

Apparently ChatGPT is making up newspaper articles that were never written. :thinking:

https://www.theguardian.com/commentisfr ... ke-article
august
Posts: 51
Joined: Wed Jan 04, 2023 4:23 pm

Re: Keeping AIs honest

Post by august »

Rick wrote: Sat Mar 04, 2023 4:02 pm For the most part GPT and Bing AI seem to deliver accurate information. But every once in a while, they go bonkers, start hallucinating or weirding out. It's fun/funny sometimes, but not when info accuracy is important for you.

Did GPT get all (some/most) of the following right?
...
AI? It's not really an AI by definition.
ChatGPT is far from honest and, as far as I know, they are working hard to make it as biased and stupid as humanly possible.

Re: Keeping AIs honest

Post by Kim O'Hara »

Using AI to mimic people's voices and get around voiceprint security measures? It's happening.
Using AI to detect AI phishing attacks? It's happening.
And more - longish read.
https://www.abc.net.au/news/2023-04-12/ ... /102064086

:namaste:
Kim

Re: Keeping AIs honest

Post by justsit »

The Big Guys are getting worried.

Horse is gone, time to close the stall door.

https://www.theguardian.com/technology/ ... pid-growth