Artificial Intelligence & Sentience

Whether you're exploring Buddhism for the first time or you're already on the path, feel free to ask questions of any kind here.

Re: Artificial Intelligence & Sentience

Postby Astus » Fri May 10, 2013 9:45 am

Hickersonia wrote:This sparked an odd vision in my mind of a bunch of supercomputers talking to each other, arguing "...a biochemical processor, that is not possible..."

I agree that artificial intelligence is unlikely, but I won't use words like "impossible."


I say it's not possible because that would mean the mind is actually made of insentient elements and bound to a fixed causality. Intelligence requires the ability of reflection: being able to review, modify and adapt. In short, this is called self-awareness. The mind is not bound to any fixed idea or system; that's why it is basically free, open, empty. If you program something, it means there is a fixed system, and that system sets its boundaries; it is not open.
"There is no such thing as the real mind. Ridding yourself of delusion: that's the real mind."
(Sheng-yen: Getting the Buddha Mind, p 73)

"Neither cultivation nor seated meditation — this is the pure Chan of Tathagata."
(Mazu Daoyi, X1321p3b23; tr. Jinhua Jia)

“Don’t rashly seek the true Buddha;
True Buddha can’t be found.
Does marvelous nature and spirit
Need tempering or refinement?
Mind is this mind carefree;
This face, the face at birth."

(Nanyue Mingzan: Enjoying the Way, tr. Jeff Shore; T2076p461b24-26)

Re: Artificial Intelligence & Sentience

Postby Wayfarer » Fri May 10, 2013 12:56 pm

René Descartes wrote:if there were such machines with the organs and shape of a monkey or of some other non-rational animal, we would have no way of discovering that they are not the same as these animals. But if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.


Discourse on Method (1637), trans. Desmond M. Clarke, Penguin
Learn to do good, refrain from evil, purify the mind ~ this is the teaching of the Buddhas

Re: Artificial Intelligence & Sentience

Postby Jesse » Fri May 10, 2013 4:57 pm

Astus wrote:I say it's not possible because that would mean the mind is actually made of insentient elements and bound to a fixed causality. Intelligence requires the ability of reflection: being able to review, modify and adapt. In short, this is called self-awareness. The mind is not bound to any fixed idea or system; that's why it is basically free, open, empty. If you program something, it means there is a fixed system, and that system sets its boundaries; it is not open.


In an AI, this would be accomplished by giving it access to its own source code.
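
Roughly what I have in mind, as a toy sketch only (hypothetical rules, nothing like a real AI system): a program whose behaviour is stored as data that it can inspect and rewrite whenever an action works out badly.

Code: Select all
import random

class SelfRewritingAgent:
    """Toy agent whose behaviour is stored as data it can inspect and rewrite."""

    def __init__(self):
        # The 'program': a mapping from observed situation to chosen action.
        self.rules = {"hungry": "wait", "tired": "wait"}

    def act(self, situation):
        return self.rules.get(situation, "wait")

    def reflect(self, situation, action, reward):
        # If an action worked out badly, rewrite the rule that produced it.
        if reward < 0:
            alternatives = ["eat", "sleep", "wait"]
            alternatives.remove(action)
            self.rules[situation] = random.choice(alternatives)

agent = SelfRewritingAgent()
outcomes = {"hungry": {"eat": 1, "sleep": -1, "wait": -1},
            "tired": {"eat": -1, "sleep": 1, "wait": -1}}

for _ in range(20):
    situation = random.choice(["hungry", "tired"])
    action = agent.act(situation)
    agent.reflect(situation, action, outcomes[situation][action])

print(agent.rules)   # usually ends up as {'hungry': 'eat', 'tired': 'sleep'}

Whether rewriting a rule table counts as the open kind of reflection you describe is, of course, exactly what we're debating.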
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein

Re: Artificial Intelligence & Sentience

Postby Astus » Fri May 10, 2013 5:42 pm

Jesse wrote:In an AI, this would be accomplished by giving it access to its own source code.


Wouldn't it still be code overwriting other code? That is, all possible actions are predetermined.
"There is no such thing as the real mind. Ridding yourself of delusion: that's the real mind."
(Sheng-yen: Getting the Buddha Mind, p 73)

"Neither cultivation nor seated meditation — this is the pure Chan of Tathagata."
(Mazu Daoyi, X1321p3b23; tr. Jinhua Jia)

“Don’t rashly seek the true Buddha;
True Buddha can’t be found.
Does marvelous nature and spirit
Need tempering or refinement?
Mind is this mind carefree;
This face, the face at birth."

(Nanyue Mingzan: Enjoying the Way, tr. Jeff Shore; T2076p461b24-26)
User avatar
Astus
Former staff member
 
Posts: 4203
Joined: Tue Feb 23, 2010 11:22 pm
Location: Budapest

Re: Artificial Intelligence & Sentience

Postby shel » Fri May 10, 2013 6:01 pm

Predetermined actions? Look at this video...



I don't think they programmed Titan to punch people.

Re: Artificial Intelligence & Sentience

Postby Jesse » Fri May 10, 2013 6:33 pm

Astus wrote:
Jesse wrote:In an AI, this would be accomplished by giving it access to its own source code.


Wouldn't it still be code overwriting other code? That is, all possible actions are predetermined.


In some ways yes, in some ways no. AI works using learning algorithms: their decisions are based on what they learn and integrate, not on hard-coded "mandates". These algorithms are, of course, loosely based on our own brains and minds.

For example:

http://io9.com/5820624/computer-teaches ... vilization
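
To make "decisions based on what they learn" concrete, here is a minimal sketch (toy numbers I made up, not the system from the article): a program that starts with no preference and ends up favouring whichever action pays off, without that answer ever being hard-coded.

Code: Select all
import random

# Toy learning algorithm: an epsilon-greedy agent estimates the value of two
# actions from experience instead of having the answer written in advance.
values = {"attack": 0.0, "defend": 0.0}   # learned estimates
counts = {"attack": 0, "defend": 0}
payoff = {"attack": 0.2, "defend": 0.8}   # hidden from the agent

for step in range(1000):
    if random.random() < 0.1:                     # explore occasionally
        action = random.choice(list(values))
    else:                                         # otherwise exploit what was learned
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < payoff[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(values)   # "defend" almost always comes out ahead, though nobody coded that rule in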
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby Astus » Fri May 10, 2013 9:14 pm

Jesse wrote:In some ways yes, in some ways no. AI works using learning algorithms: their decisions are based on what they learn and integrate, not on hard-coded "mandates". These algorithms are, of course, loosely based on our own brains and minds.

For example:
http://io9.com/5820624/computer-teaches ... vilization


Being able to adapt to some level is not the same as being free from codes. The difference I'm talking about is like this: a computer may be able to handle an artificial language (regular grammar, none of the idiomatic phrases and other difficulties of natural language) and operate within the boundaries of that language. The human mind, however, is capable of learning several languages and of reflecting on language, moving beyond grammar and words. That is, intelligence is more than just codes and rules.

The linked example is too brief a description (e.g. I doubt the program could actually read the manual as you or I can), and I think we would need to comprehend all the details before taking it into account.
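
As a rough illustration of operating within the boundaries of an artificial language (a toy sketch of my own, not any particular system): a recognizer for a fixed grammar handles everything inside that grammar and nothing outside it.

Code: Select all
import re

# A tiny artificial language: commands of the form "move <number> <direction>".
GRAMMAR = re.compile(r"^move (\d+) (north|south|east|west)$")

def parse(sentence):
    match = GRAMMAR.match(sentence)
    if match:
        return {"steps": int(match.group(1)), "direction": match.group(2)}
    return None   # anything outside the fixed grammar is simply unintelligible to it

print(parse("move 3 north"))                         # {'steps': 3, 'direction': 'north'}
print(parse("wander roughly that way, would you?"))  # None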
"There is no such thing as the real mind. Ridding yourself of delusion: that's the real mind."
(Sheng-yen: Getting the Buddha Mind, p 73)

"Neither cultivation nor seated meditation — this is the pure Chan of Tathagata."
(Mazu Daoyi, X1321p3b23; tr. Jinhua Jia)

“Don’t rashly seek the true Buddha;
True Buddha can’t be found.
Does marvelous nature and spirit
Need tempering or refinement?
Mind is this mind carefree;
This face, the face at birth."

(Nanyue Mingzan: Enjoying the Way, tr. Jeff Shore; T2076p461b24-26)
User avatar
Astus
Former staff member
 
Posts: 4203
Joined: Tue Feb 23, 2010 11:22 pm
Location: Budapest

Re: Artificial Intelligence & Sentience

Postby Jesse » Fri May 10, 2013 9:56 pm

Astus wrote:
Jesse wrote:In some ways yes, in some ways no. AI works using learning algorithms: their decisions are based on what they learn and integrate, not on hard-coded "mandates". These algorithms are, of course, loosely based on our own brains and minds.

For example:
http://io9.com/5820624/computer-teaches ... vilization


Being able to adapt to some level is not the same as being free from codes. The difference I'm talking about is like this: a computer may be able to handle an artificial language (regular grammar, none of the idiomatic phrases and other difficulties of natural language) and operate within the boundaries of that language. The human mind, however, is capable of learning several languages and of reflecting on language, moving beyond grammar and words. That is, intelligence is more than just codes and rules.

The linked example is too brief a description (e.g. I doubt the program could actually read the manual as you or I can), and I think we would need to comprehend all the details before taking it into account.


The computer code is irrelevant, because the code itself is simply the means by which it learns. You could compare that code to our brains: we learn, and in the process of learning we change both the structure of our brain and the connections within it. Computers may start out simple, but in the process of learning they can become more complex and able to do more, just like a child growing into an adult.
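
Here is a toy illustration of connections being rewritten by learning (a made-up example, far simpler than either a real brain or a modern neural network):

Code: Select all
# A single perceptron: its "connections" (weights) are changed by experience,
# not fixed in advance; after training the wiring computes logical AND.
weights = [0.0, 0.0]
bias = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # target: AND

for epoch in range(20):
    for (x1, x2), target in data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        # learning = adjusting the connections
        weights[0] += 0.1 * error * x1
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error

print(weights, bias)   # the learned wiring now behaves like AND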

intelligence is more than just codes and rules


We'll just have to disagree on this point, but codes and rules are simply causality conceptualized into language.

They are indeed able to learn new languages (just like anything else) and apply them. The computers and AI of today are limited, and not very intelligent. This will change though.

If you'd like, you can "talk" to one here: http://www.cleverbot.com (no, it's not sentient.) :tongue:

You are correct about human languages, though: at the moment it's proving extraordinarily difficult for computers to comprehend natural language, but this will change too (it's already getting pretty advanced in some ways).

Here is some information about that:
http://plato.stanford.edu/entries/turing-test/
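
For a sense of how far simple pattern-matching gets you, and where it stops, here is a minimal ELIZA-style sketch (hypothetical rules, far cruder than Cleverbot): it only matches patterns and echoes fragments back, which is roughly the kind of trick Descartes anticipated in the passage quoted above.

Code: Select all
import re
import random

# ELIZA-style chatbot: canned responses chosen by pattern matching, no comprehension.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bbuddha\b", re.I),   ["What does Buddha mean to you?"]),
]
DEFAULT = ["Please go on.", "Tell me more.", "I see."]

def reply(text):
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)

print(reply("I am worried about sentient machines"))
print(reply("Who is Buddha?"))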

If you forget "code", and simply say that by the process of observation and reflection, it can learn, change it's views, and adapt to new situations, does it sound so different from any other living being?
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby shel » Fri May 10, 2013 9:59 pm

I asked Cleverbot "Who is Buddha?" and it knew.

Re: Artificial Intelligence & Sentience

Postby Wayfarer » Fri May 10, 2013 11:18 pm

Jesse wrote:The computers and AI of today are limited, and not very intelligent. This will change though.


The challenge to AI is simply that computers are not intelligent at all, except in an allegorical sense. They don't comprehend; they process information. They are built to replicate certain functions of intelligence, but they are not intelligent, as they are not beings, but devices. And there's a categorical difference between beings and devices, or organisms and mechanisms. This is the basis of the critiques of AI by philosophers like Searle, Dreyfus, Penrose, Talbott and others.

Re: Artificial Intelligence & Sentience

Postby Jesse » Fri May 10, 2013 11:42 pm

jeeprs wrote:
Jesse wrote:The computers and AI of today are limited, and not very intelligent. This will change though.


The challenge to AI is simply that computers are not intelligent at all, except in an allegorical sense. They don't comprehend; they process information. They are built to replicate certain functions of intelligence, but they are not intelligent, as they are not beings, but devices. And there's a categorical difference between beings and devices, or organisms and mechanisms. This is the basis of the critiques of AI by philosophers like Searle, Dreyfus, Penrose, Talbott and others.


Comprehension is simply data that we have conceptualized and stored in our memories. We associate things, places, events, concepts, and ideas, and then build a network between the data via our neurons. It's the ultimate functionality that matters, not the pieces that comprise it. If you looked at all the pieces that make up the human body, you could easily say the same things about them. How does a mass of meat, water and electricity allow for sentience?
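
A toy version of what I mean by a network between the data (hypothetical concepts, nothing like the scale or richness of real neurons):

Code: Select all
from collections import defaultdict

# Toy associative memory: "comprehension" modeled as links between stored items.
associations = defaultdict(set)

def associate(a, b):
    associations[a].add(b)
    associations[b].add(a)

def recall(concept):
    return associations[concept]

associate("bottle", "drink")
associate("drink", "not hungry")
associate("bottle", "baba")

print(recall("bottle"))   # {'baba', 'drink'} (order may vary)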

It's possible we have some strangely high regard for consciousness, when it's really not that impressive. Just a series of conditions that allow for it to manifest.
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby undefineable » Sat May 11, 2013 12:02 am

Jesse wrote:
intelligence is more than just codes and rules


We'll just have to disagree on this point, but codes and rules are simply causality conceptualized into language.

True, but in Buddhist circles "intelligence" is often linked with "vidya", which is also translated as "wisdom". There are at least three problems with your definition of intelligence, vis-a-vis the wider definition that some will feel the term 'Artificial Intelligence' conveys (i.e. entire artificial minds):
1) Could AI receive a repackaged summary of the information it processes in the form of intuition - i.e. knowledge presented in a way that feels like a direct interface of self and other rather than mere perception/calculation?
2) Could AI reflect on the various codes and rules it learns from, whether or not this is done in the way I just described?
And crucially - following on from the other two -
3) Could we know whether or not AI is sentient - i.e. whether an experience of any of its internal processes is liable to occur alongside them?
randomseb wrote:Our "intelligence" is an arising pattern that emerges from the complexity of the interaction of billions of individual lifeforms in our head.. Do you think we can replicate this artificially?

'We' being a race in which the struggle of each against all only intensifies in times of plenty, rather than easing off as it does in other species?
Nah :tongue:
"Removing the barrier between this and that is the only solution" {Chogyam Trungpa - "The Lion's Roar"}

Re: Artificial Intelligence & Sentience

Postby Astus » Sat May 11, 2013 12:05 am

Jesse,

I don't equate the mind with the brain. If I believed that intelligence is simply a biochemical process, I might as well agree with the idea that AI is possible. However, according to my understanding, mind is basically immaterial; therefore it's not possible to build an intelligent machine. Being a mechanical device is in itself contrary to intelligence, to sentience. And that's where I see the impossibility of achieving AI. Even if they could build a robot that has a human body - quite necessary to replicate ordinary experience - and add to that a set of very complex processors, it would still have to follow strict deterministic rules. Intelligence, on the other hand, has no rules, it is not bound by any thought, feeling or sensory impression.

If you forget "code", and simply say that by the process of observation and reflection, it can learn, change it's views, and adapt to new situations, does it sound so different from any other living being?


To be able to observe and reflect you require consciousness, the very ability of knowing. But that knowing, that consciousness of the mind is open, not fixed. So it is not possible to build from fixed structures an unbound awareness. That is, mind is not simply a causality bound process, while a machine necessarily is.
"There is no such thing as the real mind. Ridding yourself of delusion: that's the real mind."
(Sheng-yen: Getting the Buddha Mind, p 73)

"Neither cultivation nor seated meditation — this is the pure Chan of Tathagata."
(Mazu Daoyi, X1321p3b23; tr. Jinhua Jia)

“Don’t rashly seek the true Buddha;
True Buddha can’t be found.
Does marvelous nature and spirit
Need tempering or refinement?
Mind is this mind carefree;
This face, the face at birth."

(Nanyue Mingzan: Enjoying the Way, tr. Jeff Shore; T2076p461b24-26)
User avatar
Astus
Former staff member
 
Posts: 4203
Joined: Tue Feb 23, 2010 11:22 pm
Location: Budapest

Re: Artificial Intelligence & Sentience

Postby Jesse » Sat May 11, 2013 12:30 am

To be able to observe and reflect you require consciousness


Consciousness is the ability to observe and reflect. How could they be separate?

I don't equate the mind with the brain.


Well, you can't deny they are connected; if you suffer brain damage, your mind will suffer as a consequence.

It would still have to follow strict deterministic rules. Intelligence, on the other hand, has no rules, it is not bound by any thought, feeling or sensory impression.


Intelligence most certainly has rules; it is bound by the knowledge it possesses. Without learning, we are trapped in ignorance, and I believe that because we can enable machines to learn and self-reflect, by allowing them to change their own programming, they most certainly can become sentient. Though perhaps we are both looking from different, irreconcilable perspectives. Maybe we should just agree to disagree. :tongue:
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby Wayfarer » Sat May 11, 2013 12:39 am

Jesse wrote:
jeeprs wrote:
Jesse wrote:The computers and AI of today are limited, and not very intelligent. This will change though.


The challenge to AI is simply that computers are not intelligent at all, except in an allegorical sense. They don't comprehend; they process information. They are built to replicate certain functions of intelligence, but they are not intelligent, as they are not beings, but devices. And there's a categorical difference between beings and devices, or organisms and mechanisms. This is the basis of the critiques of AI by philosophers like Searle, Dreyfus, Penrose, Talbott and others.


Comprehension is simply data that we have conceptualized and stored in our memories. We associate things, places, events, concepts, and ideas, and then build a network between the data via our neurons. It's the ultimate functionality that matters, not the pieces that comprise it. If you looked at all the pieces that make up the human body, you could easily say the same things about them. How does a mass of meat, water and electricity allow for sentience?

It's possible we have some strangely high regard for consciousness, when it's really not that impressive. Just a series of conditions that allow for it to manifest.


You're not correct. Comprehension is not data. Data is comprehended and interpreted by an intelligence. That intelligence does not reside in the data. A mass of matter does not allow for sentience. Consciousness, in the broadest sense, has had a causal role in the origin of sentience. This is not comprehended by evolutionary materialism. But you can't equate intelligence with any material operation, because intelligence is not something that can be fully objectified. You can objectify some facets of it but ultimately intelligence is an attribute of a subject, and the subject is never disclosed by objective analysis.

I can see you have been advocating a materialist view since you started posting here. I am not sure why you want to present this on a Buddhist forum. You're aware of Buddhism's view of materialism, aren't you? What draws you here, if you don't mind me asking?

Re: Artificial Intelligence & Sentience

Postby undefineable » Sat May 11, 2013 12:52 am

Jesse wrote:Comprehension is simply data that we have conceptualized and stored in our memories. We associate things, places, events, concepts, and ideas, and then build a network between the data via our neurons.

The network between the data is neither simply a 'network', nor is it other data - The conceptualisation involved in associating memorised data in the mind of a sentient being is qualitatively and quantitatively different to the corresponding process in computing. See my description of intuition for an example that demonstrates this; Astus has also clarified the difference between living and artificial systems:
Astus wrote:To be able to observe and reflect you require consciousness, the very ability of knowing. But that knowing, that consciousness of the mind is open, not fixed. So it is not possible to build from fixed structures an unbound awareness. That is, mind is not simply a causality bound process, while a machine necessarily is.

People here are more likely to respond based on the findings of lived experience rather than on belief in theories they've read.
Jesse wrote:It's possible we have some strangely high regard for consciousness, when it's really not that impressive.

I understand this is something of a cliche in anglophone philosophy, and although consciousness is indeed pretty unimpressive most of the time, what else is there to any of us that can't be torn away as if it had never had anything to do with us? Permanently tear away consciousness and you're left with what exactly? _ _

One wonders if consciousness evolved (and might even evolve in AI) once a 'critical mass' is reached in a system's ability to balance each aspect of its current situation with all kinds of internal factors -including past experience- in order to determine its response. If it just reacted predictably to simple chemical or electrical inputs, it wouldn't need a mind, would it?
"Removing the barrier between this and that is the only solution" {Chogyam Trungpa - "The Lion's Roar"}
undefineable
 
Posts: 500
Joined: Fri Feb 03, 2012 1:34 am

Re: Artificial Intelligence & Sentience

Postby Jesse » Sat May 11, 2013 12:59 am

Data is comprehended and interpreted by an intelligence.


There is no intelligence without data. A baby learns simple rules by observation: "baba" means bottle, I can drink it, it makes me not hungry. Before that, it simply cries. Sure, there are instincts, which are probably passed along through DNA.

But you can't equate intelligence with any material operation, because intelligence is not something that can be fully objectified.


I'm sorry, but just because something isn't fully understood doesn't mean it can't be objectified.

I can see you have been advocating a materialist view since you started posting here. I am not sure why you want to present this on a Buddhist forum. You're aware of Buddhism's view of materialism, aren't you? What draws you here, if you don't mind me asking?


It's interesting that I've been accosted with this sort of sentiment a few times since changing my name, with suggestions that I take my leave from the forums and so on. I am a Buddhist, but I have my own opinions, which I base on fact and logic. I somehow suspect that the Buddha did not have an opinion on artificial intelligence, nor on many of the philosophical questions it raises. It seems people are rather afraid of breaking from traditional beliefs, even when they are absurd. I'm sorry my beliefs offend you.


P.S: viewtopic.php?f=10&t=12690
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby Jesse » Sat May 11, 2013 1:05 am

undefineable wrote:
Jesse wrote:Comprehension is simply data that we have conceptualized and stored in our memories. We associate things, places, events, concepts, and ideas, and then build a network between the data via our neurons.

The network between the data is neither simply a 'network', nor is it other data - The conceptualisation involved in associating memorised data in the mind of a sentient being is qualitatively and quantitatively different to the corresponding process in computing. See my description of intuition for an example that demonstrates this; Astus has also clarified the difference between living and artificial systems:
Astus wrote:To be able to observe and reflect you require consciousness, the very ability of knowing. But that knowing, that consciousness of the mind is open, not fixed. So it is not possible to build from fixed structures an unbound awareness. That is, mind is not simply a causality bound process, while a machine necessarily is.

People here are more likely to respond based on the findings of lived experience rather than on belief in theories they've read.
Jesse wrote:It's possible we have some strangely high regard for consciousness, when it's really not that impressive.

I understand this is something of a cliche in anglophone philosophy, and although consciousness is indeed pretty unimpressive most of the time, what else is there to any of us that can't be torn away as if it had never had anything to do with us? Permanently tear away consciousness and you're left with what exactly? _ _

One wonders if consciousness evolved (and might even evolve in AI) once a 'critical mass' is reached in a system's ability to balance each aspect of its current situation with all kinds of internal factors -including past experience- in order to determine its response. If it just reacted predictably to simple chemical or electrical inputs, it wouldn't need a mind, would it?


As for your intuition claim:
http://en.wikipedia.org/wiki/Fuzzy_logic
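
Since that article is fairly abstract, here is the idea in miniature (made-up thresholds): a fuzzy rule holds to a degree between 0 and 1 instead of being simply true or false.

Code: Select all
# Fuzzy logic in miniature: membership is a degree between 0 and 1,
# so a rule can fire partially rather than flipping between true and false.
def cold(temp_c):
    # fully cold at 0 C or below, not cold at all above 20 C, graded in between
    return max(0.0, min(1.0, (20.0 - temp_c) / 20.0))

def heater_power(temp_c):
    # rule: IF cold THEN heat; the output inherits the degree of the premise
    return cold(temp_c)

for t in (0, 10, 18, 25):
    print(t, "C -> heater at", round(heater_power(t) * 100), "%")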


People here are more likely to respond based on the findings of lived experience rather than on belief in theories they've read.


Interesting, because I'm seeing the exact opposite behavior. Your claim that my opinions are based solely on theories I've read is interesting; care to explain how you know this?
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: Artificial Intelligence & Sentience

Postby undefineable » Sat May 11, 2013 1:12 am

Jesse wrote:
It would still have to follow strict deterministic rules. Intelligence, on the other hand, has no rules, it is not bound by any thought, feeling or sensory impression.


Intelligence most certainly has rules; it is bound by the knowledge it possesses. Without learning, we are trapped in ignorance, and I believe that because we can enable machines to learn and self-reflect, by allowing them to change their own programming, they most certainly can become sentient. Though perhaps we are both looking from different, irreconcilable perspectives. Maybe we should just agree to disagree. :tongue:

It looks as if we're talking about two different things, rather than from two different perspectives. You're talking about learning that, say, x=2y - 'IQ' really; others here are talking about why 'x=2y' makes sense to us, and about how we view that sense. A program that allows other programs to change doesn't (in itself) have the full sense of 'self-reflect', and -for that reason- fails to prove sentience. Yet again, the term 'comprehension' also implies more than the rote knowledge you appear to be discussing - to the exclusion of the higher-level mental processes.
"Removing the barrier between this and that is the only solution" {Chogyam Trungpa - "The Lion's Roar"}
undefineable
 
Posts: 500
Joined: Fri Feb 03, 2012 1:34 am

Re: Artificial Intelligence & Sentience

Postby Jesse » Sat May 11, 2013 1:22 am

It looks as if we're talking about two different things, rather than from two different perspectives. You're talking about learning that, say, x=2y - 'IQ' really; others here are talking about why 'x=2y' makes sense to us, and about how we view that sense


No, I understand the topic. I believe the "why and how we understand" is based on rules and logic, rather than intelligence being some supernatural phenomenon.
"We know nothing at all. All our knowledge is but the knowledge of schoolchildren. The real nature of things we shall never know." - Albert Einstein
User avatar
Jesse
 
Posts: 825
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA
