"Negative phenomenology" and the "second explosion of suffering"

Casual conversation between friends. Anything goes (almost).
PadmaVonSamba
Posts: 9438
Joined: Sat May 14, 2011 1:41 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by PadmaVonSamba »

FiveSkandhas wrote: Sun Jun 06, 2021 5:13 am Here is a thought experiment.
Humans are able to take on prosthetic devices like artificial limbs. I am sure none of us would argue that a human would be less of a human if he/she had a prosthetic arm.

Now, neural prosthetics have existed for some time. The most widespread example is the cochlear implant, which uses a microphone and a unit that electrically stimulates the auditory nerve. Other neural prosthetics work with muscles to translate physical responses into electrical impulses.

I would hope we can agree that neural prosthetics do not make anyone less human.

Now suppose we begin replacing every single neuron in the brain one-by-one with a prosthetic neuron. Of course in reality this is impractical because of the number of neurons and the size of the prosthetic neurons, but we are in a thought experiment.

Eventually the human has no organic tissue left but still has a perfectly functioning human brain. Is not such a being still human and still sentient? If one argues that she is no longer human, at what point exactly did she cease to be human? If one accepts she is still human, then consciousness with non-organic matter is obviously a given.
This gets back to basic Dharma teaching:
Where is “me” located in the body?
If I lose my legs, if I lose half of my body, is half of ‘me’ gone, or am I still fully ‘me’ but without legs?

...replacing an original body part with an artificial one isn’t even necessary to delve into this matter.

In fact, since every cell in your body has died and been replaced every seven years, the same question applies. ‘Real vs artificial’ doesn’t even need to be considered. Replacing one organic cell with another one will do. Where is that “me” which is experienced as a constant aware entity?

I think that the answer is always: awareness arises as a function along with, or you might say suitable to, the conditions of that body. Awareness is not produced by that body, but it assumes an expression of that body. If awareness arises with a dog body, for example, there will be a lot more “nose consciousness” experienced than there would be if it arises with a human body.

As an analogy, air is air, breath is breath, but the same air blown through a tuba, a flute, and a harmonica will have a different sound because what the air interacts with has different characteristics in each instrument. Likewise, awareness or consciousness will appear as an expression of the conditions in which it arises.

But since this hypothetical scenario has been suggested, and let’s say every part of a person’s body is replaced by something artificial, then the question becomes whether or not that awareness will arise as an expression of artificially produced conditions (at that point one has to ask what ‘artificial’ even means).

This harkens back to what is mutually considered by both Buddhist theory and Advaita Vedanta, which is that whatever can be witnessed or experienced is an object of awareness and hence is not, itself, awareness (the A.V. argues that this awareness constitutes a continuous ‘self’ or atman, and the Buddhist disputes this conclusion).

Thus whatever occurs as functional awareness with regard to an artificial body isn’t that body itself. If it is produced by that body, as with AI, it only simulates sentience. If the sense of a permanent “me” (both instinctive and mistaken, according to Buddhist theory) still arises functionally along with a body whose original parts have been replaced one by one with artificial parts, then ultimately a genuine consciousness would be replaced by an artificial one. So, if you replaced a human body bit by bit with a dog body, at the point of 100% replacement, do you have a human experience in an artificial dog body, or a dog experience in a dog body?
The answer is, you don’t have either one, because the physical body isn’t what causes sentience. Furthermore, the notion of ‘human’ itself, being based in that body, is an imputed category, an invented concept.
EMPTIFUL.
An inward outlook produces outward insight.
FiveSkandhas
Posts: 917
Joined: Sat Jun 29, 2019 6:40 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by FiveSkandhas »

PadmaVonSamba wrote: Sun Jun 06, 2021 9:47 am
FiveSkandhas wrote: Sun Jun 06, 2021 5:13 am Here is a thought experiment.
Humans are able to take on prosthetic devices like artificial limbs. I am sure none of us would argue that a human would be less of a human if he/she had a prosthetic arm.

Now, neural prosthetics have existed for some time. The most widespread example is the cochlear implant, which uses a microphone and a unit that electrically stimulates the auditory nerve. Other neural prosthetics work with muscles to translate physical responses into electrical impulses.

I would hope we can agree that neural prosthetics do not make anyone less human.

Now suppose we begin replacing every single neuron in the brain one-by-one with a prosthetic neuron. Of course in reality this is impractical because of the number of neurons and the size of the prosthetic neurons, but we are in a thought experiment.

Eventually the human has no organic tissue left but still has a perfectly functioning human brain. Is not such a being still human and still sentient? If one argues that she is no longer human, at what point exactly did she cease to be human? If one accepts she is still human, then consciousness with non-organic matter is obviously a given.
This gets back to basic Dharma teaching:
Where is “me” located in the body?
If I lose my legs, if I lose half of my body, is half of ‘me’ gone, or am I still fully ‘me’ but without legs?

...replacing an original body part with an artificial one isn’t even necessary to delve into this matter.

In fact, since every cell in your body has died and been replaced every seven years, the same question applies. ‘Real vs artificial’ doesn’t even need to be considered. Replacing one organic cell with another one will do. Where is that “me” which is experienced as a constant aware entity?

I think that the answer is always: awareness arises as a function along with, or you might say suitable to, the conditions of that body. Awareness is not produced by that body, but it assumes an expression of that body. If awareness arises with a dog body, for example, there will be a lot more “nose consciousness” experienced than there would be if it arises with a human body.

As an analogy, air is air, breath is breath, but the same air blown through a tuba, a flute, and a harmonica will have a different sound because what the air interacts with has different characteristics in each instrument. Likewise, awareness or consciousness will appear as an expression of the conditions in which it arises.

But since this hypothetical scenario has been suggested, and let’s say every part of a person’s body is replaced by something artificial, then the question becomes whether or not that awareness will arise as an expression of artificially produced conditions (at that point one has to ask what ‘artificial’ even means).

This harkens back to what is mutually considered by both Buddhist theory and Advaita Vedanta, which is that whatever can be witnessed or experienced is an object of awareness and hence is not, itself, awareness (the A.V. argues that this awareness constitutes a continuous ‘self’ or atman, and the Buddhist disputes this conclusion).

Thus whatever occurs as functional awareness with regard to an artificial body isn’t that body itself. If it is produced by that body, as with AI, it only simulates sentience. If the sense of a permanent “me” (both instinctive and mistaken, according to Buddhist theory) still arises functionally along with a body whose original parts have been replaced one by one with artificial parts, then ultimately a genuine consciousness would be replaced by an artificial one. So, if you replaced a human body bit by bit with a dog body, at the point of 100% replacement, do you have a human experience in an artificial dog body, or a dog experience in a dog body?
The answer is, you don’t have either one, because the physical body isn’t what causes sentience. The notion of ‘human’ itself is an imputed category, an invented concept.
If we are to fall back on No Self (and by all means please do; I'm right there with you, dog-eared copy of the Diamond Sutra in hand...), we agree "The notion of ‘human’ itself, is an imputed category, an invented concept" as you succinctly put it.

Then if we agree in this, why did you argue vociferously earlier in the thread (if I understood you correctly) that humans possess something machines can never have? Why is it impossible to conceive of a machine, equally empty of independent self-origination, as no more or less sentient than a human?
"One should cultivate contemplation in one’s foibles. The foibles are like fish, and contemplation is like fishing hooks. If there are no fish, then the fishing hooks have no use. The bigger the fish is, the better the result we will get. As long as the fishing hooks keep at it, all foibles will eventually be contained and controlled at will." -Zhiyi

"Just be kind." -Atisha
FiveSkandhas
Posts: 917
Joined: Sat Jun 29, 2019 6:40 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by FiveSkandhas »

Malcolm wrote: Sat Jun 05, 2021 10:52 pm
How can you define machine generated code as "self-organized?" The rules it follows are predetermined by a human...Code will never be free of the fact that we wrote the initial algorithms...

This is not the case in natural selection for example. Natural selection is self-organized in toto. There is no creator who set the ball rolling...
Not necessarily applicable. First of all, neural network computing is not linearly "coded". Rather, massive amounts of information are fed into the input, and connection strengths are adjusted until the desired output is achieved. The process is more akin to "training" than old-fashioned "coding." It's closer to the way a parent trains a child.
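The "connection strengths are adjusted until the desired output is achieved" process can be sketched in a few lines. This is my own toy illustration (a single artificial neuron learning the OR function), not code from any real AI system; the point is only that nobody writes a rule for OR, the weights are nudged until the desired outputs appear:

```python
# Toy illustration of "training" vs. "coding": no rule for OR is ever written;
# connection strengths are repeatedly nudged toward the desired output.

def train_or_gate(epochs=50, lr=0.2):
    w1, w2, bias = 0.0, 0.0, 0.0  # connection strengths start blank
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            err = target - out        # distance from the desired output
            w1 += lr * err * x1       # adjust each connection strength
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

w1, w2, b = train_or_gate()
outputs = [1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
           for x1 in (0, 1) for x2 in (0, 1)]
print(outputs)  # the learned weights now reproduce OR: [0, 1, 1, 1]
```

Real networks do this with millions of weights and gradient descent rather than this simple rule, but the character of the process is the same: feedback-driven adjustment, not hand-written instructions.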

Moreover, AI can already autonomously create new AI with no human input.

But even leaving aside these facts, the deeper threat is that at some point in the future, self-awareness could spontaneously emerge from a sufficiently connectivity-rich, information-dense environment. This is a possibility that cannot be denied.

And such an emergent phenomenon would certainly be subject to natural selection.
"One should cultivate contemplation in one’s foibles. The foibles are like fish, and contemplation is like fishing hooks. If there are no fish, then the fishing hooks have no use. The bigger the fish is, the better the result we will get. As long as the fishing hooks keep at it, all foibles will eventually be contained and controlled at will." -Zhiyi

"Just be kind." -Atisha
Danny
Posts: 1043
Joined: Tue Mar 31, 2020 12:43 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Danny »

Quantum computers, I believe, are one avenue not being mentioned here. A possibly more fruitful approach is to think of consciousness as an emergent process of quantum mechanics; look into the study of microtubules and their possible connection to consciousness. It’s a big topic, but quantum physics, chemistry, and biology are not unrelated.
Symmetry, wave function collapse, and degrees of freedom. Sir Roger Penrose’s The Emperor’s New Mind would be a good start.
If that sort of thing tickles your Elmo.

https://en.m.wikipedia.org/wiki/Orchest ... _reduction
Malcolm
Posts: 42974
Joined: Thu Nov 11, 2010 2:19 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Malcolm »

FiveSkandhas wrote: Sun Jun 06, 2021 10:16 am
Malcolm wrote: Sat Jun 05, 2021 10:52 pm
How can you define machine generated code as "self-organized?" The rules it follows are predetermined by a human...Code will never be free of the fact that we wrote the initial algorithms...

This is not the case in natural selection for example. Natural selection is self-organized in toto. There is no creator who set the ball rolling...
Not necessarily applicable. First of all, neural network computing is not linearly "coded". Rather, massive amounts of information are fed into the input, and connection strengths are adjusted until the desired output is achieved. The process is more akin to "training" than old-fashioned "coding." It's closer to the way a parent trains a child.

Moreover, AI can already autonomously create new AI with no human input.

But even leaving aside these facts, the deeper threat is that at some point in the future, self-awareness could spontaneously emerge from a sufficiently connectivity-rich, information-dense environment. This is a possibility that cannot be denied.

And such an emergent phenomenon would certainly be subject to natural selection.
Have it your way.
Danny
Posts: 1043
Joined: Tue Mar 31, 2020 12:43 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Danny »

FiveSkandhas wrote: Sun Jun 06, 2021 10:16 am

Moreover, AI can already autonomously create new AI with no human input.
The self-licking ice cream cone model?
Sādhaka
Posts: 1277
Joined: Sat Jan 16, 2016 4:39 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Sādhaka »

The way I see it, A.I., GMOs/synthetic pesticides, pharma drugs, fiat money & usury, corporations, etc. all = playing with fire.

Nature has been fine without these things since forever; and the Siddhas and their students never had any use for them.

And while we don’t need computers in general either, they are pretty handy admittedly.
Last edited by Sādhaka on Sun Jun 06, 2021 1:22 pm, edited 1 time in total.
FiveSkandhas
Posts: 917
Joined: Sat Jun 29, 2019 6:40 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by FiveSkandhas »

Danny wrote: Sun Jun 06, 2021 1:12 pm
FiveSkandhas wrote: Sun Jun 06, 2021 10:16 am

Moreover, AI can already autonomously create new AI with no human input.
The self-licking ice cream cone model?
images (39).jpeg
"One should cultivate contemplation in one’s foibles. The foibles are like fish, and contemplation is like fishing hooks. If there are no fish, then the fishing hooks have no use. The bigger the fish is, the better the result we will get. As long as the fishing hooks keep at it, all foibles will eventually be contained and controlled at will." -Zhiyi

"Just be kind." -Atisha
Malcolm
Posts: 42974
Joined: Thu Nov 11, 2010 2:19 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Malcolm »

FiveSkandhas wrote: Sun Jun 06, 2021 10:16 am
Malcolm wrote: Sat Jun 05, 2021 10:52 pm
How can you define machine generated code as "self-organized?" The rules it follows are predetermined by a human...Code will never be free of the fact that we wrote the initial algorithms...

This is not the case in natural selection for example. Natural selection is self-organized in toto. There is no creator who set the ball rolling...
Not necessarily applicable. First of all, neural network computing is not linearly "coded". Rather, massive amounts of information are fed into the input, and connection strengths are adjusted until the desired output is achieved. The process is more akin to "training" than old-fashioned "coding." It's closer to the way a parent trains a child.

Moreover, AI can already autonomously create new AI with no human input.

But even leaving aside these facts, the deeper threat is that at some point in the future, self-awareness could spontaneously emerge from a sufficiently connectivity-rich, information-dense environment. This is a possibility that cannot be denied.

And such an emergent phenomenon would certainly be subject to natural selection.
Another problem with your hypothesis is that the Buddha defined the number of sentient beings to be finite: uncountable, but finite. The sattvadhātu can neither increase nor decrease, which rules out the emergence of a new sentient being. But that all depends on whether you take the Buddha's word for it.
Danny
Posts: 1043
Joined: Tue Mar 31, 2020 12:43 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Danny »

FiveSkandhas wrote: Sun Jun 06, 2021 1:42 pm
Danny wrote: Sun Jun 06, 2021 1:12 pm
FiveSkandhas wrote: Sun Jun 06, 2021 10:16 am

Moreover, AI can already autonomously create new AI with no human input.
The self-licking ice cream cone model?
images (39).jpeg
:lol:

How did it chisel its own hands?
PadmaVonSamba
Posts: 9438
Joined: Sat May 14, 2011 1:41 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by PadmaVonSamba »

FiveSkandhas wrote: Sun Jun 06, 2021 10:03 am ...why did you argue vociferously earlier in the thread (if I understood you correctly) that humans possess something machines can never have?
I think that was Malcolm’s argument, actually.
However, in terms of Buddhist reckoning, humans experience (“possess”) accumulations of past karma. I don’t know if that counts for much in this discussion, but I’m throwing it in here anyway. AI does calculate things based on previous results and by probability projections based on known data. One could call that “robot karma” but then we start just redefining everything in terms of artificial context, which the term “AI” already does.
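The "probability projections based on known data" point can be sketched with a toy example of my own (hypothetical, not drawn from any actual AI system): past results are tallied, and the most frequent outcome is projected forward.

```python
# A toy sketch of "calculating based on previous results": the machine's only
# basis for its next move is the frequency of past outcomes, mechanically tallied.
from collections import Counter

def project_next(history):
    """Return the outcome with the highest observed frequency, and its probability."""
    counts = Counter(history)
    outcome, freq = counts.most_common(1)[0]  # most frequent past outcome
    return outcome, freq / len(history)

past_results = ["rain", "sun", "rain", "rain", "sun"]
outcome, prob = project_next(past_results)
print(outcome, prob)  # rain 0.6
```

Note that, as with the "robot karma" analogy above, nothing here experiences anything; it is pure bookkeeping over prior results.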
Why is it impossible to conceive of a machine, equally empty of independent self-origination, as no more or less sentient than a human?
The two, ‘empty of self origination’ and ‘sentient’ don’t have anything to do with each other. (Most things are composites and are not sentient; if a self-arising being existed, it could be sentient).

My argument is that in order for an AI entity (robot, computer, etc) to experience suffering, that suffering would need to have a cause, which according to Buddhism is ignorance, or not seeing things as they truly are. An AI entity would either need to be programmed as such or would have to program itself that way.

But the bottom line is that sentience isn’t a byproduct of material existence. Seeing isn’t caused by eyes and hearing isn’t caused by ears.
A fundamental awareness connects to things in a very limited way through those organs.

Building a robot with the ability to see and hear, and to process visual and auditory stimuli in a way that enables it to select the best options as a response, doesn’t equate to the robot “experiencing” in the same way as a sentient being does, because the robot doesn’t possess a fundamental awareness (the true nature of which is unobscured).

If you want to ask, “why doesn’t it equate?”
Go ahead,
but I don’t know how to answer that!
EMPTIFUL.
An inward outlook produces outward insight.
Malcolm
Posts: 42974
Joined: Thu Nov 11, 2010 2:19 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Malcolm »

PadmaVonSamba wrote: Sun Jun 06, 2021 2:11 pm But the bottom line is that sentience isn’t a byproduct of material existence.
Yes, there is also the fact that consciousness by definition is NOT an emergent property of matter.
FiveSkandhas
Posts: 917
Joined: Sat Jun 29, 2019 6:40 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by FiveSkandhas »

Malcolm wrote: Sun Jun 06, 2021 1:48 pm Another problem with your hypothesis is that the Buddha defined the number of sentient beings to be finite: uncountable, but finite. The sattvadhātu can neither increase nor decrease, which rules out the emergence of a new sentient being. But that all depends on whether you take the Buddha's word for it.
Of course I take the Buddha's word for it.

But the emergence of a new form of life could easily be offset by the disappearance of an equal number of beings elsewhere. Species go extinct and come into being all the time.

However you do have a strong point in that if the number of sentient beings is finite in total, the idea of mass scale new suffering goes out the window. Unless, of course, the new machines suffer more intensely than the beings they replace due to having larger and more complex minds.
"One should cultivate contemplation in one’s foibles. The foibles are like fish, and contemplation is like fishing hooks. If there are no fish, then the fishing hooks have no use. The bigger the fish is, the better the result we will get. As long as the fishing hooks keep at it, all foibles will eventually be contained and controlled at will." -Zhiyi

"Just be kind." -Atisha
Malcolm
Posts: 42974
Joined: Thu Nov 11, 2010 2:19 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Malcolm »

FiveSkandhas wrote: Sun Jun 06, 2021 3:13 pm

But the emergence of a new form of life could easily be offset by the disappearance of an equal number of beings elsewhere. Species go extinct and come into being all the time.
Huh? Mind streams never go extinct.

Have you considered the idea that you may have some unresolved attachment to physicalist views on the emergence of consciousness?

This is why I keep insisting on two factors: one, consciousness cannot be created, and sentient beings are self-organized phenomena in toto; two, in order for a machine to be sentient, it would have to be a possible birth locus, a place that a being in the intermediate state could attempt to appropriate as a new series of aggregates.

AI does not satisfy criterion one, because machines are wholly fabricated devices and do not exhibit self-organizing behavior. They can emulate that kind of behavior, but it is not true autopoiesis. They do not satisfy criterion two, because there is simply no evidence that consciousness can appropriate a machine as a place of rebirth. Even if a consciousness could appropriate a machine as a place of rebirth, this still would not render machines —artificially— intelligent, and that consciousness would NOT be an emergent property at all.

Indeed, the only way I can personally imagine an intelligent machine is that someone might experience this as a personal hell, similar to stories where people are reborn in pillars, brooms, and so on.
PadmaVonSamba
Posts: 9438
Joined: Sat May 14, 2011 1:41 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by PadmaVonSamba »

Malcolm wrote: Sun Jun 06, 2021 3:29 pm In order for a machine to be sentient, it would have to be a possible birth locus, a place that a being in the intermediate state could attempt to appropriate as a new series of aggregates.
in other words, a machine’s consciousness can’t just pop up out of nowhere.
AI does not satisfy criterion one, because machines are wholly fabricated devices and do not exhibit self-organizing behavior. They can emulate that kind of behavior, but it is not true autopoiesis. They do not satisfy criterion two, because there is simply no evidence that consciousness can appropriate a machine as a place of rebirth. Even if a consciousness could appropriate a machine as a place of rebirth, this still would not render machines —artificially— intelligent, and that consciousness would NOT be an emergent property at all.
And I think the issue/problem simply rests in trying to redefine AI as consciousness or, one might say, redefining what consciousness is to include AI, saying “this term applies to that also.” We can all agree or disagree about what ‘sentience’ includes, but that’s merely a pointless differing of opinions. That’s why it is important to pin down exactly what ‘sentience’ refers to. Otherwise it’s just an abstract label.
EMPTIFUL.
An inward outlook produces outward insight.
Jesse
Posts: 2127
Joined: Wed May 08, 2013 6:54 am
Location: Virginia, USA

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Jesse »

Danny wrote: Sun Jun 06, 2021 11:30 am Quantum computers, I believe, are one avenue not being mentioned here. A possibly more fruitful approach is to think of consciousness as an emergent process of quantum mechanics; look into the study of microtubules and their possible connection to consciousness. It’s a big topic, but quantum physics, chemistry, and biology are not unrelated.
Symmetry, wave function collapse, and degrees of freedom. Sir Roger Penrose’s The Emperor’s New Mind would be a good start.
If that sort of thing tickles your Elmo.

https://en.m.wikipedia.org/wiki/Orchest ... _reduction
Thank you for linking this. Sorry I missed it. I have begun researching it, and it's actually pretty amazing.

I suggest that if you're interested in consciousness, you look into it also.
Thus shall ye think of all this fleeting world:
A star at dawn, a bubble in a stream;
A flash of lightning in a summer cloud,
A flickering lamp, a phantom, and a dream.
Vajrasambhava
Posts: 160
Joined: Sat Dec 01, 2018 1:24 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Vajrasambhava »

Malcolm wrote: Sat Jun 05, 2021 3:31 pm
FiveSkandhas wrote: Sat Jun 05, 2021 3:26 pm I support Metzinger's call for a total moratorium on the creation of machines with even the potential to suffer until we truly get a handle on this issue.
It is impossible. Sentience cannot be created. All examples of sentient life we observe arose out of a lengthy evolutionary process of self-organization.
Hi Malcolm,
As you said, sentient life has a process of self-organization, but what about non-sentient life? I.e., does a plant or a tree have a process of self-organization? If so, what "force" can give rise to self-organization in non-sentient life if it doesn't have a consciousness to give rise to a self-organization process?
Thanks a lot
Natan
Posts: 3685
Joined: Fri May 23, 2014 5:48 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Natan »

Malcolm wrote: Sat Jun 05, 2021 9:43 pm
FiveSkandhas wrote: Sat Jun 05, 2021 9:02 pm
Malcolm wrote: Sat Jun 05, 2021 7:12 pm Mind streams cannot be newly created. A sentient machine would have to be the rebirth of a being in the six realms. But I've never heard of the "machine realm" listed among the six.

In order for a machine to suffer, which is a result, it would have to be able to generate negative karma, the cause of suffering. In order to generate negative karma it would have to possess afflictions, the cause of karma.
What if a sentient machine is a member of one of the six realms? Why couldn't it be classified as, say, a kind of Deva or Asura? Or a hungry ghost, hell being, or even an advanced animal for that matter?

What if what science calls the spontaneous emergence of sentience is in fact a form of reincarnation, so no new mind-stream is created?

What makes you so sure it couldn't possess afflictions?
I am afraid that this question can only be answered definitively by someone who has the higher cognitive ability (abhijñā) to know the minds of others. However, Buddha denied sentience in plants very clearly. Thus, just as we deny sentience in plants, etc., the Buddhist position will be that machines cannot be sentient. Suffering requires karma and affliction as causes. Sentience requires self-organized replication and continuation. Machines will never achieve this, since they have never been self-organized entities, but created entities. We sentient beings are not created; our mind streams are beginningless. There has never been a moment in time when our mind streams did not exist. Now, is it possible some unfortunate preta could inhabit a machine? I guess so. A deva or an asura would not bother, because other than the most pure bhikṣus and bhikṣunīs, we humans smell very nauseating to them, like a rotting pit of offal. Possession is not rebirth, though, since there is no gradual development from conception, and in fact, apparitional births like hell beings, bardo beings, pretas, and devas are basically mind-made bodies supported by the air element. Recall, there are four kinds of birth, not a fifth.
Their goal is to make self-replicating machines, and perhaps that's when the mind streams will occur; one never knows, maybe it will be a rebirth of an AI from a past time.
Natan
Posts: 3685
Joined: Fri May 23, 2014 5:48 pm

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by Natan »

PadmaVonSamba wrote: Sun Jun 06, 2021 2:11 pm
FiveSkandhas wrote: Sun Jun 06, 2021 10:03 am ...why did you argue vociferously earlier in the thread (if I understood you correctly) that humans possess something machines can never have?
I think that was Malcolm’s argument, actually.
However, in terms of Buddhist reckoning, humans experience (“possess”) accumulations of past karma. I don’t know if that counts for much in this discussion, but I’m throwing it in here anyway. AI does calculate things based on previous results and by probability projections based on known data. One could call that “robot karma” but then we start just redefining everything in terms of artificial context, which the term “AI” already does.
Why is it impossible to conceive of a machine, equally empty of independent self-origination, as no more or less sentient than a human?
The two, ‘empty of self origination’ and ‘sentient’ don’t have anything to do with each other. (Most things are composites and are not sentient; if a self-arising being existed, it could be sentient).

My argument is that in order for an AI entity (robot, computer, etc) to experience suffering, that suffering would need to have a cause, which according to Buddhism is ignorance, or not seeing things as they truly are. An AI entity would either need to be programmed as such or would have to program itself that way.

But the bottom line is that sentience isn’t a byproduct of material existence. Seeing isn’t caused by eyes and hearing isn’t caused by ears.
A fundamental awareness connects to things in a very limited way through those organs.

Building a robot with the ability to see and hear, and to process visual and auditory stimuli in a way that enables it to select the best options as a response, doesn’t equate to the robot “experiencing” in the same way as a sentient being does, because the robot doesn’t possess a fundamental awareness (the true nature of which is unobscured).

If you want to ask, “why doesn’t it equate?”
Go ahead,
but I don’t know how to answer that!
AI is trained with photos and sounds. It's already object oriented.
PadmaVonSamba
Posts: 9438
Joined: Sat May 14, 2011 1:41 am

Re: "Negative phenomenology" and the "second explosion of suffering"

Post by PadmaVonSamba »

Crazywisdom wrote: Sun Jul 25, 2021 12:52 pm
AI is trained with photos and sounds.
It's already object oriented.
What is the “it’s”?
That’s the issue.
EMPTIFUL.
An inward outlook produces outward insight.