Keeping AIs honest

Casual conversation between friends. Anything goes (almost).
Aryjna
Posts: 1625
Joined: Mon Mar 27, 2017 12:45 pm

Re: Keeping AIs honest

Post by Aryjna »

dharmafootsteps wrote: Sun Mar 19, 2023 1:47 pm [...]
Sure, if you meant that an AI can be a lot more convincing than the average forum user and able to create output based on a much greater number of sources in comparison, while being occasionally/often wrong, similarly to an average person on this forum, then I guess that's true. You kind of mentioned that caveat in your previous post as well, but the last sentence seemed somehow contradictory, which, along with my dislike for ChatGPT, prompted my post.

In addition to the issues above (and this point is unrelated to your post), I think it needs to be said that if this were to be done, and it may already have been done to some extent given the fact(?) that ChatGPT and other AI systems seem to have been trained on all kinds of copyrighted material, it is/would be an egregious case of stealing in general, and of stealing from the sangha etc. in particular.
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Aryjna wrote: Sun Mar 19, 2023 2:05 pm In addition to the issues above (and this point is unrelated to your post), I think it needs to be said that if this were to be done, and it may already have been done to some extent given the fact(?) that ChatGPT and other AI systems seem to have been trained on all kinds of copyrighted material, it is/would be an egregious case of stealing in general, and of stealing from the sangha etc. in particular.
Yes, this seems to be a thorny issue. Personally I don't see the problem with it, but I acknowledge a lot of people do, so it's clearly something legislators are going to have to take a stance on.

From my own perspective I would hold them to a similar standard as a person producing knowledge work, e.g. plagiarism is an issue. So if they mostly worked like a semantic search engine and just regurgitated copyrighted text from elsewhere, that would be a problem. I'm sure the odd instance of that will crop up and have to be dealt with, but they generally don't work by regurgitating text. In fact they are much more like us, in that they learn from examples and can then present their own "understanding". Or, where you do get them to source something from a specific document (as in my Dharma example above), they can be made to cite their sources, like Bing Chat. That's actually desirable, given that the user would want to be able to verify the information.

Can you explain to me why you see a language model learning something from a legally owned source as stealing, but for a person it's not?
Aryjna
Posts: 1625
Joined: Mon Mar 27, 2017 12:45 pm

Re: Keeping AIs honest

Post by Aryjna »

dharmafootsteps wrote: Sun Mar 19, 2023 2:30 pm Yes, this seems to be a thorny issue. Personally I don't see the problem with it, but I acknowledge a lot of people do, so it's clearly something legislators are going to have to take a stance on.

From my own perspective I would hold them to a similar standard as a person producing knowledge work, e.g. plagiarism is an issue. So if they mostly worked like a semantic search engine and just regurgitated copyrighted text from elsewhere, that would be a problem. I'm sure the odd instance of that will crop up and have to be dealt with, but they generally don't work by regurgitating text. In fact they are much more like us, in that they learn from examples and can then present their own "understanding". Or, where you do get them to source something from a specific document (as in my Dharma example above), they can be made to cite their sources, like Bing Chat. That's actually desirable, given that the user would want to be able to verify the information.

Can you explain to me why you see a language model learning something from a legally owned source as stealing, but for a person it's not?
Because it is not a person. It is a system developed by a company, and it will be used to make money. And they use copyrighted material to do it. In addition, they are going against the wishes of the people who actually created the content in the first place, and (where applicable) they will possibly diminish the money the actual creators can make from their own work. It is all around vile in my opinion.

Edit: There is also the issue that they do not credit the actual authors/creators at all. But, even if they did (and I assume they may do that at some point if they are forced to), that would still not change very much unless they also have to pay the creators accordingly.
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Aryjna wrote: Sun Mar 19, 2023 3:58 pm
dharmafootsteps wrote: Sun Mar 19, 2023 2:30 pm Yes, this seems to be a thorny issue. Personally I don't see the problem with it, but I acknowledge a lot of people do, so it's clearly something legislators are going to have to take a stance on.

From my own perspective I would hold them to a similar standard as a person producing knowledge work, e.g. plagiarism is an issue. So if they mostly worked like a semantic search engine and just regurgitated copyrighted text from elsewhere, that would be a problem. I'm sure the odd instance of that will crop up and have to be dealt with, but they generally don't work by regurgitating text. In fact they are much more like us, in that they learn from examples and can then present their own "understanding". Or, where you do get them to source something from a specific document (as in my Dharma example above), they can be made to cite their sources, like Bing Chat. That's actually desirable, given that the user would want to be able to verify the information.

Can you explain to me why you see a language model learning something from a legally owned source as stealing, but for a person it's not?
Because it is not a person. It is a system developed by a company, and it will be used to make money. And they use copyrighted material to do it. In addition, they are going against the wishes of the people who actually created the content in the first place, and (where applicable) they will possibly diminish the money the actual creators can make from their own work. It is all around vile in my opinion.

Edit: There is also the issue that they do not credit the actual authors/creators at all. But, even if they did (and I assume they may do that at some point if they are forced to), that would still not change very much unless they also have to pay the creators accordingly.
I find the different reactions to this really interesting. I'm trying to understand your view but, to be honest, find it difficult, especially the strength of it. I'm not involved in the field, but if I were, I'd find it very strange that people considered what I do "vile"; I really can't see anything ethically untoward about it.

The fact that at some point these models were trained on certain data doesn't seem any different to all the other work they stand on, e.g. the computers they were built with, the offices they were built in, the programming techniques the developers learned from their professors, colleagues, or textbooks, the research articles in the field of machine learning, and so on. To me a book that the model reads is no different from the rest of that, provided it's legally owned; it's just one of the many things that went into building a system. None of those things were specifically designed to make a large language model, but they are all being used for their intended purpose, including the book, which is being read for knowledge. If the data isn't legally owned that's a whole other issue, but that wouldn't be an AI-specific issue, just an illegal use of data issue.

You mention that the reason it's not OK for a computer to learn from copyrighted material is that it's not a person. Purely as a thought experiment: if these models were to become sentient, would you then be OK with them learning from legally owned copyrighted material? The distinction would then be that a sentient being can ethically learn from something they own, but an insentient system cannot. I don't know if this is something you can articulate or if it's more just a personal feeling, but if this is the issue, why would you say it's not OK for a system to learn from something, but it is OK for a sentient being?
Aryjna
Posts: 1625
Joined: Mon Mar 27, 2017 12:45 pm

Re: Keeping AIs honest

Post by Aryjna »

I think discussing the matter from the angle of AI systems as potentially, eventually sentient simply obscures the fact that the company that owns the system takes copyrighted material as input and then outputs material based on that input, which it proceeds to sell without giving credit or payment to the authors. That is all there is to it in the end. They take in copyrighted input; they sell non-copyrighted output based on that input. Copyright laws have people in mind. A person cannot consume 10 million pieces of art in a short period of time and then start cranking out hundreds of pieces of art per hour, putting the actual artists out of work in the process.
dharmafootsteps wrote: Sun Mar 19, 2023 4:41 pm The fact that at some point these models were trained on certain data doesn't seem any different to all the other work they stand on, e.g. the computers they were built with, the offices they were built in, the programming techniques the developers learned from their professors, colleagues, or textbooks, the research articles in the field of machine learning, and so on. To me a book that the model reads is no different from the rest of that, provided it's legally owned; it's just one of the many things that went into building a system. None of those things were specifically designed to make a large language model, but they are all being used for their intended purpose, including the book, which is being read for knowledge. If the data isn't legally owned that's a whole other issue, but that wouldn't be an AI-specific issue, just an illegal use of data issue.

You mention that the reason it's not OK for a computer to learn from copyrighted material is that it's not a person. Purely as a thought experiment: if these models were to become sentient, would you then be OK with them learning from legally owned copyrighted material? The distinction would then be that a sentient being can ethically learn from something they own, but an insentient system cannot. I don't know if this is something you can articulate or if it's more just a personal feeling, but if this is the issue, why would you say it's not OK for a system to learn from something, but it is OK for a sentient being?
Computers are sold for the purpose of developing programs (among other things), textbooks are written to be read by students, buildings are built to be used for writing programs, etc. Material under copyright is not created with the intention that it is consumed by computer systems that can then imitate it for profit. If anything, it is under copyright so that this does not happen. The letter of the law doesn't really spell that out of course, but the reason for that is that this was not possible until recently.
Johnny Dangerous
Global Moderator
Posts: 17071
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Keeping AIs honest

Post by Johnny Dangerous »

dharmafootsteps wrote: Sun Mar 19, 2023 9:31 am
Johnny Dangerous wrote: Sun Mar 19, 2023 2:08 am
justsit wrote: Sun Mar 19, 2023 12:55 am Seems there are other applications of AI being implemented. The Law of Unintended Consequences appears to be coming into play....

https://www.theguardian.com/technology/ ... against-ai
So far these AIs do have a really uncanny ability to produce garbage art. I don’t doubt some good stuff can be created with the right people messing with it.
Perhaps you haven't kept up to date with the rate of progress. Is it time to start hanging AI art in the Louvre? No. But these things are probably already better than the average human artist. That's the trend in many tasks and fields just now: they're not better than experts, but they can perform closer to the top end of the curve than the bottom.
“Better” how? Technically being able to create quality images more quickly, sure. Actually producing art that is inspiring or interesting, not so much.
Meditate upon Bodhicitta when afflicted by disease

Meditate upon Bodhicitta when sad

Meditate upon Bodhicitta when suffering occurs

Meditate upon Bodhicitta when you are scared

-Khunu Lama
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Aryjna wrote: Sun Mar 19, 2023 5:05 pm I think discussing the matter from the angle of AI systems as potentially, eventually sentient simply obscures the fact that the company that owns the system takes copyrighted material as input and then outputs material based on that input, which it proceeds to sell without giving credit or payment to the authors. That is all there is to it in the end. They take in copyrighted input; they sell non-copyrighted output based on that input. Copyright laws have people in mind. A person cannot consume 10 million pieces of art in a short period of time and then start cranking out hundreds of pieces of art per hour, putting the actual artists out of work in the process.
dharmafootsteps wrote: Sun Mar 19, 2023 4:41 pm The fact that at some point these models were trained on certain data doesn't seem any different to all the other work they stand on, e.g. the computers they were built with, the offices they were built in, the programming techniques the developers learned from their professors, colleagues, or textbooks, the research articles in the field of machine learning, and so on. To me a book that the model reads is no different from the rest of that, provided it's legally owned; it's just one of the many things that went into building a system. None of those things were specifically designed to make a large language model, but they are all being used for their intended purpose, including the book, which is being read for knowledge. If the data isn't legally owned that's a whole other issue, but that wouldn't be an AI-specific issue, just an illegal use of data issue.

You mention that the reason it's not OK for a computer to learn from copyrighted material is that it's not a person. Purely as a thought experiment: if these models were to become sentient, would you then be OK with them learning from legally owned copyrighted material? The distinction would then be that a sentient being can ethically learn from something they own, but an insentient system cannot. I don't know if this is something you can articulate or if it's more just a personal feeling, but if this is the issue, why would you say it's not OK for a system to learn from something, but it is OK for a sentient being?
Computers are sold for the purpose of developing programs (among other things), textbooks are written to be read by students, buildings are built to be used for writing programs, etc. Material under copyright is not created with the intention that it is consumed by computer systems that can then imitate it for profit. If anything, it is under copyright so that this does not happen. The letter of the law doesn't really spell that out of course, but the reason for that is that this was not possible until recently.
I would understand the issue if the outputs of these programs were something that would violate copyright law had a human produced them. Reproducing copyrighted material is illegal; learning from it and talking about it isn't. It seems to me that the same distinction should apply here. I'm sure instances of that will happen, as I previously mentioned, and they will need to be dealt with and regulated somehow. But the vast majority of what is produced isn't something that would violate any laws.

If you could say to ChatGPT, "tell me the story of Harry Potter", and it could just read Harry Potter to you, then that would be a massive issue. It can't do that, though; it doesn't have whole books sitting in a database somewhere. What it can do is mostly what a human who has read and remembers the book well can. It can give you a synopsis, tell you who the characters are, have a chat with you about themes, etc.
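The "no database of books" point can be made concrete with a toy sketch. This is nothing like an actual transformer, and the corpus below is made up, but it illustrates the distinction: training keeps only statistics derived from the text (here, word-to-next-word counts), and generation samples from those statistics rather than replaying a stored copy of the source.

```python
from collections import defaultdict, Counter
import random

def train_bigram(text):
    """Store only word-to-next-word transition counts; the text itself is discarded."""
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n=8, seed=0):
    """Walk the transition counts to produce new text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the boy read the book and the boy told a friend about the book"
model = train_bigram(corpus)
# the model now holds only counts such as {'the': {'boy': 2, 'book': 2}},
# not the original sentence
print(generate(model, "the"))
```

An LLM's parameters play a role loosely analogous to these counts, just at vastly larger scale, which is why verbatim reproduction is an exception to be caught and handled rather than the normal mode of operation.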
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Johnny Dangerous wrote: Sun Mar 19, 2023 5:06 pm
dharmafootsteps wrote: Sun Mar 19, 2023 9:31 am
Johnny Dangerous wrote: Sun Mar 19, 2023 2:08 am

So far these AIs do have a really uncanny ability to produce garbage art. I don’t doubt some good stuff can be created with the right people messing with it.
Perhaps you haven't kept up to date with the rate of progress. Is it time to start hanging AI art in the Louvre? No. But these things are probably already better than the average human artist. That's the trend in many tasks and fields just now: they're not better than experts, but they can perform closer to the top end of the curve than the bottom.
“Better” how? Technically being able to create quality images more quickly, sure. Actually producing art that is inspiring or interesting, not so much.
Better in that it can create images of a technical quality that would take quite a bit of skill and training to improve upon.

As far as the artistic merits, I mentioned in another post that I don't think that's a useful metric just now. It's not being trained for that. If you wanted to produce images that appeal to your particular artistic taste you'd just train a model on what you like, which would make it much more limited than the general purpose models, but also more appealing to you.
Aryjna
Posts: 1625
Joined: Mon Mar 27, 2017 12:45 pm

Re: Keeping AIs honest

Post by Aryjna »

dharmafootsteps wrote: Sun Mar 19, 2023 5:36 pm I would understand the issue if the outputs of these programs were something that would violate copyright law had a human produced them. Reproducing copyrighted material is illegal; learning from it and talking about it isn't. It seems to me that the same distinction should apply here. I'm sure instances of that will happen, as I previously mentioned, and they will need to be dealt with and regulated somehow. But the vast majority of what is produced isn't something that would violate any laws.

If you could say to ChatGPT, "tell me the story of Harry Potter", and it could just read Harry Potter to you, then that would be a massive issue. It can't do that, though; it doesn't have whole books sitting in a database somewhere. What it can do is mostly what a human who has read and remembers the book well can. It can give you a synopsis, tell you who the characters are, have a chat with you about themes, etc.
I guess it can be a matter of perspective. To me it seems a very clear violation of the creators' rights, especially given that there is money involved and it is done against their will, possibly harming them in the process. In any case, I think that laws must be written/updated to address the situation one way or the other as soon as possible.
MagnetSoulSP
Posts: 269
Joined: Fri Jun 30, 2023 1:45 am

Re: Keeping AIs honest

Post by MagnetSoulSP »

Johnny Dangerous wrote: Sun Mar 19, 2023 2:04 am
Ardha wrote: Thu Mar 16, 2023 4:45 am
No, we can turn electrical signals in brains on and off, mess with neurotransmitters, map function of different areas, etc., then see reaction in organisms which we can record. That is not the same thing, it’s pure speculation that this causes a total cessation of the subjective experience of consciousness, or that consciousness is reducible to physical function only, and is non-falsifiable at any rate. Materialists tend to believe they simply don’t need to account for the fact that consciousness and qualia are non-falsifiable, but they are just avoiding the issue.
Can and have. It sounds more like you're scared that it might be the case. Again, so far the evidence points to consciousness being emergent from the brain. Unless you're gonna appeal to consciousness as some uncaused and independent force like a "soul" or something (which would violate dependent arising), I'm not sure you're in any position to really dog on materialism that much. It seems to have a better track record than anything else so far. Your reaction isn't that different from that of Christians or other spiritual types any time science seems to threaten this aspect of being human. For some reason consciousness seems to be the last bastion of mysticism.
They’ve hosted and collaborated with all kinds of respected names in neuroscience, etc. So, here you just really don’t know what you are talking about.
Pretty sure I do since organizations like that tend to be highly prone to bias. Mixing religion and science tends to do that.
DID is real, two people in one body, not so much.
Then you don't know much about psychology. It is, effectively, two people in one body. Same with split brain patients.
Look up the Hard Problem of consciousness, it’s probably something you should know about if you want to run around claiming expertise here.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013033/
I'm aware of the hard problem, but your link is 7 years old; that's more or less a fossil. Not to mention it doesn't really support its own claims with that alternative framework. Though, looking through it, it still supports the physicalist or materialist view of consciousness, and some parts of it raise red flags, since it pretty much admits the alternative framework doesn't want to conclude it's physical, which is iffy. Again, this just sounds like fear to me. Every time science seems to hint at a material source of consciousness, it seems to send spiritual types into a panic. I guess that's why AI is so threatening. Personally I find it exciting, as it challenges our notions of our specialness.
I don’t think you understand Buddhism well enough to even know what would “nullify” it.
I know enough to know their conclusion would invalidate you. Not only would it call into question the existence of other sentient beings (since there would be no objective reality if it were true), but it would also undermine the idea of an independently existing "mind" or some such as the source of all this. It really would lead to solipsism.
“Someone told me”, sounds convincing.
I've talked to folks in the field, and they pretty much say everyone is running rampant with interpretations, when all they really are is best guesses at what the math means. But because they're weird, people run off and try to use them for their own ends, like you and that article you cite. They also said any mention of QM without math isn't worth reading. The whole practice is heavy, high-level math that I KNOW you don't know, and I don't either. But that won't stop people from dragging consciousness and other stuff into this.

The Wigner's friend experiment I explained is not only highly contentious and debated, far from settled (like a lot in QM), but it also only applies at the quantum level. People forget that last part. And you can't really support the claim that consciousness impacts quantum states without throwing out Buddhism with it.

I would recommend trying to find people who do this for their work and talking to them, but you'll likely get as far as I did: they said they would essentially have to teach me QM to explain it. This is why good science communication is important; otherwise people run off with things they don't understand. Just like the Alpha Wolf study.
So you won’t cite your “friend” and are still on a Buddhism site after two years basically…why again? Why are you here asking these questions if you aren’t interested in a Buddhist perspective?

There hasn’t been some new discovery in the past 7 years that would upend the Hard Problem, don’t troll me.
Because the Buddhist perspective, to be blunt, is wrong here. It's making the same mistakes most science articles do when they talk about quantum physics: taking the interpretations as facts about the world when they're not even close to that. That one article you cited about the Wigner's friend experiment only applies at the quantum level, yet they make sweeping claims about "no objective reality". It's why this stuff needs to stay with the experts who do the math.

Any Buddhist, Christian, etc. perspective is null and void because it's based on an interpretation and not the math. There are different interpretations of the math, and people pretty much pick the one that supports their view. But all they are is a best guess at the math, and that's being generous.

My "friend" was over 5 years ago when I started looking up this stuff, I'm not in contact with them.
Kim O'Hara
Former staff member
Posts: 7047
Joined: Fri Nov 16, 2012 1:09 am
Location: North Queensland, Australia

Re: Keeping AIs honest

Post by Kim O'Hara »

For a long, knowledgeable and insightful overview of AI in society, read https://www.theguardian.com/technology/ ... -e-chatgpt, The stupidity of AI.

:namaste:
Kim
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Kim O'Hara wrote: Mon Mar 20, 2023 12:18 pm For a long, knowledgeable and insightful overview of AI in society, read https://www.theguardian.com/technology/ ... -e-chatgpt, The stupidity of AI.

:namaste:
Kim
This article was recently commented on by a notable machine learning researcher, one of the foremost in the field, as being particularly poor.

Yann LeCun himself is someone I would suggest is worth paying attention to on these topics: he has as deep an understanding as anyone, but is also very measured and not at all sensationalist.
Johnny Dangerous
Global Moderator
Posts: 17071
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Keeping AIs honest

Post by Johnny Dangerous »

dharmafootsteps wrote: Sun Mar 19, 2023 5:41 pm
Johnny Dangerous wrote: Sun Mar 19, 2023 5:06 pm
dharmafootsteps wrote: Sun Mar 19, 2023 9:31 am

Perhaps you haven't kept up to date with the rate of progress. Is it time to start hanging AI art in the Louvre? No. But these things are probably already better than the average human artist. That's the trend in many tasks and fields just now: they're not better than experts, but they can perform closer to the top end of the curve than the bottom.
“Better” how? Technically being able to create quality images more quickly, sure. Actually producing art that is inspiring or interesting, not so much.
Better in that it can create images of a technical quality that would take quite a bit of skill and training to improve upon.

As far as the artistic merits, I mentioned in another post that I don't think that's a useful metric just now. It's not being trained for that. If you wanted to produce images that appeal to your particular artistic taste you'd just train a model on what you like, which would make it much more limited than the general purpose models, but also more appealing to you.
Art is ultimately relational, and fascinates us precisely because it comes from another sentient being, it’s not just a product created to appeal to certain tastes and check certain boxes, though that certainly describes the way a lot of entertainment works.
Meditate upon Bodhicitta when afflicted by disease

Meditate upon Bodhicitta when sad

Meditate upon Bodhicitta when suffering occurs

Meditate upon Bodhicitta when you are scared

-Khunu Lama
dharmafootsteps
Posts: 475
Joined: Sun Apr 30, 2017 8:57 am

Re: Keeping AIs honest

Post by dharmafootsteps »

Johnny Dangerous wrote: Mon Mar 20, 2023 3:38 pm
dharmafootsteps wrote: Sun Mar 19, 2023 5:41 pm
Johnny Dangerous wrote: Sun Mar 19, 2023 5:06 pm

“Better” how? Technically being able to create quality images more quickly, sure. Actually producing art that is inspiring or interesting, not so much.
Better in that it can create images of a technical quality that would take quite a bit of skill and training to improve upon.

As far as the artistic merits, I mentioned in another post that I don't think that's a useful metric just now. It's not being trained for that. If you wanted to produce images that appeal to your particular artistic taste you'd just train a model on what you like, which would make it much more limited than the general purpose models, but also more appealing to you.
Art is ultimately relational, and fascinates us precisely because it comes from another sentient being, it’s not just a product created to appeal to certain tastes and check certain boxes, though that certainly describes the way a lot of entertainment works.
Yeah, I agree that simply knowing something is computer generated takes away some of the appeal. It's going to become increasingly hard to know what's AI generated and what isn't though.

Interestingly, when you re-attach a human identity by crediting an AI-generated piece to a particular human artist, people become much more OK with it again, even though they know it was produced using AI. For example, Karen X Cheng, who did the first AI-generated cover of Cosmopolitan. People are still interested in her art because they see it as hers, not the AI's.

Given a little time, I think AI generation will start to be seen as just another tool artists use. Some people will look down on it, just as some did when digital art started becoming big, but it will gradually become more normalized.
MagnetSoulSP
Posts: 269
Joined: Fri Jun 30, 2023 1:45 am

Re: Keeping AIs honest

Post by MagnetSoulSP »

Kim O'Hara wrote: Mon Mar 20, 2023 12:18 pm For a long, knowledgeable and insightful overview of AI in society, read https://www.theguardian.com/technology/ ... -e-chatgpt, The stupidity of AI.

:namaste:
Kim
Nothing new. AI functions based on what you put into it. Also it's only good at the thing it is programmed to do.

The issue with AI art, though, arises with questions of copyright: whether it is stealing or original. There is also the very real risk that companies will just switch exclusively to AI-generated art and not have to pay real artists (a trend we see with people being phased out for machines), thus leaving fewer and fewer people going into art. The fewer people you have going into art, the less diversity and creativity you'll have, and the more problems you run into down the line. It's actually a HUGE issue that people don't seem to fully get.

It's even more dangerous when it comes to buildings designed by AI and other stuff like that. Because if something goes wrong and a collapse happens, then there isn't really anyone who can be held liable for what occurred. I mean... can you sue an AI? What if a self-driving car kills someone in a crash?

The proliferation of AI raises several questions about what it means to integrate it into society.
Bristollad
Posts: 1114
Joined: Fri Aug 21, 2015 11:39 am

Re: Keeping AIs honest

Post by Bristollad »

I think some of the handwringing is just scaremongering: did photography destroy the art of portraiture?
If a company uses an AI to design a building which later collapses due to a flaw in that design, the company will be sued.

I don’t think AI is the threat some like to make it out to be, and saying it is is a convenient distraction away from the existential threats that we do face like the effects of climate change.
The antidote—to be free from the suffering of samsara—you need to be free from delusion and karma; you need to be free from ignorance, the root of samsara. So you need to meditate on emptiness. That is what you need. Lama Zopa Rinpoche
Johnny Dangerous
Global Moderator
Posts: 17071
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Keeping AIs honest

Post by Johnny Dangerous »

Oh I totally think some great stuff can be created by artists using AI tools; I also believe the amount will be infinitesimal compared to the mountains of total garbage produced with it.
Meditate upon Bodhicitta when afflicted by disease

Meditate upon Bodhicitta when sad

Meditate upon Bodhicitta when suffering occurs

Meditate upon Bodhicitta when you are scared

-Khunu Lama
Kim O'Hara
Former staff member
Posts: 7047
Joined: Fri Nov 16, 2012 1:09 am
Location: North Queensland, Australia

Re: Keeping AIs honest

Post by Kim O'Hara »

Johnny Dangerous wrote: Tue Mar 21, 2023 12:48 am Oh I totally think some great stuff can be created by artists using AI tools; I also believe the amount will be infinitesimal compared to the mountains of total garbage produced with it.
Yes. Sturgeon's Law doesn't even begin to say how much of it will be rubbish.

https://en.wikipedia.org/wiki/Sturgeon%27s_law

:coffee:
Kim
MagnetSoulSP
Posts: 269
Joined: Fri Jun 30, 2023 1:45 am

Re: Keeping AIs honest

Post by MagnetSoulSP »

Bristollad wrote: Mon Mar 20, 2023 11:07 pm I think some of the handwringing is just scaremongering: did photography destroy the art of portraiture?
If a company uses an AI to design a building which later collapses due to a flaw in that design, the company will be sued.

I don’t think AI is the threat some like to make it out to be, and saying it is is a convenient distraction away from the existential threats that we do face like the effects of climate change.
I don't think so. I think there are some very real questions that come up regarding copyright and liability.
justsit
Posts: 1461
Joined: Wed Oct 21, 2009 9:24 pm
Location: Delaware

Re: Keeping AIs honest

Post by justsit »

Oh, so NOW there's big-name concern...um, the ship already sailed.

https://www.bbc.com/news/technology-65110030