Well then they will have to train their AI with incorrect information… politically incorrect, scientifically incorrect, etc.… which renders the outputs useless.
Scientifically accurate and as close to the truth as possible never equals conservative talking points… because they are scientifically wrong.
It would be the same with liberal talking points and in general any human talking point.
Humans try to reshape reality the way they want it, so the things they say are never fully accurate. When they want to increase something, they usually make it appear smaller than it is in real life. And appearances are not universal.
Humans also simplify things in a way that is acceptable for one subject, but not for another.
Humans also don’t know what “correct information” is.
A lot of philosophy connected to language starts to matter when your main approach to “AI” is text extrapolation.
Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who, in varying degrees and colours, are mostly united under the conservative “anti-science” folks.
And you want an AI that doesn’t offend these folks / is taught based on their output. What use could that be?
Ahem, well, there are obvious things: that 2x2 modulo 3 is 1, that some vaccines might be bad (that’s why pharma industry regulations exist), that pi can also be an unknown p multiplied by an unknown i, or just some number encoded as the string ‘pi’.
These all matter for language models, do they not?
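To make that concrete, here is a minimal sketch (purely illustrative, with arbitrary values for p and i) of how the same surface form means different things depending on the assumed context:

```python
import math

# "2 x 2" in ordinary integer arithmetic:
print(2 * 2)        # 4

# "2 x 2" in arithmetic modulo 3, where 4 wraps around to 1:
print((2 * 2) % 3)  # 1

# "pi" as the mathematical constant:
print(math.pi)      # 3.141592653589793

# "pi" as a product of two unknowns p and i (values chosen arbitrarily here):
p, i = 3, 5
print(p * i)        # 15

# "pi" as nothing more than a two-character string, which is all a
# language model sees before any interpretation is applied:
print("pi")         # pi
```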
It is already trained on their output, among other things.
But I personally don’t think this leads anywhere.
Somebody, someplace, decided it’s a brilliant idea to extrapolate text, because humans communicate their thoughts via text, so it’s something that can be used for machines.
Humans don’t just communicate.
So you’re saying you lie to try and change reality or present it in a different way?
That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.
We are not arguing out of some tribal belief, we have our morals and we will constantly test them to try and be better humans for our fellow humans.
No. You are damn fucking well illustrating what I said, though.
Tell me more about how your theories of gay people being abominations are backed by science.
My theories?
I mean, this is an example: a liberal trying to start an argument by saying things that are false, but that in his opinion will lead to something good.
Now do climate change!
You haven’t made any preposterous claims about my supposed theories on climate change.
How about your science on how it’s actually good for children to starve at school if they are poor?
You don’t learn, do you?
Just because you are a layer does not mean that all humans are egoistic layers. Of course there are a lot of them, but it is not a general human thing; it’s cultural and regional. Layers want you to believe that everyone is lying all the time, because that makes their lives easier. But feel free to not believe me 😇.
This doesn’t make any sense.
“Liars” got autocorrected in their message.
I think you hurt people’s feelings lmao.
The truth just isn’t very catchy. Thanks for trying though. I’m still on Lemmy for people like you.
Thx; also, the whole idea, I think, is presented better in the Tao Te Ching.