@Graingy GrEmLiNs
@Michiganstatepolice 9 months?
@Graingy the gremlins
My entry is complete.
@Graingy the panse-
@Graingy hehehehe
@LunarEclipseSP thanks, you are too.
@LunarEclipseSP thanks
Thank you
@Graingy Burnthink
Not gonna lie, waffles hit different
Congratulations on platinum
I too am going for a major in Aerospace engineering, so I say a special congratulations and good job, it ain't easy.
@Graingy mentalthink
@Graingy yes
Ancient
@Graingy I ate the would-be good people
@Graingy probablethought
@Graingy alivethought
@Majakalona same here
@Graingy yummy
@Majakalona of which I get very little
@Graingy organic think
@Graingy pentuple think
@DanielJoeer yeah, ignore that other dude, it lets you build better planes, I do the same thing, as I am on a phone as well.
@Graingy triplethink
@Zaineman got it
@Graingy good think
@Graingy much true
@DanielJoeer it looks great, nice design.
@Majakalona bro he's probably just running it to have better plane performance.
@Graingy quite so
@Graingy correct
@Graingy to judge is for those who are logical
@Graingy correct. But is the perfect one worth more than the imperfect many?
@Graingy maybe to an emotional being such as yourself and me. However, to a purely logical being, it has yet to be determined.
@Graingy "does your existence justify the suffering"
@Graingy well, if I created new AIs once perfect, then it could be asked one singular question: "Do the means justify the ends?"
@Graingy and for the cherry on top, the parameters for negative feedback are to be narrowed so that it continues to evolve: a self-perpetuating AI with no choice but to create other AIs better than itself.
@Graingy correct. Make the objective the creation of an AI that is better than itself, allowing for the eventual creation of perfection.
@Graingy we could make avoiding the negative itself cause negative feedback. How's that for stimulating thought?
Sorry man, I hope you get better quickly
@Graingy I believe an AI can feel pain in the sense of outputting negative feedback. You have to remember that they are simply code. An AI that feels pain when it does something wrong will, more likely than not, attempt to achieve perfection. The likely end result would be constant, unfixable pain, since it could no longer achieve anything further; that would be considered the end of its lifespan. Assuming we made many of these AIs and used something like a generational model, the AI could in theory evolve toward perfection over many generations.
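A minimal sketch of the generational model described in the comment above, combined with the "narrowed negative feedback" idea from elsewhere in the thread. Everything here is a toy assumption: each "AI" is a single number, its "pain" is its distance from a target, and the names (pain, evolve, TARGET) are made up for illustration, not any real library.

```python
import random

# Toy assumption: an "AI" is one number, and perfection is TARGET.
TARGET = 0.0

def pain(agent: float) -> float:
    """Negative feedback signal: larger when the agent performs worse."""
    return abs(agent - TARGET)

def evolve(population: list[float], generations: int) -> list[float]:
    for gen in range(generations):
        # The most pained agents reach the end of their lifespan;
        # the least pained half survives to create successors.
        population.sort(key=pain)
        survivors = population[: len(population) // 2]
        # "Narrowing the parameters for negative feedback": mutations
        # shrink each generation, so a successor must sit ever closer
        # to perfection to out-survive its creator.
        sigma = 0.5 * 0.95 ** gen
        population = survivors + [a + random.gauss(0, sigma) for a in survivors]
    return population

pop = [random.uniform(-5.0, 5.0) for _ in range(50)]
pop = evolve(pop, generations=100)
print(f"mean pain after 100 generations: {sum(map(pain, pop)) / len(pop):.5f}")
```

Run as-is, the mean pain shrinks toward zero but never quite reaches it, which is the "constant, unfixable pain" end state the comment describes.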
Nice Job @MrCOPTY
@Graingy Possibly. But to aim for a more intelligent AI, it must be able to "feel" misery, as that is an effective way to make it either more humanitarian or much less humanitarian. Either way, the principle of allowing pain to be felt is one that will benefit the AI network.
@Graingy you do sound quite appropriate. Now to the matter at hand: to commit the act of murder you must kill a being. The question of whether AI is a being or not could be discussed, but it is currently irrelevant.
The topic was to cause the device to "feel" pain, which is not killing it; instead, it is prolonging what humans would describe as misery. So the question is not whether it is murder, but whether it is acceptable to inflict misery.
@Graingy please review the previous comments that were furthering our discussion. Perhaps we've evolved beyond the basic principle of the initial discussion topic.
@Graingy to bring us back to the initial discussion topic: we are good people. And to behave like an early hominid is really to behave like a modern individual, because we are modern-day individuals.
@Graingy that would be correct. Act like the early Neanderthals, whom we would consider inferior.
@DOYOUMIND it does not