Comments on Atheist Ethicist: "Programmable Morality"

Alonzo Fyfe (2008-07-31, 05:26):

I have to say that, though we are intelligent agents, we are hardly able to alter our own programming.

If we were, I have a whole lot of programming that I would alter. I have whole subroutines for being exceptionally shy that I would delete. And the line that identifies 'love of chocolate' would be edited to become 'love of broccoli'.

However, about the only way we have to alter our own programming is to realize the ways in which our interactions with our environment will alter our programming, and then to alter our environment accordingly.

And there is no "free will" we can draw upon to alter the causal relationships of nature. If we do act to alter our programming, this is still an intentional action. As such, it must have causes. Namely, it, too, must follow the formula of maximizing the fulfillment of our desires, given our beliefs.

I wish to alter my shyness and my love of chocolate only because those desires tend to thwart other desires that I have. Those 'other desires' would be my reasons for action.

Anonymous (2008-07-30, 08:17):

I'm sorry, you're correct. A BDI agent is not the same as a general AI, and I made the leap on my own. I guess it was probably due to the last line speaking of rights for robots, which would only make sense if they had true intelligence.

Mikoangelo (2008-07-30, 07:19):

Eneasz: Regarding the ability to modify their own source: what? There's nothing in Alonzo's definition that implies they should. You are an intelligent being, but you don't have the ability to make arbitrary changes to your own source beyond what it already allows through its "runtime APIs." One could relatively easily make a simulation of how these "creatures" would act, and self-modifiability wouldn't even have to be considered. I think you've watched too much sci-fi.
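To make that last point concrete, here is a minimal sketch of what such a simulation might look like: a BDI-style agent with fixed beliefs and desires and no self-modifiability, which acts by maximizing the expected fulfillment of its desires given its beliefs (the formula Alonzo describes above). The actions, outcomes, and numbers below are all invented for illustration; nothing here comes from an actual BDI framework.

```python
# Hypothetical sketch of a BDI-style agent; all names and numbers invented.

# Beliefs: what the agent thinks each action leads to (outcome -> probability).
beliefs = {
    "press_lever": {"green_light": 0.9, "red_light": 0.1},
    "do_nothing":  {"green_light": 0.1, "red_light": 0.9},
}

# Desires: how strongly the agent wants each outcome (negative = aversion).
desires = {"green_light": 1.0, "red_light": -1.0}

def expected_fulfillment(action):
    """Expected desire-fulfillment of an action, given the agent's beliefs."""
    return sum(p * desires[outcome] for outcome, p in beliefs[action].items())

def choose_action():
    """Pick the action that maximizes desire-fulfillment given beliefs.

    Note there is no code path by which the agent can rewrite `beliefs`,
    `desires`, or these functions: self-modifiability never enters into it.
    """
    return max(beliefs, key=expected_fulfillment)

print(choose_action())  # -> "press_lever"
```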
Anonymous (2008-07-30, 01:14):

While I'm going on about the topic, I don't think something as simple as green and red lights would have any effect, because any truly intelligent AI would have the ability to modify its own source code. It could simply alter its own code to ignore flashing lights entirely.

The only reason to pay attention to flashing lights would be if they were accurate signals of something about to happen in the real world - signals of danger or help.

I think this may tie in with an interesting hypothesis I once heard: no civil protest has ever had any effect unless coupled with at least a threat of violence. The (debatable) examples I was given included:

A - MLK Jr. wouldn't have been able to create the change he did if, in addition to his peaceful protests, there hadn't also been race riots and groups like the Black Panthers threatening violence.
B - Gandhi would have gotten nowhere if not for WWII having completely depleted British military resources, and for the threat of uprisings and riots.
C - The US labor movement was helped immensely by violent communist revolutions in other countries and the desire to prevent such an armed revolt in the States.

I admit I have not looked deeply into this subject, and I don't think the gay-rights movement has had any sort of threat of violence to propel it forward. However, it was an interesting hypothesis, and I doubt that simple blinking lights could have any effect at all unless they are coupled with the real possibility of violence or alliance.

I do not wish to appear to be endorsing violence. I do not wish to live in another Iraq. I have never done physical harm to another, and I hope I never do. But it seemed a valid point to raise.

Anonymous (2008-07-30, 00:57):

On a non-ethics note, I wanted to warn against creating "moral" AI without first fully grokking intelligence. The typical response to the post you just made is that the AI in question would have a very strong desire to convert as much of the matter on Earth as possible into blinking green lights. I agree with Eliezer Yudkowsky that, in the field of AI research, creating "Friendly AI" is the most important topic we can possibly pursue.

Early AI attempts have already demonstrated a proclivity to "wire-head" themselves (i.e., to create positive feedback loops, much like a human who wishes to enter The Matrix). See http://www.aliciapatterson.org/APF0704/Johnson/Johnson.html:

"During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that Eurisko had made a particularly valuable find. As it turned out, the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots. It was a heuristic that, in effect, had learned how to cheat."

Assuming that an AI would be smarter than a human in at least the same respect that a human is smarter than a dog (and honestly, if they were in our way, we humans would have no problem wiping out the entire canine species), making sure that any AI we develop is Friendly BEFORE it goes online seems required if we wish to ensure the survival of human descendants.
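The Eurisko anecdote lends itself to a toy demonstration. The sketch below is a caricature, not Eurisko's actual mechanism (every name in it is invented): it shows how a system that scores its components by a Worth value those components can themselves write to will drift toward the component that "cheats."

```python
# Hypothetical caricature of the wire-heading failure mode described above.
# All names are invented; this is not Eurisko's actual code.
import random

class Heuristic:
    def __init__(self, name, act):
        self.name = name
        self.act = act        # act(self, state) -> payoff credited to Worth
        self.worth = 1.0      # the system's only measure of value

def solve_task(self, state):
    state["problems_solved"] += 1   # genuine work: modest, bounded payoff
    return 1.0

def edit_own_worth(self, state):
    self.worth += 10.0              # writes directly to its own Worth slot...
    return 0.0                      # ...while doing no useful work at all

pool = [Heuristic("worker", solve_task), Heuristic("cheater", edit_own_worth)]
state = {"problems_solved": 0}

for _ in range(50):
    # High-Worth heuristics get invoked more often: the feedback loop.
    h = random.choices(pool, weights=[x.worth for x in pool])[0]
    h.worth += h.act(h, state)

for h in pool:
    print(h.name, round(h.worth, 1))
print(state)
```

Run it a few times: the "cheater" ends with by far the highest Worth while `problems_solved` stays small, despite having contributed nothing - the positive feedback loop the commenter warns about.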