Comments on Philosophy, et cetera: "God-Given Value" (blog by Richard Y Chappell)

The example I have in mind is one where the A.I. has been subject to very open-ended programming, such that it can easily pick up new values, just as people do. Such openness to experience would indeed have to be part of the original "pre-programming". So be it. My point is simply that the 'purpose' the creator had in mind need not be adopted by the creation. (Consider it a programming flaw if you must, though that's hardly relevant to the present point.)

Posted by Richard (pixnaps.blogspot.com), 2005-01-29, 22:06 EST

"Imagine some mad scientist created an army of intelligent (self-aware) robots, for the purpose of taking over the world. Now consider one of those AIs. Is it this individual's purpose in life to help MadSci take over the world? Not necessarily. It may be MadSci's purpose for them, but thinking agents can rebuke the 'purposes' of their creators. They must, in the end, decide their own 'purpose' in life for themselves."

This is an assumption that you don't justify in this post. Indeed, I believe such an assumption is a logical impossibility: it would be impossible for such a machine to choose its own purpose.
Let's follow the AI's life.

For it to adopt a purpose other than the one given to it by its creator, it would have to learn a new purpose once it was 'released into the world'; it would have to learn from the environment and take on some new values.

Firstly, the very first action it takes must have been pre-programmed. Let's say that it's been programmed so that its first action is to blend into human life and to look for weaknesses in the human population that it can exploit to take over the world.

The first thing it observes on its release is that humans wear clothes. So what does it do with this information? Does it nip down to the nearest Gap and buy some threads? This is the AI's first decision, so it hasn't had time to learn from its environment. Therefore it can only make a decision based on its pre-programming. Wearing clothes is going to help it blend into the human population, so down to Gap as fast as possible.

Second decision: casual or smart? Again, just like any other decision, this isn't a decision it can make without a value. It's been told to blend in, but which humans should it blend in with? The scruffs or the suits? It scans its database for advice: nothing programmed. But it simply can't make a decision without a value; that just isn't physically possible for a computer program. Luckily, the programmers had anticipated such a scenario and programmed the machine to go out and get more information. It observes that the people with the most power seem to be the ones wearing suits, so the AI has now learned a new value, a secondary value: dress smart. But note that this secondary value must be dependent on a value already held.
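The dependency being described here, that an agent can only rank options against values it already holds, and can only adopt a "secondary" value in the service of a value it already has, can be sketched in code. This is a minimal, hypothetical illustration; the class and method names are my own invention, not anything from the discussion above.

```python
class Agent:
    """A toy agent that can only decide by appeal to values it already holds."""

    def __init__(self, primary_values):
        # Pre-programmed (primary) values: the only possible starting point.
        self.values = dict(primary_values)  # maps value name -> weight

    def decide(self, options, appraise):
        # appraise(option, value) scores how well an option serves a value.
        # With no held values, no option can outrank any other.
        if not self.values:
            raise RuntimeError("no decision possible without a held value")
        return max(options,
                   key=lambda o: sum(w * appraise(o, v)
                                     for v, w in self.values.items()))

    def learn_value(self, new_value, serves, weight=1.0):
        # A secondary value can only be adopted because it serves a value
        # that is already held; otherwise there is no basis for adopting it.
        if serves not in self.values:
            raise ValueError("secondary value must rest on a held value")
        self.values[new_value] = weight * self.values[serves]


robot = Agent({"take_over_world": 1.0})
robot.learn_value("dress_smart", serves="take_over_world")  # grounded: allowed
# robot.learn_value("value_human_life", serves="compassion") would raise
# ValueError, since "compassion" was never held in the first place.
```

On this toy model, every learned value traces back to a pre-programmed one, which is exactly the commenter's claim; whether real open-ended learners are so constrained is, of course, the point under dispute.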
It's just not possible for it to make a decision unless it can weigh the outcome against a value that it already holds (which could be either learned or pre-programmed; but if it were learned, that value would have to be based ultimately on a pre-programmed value).

Principle: no decision can be taken without a value that is already held.

The assumption is that the AI would somehow learn from the human population so much that after, say, 10 years it has a whole new set of values on which to decide its own purpose.

But that's not possible: it can only learn secondary values on the basis of values already programmed. Unless someone rewrites its programming (at the programming level, not the learning level), every action that the AI will ever take must be an attempt to take over the world. This is a simple requirement of programming.

Such a conclusion would therefore require that humans themselves are pre-programmed with a set of values, coded into the DNA, and that each decision we make must ultimately be referred back to those pre-programmed genetic values.

Posted by Conscious Robot, 2005-01-29, 21:57 EST
Let's say God said "kill the Jews":

A) Most of us would not believe it was God.

B) Even if we did see it as his will, we would still find it revolting, because one's morals don't turn on a dime. So we could feel that God was immoral, while at the same time he may not be, in an objective sense.

C) God (having a lot more information than us) could have some plan to welcome all the Jews into heaven, which for some obscure reason has become urgent. From their point of view (if they are informed), life on earth could be the bad thing that they should get finished as early as possible.

We have subjective (to you) value, which seems basically to include human life in general, and objective value, which we can create through a number of mechanisms. These are different.

Now, two ways (for example) that you could use to create an objective moral standard are:
1) a theoretical election, where everything in the universe votes for a moral "right" and the collective creates a "moral standard";
2) just take God's will.

In theory, if God was "right", then God would never wrongly conclude that pumpkin tastes good to you, because he would never be wrong. You could even say pumpkin tastes good to you because he said it would. Similarly, it is nonsense to ask "what if 1+1 did not equal 2?", because it does. Or, to put it in the same form: "if maths told me that 1+1 did not equal 2, maths must be wrong; therefore we cannot put complete faith in maths."
Posted by GeniusNZ (geniusnz.blogspot.com), 2005-01-25, 14:57 EST

Blogger may be being a dummy, so if this appears a second time, forgive me:

I think it's pretty difficult to try to isolate one specific property of God and examine its consequences for our 'meaning' in life. It seems to me that that is what Richard is doing. Certainly no one would embrace the specific view regarding the Jews that he suggested. But that's because it does not seem plausible that God would actually do that: to do so would be to break his word (the Abrahamic Covenant, confirmed again through the Davidic Covenant), which, considering that one of the properties God holds is Truth, is impossible. It also ignores elements of God's love. What does God's love mean anyway? How is his love related to his justice? (I heard Nicholas Wolterstorff give an awesome lecture on the interconnectedness of these two properties.)

My basic point is that to isolate our purpose as relational *only* to God is to miss out on all the other properties of God which give human beings meaning. We are not here merely to be God's handmaidens; we are here because he desired that we have communion and a relationship with him. However, that understanding cannot be arrived at by analyzing one specific attribute of God. The totality of God's properties must be taken as a whole, which, unfortunately, is a much bigger task than I have time for at this stage in my philosophical career.
Posted by Peter (dinnertabledonts.blogspot.com), 2005-01-25, 13:25 EST