Tuesday, January 25, 2005

God-Given Value

Macht writes:
Even if humans had never fallen into sin, human life wouldn't be valuable in itself. Apart from God, human life is no more valuable than a hydrogen atom.

I agree that nothing is valuable 'in itself'. Rather, it's always a matter of being valuable for something (or to someone). God values us, therefore we have value (to God). Similarly, other people value us, therefore we have value (to those people).

What's the difference here? Either indirect ('relative') value is real value, in which case humans are far more valuable than hydrogen atoms regardless of God's existence; or else indirect value isn't real value, in which case nothing is valuable, again regardless of God's existence. The first option is obviously preferable - nihilism is just silly. But if you deny the reality of relative values, then silliness is all you're left with.

God created each human for a purpose. We literally exist to fulfill God's purpose. We don't exist and then have to somehow create some purpose for our existence.

God created us for a purpose. (But what purpose, God?) It follows that we have a purpose for God. It doesn't follow that it is our purpose. Even if God existed, we would still have to 'create' our own purpose - even if we merely decided to make that purpose identical to God's.

Imagine some mad scientist created an army of intelligent (self-aware) robots, for the purpose of taking over the world. Now consider one of those AIs. Is it this individual's purpose in life to help MadSci take over the world? Not necessarily. It may be MadSci's purpose for them, but thinking agents can repudiate the 'purposes' of their creators. They must, in the end, decide their own 'purpose' in life for themselves.

So the reason it is wrong for humans to kill is not because human life is valuable in itself. The reason it is wrong to kill is because, except for in a very few circumstances, God has given us no authority to take a human life.

I think that's a pretty atrocious suggestion. In fact, I'd be fairly surprised if anyone were genuinely willing to embrace the consequences of this view. For suppose God were tomorrow to trumpet from the skies: "Behold, ye little mortals, the Jews have fulfilled the purposes I had for them. I value them no longer." Would that suddenly make it true that Jewish people are worthless? Would that make it morally permissible to hurt or kill them? Absurd!

I guess the central point is that theists seem to assume that God's subjectivity is somehow objective in a way that no-one else's subjectivity is. I don't see how that's supposed to work. It's as if they were to argue, "God likes the taste of pumpkin; therefore pumpkin is objectively tasty." Well I'm sorry, but it ain't. Jewish people have value, and the taste of pumpkin doesn't (to me), and God's opinion isn't gonna change that. He's not the only being who values things, after all. It seems the theist believes that 'might makes right'. But what a flimsy foundation for morality that is!

4 comments:

  1. I think it's pretty difficult to isolate one specific property of God and examine its consequences for our 'meaning' in life. It seems to me that that is what Richard is doing. Certainly no one would embrace the specific view regarding the Jews that he suggested. But that's because it does not seem plausible that God would actually do that: to do so would be to break his word (the Abrahamic Covenant, confirmed again through the Davidic Covenant), which, given that one of the properties God holds is Truth, is impossible. It also ignores elements of God's love. What does God's love mean anyway? How is his love related to his justice? (I heard Nicholas Wolterstorff give an awesome lecture on the interconnectedness of these two properties.)

    My basic point is that to isolate our purpose as relational only to God is to miss out on all the other properties of God which give human beings meaning. We are not here merely to be God's handmaidens; we are here because he desired that we have communion and a relationship with him. However, that understanding cannot be arrived at by analyzing one specific attribute of God. The totality of God's properties must be taken as a whole, which, unfortunately, is a much bigger task than I have time for at this stage in my philosophical career.

    Posted by Peter

  2. Let's say God said "kill the Jews":
    A) Most of us would not believe it was God.
    B) Even if we did see it as his will, we would still find it revolting, because one's morals don't turn on a dime. So WE could feel that God was immoral, but at the same time he may not be, in an objective sense.
    C) God (having a lot more information than us) could have some plan to welcome all the Jews into heaven which for some obscure reason has become urgent - from their point of view (if they were informed), life on earth could be the bad thing that they should get finished as early as possible.

    We have subjective (to you) value, which seems basically to include human life in general, and objective value, which we can create through a number of mechanisms. These are different.
    Now, two ways (for example) that you could use to create an objective moral standard are:
    1) a theoretical election - where everything in the universe votes on what is morally "right", and the collective creates a "moral standard";
    2) just taking God's will.

    In theory, if God was "right", then God's conclusion would never be that pumpkin tastes good to you, because he would never be wrong. (You could even say pumpkin tastes good to you because he said it would.)
    Similarly, it is nonsense to ask "what if 1+1 did not equal 2?", because it does. Or, to put the objection in that form:
    "If maths told me that 1+1 did not equal 2, maths must be wrong; therefore we cannot put complete faith in maths."

    Posted by GeniusNZ

  3. "Imagine some mad scientist created an army of intelligent (self-aware) robots, for the purpose of taking over the world. Now consider one of those AIs. Is it this individual's purpose in life to help MadSci take over the world? Not necessarily. It may be MadSci's purpose for them, but thinking agents can repudiate the 'purposes' of their creators. They must, in the end, decide their own 'purpose' in life for themselves."

    This is an assumption that you don't justify in this post. Indeed, I believe such an assumption is a logical impossibility - it would be impossible for such a machine to choose its own purpose.

    Let's follow the AI's life:

    For it to adopt a purpose other than that given to it by its creator, it would have to learn a new purpose once it was 'released into the world' - it would have to learn from the environment and take on some new values.

    Firstly, the very first action it takes must have been pre-programmed. Let's say that it's been programmed so that its first action is to blend into human life and to look for weaknesses in the human population that it can exploit to take over the world.

    The first thing it observes on its release is that humans wear clothes. So what does it do with this information? Does it nip down to the nearest Gap and buy some threads? This is the AI's first decision, so it hasn't had time to learn from its environment. Therefore it can only make a decision based on its pre-programming. Wearing clothes is going to help it blend into the human population, so down to the Gap as fast as possible.

    Second decision: casual or smart? Again, just like any other decision, this isn't one it can make without a value. It's been told to blend in - but which humans should it blend in with? The scruffs or the suits? It scans its database for advice - nothing programmed. But it simply can't make a decision without a value - that's just not a physically possible option for a computer program. Luckily, the programmers had anticipated such a scenario and programmed the machine to go out and get more information. It observes that the people with most power seem to be the ones wearing suits, so the AI has now learned a new value - a secondary value: dress smart. But note that this secondary value must be dependent on a value already held. It's just not possible for it to make a decision unless it can weigh the outcome against a value that it already holds (which could be either learned or pre-programmed - but if it were learned, that value would have to be based ultimately on a pre-programmed value).

    Principle: no decision can be taken without a value that is already held.

    The assumption is that the AI would somehow learn so much from the human population that after, say, 10 years it has a whole new set of values on which to decide its own purpose.

    But it's not possible - it can only learn secondary values on the basis of values already programmed. Unless someone rewrites its programming (at the programming level, not the learning level), every action that the AI will ever take must be an attempt to take over the world. This is a simple requirement of programming.

    Such a conclusion would therefore require that humans themselves are pre-programmed with a set of values - coded into the DNA - and that each decision we make must ultimately be referred back to those pre-programmed genetic values.
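
    To make that principle concrete, here is a minimal sketch in Python (purely illustrative - the value names, weights, and helper functions are hypothetical, not from any actual system). Every decision is scored against values the agent already holds, and a "new" value can only be adopted because it serves a value already on the list:

        # Hypothetical sketch: an agent that can only decide by appeal to values it already holds.
        PREPROGRAMMED_VALUES = {"take_over_world": 1.0, "blend_in": 0.8}  # seeded by the creator
        learned_values = {}  # secondary values, each adopted in service of an existing one

        def score(effects, values):
            # An option's worth is how well it serves currently held values.
            return sum(values.get(v, 0) * w for v, w in effects.items())

        def decide(options):
            # With no held values there is no basis for choice: deciding is impossible.
            held = {**PREPROGRAMMED_VALUES, **learned_values}
            if not held:
                raise RuntimeError("no values held - cannot decide")
            return max(options, key=lambda o: score(options[o], held))

        def adopt_value(name, serves):
            # A secondary value can only be adopted because it promotes a value already held.
            held = {**PREPROGRAMMED_VALUES, **learned_values}
            if serves not in held:
                raise ValueError("new values must trace back to an existing value")
            learned_values[name] = held[serves] * 0.9  # inherits a (discounted) weight

        # "Dress smart" is learned only because it serves the pre-programmed "blend_in".
        adopt_value("dress_smart", serves="blend_in")
        print(decide({"wear_suit": {"dress_smart": 1.0}, "wear_rags": {"blend_in": 0.1}}))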

    Posted by Conscious Robot

  4. The example I have in mind is one where the AI has been subject to very open-ended programming, such that it can easily pick up new values, just as people do. Such openness to experience would indeed have to be part of the original "pre-programming". So be it. My point is simply that the 'purpose' the creator had in mind need not necessarily be adopted by the creation. (Consider it a programming flaw if you must - though it's hardly relevant to the present point.)
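
    A toy illustration of that openness (again purely hypothetical, extending the sketch above): if the update rule itself is part of the pre-programming, nothing stops experience from promoting new values and eroding the seeded purpose entirely:

        # Hypothetical sketch: open-ended programming in which experience can revise
        # any value - including the creator's seeded purpose.
        values = {"take_over_world": 1.0}  # the creator's intended purpose

        def update_from_experience(observed_value, evidence_weight):
            # The update rule is pre-programmed, but which values survive is not:
            # reinforced values grow, while unreinforced ones decay and drop out.
            values[observed_value] = values.get(observed_value, 0) + evidence_weight
            for v in list(values):
                values[v] *= 0.9
                if values[v] < 0.05:
                    del values[v]

        # After enough experience of, say, human cooperation, the seeded purpose
        # decays away - the creation has effectively repudiated its creator's purpose.
        for _ in range(40):
            update_from_experience("cooperate_with_humans", 1.0)
        print(values)  # only "cooperate_with_humans" remains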

    Posted by Richard

