Thursday, April 26, 2007

Chatalogue: Think Globally, Act Locally?

Ernie and I chatted again tonight. I'll have some more to say later, I hope, but for now I'll just add the transcription:

E: Hello!

A: Hi

A: So, where to start this week?

E: I see you've been doing some homework as well.

A: A bit. The Toynbee book I got was not exactly what I expected.

E: Anyway, I do think you captured the essence of his argument.

E: http://little-endian.blogspot.com/2007/04/civilized-inference.html

A: Was there anything you wanted to say about that, or are we jumping back into the G/NOD thing?

E: I think the key distinction might be what I'm calling "pseudo-factual" statements.

A: Explain?

E: Does that mean that the religious beliefs of the Balinese were true? No, it just means they had evolved successful rules and encoded those rules in religious stories and rituals. The beliefs were false, but they were helpful. On the other hand, while the rules may have been well-adapted to Balinese climate, they may not have worked well elsewhere or during periods of abnormal rainfall or other sus

E: In your example of the Balinese, their beliefs were not entirely "true", but they were "contextually accurate within a given domain."

A: But attempting to extrapolate further from those beliefs would have been difficult, yes?

E: Isn't it always?

E: In my epistemology, *all* knowledge is merely "contextually accurate within a given domain."

E: (some just span bigger domains than others)

A: But you have to be careful of the domain.

E: Sure -- but that is always true, yes?

A: I guess what I see you doing is saying that (for instance) Jesus/the Bible teaches certain things about love. Love seems to be a good thing. So, since Jesus/the Bible was accurate in that one instance, we can trust him/it about other things.

E: No, not at all.

A: So, how do you support the supernatural/theistic aspects of your beliefs?

E: That's too many steps to jump in one leap.

E: Let me start with a simpler set of statements, to see if we can identify where we part company.

E: 1. Successful moral systems must embody some number of valid truths about human nature.

E: 2. The ontological claims of those moral systems may or may not be counterfactual.

E: 3. Medieval Western Christianity (MWC) -- while far from perfect -- has proven to be the most fertile ground for successful moral innovation of any system yet attempted.

A: How are you determining that?

E: Well, can we go back to the metric of "greatest good for the greatest number?"

E: If you buy Toynbee's characterization of "Western Christendom" as a civilization, it has produced more net happiness than any other civilization to date.

A: See, I don't see how you can make that determination so easily, nor can you easily tie that production to Christianity.

A: It's not that I don't think there is some truth there, but I think there is too much contingency and dynamism and complexity to make simple attributions like that.

E: Well, here's a simple test.

E: (I have this weird feeling of deja vu -- have we discussed this before?)

E: Compare the "quality of life" of someone in Western Christendom, and its ability to respond to external threats, with that of any other civilization.

A: At what point in time?

E: Pick a time.

A: And how do we know that Christianity itself was responsible for that?

E: The only exception I can think of is the Moorish empire.

E: I didn't say that "Christianity per se" was necessarily responsible.

E: I am saying that a culture which (at least initially) was based on MWC has proved to be extremely successful.

E: It is a data point, not a proof.

A: OK

E: So, moving on...

E: 4. Many of the ontological and ethical presuppositions of MWC (though far from all) are still a vital part of Contemporary Western Civilization (CWC).

E: go ahead

A: Wasn't sure if you were going on to (5) or not...

E: trying to

E: hard to phrase properly...

E: 5. Any system that presumes to improve on MWC needs to adequately account for those axioms that are in use by CWC, and ideally provide better explanatory and predictive power.

E: there

A: Let me think about that a second...

A: I guess the reason I have trouble with this line of argument is that whatever Christianity (or MWC) got right about ethics, many of its claims about reality go well beyond what is supported by those parts it got right.

E: Maybe, maybe not.

E: The question is, do you concede that your "improvement to Christianity" needs to get right at least as many things as Christianity did?

A: Yes and no.

A: I think the "improvement to Christianity" is wider than ethics. I also think that many of the improvements that MWC has made in the area of ethics have come about in contrast to the established views of Christians.

A: Not sure if that was very clear.

E: Sure, cast the net as wide as you wish.

E: But do you concede that, on the whole, your improvement to Christianity (as we know it) actually needs to be an overall improvement on areas that matter, not just superior in one tiny facet?

A: Yes.

E: Ok, good.

E: We can go back to G/NOD, unless you'd like to tackle something else.

A: That's fine.

E: Okay, well from this perspective I would claim that MWC implicitly made several very powerful assumptions about what we can interpret as G/NOD.

A: Which are?

E: I. That G/NOD is a singular, well-defined entity covering all of humanity.

E: II. That the moral rules governing G/NOD are *discovered* more than they are *invented*.

E: III. That those rules are in principle discoverable by human beings in the right circumstances.

E: IV. That there is such a thing as virtuous character, which is always better than vicious character.

E: V. That it is always rational to do that which is virtuous.

E: Of course many of these were also inherited from the Greeks.

E: But MWC managed to develop an ontological scheme that accounted for everything they liked about the Greeks, and quite a bit more.

E: over to you...

A: And does this relate to Christianity specifically in any way?

A: Or theism generally?

E: At this level, not necessarily (except implicitly in III).

E: However, it does relate to my claims regarding the Deistic Hypothesis, and our first goalpost.

A: What part of the ontological scheme requires a deity?

A: Or transcendent morality?

E: Those terms aren't necessarily well-defined.

A: How does this relate to your deistic hypothesis then? Doesn't that imply a deity somewhere?

A: I just don't see where you make the leap.

E: From the MSSB, I define DH as "the various systems encompassing humanity are the result of a benevolent Purpose -- one sympathetic to human Reason, Virtue, and Happiness"

E: I am simply asserting that one can derive all five of those Principles from the DH.

A: I guess it doesn't appear to me that such a Purpose is necessary. Those principles (or similar) can be derived from a G/NOD without reference to external purpose.

E: And that Desire Utilitarianism requires those same assumptions (but ad hoc) "in order to support meaningful 'social inquiry.'"

E: No, they can be *asserted* for G/NOD.

E: But not derived in any meaningful sense.

E: still there?

A: Scrolling back up to look at the statements...

A: DU says that desires are real. Individuals have them. They can be aggregated.

A: There are relationships between desires, mediated by actions.

E: Sure.

E: But is there a global solution that maximizes them? Do we have sufficient information to make that determination?

A: There is at least one such solution, right? How can there not be at least one maximum? Unless it goes to positive infinity somewhere...

E: There can be multiple local maxima.

E: It could be flat (zero-sum) with multiple minima.

A: Yes. But at least one, and possibly multiple, global maxima.

A: It's even possible that the maximum is "negative", I suppose, but it's still a maximum.
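A's intuition here can be made precise, under an assumption the chat leaves implicit: if each desire strength is bounded (say, normalized to a closed interval) and total fulfillment varies continuously with those strengths, then a global maximum must exist, even if it is "negative."

```latex
% If desire strengths live in a compact set and total fulfillment F
% is continuous, the extreme value theorem guarantees a global maximum
% (though not a unique one, and it says nothing about finding it):
\[
  F : [0,1]^n \to \mathbb{R} \ \text{continuous}
  \;\Longrightarrow\;
  \exists\, x^{*} \in [0,1]^n \ \text{with}\
  F(x^{*}) \ge F(x) \ \ \forall x \in [0,1]^n .
\]
```

If the strengths were unbounded, A's caveat about fulfillment running off to positive infinity would apply, and no maximum need exist.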

E: Well, okay. But is it reachable?

A: How is that relevant here?

E: More importantly, is virtuous behavior due to virtuous desire the optimal means to get closer to that maximum?

E: In order for DU to be actionable, it seems necessary to answer questions like that.

A: Virtuous behavior is behavior that promotes desires, so by definition, virtuous behavior moves us closer to at least a local maximum.

E: Right, but what if that local maximum takes one away from the global maximum?

A: Again, how is this relevant here? Yes, it seems possible.

E: My point is that if "morally good" is defined relative to G/NOD -- and that's what we care about -- then mere local statements and decisions about what is "functionally good" don't tell us anything about genuine morality.

E: We need some additional assumptions about how maximizing the local Network of Desires (L/NOD) impacts the G/NOD.

E: I'm trying to decide whether DU (as you understand it) implicitly makes those assumptions, or denies their relevance.

A: Ooh, I think we are talking about different senses of the word "global".

A: I am imagining a multi-dimensional landscape where the "altitude" at any point is the total amount of desire fulfillment.

E: okay...

A: A local maximum does not refer to maximizing desires locally (socially speaking) but to maximizing desires in the neighborhood of the current position on the landscape.

A: So, the dimensions correspond to the strengths of various desires.

E: I'm not sure I see the difference.

A: Well, what do you mean by the L/NOD?

E: At any rate, if we don't have a unimodal landscape, it seems you have the same issues.

E: L/NOD = the local set of entities and their desires

A: Local geographically (as in social connections)?

E: hold on

E: okay, I'm back.

A: Let's say there are only two desires that people have, A and B. So there is a two dimensional landscape (with hills).

E: Let us assume that -- at least in principle -- I can observe the entities that I have personal awareness of, and make a plausible assessment of their desires.

E: (which itself is a big assumption, but I can live with it).

A: The hills are the total amount of desire fulfillment in the entire population when that population has desires a in domain (A) and b in domain (B).

E: [feel free to continue your example while I work on mine]

A: Oops. That was domain ( A ) and domain ( B ) that cleverly got converted to pictures...

E: I was wondering... Beer and angels made a nice contrast

A: So, there may be a local maximum of total desire fulfillment at a=a1 and b=b1 but a global maximum at a=a2 and b=b2. But in both cases, the desire fulfillment of the entire population is being measured.
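A's two-desire landscape is easy to sketch in code. Everything below is invented for illustration -- the fulfillment function, the hill positions, and the greedy climber are not from the chat. The one point it shows is E's worry in miniature: a climber that starts near the smaller hill settles on a local maximum and never finds the taller, global one.

```python
import math

def fulfillment(a, b):
    # Total desire fulfillment across the population when the desire
    # strengths are (a, b). Two Gaussian "hills": a smaller local
    # maximum near (0.2, 0.2) and the global maximum near (0.8, 0.8).
    local_hill = 1.0 * math.exp(-((a - 0.2) ** 2 + (b - 0.2) ** 2) / 0.01)
    global_hill = 2.0 * math.exp(-((a - 0.8) ** 2 + (b - 0.8) ** 2) / 0.01)
    return local_hill + global_hill

def hill_climb(a, b, step=0.01, iters=1000):
    # Greedy ascent: move to whichever neighboring point is best,
    # and stop once no neighbor improves on the current position.
    for _ in range(iters):
        candidates = [(a, b), (a + step, b), (a - step, b),
                      (a, b + step), (a, b - step)]
        best = max(candidates, key=lambda p: fulfillment(*p))
        if best == (a, b):
            break
        a, b = best
    return a, b

a, b = hill_climb(0.1, 0.1)
# The climber stops near (0.2, 0.2), even though fulfillment at
# (0.8, 0.8) is twice as high.
```

Note that the "local" here is local on the landscape, exactly as A distinguishes above, not local in any social or geographic sense.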

E: If P is the people whose desires I can observe or infer, then L/NOD is simply D, the aggregate of all their desires.

A: he he

E: D[P]
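E's shorthand D[P] lends itself to a literal sketch. Nothing below comes from the chat beyond the notation: modeling each person as a mapping from desire name to strength, and aggregating by plain summation, are invented assumptions about how such an aggregate might work.

```python
from collections import Counter

def aggregate_desires(people):
    # A sketch of E's D[P]: the aggregate of the desires of the
    # people P whose desires we can observe or infer. Each person is
    # a mapping from desire name to strength; aggregation here is
    # simple summation, one possible rule among many.
    total = Counter()
    for desires in people:
        total.update(desires)  # Counter.update adds values per key
    return total

# Desire names borrowed from the emoticon joke earlier in the chat.
P = [{"beer": 0.9, "angels": 0.1},
     {"beer": 0.3, "angels": 0.7}]
D = aggregate_desires(P)  # D["beer"] ~ 1.2, D["angels"] ~ 0.8
```

The epistemic problem E raises is exactly what this sketch glosses over: in practice we never have the per-person strengths, only plausible inferences about them.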

E: Anyway, I'm concerned with the epistemic problem.

A: How can we find the solution? How do we know when we've found it?

E: Exactly.

E: How indeed?

E: I am asserting that there needs to be some sort of paradigm.

E: And more, that there needs to be a widespread moral consensus about the validity of that paradigm.

A: I don't think I follow.

A: For instance, consider this alternative: each individual promotes in others desires that will tend to satisfy his own desires.

E: Sure.

E: I call that manipulation.

A: At the same time, of course, he is acted on by others in the same way.

A: Do we want to get side-tracked on manipulation?

A: It doesn't have to be dishonest or "tricky".

E: I'm just labeling your definition.

E: Feel free to keep going...

A: Grrr... trying again.

A: My point is just that progress can be made by individuals acting as individuals without the widespread consensus you mentioned.

A: It might go faster with consensus, but it's not necessary.

E: I disagree, at least under the terms you defined above.

E: I would assert that influencing other people's desires requires either a) power, or b) moral authority to succeed.

A: What is moral authority? If not power?

E: And that if influencing you to act according to my desires is not perceived by you as in your best interest, it will be interpreted as unhealthy and illegitimate manipulation.

E: If you like you can define moral authority as a form of "soft power" to distinguish it from "hard power."

A: Praising somebody for something they did is a way to manipulate them and others to do similar things again. If the praise is sincere, is it unhealthy?

E: Define "sincere" and "unhealthy".

A: Sincere simply means here that I give praise because I actually approve of what I am praising.

E: I would define "sincere praise" as "valuing something as truly good", not merely as "beneficial to me."

A: Are you asserting the existence of intrinsic goodness apart from anybody's desires?

E: At least apart from the "local" desire, yes.

E: For example...

A: It does not have to be beneficial to me to be praiseworthy.

A: I can praise somebody for helping somebody else, knowing that similar actions could be helpful to me in the future.

E: If a kid in high school who wanted my approval stole the answers to the physics exam, I could easily praise her for her ingenuity and courage.

E: And I would be fully sincere.

E: But it would be manipulative and immoral in the larger scheme, no?

E: Sure, *some* praise is healthy; but that doesn't mean it all is.

A: Oh, definitely.

E: Okay, so let us define "healthy praise" as that which encourages behavior that tends to maximize the G/NOD, whether or not it maximizes the L/NOD.

A: I'm sorry if I implied that sincerity was the *only* criterion by which the praise should be judged.

E: Or do you have a criterion for "healthy" that doesn't reference the G/NOD?

A: No.

E: So, we're running out of time.

A: I was just going to say...

E: I'm not in a hurry, but we should probably try to wrap up...

A: Yep.

A: I guess we'll leave it there... maybe I'll blog about it before next week.

E: My position is that it is possible to make meaningful statements about how to maximize L/NOD with fairly weak assumptions about reality, but that to make meaningful statements about the G/NOD requires fairly *strong* assumptions about reality (comparable to I-V) above.

E: Is that much at least clear?

A: Clear as lemonade, at least.

E: I'll settle for translucent. Hopefully we can start with that next week, unless we manage to resolve it before then.

A: OK, "talk" to you then.

E: Bye!

A: bye
