Epistemic Value

Monday, July 10, 2006

Jeffrey on Value of Knowledge

I've been revisiting some classics in the foundations of subjective probability and thought I'd share this opening paragraph from Richard Jeffrey, "Probable Knowledge," p. 227 in Kyburg and Smokler, Studies in Subjective Probability (1980).

-------
The central problem of epistemology is often taken to be that of explaining how we can know what we do, but the content of this problem changes from age to age with the scope of what we take ourselves to know; and philosophers who are impressed with this flux sometimes set themselves the problem of explaining how we can get along, knowing as little as we do. For knowledge is sure, and there seems to be little we can be sure of outside logic and mathematics and truths related immediately to experience. It is as if there were some propositions--that this paper is white, that two and two are four--on which we have a firm grip, while the rest, including most of the theses of science, are slippery or insubstantial or somehow inaccessible to us. Outside the realm of what we are sure of lies the puzzling region of probable knowledge--puzzling in part because the sense of the noun seems to be cancelled by that of the adjective.
The obvious move is to deny that the notion of knowledge has the importance generally attributed to it, and try to make the concept of belief do the work that philosophers have generally assigned the grander concept. I shall argue that this is the right move.
-------

I would say *justified* belief rather than belief simpliciter, but the point is well-taken-by-me. However, the passage seems a bit obscure in one respect.

The obscure-to-me part is that Jeffrey seems to be instantiating this argument pattern.

(JA1) Knowledge is rare, thus knowledge is not valuable.

This seems quite odd since value usually varies in direct proportion to rarity: "scarcity begets value". Yet this interpretation is hard to avoid. Consider the sentences of the paragraph sequentially. S1 is a statement about the history and sociology of epistemology and poses the following question.

(JQ1) How do we get along, knowing as little as we do?

S2 and S3 defend the presupposition of the rarity of knowledge in (JQ1), i.e. the Rarity Thesis.

(RT) Knowledge is rare (among humans generally).

Then S4 coins a new technical term "probable knowledge" which I assume comes to this.

(JPK) Proposition p is an instance of probable knowledge for S at t iff at t S knows that p, but S isn't sure that p (or perhaps has no *right* to be sure or has no *grounds* for surety or what have you).

But then the very next "move" is to assert the Knowledge-ain't-all-it's-cracked-up-to-be Thesis.

(KACTB) Knowledge doesn't have the value traditionally attributed to it.

Now I heartily endorse (KACTB) but (JA1) doesn't seem to be the right way to get there at first blush.

However, I think there's a hint in (JQ1). Presumably, whatever "getting along" entails, it entails using items with some kind of positive epistemic status (to use Chisholm's phrase) to guide our decisions. This has traditionally been attributed to knowledge (see my post here concerning Chisholm on knowledge, the right to be sure, and action-guiding evidence).

If knowledge *did* play this important role of being that-which-guides-our-decisions, then that would surely confer a good deal of value upon it. However, precisely what Jeffrey will endorse, of course, is Bishop Butler's Thesis.

(BBT) Probability is the very guide of life.

If my reconstruction is right, then the idea is this:

Argument A

1. Knowledge has its traditional degree of value only if it's the very guide of life (in the sense that sub-known items are *not*).

2. But knowledge is *not* the very guide of life (in the sense specified in 1).

3. Thus, it is not the case that knowledge has its traditional degree of value.


So we've got a valid argument, and I take (BBT) to be secure--in some precisification at least--so premise 2 is true. Thus premise 1 is the one to focus on. The best reason I can think of to deny 1 would be some form of the Knowledge Maximality Thesis.

(KMT) Knowledge entails a maximal, or near maximal, degree of some positive epistemic status.

This would be true if, for instance, knowledge entailed certainty (which I take it it does not) or the right to be sure (see post link above).

But even then, as long as not *all* of the value traditionally attributed to knowledge was explained by (KMT) and *some* of it was derived from exclusive-action-guidance, 1 would still be true. Its degree of truth would depend on the degree to which the value traditionally ascribed to knowledge was derived from this function. The following thesis, at least, seems safe:

(TD1) To the extent that the value of knowledge was derived from exclusive-action-guidance, to that very extent (BBT)--suitably understood--threatens the value of knowledge.

So if we had the following premise

(P1) A lot of the value of knowledge was derived from its supposedly being exclusively action-guiding.

then we'd have the following conclusion via (TD1) and (P1).

(C1) A lot of the value of knowledge is threatened by (BBT).

Now clearly (KMT) mitigates (C1), but--in addition to the Chisholm reference above--the literature on induction and inductive acceptance (the portion which doesn't attend to probability kinematics) is replete with something akin to the Knowledge Is the Very Guide of Life Thesis.

(KGLT) Knowledge is the very guide of life.

There are two forms of (KGLT) I see quite often in the literature on induction.

(KGLTa) Knowledge is the evidence upon which we conditionalize.

(KGLTb) The data upon which (ampliative) inductive principles operate are those known via empirical observation.


The probabilist (radical or not) who accepts (BBT) will reject both theses.
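To make the contrast concrete: strict conditionalization treats the evidence proposition as becoming certain, while Jeffrey's probability kinematics only requires new probabilities over a partition of possibilities, with no cell driven to 1. Here is a minimal sketch in Python; the function names and toy numbers are mine, not Jeffrey's:

```python
def conditionalize(p, e):
    """Strict conditionalization: the evidence e (a set of worlds)
    becomes certain, and probability is renormalized inside it."""
    z = sum(p[w] for w in e)
    return {w: (p[w] / z if w in e else 0.0) for w in p}

def jeffrey_update(p, new_cell_probs):
    """Jeffrey conditionalization: experience shifts the probability of
    each cell of a partition to a new value; certainty is not required."""
    new_p = {}
    for cell, q in new_cell_probs.items():
        z = sum(p[w] for w in cell)
        for w in cell:
            new_p[w] = q * p[w] / z
    return new_p

# Four toy worlds: is the paper white? is some scientific thesis true?
p = {("white", "T"): 0.4, ("white", "F"): 0.3,
     ("not-white", "T"): 0.2, ("not-white", "F"): 0.1}
white = frozenset({("white", "T"), ("white", "F")})
not_white = frozenset({("not-white", "T"), ("not-white", "F")})

# Strict: a good look makes "the paper is white" certain.
p_strict = conditionalize(p, white)

# Jeffrey: a glimpse in dim light merely raises P(white) to 0.9.
p_jeffrey = jeffrey_update(p, {white: 0.9, not_white: 0.1})
```

On the Jeffrey update the ratio between the two "white" worlds is preserved (rigidity), but nothing becomes certain--which is just the feature a (BBT)-style probabilist exploits against (KGLTa) and (KGLTb).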

So it seems that probabilism constitutes one route to a value-shift in epistemology, a transference of some of the value--plausibly a considerable quantity--of knowledge to belief or justified belief.

9 Comments:

  • At 6:24 PM, Anonymous Anonymous said…

    I think Jeffrey's view has a big hole, and it is that you need certainty for probability. Jeffrey conditionalisation has two big flaws: first, it is ridiculously complicated, and second, everything ends up tending towards .5. Without JCOND, you can't even say what probability is without reference to knowledge and certainty. So knowledge is the ultimate guide to action (life), the only real contender being value. The importance of probabilistic reasoning makes the central project of epistemology more important, not less. We need to have a unifying theory that links the folk concept of knowledge up to the probability calculus. Saying that we can't be certain of anything but logical truths and immediate experience is just false. I am certain of many more things than these. So the idea that probability 1 can only be applied to these things is likewise false. This is where epistemology should be focussed, on certainty as probability 1 in non-tautological, non-immediately-experienced propositions. I am certain that the Thames runs through London, my degree of belief in this proposition is 1, it has an objective probability of 1, and I know it to be the case. This is a fact about certainty, probability and knowledge. Any probability theorist who denies it must change their theory. Since many, if not most, probability theorists would deny it, the philosophical value of studying knowledge is as high as it has ever been. When the consensus is absurd, it should be overturned.

     
  • At 2:30 PM, Blogger Trent_Dougherty said…

    Jonny, I'm not going to touch the claims about JCOND, that would take us way too far afield, and I'm in agreement with you that unit credence can be extended beyond logic and immediate experience.

    However, there are two respects in which I'm not yet convinced you've exposed a major problem with Jeffrey's arguments.

    1. Someone might well get along without any certainties/knowledge.

    2. I suggested that it was once common to think of *only* knowledge as evidence. Note well the caveat on the first premise of Argument A.

    "P1. Knowledge has its traditional degree of value only if it's the very guide of life (in the sense that sub-known items are *not*)."

    Knowledge had a special status and now it's just a special case.

     
  • At 12:12 PM, Anonymous Anonymous said…

    Thanks for answering my comment. The points are subtle. I am doubting
    1. Someone might well get along without any certainties or knowledge.

    Even were we to accept JCOND as a valid method of belief revision under uncertainty, this still doesn't mean that anyone could get along without ANY certainty. This is because the value system of an individual is based on certainty, and the terms in which the decision is represented result in certainty. Beliefs, knowledge and other mental representations at least sometimes must result in action. Action is necessarily coarse grained. You cannot 90% get out of bed. Actions themselves must be monitored. To be flippant, there is going to be some point in the afternoon where any residual doubts that you got out of bed in the morning resolve to 0.

     
  • At 4:26 PM, Blogger Trent_Dougherty said…

    "the value system of an individual is based on certainty"

    Not sure what you mean by that, could you expand?

    "To be flippant, there is going to be some point in the afternoon where any risidual doubts that you got out of bed in the morning resolve to 0"

    Yes, but your statement is in the indicative mood whereas mine was in the subjunctive. I was questioning the existence of a *logical* relation between being a human-like rational agent and having certainties. All I need for that point are possible cases.

    To be not quite flippant, I'm not sure my credence on such things ever gets to 0, only to a very very small degree. After all, I'm not *certain* that I'm not in a demon world or that idealism is true, in the sense that I have unit credence in their negation. I think closure issues are tricky, but for many agents those two figures are going to be linked in the way my point needs them to be.

     
  • At 11:24 PM, Anonymous Anonymous said…

    Argh! Just wrote a long comment and then deleted it by mistake.
    The value system of a subject can only be deduced by seeing which options they prefer under certainty. Only with an ordering of values so established can we make sense of degrees of belief. See Ramsey, "Truth and Probability".

    The other point about residual doubt: I am trying to develop an argument that although it is possible to doubt things that we know, like that I got out of bed this morning, we cannot give any real probability value to these doubts, because otherwise they will accumulate. Let's suppose it is possible that I didn't get out of bed this morning but instead died and am now experiencing an afterlife that is very similar to life. Let us give this a probability of 0.001. But this possibility is no more likely to have happened this morning than on any other morning. Suppose I have had 10 000 mornings. This would make my chance of being currently in the afterlife 1-(0.999 to the power of 10 000), which is very close to certain. And bear in mind that this is only one wild possibility that I can't eliminate. There could be infinitely many others. Every second I could have had my brain removed and put in a vat.
    My guess is that it is ridiculous to suppose that the likelihood of these far-out scenarios has any probability at all, because of the super absurdity of their accumulative probability.
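    The accumulation arithmetic in the comment above does come out as claimed; a quick check in Python, using the commenter's toy figures of 0.001 per morning and 10,000 mornings (the assumption that the mornings are independent is mine):

```python
# Toy figures from the comment above: an independent 0.001 chance, each of
# 10,000 mornings, that the far-out scenario occurred that morning.
p_per_morning = 0.001
mornings = 10_000

p_never = (1 - p_per_morning) ** mornings  # chance it never happened
p_at_least_once = 1 - p_never              # the accumulated doubt

# 1 - 0.999**10000 is indeed "very close to certain" (about 0.99995).
print(p_at_least_once)
```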

     
  • At 11:51 PM, Blogger Trent_Dougherty said…

    "The value system of a subject can only be deduced by seeing which options they prefer under certainty."

    OK, I see where you're coming from a little better now, but I utterly reject anything even smacking of operationalism about "degrees of belief". But even on that kind of view all that follows from the above quote is that we couldn't deduce a subject's value system: it doesn't mean she doesn't have one.

    "although it is possible to doubt things that we know, like that I got out of bed this morning, we cannot give any real probability value to these doubts"

    What's the difference between "real" probabilities and, I suppose, "non-real" probabilities? Leaving infinitesimals out of it for now, either we model those doubts with non-zero probability or we don't. That's up to the theorist, but I can't see much use for a model that assigns unit value to a proposition for which there is doubt. It's common enough to let ε (can't remember if Blogger can read much HTML in comments) represent marginal doubt and we can have a designation of "acceptable" or "practically certain" (Kyburg does this routinely), but to actually assign the full unit seems a bad model (though I'd have to see more of the system worked out to tell. If you let "Ps(A) = 1" represent "A is practically certain for S" then how are you going to represent *complete* certainty?).

    I think your example just shows that you've given too high a probability to the proposition. For infinite sets I'd pull in non-standard analysis, though I admit that issues pertaining to countable additivity are tricky.
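    A formal footnote on why "actually assigning the full unit" seems a bad model: under strict conditionalization, probability 1 is sticky--once P(A) = 1, no admissible evidence can ever lower it. A toy illustration (the worlds and numbers are mine):

```python
# Three worlds; A = {w1, w2} gets the full unit, forcing P(w3) = 0.
p = {"w1": 0.6, "w2": 0.4, "w3": 0.0}

def conditionalize(p, e):
    """Strict conditionalization on evidence e (a set of worlds)."""
    z = sum(p[w] for w in e)
    return {w: (p[w] / z if w in e else 0.0) for w in p}

# Admissible evidence must have positive prior probability, so it always
# overlaps A; conditionalizing therefore leaves P(A) at 1, come what may.
p2 = conditionalize(p, {"w1", "w3"})
print(p2["w1"] + p2["w2"])  # P(A) after the update: still 1.0
```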

     
  • At 9:35 AM, Anonymous Anonymous said…

    I'd like to know what "operationalism" means. In a paper by Jeffrey called something like Wolfman meets Dracula, he says that degrees of belief are theoretician's terms and eventually terminate in the betting behaviour of the subject. I think he calls it "quasi operational". I think it comes from Hempel's operational definitions. For this reason, I would have thought it a problem for Jeffrey that we can't evaluate someone's degrees of belief at all from any of their behaviour. This to me is the hole in Jeffrey's approach. Because if you allow JCOND, any behaviour is consistent with a wide range of probability distributions and value systems.
    So if by rejecting operationalism you mean that there is a unique "real" degree of belief that a subject has in a proposition which is theory independent, then JCOND doesn't cut the mustard, and I am not sure that Jeffrey would agree with you anyway.
    On the other point, you are suggesting something like this: If it is possible to doubt p, but impossible to doubt q, then q must have a higher degree of belief than p, so therefore p can't have unit credence.
    But there is a distinction to be made between the possibility of doubt, and actually doubting. All I can say is that it is possible for me to doubt things that I don't actually doubt. So for example it is possible for me to doubt that the Thames runs through London even though I do not doubt it, and I am certain of it. To withold unit credence on such propositions because of the possibility of doubt is unmotivated because such possibilities of doubt will not show up in behaviour.
    One might respond that if one did actually doubt something, then this would show up in behaviour, so the possibility of doubt affects one's behaviour *dispositionally*. But it is possible for me to doubt anything. For example it is possible that I doubt ~(p&~p), or that "I think therefore I am". In fact in terms of frequency I have doubted these things more often than I have doubted that London is in England. Does this mean that my degree of belief that London is in England is necessarily higher than my degree of belief that I am? I actually doubt neither, so surely it is best to give them both unit 1.

     
  • At 4:36 AM, Blogger Trent_Dougherty said…

    Saying that degrees of belief are defined in terms of betting behavior is usually a sufficient condition for operationalism.

    "if you allow JCOND any behaviour is consistent with a wide range of probability distributions and value systems."

    Sure, but there's nothing new in that. The same was true of Aristotle's practical syllogism. It's not a good idea to expect decision theory to do all of psychology.

    "So if by rejecting operationalism you mean that there is a unique "real" degree of belief that a subject has in a proposition which is theory independent, then JCOND doesn't cut the mustard"

    I don't know about theory independence or cutting mustard, but I'm not an operationalist, I endorse some form of weighted conditionalization, and Jeffrey's probability kinematics seems as good as any. All this stuff needs to be interpreted, though, and I'd interpret it somewhat differently than Jeffrey.

    I'm inclined to talk about degrees of certainty rather than degrees of belief. I don't know what beliefs are, much less degrees of them. I'm OK with defining belief in terms of degrees of belief since I reject the unrestricted validity of conjunction introduction, but I think I'm interested in something subtly different than orthodox Bayesians. I'm happy with taking beliefs or degrees of belief as basic and giving them contextual definitions by theorizing about them (see Weirich, _Decision Space: Multidimensional Utility Analysis_ (CUP, 2001), §1.4 "Empiricism in Decision Theory," pp. 26-40).

    ----------

    "On the other point, you are suggesting something like this: If it is possible to doubt p, but impossible to doubt q, then q must have a higher degree of belief that p so therefore p can't have unit credence."

    No, my statement wasn't modalized.

    "it is possible for me to doubt that the Thames runs through London even though I do not doubt it, and I am certain of it."

    Well, OK, but I don't think one *need* be, even if they have all the same experiences (sense impressions) as you. And, again, I was making a *logical* point that possibly, a humanly rational agent exists without any non-trivial certainties (and by non-trivial I mean non-logical and such).

    "To withold [sic] unit credence on such propositions because of the possibility of doubt is unmotivated because such possibilities of doubt will not show up in behaviour. "

    This is just where we disagree. You're some kind of behaviorist and I'm not. Belief and action are two different things and one is not defined in terms of the other. I almost said they were logically independent notions but "believing" entails "acting" and true actions entail beliefs, but that's OK for my position.

    Or rather I should say that such *notions* of belief seem to me unhelpful for most purposes (I think people's betting behavior *frequently* reveals their beliefs, so the notion can be a kind of useful fiction or heuristic, but as far as being a regimented precisification or explication of the ordinary language term I think it's a dead end).

    "I actually doubt neither, so surely it is best to give them both unit 1."

    That's what I was suggesting you do in that case. However, I was suggesting that one could also fail to do so. The discussion stemmed from your counterclaim that: "this still doesn't mean that anyone could get along without ANY certainty."

    You adverted along the way to various behavioral factors which I think are jejune. So I've yet to see any reason to believe your claim that:

    "Jeffrey's view has a big hole, and it is that you need certainty for probability."

    Tomorrow I leave for Alaska, so I'm afraid this is it for me. Thanks for the stimulating discussion.

     
  • At 11:29 AM, Anonymous Anonymous said…

    Mmm, "Jejune". There is a difference between behaviourism of the Skinner kind, and a resistence to the idea that there can be wildly different belief systems that terminate in identical behaviour. Of course what counts as behaviour is vague, but let it be broad, so let it include events only available to introspection. If two people are identical in every observable way, either through introspection or third party observation, then to say that there is still a wide range of incompatible beliefs systems that they could have is ontologically excessive.
    I know that you are off to Alaska, but it was just too thrilling to be called a jejune behaviourist for me to let it lie.

     
