From: grubb@math.niu.edu (Daniel Grubb)
Subject: Re: CH and second order set theory
Date: 28 Jul 2001 03:49:12 GMT
Newsgroups: sci.math,sci.logic
Summary: Comparing cumulative hierarchy and other models of set theory
References: <3b6178a4.2919622@nntp.sprynet.com>

>Well, first I was not aware that you were taking "set in the
>cumulative hierarchy" as the definition of "set".  (I'm not
>much of a logician - is this the same as assuming that V = L?
>Not everyone agrees that V = L is "true" in any absolute sense...)

These are not the same. V=L is a *very* restrictive assumption, which
says that every set is definable in a very strict sense. It does not
get around the problems with the concept of the cumulative hierarchy,
though, which is defined as follows:

   V_0 = \emptyset
   V_{\alpha+1} = P(V_\alpha), where \alpha is an ordinal and P(X) is
                  the power set of X
   V_\alpha = \bigcup_{\beta < \alpha} V_\beta, when \alpha is a limit
                  ordinal

Sets are then those things that are in V_\alpha for some ordinal
\alpha.

The problems here are manifold. The first is that we have to know what
the empty set is and agree on it. Not a big problem, perhaps, but it
is there. A much greater difficulty is what P(X) means. If our
original difficulty is what it means to be a set, then knowing the
meaning of P(X) is at least as problematic. In particular, we *know*
that different models of set theory disagree on exactly what P(X)
should mean, so we simply cannot expect any two people to agree on
what it means. Next, I don't know what it means to be an ordinal until
I know something about sets, so once again the intuition leads to a
circular definition of 'set'.

It seems much better just to leave the concept 'set' undefined, assume
whichever axioms we need at the meta-level, and continue from there
when we develop logic.

In fact, we can go further. In any model of ZF, every set is in the
above cumulative hierarchy somewhere (this is essentially the
foundation axiom), with the hierarchy as defined in that model.
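As a sketch of the last point, in standard notation (mine, not from the post): in ZF, foundation is what guarantees that the hierarchy exhausts the universe, and it lets each set be assigned an ordinal rank.

```latex
% Cumulative hierarchy, by transfinite recursion on the ordinals:
V_0 = \emptyset, \qquad
V_{\alpha+1} = P(V_\alpha), \qquad
V_\lambda = \bigcup_{\beta < \lambda} V_\beta \text{ for limit } \lambda.

% Given the other ZF axioms, foundation is equivalent to the claim
% that every set appears somewhere in the hierarchy:
\forall x \,\exists \alpha \; (x \in V_\alpha),
\qquad
\operatorname{rank}(x) = \text{the least } \alpha \text{ with } x \in V_{\alpha+1}.
```

Note that the recursion is carried out *inside* a given model, so both P and the ordinals are the model's own, which is exactly the circularity the post is pointing at.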
Thus, this intuition about sets *cannot* uniquely distinguish a
certain model that we should call the 'real sets'. Any model *at all*
of ZF would satisfy the criteria of this intuition when P(X) is
defined *in that model*.

In actual practice, this intuition of the cumulative hierarchy is
extended to include other intuitions about how anything that could
happen at the class level already happens at the set level. This leads
to axioms for large cardinals, etc., where, say, a strongly
inaccessible cardinal *could* simply be a proper class, but we assume
it is actually a set. The problem here is that we may find ourselves
assuming the existence of cardinals that give contradictions (it has
happened). On the other hand, we *do* find that large cardinal
assumptions 'line up' in a linear order when we consider whether the
existence of one cardinal guarantees the existence of a model of ZF
with a different cardinal. At least, this has been the case so far.

What I find confusing is that people accept all this intuition about
large cardinals, etc., but don't seem to like the simplifications of
cardinal arithmetic that the Generalized Continuum Hypothesis offers.
I, for one, would *much* rather give up measurable cardinals than the
ease of computing cardinal powers. In what I've seen, measurable
cardinals have only made good theorems awkward to state: too many
results need the additional hypothesis "as long as card(X) is not a
measurable cardinal....". Perhaps it is different in other areas of
math.

Partially for this reason, I am curious whether any of the proposed
large cardinal axioms affect CH or GCH. I am much more interested in
GCH, actually. I would almost regard a large cardinal implying
something against GCH as an argument against that cardinal.
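To make the "ease of computing cardinal powers" concrete, here is the standard closed form for cardinal exponentiation under GCH (a textbook fact, not stated in the post; \operatorname{cf} denotes cofinality, \kappa^+ the cardinal successor):

```latex
% GCH: for every ordinal \alpha,
2^{\aleph_\alpha} = \aleph_{\alpha+1}.

% Under GCH, for infinite cardinals \kappa and \lambda:
\kappa^\lambda =
\begin{cases}
\kappa      & \text{if } \lambda < \operatorname{cf}(\kappa), \\
\kappa^{+}  & \text{if } \operatorname{cf}(\kappa) \le \lambda \le \kappa, \\
\lambda^{+} & \text{if } \kappa < \lambda.
\end{cases}
```

So under GCH every cardinal power is computable by inspection, which is the simplification being weighed against the large cardinal axioms.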
YMMV

--Dan Grubb

==============================================================================
From: Mike Oliver
Subject: Re: CH and second order set theory
Date: Tue, 31 Jul 2001 13:21:45 -0700
Newsgroups: sci.math,sci.logic

Daniel Grubb wrote:
> [Mike Oliver wrote:]
>> That doesn't mean that we have to give equal
>> weight to every model's opinion. Suppose we have two competing models,
>> M and N, and they agree on everything up to the rank of X. However
>> N has subsets of X that M doesn't have, whereas every subset of X that's
>> in M is also in N.
>>
>> Which of these models is "more correct" about what P(X) is? Obviously,
>> N is more correct, because it knows about more subsets of X.
>
> At first blush, I would be inclined to agree. However, we know that
> there are models where the power set of the naturals is as large as
> we want.

Not quite. Suppose for the sake of argument that CH is really true.
Then there are no transitive models at all which assign to |P(N)| the
*real* Aleph_2, or anything bigger than the real Aleph_1. There *are*
transitive models that *think* |P(N)| = Aleph_2, but only because they
have the wrong Aleph_2: their Aleph_2 is most likely some countable
ordinal. The reason the model can't tell that the value is a countable
ordinal is that it doesn't have *enough* subsets of the naturals; in
particular, it doesn't have one that codes a wellorder of N with
length equal to the model's Aleph_2.

> This suggests to me that going for the largest isn't a good
> idea; you will get that the power set of the naturals is a proper class,
> not a set at all.

There are some who take that view. I don't find it a terribly useful
approach in most contexts.

> The alternative seems to be to go for the *smallest*,
> which leads to GCH.

It's really not a case where you get to choose. The commonly agreed
narrative is that P(X) contains *all* subsets of X. If you're not
going by that motivation, then you're talking about something else.
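The "coding" step in this argument can be spelled out; the following is a sketch in standard notation (mine, not Oliver's). Any well-ordering of N is captured by a single subset of N via a pairing function, so a transitive model that lacks the relevant subset cannot see that its Aleph_2 is really countable:

```latex
% Fix a pairing bijection p : N x N -> N, e.g. Cantor's pairing:
p(m,n) = \frac{(m+n)(m+n+1)}{2} + n.

% A well-ordering \prec of N of order type \alpha is coded by the set
A_{\prec} = \{\, p(m,n) : m \prec n \,\} \subseteq \mathbb{N}.

% If the model's Aleph_2 is really a countable ordinal \alpha, then in V
% there is a well-ordering of N of type \alpha, hence a code A_{\prec}.
% A transitive model omitting A_{\prec} (and every such code) has no
% witness that \alpha is countable, so it counts \alpha as a cardinal.
```

This is the sense in which a model with too few subsets of N gets the "wrong" Aleph_2.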
And in fact, for a fairly natural meaning of "smallest", it's easy to
identify what you're talking about: you're talking about L. L sits
very nicely, transitively and absolutely definably, inside V, which
makes it quite convenient to rephrase anything you want to derive from
the "smallest" motivation by saying "such and such is true restricted
to L".