From: stoneje@ucsu.colorado.edu (Stone Jeremiah Ethan)
Subject: Re: P vs. NP (with respect to Cook's paper on claymath)
Date: 22 Aug 00 23:26:19 GMT
Newsgroups: sci.math

There is a nice overview of P and NP at
http://www.csc.liv.ac.uk/~ped/teachadmin/algor/npcomp.html

-Jeremiah
--
____________________
Jeremiah Stone
stoneje@colorado.edu
--------------------

==============================================================================
From: David A Molnar
Subject: Re: Status of 'Is P=NP?'
Date: 30 Sep 2000 14:34:52 GMT
Newsgroups: sci.math

Rajarshi Ray wrote:
> What are the chances of solving the P/NP issue, say in the next 50
> years? Will it last that long? Or will the tremendous interest to solve
> it (as compared to FLT) break it before long?

I don't know what the chances are. A new idea of some kind seems to be
required. Your guess is as good as mine as to how long new ideas take.

> And can you explain, briefly, how "probabilistically checkable proofs"
> and "phase transitions" are advances towards solving P/NP?

"phase transitions" --

Some combinatorial decision problems are easy on small instances, but as
the size of the instance increases, the difficulty "jumps" suddenly to
intractable. On a graph it looks something like this:

   difficulty
            /
           /
       ---X
   -------------
   problem size

(but keeps curving upward, of course; I'm limited by ASCII art here).
The X is the point at which the difficulty begins to skyrocket. The term
"phase transition" is used to describe this phenomenon and is borrowed
from physics, where you see phase transitions such as ice turning to
water or water turning to water vapour.

Why do we care? If we knew what made the problem "to the left of the X"
comparatively easy, and "to the right of the X" comparatively hard, then
we just might get some insight into what "hardness" means, which in turn
would give insight into P ?= NP. Plus we might gain some insight into
how to generate hard instances of problems.
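[A toy experiment makes the picture concrete. The sketch below is
illustrative only -- the helper names, sizes, and trial counts are made
up for the demo; the empirical threshold for random 3-SAT is observed
near a clause-to-variable ratio of about 4.27. Brute force limits us to
tiny instances, but the drop in satisfiability around the threshold is
already visible.]

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    # A clause picks 3 distinct variables and negates each with prob 1/2;
    # literal +v means variable v, -v means its negation.
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    # Brute force over all 2^n assignments -- fine for toy sizes only.
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

if __name__ == "__main__":
    n, trials = 10, 30
    rng = random.Random(0)
    for ratio in (2.0, 3.5, 4.3, 5.5):
        m = int(ratio * n)
        sat = sum(satisfiable(random_3sat(n, m, rng), n)
                  for _ in range(trials))
        print(f"clauses/variables = {ratio}: {sat}/{trials} satisfiable")
```

(Real experiments also measure *solver effort*, not just the yes/no
answer; the hardest instances cluster near the crossover.)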
Generating such instances has applications in testing solving
algorithms/heuristics and in cryptography.

By the way, this is a gross oversimplification. There are several
different kinds of phase transitions we could speak of and observe in
solving problems. I don't know the field well enough to give an
overview.

I attended a talk over the summer on recent work by Christian Borgs and
Jennifer Chayes on the application of a technique called "finite-size
scaling" to these problems. An abstract for a similar talk can be found
here:
http://www.math.uiuc.edu/Bulletin/Abstracts/October/Oct25_99/borgs_oct26-99.html
I regret that I am unable to find a paper or more complete references at
this time. :-(

They achieved actual theoretical results about the existence and nature
of phase transitions in several combinatorial problems, including 2-SAT
(which is in P, but has a "more difficult" problem region) and some
problems in NP. This is a major advance over just running a bunch of
problem instances and graphing the runtimes.

Unfortunately, it seems that their results aren't preserved under normal
reductions. So at this point it seems as though each problem's phase
transitions must be tackled independently -- and these aren't trivial
results. Maybe one way out would be to define reductions which preserve
phase transitions, but it's not clear how to do that, nor whether you
could get anywhere with it even if you could. Also, the experience with
"special reductions" seems to be a little uneven. Sometimes you can
define a special reduction and then discover that you can't prove that
much of anything reduces to anything with it.

I can't evaluate whether phase transitions are going to solve P ?= NP.
Maybe. The results sound promising, but I'm not an expert by any means.

"probabilistically checkable proofs" --

We can consider the normal notion of NP to be a "proof system" in the
following sense:

  there is a prover P
    (we'll equivocate on P's power for now)
  there is a verifier V
    (V is modelled by a deterministic Turing Machine)

P wants to establish a proposition. V is sceptical. So P sends V a
proof. This proof has the property that it is "short" and checkable in
polynomial time. That means V has an algorithm to determine the
correctness of the proof and can run it on what P says. Then V accepts
or rejects depending on whether P said the right thing.

There is an adversarial situation here. P might be trying to convince V
of something which isn't true. At the same time P wants to be assured
that if what it's saying *is* true, then V will accept that truth.

This is a very limited version of what's called an "interactive proof":
two players P and V, one message from P to V, and no further
communication. (Maybe V tells P whether it accepts, which would make two
messages total.)

Probabilistically checkable proofs are related to the question: "What
happens if V is randomized?" That is, what if V is allowed to flip coins
and make random decisions, and further to have some small probability of
error in accepting? That question led to a massive amount of work on
randomized interactive proofs with various numbers of rounds and so on.
It turns out you can prove everything in PSPACE with such things.

But there's still the interaction. Probabilistically Checkable Proofs
can be thought of as asking "well, what if we just wrote down all P's
responses to all possible questions *in advance*?" This removes the
interaction, at the cost of an expanded proof size. Now V, instead of
interacting with some prover P, will just 'interact' with the proof on
its hard disk.

The crucial point here is that V doesn't need to read *all* of the
proof -- just those bits which are relevant for V's particular series of
questions this time around. So V can get away with "probabilistically
checking" parts of the proof: V flips random coins, discovers "oh, I
should check bits 10292, 3983, and 237795749" and need read no more.
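[The flavour of a randomized verifier -- always accept a true claim,
catch a false one with high probability from only a little work -- is
captured by a classic stand-alone example, Freivalds' check for matrix
products. It is not part of the PCP machinery itself; this sketch is
just for illustration.]

```python
import random

def freivalds_check(A, B, C, rounds=20, seed=0):
    """Randomized check of the claim "A times B equals C" for n x n
    integer matrices, in O(rounds * n^2) time -- cheaper than redoing
    the O(n^3) multiplication.  A true claim always passes; a false one
    survives a single round with probability at most 1/2, so `rounds`
    repetitions push the error below 2**-rounds."""
    n = len(A)
    rng = random.Random(seed)
    for _ in range(rounds):
        r = [rng.randrange(2) for _ in range(n)]   # random 0/1 vector
        # Compare A(Br) with Cr: two matrix-vector products per side.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # caught the claim being wrong at this r
    return True            # every spot check passed

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))  # True: the real product
print(freivalds_check(A, B, [[19, 22], [43, 51]]))  # almost surely False
```

(The same shape recurs in PCPs: cheap random queries, one-sided error,
amplification by repetition -- though the PCP constructions themselves
are far more involved.)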
Extremely cute techniques are used to ensure that if the proof is wrong,
then it is "wrong everywhere" -- it is impossible to create a proof of a
false assertion which has only a few wrong answers. If the assertion is
false, the proof will have wrong answers that V can notice almost
everywhere. This justifies V's checking only a few randomly selected
locations.

While these techniques may be cute, and their objective is fairly easy
to describe informally, the technical details are involved. VERY
involved. In fact, proving that these things actually do the magical
things they are supposed to do is serious work. One of the major open
problems in the area is to find a simpler way to do everything.

All of this builds up to the "PCP Theorem", which allows us to
characterize exactly how many queries and how much randomness we need to
prove languages in NP. I think the record is currently at 3 queries and
O(log n) randomness, due to Johan Hastad. (The *size* of the total
proof, on the other hand...)

Right, all well and good, but what does this have to do with P ?= NP?
Well, for one thing, the PCP Theorem tells us something about the
"difficulty" of NP when randomness is involved. We actually have a whole
range of things we can prove, depending on the number of queries and the
amount of randomness used.

For another, PCPs and interactive proofs bear some relation to the
"polynomial time hierarchy," which is what you get if you start asking
"well, what if my NP machine had an oracle for coNP? for NP?" It's known
that if P = NP, then the hierarchy collapses. So if the hierarchy does
not collapse, then P != NP -- and that's another angle of attack.

Oded Goldreich has a book with a short intro to PCP.
You could also see his notes at
http://www.wisdom.weizmann.ac.il/~oded/pps.html

You might also look at Sanjeev Arora's list of publications:
http://www.cs.princeton.edu/~arora/publist.html

Thanks,
-David

==============================================================================
From: djimenez@cs.utexas.edu (Daniel A. Jimenez)
Subject: Re: Status of 'Is P=NP?'
Date: 3 Oct 2000 08:56:21 -0500
Newsgroups: sci.math

In article , Russell Easterly wrote:
>
>Daniel A. Jimenez wrote in message
>news:8rad4s$2fc$1@mamba.cs.utexas.edu...
>> Determining the output of a Boolean circuit, given values for the inputs,
>> takes time linear in the size of the circuit. It's an interesting problem,
>> since it is P-complete (which roughly means it's hard to parallelize),
>> but it doesn't say much about P vs. NP. It's not quite clear what you're
>> asking here, but to prove P=NP using SAT you need to prove that there
>> exists an algorithm with a time polynomial in the size of the circuit
>> that determines whether there is any combination of input values causing
>> the circuit to output "true."
>
>Proving that it can be solved in polynomial time with respect
>to the size of the circuit is not the problem.

Actually, that is precisely the problem of P ?= NP.

>The problem that IS in NP is to find a "certificate"
>that can be checked in polynomial time for validity.

But this doesn't say anything about P vs. NP. I can spend exponential
time to find such a certificate, then check it in polynomial time, but
that still doesn't mean P=NP.

>For the SAT problem the certificate is an assignment
>of variables that make the Boolean circuit true
>(it is assumed that such an assignment exists).
>In theory, this assignment can be validated in
>polynomial time.
>
>The input to the "validator" is the assignment of variables.
>So the problem is NP with respect to the number of variables.
>
>The actual size of the circuit doesn't change the problem much.
>Consider the expression:
>
>A and A or A and A and A or A and A and A and A = true.
>
>We know that the answer is either A or NOT(A).
>This is true of any circuit with only one variable.

Certainly there are easy instances of SAT. Monotone circuits, Horn
formulas, 2-CNF-SAT, CNF formulas for which the ratio of clauses to
variables is very low or very high, etc. are all easy instances of SAT
(I'm talking about Boolean formulas here, but there is a trivial
correspondence to circuit-SAT). Nevertheless, the only metric that
matters in terms of NP is the size of (a reasonable string encoding of)
the circuit. This is because a language L is in NP iff any instance x of
L can be decided nondeterministically in time polynomial in |x| (for a
fixed polynomial for L).

See the comp.theory FAQ section on P and NP at
http://www.cs.unb.ca/~alopez-o/comp-faq/faq.html
for more info.
--
Daniel Jimenez                      djimenez@cs.utexas.edu
"I've so much music in my head" -- Maurice Ravel, shortly before his death.
" " -- John Cage
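[The "easy instances" mentioned above really are polynomial. For
2-CNF-SAT, the standard trick builds the implication graph -- clause
(a or b) yields edges (not a -> b) and (not b -> a) -- and the formula
is satisfiable iff no variable shares a strongly connected component
with its negation. The sketch below is illustrative, not from the
thread; it uses Kosaraju's SCC algorithm and the same signed-integer
literal convention as above.]

```python
from collections import defaultdict

def two_sat(n_vars, clauses):
    """Decide satisfiability of a 2-CNF formula in linear time.
    `clauses` is a list of pairs of signed ints: +v is variable v,
    -v is its negation."""
    def node(l):  # literal -> graph node: 2v-2 positive, 2v-1 negative
        return 2 * (abs(l) - 1) + (0 if l > 0 else 1)
    N = 2 * n_vars
    g, gr = defaultdict(list), defaultdict(list)
    for a, b in clauses:  # (a or b) == (not a -> b) and (not b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            g[u].append(v)
            gr[v].append(u)
    # Kosaraju pass 1: record finishing order with an iterative DFS.
    seen, order = [False] * N, []
    def dfs1(u):
        seen[u] = True
        stack = [(u, iter(g[u]))]
        while stack:
            x, it = stack[-1]
            for y in it:
                if not seen[y]:
                    seen[y] = True
                    stack.append((y, iter(g[y])))
                    break
            else:
                order.append(x)
                stack.pop()
    for u in range(N):
        if not seen[u]:
            dfs1(u)
    # Pass 2: label SCCs on the reverse graph in reverse finish order.
    comp, c = [-1] * N, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack = [u]
            comp[u] = c
            while stack:
                x = stack.pop()
                for y in gr[x]:
                    if comp[y] == -1:
                        comp[y] = c
                        stack.append(y)
            c += 1
    # Satisfiable iff no variable is forced to equal its own negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): x1 = x2 = True works.
print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))             # True
# Adding (not x1 or not x2) rules out all four assignments.
print(two_sat(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)]))   # False
```

(Contrast with 3-CNF-SAT, where no polynomial algorithm is known and
the problem is NP-complete.)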