NSA: Possibly breaking US laws, but still bound by laws of computational complexity

Update (Sept. 9): Reading more about these things, and talking to friends who are experts in applied cryptography, has caused me to do the unthinkable, and change my mind somewhat.  I now feel that, while the views expressed in this post were OK as far as they went, they failed to do justice to the … complexity (har, har) of what’s at stake.  Most importantly, I didn’t clearly explain that there’s an enormous continuum between, on the one hand, a full break of RSA or Diffie-Hellman (which still seems extremely unlikely to me), and on the other, “pure side-channel attacks” involving no new cryptanalytic ideas.  Along that continuum, there are many plausible places where the NSA might be.  For example, imagine that they had a combination of side-channel attacks, novel algorithmic advances, and sheer computing power that enabled them to factor, let’s say, ten 2048-bit RSA keys every year.  In such a case, it would still make perfect sense that they’d want to insert backdoors into software, sneak vulnerabilities into the standards, and do whatever else it took to minimize their need to resort to such expensive attacks.  But the possibility of number-theoretic advances well beyond what the open world knows certainly wouldn’t be ruled out.  Also, as Schneier has emphasized, the fact that NSA has been aggressively pushing elliptic-curve cryptography in recent years invites the obvious speculation that they know something about ECC that the rest of us don’t.

And that brings me to a final irony in this story.  When a simpleminded complexity theorist like me hears his crypto friends going on and on about the latest clever attack that still requires exponential time, but that puts some of the keys in current use just within reach of gigantic computing clusters, his first instinct is to pound the table and shout: “Well then, so why not just increase all your key sizes by a factor of ten?  Sweet Jesus, the asymptotics are on your side!  If you saw a killer attack dog on a leash, would you position yourself just outside what you guesstimated to be the leash’s radius?  Why not walk a mile away, if you can?”  The crypto experts invariably reply that it’s a lot more complicated than I realize, because standards, and efficiency, and smartphones … and before long I give up and admit that I’m way out of my depth.

So it’s amusing that one obvious response to the recent NSA revelations—a response that sufficiently-paranoid people, organizations, and governments might well actually take, in practice—precisely matches the naïve complexity-theorist intuition.  Just increase the damn key sizes by a factor of ten (or whatever).
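
To make the asymptotic point concrete, here’s a back-of-the-envelope sketch of mine (not anything from the NSA documents): it plugs key sizes into the standard heuristic cost formula for the general number field sieve, L[1/3] with constant ≈1.923, ignoring the o(1) term, so treat the outputs as rough scales rather than predictions.

    import math

    def gnfs_cost_bits(modulus_bits, c=1.923):
        """Heuristic GNFS work factor, in bits of work, for an n-bit RSA
        modulus: exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)), o(1) ignored."""
        ln_N = modulus_bits * math.log(2)
        nats = c * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3)
        return nats / math.log(2)  # convert nats of work to bits of work

    for bits in (1024, 2048, 4096, 20480):
        print(bits, round(gnfs_cost_bits(bits)))

On these numbers, going from a 2048-bit modulus to a 20480-bit one pushes the heuristic attack cost from roughly 2^117 to roughly 2^303: the “walk a mile away from the dog” option.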

Another Update (Sept. 20): In my original posting, I should also have linked to Matthew Green’s excellent post.  My bad.


Last week, I got an email from a journalist with the following inquiry.  The recent Snowden revelations, which made public for the first time the US government’s “black budget,” contained the following enigmatic line from the Director of National Intelligence: “We are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.”  So, the journalist wanted to know, what could these “groundbreaking” capabilities be?  And in particular, was it possible that the NSA was buying quantum computers from D-Wave, and using them to run Shor’s algorithm to break the RSA cryptosystem?

I replied that, yes, that’s “possible,” but only in the same sense that it’s “possible” that the NSA is using the Easter Bunny for the same purpose.  (For one thing, D-Wave themselves have said repeatedly that they have no interest in Shor’s algorithm or factoring.  Admittedly, I guess that’s what D-Wave would say, were they making deals with NSA on the sly!  But it’s also what the Easter Bunny would say.)  More generally, I said that if the open scientific world’s understanding is anywhere close to correct, then quantum computing might someday become a practical threat to cryptographic security, but it isn’t one yet.

That, of course, raised the extremely interesting question of what “groundbreaking capabilities” the Director of National Intelligence was referring to.  I said my personal guess was that, with ~99% probability, he meant various implementation vulnerabilities and side-channel attacks—the sort of thing that we know has compromised deployed cryptosystems many times in the past, but where it’s very easy to believe that the NSA is ahead of the open world.  With ~1% probability, I guessed, the NSA made some sort of big improvement in classical algorithms for factoring, discrete log, or other number-theoretic problems.  (I would’ve guessed even less than 1% probability for the latter, before the recent breakthrough by Joux solving discrete log in fields of small characteristic in quasipolynomial time.)

Then, on Thursday, a big New York Times article appeared, based on 50,000 or so documents that Snowden leaked to the Guardian and that still aren’t public.  (See also an important Guardian piece by security expert Bruce Schneier, and accompanying Q&A.)  While a lot remains vague, there might be more public information right now about current NSA cryptanalytic capabilities than there’s ever been.

So, how did my uninformed, armchair guesses fare?  It’s only halfway into the NYT article that we start getting some hints:

The files show that the agency is still stymied by some encryption, as Mr. Snowden suggested in a question-and-answer session on The Guardian’s Web site in June.

“Properly implemented strong crypto systems are one of the few things that you can rely on,” he said, though cautioning that the N.S.A. often bypasses the encryption altogether by targeting the computers at one end or the other and grabbing text before it is encrypted or after it is decrypted…

Because strong encryption can be so effective, classified N.S.A. documents make clear, the agency’s success depends on working with Internet companies — by getting their voluntary collaboration, forcing their cooperation with court orders or surreptitiously stealing their encryption keys or altering their software or hardware…

Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology and later by the International Organization for Standardization, which has 163 countries as members.

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

So, in pointing to implementation vulnerabilities as the most likely possibility for an NSA “breakthrough,” I might have actually erred a bit too far on the side of technological interestingness.  It seems that a large part of what the NSA has been doing has simply been strong-arming Internet companies and standards bodies into giving it backdoors.  To put it bluntly: sure, if it wants to, the NSA can probably read your email.  But that isn’t mathematical cryptography’s fault—any more than it would be mathematical crypto’s fault if goons broke into your house and carted away your laptop.  On the contrary, properly-implemented, backdoor-less strong crypto is something that apparently scares the NSA enough that they go to some lengths to keep it from being widely used.

I should add that, regardless of how NSA collects all the private information it does—by “beating crypto in a fair fight” (!) or, more likely, by exploiting backdoors that it itself installed—the mere fact that it collects so much is of course unsettling enough from a civil-liberties perspective.  So I’m glad that the Snowden revelations have sparked a public debate in the US about how much surveillance we as a society want (i.e., “the balance between preventing 9/11 and preventing Orwell”), what safeguards are in place to prevent abuses, and whether those safeguards actually work.  Such a public debate is essential if we’re serious about calling ourselves a democracy.

At the same time, to me, perhaps the most shocking feature of the Snowden revelations is just how unshocking they’ve been.  So far, I haven’t seen anything that shows the extent of NSA’s surveillance to be greater than what I would’ve considered plausible a priori.  Indeed, the following could serve as a one-sentence summary of what we’ve learned from Snowden:

Yes, the NSA is, in fact, doing the questionable things that anyone not living in a cave had long assumed they were doing—that assumption being so ingrained in nerd culture that countless jokes are based around it.

(Come to think of it, people living in caves might have been even more certain that the NSA was doing those things.  Maybe that’s why they moved to caves.)

So, rather than dwelling on civil liberties, national security, yadda yadda yadda, let me move on to discuss the implications of the Snowden revelations for something that really matters: a 6-year-old storm in theoretical computer science’s academic teacup.  As many readers of this blog might know, Neal Koblitz—a respected mathematician and pioneer of elliptic curve cryptography, who (from numerous allusions in his writings) appears to have some connections at the NSA (on reflection, this is unfair to Koblitz; he does report conversations with NSA people in his writings, but has never had any financial connection with NSA)—published a series of scathing articles, in the Notices of the American Mathematical Society and elsewhere, attacking the theoretical computer science approach to cryptography.  Koblitz’s criticisms were varied and entertainingly-expressed: the computer scientists are too sloppy, deadline-driven, self-promoting, and corporate-influenced; overly trusting of so-called “security proofs” (a term they shouldn’t even use, given how many errors and exaggerated claims they make); absurdly overreliant on asymptotic analysis; “bodacious” in introducing dubious new hardness assumptions that they then declare to be “standard”; and woefully out of touch with cryptographic realities.  Koblitz seemed to suggest that, rather than demanding the security reductions so beloved by theoretical computer scientists, people would do better to rest the security of their cryptosystems on two alternative pillars: first, standards set by organizations like the NSA with actual real-world experience; and second, the judgments of mathematicians with … taste and experience, who can just see what’s likely to be vulnerable and what isn’t.

Back in 2007, my mathematician friend Greg Kuperberg pointed out the irony to me: here we had a mathematician, lambasting computer scientists for trying to do for cryptography what mathematics itself has sought to do for everything since Euclid!  That is, when you see an unruly mess of insights, related to each other in some tangled way, systematize and organize it.  Turn the tangle into a hierarchical tree (or dag).  Isolate the minimal assumptions (one-way functions?  decisional Diffie-Hellman?) on which each conclusion can be based, and spell out all the logical steps needed to get from here to there—even if the steps seem obvious or boring.  Any time anyone has tried to do that, it’s been easy for the natives of the unruly wilderness to laugh at the systematizing newcomers: the latter often know the terrain less well, and take ten times as long to reach conclusions that are ten times less interesting.  And yet, in case after case, the clarity and rigor of the systematizing approach has eventually won out.  So it seems weird for a mathematician, of all people, to bet against the systematizing approach when applied to cryptography.

The reason I’m dredging up this old dispute now, is that I think the recent NSA revelations might put it in a slightly new light.  In his article—whose main purpose is to offer practical advice on how to safeguard one’s communications against eavesdropping by NSA or others—Bruce Schneier offers the following tip:

Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.

Here Schneier is pointing out a specific issue with ECC, which would be solved if we could “merely” ensure that NSA or other interested parties weren’t providing input into which elliptic curves to use.  But I think there’s also a broader issue: that, in cryptography, it’s unwise to trust any standard because of the prestige, real-world experience, mathematical good taste, or whatever else of the people or organizations proposing it.  What was long a plausible conjecture—that the NSA covertly influences cryptographic standards to give itself backdoors, and that otherwise-inexplicable vulnerabilities in deployed cryptosystems are sometimes there because the NSA wanted them there—now looks close to an established fact.  In cryptography, then, it’s not just for idle academic reasons that you’d like a publicly-available trail of research papers and source code, open to criticism and improvement by anyone, that takes you all the way from the presumed hardness of an underlying mathematical problem to the security of your system under whichever class of attacks is relevant to you.

Schneier’s final piece of advice is this: “Trust the math.  Encryption is your friend.”

“Trust the math.”  On that note, here’s a slightly-embarrassing confession.  When I’m watching a suspense movie (or a TV show like Homeland), and I reach one of those nail-biting scenes where the protagonist discovers that everything she ever believed is a lie, I sometimes mentally recite the proof of the Karp-Lipton Theorem.  It always calms me down.  Even if the entire universe turned out to be a cruel illusion, it would still be the case that NP ⊂ P/poly collapses the polynomial hierarchy, and I can tell you exactly why.  It would likewise be the case that you couldn’t break the GGM pseudorandom function without also breaking the underlying pseudorandom generator on which it’s based.  Math could be defined as that which can still be trusted, even when you can’t trust anything else.
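
For readers who’d like to recite along, here is the statement and a compressed version of the argument, as it’s standardly presented (details vary by textbook):

    \textbf{Theorem (Karp--Lipton).} If $\mathrm{NP} \subseteq \mathrm{P/poly}$, then $\mathrm{PH} = \Sigma_2^p$.

    \textit{Proof sketch.} Suppose SAT has polynomial-size circuits.  By the
    self-reducibility of SAT, such circuits can be converted into
    polynomial-size circuits that output a satisfying assignment whenever one
    exists.  Now take any $\Pi_2^p$ statement $\forall x\, \exists y\,
    \varphi(x,y)$, with $\varphi$ polynomial-time.  In $\Sigma_2^p$,
    existentially guess a witness-producing circuit $C$, then universally
    verify, for every $x$, that $y = C(x)$ satisfies $\varphi(x,y)$.  Hence
    $\Pi_2^p \subseteq \Sigma_2^p$, which collapses the hierarchy to the
    second level. $\square$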

122 Responses to “NSA: Possibly breaking US laws, but still bound by laws of computational complexity”

  1. Aaronson on crypto. Schneier “elliptic-curve systems; the latter have constants that the NSA influences when they can.” | Gordon's shares Says:

    […] Link. Trust math, but not NSA mathematicians. […]

  2. Douglas Knight Says:

    Could you be more specific about what you mean by the hypothetical “big improvement” on number theory algorithms that is covered by your 1%?

    Do elliptic curve algorithms count? Does an L(1/4) algorithm count, or only quasi-polynomial? What if they can’t break all instances, but, as has repeatedly happened, they discovered bad primes or bad exponents that make particular keys weak? Breaking a random half of all keys is almost as good as breaking all of them. Schneier’s condemnation of ECC seems to require more than 1% chance NSA knows something special about ECC.

    PS – David Jao, commenting on Schneier’s blog, says that we can and do use cryptography to prevent the NSA from meddling with mystery constants. He says that the ECC standard curves are generated by SHA-1, so to meddle, the NSA would have to break the hash function. (But if half of all curves are bad, that’s easy.)
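
    (To make Jao’s point concrete: the NIST curves come with published seeds, and anyone can re-derive the curve coefficient from the seed. The sketch below compresses the actual derivation in ANSI X9.62 / FIPS 186, which chains several SHA-1 calls and masks certain bits, into a single hash call, so read it as the shape of the idea rather than as the standard itself:)

        import hashlib

        def coefficient_from_seed(seed: bytes, p: int) -> int:
            # Expand the published seed with SHA-1 and reduce mod p.
            # (The real procedure chains SHA-1 over several counter blocks
            # and masks the top bit; this one-call version is a toy.)
            return int.from_bytes(hashlib.sha1(seed).digest(), "big") % p

        # Anyone can recompute this from the published seed, so planting one
        # *specific* weak curve would require inverting SHA-1.  But, as noted
        # above, if a large fraction of curves were weak, trying random seeds
        # until a weak curve appeared would be easy.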

  3. Anonymous Says:

    You are making good and interesting points. However, Koblitz also has some valid criticisms of TCS even if his conclusions are not valid. The mathematical models we build in TCS are useless if they don’t relate to practice; we know many of our standard models are not good enough approximations of reality, and arguably there isn’t enough effort to deal with these issues. Technical heavy lifting is used as the ultimate criterion for judging the value of research projects inside the community.

    Also, I think you are exaggerating what most cryptographers expected the NSA was doing. I have heard that several famous crypto experts were quite surprised by these revelations, and that it has shaken their trust in government institutions. I never understood why some people presume that government is a benevolent entity; such belief in government institutions seems like ideology to me.

  4. Daniel Armak Says:

    You can trust the math itself, and so can Bruce Schneier and a few tens of thousands of other people. But everyone else who can’t grok the entire mathematical argument for each cryptographic system, or doesn’t want to spend a long time studying it, must trust the word of people like you. And since the NSA can and does subvert people like you, who do original work and analyze others’ work and sit on standards committees, not to mention the programmers who implement it in code, what are we to do?

  5. Daniel W. Says:

    In my mind, the best circumstantial evidence that the NSA has not practically broken any of the major cryptosystems is the following: if they had, they would most likely keep this a highly guarded secret, to be used only against high-value targets, rather than as a means of monitoring potential terrorists. It would most likely be contained within a small circle, and not mentioned in PowerPoint presentations to low-level analysts.

    Of course, the above argument may be flawed in ascribing too high a level of competence to the NSA.

  6. T H Ray Says:

    Scott,

    ” … the clarity and rigor of the systematizing approach has eventually won out.”

    No doubt. In Euclid’s time as well as the present, though, it is helpful to have something to systematize. Making that assumption available and convenient is what mathematicians do.

  7. Scott Says:

    Daniel Armak #4:

      You can trust the math itself, and so can Bruce Schneier and a few tens of thousands of other people. But everyone else … must trust the word of people like you.

    You raise an excellent point, which I think applies even more broadly than you say. For one thing, I merely understand some of the general ideas: I haven’t gone through every detail of the math used by the crypto in my web browser, and I dare say that most professional cryptographers haven’t either.

    For another, the point is much broader than cryptography: how can you trust quantum mechanics, if you haven’t done the requisite experiments yourself? The physicists could’ve all been bought off by some anti-realist cabal. 🙂 Or how can you trust that the government isn’t putting mind-control drugs into the fruit you buy in the supermarket, etc. etc.

    So we’re extremely lucky that science hit on a solution to these problems—the only workable solution, really—back in the 17th century. The solution is to open up every question to scrutiny, discussion, and challenge by any interested person. Assertions gain credibility by surviving public criticism—and that’s just as true in math as it is in experimental sciences. I believe many theorems even though I haven’t checked the proofs myself, because I know that if there were an error, then someone else could’ve made a name for themselves by finding it.

    Now, for this Popperian dynamic to work, the whole process has to be carried out in the open: if I thought someone who found a fatal flaw in a proof would only tell their friends, then that doesn’t do me any good. That’s why the dividing line between “crypto as black art” and “modern crypto” happened precisely when new discoveries started being published in the open literature, rather than being filed in a drawer at NSA or GCHQ.

  8. wolfgang Says:

    Unfortunately, this xkcd.com/538/ had it right imho.

  9. Scott Says:

    Daniel W. #5: If the NSA had indeed completely broken strong cryptosystems, then why would they resort to so many covert tactics (or, in the case of the Clipper Chip, overt attempts) to prevent people from using any strong crypto that lacks an NSA backdoor? I suppose it’s all elaborate psychological warfare, to prevent us from discovering that these cryptosystems were broken? And that even Snowden himself is part of the NSA’s master plan? 🙂

    At least in my book, every time you claim that what looks on its face like evidence for X, is really evidence for a powerful cabal trying to prevent everyone from discovering not(X), the plausibility of your theory gets cut by a factor of maybe 50,000. This is directly related to the fact that I don’t believe any conspiracy theories—as in zero, not one.

  10. Scott Says:

    Douglas Knight #2: Sure, dramatic improvements in elliptic-curve algorithms would certainly count—as would “merely” subexponential algorithms, were the improvements large enough to threaten key sizes that the academic cryptographers considered safe.

    More broadly, though, you’re entirely right that there’s not a sharp line between “improved number-theory algorithms” and “implementation vulnerabilities.” Often, what’s happened in practice is that an implementation vulnerability has opened the way for an attack that still requires interesting and nontrivial number theory. But I suppose that sort of thing would still belong to the “99%” part of my probability estimate. In the “1%” part, I really had in mind “something that would give theoretical cryptographers a heart attack” (like, I dunno, factoring in L(1/10), or the general elliptic curve discrete log problem in polynomial or quasipolynomial time).

  11. Scott Says:

    Anonymous #3:

      You are making good and interesting points. However, Koblitz also has some valid criticisms of TCS even if his conclusions are not valid.

    I completely agree that Koblitz has some valid criticisms.

    However, I’ve read pretty much all of his and Menezes’s anti-TCS screeds, and to me what he’s doing seems, if you like, too easy to be helpful. Koblitz’s favorite M.O. is to recount various slip-ups by people in the “Goldreich school of crypto” and laugh at them: “haha, they talk about ‘provable security,’ but there was a bug in their proof! or their security definition left out an important class of side-channel attacks!” Then, with even more glee, Koblitz relates how the hapless computer scientists put out a new paper supposedly fixing the problem, but that paper had its own problems, and so on.

    The trouble is, that is indeed what a bunch of incompetent buffoons would look like, but it’s also what science looks like! 🙂 Koblitz never seems to want to acknowledge that the end result of the process is better scientific understanding and more secure cryptosystems than before (even if still not perfect).

    Also, of course, Koblitz almost defiantly refuses to suggest any better mathematical foundations for cryptography, besides the reduction-based foundations that were built up over the last 30 years. I.e., it’s not that instead of adaptive chosen ciphertext attack, he has a better definition to propose, or that instead of “bodacious” new hardness assumptions, he can give a single assumption that suffices for everything. Instead, what he appears to want is simply a return to the “black art” era of cryptography, when security arguments boiled down to “we tried to break it and failed” or “trust us, we have better mathematical taste than you.”

    The trouble is, I can’t think of a single case in the history of science when mathematical foundations as well-developed as cryptography’s now are, were simply abandoned wholesale without better mathematical foundations to replace them. So intellectually, Koblitz strikes me as someone who’s throwing spears at battle-tanks. Being the excellent marksman that he is, he actually scores some hits! But the reduction-encrusted battle-tanks are still going to win eventually.

      The mathematical models we build in TCS are useless if they don’t relate to practice; we know many of our standard models are not good enough approximations of reality, and arguably there isn’t enough effort to deal with these issues.

    Would one also say that the mathematical foundations of topology—open sets, Urysohn’s Lemma, etc.—are useless if they don’t relate to the practice of tying and untying knots? I think that’s a pretty close analogy for the relationship between what, say, Goldreich or Goldwasser or Micali do, and the actual practice of cryptography. In both cases, yes, there’s some relation between the intellectual foundations on the bottom and the beautiful ornaments on top, but not surprisingly there are many floors in between. Starting from a one-way function, for example, you first have to construct a quasi-regular one-way function, then a pseudoentropy generator, then a pseudorandom generator, then a pseudorandom function, and then maybe you can start to think about building (say) a rudimentary private-key cryptosystem or signature scheme.
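
    To give a flavor of one of those intermediate floors, here is a toy version of the Goldreich–Goldwasser–Micali construction of a pseudorandom function from a length-doubling pseudorandom generator. (SHA-256 stands in for the PRG purely so the sketch runs; the actual theorem assumes a genuine PRG, which SHA-256 is not known to be.)

        import hashlib

        def prg(seed: bytes) -> bytes:
            # Toy length-doubling "PRG": 32 bytes in, 64 bytes out.
            return (hashlib.sha256(seed + b"L").digest() +
                    hashlib.sha256(seed + b"R").digest())

        def ggm_prf(key: bytes, x: str) -> bytes:
            # GGM: walk down a binary tree, keeping the left or right half
            # of the PRG's output according to each bit of the input.
            s = key
            for bit in x:  # x is a bitstring such as "0110"
                out = prg(s)
                s = out[:32] if bit == "0" else out[32:]
            return s

        print(ggm_prf(b"\x00" * 32, "0110").hex())

    The accompanying reduction is exactly the kind of thing Koblitz derides: any distinguisher against the PRF can be converted into a distinguisher against the underlying PRG, with a quantitative loss you can read off from the hybrid argument.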

      Also, I think you are exaggerating what most cryptographers expected the NSA was doing. I have heard that several famous crypto experts were quite surprised by these revelations, and that it has shaken their trust in government institutions. I never understood why some people presume that government is a benevolent entity; such belief in government institutions seems like ideology to me.

    My situation is different: I never had any real doubt that NSA was doing such things; the thing I genuinely don’t know is whether they have good reasons to be doing them. I consider it conceivable that the NSA has indeed stopped many terrorist attacks or other international disasters that we never hear about—in which case, the strongest case in their favor might be stronger than the strongest case that can ever be made publicly. The fact that President Obama, who’s so reasonable on so many issues, has implied as much is evidence for that view from my perspective. On the other hand, I also consider it conceivable that the current eavesdropping regime is purely a result of the universal tendency of bureaucracies to expand, justify themselves, and zealously guard their power and privileges. Or it could be some combination of the two.

    For me, though, the deciding consideration is that, even in a fantasy world where the NSA’s actions had always been 100% justified, I’d still want them to be more accountable to the public than they are now. “Trust that we have our reasons, even though we can’t tell you what they are” simply doesn’t work over the long term in a democracy, even if the trust is justified at any particular time or in any particular case (and of course, often it hasn’t been).

  12. Anonymous Says:

    I agree with you that his attitude is not constructive criticism. I would even go further than you and say it would be stupid to forget the science of crypto and go back to treating it as a purely engineering art.

    Regarding the reasonability of what the NSA does: the NSA and its backers would of course claim these tools are useful. To be honest, security was a weak point of Obama’s campaign; he is not really knowledgeable in these issues, and he has not gone and will not go against his advisers if they tell him these tools are necessary to fight terrorism. However, as far as I have heard, they have a hard time convincing anyone outside the executive branch that these tools have been as useful as they claim. How many major terrorist plots have been uncovered and prevented using these tools? It seems that they are using these tools for a very wide range of activities, including industrial and political espionage on foreign governments and companies to gain political and commercial advantage (what they call US national interests, not just securing Americans against terrorists). Does anyone really believe that the EU or Brazil or liberal NGOs will launch a terrorist attack on the US? The FBI’s actions against Dr. King show how far they would go. They use the fear of possible terrorist attacks to justify these actions to the public; however, the laws allow them to do whatever they want, and when there are restrictions (like the Fourth Amendment) they find ways to circumvent them (e.g. by colluding with foreign intelligence services like GCHQ to spy on American citizens) or to change the interpretations of those laws. We are very lucky that many influential Americans in previous generations had a negative view of the federal government and wanted to restrict its powers as much as possible, restrictions which are now being removed in practice (partly because some people want to settle sociopolitical disputes present in the country using the government’s power). I don’t see why so much power should be vested in a single authority with almost no real public supervision and scrutiny (a role the media was playing to some extent in previous decades, but which is coming under heavy pressure from government, as the Manning, Swartz, Snowden, … cases demonstrate). And even when courts find that someone in the government has seriously violated the laws, the president forgives them and they avoid real punishment (as the Scooter Libby case demonstrates).

    It is not just the US government; there is a trend across western liberal democracies. It is simply unbelievable that the UK security forces used a law passed to fight terrorism to hold the partner of a Guardian journalist for 9 hours, without a lawyer and without the protection of Miranda rights against self-incrimination. Anyone who thinks that security forces will only use the authority and tools they obtain to the limited extent of the original goal suffers from extreme naivety. They will use any tools at their disposal, to the fullest extent they can, to achieve what they perceive to be the goals of their institution. When they perceive journalists like Greenwald as a threat to the national interest, they use these tools to fight them, which includes intimidating the partner of a journalist using terrorism-fighting powers. I still find it really hard to believe that we have gone so far in the direction of an Orwellian society.

  13. What can theoretical computer science offer biology? | Theory, Evolution, and Games Group Says:

    […] the aid that cstheory can offer to biological understanding. In yesterday’s post on the NSA and computational complexity, Aaronson — with attribution to mathematician Greg Kuperberg — provided the following […]

  14. Paul Beame Says:

    Some of the NSA revelations have been no surprise at all. It was well known in the 1980s, particularly after the publication of The Puzzle Palace, that the NSA was tapping all the trans-Atlantic telephone cables; gathering up all e-mail to foreign addresses seems like more of the same.

    The relationship of the NSA with TCS cryptographers has been pretty shaky. I recall attending a theory of cryptography workshop at MIT’s Endicott House in June 1985 with one or two official NSA attendees. At the time, there were one or two TCS attendees known to have NSA funding and the NSA people wanted to recruit more. In announcing their desire to sponsor more TCS cryptographers, one of the NSA people cast a pall over the meeting by saying: “If you are interested, just mention it in a phone conversation with one of your friends and we’ll get back to you.” This didn’t exactly endear them to anyone.

  15. J Says:

    “Math could be defined as that which can still be trusted, even when you can’t trust anything else”

    Wait till someone shows that multiplication and addition have the same complexity, or till Voevodsky’s/Nelson’s worst nightmare possibly comes true.

    Refer:
    http://mathoverflow.net/questions/40920/what-if-current-foundations-of-mathematics-are-inconsistent

    http://mathoverflow.net/questions/36693/nelsons-program-to-show-inconsistency-of-zf

  16. Scott Says:

    J #15: Multiplication and addition having the same complexity (and yes, it’s conceivable that there’s a linear-time multiplication algorithm) wouldn’t do anything whatsoever to undermine my trust in math—why would it?

    Also, even if ZF set theory were shown to be inconsistent (and it won’t be 🙂 ), that wouldn’t do anything whatsoever to undermine my trust in theorems about (say) finite groups, or low-dimensional topology, or theoretical computer science—in fact, about anything that doesn’t involve transfinite sets. It would “merely” tell me that there was a need (and, of course, an exciting opportunity) to rethink the foundations. That’s something that already happened 100+ years ago (the renovations causing virtually no damage to the higher floors), and that could conceivably happen again.

  17. Vitruvius Says:

    I agree, Scott, with your general position that any time one claims that “evidence for x is really evidence for a powerful cabal trying to prevent everyone from discovering not(x)” one’s credibility drops by an irrecoverably large factor, and I agree with you that “math can be defined as that which can still be trusted, even when you can’t trust anything else” (as you put it), yet that still begs the question of how we the people decide what to trust to be valid math.

    Similarly, while your suggestion to “open up every question to scrutiny, discussion, and challenge by any interested person” may be necessary in order to establish public trust, it isn’t sufficient because we still have the problem of deciding which such interested persons to trust, and which to write off as conspiracy theorists in their own right. How do we feasibly decide, in effect, whether Ehrenhaft is a crackpot (as it were), and whether “Snowden himself is part of the NSA’s master plan” (as you playfully alluded to)?

    To that end you may be interested in Why Doesn’t the Public Trust Scientists?, a lecture by The Right Honourable Professor The Baroness O’Neill of Bengarve, Emeritus Professor of Philosophy at the University of Cambridge and past Principal of Newnham College, Cambridge, which she presented in 2005 as part of the Science Futures series by the San Diego Science and Technology Council’s Center for Ethics in Science and Technology.

    Note that while “scientists” are the titular and exemplary referent matter in that lecture, Baroness O’Neill’s talk actually considers a range of questions in regard of public trust, including the roles of professional organizations, trustworthiness (which can’t replace trust because of the quis custodiet ipsos custodes problem), statutory regulation, post hoc accountability, &c, which apply more broadly to the matters of public trust in any and every profession and institution, including politics and the law.

    O’Neill argues, if I may be so bold as to suggest a précis, that going back through the 17th century (as you noted) western liberal democracies have indeed evolved a multipartite methodology that does tend to work in practice and that may well be the best we can get in principle, though it remains unclear to me how well we are applying those techniques to matters of state security in general, and how effectively you folks in the United States of America are applying those techniques to your vaunted Agency in particular.

  18. Scott Says:

    Paul Beame #14: I’ve actually heard that joke many times, in other variants. (“Interested in career opportunities at the NSA? Call your mom and let her know!”) I didn’t know that NSA people themselves used the joke at conferences, but it doesn’t surprise me at all.

  19. J Says:

    “Multiplication and addition having the same complexity (and yes, it’s conceivable that there’s a linear-time multiplication algorithm) wouldn’t do anything whatsoever to undermine my trust in math—why would it?”

    I thought I read somewhere that if addition and multiplication turn out to be similar in complexity, then it would imply something is wrong with mathematics.

    In the same vein, think of the generalization of scheme theory that Mochizuki claims to have undertaken, to take apart + and × in the ring structure.

    I would think something fundamental would have changed in our picture if they turn out to be similar in complexity.

  20. J Says:

    At least for computational purposes, the multiplicative group structure and the additive group structure of $\Bbb Z$ seem to be coinciding. This seems wrong. I cannot directly relate this to $\Bbb Z \bmod p$, but it seems to have implications for Discrete Log. An implication for at least a few other rings may not be beyond reach as well.

  21. Scott Says:

    J #19: Well, we already have a remarkable n log n · 2^{O(log* n)} multiplication algorithm (due to Fürer, and building on many previous works), and it hasn’t created any problem for the foundations of mathematics that I know about. Meanwhile, just like for most problems, we currently have no lower bound for multiplication better than the trivial Ω(n). I suppose I’d guess that Ω(n log n) is some sort of barrier, but not with any strength of conviction: if a linear-time algorithm were discovered, it certainly wouldn’t cause me to doubt the consistency of ZF set theory. 🙂
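
    (For concreteness: the oldest step past schoolbook multiplication, Karatsuba’s trick, already shows why Ω(n^2) is not a barrier. A toy version of mine, nowhere near Fürer’s bound:)

        def karatsuba(x: int, y: int) -> int:
            # Three recursive multiplications instead of four, giving
            # O(n^1.585): far from n log n * 2^{O(log* n)}, but enough to
            # show that schoolbook O(n^2) is not the end of the story.
            if x < (1 << 64) or y < (1 << 64):
                return x * y  # small case: direct machine multiply
            m = max(x.bit_length(), y.bit_length()) // 2
            xh, xl = x >> m, x & ((1 << m) - 1)
            yh, yl = y >> m, y & ((1 << m) - 1)
            a = karatsuba(xh, yh)
            b = karatsuba(xl, yl)
            c = karatsuba(xh + xl, yh + yl) - a - b  # = xh*yl + xl*yh
            return (a << (2 * m)) + (c << m) + b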

  22. Scott Says:

    Vitruvius #17:

      it remains unclear to me … how effectively you folks in the United States of America are applying those techniques to your vaunted Agency in particular.

    As long as we’re trading mild national barbs, you’re Canadian? You guys do have the Communications Security Establishment, which according to the NYT article is one of only four foreign agencies (along with Britain’s, Australia’s, and New Zealand’s) that “knows the full extent” of the NSA’s decoding capabilities and is cleared for its “Bullrun” program. Though I confess that, when I try to imagine Canada’s CSE, I come up with something like the following:

      Read this gentleman’s private email? Ooo, nooo, that doesn’t sound terribly polite, eh?

  23. J Says:

    Professor, I am well aware of all the $n^{1+\epsilon}$ algorithms and of Schönhage’s $O(n)$ algorithm on multitape machines. I cannot find the reference I am thinking of. It was written by a TCS theorist. I would seriously think that the standard ring structure on $\Bbb Z$ could be modeled differently. I do not know if ZF would be affected. However, the question of treating × and + differently for computational purposes, compared to mathematical purposes, arises, making things murky.

    I am not implicating ZF with $O(n)$ algorithms for the standard × operation on the standard structure of $\Bbb Z$. The ZFC comment was a second mathematical conundrum that some reputed folks have raised awareness about, concerning the need to be better grounded, and it rang well with your statement on truth in math as we know it. (Unrelated, but bringing it in: $\Bbb Z$ has been a puzzle before as well; it is the simplest ring with a spectrum of prime ideals whose dimension is unclear to be interpreted in a standard way.)

  24. Scott Says:

    Wolfgang #8:

      Unfortunately, this xkcd.com/538/ had it right imho.

    YES! I especially liked the mouseover text (“Actual actual reality: nobody cares about his secrets”).

  25. Vitruvius Says:

    I think your mild national barb trading is valid, Scott, and I agree with your point, though I hadn’t actually intended a barb; I was rather attempting to consider the significance of the resources under your citizens’ watch relative to those under ours. And yes, I am a citizen of your northern neighbour, though more in the Alaska direction, and since I seem to be enjoying participating here at Shtetl-Optimized of late, I should probably note for the record and your reference that my background is a Bachelor of Science in Electrical Engineering and a Master of Science in Computing Science back in the ’70s.

    More importantly, though, and my apologies again for my unintentionally getting us off topic (it’s not terribly polite, eh), it’s Onora O’Neill’s thesis and argument that I really do think is relevant to this discussion, as typified by her suggestions that we “need to free professionals and the public service to serve the public… to work towards more intelligent forms of accountability… [and] to rethink a media culture in which spreading suspicion has become a routine activity”, which her Wikipedia page attributes to her 2002 BBC Reith Lectures on “A Question of Trust” (as published by Cambridge University Press).

  26. jonas Says:

    What does this L notation mean precisely? Could you add it to https://complexityzoo.uwaterloo.ca/Zoo_Glossary if it’s appropriate there?

  27. Scott Says:

    jonas #26: L(α) is being used here as a shorthand for ~2^{n^α} time. And yes, someone should add it to the Zoo Glossary (even though I don’t think that notation is used anywhere in the Zoo itself)!
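
    (For completeness, here is the notation as it usually appears in the factoring literature, with N the number being factored rather than its bit-length n:

        $$L_N[\alpha, c] = \exp\!\Big((c + o(1))\,(\ln N)^{\alpha}\,(\ln\ln N)^{1-\alpha}\Big)$$

    For an n-bit N this is $2^{n^{\alpha+o(1)}}$, matching the shorthand above; the number field sieve runs in $L_N[1/3, c]$ time, and Joux’s small-characteristic discrete-log algorithm is the recent quasipolynomial exception.)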

  28. GASARCH Says:

    1) Actually, I had always assumed that the NSA is using known factoring algorithms, but with very careful, very good implementations and slightly better hardware. Nothing earthshattering. I’m disappointed(?) that that’s not even true.

    2) Even shorter summary of Snowden’s revelations:
    “The TV show Person of Interest is based on truth.
    Also, NSA is doing the Clipper chip covertly”

  29. J Says:

    @GASARCH What a letdown, isn’t it? My prof once said the NSA probably knows the best rank of elliptic curves over $\Bbb Q$, and he also said they probably need it for advanced factoring algorithms. I used to break my head over that statement, since as far as I know, Lenstra’s technique uses the torsion part for control. Now I am thinking it was all a bunch of speculation.

  30. Paul Stadig Says:

    “The solution is to open up every question to scrutiny, discussion, and challenge by any interested person. Assertions gain credibility by surviving public criticism—and that’s just as true in math as it is in experimental sciences.”

    To the point, as a non-expert, isn’t this supposedly what happened with these international standards that the NSA has allegedly influenced?

    I can’t make a determination about the math myself. I can trust an open process with debate and public criticism from experts, and I assume that is what these standards organizations do.

  31. Scott Says:

    Paul Stadig #30: Well, yes, that’s what standards organizations are supposed to do! But it’s now looking like NSA probably had a policy to subvert the standards process, by providing “expert advice” (they are experts, after all 🙂 — not to mention major stakeholders and standards-setters themselves), influenced by … certain considerations besides just technical excellence. If it can be proved that that’s indeed what happened, it will be extremely interesting to see what the implications are for future crypto standards processes.

  32. Benjamin Smith Says:

    With all these NSA revelations, I’ve come to the conclusion that I trust the maths, and that encryption is clearly something we should be using far more than we already do. I’m no mathematician; I’m a lowly programmer writing business apps. My questions are two-fold:

    1) Can we trust the CAs? If NSA/FISA could order Google to insert a back door and squelch public disclosure of the order, couldn’t they also (conceivably) do the same to a CA for their private key? And if so, couldn’t they MITM any connection they could reliably intercept?

    2) Is there a way to double-sign a public key? If I could get a public key signed by a USA CA (which may be vulnerable to the 1st back door) could I also get it signed by another, foreign CA not (or at least less) subject to the long arm of the US legal system?

  33. JT Reynolds Says:

    I’m neither a mathematician nor a techie. Common sense tells me we need to take matters into our own hands to protect what little is left of our privacy.

    Encryption won’t keep NSA out entirely, but it will make it harder for them to pick us out of the crowd. Decrypting still takes extra time & effort and that little bit of hassle may be enough to keep their noses out of your business.

    The same goes for storing stuff on Dropbox, iCloud, etc. Take it down and stash everything in a CloudLocker (www.cloudlocker.it), which works just the same but it’s private and stays in your home where they still need a warrant to see inside.

  34. Am I Lloyd Says:

    So here’s a candid question: Would you consider consulting for the NSA if they approached you tomorrow? Would you even consider meeting with them to consider their offer?

  35. Douglas Knight Says:

    Benjamin Smith:

    CAs are hacked or subverted all the time.

    No, there is no way of double-signing in the current CA system. A lot of people suggest new systems, but it’s difficult to get vendors to adopt them. In particular, if your system isn’t backwards compatible, it breaks everything, but if it is backwards compatible, then it’s probably vulnerable to the old tricks. The most common suggestion is simply to warn whenever there is a change of certificate.

    I don’t understand the details, but I think Google Chrome has a double-signing system, in which you get the signatures of both Google and a CA. This successfully protected against Iran, but has limited (though non-zero) protection against the NSA, which could subvert Google. (The backwards compatibility is that it’s opt-in. They distribute a master list of those they’ve signed and no one else is affected. And not everyone can get on the master list, though there’s a way of getting some protection if the first time the user connects is secure.)
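
    (What Chrome shipped is usually called public-key pinning; the sketch below is my simplified model of the check, with placeholder fingerprints, not Chrome’s actual code or pin list:)

        import hashlib

        # Hypothetical preloaded pins: SHA-256 fingerprints of acceptable
        # public keys for a host.  These values are placeholders.
        PINS = {"mail.example.com": {"aaaa1111", "bbbb2222"}}

        def chain_passes_pinning(host: str, spki_ders: list) -> bool:
            pins = PINS.get(host)
            if pins is None:
                return True  # opt-in: hosts without pins are unaffected
            seen = {hashlib.sha256(der).hexdigest() for der in spki_ders}
            return bool(seen & pins)  # some key in the chain must be pinned

    A certificate signed by a trusted CA still fails if none of its keys match a pin, which is how the Iran incident mentioned above was caught.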

  36. Scott Says:

    “Am I Lloyd” #34: Sure, I’d talk to them and see what they had to say, out of curiosity if nothing else!

    I’m on friendly terms with a few people who do quantum computing theory at NSA or one of its satellite organizations (though obviously, I don’t know any classified aspects of their work!). And I had an offer to do the NSA’s summer program while in college, which I seriously considered but ultimately decided against.

    Here’s the thing: as you might have gathered from … err, this blog, I’m one of the worst people on earth at keeping secrets. For that reason, I don’t want to know anything classified, since I don’t believe I could mentally separate it from the stuff I’m allowed to talk about, and I’d get myself into trouble within nanoseconds. As if that weren’t enough, I worry about philosophy more than practicality, asymptotics more than concrete parameters, and clean mathematical models more than side-channel attacks—and all of those tendencies in me have only been intensifying (worsening?) for the past 15 years. So, there are many excellent reasons why I doubt the NSA would ever want anything to do with me! (Except for one or two people there who like quantum computing theory for its own sake.)

  37. PGP inventor Phil Zimmermann: Open source, Legislative & judicial actions needed to pushback surveillance state — Tech News and Analysis Says:

    […] He makes a fair point, and he is not alone in professing such views. On his blog, Scott Aaronson, an Associate Professor of Electrical Engineering and Computer Science at MIT, writes: […]

  38. PGP inventor Phil Zimmermann: Open source, Legislative & judicial actions needed to pushback surveillance state | 8ballbilliard Says:

    […] He makes a fair point, and he is not alone in pro­fess­ing such views. On his blog, Scott Aaron­son, an Asso­ciate Pro­fes­sor of Elec­tri­cal Engi­neer­ing and Com­puter Sci­ence at MIT, writes: […]

  39. wolfgang Says:

    #33 “Encryption won’t keep NSA out entirely, but it will make it harder for them to pick us out of the crowd.”

    I have heard the opposite argument, that encrypting your emails will attract the NSA …

    Btw I think the worst thing you could do is to become a “love interest” of somebody who works there.
    It seems that the worst abuses come as so-called LOVEINT. And those cases (as well as Ed Snowden himself) showed that the internal controls at the NSA are either non-existent or easy to circumvent … in other words, it’s just another government agency.

  40. Yan Zhu Says:

    Hi, Scott! Thanks for posting. I was wondering: what evidence is there that an attacker has a cryptographic advantage from only being able to choose the elliptic curve constants? I’m familiar with the attack on dual_ec_prng (http://rump2007.cr.yp.to/15-shumow.pdf), but I’m not sure if that attack or a similar one generalizes to other elliptic curve algorithms.
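
    (For readers who haven’t clicked through: the core of the cited attack is that whoever published the generator’s two curve points could know a discrete-log relation between them, which turns each output into an escrow of the next internal state. Here is a toy analog of mine in a multiplicative group mod p, not the actual elliptic-curve construction, and with the output truncation omitted:)

        # All constants are illustrative; only the shape of the trapdoor matters.
        p = 2**127 - 1       # a Mersenne prime, for convenience
        Q = 3                # public "output" base
        d = 0xC0FFEE         # the designer's secret trapdoor exponent
        P = pow(Q, d, p)     # public "state-update" base: P = Q^d mod p

        def step(state: int):
            output = pow(Q, state, p)      # what the world sees
            next_state = pow(P, state, p)  # internal update
            return output, next_state

        out, nxt = step(123456789)
        # Knowing d, the designer computes out^d = Q^(s*d) = P^s = next_state,
        # so one observed output reveals the next state, and hence all future
        # output of the generator.
        assert pow(out, d, p) == nxt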

  41. Scott Says:

    Yan Zhu #40: It’s an excellent question, but I’m sure there are readers better-placed than me to address it. Let’s hear from them…

  42. A. Certain Says:

    Hi, Scott. I’ve been intermittently reading your blog for a while and quite enjoy it. I hope that you haven’t already answered this question. If so, just a link would be great.

    I, too, don’t believe in conspiracy theories, but the definition seems tautological: if it’s implausible to me, then it’s a conspiracy theory; otherwise, it’s a potential conspiracy.

    For instance, a year ago, if somebody had claimed that all of the major Internet companies were secretly allowing the NSA access to their users’ data without a warrant, I think some reasonable people would have thought that a conspiracy theory. I wasn’t so shocked, and neither were you, but what separates that from the theory that the Mossad secretly masterminded the WTC attacks? I agree there is a difference, but how do we find it?

    I think, ultimately, the answer is the same as for everything else: openness and time. Eventually it was shown to be true that the cigarette companies were intentionally manipulating nicotine levels to make their product more addictive. On the other hand, nobody has made a deathbed revelation that they were the cameraman for the faked moon landings. But that’s an unsatisfying answer. Do you have any better insight?

    Andrew

  43. Douglas Knight Says:

    No, that does not generalize to ECC.

    The only evidence that the NSA can gain an advantage by choosing the elliptic curve is that it promotes ECC and could easily have chosen some bits in the NIST standard, whereas it cannot choose any bits in RSA or DH, except by subverting RNGs (which it probably does, but that affects everything).

  44. Circe Says:

    Scott:

    Well, we already have a remarkable O(n log n log log n) multiplication algorithm (due to Fürer, and building on many previous works)

    Well, I thought Schönhage–Strassen was already of that complexity, and Fürer changed the log log n term to log* n or thereabouts, thus making it O(n log n) for all practical purposes.

  45. Douglas Knight Says:

    nobody has made a deathbed revelation that they were the cameraman for the faked moon landings

    Actually, I believe that Stanley Kubrick (long before his death) claimed responsibility for the moon landings. Also, Howard Hunt made a deathbed confession of assassinating JFK.

  46. Yan Zhu Says:

    Does anyone have thoughts on the following paper, which (from my understanding) says there is a feasible but non-trivial attack possible by an attacker who gets to pick the elliptic curve? https://eprint.iacr.org/2003/058.pdf

  47. wolfgang Says:

    >> fake moon landing vs. NSA – Google

    The difference is that faking the moon landing was impossible in 1969; see e.g. this
    gizmodo.com/5977205/why-the-moon-landings-could-have-never-ever-been-faked-the-definitive-proof

    Cooperation between the NSA and Google was always technically possible; it was just that we thought there were some moral and legal standards which made such a thing unlikely.

    I think it was Scott’s argument that you can always trust math, physics, science, engineering (in that decreasing order), but if you have to trust morality, politics, or laws, then you will be disappointed.

  48. Crypto Nerd Says:

    See Thomas Pornin’s answer from 2011 to
    “How can we reason about the cryptographic capabilities of code-breaking agencies like the NSA or GCHQ?”

    http://crypto.stackexchange.com/questions/579/how-can-we-reason-about-the-cryptographic-capabilities-of-code-breaking-agencies

  49. wolfgang Says:

    btw if the company motto is “don’t be evil” this alone should probably raise red flags already 😎

  50. Neel Krishnaswami Says:

    J@15: Nelson and Voevodsky are talking about two entirely different things.

    Nelson genuinely believes that arithmetic is inconsistent. Specifically, he thinks that the induction axiom in Peano arithmetic is dangerously impredicative, and may well let us prove false theorems. In particular, he thinks that exponentiation is not total!

    If Nelson is right, this would mean that Scott would need to find a meditative practice other than reciting the Karp-Lipton theorem, since it would be false. Of course, I doubt he would mind, since such a result would resolve P != NP, by showing that it represents the boundary between consistency and inconsistency.

    Personally, I strongly doubt Nelson is right, on the grounds that we couldn’t possibly be so lucky. First, it would mean we live in a mathematical universe where brute force search doesn’t work, and all algorithms must contain a real idea! Second, it would mean that the natural logic of mathematics is actually linear logic, which is a much more natural logic than classical or intuitionistic logic. (Since Scott likes betting, I would bet against Nelson at a trillion to one odds, but I would hesitate at a quadrillion to one.)

    Voevodsky’s talk was about something entirely different. He was hypothesizing the failure of Peano arithmetic merely as an example of a “foundational catastrophe”, and asking how we could decide which parts of mathematics we could salvage in the wake of such a disaster. In particular, he was observing that type theory is much more robust against foundational disasters than traditional foundations like ZFC are. The reason is that type theory formalizes the notion of truth in a very different way than the standard truth-functional approach.

    The idea is that (following the Brouwer-Heyting-Kolmogorov interpretation) we define each logical connective (implication, conjunction, etc.) in terms of its canonical witnesses. A canonical witness of “P and Q” is a pair of a canonical witness of P and a canonical witness of Q, a witness of “P implies Q” is an algorithm that transforms witnesses of P into witnesses of Q, “false” has no canonical witness, and so on. A formula is true just when it is possible to construct a canonical witness for it. A consistency result amounts to saying that if you are given a purported proof, there is an algorithm which can transform it into a canonical witness.

    Inconsistency means that you may have a proof of false, but you still won’t be able to construct a canonical witness for it. (That is, there won’t be a way to normalize it into a canonical witness.) As a result, even type theories which are logically inconsistent can have non-trivial models (we call such theories “programming languages”, btw), and so in the event of a foundational disaster, you can use the canonicity criterion to save the true theorems from ex falso quodlibet.

  51. Scott Says:

    Circe #44:

      Well, I thought Schönhage–Strassen was already of that complexity, and Fürer changed the log log n term to log* n or thereabouts, thus making it O(n log n) for all practical purposes.

    Oops, sorry, fixed! But I’d say that n log n log log n was already “n log n for all practical purposes.” 🙂

  52. Scott Says:

    A. Certain #42:

      [A] year ago, if somebody had claimed that all of the major Internet companies were secretly allowing the NSA access to their users’ data without a warrant, I think some reasonable people would have thought that a conspiracy theory. I wasn’t so shocked, and neither were you, but what separates that from the theory that the Mossad secretly masterminded the WTC attacks? I agree there is a difference, but how do we find it?

    That’s an interesting and insightful question.

    For me, though, one of the most characteristic features of a conspiracy theory is that it asks us to accept that some huge organization, typically involving thousands of people, is 100% different in its nature and purpose from what it looks like even to intelligent outside observers. I.e., that the organization’s stated aims are entirely a facade to mask some dark and sinister purpose. And that even after decades, not a single person involved in the organization has slipped up and revealed the terrible secret.

    So for example: NASA claims it’s about space exploration, but really it’s a disinformation agency to convince the world that humans walked on the Moon even though they didn’t (which, to me, would be a much more impressive feat than just sending some humans there and being done with it! 🙂 ). Or: the Mossad claims that it’s about protecting Israel (an American ally) and stopping terrorist attacks. But secretly, it masterminded the worst terrorist attack against the US in its history, with the nefarious aim of dragging the US into foreign wars, which wars would then benefit Israel in some way that remains rather unclear 12 years after they started.

    Now, let’s contrast these examples with the other examples from your comment. On its face, the NSA appears to be an enormous, super-secretive spy organization, which collects vast amounts of data from all over the world with very little legal oversight, decrypts the data whenever necessary and possible, and then searches it for patterns of interest to US intelligence. However, thanks to Snowden, we now know that the NSA is … exactly that. And maybe even slightly more brazen than people thought. Or: on their face, the tobacco companies seem like greedy corporations that make money by peddling an addictive, deadly product. However, we now know that tobacco companies are … greedy corporations that make money by peddling an addictive, deadly product. And that they also intentionally manipulated nicotine levels.

  53. Bram Cohen Says:

    Schneier’s blanket statement that the NSA can make all ECC keys weak is just plain wrong. In particular, there’s no reason to suspect anything at all about curve25519, whose methodology for selecting its curve is completely public, and which came from djb, who can hardly be called an NSA stooge. As for the NSA pushing ECC, it might have something to do with how actually implementing RSA is sketchy as hell, whereas ECC is much more confidence-inspiring. If anything there was a conspiracy in the other direction, where RSA Inc. launched a massive FUD campaign against ECC because it was something they didn’t have a patent on, and somehow RSA got to be the well-known brand despite being clearly inferior technology, even inferior to regular Diffie-Hellman.

  54. annoymous Says:

    Scott, off the topic, what’s your opinion on

    http://arxiv.org/abs/1306.3995 and http://www.newton.ac.uk/programmes/MQI/seminars/2013090614001.html

    ?

  55. wolfgang Says:

    >> we now know that the NSA is … exactly that

    Scott, but this is a bit too weak.

    We did not know that the NSA collects data on every US citizen, one reason being that its head actually denied it in Congress until recently.
    We did not know that the NSA intentionally weakens security standards.
    We did not know that NSA spying includes corporate espionage (e.g. Petrobras) and we did not know that internal controls are mostly lacking. (How much insider trading is done in addition to LOVEINT?)
    We did not know that there was essentially no real oversight from Congress and the FISC and we did not know the large role private contractors are playing (Ed Snowden was such a contractor and had access to pretty much everything it seems).
    We did not know that most US companies are cooperating with the NSA, installing backdoors and handing over their customers’ data.

    Maybe some people suspected something, but now we know.

    In other words, I think you should give Ed Snowden some credit, mostly for making his knowledge public instead of just selling it to the highest bidder…

  56. Scott Says:

    anonymous #54: Just give us a couple weeks, OK? 🙂 We’re working on a long paper that not only debunks many of their claims, but goes further to prove some new results about the permanents of iid Gaussian matrices, and the distinguishability of the BosonSampling distribution from the uniform distribution.

  57. Scott Says:

    wolfgang #55: Yes, you’re entirely right, and I apologize for giving the impression otherwise. Snowden does get the credit for letting us turn plausible conjecture into almost-certain knowledge. That, after all, is what we try to do every day in math and CS theory! 🙂 And, as a friend pointed out to me yesterday, the conversion of conjecture into knowledge matters, if for no other reason than that it allows the conjecturing to start from a further point than before…

  58. A. Certain Says:

    Scott,

    Excellent observation. I’ll have to cogitate on that one. Thanks for the input.

    Andrew

  59. A. Certain Says:

    Scott,

    Upon further reflection, for me the “conspiracy theory” part of it isn’t the NSA’s actions, it’s the Internet companies’. Here are most of the major Internet companies, one of which has the motto “Don’t be evil,” agreeing to hand over this information to the government, possibly illegally, and every person at every one of those companies that knew about it kept it a secret. I’m much more surprised that the companies agreed than that the NSA was willing to collect that data.

    Andrew

  60. J Says:

    Neel Krishnaswami Says:
    Comment #50

    Thank you for the kind enlightenment here.

    Could you comment on the other piece too: if + and × turn out to be similar in complexity for computational purposes over $\Bbb Z$, then maybe the ring structure could be modeled much more interestingly?

  61. Douglas Knight Says:

    Yan Zhu 46:

    Yes, that’s the kind of thing Schneier is talking about. The existence of that attack is more evidence than I had claimed existed.

    For any specific, known attack, we can rule out that NSA used it. In particular, this attack requires a field with composite exponent, which NIST avoids. I’m not sure of the history, but lots of things go wrong with such fields and probably there was already some reason in the open literature to avoid them.

    Let me distinguish between a weakness and a backdoor. This paper claims that you can promote the GHS weakness into a backdoor. By a weakness, I mean something vulnerable to anyone who knows about the method, while by a backdoor, I mean that there is a secret key used in the construction and even if you figure out the attack, as in Dual_EC_DRBG, you still can’t exploit it without the secret key.

    It is possible that NSA knows about weaknesses that the open community does not know about that affect some curves but not others. If they knew about them 15 years ago, then they probably manipulated the magic constants so that the NIST curves have (or avoid!) the weakness. (I see a lot of people claiming that NSA sabotage started 10 years ago, but I don’t know where they got that date from.)

    But NIST does not choose specific curves, so NSA cannot embed backdoors in them. Even if NIST had used 2^161, NSA could not have applied this attack. NIST chooses mysterious constants but then applies SHA-1 to them to produce the actual curves. So they could try a trillion different seeds and check whether the curves produced have a 1-in-a-trillion property that they like (a weakness or avoidance thereof), but the property of having a backdoor is not something you can check just from the curve, nor something you can build with so little control. (Also, NIST does choose specific curves in characteristic 2, but they are fairly transparently constructed, with hardly more choice than curve25519.)
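
    To make the seed-grinding point concrete, here’s a toy sketch (my own, with a made-up stand-in property; the real FIPS 186 derivation involves more steps than a single hash):

      # Toy model of grinding seeds against a hash-derived curve parameter.
      # The designer can't pick the parameter directly; they can only keep
      # retrying seeds until the output has some property they can TEST for.
      import hashlib, os

      def hashed_parameter(seed: bytes) -> int:
          # NIST-style: the curve parameter is forced through SHA-1(seed).
          return int.from_bytes(hashlib.sha1(seed).digest(), "big")

      def grind(rare_property, tries=10**6):
          # Search random seeds until the derived parameter has the property.
          for _ in range(tries):
              seed = os.urandom(20)
              b = hashed_parameter(seed)
              if rare_property(b):
                  return seed, b  # publish the seed; it still "looks random"
          return None

      # A one-in-a-million stand-in for "has (or avoids) the secret weakness":
      hit = grind(lambda b: b % 1_000_000 == 0)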

  62. Neel Krishnaswami Says:

    J@60: Sorry, I don’t know anything about lower bounds for arithmetic (or much else, honestly — my work is in semantics and logic, not complexity).

    My (uninformed) suggestion would be to look at Kaplan and Tarjan’s constant-time catenable deques, and see if you can use them to implement fast arbitrary-precision multiplication.

  63. Fred Says:

    Sorry for being off-topic, but reading the wiki article on Karp–Lipton… can anyone recommend a good college textbook about complexity theory? (PS: I have Scott’s book)

  64. Scott Says:

    Fred #63: You’re in luck! There are lots of great complexity textbooks.

    Sipser’s Intro to the Theory of Computation
    Clearest exposition of the bare basics of computability, formal languages, and complexity, basically up through the 1970s

    Papadimitriou’s Computational Complexity
    The one I learned from. Full of wisdom and insight; takes you up to the late 1980s

    Arora and Barak’s Computational Complexity: A Modern Approach
    Not for beginners, but will take you very close to the current frontier in derandomization, circuit lower bounds, etc., and (as the title promises) show you how “21st-century complexity theorists” think

    Moore and Mertens’s The Nature of Computation
    Funny and quirky yet wonderfully comprehensive—an excellent textbook by two physicists that covers complexity theory as well as its connections to statistical physics and quantum computing

  65. How The N.S.A. Defeats Encryption | Ben Mobley on Security Says:

    […] NSA: Possibly breaking US laws, but still bound by laws of computational complexity (scottaaronson.com) […]

  66. Roger Says:

    Answering #61: NIST certainly does choose specific curves. Some NIST curves for govt use are listed here.

  67. Trust the math? | Not Even Wrong Says:

    […] NSA knows about this than the rest of the math community. Scott Aaronson has an excellent posting here about the theoretical computation complexity aspects, which he initially ended with advice from […]

  68. Scott Says:

    Bram #53: I think the fundamental problem here is that NSA has a dual role, of securing American communications but also being able to break things when needed (which is why they unsuccessfully pushed the Clipper chip in the 90s). So from their perspective, the ideal crypto standard would be one that’s vulnerable to them but that they also judge to be the least vulnerable to everyone else, at least in the near future. Do you not see how, given the recent revelations about their insertions of vulnerabilities, the NSA’s powerful advocacy of ECC might generate a rational fear that ECC is the system that, in their judgment, best satisfies those strange dual requirements? (See, for example, the anecdote at the top of page 15 of this essay by Koblitz et al., which I guess conspiracy nuts could have a field day with… 🙂 )

  69. Titus Says:

    Hi, I’m from Europe and don’t know much about the NSA. My question is: what is the (mathematical) quality of the NSA people? Are their top people at the same level as top researchers at top universities or research labs? (aka, could there be invisible super-smart people?)

  70. Scott Says:

    Titus #69: It’s a good question. By way of answering, I can hardly do better than to point you to this Slate article, which I learned about from Peter Woit’s post.

    Here are the basic facts:

    The NSA is the biggest employer of mathematicians on earth. And it’s clear that some extremely talented mathematicians do work there, as well as at their satellite, the Institute for Defense Analyses (IDA). And on top of that, they obviously have complete access to all the cryptanalytic work done in the open (but not vice versa). So it’s almost inconceivable that they’re not somewhat ahead of the open world.

    To help calibrate, we know today that Britain’s GCHQ did discover RSA and Diffie-Hellman a few years before the open world did, and that the NSA also knew about differential cryptanalysis and various side-channel attacks (e.g., EM radiation attacks) well ahead of the open world.

    On the other hand, the fact that NSA mathematicians can’t talk to outsiders about most of what they do is obviously a huge obstacle in recruiting top people. By nature, mathematicians (like all scientists) want to publicize their results and earn glory and acclaim for them. (Even the Manhattan Project scientists probably had in the back of their minds that glory would be theirs if they just waited a few years…)

    For that reason (among others), the idea that the NSA has mathematical advances that would strike outsiders as being from centuries in the future or from an alien civilization, strikes me as profoundly implausible. But from one decade in the future? That’s possible. 🙂

  71. Douglas Knight Says:

    Scott 68, it’s possible that ECC is generally weak, but that’s a different concern. Schneier gave a specific concern that standards include mysterious constants that he doesn’t trust. BC 53 is giving a specific counterexample, that Curve25519 is a standard that (1) has an open design process that appears to give very little room for the designer to control; and (2) has a designer with decades of history as anti-government. There’s also a European standard that uses pi as the constant.

  72. Scott Says:

    Douglas #71: I don’t know the details nearly as well as I should. But let’s assume Bram is right that Curve25519 is beyond the NSA’s ability to control. Even then, one can ask a different question than whether there exist ECC standards that NSA (knowably) hasn’t influenced to make them less secure: namely, do there exist ECC standards that NSA has influenced?

  73. Me Says:

    Regarding comment #69: If one is interested in getting a bit of an inside perspective on NSA’s crypto department, I found this interview with a former NSA cryptographer pretty interesting:

  74. Douglas Knight Says:

    It would be very easy for NSA to exert a little influence on the NIST standard curves from 1999. It seems safe to say that if they knew in 1999 that some curves were weaker than others, then they would have used that influence to choose or avoid those curves. Also, NSA explicitly sets standards for government use. Since 2005, their standard requires the US government to (a) replace DSA with ECC; and (b) use the NIST curves. (There is also an older standard, “Suite A,” whose very algorithms are secret. I have heard conflicting rumors of whether it is obsolete or reserved for extra security.)

    I think actions speak louder than words and that the NSA standard is pretty good evidence that ECC is not terribly broken. Whether NSA-influenced NIST curves are better or worse than DJB’s curve is a harder question. DJB’s implementation is probably better than others in avoiding things like timing attacks that aren’t very specific to elliptic curves, so I would choose it, but that doesn’t answer the question.

    Given the hashing construction of the NIST curves, NSA can only exert a few dozen bits of influence on them. But it’s easy to imagine that one bit is all they need. There’s a European standard that uses pi as the seed for the hash function, but it’s easy to imagine that one bit of influence is possible through details of how they hash. DJB could easily apply more than a bit of influence through his choices. E.g., why did he choose 2^255-19, rather than 2^255+95? (Similarly, you could ask how the US and European standards chose their base fields, which are not produced through hashing. They do appear low-entropy, that is, there is room for only a few bits of influence.)

  75. J Says:

    Scott #56
    We look forward to it.

    BTW, do you think the NSA has a poly-time algorithm for factoring? My prof once made a belittling comment: integer factoring has been around for 2000 years; it’s about time someone found such an algorithm. He also speculated that the CIA possibly had one. In comment #70 above, you mentioned the NSA is probably a decade ahead. Do you think factoring is something that could fall within that decade? If so, how would your field, quantum computing, and the rest of complexity theory be affected?

  76. Scott Says:

    J #75: No, I don’t think the NSA has a polytime algorithm for factoring—I already said that in the post. It would be very astonishing (though less astonishing than, say, P=NP) if a classical polytime algorithm for factoring even exists.

    In case people missed the main point of this post: it’s extremely important to understand that the NSA can more than likely read huge amounts of encrypted communications, without needing to know polynomial-time algorithms for factoring or discrete log—or anything similarly earth-shattering from a complexity-theory perspective. Instead, some combination of
    (1) side-channel attacks,
    (2) vulnerabilities and backdoors that the NSA itself inserted into standards, software, and hardware,
    (3) sheer brute-force computing power, and
    (4) more modest algorithmic advances (?)
    would very plausibly suffice in most cases.

    On your last question: if a classical polytime algorithm for factoring were discovered, quantum computing would certainly lose a bit of its sex appeal. In such a case, I guess the “marquee application” for quantum computing would revert to Feynman’s original one: simulating quantum systems. (Which is much more important than factoring as a long-term practical application in any case.)

  77. Bram Cohen Says:

    Scott #68: My previous comments in no way depend on any NSA conspiracy inference, and are entirely based on the facts that (a) curve25519 is designed with an open set of reasoning, with no more than a few bits of designer influence and no reason to believe it’s weak, and (b) my comment had to do with Schneier’s claim that ECC is MORE prone to a mathematical break than RSA, which is wrong wrong wrong. It’s vastly more likely that the NSA has an incremental improvement to factoring, either an improved algorithm or improved hardware. The bottleneck on factoring is row reduction on bit vectors. That’s very, very prone to specialized hardware. And there have been incremental algorithmic improvements against RSA over time, with no such thing for ECC.
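
    (To illustrate why that bottleneck is so hardware-friendly, here’s a minimal sketch, nothing NSA-specific, of the textbook linear-algebra step: row reduction over GF(2), with each row packed into an integer so that a row operation is a single XOR.)

      # Row reduction over GF(2): rows are ints, bit j = column j, and a row
      # operation is one XOR of machine words -- ideal for specialized hardware.
      def gf2_row_reduce(rows, ncols):
          rank = 0
          for col in range(ncols):
              for i in range(rank, len(rows)):
                  if (rows[i] >> col) & 1:        # found a pivot for this column
                      rows[rank], rows[i] = rows[i], rows[rank]
                      for j in range(len(rows)):  # clear this column elsewhere
                          if j != rank and (rows[j] >> col) & 1:
                              rows[j] ^= rows[rank]
                      rank += 1
                      break
          return rows, rank

      # In sieving-based factoring, each row is the mod-2 exponent vector of a
      # smooth relation; a set of rows reducing to zero is a dependency (a square).
      rows, rank = gf2_row_reduce([0b110, 0b011, 0b101], 3)  # rank 2: a dependency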

    Putting on my conspiracy-inferencing hat, the evidence seems to be that the NSA is playing a well-defined game, where they don’t like getting embarrassed by having bad crypto. The (old) GSM standard, for example, was weakened by reducing the number of bits in the key, so it could be brute-forced, but only by someone with a lot of resources. At a crypto conference I once attended (Crypto ’98, long before AES), the NSA presented an encryption algorithm they’d come up with for others to use, and Adi Shamir gave a significant crack of it in the rump session, with several rather panicked NSA people taking lots of notes in the back of the room. It simply isn’t in the NSA’s interest to weaken things in ways where they aren’t sure exactly how much the thing has been weakened.

    Now, if you really want to go all tin-foil-hat, you could imagine that the NSA would put forth a standard encryption library with an API so horrifically broken that no one who wasn’t very experienced could possibly use it right, and make up a goofy story that the plant they had ‘write’ it was using it as an exercise in learning to program in C. You can also imagine having a mole write a book on cryptography which made it all sound like a fun how-to guide, which gave the reader plenty of rope to hang themselves with and no real guidance about how to make actual protocols securely, and had an errata list half the length of the book.

    Of course, we have both of these things: they’re called OpenSSL and Applied Cryptography. But no one accuses Eric Young or Bruce Schneier of being an NSA mole, even though there’s plenty of circumstantial evidence to support that theory.

  78. Bhupinder Singh Anand Says:

    Scott: “At the same time, to me, perhaps the most shocking feature of the Snowden revelations is just how unshocking they’ve been”.

    Scott@76: “it’s extremely important to understand that the NSA can more than likely read huge amounts of encrypted communications, without needing to know polynomial-time algorithms for factoring or discrete log—or anything similarly earth-shattering from a complexity-theory perspective”.

    Simply for the sake of introducing an element of perversity into the discussion:

    Perhaps it’s also equally important to value (read cherish as in a child’s antics, not endorse as in adult action) Snowden’s deliberate breach of contracted confidence as much as NSA’s insidious attempt to evade all social, administrative, political and judicial checks and balances.

    Don’t we need both Snowden’s betrayal and NSA’s perfidy (not necessarily in equal measure) to help us keep a critically needed balance in our constantly evolving perspective of an intellectually challenging universe?

    After all, wouldn’t the story of Job be mere braggadocio without Satan?

    And isn’t holism also as much the message of the scientific method as of Buddha and Hegel?

    Moreover—on the question of perspective—just how much re-interpretation would it take for Tom Clancy to convincingly interchange the villain and victim?

    Regards

  79. J Says:

    Comment#78

    Bhupinder Singh Anand and P=NP?

    http://arxiv.org/find/all/1/all:+AND+bhupinder+singh/0/1/0/all/0/1

    The same person here?

  80. This week in science, technology, public policy | mek wasabi Says:

    […] NSA: Possibly breaking US laws, but still bound by laws of computational complexity – S. Aaronson […]

  81. J Says:

    Scott
    Comment#76

    “I don’t think the NSA has a polytime algorithm for factoring—I already said that in the post. It would be very astonishing (though less astonishing than, say, P=NP) if a classical polytime algorithm for factoring even exists.”

    The above could be true in general. However, the NSA mathematicians are probably interested only in numbers of the form $N=PQ$, which are special. Maybe they know a nifty way to handle this case. Although I have seen statements that numbers of this form are the hardest, I have not seen a convincing proof.

  82. Scott Says:

    J #81: It’s true that no such general reduction is known (someone correct me if I’m wrong). On the other hand, if someone had a factoring algorithm that only worked for products of 2 distinct primes, I’d personally be fine calling that just “a factoring algorithm,” since that’s the special case that almost everyone cares about, and that most current crypto specifically needs to be hard.

  83. iNSAne | Arkeoblog Says:

    […] Scott Aaronson – About NSA and computational complexity […]

  84. Rahul Says:

    Scott says:

    “Any time anyone has tried to do that, it’s been easy for the natives of the unruly wilderness to laugh at the systematizing newcomers”

    The valid criticism (I think) of theoretical computer scientists is that they (often) write about security / cryptography in a way which demonstrates total ignorance (disdain?) of the practical ground realities or applied issues.

    It is perfectly fine to be “systematizing newcomers” but if you consciously choose to ignore whatever knowledge / heuristics / experiences generations of unsophisticated natives have gleaned from their primitive utilitarian insights, there’s a problem right there.

    The intent is not to criticize the “clarity and rigor of the systematizing approach”. What might be criticized are things like repeatedly inventing solutions in search of a problem, or starting with bizarre simplifying assumptions etc. Essentially, sacrificing the primary goal of security at the altar of mathematical elegance.

    To me that’s the right spirit of the criticism (though a mathematician is an odd credential to deliver it).

  85. Elliptical Training Secrets and Theorem Shaving « Pink Iguana Says:

    […] Aaronson, Shtetl-Optimized, NSA: Possibly breaking US laws, but still bound by laws of computational complexity, here. […]

  86. Rahul Says:

    “Math could be defined as that which can still be trusted, even when you can’t trust anything else.”

    One (possibly naive) question that bugs me is whether physics or maths is more fundamental when it comes down to the basics. (Perhaps this is one of those dopey philosophical questions, and if so, it doesn’t deserve an answer in my book!)

    In the sense: has our math evolved to describe the physical laws of the universe we live in? Or did physical laws somehow evolve to fit the constraints of math?

    Could we encounter advanced creatures who give a perfectly successful description of our universe (say, QM) using math totally unrecognizable to us? Could we have lived in a universe where the Karp–Lipton theorem does not hold?

  87. Rahul Says:

    …alternatively, when you cannot trust anything else, should the one thing you trust be the laws of physics or the laws of math?

  88. Somebody Says:

    The aliens might have started with a different geometry and might therefore have successfully combined quantum theory and relativity; by the way, they cannot bypass physical phenomena like quantum theory and relativity.

    Assuming they approach physical laws and other stuff via an axiomatic approach (which I believe is reasonable), they cannot escape arithmetic, in which case they would have hit on the same number theory (cf. Dedekind’s categoricity theorem for second-order Peano arithmetic).

  89. Scott Says:

    Rahul #87: You should trust the laws of math. And here is my argument:

    You can easily use math to describe physical laws totally different from the laws we observe. And not just in principle—you can actually do it. You can start, of course, by adjusting the Standard Model parameters, but when you get more adventurous, change the number of spacetime dimensions, change QM or GR, or just throw out the known laws entirely and replace them with (say) Conway’s Game of Life, or Wolfram’s Rule 110. Math (like the honey badger) don’t care. 🙂 Moreover, as a matter of logic, of course you can’t rule out that we’re all living in a Matrix-style simulation, and that any of those alternative physical laws are the “real, true” laws of the simulating universe.

    By contrast, try to describe to me what it would be like for the Karp-Lipton Theorem to be false (I don’t mean by changing the definitions of NP, P/poly, etc., which you can always do and which is trivial). Sure, you can string those words together to form a grammatical sentence—but I predict that you won’t be able to give me even a single example of a possible state of affairs corresponding to that sentence.
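
    (For readers following along at home: the theorem in question says that $\mathsf{NP} \subseteq \mathsf{P/poly} \Rightarrow \mathsf{PH} = \Sigma_2^p$, i.e., if NP-complete problems had polynomial-size circuits, then the polynomial hierarchy would collapse to its second level.)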

  90. Rahul Says:

    Scott says:

    “By contrast, try to describe to me what it would be like for the Karp-Lipton Theorem to be false…”

    You are right. I won’t be able to.

    But I wonder if someone else can. It’ll be interesting to see.

    Maybe that’s why so many sci-fi authors dabble in tweaking physical laws; but are there any stories where an author has considered a different set of mathematical laws?

  91. Rahul Says:

    “Prefer conventional discrete-log-based systems over elliptic-curve systems”

    As a layman, one point that intrigues me is why people got around to using elliptic curve cryptography at all?

    Are there advantages other than smaller key sizes? And in an age where we stream megabyte-sized videos to mobile devices, is the benefit of a 256-bit ECC key versus a 3072-bit legacy key relevant at all?

    Are there niche segments where key-length parsimony actually matters? Which are these severely channel-constrained environments where key exchange occurs so frequently, and on such a low-bandwidth network, that key length matters so much?

    What am I missing here?

  92. Philip White Says:

    As much as I’m sure the NSA loves crypto and TCS as much as all of you, I would imagine that the higher ups think more strategically than mathematically, and thus more outside of the box.

    What I’m getting at is that if I were in charge of the military and government’s surveillance efforts, I wouldn’t be investing in crypto or TCS; I would be investing in better physical surveillance and reconnaissance technology. I.e., physics and engineering.

    There are plenty of conspiracy theorists who talk about “infrared sensors” or whatever tracking their movement; but as one can plainly see in this video of the pursuit of Boston Marathon bomber Tsarnaev, some of that technology has already become a reality:

    http://www.youtube.com/watch?v=43-8ZlAvjnE

    Bottom line: Why waste my time hacking your computer and intercepting your communications when I could fly a helicopter (or stealth fighter) over your house and watch you type into your computer? Yes, it’s expensive; but it’s also far more conceivable than breaking RSA.

    Perhaps we haven’t reached the point where this is possible yet, but if I were the NSA, I would be focused on that frontier.

  93. David Speyer Says:

    Scott, you write “It would be very astonishing (though less astonishing than, say, P=NP) if a classical polytime algorithm for factoring even exists.” I’d be curious to hear why you believe this. As an outsider who tries to be somewhat aware of these issues, I found Henry Cohn’s argument http://research.microsoft.com/en-us/um/people/cohn/Thoughts/factoring.html that there is no strong reason to either believe or disbelieve factoring is in P pretty plausible.

  94. Mike Says:

    David @93

    As Cohn says, “Of course, I have no real evidence for my views; if I had any good ideas for factoring, I’d be working on them instead of writing this. On the other hand, the people who talk about the great difficulty of factoring have equally little evidence. The net result is that it’s reasonable to hope that factoring is difficult, but you should regard that as a provisional hypothesis rather than a certainty.”

    Well, I don’t disagree with some of what you say. However, quantum physics has been shown to be capable of some factoring tasks. So, it’s not correct to say that there is “equally little evidence.”

  95. Douglas Knight Says:

    Rahul 91, I don’t understand how it could possibly be true, but it is widely claimed that key exchange bandwidth matters. What makes more sense is that the computation time of public key crypto matters.

    Public key crypto is mainly used for key exchange. It is expensive, but it is amortized over long messages, where the private key cost builds up. If you have lots of short messages, it is expensive, but I don’t really know any examples like that. Maybe if you hit a web server and it says “use cache.” And even if the messages are long, public key crypto introduces latency. Most web servers don’t do any crypto; I believe that they blame cost and latency.

    People usually say that keysize matters for small devices, but the only example they ever give is cell phones. The military’s ruggedness requirements might reduce its computational power.

    Here are some examples of people using ECC and caring about key length.

    (1) If you want to send keys on paper or over the phone, brevity helps. But usually it’s enough to verify the key over such a channel, and then you can use a hash to create brevity. There’s some related issue in the names of Tor nodes being derived from public keys.

    (2) Tor uses RSA, only 1024-bit. It recently degraded when a 3-million-computer botnet joined. It happens that it was in the process of switching to ECC, which alleviated the congestion. But Tor does some n^2 precomputations, which is the heart of the problem.

    (3) As I understand it, the iPhone does whole-disk encryption with ECC. While the phone is locked it can write to disk but not read. Thus it can take a photo when locked. Or it can save incoming mail, if it keeps your mail password in RAM, which is probably more valuable than anything on disk.

    (4) Bitcoin uses ECC. This might count as “lots of short messages.” The system as a whole has a lot more hashing than public key crypto, but only the miners do hashing, while the people using bitcoin just do public key, so you might expect a bottleneck there, though the latency of waiting for miners to record transactions dominates. Also, bitcoin is extensible, so it can use public key crypto to build complicated contracts. So that’s a situation where public key performance might matter.

    Exotic public key algorithms that might survive quantum computers have megabytes of keys.
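
    If you want to see the size gap for yourself, here’s a quick sketch using the widely deployed Python `cryptography` package (assumed installed; exact byte counts depend on the encoding):

      # Compare public-key sizes at roughly the "128-bit security" level:
      # RSA needs ~3072-bit keys where ECC gets by with ~256-bit ones.
      from cryptography.hazmat.primitives import serialization
      from cryptography.hazmat.primitives.asymmetric import ec, rsa

      rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
      ecc_key = ec.generate_private_key(ec.SECP256R1())

      def pub_len(key):
          # Length of the DER-encoded public key.
          return len(key.public_key().public_bytes(
              serialization.Encoding.DER,
              serialization.PublicFormat.SubjectPublicKeyInfo))

      print(pub_len(rsa_key), pub_len(ecc_key))  # roughly 400+ vs ~90 bytes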

    #90: the closest I’ve seen to “math fiction” is in Ted Chiang, with Greg Egan a distant second.

  96. Jay Says:

    Douglas #95 re #90

    I knew “Luminous” and “Dark Integers” from Greg Egan, but not Ted Chiang. Is it in “Stories of Your Life and Others”?

  97. J Says:

    @David Speyer Comment#93
    Peter Sarnak also thinks so. DL seems a little harder on intuition, but maybe it is also easy.

  98. J Says:

    @David Speyer Comment#93
    Maybe Scott’s intuition comes from the existence of Shor’s algorithm and the possibility that QM has some advantages. However, I will let Scott comment.

  99. Scott Says:

    David Speyer #93: I agree with all of Henry’s factual assertions. So at some point, as Henry himself says, it just comes down to personal hunches. However, here are two of the considerations leading to my hunch that factoring has some “core of classical hardness” to it (let’s say, that there exists an α>0 such that factoring can’t be solved in L(α) time):

    (1) In the past, when a problem was proved to be in P after a long research effort, that was usually preceded by the empirical discovery that the problem could “almost always” be solved quickly in practice. Two good examples are primality testing and linear programming. But the empirical situation with factoring is nothing like that (obviously not, or no one would use it as a basis for cryptography).

    (2) It took only about a year from when theoretical computer scientists first started thinking seriously about quantum algorithms, to the discovery of Shor’s factoring algorithm. Now, some people might interpret the existence of a fast quantum factoring algorithm, as evidence that maybe there’s a fast classical algorithm too. But I interpret it just the opposite way: if a fast classical factoring algorithm existed, then why wouldn’t it have been discovered almost as quickly as Shor’s algorithm?

    Having said that, I’d say the preceding two considerations only give me ~80-90% confidence that factoring is not in P (whereas for P≠NP, it’s >99%; I’m not sure exactly how many 9’s).

    Incidentally, Henry says in his essay that there is compelling evidence for P≠NP, but he never elaborates on that. I have my own ideas, but I’d be curious about what he considers the evidence for P≠NP to be, and why it doesn’t apply to factoring.
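
    (For reference: the $L(\alpha)$ above is the standard subexponential L-notation from the factoring literature, $L_N[\alpha] = \exp\left(O\left((\ln N)^{\alpha}(\ln\ln N)^{1-\alpha}\right)\right)$, so $\alpha=1$ is fully exponential, $\alpha=0$ is polynomial, and the number field sieve sits at $\alpha=1/3$.)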

  100. Scott Says:

    On reflection, someone might ask: how could I believe that there’s a 10-20% chance that factoring is in P, and also that the discovery that factoring is in P would be “astonishing”? However, I think one can give many other examples. E.g., many people might judge that the chance of the US and China going to war in our lifetimes is at least 10%. But if you read in the newspaper tomorrow that the US and China had gone to war, it would still be astonishing.

  101. J Says:

    I was actually wondering how Scott thinks it could be as high as 20% while the NSA guarantees 10^-100 🙂

  102. If you’re gonna go down, go down fight’n | Pyrolox Says:

    […] clear invasion of individual’s privacy.  The issue actually goes beyond this.  I found this article which points out that the NSA has also been pressuring companies to add backdoors into their […]

  103. Raoul Ohio Says:

    #42:

    ” … most people shocked that NSA …”

    Guess what? That is their job. Taxpayers pay for that.

    You might have noticed a lot less planes flying into buildings lately.

    A coincidence? Who knows.

  104. wolfgang Says:

    >> a lot less planes flying into buildings lately.

    unfortunately, the NSA could not prevent the Boston bombing – although one of the brothers was quite active on the internet…

    I guess it is not so easy to connect the dots after all,
    http://www.schneier.com/blog/archives/2013/09/the_limitations.html

  105. Vadim Says:

    Raoul,

    We had over 100 years of powered flight *before* anyone intentionally flew a commercial airliner into a building. Seems hardly fair to give the NSA all the credit for merely 12 years.

    Yeah, NSA activities could potentially prevent a terrorist attack, but most everyone has a line in their minds as to what they’re willing to give up in terms of privacy, anonymity, freedom, etc, in order to get a certain reduction in the chance of being victims of a crime/terrorist attack. I guarantee you that mandatory surveillance cameras in each and every room of each and every home would be even more effective at preventing a terrorist attack than intercepting communications. Should it be done?

    As I finished typing that, I realized that your post may have been sarcastic. I hope it was.

  106. Fred Says:

    Assuming everyone’s wrong and NP-complete problems can be solved in poly time, keeping it a secret would give a gigantic advantage, wouldn’t it?

    How easy would it be to keep it secret though?
    If a government agency were to discover it, could they keep it secret?
    If someone outside a gov agency were to discover it, could the government keep him/her from publishing about it?

    If you wanted to go public, how would you do it without possibly instantly destabilizing the entire internet? (I’m assuming P=NP means factorization would become easy)

    Or would you go private about it and contact your own government as a matter of national security/patriotism?

    (the 1992 movie “sneakers” is somewhat about that)

  107. Douglas Knight Says:

    If someone outside a gov agency were to discover it, could the government keep him/her from publishing about it?

    Three can keep a secret if two are dead.

  108. Raoul Ohio Says:

    A recent insider account of what happens in the NSA:

    http://www.zdnet.com/nsa-cryptanalyst-we-too-are-americans-7000020689/?s_cid=e540&ttag=e540

    Anyone wearing a tinfoil hat: don’t bother to read it.

  109. Vadim Says:

    It’s funny how, in the last year, the NSA has admitted to so many facts that a year prior would only have been believed by the so-called tin-foil hat crowd. Granted, there *is* an actual tin-foil hat crowd, but the belief that the NSA is monitoring more than they’re entitled to is pretty mainstream and entirely indisputable now.

  110. Bill Kaminsky Says:

    Sometimes I think that the academic community needs the P vs. NP equivalent of Alcoholics Anonymous… that is, it needs a place where someone like me can say:

    Hi, I’m Bill, and I admit I have a problem. I frequently not only spend time making direct attacks on whether or not P = NP, but worse, I frequently end up thinking I’ve succeeded in finding a practical polynomial-time algorithm for NP-complete problems! Now, I used to make excuses: “No, I’m not really a crackpot. I actually code up all my crazy ideas, test ’em, and find where they fail. Maybe one of ’em will turn out to be a good, albeit incomplete heuristic… yeah, yeah, that’s what I’m doing, and crackpots never do that.” But, like I said, I’m an addict. I need help.

    As such an addict, I emphatically second Fred’s question (Comment #106):

    If you [proved P = NP and] wanted to go public, how would you do it without possibly instantly destabilizing the entire internet? (I’m assuming P=NP means factorization would become easy)

    Or would you go private about it and contact your own government as a matter of national security/patriotism?

    I’m very curious to hear serious answers to this question.

    I’d expound at length on my current thoughts on the matter… but I’d like to first make sure people are still actively participating on this comment thread.

    After all, I’m struggling to admit I have a problem, and I need help. 😉

    P.S. The short version is that one does have to disclose to the government first, but one should use gov’t whistleblower best practices, which is to say:

    1) Divulge your amazing discovery to multiple, hopefully adversarial, parties who have the requisite uber-high security clearances. Here in the case of crypto and “breaking the internet”, I think this would entail not only the big ol’ mean-and-scary NSA and the supposedly cuddly-and-civic-minded (though-sometimes-tricked-or-pummeled-into-submission) NIST, but also:

    a) the various intelligence community Inspectors General (who, evidently, publish really top-notch reports on intelligence-community wrongdoing… which then, of course, receive some compartmentalized Top Secret classification that ensures the vast majority of policymakers, let alone the public, will never see ’em)

    b) the various people in the Judiciary who oversee the US gov’t’s, ahem, not-obviously-legal activities (similar comment to a)

    c) the various Government science advisory bodies with compartmentalized Top Secret clearances (mucho importante because you need someone whose bureaucratic imperatives impel them to look at the P = NP issue for its nigh-Godlike ability to revolutionize all of science and technology and not just break crypto)

    d) last but not least, though they haven’t seemed to comport themselves all that admirably, the legislators with the clearances to see compartmentalized Top Secret stuff (which I think is the Speaker of the House, the ranking House minority member, the Senate majority and minority leaders, and the House and Senate Intelligence committees… and on that note, be sure to include the backbenchers in those committees who seem to be at least somewhat troubled by what they’ve had to keep secret, not just the chairpersons who always seem to knuckle under)

    2) Given that (1) might not work, kick up your whistleblower skills yet another notch (i.e., become some combo of Bruce Schneier, Edward Snowden, and Julian Assange) and make one of those kick-ass encrypted-dead-man’s-switch disclosure-time-bombs and post it all over the Internet and the darknets. Allot, say, 1-3 years before it all goes public.

    Oh God, did I just enter a fugue state and type all of the above crap gaming through a scenario that likely never will arise?! I have a problem. I need help. 😉

  111. J Says:

    Professor, what makes you think DL is still hard to break for general primes?

  112. Fred Says:

    #110 where can I join that “P=NP Anonymous”?

  113. Fred Says:

    #110
    I can’t think of a precedent where a math discovery by a single individual would have such an impact.

    Just reading up about PGP from the wiki:

    “Shortly after its release, PGP encryption found its way outside the United States, and in February 1993 Zimmermann became the formal target of a criminal investigation by the US Government for “munitions export without a license””

    P=NP would be the digital equivalent of “discovering” and “building” the A-bomb all on your own in your garage.

    Proving P!=NP would be way safer!

  114. Vadim Says:

    Strong encryption had been categorized as munitions for export purposes (though even in the case of PGP, the government dropped its case). I’m not aware of attacks on encryption ever having been so categorized. Publish away.

  115. Bill Kaminsky Says:

    To Fred @ #112

    …where can I join that “P=NP Anonymous”?

    If there’s enough interest in the Greater Boston area, I’ll finally start a chapter at MIT. But serious inquiries only… after all, the first step is admitting you have a problem. 😉

    ————

    P.S. Of course, you could argue you don’t have a problem and have your P = NP “hobby” under control if, in addition to any “proof” of P = NP, you provide proof that your algorithm can at least do the modest task of solving one of the “unsolved instances” from the SAT 2013 Competition, which are databased along with all the correctly solved instances in loving detail at

    http://edacc4.informatik.uni-ulm.de/SC13/experiments/

    ———–

    P.P.S. Solving an “unsolved instance” from the SAT 2013 Competition is a much, much more modest task than, say, providing the prime factors of all the remaining RSA Challenge Numbers or providing the keys that join plaintexts and ciphertexts encoded under a full 14 rounds of AES-256, or any other good stuff that’ll instantly cause you to be a person of interest to the NSA even if your algorithm isn’t a true solution to P = NP in the rigorous, asymptotic sense.

    Namely, an “unsolved instance” from the SAT 2013 competition is merely one that couldn’t be solved within 5,000 seconds (i.e., 1 hr, 23 mins, 20 secs) by any of the algorithms in the competition, all of which were run on 2.83 GHz Intel Xeon E5440 processors, either singly in the “Sequential” track or on 8 of ’em in the “Parallel” track.

    In case you’re wondering, the smallest such “unsolved instance” that is (almost assuredly) in fact satisfiable is an instance of Random 7-SAT on 93 variables with 8164 7-CNF clauses. (The clause/variable ratio of 87.7 is right below the apparent SAT-UNSAT threshold.)
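
    If you’d like to manufacture such instances yourself before bothering anyone, here’s a minimal random k-SAT generator in the standard DIMACS CNF format (a sketch of my own; parameters default to the instance described above):

      # Emit a uniform random k-SAT instance in DIMACS CNF format.
      import random

      def random_ksat(n_vars=93, n_clauses=8164, k=7, seed=0):
          rng = random.Random(seed)
          lines = [f"p cnf {n_vars} {n_clauses}"]
          for _ in range(n_clauses):
              vs = rng.sample(range(1, n_vars + 1), k)  # k distinct variables
              clause = [v if rng.random() < 0.5 else -v for v in vs]
              lines.append(" ".join(map(str, clause)) + " 0")
          return "\n".join(lines)

      with open("random_7sat.cnf", "w") as f:
          f.write(random_ksat())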

    ————–

    P.P.P.S. If you don’t understand either of the last two postscripts, yet you think you’ve proved P = NP, you’re almost assuredly a crackpot, and until you’re ready to admit you have a problem we ask you not to come to “P=NP Anonymous” meetings. You’ll just upset the people there who are doing the hard work of the 12 Steps. 😉

  116. Fred Says:

    #115
    Haha, great, I was looking for a list of non-trivial problem instances to test. Perfect.

  117. Rohit P Khera Says:

    In reference to remarks about Koblitz: part of what Koblitz is saying is that there is too much emphasis just on reductions to reference number-theoretic problems. He points out that a lot of vulnerabilities, even with schemes based on robust one-way functions, arise out of “protocols”, viz., the application of the function to achieve a specific goal, such as establishing keys (a good example is some of the NIST ECC point-exchange-based key-establishment protocols such as ECDHE, which have known vulnerabilities such as KCI, i.e., key compromise impersonation). So part of Koblitz’s criticism is that not as much time is spent analyzing the message-exchange/protocol aspects of schemes.

  118. Gil Says:

    Scott, as you believe that there’s a 10-20% chance that factoring is in P (though it would greatly astonish you), would you give a similar chance (again, with great astonishment attached) to QC being impossible?

  119. Scott Says:

    Gil #118: Sorry, just saw your comment! I’d give maybe a 1% probability (if I’m feeling generous 🙂 ) to QC being fundamentally impossible. If it is, then I’m certain we have something new and astonishing to learn about the laws of physics.

  120. anonymous coward Says:

    scott 89: “You can easily use math to describe physical laws totally different from the laws we observe.”

    in the history of science thus far–ancient or modern, we’ve mostly chosen rules that match the most accurate data at that point in time and that’s why it appears as though we can modify the “laws” of physics as we see fit; the “laws” change as technology and techniques improve/change. there are some examples of laws that are closer to what i think of as laws (inviolable/unchangeable) e.g. the law of conservation of energy.

    to me, it’s much harder to envision the violation of concrete physical laws than mathematical ones. unlike mathematical axioms that one can create at will using, sometimes, vague or arbitrary definitions, physical phenomena are what they are, and the laws are what they are, and we can only try to understand them whereas with math, we can choose which axioms we like or fit what we’re trying to accomplish and explore without limits (other than those axioms).

    in other words, it’s the maths that’s untrustworthy–it has no constraints! you can begin your definition at a point of your choosing and there are no limits to how far you can stretch.

    the laws of thermodynamics, the uncertainty principle (which you often mention or allude to in this blog or your own work)…they’re necessarily useful because they already let us know where the ceiling is and we don’t try to do things like build 100% (let alone more) efficient machines etc. and can use them to tell whether a paper/article is worth reading or not.

    with maths, you find one geometry too restrictive? go to any of the other infinite ones (or create a new one). you’ve hit the wall with the currently-defined numbers? define new numbers for your research.

    “You can easily use math to describe physical laws totally different from the laws we observe.”

    we don’t prove things physically just by using maths. just because it’s mathematically valid doesn’t mean it’s physically valid, let alone a physical law otherwise there’d be plenty of physical laws now coming from string theories.

    the reason people thought that you could just compose velocities in the classical way regardless of how fast the objects were moving was based on mathematical induction (type of) reasoning.

    “You should trust the laws of math.”

    maybe; to do mathematics or to build upon it, but not necessarily in other disciplines.

    since this post is long enough, i’ll just apologise for the length and abruptly end with a quote from eddington (on probably my favourite law):

    “…if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation”

  121. scientists/ mathematicians scrounge some spine against the @#%& NSA | Turing Machine Says:

    […] 2. Shtetl-Optimized » Blog Archive » NSA: Possibly breaking US laws, but still bound by laws of compu… […]

  122. Shtetl-Optimized » Blog Archive » NSA in P/poly: The Power of Precomputation Says:

    […] on the Internet rushed to suggest each of these far-reaching possibilities.  Some of us tried to pour cold water on these speculations—pointing out that one could envision many scenarios that were a […]