How to Keep Secrets in a Post-Quantum World

BLOG: Heidelberg Laureate Forum

The list of panelists at the “Post-Quantum Cryptography” panel at the 9th HLF read like a who’s who in computer science.

The first to join the panel was Vinton ‘Vint’ Cerf, often referred to as the “father of the internet” and a rockstar of computer science. He works as an “evangelist” at Google – a job title he chose himself. Cerf also won the 2004 ACM A.M. Turing Award for his work on internetworking, including the design and implementation of the internet’s basic communication protocols (TCP/IP), and for his inspired leadership in networking.

Prior to the panel, Whitfield Diffie gave a speedy run-through of modern cryptography. Diffie, who received the 2015 ACM A.M. Turing Award for (among other things) the eponymous Diffie-Hellman key exchange, also sat on the panel. Alongside him was Adi Shamir, who won the 2002 ACM A.M. Turing Award for his work in co-developing the RSA algorithm.

The panel was completed by Vadim Lyubashevsky and Gregor Seiler, both research scientists at IBM Research Europe in Zürich. Their work deals with the theory and practice of quantum-safe cryptography, and they both made significant contributions to the design of some of the schemes recently selected as the upcoming quantum-safe standards for public-key encryption.

From left to right: Tom Geller, Whitfield Diffie, Vinton G. Cerf, Adi Shamir, Vadim Lyubashevsky, Gregor Seiler. © Heidelberg Laureate Forum Foundation / Flemming

What is post-quantum cryptography?

Before we even begin to ponder a “post-quantum world”, we need to agree on what post-quantum actually means. Put most simply, “post-quantum” in the context of cryptography means being resistant to attack by quantum computers, i.e. a system or algorithm producing encryption that a quantum computer is unable to break.

The problem with this definition, however, is that it brings up a rather uncomfortable question: how do we know that an algorithm cannot be broken? This doesn’t just apply to quantum computing either: we currently have no way of guaranteeing that a classical computing algorithm is secure. The best we can do at the moment is assert that we haven’t been able to break an algorithm yet and hope that remains the case. As Vadim Lyubashevsky put it: “We’ve worked on these problems. We think they’re hard. This is the best we can do.”

This may sound slightly terrifying to you. It did to me. A shiver ran down my spine when Whitfield Diffie pointed out that World War 2 saw the breaking of many algorithms that had previously been thought to be unbreakable. There is a silver lining, however. We do not actually need an algorithm to be impossible to crack. All we need is for an algorithm to be difficult enough to crack that it cannot be done in a useful length of time.

What do I mean by this? Well, each secret has a lifespan. I no longer care if anybody knows who I had a crush on when I was ten. This means that the cipher I used to write a note telling him I had a crush on him (I wish I was joking) only needed to remain safe from attack for fifteen years. Even government secrets have a point in time when they can be declassified, although this typically takes a bit longer. As long as the encryption algorithm takes longer to break than the expected lifetime of the secret, it is for all intents and purposes “secure”. Musing on this led Vint Cerf to propose a new idea of what “secure” should mean: “A relative work factor that postpones the breaking long enough that the information is no longer useful.”

P = NP?

I want to take a bit of an aside here to talk about complexity theory. Complexity theory is the branch of mathematics and computer science that investigates how much time or space an algorithm requires to run, usually as a function of the size of its input (usually denoted by \(n\)). There is a famous unsolved problem in complexity theory: P versus NP.

P stands for “polynomial time”. An algorithm is said to run in polynomial time if its runtime is a polynomial function of the input size \(n\). A problem is said to be in P if there exists a polynomial-time algorithm that solves it. NP stands for “nondeterministic polynomial time”, and a problem is said to be in NP if a proposed solution can be checked in polynomial time.

Every problem in P is also in NP. The P versus NP problem asks whether every problem in NP is also in P, i.e. whether or not the two classes are equal. This problem was first formally introduced in 1971 by Stephen Cook. Cook won the 1982 Turing Award “For his advancement of our understanding of the complexity of computation in a significant and profound way.” If that wasn’t enough to show how important this question is, in the year 2000 the Clay Mathematics Institute selected it as one of its “Millennium Prize Problems”. These are seven problems, each carrying a $1,000,000 prize for the first person to solve them. So far only one of the problems has been solved.

An important example of a problem that is in NP – but might not be in P – is prime factorisation: we can easily check a proposed factorisation by multiplying the prime factors together. We do not yet have a polynomial-time algorithm that can factorise numbers, but we can’t say for certain that one doesn’t exist. The reason this example is so significant is simple: the hardness of prime factorisation forms the foundation of the main public-key cryptosystem used in encryption today, RSA. If it were proven that \(P = NP\) – and an efficient factoring algorithm found – essentially all of modern cryptography would be vulnerable to attack.
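
To make the gap concrete, here is a minimal Python sketch (my own illustration, not something from the panel): checking a claimed factorisation just means multiplying the factors back together, while the naive way of finding one, trial division, takes time that grows exponentially in the number of digits of \(n\). All function names and numbers below are illustrative.

```python
# Illustrative only: the gap between *checking* and *finding* a factorisation.
from math import prod

def verify_factorisation(n: int, factors: list[int]) -> bool:
    """Checking a claimed factorisation is easy: multiply the factors together.
    (In practice one would also confirm each factor is prime, which can
    likewise be done in polynomial time.)"""
    return prod(factors) == n

def trial_division(n: int) -> list[int]:
    """Finding a factorisation naively: the loop can run up to sqrt(n) times,
    which is exponential in the number of digits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(verify_factorisation(2021, [43, 47]))  # True, and instant to check
print(trial_division(2021))                  # [43, 47], hopeless once n is huge
```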

Thankfully, I have some good news. While we do not have proof that \(P \ne NP \), the overwhelming consensus is that this is the case. William Gasarch, an American computer scientist, has conducted multiple polls of researchers on this topic. In 2002, 61% believed that \(P \ne NP \). By 2012, this had risen to 83% and it rose even further to 88% in 2019. Better still, when restricted to only experts, 99% of those surveyed in 2019 believed \(P \ne NP \).

This means that the biggest threat to modern cryptography is almost definitely quantum computing. You may, at this point, be wondering whether we could just adapt our current algorithms to become better. Diffie also had this thought during the panel, and questioned Shamir about whether RSA could just steadily outrun things, simply by increasing the number of bits. Shamir’s answer was clear: “If you have a secret that you want to maintain for the next 100 years, please don’t use RSA.”

Quantum computing is so powerful because it can exploit quantum effects such as superposition and interference, making entirely new kinds of algorithm possible: for some problems with no known polynomial-time classical algorithm, polynomial-time quantum algorithms exist. In fact, there already exists a polynomial-time quantum algorithm that can carry out prime factorisation, known as Shor’s algorithm. All that remains is to build the quantum computers themselves. It should be noted that we do actually already have some quantum computers, but they are relatively small in terms of the number of qubits, so none are capable of such a calculation yet.
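
To give a rough idea of why a fast period-finding routine is enough, here is a toy Python sketch of the classical scaffolding around Shor’s algorithm. The order-finding step, which a real quantum computer would perform in polynomial time, is done here by brute force, so it only works for tiny numbers; the structure and names are my own illustration, not code from the panel.

```python
# Toy sketch: Shor's algorithm = quantum order-finding + classical post-processing.
from math import gcd
from random import randrange

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod N).
    This is the step a quantum computer would do in polynomial time;
    here it is brute-forced, which is exponential in the size of N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N: int) -> tuple[int, int]:
    """Classical post-processing that turns an order-finding routine into factors."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:                      # lucky guess: a already shares a factor with N
            return g, N // g
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:             # skip the unlucky case a**(r/2) = -1 (mod N)
                p = gcd(y - 1, N)
                if 1 < p < N:
                    return p, N // p

print(factor(15))                      # e.g. (3, 5)
```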

How worried should we be?

Once we have more advanced quantum computers, we’re in big trouble – any messages we send using current cryptosystems will be vulnerable to decryption. This might not be something we need to worry about for a while, though.

A unique benefit of the Heidelberg Laureate Forum is the opportunity to get some of the brightest minds in their fields together to discuss problems such as this, and to give their views. Tom Geller asked the panellists when they thought post-quantum cryptography would be needed. It was a relief to hear Vint Cerf opt for “within the next 20 years,” while Shamir, Lyubashevsky and Seiler didn’t think it would be needed even then, opting for “within the next 30 years.”

There are many reasons we do not yet have this technology. The main challenge is widely agreed to be quantum decoherence. A quantum state can be thought of as determining the probability of each possible measurement outcome. When a quantum computer interacts with its environment in uncontrolled ways, this probability distribution “spreads out” (to use a non-technical term), and the state becomes impure to a greater and greater extent. With decoherence comes loss of information.

One way to deal with loss of coherence is to use multiple physical qubits to represent one logical qubit: so if \(1000\) logical qubits are needed, the machine should have, say, \(1,000,000\) physical qubits. This is quite a difficult machine to build, and even if you create many smaller machines, you’re then faced with the problem of how to link them.
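
As a loose classical analogy (my own illustration, not one used on the panel), the simplest picture of this overhead is a repetition code: one logical bit is stored in several physical bits and read back by majority vote. Real quantum error-correcting codes are far subtler, since qubits cannot simply be copied, but the physical-versus-logical overhead is the same idea; the numbers below are purely illustrative.

```python
# Classical analogy for error correction: many noisy physical bits per logical bit.
import random

def encode(bit: int, copies: int = 7) -> list[int]:
    """Store one logical bit in several physical bits."""
    return [bit] * copies

def add_noise(codeword: list[int], flip_prob: float = 0.1) -> list[int]:
    """Flip each physical bit independently with some small probability."""
    return [b ^ (random.random() < flip_prob) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Recover the logical bit by majority vote."""
    return int(sum(codeword) > len(codeword) / 2)

logical = 1
noisy = add_noise(encode(logical))
print(noisy, "->", decode(noisy))   # usually recovers 1 despite a few flips
```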

In his introductory talk, Whitfield Diffie had mentioned how quantum computing had been promised for the past 30 years but not yet realised. He described how cynics think it’s like controlled nuclear fusion – always promised but never arriving – whereas optimists always think it’s just round the corner. Adi Shamir added weight to this, pointing out that IBM had previously predicted a machine with more than 1,000 qubits by 2023, yet at the time of the talk, in September 2022, the best available was a 53-qubit device. This is a far cry from the potential 1,000,000-qubit machine we might need.

How do we stay safe?

Suppose, then, that quantum computers will be developed in 30 years’ time. At what point should we switch to quantum-resistant algorithms? These algorithms are already being developed: there are quantum-safe systems available now, and even some hybrid schemes. In fact, Lyubashevsky and Seiler themselves have been working at IBM to create post-quantum systems and algorithms.

The first thing to note is that although we don’t currently have quantum computers capable of decryption, any messages sent now could be stored and decrypted once we do have the means. This means that we can’t necessarily afford to wait long before switching to post-quantum cryptography. Given that the panel predicted we have between 20 and 30 years until such technology is developed, any company or government with long-term secrets that need keeping beyond that time should be worried.

There is a technical point here, which Vint Cerf was keen to make. The things likely to be cracked are the methods of key distribution, not the encryption of the messages themselves. As Diffie explained in his introductory talk, public-key methods are typically used to exchange the keys, and it is those keys that are then used to encrypt the information itself. It is therefore possible that the messages will still be safe, unless the keys are stored alongside the messages. Cerf admitted he believed that – for high-level secrets – so long as parties didn’t use public-key encryption, they were safe. But this doesn’t mean he’d recommend it. “I also think a brick is safe. It just sits there and doesn’t do anything. It’s very secure,” he quipped.
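
A minimal sketch of that division of labour might look like the following. The “key exchange” is just a placeholder, and the stream cipher is a deliberately toy construction for illustration only; none of this is a real protocol or anyone’s production code. The point is simply that the key, however it was agreed, is what does the actual encrypting.

```python
# Toy illustration: public-key methods agree on a key; symmetric encryption
# (here, a throwaway XOR stream cipher) then protects the message itself.
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy stream cipher: SHA-256 in counter mode. Not for real-world use."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key)))

# Pretend this key was agreed via RSA or Diffie-Hellman -- the step that a
# future quantum computer could retroactively break if the exchange was recorded.
shared_key = b"placeholder key agreed via a public-key exchange"

ciphertext = xor_crypt(shared_key, b"the actual secret message")
print(xor_crypt(shared_key, ciphertext))   # decrypts back to the plaintext
```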

Another important factor to consider when debating the switch to post-quantum cryptography is how distributed the world currently is. Nowadays, there is a general mistrust of institutions among the global population, which has led to a growing movement of people wanting to be responsible for their own data. There are already many distributed systems, such as the blockchain most famously used by Bitcoin. In distributed systems, everyone is in charge of their own key, so they would each have to switch to quantum-safe methods themselves. Gregor Seiler argued that “it takes a very long time to deploy cryptography in large systems such as the internet” and hence, in his opinion, the switch should be made now. The panel unanimously agreed that, even if current communication systems aren’t quantum-resistant by default, they should be designed in such a way that it is easy to switch.

From left to right: Tom Geller, Vinton G. Cerf, Adi Shamir, Vadim Lyubashevsky, Gregor Seiler. © Heidelberg Laureate Forum Foundation / Flemming

What’s next for cryptography?

It might seem that, in terms of cryptography, everything is now sorted: we have quantum algorithms that can break modern cryptography, and we have quantum-safe cryptosystems that, as far as we can be confident, are secure. There is still much to discover, however. The back and forth between those designing encryption schemes and those trying to break them is always going to continue, and there’s plenty of research to be done.

For a start, there is still more work to be done in the realm of classical computing. Quantum computers won’t be around for a while, so it’s important that the new algorithms are safe from attack by classical computers, too. Adi Shamir would like to see a move away from the current lattice-based schemes: most of the encryption schemes that remain unbroken are based on lattices, so if a general way of attacking lattice problems were found, we would be left very vulnerable. On the other hand, Shamir jokingly added that those wanting to work on breaking schemes should focus on lattice-based ones.
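
For readers curious what “based on lattices” means in practice, here is a toy, wildly insecure sketch of the learning-with-errors (LWE) idea that underlies most of these schemes: the public key hides the secret behind small random errors, and decryption works because the accumulated error stays small. The parameters and names are my own illustration and far too small to be meaningful.

```python
# Toy LWE-style encryption of a single bit (illustration only, not secure).
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 20, 97                                  # toy dimension, samples, modulus

def keygen():
    s = rng.integers(0, q, n)                        # secret vector
    A = rng.integers(0, q, (m, n))                   # public random matrix
    e = rng.integers(-1, 2, m)                       # small errors in {-1, 0, 1}
    b = (A @ s + e) % q                              # "noisy" linear equations about s
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(0, 2, m)                        # random 0/1 combination of rows
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(sk, ct):
    u, v = ct
    d = (v - u @ sk) % q                             # ~ bit*(q//2) plus a small error
    return int(min(d, q - d) > q // 4)               # closer to q/2 -> bit was 1

pk, sk = keygen()
for bit in (0, 1):
    print(bit, "->", decrypt(sk, encrypt(pk, bit)))  # recovers each bit
```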

How one creates a non-lattice-based scheme is a trickier problem. If any of the panelists knew, you could be sure that they’d have done it themselves. Some ideas that were floated, however, were looking into systems of algebraic equations, or at Diophantine equations, to see if they could be adapted in any way. Another suggestion was to utilise non-linear cellular automata.

Another area of research that remains less explored is that of virtual quantum computers. It’s possible to simulate a quantum computer on a classical one, but the time and memory required grow exponentially with the number of qubits. Given a virtual quantum computer, one could also look into using quantum algorithms to solve hard problems. I’ve mentioned before that quantum computers can carry out prime factorisation in polynomial time, but there are many other difficult problems that haven’t yet been explored with respect to quantum computers.
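
To see where that exponential cost comes from, here is a minimal state-vector “virtual quantum computer” in Python with NumPy (a sketch of the general idea, not any particular simulator): the state of \(n\) qubits is a vector of \(2^n\) complex amplitudes, so every additional qubit doubles the memory needed.

```python
# Minimal state-vector simulation: n qubits = 2**n complex amplitudes.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                   # start in |000>

for q in range(n):                               # put every qubit in superposition
    state = apply_single_qubit_gate(state, H, q, n)

print(np.round(np.abs(state) ** 2, 3))           # all 8 outcomes equally likely
print("amplitudes stored:", state.size)          # 2**n, doubling with each qubit
```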

Never stop learning

Throughout the panel, the audience learned a vast amount about how cryptography is conducted today, what the future of cryptography may look like, and the challenges we will face when quantum computers capable of decryption become a reality. The audience weren’t the only ones to learn something though. Over the course of the hour, the laureates quizzed each other on their areas of expertise, asking insightful questions to their peers.

The final gem of wisdom was given by Adi Shamir, and was directed at the audience and panel alike. He recommended that anyone interested in learning more should look into Qiskit.org – a free online introduction to quantum computing – and should read the lecture notes from Scott Aaronson’s undergraduate quantum computing course, which he has made freely available on his website.

At this point, Vinton Cerf asked the panel for book recommendations on quantum computation, and the tone of the panel became a lot more conversational. It was wonderful to see the laureates talk among themselves, and it was hugely inspiring that the panel ended with someone as esteemed as Cerf asking how he can learn more. That, perhaps, was the most important lesson of the entire week.

The full panel discussion, along with the entire scientific program of the 9th HLF, can be viewed on the HLFF YouTube channel.

Posted by

Sophie Maclean is a mathematician and maths communicator, currently studying for a PhD in Analytic Number Theory at King’s College London. She has previously worked as a Quantitative Trader and a Software Engineer, and now gives mathematics talks all over the UK (and Europe!). She is also a member of the team behind Chalkdust Magazine. You can follow her on Twitter at @sophietmacmaths

11 comments

  1. Preserving secrets among close friends is easy

    As long as you only communicate within a closed circle and keep your private code to yourselves, communicating secrets is easy. For example, you can exchange a sequence of numbers and interpret each number as an entry in a book (a word number) to recover the message, and only your friends know which book is used as the encoder.

    The actual encoding challenge arises when you use an encoding method that is publicly known and based on a key with finite precision.

    Question: How can you avoid that a future attack can break your secret and decipher what you have shared?
    Answer: You can’t. But if you encode your encoding with a second encoding known only to your friends, then you can make attacks very difficult. Double encoding can therefore improve security.

    • The actual encoding challenge arises when you use an encoding method that is publicly known and based on a key with finite precision.

      Exactly.

      In theory, encryption systems of this kind – ones that cannot be successfully attacked – are impossible; in practice, however, they do exist.
      So this is presumably about so-called one-way functions: reversing the computation is to be made practically impossible, and even so-called quantum computers get stuck once the key scheme, so to speak, reaches a certain complexity.

      About this “gag” – ‘However, the point is to communicate safely even (and especially) with those who may turn out to be your adversaries.’ [commentator friend Dr. Frank Wappler] – not much can be said from this side’s point of view; conceivably a “brain slip” was involved.
      For if the recipient of a message is untrustworthy, no cryptology is needed.

      • Dr. Webbaer wrote (06.11.2022, 15:06 o’clock):
        > […] Zu diesem “Gag” – ‘However, the point is to communicate safely even (and especially) with those who may turn out to be your adversaries.’ [Kommentatorenfreund Dr. Frank Wappler]

        … admittedly, that’s mostly my well-trained reflex to an all-too-well-known nausea …

        > – not much can be said […]

        But, for example, this little bit can be said. (Also available for our international readers.)

  2. Martin Holzherr wrote (03.11.2022, 07:24 o’clock):
    > Preserving secrets among close friends is easy […]

    However, the point is to communicate safely even (and especially) with those who may turn out to be your adversaries.

    p.s.
    Sophie Maclean wrote (02. Nov 2022):
    > […] Adi Shamir […] recommended that anyone interested in learning more […] should read the lecture notes from Scott Aaronson’s undergraduate quantum computing course, which he has made freely available on his website [ https://scottaaronson.com/ ]

    And there (p. 214) one can read:

    The standard game plan for much of modern physics goes like this:
    – Write down the Hamiltonian H of your system (or some approximation to H).
    – Diagonalize H.
    – Extract the energy eigenstates.
    – Then, as a first guess, see if your system is just sitting in its ground state doing nothing.

    Are there any (recommendable) lecture notes on how to determine to which approximation a particular Hamiltonian (that has been written down in some format) has been, or will be, the Hamiltonian H of a given system (especially in regard to the next trial, or attempt, to further characterize the given system) ?

    Are there any (recommendable) lecture notes on how to determine whether (or to which approximation) a given system had been “just sitting in its ground state doing nothing” (especially while a trial was conducted, or an attempt was made, to characterize the given system in terms of whether, or to which approximation, it had been “just sitting in its ground state doing nothing”) ?

    • @Frank Wappler (Quote): “However, the point is to communicate safely even (and especially) with those who may turn out to be your adversaries.”

      Yes, but adding privacy increases security. You can imagine a rich customer of a bank who only wants to communicate with one contact person in the bank, then you could use a so-called One Time Pad (Quote Wikipedia):

      In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is not smaller than the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition.
      ….
      The resulting ciphertext will be impossible to decrypt or break

      A bank customer who agrees on this special form of communication with his bank can be absolutely sure that no one other than him and the contact person at the bank can read the message. No matter how the technology improves.

      • Martin Holzherr wrote (03.11.2022, 15:22 o’clock):
        > […] a so-called One Time Pad (Quote Wikipedia):
        > »In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key […] The resulting ciphertext will be impossible to decrypt or break.«

        Oh, well. …
        If a ciphertext resulting from application of the one-time pad encryption technique were nevertheless decrypted (by an eavesdropper), or (at least) “broken” by failing to be decryptable (by the recipient) into an equivalent of the encrypted message (from the sender),
        then sufficient reason for such failure is still attributable to distinct instances (a.k.a. “copies”) of “the key” having to be pre-shared by (and thus distributed to) distinct and generally even separated parties.

        Surely the “Hamiltonian H of the sender-receiver-eavesdropper-and-all-other-unknown-unknowns system”, applicable to the specific case described above, could be considered, too.

        (And whatever Hamiltonian would be written down specifically, in a suitable format, would outright constitute »some approximation to H«.)

      • Howdy, Mr “Martin Holzherr” (the double quotation marks only because you use a pseudonym that is not recognisable as such, which some on the web consider improper), and briefly on this:

        @Frank Wappler (Quote): “However, the point is to communicate safely even (and especially) with those who may turn out to be your adversaries.”

        Yes, but adding privacy increases security. You can imagine a rich customer of a bank who only wants to communicate with one contact person in the bank, then you could use a so-called One Time Pad (Quote Wikipedia):

        So-called one-time pads are about the social dimension, since in the social sphere someone could defect, which would pose entirely different challenges for cryptological communication in general and …

        … would have nothing to do with the cryptological problem discussed here.

        Kind regards
        WB

  3. Cryptology works reliably – this, so to speak, as the key sentence up front.

    It can be done with keys that have been tried out bilaterally in advance (and exchanged in advance); this is reliable if a certain complexity has been built into these keys.
    And/or if it is not even apparent that an encrypted message – and thus a secondary message (the actual message) – is present at all; keyword:

    -> https://en.wikipedia.org/wiki/Steganography


    Modern forms of communication that require a security layer – the so-called web is meant – use so-called asymmetric encryption, which is likewise secure as long as the computations needed for decryption would cost an attacker too much time and too much computing power.

    So it has already become apparent in practice that, er, long, very long key lengths are required – in order to also fend off so-called quantum computers with their supposedly very, very high specific computing power, meaning, of course, the decryption of messages.

    “Very, very long key lengths” are easy to provide.
    Nowadays this is not done, for ergonomic and economic reasons.
    That is all there is to it.

    Kind regards + enjoy the rest of the Lord’s day
    Dr. Webbaer

  4. Dr. Webbaer wrote (06.11.2022, 14:56 o’clock):
    > […] tried out bilaterally in advance (and exchanged in advance […) …]

    The description of the evolution of a system (consisting in particular of the known “sender” and “receiver”, the “unknown party/spy(-ing attempt)” under consideration, and everything that, unknowingly or deliberately, is not individually named and taken into account), from “beforehand”, through “when it mattered”, to “afterwards”, is after all closely tied to specifying “the” “corresponding Hamiltonian operator H” stipulated by the above-mentioned Scott Aaronson, or (at least) to the quantitative assessment of the extent to which any expression that has been written down expresses an approximation.

    (Note also further generalisations, see memo.)

    > […] reliable if a certain complexity has been built into these keys.

    Prescribing a “certain complexity” of what is known suggests the question of how complex a Hamiltonian would have to be overall in order nevertheless (or possibly precisely because of that?) to describe unreliable evolution (of the essentials).

    In any case, the necessity of creating and distributing or keeping “copies”, or (otherwise) of determining in advance pairings of instances that “had matched each other beforehand”, guarantees that evolution (alias “a course of events”) appears at all in the description of the system mentioned above.

    Dr. Webbaer wrote (06.11.2022, 15:15 o’clock):
    > […] So-called one-time pads are about the social dimension, since in the social sphere someone could defect […]

    So-called one-time pads could indeed defect (with respect to one another), because (in the sense of “sufficient reason”) distinguishable instances “of them” have to be produced or found and then distributed or kept until they are used. (These one-time pad instances are not necessarily physically separate from the “sender” or the “receiver”, or even distinguishable at all, and in this sense they are thoroughly “social”. Otherwise they would have to be explicitly named and taken into account as individual known components of the system mentioned above.)

  5. Martin Holzherr, Dr. W.
    Symmetric encryption methods merely shift the security risk onto people. If the key has to be transmitted separately, the risk increases by 100%.

    • @fauv (quote): “Symmetric encryption methods merely shift the security risk onto people.”

      That seems to me to be a misunderstanding on your part. I am not arguing for symmetric encryption methods in general, but for the use of a special key for particularly critical information. I claim that communication within a small circle using a non-public encryption method such as the one-time pad is far better protected than, say, with RSA, a method that is only secure as long as prime factorisation remains extremely computationally expensive. Why is that? Quite simply: if some intelligence agency and the scientists it has brought in manage to crack RSA (concretely, RSA with key lengths of, say, 1024 bits), then that agency will not tell anyone. No, they will simply decrypt all the supposedly encrypted messages themselves and exploit this for their own ends without boasting about it. That is to say: methods like RSA are only secure until they are no longer secure. But you do not know when that point, at which they are no longer secure, has been reached.

      But if you use a method like the one-time pad, you can be absolutely certain that no intelligence agency can ever decrypt the message – unless the agency has the key. And with the one-time pad, the key is absolutely unique. That is: if an intelligence agency gets hold of a one-time pad key, it can only decrypt exactly the communication that was encrypted with that particular pad. It is quite different, however, if the agency can break RSA encryption with 1024-bit keys, because then it can decrypt every RSA-encrypted message, no matter by whom or where it was written.

      Conclusion: encryption methods like RSA are only secure until they are no longer secure. Encryption methods like the one-time pad, however, are secure for all time. Admittedly, such absolutely secure methods are only suitable for encryption within a small circle, for instance when two people who trust each other exchange a key between themselves.
