
%  These are a set of notes for a course I taught this semester on 
% public-key cryptography, emphasizing complexity aspects.  The topics:
%
% 1. Introduction 2. Elementary Number Theory 3. RSA and Rabin systems
% 4. Knapsack systems (I don't discuss breaking them.  Sorry!)
% 5. Introduction to NP-Completeness 6. Rabin's primality test
% 7. Probabilistic Encryption (Goldwasser-Micali)
% 8. Pseudo-random number generators (Blum-Blum-Shub)
%
%   This is a LaTeX file in two parts (43 pages when printed).  Concat-
% enate the two parts before running LaTeX.  Advanced readers who only
% want sections 6-8 can delete from the line \section{Encryption Systems}
% to the end of this file before concatenating.
%
%   I have included a copyright announcement ``just in case.''  However,
% I specifically authorize use of this material for teaching classes,
% inclusion in ftp sites, and similar non-profit activities.  I would,
% of course, appreciate appropriate attribution.
%
%   I do not claim to have proven any new results, and I have tried to give
% appropriate credit.  However, I hope that my exposition may be easier
% to understand than other versions.
%
%   Suggestions and (especially) corrections are welcome.
%
%    Charles Blair (ceblair@ux1.cso.uiuc.edu)
%
\documentstyle[12pt]{article}\begin{document}
\title{Notes on Cryptography }\author{Charles Blair\\
Business Administration Dept.\\University of Illinois\\
ceblair@ux1.cso.uiuc.edu}
\date{\copyright1991 by the author}\maketitle\tableofcontents
\newtheorem{Th}{Theorem}
\newtheorem{Le}[Th]{Lemma}\newtheorem{Co}[Th]{Corollary}
\newcommand{\pq}{\par\medskip}
\newcommand{\co}[3]{#1\equiv#2\hbox{ mod }#3}
\newcommand{\ro}{\hbox{ or }}
\newcommand{\zo}{Z^1_n}\newcommand{\ps}{p_{PS}}
\newcommand{\T}{{\bf T}} \newcommand{\F}{{\bf F}}
\newenvironment{lst}
{\begin{center}\begin{tabular}{l}}{\end{tabular}\end{center}}
\section{Encryption Systems}
An encryption system is a procedure which takes the original message
({\it plaintext\/}) and a small piece of information arranged in advance
between sender and receiver (the {\it key\/}) and creates an encoded
version of the message (the {\it ciphertext\/}).
\pq When we are considering the quality of an encryption system, we assume
the person trying to decode the message knows what the general pro\-cedure
is and is looking at the ciphertext--- the only thing he does not have
is the key.  We also assume the person sending messages does not spend
time trying to contrive a difficult-to-read message by using unusual
words or letter frequencies--- the sender is counting on the system to
provide all the needed security.
\pq Usually one assumes the person trying to break the code is only
working with the ciphertext.  However, there are situations in which
both plaintext and ciphertext of a previously encoded message are 
available.  For example, I often keep encrypted versions of examinations
on a mainframe computer, only decoding them just before having them
printed, and deleting the plaintext file immediately afterward.  If a
student was able to look at my files, he could keep a copy of the
encoded test and compare this with the test he took.  As we will see,
this may be very useful in decoding future tests.
\pq[One countermeasure against this type of {\it known-plaintext attack\/}
is to continually change keys, assuming an encryption using one key is
not helpful on a different key.  It can become difficult to keep track
of the different keys in use, especially if they are long.]
\pq A more demanding standard is that a code be safe against a
{\it chosen-plaintext attack}.  We are imagining that the encryption is
done by a machine, and that unauthorized persons may have access to
the machine (although we assume they are only using it in the normal
way, not allowed to take it apart).
\subsection*{Example 1: Simple substitution}
This is the simple letter-for-letter method found in Poe's ``The Gold
Bug'' and many other stories.  The key is a rearrangement of the
26 letters:\begin{lst}\tt ABCDEFGHIJKLMNOPQRSTUVWXYZ\\ \tt
actqgwrzdevfbhinsymujxplok\end{lst}
  Using this key, the plaintext:\begin{lst}\tt \label{pl}
  THE SECURITY OF THE RSA ENCODING SCHEME RELIES ON THE\\
\tt FACT THAT NOBODY HAS BEEN ABLE TO DISCOVER HOW TO TAKE \\ \tt
CUBE ROOTS MOD N WITHOUT KNOWING NS FACTORS\end{lst}
becomes the ciphertext:\begin{lst}\tt
  UZG MGTJYDUO IW UZG YMA GHTIQDHR MTZGBG YGFDGM IH UZG\\ \tt
WATU UZAU HICIQO ZAM CGGH ACFG UI QDMTIXGY ZIP UI UAVG \\ \tt
TJCG YIIUM BIQ H PDUZIJU VHIPDHR HM WATUIYM\end{lst}
The messages can be made harder to decode (but also harder to read!)
by leaving out the spaces between words.
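\pq A short Python sketch of this method, using the key above (the
function name is illustrative only):
\begin{verbatim}
# Simple substitution: each plaintext letter is replaced by the letter
# in the same position of the key; other characters are left alone.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
KEY      = "ACTQGWRZDEVFBHINSYMUJXPLOK"     # the key from the example

def substitute(text, key=KEY):
    table = dict(zip(ALPHABET, key))
    return "".join(table.get(c, c) for c in text.upper())

print(substitute("THE SECURITY OF THE RSA ENCODING SCHEME"))
# prints: UZG MGTJYDUO IW UZG YMA GHTIQDHR MTZGBG
\end{verbatim}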
\pq Most messages can be decoded by looking for frequently occurring
pairs of letters ({\tt TH} and {\tt HE} are by far the most common),
using these to identify a few letters to begin, and filling in the
remaining letters one at a time (``The Gold Bug'' gives a good
description, as do many books).
\pq In a known-plaintext situation, the whole code is obtained 
almost immediately.  However, in our example, the letters {\tt J},
{\tt P}, and others do not occur in the plaintext, so we could not
tell how they are encoded.  If we were allowed a chosen plaintext,
we would  use all the letters to get the entire key.
\subsection*{Example 2: The Vigen\`ere cipher and one-time pads}
This cipher works by replacing each letter by another letter a 
specified number of positions further in the alphabet.  For example
{\tt J} is 5 positions further than {\tt E}.  {\tt D} is 5 positions
after {\tt Y}.  ({\tt Y,Z,A,B,C,D})  The key is a sequence of 
shift amounts.  If the sequence is of length~10, the 1st, 11th, 21st,
\dots letters of the plaintext are processed using the first member
of the key.  The second member of the key processes plaintext letters
2, 12, 22, \dots and so forth.  If we omit spaces from the plaintext
on page~\pageref{pl}\  and use the key-sequence:$$3\ 1\ 7\ 23\ 10\ 5\ 19\ 14\ 
19\ 24$$we obtain \begin{lst} \tt
WILPOHNFBR BPMQRJKGTC QDVASSZGVF\\ \tt
HNLOOQBSLM QUOBPFVHMF DUULLTWMAY VCLBXFUZXR\\ \tt
REPPMTOSKF RXALDFDSVS EFYLYYLAHB QXPQRTNHDL\\ \tt
RXPKQSLTTA WPYP\end{lst}
(We have divided the ciphertext into groups of ten letters for
convenience.  The division into lines is arbitrary.)
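\pq A Python sketch of the encoding step, with the key-sequence above
(names illustrative only):
\begin{verbatim}
# Vigenere-type encryption: shift the i-th letter forward in the
# alphabet by key[i mod (length of key)] positions.
KEY = [3, 1, 7, 23, 10, 5, 19, 14, 19, 24]

def vigenere(plaintext, key=KEY):
    out = []
    for i, c in enumerate(plaintext):
        shift = key[i % len(key)]
        out.append(chr((ord(c) - ord('A') + shift) % 26 + ord('A')))
    return "".join(out)

# the beginning of the plaintext above, with spaces removed
print(vigenere("THESECURITYOFTHERSAENCODINGSCHEME"))   # WILPOHNFBRBPMQRJKGTC...
\end{verbatim}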
\pq This type of cipher was considered very secure at one time
($\sim1600$), but it is not really difficult to break.  Suppose we guess that
the first letter is {\tt T}.  This implies the eleventh letter 
is {\tt Y}, the 21st letter is {\tt N}, etc.  Now look at the
two-combinations that occur from different possibilities for the
second letter:\begin{lst}{\tt
TI YP ND EN NU AU SC OE OX BF NX OX TP}\quad(no shift of 2nd letter)\\
\tt TJ YQ NE EO NV AV SD OF OY BG NY OY TQ\\ \tt
TK YR NF EP NW AW SE OG OZ BH NZ OZ TR\\ \tt
TL YS NG EQ NX AX SF OH OA BI NA OA TS\\
\quad(skipping over some in the middle)\\ \tt
TF YM NA EK NR AR SZ OB OU BE NU OU TM\\ \tt
TG YN NB EL NS AS SA OC OV BD NV OV TN\\ \tt
TH YO NC EM NT AT SB OD OW BE NW OW TO\end{lst}
The last line is the ``right answer.''  Although it shows several
bad combinations ({\tt NC NT SB NW}), mostly caused by the last
letter of one word being adjacent to the first letter of the next
word, it looks better than the other possible rows.  Once the
second letter has been identified, the same approach can be used
to get the third letter. This approach is easily automated using
a table of digrams.\pq It is necessary to know the first letter
and the length of the key-sequence. If we assume the length is
not too large, a program can just try all possibilities, eventually
choosing the plaintext which looks best.\footnote{Mike Mendelson,
a student in this course in 1989, wrote a program to implement this algorithm.}
\subsubsection*{One-time pads}
\pq A long key-sequence makes this approach more difficult, since
we have fewer rows.  The extreme case is that in which the key-%
sequence is as long as the plaintext itself.  This leads to a
theoretically {\it unbreakable\/} cipher.  For any possible plain%
text, there is a key for which the given ciphertext comes from
that plaintext.\pq This type of cipher has reportedly been used
by spies, who were furnished with notebooks containing page after
page of randomly generated key-sequence.  Notice that it is essential
that each key-sequence be used only once (hence the name of the
system).  Otherwise the approach for Vigen\`ere systems described
above could be tried, since we would have at least two rows to
work with.\pq One-time pads seem practical in situations where one
agent is communicating with a central command.  They become less
attractive if several agents may
need to communicate with each other.  The one-time feature is lost
if X and Y inadvertently use the same page to talk as W and Z are
using.  Also capture of X's equipment makes it possible to overhear
a conversation between Y and Z.
\subsection*{Example 3: A Transposition System}
In this system, we will assume every line of the message is 63
characters long.  The key is a permutation of the numbers from
1 to 63, and each line of the plaintext is rearranged using the
permutation to produce the corresponding ciphertext.  For example
if the key is $$1\ 11\ 21\dots61\ 8\ 18\dots54$$(we would really want
to use a more complicated permutation) and we use the same
plaintext as in the previous two examples, we obtain:\begin{lst}\tt
TTRNRT UHOMO SFECE HYSGEH REDEN E NHS E A LE I I\ \ \ CTCE\ \ \ O SI\\ \tt
FN\ \ ET AHBCT\ \ DNDO AOBTRA TALOO TY IW CBEO K\ \ SEV\ \ H AS\ \ TOE HE\\ \tt
C HNO\ \ OWOA\ \ \ \ \ S\ \ UMOGR\ \ TIWC\ \ RNK\ \ \ BOU S\ \ STIT\ \ O NF\ \ EDTN\end{lst}
We are using the version of the plaintext including blanks.  The
second line of the plaintext has 55 characters, so we add 8 blanks
on the end.
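\pq A Python sketch of the encoding step (the permutation below is a
randomly generated stand-in for whatever key has been agreed on):
\begin{verbatim}
import random

# Transposition: pad each plaintext line to 63 characters, then
# rearrange its characters according to a fixed permutation (the key).
LINE_LEN = 63
rng = random.Random(0)                   # fixed seed: a reproducible toy key
KEY = rng.sample(range(LINE_LEN), LINE_LEN)

def encrypt_line(line, key=KEY):
    line = line.ljust(LINE_LEN)          # pad short lines with blanks
    return "".join(line[k] for k in key)

def decrypt_line(cipher, key=KEY):
    plain = [""] * LINE_LEN
    for i, k in enumerate(key):
        plain[k] = cipher[i]
    return "".join(plain)
\end{verbatim}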
\pq One method of decoding looks at a column of the ciphertext and
asks what other column could immediately follow it.  For example,
it is possible that the column following {\tt OBO} (the tenth
ciphertext column) is {\tt UAO} (the 8th), but the column {\tt TFC} would
yield the improbable two-letter combination {\tt BF}.
\pq As always, a longer message is easier to decode.  Unlike simple
substitution, it seems that blanks make the decoding process more 
difficult.
\pq What about a known-plaintext attack? Since there is only one 
Y in the first line of the plaintext, we can tell that column~12
of the plaintext is column~21 of the ciphertext, but there are
other things we can't tell.  In this example, there are 8 columns
of three blanks at the end of the plaintext, and we can't be sure
which of these corresponds to which of the all-blank ciphertext
columns.  (it doesn't matter for this message, but we would like to
know the entire key to deal with longer plaintexts in the future)
A carefully chosen plaintext can give us the entire key at once.
\section{Introduction to Number Theory\label{numth}}\subsection{Congruences}
The congruence $\co abn$ (``$a$ is congruent to $b$ mod $n$'') says that,
when divided by $n$, $a$ and $b$ have the same remainder.
$$\co{100}{34}{11}\qquad\co{-6}{10}{8}$$
In the second congruence, we are using $-6=8(-1)+2$. We always have
$\co abn$ for some $0\le b\le n-1$, and we are usually concerned with
that $b$.  If $\co abn$ and $c\equiv d$, we can add or multiply
$$\co {a+c}{b+d}n\qquad \co{ac}{bd}n$$
Division does not always work: $\co6{18}{12}$ but $3\not\equiv9$.
\subsection{The Greatest Common Divisor}
For $a$ and $b$, the number $(a,b)$ is the largest number which
divides $a$ and $b$ evenly. $$(56,98)=14\qquad(76,190)=38$$\begin{Th}
\label{T1}For any $a,b$ there are integers $x,y$ with $ax+by=(a,b)$\end{Th}
Proof: The equation can be solved by making a sequence of simplifying
substitutions:\begin{eqnarray*}30x+69y&=&3\\30x'+9y&=&3\quad[x'=x+2y]\\
3x'+9y'&=&3\quad[y'=y+3x']\\3x''+0y'&=&3\quad[x''=x'+3y']\end{eqnarray*}
It is easy to see that $x''=1$, $y'=0$ is a solution to the final
equation and we get a solution to the original equation by working
backwards:$$x'=x''-3y'=1\quad y=y'-3x'=-3\quad x=x'-2y=7$$
\pq We could also solve an equation like $30x+69y=15$ by multiplying
our solution: $y=-15$, $x=35$.  It should be clear that the equation
will have no solution in integers if 15 is replaced by something
that is not a multiple of $(30,69)=3$.\pq All other integer solutions of
$30x+69y=15$ may be obtained by changing the first solution:
$$y=-15+\frac{30}3t\quad x=35-\frac{69}3t\quad\hbox{for $t$ integer}$$
\pq If we do the process illustrated on the previous page for any
equation $ax+by=(a,b)$, we eventually get one of the coefficients
as zero and the other as $(a,b)$.  [In fact, this process is usually
presented as ``Euclid's algorithm for finding the greatest common
divisor.'']
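\pq The back-substitution process just described is easy to program.  A
Python sketch (the function name is illustrative only):
\begin{verbatim}
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = (a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    # from  b*x1 + (a mod b)*y1 = g  we get  a*y1 + b*(x1 - (a//b)*y1) = g
    return g, y1, x1 - (a // b) * y1

print(extended_gcd(30, 69))     # (3, 7, -3):  30*7 + 69*(-3) = 3
\end{verbatim}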
\pq It is important that this process is feasible [on a computer]
even if $a$ and $b$ are several hundred digits long.  It is easy
to show that the larger of the two coefficients decreases by at
least a factor of $1/2$ every two equations, hence that in twenty equations
the larger coefficient has decreased by a factor of $2^{-10}<10^{-3}$, so
a 600-digit number would not require more than 4000 equations.
[this analysis can be improved]
\pq We pointed out earlier that division does not work with congruences.
An important application of Theorem~\ref{T1} is that it does work for prime
numbers.\begin{Co} If p is a prime number, $\co {ar}{as}p$ and 
$a\not\equiv 0$, then $r\equiv s$.\end{Co}
Proof: Since $p$ is a prime,
$(a,p)=1$, so Theorem~\ref{T1} says there are integers $x,y$ with $ax+py=1$.
Hence $$\co {ax}1p\quad\hbox{and}\quad r\equiv(1)r\equiv axr\equiv xar
\equiv\co {xas}sp$$
\begin{Co}If p is a prime number and $a\not\equiv0$~mod~$p$, then
for any $b$, there is $y$ with $\co{ay}bp$.\end{Co}
Proof: We showed in the preceding proof that there is $x$ with
$\co {ax}1p$.  Let $y=bx$.
\begin{Co}[The ``Chinese Remainder Theorem''] If $(p,q)=1$, then
for any $a,b$, there is an $n$ with $$\co nap\quad
\hbox{and}\quad\co nbq$$\end{Co}
Proof: Since $(p,q)=1$, Theorem~\ref{T1} (applied to $p$ and $q$, then
multiplying through by $b-a$) gives integers $x,y$ with $px-qy=b-a$, i.e.
$$px+a=qy+b\quad\hbox{so let }n=px+a$$
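\pq A Python sketch in the spirit of the proof ({\tt pow(p, -1, q)} in
Python gives the inverse of $p$ mod $q$, which exists because $(p,q)=1$):
\begin{verbatim}
def crt(a, p, b, q):
    """Return n with n = a (mod p) and n = b (mod q); requires (p, q) = 1."""
    inv = pow(p, -1, q)                   # p * inv = 1 (mod q)
    return (a + p * inv * (b - a)) % (p * q)

print(crt(2, 7, 5, 11))     # 16:  16 = 2 (mod 7)  and  16 = 5 (mod 11)
\end{verbatim}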
\subsection{Powers modulo a prime}The sequence $$a\quad a^2\quad
a^3\dots\quad\hbox{mod }p$$ has many applications in cryptography.
Before looking at theoretical properties, the example below (done
using a pocket calculator) should make clear that it is practical
to compute these numbers, even when many digits are involved.
\pq Suppose we want to compute $432^{678}$~mod~987.  The basic trick
is to start with a number and keep squaring:
$$432^2=186624\equiv81\quad432^4\equiv81^2\equiv639\quad432^8\equiv
639^2\equiv690\dots432^{512}\equiv858$$
Since $678=512+128+32+4+2$, $$432^{678}\equiv(81)(639)\dots(858)
\equiv204\qquad\hbox{(I hope!)}$$Calculations with exponents 
involve not-too-many multiplications.  If the numbers have several
hundred digits, however, it is necessary to design special sub%
routines to do the multiplications (see Knuth, volume~2).
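\pq A Python sketch of the squaring trick (the built-in {\tt pow(a, x, p)}
does the same computation; the function below just makes the steps explicit):
\begin{verbatim}
def power_mod(a, x, p):
    """Compute a**x mod p by repeated squaring."""
    result = 1
    square = a % p
    while x > 0:
        if x & 1:                        # this binary digit of x is 1
            result = (result * square) % p
        square = (square * square) % p   # a^2, a^4, a^8, ...
        x >>= 1
    return result

print(power_mod(432, 678, 987))          # 204, as computed above
\end{verbatim}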
\pq Let us look at the sequence of powers of 2 mod 11:
$$2\ 4\ 8\ 5\ 10\ 9\ 7\ 3\ 6\ 1$$Each number from 1 to 10 appears
in the sequence.\begin{Th}\label{proot}Let $p$ be a prime.  There
is an $a$ such that for every $1\le b\le p-1$, there is $1\le x\le
p-1$ such that $\co{a^x}bp$.\end{Th}
It is not always the case that $a=2$.  The powers of 2 mod~7 are
2,~4,~1 after which the sequence repeats and we never get 3, 5, or 6.
\pq The proof of Theorem~\ref{proot} requires several steps, so we
will give it later.  For now, we want to look at some of its consequences.
\begin{Co}Let $a$ be as in Theorem~\ref{proot}. Then $\co{a^{p-1}}1p$.
\label{PF}\end{Co}Proof: We know that $a^d\equiv1$ for some $1\le d\le p-1$.  If
$d<p-1$, the sequence of powers of $a$ would start repeating before
we got all the numbers: $a^{d+1}\equiv a$, $a^{d+2}\equiv a^2$, etc.
\begin{Co}\label{Fe}For any $b\not\equiv0$,
 $\co{b^{p-1}}1p$.\end{Co}Proof: Let $a$ be
as in Theorem~\ref{proot}. Using Corollary~\ref{PF} $$b^{p-1}
\equiv a^{x(p-1)}\equiv\left(a^{p-1}\right)^x\equiv1$$
\begin{Co}If $\co xy{(p-1)}$, then $\co {b^x}{b^y}p$\end{Co}
Proof: For some integer $r$, $y=r(p-1)+x$ and by Corollary~\ref{Fe}
$$b^y\equiv\co{\left(b^{p-1}\right)^rb^x}{b^x}p$$
\begin{Le}\label{di}Let $b\not\equiv0$, $d$ the smallest
positive number such that $b^d\equiv1$.  Then for any $e>0$ with $b^e
\equiv 1$  $d$ divides $e$ evenly. In particular, by Corollary~\ref{Fe},
$d$ divides $p-1$ evenly.\end{Le}Proof: If $d$ does not divide $e$, then
$e=dr+s$ for some $0<s<d$, but $$b^s\equiv b^{e}\left(b^d\right)%
^{-r}\equiv1$$ would contradict the definition of $d$.
\subsection{Primitive roots}
Theorem~\ref{proot} showed that if $p$ is a prime, there is an $a$ such that
the equation $$\co{a^x}bp$$has a solution for any $b\not\equiv0$.
Such an $a$ is called a {\it primitive root\/} of $p$, and $x$
is called the {\it discrete logarithm\/} of $b$.\pq We showed in
the beginning of section~\ref{numth} that it is easy to obtain $b$ given
$a$ and $x$.  Finding $x$ given $a$ and $b$ is much harder.  Many modern
encryption systems are based on the fact that no efficient way of
computing discrete logarithms is known.
\pq No efficient method for always finding primitive roots is known.
However, it is often possible to find one in special cases.
We will use $p=1223$ as an example.  $p-1=2\cdot13\cdot47$. By Lemma~%
\ref{di}, if $a$ is not a primitive root, then we will either have
$a^{26}$, $a^{94}$, or $\co{a^{611}}1{1223}$.  $a=2$ and 3 fail, but
$a=5$ satisfies all three conditions, so it is a primitive root.  (we
could tell that $a=4$ would not be a primitive root without testing.  Why?)
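\pq A Python sketch of the test just used for $p=1223$ (the prime factors
of $p-1$ are supplied by hand; names are illustrative only):
\begin{verbatim}
P = 1223
FACTORS = [2, 13, 47]                    # prime factors of p - 1 = 1222

def is_primitive_root(a, p=P, factors=FACTORS):
    # a is a primitive root unless a**((p-1)/q) is 1 for some prime
    # factor q of p-1 (the three conditions mentioned above).
    return all(pow(a, (p - 1) // q, p) != 1 for q in factors)

print([a for a in range(2, 8) if is_primitive_root(a)])    # prints [5]
\end{verbatim}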
\pq It is easy to show that, if $a$ is a primitive root, $a^x$ is a
primitive root if and only if $(x,p-1)=1$.  In this example, this means
the number of primitive roots is$$1222\left(\frac12\right)\left(\frac
{12}{13}\right)\left(\frac{46}{47}\right)=552$$Thus, if we had just chosen
$a$ at random, the probability that it would be a primitive root is
$\approx.45$.  Choosing $a$ at random and testing until we found a
primitive root would not be expected to take too long.
\pq This is an example of a {\it probabilistic algorithm}.  It is possible
for it to take a long time, but the amount of time needed on average is
reasonably small.  We will see many other probabilistic algorithms later.
\subsubsection*{Proof of Theorem \ref{proot}}
Let $a\not\equiv0$, $d$ be the smallest positive number for which $a^d
\equiv1$ (there must be such a $d$ since $a^K\equiv a^L$ implies $a^{K-L}
\equiv1$).  If $d=p-1$, $a$ is a primitive root.  If $d<p-1$, we will
find $a',d'$ with $d'>d$.  If $d'<p-1$ the process is repeated until
we eventually obtain a primitive root.\begin{Le}There are at most $d$
solutions to a congruence involving a polynomial of degree $d$:
$$\co{x^d+\alpha_1x^{d-1}+\dots+\alpha_d}0p$$In particular, there are at
most $d$ values of $x$ with $x^d\equiv1$.\label{B}
\end{Le}{\bf Proof:} This can be proved in
the same way as the corresponding theorem in ordinary algebra: if $x=\beta$
is a solution, the polynomial can be written as $(x-\beta)$ times a poly%
nomial of degree $d-1$, which by induction has $\le d-1$ solutions.
\pq We return to the proof of Theorem~\ref{proot}. The sequence 
$$a\quad a^2\quad a^3\dots \co {a^d}1p$$consists of $d$ different solutions
of $x^d\equiv1$.  If $d<p-1$, let $b$ be any non-member of the sequence,
with $e$ the smallest positive number with $b^e\equiv1$. 
If $e>d$, we may take $a'=b$, so we will assume $e\le d$ from now on.  
By Lemma~\ref{B}, $b^d\not\equiv1$, which implies $e$ does not divide
$d$ and $e/(d,e)>1$. 
$$\hbox{Let\quad}a'=a^{(d,e)}b\qquad c=\frac 
d{(d,e)}$$To complete the proof, we will show that if $a'^x\equiv1$,
then $x$ is divisible by $ce=de/(d,e)>d$.\pq
Since $(c,e)=1$, Theorem~\ref{T1} implies there are $K,L$
with $cK+eL=1$. If $a'^x\equiv1$, then $a'^{cx}\equiv b^{cx}\equiv1$.
By Lemma~\ref{di}, $cx=eM$ for some integer $M$ and
$$x=(cK+eL)x=e(KM+Lx)$$so $x=ex'$ for some integer $x'$.  Together with
$a'^x\equiv1$ and Lemma~\ref{di}, this implies for some integer $N$
$$(d,e)ex'=dN\Rightarrow ex'=cN\Rightarrow (cK+eL)x'=c(Kx'+LN)$$so
$x'$ is divisible by $c$.
\section{Encryption techniques based on powers and congruences}
\subsection{The Diffie-Hellman key exchange procedure}
A and B are communicating.  C hears everything A and B say.
A and B want to agree on a number, without C knowing what the 
number is. It may be, for example, that A~and~B plan to use the
number as the key for future encoded messages.
The procedure (also often called a {\it protocol\/}):
\pq A and B agree on a (large) prime $p$ and a primitive root $a$.
These numbers are also known to C.  A secretly chooses a (large)
number $X_1$, B secretly chooses $X_2$.  $a^{X_1}$ and $a^{X_2}$
 mod~$p$ are publicly announced (hence known to C).  The secret number
will be $S=a^{X_1X_2}$~mod~$p$.
$$\hbox{A calculates }S\equiv\left(a^{X_2}\right)^{X_1}\qquad
\hbox{B calculates }S\equiv\left(a^{X_1}\right)^{X_2}$$
A possible drawback to this system is that neither A nor B controls
what $S$ is.  If $S$ is not a satisfactory number, they may have
to repeat the protocol.
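\pq A Python sketch of the protocol (the prime, the primitive root, and
the secret exponents below are tiny and purely illustrative):
\begin{verbatim}
import random

p, a = 1223, 5                   # public: a prime and a primitive root
x1 = random.randrange(2, p - 1)  # A's secret number
x2 = random.randrange(2, p - 1)  # B's secret number

A_public = pow(a, x1, p)         # announced by A, so known to C
B_public = pow(a, x2, p)         # announced by B, so known to C

S_A = pow(B_public, x1, p)       # what A calculates
S_B = pow(A_public, x2, p)       # what B calculates
assert S_A == S_B                # both equal a**(x1*x2) mod p
\end{verbatim}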
\pq Diffie and Hellman suggest the procedure can also be used in
a situation in which $n$ people must find, for each pair of people,
an agreed-upon number.  For $1\le i,j\le n$ the number is $a^{X_iX_j}$.
\subsection{The Rivest-Shamir-Adleman public key system}
A sets up a system so that anyone can send him an encoded
message, but only A will be able to decode it.  The message is represented
as a number $M$.  The encoding is done by a publicly known function $f(M)$,
with A the only person who knows how to compute $f^{-1}$.
A chooses two large primes $p$, $q$ which he keeps secret.  He announces
$n=pq$ and another number $d$, with $(d,p-1)=(d,q-1)=1$ (one way to
do this is to choose $d$ a prime larger than $p/2$ and $q/2$.)
The encoding is $$f(M)\equiv M^d\hbox{ mod }n$$where $M$ and $f(M)$ are both $\le n-1$.
We have seen $f$ can be computed in a realistic amount of time
even if $M$, $d$, $n$ are many digits long.
\pq A computes $M$ from $M^d$ using his knowledge of $p$, $q$. By
 Corollary 8, $$\hbox{If }\co {de}1{(p-1)}\hbox{ then }\co{\left(M^d\right)^e}
Mp$$Similarly $\co{\left(M^d\right)^e}Mq$ if $\co {de}1{(q-1)}$.
$e$ satisfies these two conditions if $\co{ed}1{(p-1)(q-1)}$.  Theorem~1
says we can let $e=x$, where $x$ is a solution of $$dx+(p-1)(q-1)y=1$$
Since $\left(M^d\right)^e-M$ is divisible by $p$ and by $q$, it is
divisible by $pq$, hence we can recover $M$ from $M^d$ by raising it to
the $e$-th power mod $pq$.
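\pq A Python sketch of the whole system with toy primes (a real key
would use primes of a hundred digits or so; the numbers are illustrative):
\begin{verbatim}
p, q = 47, 59                    # A's secret primes
n = p * q                        # announced: n = 2773
d = 17                           # announced: (d, p-1) = (d, q-1) = 1

# A's secret decoding exponent e satisfies d*e = 1 (mod (p-1)(q-1))
e = pow(d, -1, (p - 1) * (q - 1))

M = 920                          # a message, 0 <= M <= n - 1
C = pow(M, d, n)                 # anyone can encode
assert pow(C, e, n) == M         # only A, who knows e, can decode
\end{verbatim}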
\pq It is crucial to the security of this system that knowledge of
$n$ does not allow an eavesdropper to calculate $p$ and $q$. The
crude approach of dividing $n$ by all numbers up to $\sqrt n$ would
take $\sim10^{50}$ steps for a 100-digit $n$. However, many famous
mathematicians have been unable to devise significantly better
factoring algorithms, and this problem has been studied for at
least 100~years.
\pq One practical difficulty in using this system is the need to
do calculations with many-digit numbers, especially to find primes.
Another difficulty is that the inventors of this system have patented
it.  Amateur programmers who have posted implementations on electronic
bulletin boards have received letters from ``RSA Security, Inc''
warning of possible patent infringement.
\subsection{A public key system as hard as factoring\label{Ra}}
It is possible in theory that there is some way of computing $f^{-1}$
for the system in the previous section that does not involve deter%
mining $p$ and $q$. In the original RSA paper, the authors say
\begin{quotation}It may be possible to prove that any general method
of breaking our scheme yields an efficient factoring algorithm.  This
would establish that any way of breaking our scheme must be as diff%
icult as factoring. We have not been able to prove this conjecture,
however.\end{quotation}
To see the difficulties involved in trying to prove such a thing,
suppose that one could show that knowledge of a ciphertext $f(M)$
and a plaintext $M$ enabled one to find $p$ and $q$.  Then one 
could factor $n$ as follows:\begin{enumerate}
\item Choose any $M$.\item Compute $f(M)$.  [Remember, we are assuming
$f$ is publicly available.  Furthermore, $f(M)$ can't be too hard
to compute, or the code would be impractical.]\item Use the assumed
method to obtain $p$, $q$.\end{enumerate}
In words, we are unable to distinguish between the situation
in which $f(M)$ is obtained from $M$ (easy) and the (presumably difficult)
situation in which $M$ is obtained from $f(M)$.
\pq Rabin has suggested an alternative to the RSA system in which there
is a direct connection to factoring.  As in RSA, $n=pq$ is announced
publicly, with primes $p$, $q$ kept secret.  For technical reasons, we
assume $\co {p,q}34$.
The encoding function is  $$\co {f(M)}{M^2}n$$  
 The way we avoid the difficulty described above is that there are 
{\it four\/} numbers $M_1,M_2,M_3,M_4$ with $f(M_i)\equiv f(M)$.  The
key facts are:\begin{enumerate}\item If $p,q$ are known, it is easy
to compute all the $M_i$ given $f(M)$.\item If we are given $n$, $f(M)$,
and all the $M_i$, we can calculate $p$, $q$.
\end{enumerate}We are {\it not\/} able to obtain $p$, $q$ from just one
of the $M_i$, so the approach based on $M$ and $f(M)$ described above
won't work.  One drawback of this system is that, even with knowledge of
$p$ and $q$, one can only say the number sent is one of the four $M_i$,
without being able to identify which one.  In practice, this is not
serious, since it is very unlikely that more than one of the $M_i$ would
correspond to a feasible message.
\pq{\bf proof of 1:} Since $\co p34$, there is an integer $k$ with $4k=p+1$.
If $\co x{\left(f(M)\right)^k}p$, then using Corollary~8:
$$x^2\equiv \left(\left(M^2\right)^k\right)^2\equiv \co {M^{4k}}{M^2}p$$
Similarly if $\co y{\left(f(M)\right)^L}q$ [$4L=q+1$], then $\co {y^2}{M^2}q$.
The $M_i$ are given from Corollary~4 by$$\begin{array}{llll}
\co {M_1}xp&\co{M_2}xp&\co{M_3}{-x}p&\co{M_4}{-x}p\\
\co{M_1}yq&\co{M_2}{-y}q&\co{M_3}yq&\co{M_4}{-y}q\end{array}$$
\pq{\bf proof of 2:} If we know the $M_i$, there will be two like $M_1$
and $M_3$ above with $\co {M_1+M_3}0p$ and $M_1+M_3\not\equiv0$~mod~$q$.
Thus we can calculate $(M_1+M_3,n)$ to obtain $p$.
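\pq A Python sketch of decoding with toy primes, both congruent to 3
mod 4 as required (names illustrative only):
\begin{verbatim}
p, q = 7, 11                     # secret primes, both = 3 (mod 4)
n = p * q

def rabin_roots(c, p=p, q=q):
    """The four M with M*M = c (mod pq), found as in the proof of 1 above."""
    x = pow(c, (p + 1) // 4, p)          # x*x = c (mod p)
    y = pow(c, (q + 1) // 4, q)          # y*y = c (mod q)
    inv = pow(p, -1, q)                  # for combining by Corollary 4
    roots = []
    for s in (x, p - x):
        for t in (y, q - y):
            roots.append((s + p * inv * (t - s)) % n)
    return roots

M = 20
print(sorted(rabin_roots(M * M % n)))    # [13, 20, 57, 64]: includes 20
\end{verbatim}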
\pq Unfortunately, this cryptosystem is vulnerable to a chosen-plaintext
attack, even if we assume that the person trying to break the code gets
only one of the $M_i$, chosen randomly.  The attacker keeps generating
pairs $M$, $f(M)$ until he gets an $M_i$ with $(M+M_i,n)=p$ or $q$.
\section{Subset-Sum (Knapsack) problems and their uses}
\subsection{Subset-sum problems are hard}
A subset of the numbers $$\begin{array}{*{10}{r}}267&493&869&961&1000&1153&
1246&1598&1766&1922\end{array}$$ adds up to 5842. Spend a few minutes
trying to find such a subset.  Whether you succeed or not, I hope
you are convinced the task is not trivial.
\pq This is an example of a {\it subset-sum\/} problem.  In general,
we are given $n$ natural numbers $a_i$ and a target number $T$ and 
asked whether there is an $S\subset\{1,\dots,n\}$ with $$\sum_{i\in S}
a_i=T\qquad(*)$$A seemingly simpler problem is the {\it subset-sum decision\/}
problem.  For a given $a_i$ and $T$, decide whether there is an $S$
for which $(*)$ holds, without being required to identify such an $S$.
However, it can be proved that the decision problem is just as difficult
as the subset-sum problem in this sense:
\begin{Th} \label{re}
Suppose we had a method of solving the subset-sum decision
problem.  Then we could solve a subset-sum problem using the assumed
method $n$ times.\end{Th}(the $n$ is not particularly important--- the
main thing is that the number of uses of the method is not too large.)
\pq Proof: Begin by using the method to decide if
$T$ is a sum of the $a_i$--- if not, we can stop immediately.  Then
use the method to determine if $(*)$ holds for some $S\subset\{2,\dots,n\}$.
If the answer is yes, we ignore $a_1$ for the rest of the analysis.  If
the answer is no, we know we must have $1\in S$.  In this second case,
we continue by using the method to decide whether there is $S\subset\{3,\dots,
n\}$ with $$\sum_{i\in S}a_i=T-a_1$$A yes answer means we can assume $2\notin
S$, otherwise $2\in S$.\pq The idea of this proof is more important than
the specific result.  We show that one problem is as difficult as another
by showing that a method of solving the supposedly easier problem can be
used to solve another problem.  This involves constructing one or several
easier problems whose solution answers the hard problem.  We saw another
example of this idea in our discussion of the Rabin encryption system
(section~\ref{Ra}).\pq Using more elaborate versions of the techniques in
Theorem~\ref{re}, it can be shown that a method of solving
the subset-sum decision problem could be used to solve many other problems,
including:\begin{itemize}\item Factoring\item The Travelling Salesman
Problem\item Any Integer Programming Problem\item The Satisfiability
Problem\end{itemize}Don't worry if you are not familiar with the details
of these problems.  The important point is that they are all well-known
problems for which many people have been unable to find efficient solu%
tion methods, which makes it unlikely that there is a method which solves
all subset-sum decision problems efficiently (we will go into more detail
on this in section~\ref{NP}).
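\pq The reduction in Theorem~\ref{re} is easy to express as a program.
A Python sketch, with the assumed decision method passed in as a
function (the brute-force method below is only there for testing):
\begin{verbatim}
from itertools import combinations

def find_subset(a, T, decide):
    """Recover a subset summing to T, calling the assumed decision
    method decide(numbers, target) about n+1 times."""
    if not decide(a, T):
        return None                      # no such subset exists
    chosen, remaining, target = [], list(a), T
    while remaining:
        rest = remaining[1:]
        if decide(rest, target):         # the first number can be ignored
            remaining = rest
        else:                            # the first number must be included
            chosen.append(remaining[0])
            target -= remaining[0]
            remaining = rest
    return chosen

def brute_decide(nums, t):               # stand-in for the assumed method
    return any(sum(c) == t for r in range(len(nums) + 1)
               for c in combinations(nums, r))

a = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
print(find_subset(a, 5842, brute_decide))     # a subset adding up to 5842
\end{verbatim}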
\pq The discussion above makes it plausible that some subset-sum problems
are difficult.  Further, there is some evidence that the ``typical''
subset-sum problem is not easy.
V.~Chv\'atal\footnote{``Hard Knapsack Problems,'' {\it Operations Research},
vol.~28, pp.~1402--1411} has shown that if the $a_i$ and $T$
are randomly chosen, then with high probability (i) there will be no
$S$ for which $(*)$ holds (ii) certain simple ways of proving this will
not work.
\subsection{A proposed public-key system based on subset-sum}As an exam%
ple of an easy subset sum problem, consider the task of determining
what subset of $$\begin{array}{*{10}{r}}1&3&6&14&27&60&150&300&650&1400
\end{array}$$ adds up to 836.  The $a_i$ in this problem have the special
property that every number is greater than the sum of all preceding 
numbers ($27>1+3+6+14$, etc).  1400 clearly cannot be in the set.  If
650 is not in the set, we would be in trouble, since the sum of the
remaining numbers is $<650$, hence $<836$.  Thus 650 must be in the 
set, and we now have the task of finding numbers which add up to $836-650
=186$.  300 is too big, and the same reasoning as before shows that 150
must be in the set.  If we continue, it is easy to identify the desired
set as $\{650,150,27,6,3\}$.
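\pq The reasoning above is a simple greedy procedure.  A Python sketch
(names illustrative only):
\begin{verbatim}
def solve_superincreasing(a, T):
    """Subset-sum when every number exceeds the sum of all smaller ones:
    work downward from the largest number."""
    chosen = []
    for x in sorted(a, reverse=True):
        if x <= T:               # then x must be in the set, as argued above
            chosen.append(x)
            T -= x
    return chosen if T == 0 else None

print(solve_superincreasing([1, 3, 6, 14, 27, 60, 150, 300, 650, 1400], 836))
# prints [650, 150, 27, 6, 3]
\end{verbatim}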
\pq We began with the problem of identifying a subset of
$$\begin{array}{*{10}{r}}267&493&869&961&1000&1153&
1246&1598&1766&1922\end{array}$$ which adds up to 5842. 
What we didn't mention before was that the $a_i$ were carefully
chosen to make them directly related to the $a_i$ in the easy subset-sum
problem:$$\co{267}{300(1000)}{2039}\quad\co{493}{27(1000)}{2039}\quad
869\equiv60(1000)\dots$$where 2039 is a prime chosen in
advance (larger than any of the numbers in the easy subset-sum problem)
and 1000 is an arbitrarily chosen number.
\pq To find the subset, begin by solving $\co{1000x}1{2039}$, which
gives $x=1307$.  If we let $a_i$ be the numbers in the easy problem,
the hard problem can be written as$$\co{\sum_{i\in S}(1000a_i)}{5842}{2039}$$
When we multiply by 1307, this becomes$$\sum a_i\equiv1307(5842)\equiv
1478$$It is easy to identify $\{1400,60,14,3,1\}$ as a subset
which adds to 1478, and the desired subset of the original
system is $$\{\co{1246}{1400(1000)}{2039},869\equiv60(1000),1766,
961,1000\}$$\par This would seem to give us a good public-key
system: a problem which is easy once some special information
(the 2039 and the 1000) is known, difficult without the information.
Unfortunately, the special type of subset-sum problem created in
this way can be solved even without the special information.
There is a sequence of papers showing how to solve special subset-sum
problems and proposing a refinement which, in turn, was broken by
the next paper in the sequence. This has not happened with the
RSA system, but there is no guarantee that it won't!
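\pq A Python sketch of decoding with the special information (the 2039
and the 1000); all names are illustrative only:
\begin{verbatim}
EASY = [1, 3, 6, 14, 27, 60, 150, 300, 650, 1400]
PRIME, MULT = 2039, 1000
HARD = [(MULT * x) % PRIME for x in EASY]       # the published numbers

def decode(target, easy=EASY, prime=PRIME, mult=MULT):
    inv = pow(mult, -1, prime)                  # 1307 in the example above
    t = (inv * target) % prime                  # target of the easy problem
    subset = []
    for x in sorted(easy, reverse=True):        # greedy, as in the easy problem
        if x <= t:
            subset.append(x)
            t -= x
    return [(mult * x) % prime for x in subset] # back to the published numbers

print(sorted(decode(5842)))     # [869, 961, 1000, 1246, 1766]
\end{verbatim}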
\subsection{Other uses of the subset-sum problem}
The results mentioned at the end of the last section do {\it not\/}
contradict the presumed difficulty of subset-sum problems in
general.  It is only the specially constructed problems which
are known to be easy.  There are security problems other than
public-key codes for which subset-sum problems are useful.
\subsubsection{Computer passwords} A computer needs to verify a
user's identity before allowing him or her access to an account.
The simplest system would have the machine keep a copy of the
password in an internal file, and compare it with what the user
types.  A drawback is that anyone who sees the internal file
could later impersonate the user.\pq I believe this alternative
is actually implemented on some systems: the computer generates
a large number (say 500) of $a_i$.  They are stored in the 
internal file. A password is a subset of $\{1,\dots,500\}$.
(in practice, there is a program to convert a normal sequence-%
of-symbols password to such a subset.) Instead of having the
password for the user, the computer keeps the total associated
with the appropriate subset.  When the user types in the subset,
the computer tests whether the total is correct.  It does not
keep a record of the subset.  Thus impersonation is possible
only if somebody can reconstruct the subset knowing the $a_i$
and the total.
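\pq A Python sketch of this alternative (the sizes are shrunk and all
names are illustrative only):
\begin{verbatim}
import random

rng = random.SystemRandom()
A = [rng.randrange(1, 2 ** 40) for _ in range(500)]   # kept in the internal file

def total_for(subset):
    return sum(A[i] for i in subset)

# The password is a subset of the indices; the machine stores only the
# corresponding total, never the subset itself.
password = sorted(rng.sample(range(500), 20))
stored = total_for(password)

def check(claimed_subset):
    return total_for(claimed_subset) == stored

print(check(password))          # True only for a subset with the right total
\end{verbatim}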
\subsubsection{Message verification} A sender (S) wants to send
messages to a receiver (R).  Keeping the message secret
is not important.  However, R wants to be sure that the message
he is receiving is not from an imposter and has not been tampered
with.  $S$ and $R$ agree on a set of $a_i$ (say 500) and a 
set of totals $T_j$ (say 200).  These numbers may be publicly
known, but only $S$ knows which subsets of the $a_i$ correspond
to which $T_j$.  The message sent by $S$ is a subset of size 100
of $\{1,\dots,200\}$.  He sends it by transmitting, for each of the 100
chosen $T_j$, the subset of the $a_i$ which adds up to that $T_j$
(something only $S$ is able to produce).
\section{Subset-Sum Problems and NP-Completeness\label{NP}}
The phrase ``NP-complete'' has an intimidating sound. In this 
section, we will first define a new problem involving formulas
in logic, called the Satisfiability~Problem (SP).  We will use
the abbreviation~(SSP) for the subset-sum problem.  Our main
results will be:\begin{enumerate}\item If there is an algorithm
which efficiently solves (SSP), it can be used to efficiently
solve (SP).\item If there is an algorithm which efficiently
solves (SP), it can be used to solve (SSP).\item An algorithm
to solve~(SP) efficiently would give efficient solutions to
factoring and many other problems.\end{enumerate}
\subsection{The Satisfiability Problem}We will use capital\label{SP}
letters $A_i$, $B_i$, to stand for logical variables. These
stand for statements like ``221 is a prime number'' or
``{\tt TH} is the most common two-letter sequence in English,''
which are either true or false, i.~e., these variables have
values of either \T\ or \F. $\sim A_i$ (``not $A_i$'') is the
statement that $A_i$ is false, so it has value \T\ if $A_i$ has
value \F, and $\sim A_i$ has value \F\ if $A_i$ has value \T.
We will also be interested in more elaborate formulas:
$$A_1\ro\sim A_2\ro\sim A_4\ro A_7\ro A_8$$
This formula says that either $A_1$ is true or $A_2$ is false, or
$A_4$ is false, etc.  The value of this formula will be \T\ unless
$A_1=\F$, $A_2=\T$, $A_4=\T$, $A_7=\F$, $A_8=\F$.  Thus, there is only
one way in which the formula will be false.
\pq Figure~\ref{EP} illustrates a  satisfiability problem.
\begin{figure}[ht]
$$\begin{array}{c}A_1\ro A_2\\ \sim A_1\ro A_5\\ \sim A_1 \ro A_3
\ro A_4\\A_3\ro\sim A_5\\A_4\ro A_5\\ \sim A_3\ro\sim A_4\\
\sim A_2\ro A_3\end{array}$$\caption{A small (SP)\label{EP}}\end{figure}
We want to assign \T, \F\ to all the variables so that
all of the formulas have value \T. Even in this small example, it 
may take you a minute or so to find such an assignment.
\subsection{Converting (SP) to (SSP)}\label{SSP}
We want to construct numbers $a_i$ and a target number $T$ so that
there is a subset adding up to $T$ if and only if there is an
assignment for (SP) which makes all the formulas
true.  Using this construction allows us to use an algorithm for
(SSP) to solve (SP).  It implies that solving (SSP) is at least as
hard as solving (SP).
\pq We will illustrate the construction using the example in
Figure~\ref{EP}.  We will have numbers $a_1,\dots,a_5$
corresponding to the logic variables $A_1,\dots,A_5$ with 
$A_i=\T$ corresponding to $a_i$ being included in the subset.
We will also need additional $a_i$, $i>5$ for technical reasons.
\begin{figure}[ht]$$\vbox{\offinterlineskip
\halign{\hfil$#{}$&$\hfil#\,$&&\vrule#
&\hfil$\,#$\cr a_1=&1&&01&&01&&00&&00&&00&&00\cr a_2=&2&&00&&00&&00&&00
&&00&&01\cr a_3=&&&&&2&&01&&00&&01&&02\cr a_4=&&&&&4&&00&&01&&02
&&00\cr a_5=&&&2&&00&&02&&02&&00&&00\cr a_6=&1&&00&&00&&00&&00&&00&&00\cr
a_7=&2&&00&&00&&00&&00&&00&&00\cr
a_8=&&&1&&00&&00&&00&&00&&00\cr a_{9}=&&&1&&00&&00&&00&&00&&00\cr
a_{10}=&&&4&&00&&00&&00&&00&&00\cr a_{11}=&&&&&1&&00&&00&&00&&00\cr
 a_{12}=&&&&&2&&00&&00&&00&&00\cr a_{13}=&&&&&3&&00&&00&&00&&00\cr
 a_{14}=&&&&&8&&00&&00&&00&&00\cr a_{15}=&&&&&&&1&&00&&00&&00\cr
 a_{16}=&&&&&&&3&&00&&00&&00\cr\noalign{\smallskip}
\dots=&&&&&&&&&\dots&&&&\cr \noalign{\smallskip}
T=&4&&04&&08&&04&&04&&04&&04\cr}}$$
\caption{\label{UGH}Subset-sum problem}
\end{figure}
\pq The subset-sum problem is shown in Figure~\ref{UGH}.
For clarity, we have divided the numbers into zones. $T$ will be a
sum of a
subset of the $a_i$ if and only if the totals within each zone
are appropriate.
Each zone corresponds to one of the logic formulas. 
For $a_1$ through $a_5$, the value in a zone is 0 if the corresponding
$A_i$ does not appear in the logic formula.  If $A_i$ does appear,
a power of 2 is used. (the reason for using powers of 2 is that
different subsets of $\{a_1,\dots,a_5\}$ will have different totals)
\pq The leftmost zone corresponds to the first logic formula in 
our example: $A_1\ro A_2$.  By making suitable
decisions about inclusion of $a_6$ or $a_7$, we will be able to get
a total of 4 in this zone, unless both $a_1$ and $a_2$ are left
out of the set, which is precisely what would make the logic formula
have value \F.
\pq The second zone corresponds to $\sim A_1\ro A_5$, which has
value \T\ unless $a_1$ is in the set and $a_5$ is not.  $a_8$,
$a_9$, and $a_{10}$ can be used to get the total for the zone
equal to 4 in any other case.
\pq Similarly, each of the other zones\footnote{
We have omitted the $a_i$ for the last three zones in Figure~\ref{UGH}. }
 has $a_i$ associated with
it which can be used to obtain the correct total except in one case.
\pq The general problem is that we want numbers which can be used to
obtain any total between 1 and $2^n$, except for one ``forbidden
total'' $M$.  [In the two zones discussed above $M$ is 4 in one
case and 3 in the other] We start with the powers of 2 from 1 to
$2^n$.  If $2^j\le M<2^{j+1}$, replace $2^j$ by the two numbers
$M-2^j$ and $M+1$.
[We did not follow exactly the procedure described in this paragraph
in constructing Figure~\ref{UGH}.]
\subsection{Converting (SSP) to (SP)}\label{S}
Suppose we have a subset-sum problem with 50 $a_i$, all between 1
and $2^{20}$, with $T<50(2^{20})<2^{26}$.  Solving the SSP may be
crudely broken into two steps:\begin{enumerate}\item Decide which
$a_i$ are in the subset.\item Verify that the sum of the chosen $a_i$ is $T$.
\end{enumerate}Our (SP) will also carry out these steps.  The first
is simple: we will have logic variables $A_1,\dots,A_{50}$ with $A_i=\T$
corresponding to $a_i$ being in the subset.  To carry out the second step,
we need to construct a set of logic formulas which acts as an ``adding
machine'' to check the total.
\pq We will represent numbers in base~2.  Since all relevant numbers are
$<2^{26}$, numbers may be represented by 26 logical variables.  For each
$1\le i\le50$, we will have variables $B_{i1},\dots,B_{i26}$.
These will represent
$a_i$ if $A_i=\T$, 0 if $A_i=\F$.  To do this, we need formulas which show
how the value of $A_i$ determines the value of all the $B_{ij}$:
$$\sim A_i\ro B_{ij}\quad A_i\ro\sim B_{ij}
\hbox{\quad if $j$th digit of $a_i=1$}$$ with the simple formula $\sim B_{ij}$
if the $j$th digit of $a_i$ is 0.
\pq Next, we need, for $2\le i\le 50$, variables $C_{i1},\dots C_{i26}$
which represent the total of the first $i$ of the numbers given by the
$B$-variables.  Formulas are needed which show the $B$-variables deter%
mining the $C$-variables.\pq We begin with a set of formulas 
${\cal S}(V,W,X,Y,Z)$ which have $Y$ get value \T\ if and only if an odd number
of $V,W,X$ have value \T. $Z$ gets value \T\ if and only if 
2~or~3 of $V,W,X$ have value \T:$$\begin{array}{ll}
V\ro W\ro X\ro \sim Y & \sim V\ro\sim W\ro Z\\
\sim V\ro W\ro X\ro Y & \sim V\ro\sim X\ro Z\\
V\ro \sim W\ro X\ro Y & \sim W\ro\sim X\ro Z\\
V\ro W\ro \sim X\ro Y & V\ro W\ro \sim Z\\
\sim V\ro\sim W\ro X\ro \sim Y &  V\ro X\ro\sim Z\\
\sim V\ro W\ro\sim X\ro \sim Y &  W\ro X\ro\sim Z\\
V\ro\sim W\ro\sim X\ro \sim Y \\
\sim V\ro\sim W\ro\sim X\ro Y \end{array}$$
If $V,W,X$ represent three one-digit numbers 
(0 or 1), the formulas ${\cal S}(V,W,X,Y,Z)$ have
the effect that $Y$ is the number in the column with the three numbers,
while $Z$ shows whether there is a number carried into the next column.
\pq We will use $D_{i1}$,\dots,$D_{i27}$, $2\le i\le50$, to keep track of
numbers being carried. Since there are no numbers carried in the rightmost
column, we have the formulas $\sim D_{i1}$.  The formulas
$${\cal S}(B_{1j},B_{2j},D_{2j},C_{2j},D_{2(j+1)})\qquad1\le j\le26$$
have the effect of making $C_{2j}$ represent the sum of the numbers $B_{1j}$
and $B_{2j}$.
To have $C_{ij}$
represent the sum of $C_{(i-1)j}$ and $B_{ij}$, we use $${\cal S}(B_{ij},
C_{(i-1)j}, D_{ij},C_{ij},D_{i(j+1)})\qquad 3\le i\le50\quad1\le j\le26$$
\pq These logic formulas together  have the effect that the $A_i$ 
determine the $B_i$,
which determine $C_{2j}$ through $C_{50j}$ successively.  This last group
of variables corresponds to the base-2 representation of the sum of 
the $a_i$ which are in the set we have chosen.  Finally, we add the formulas 
$C_{50j}$ or $\sim C_{50j}$ depending on whether the $j$th
digit of $T$ is 1 or 0.  As planned, a solution to this satisfiability
problem gives a solution to the subset-sum problem (look at which $A_i$
have value \T), which implies that a method of solving (SP) can be used
to solve (SSP).\pq The (SP) we have constructed is rather large, 
involving approximately $15(26)(50)$ formulas.  However, the rate of
growth if we had more $a_i$ with a larger upper limit is not too bad.
\subsection{Cook's Theorem}It is more important to understand the
general idea of what we did in section~\ref{S} than to get involved in
the details of the construction of the ``adding machine.'' We used 
logical variables (the $A_i$) to represent our guess as to what a solution
to the (SSP) was, then constructed a set of formulas to verify the
guess was correct.\pq You should be able to convince yourself that a
similar thing could be done with a factoring problem.  We could have
variables $A_i$ and $B_i$ represent our guesses as to two factors, then
construct a ``multiplication machine'' to verify that the product is
what we want.  Thus an efficient algorithm for (SP) would lead to an
efficient algorithm for factoring\footnote{Unlike (SSP), nobody has
been able to show that an algorithm for factoring would give an algorithm
for (SP).}.\pq Many other problems can be viewed as making some guesses
as to what the correct answer is, with the process of verification
relatively easy. An example might be an integer programming problem:
$$\begin{array}{l}\max cx\\Ax\le b\\x_i=0\ro1\end{array}$$
We ask if there is a feasible solution with objective function value $>K$.
For a given $x$, it is easy to check that it satisfies all the problem
constraints and tell if the value is big enough.\pq The detailed 
construction in section~\ref{S} was intended primarily to convince
you that, if a verification can be done efficiently, it can be 
simulated by a set of logic formulas.  It should make you willing to
believe\begin{Th}[Cook]Any ``guess and verify'' problem can be converted
to a satisfiability problem. Thus, an efficient algorithm for (SP) can
efficiently solve any ``guess and verify'' problem.\end{Th}
\subsection{Terminology} The concept we have vaguely described as
solving efficiently is technically known as ``polynomial time.''
The types of problem that can be considered as ``guess and verify''
are called NP (for Nondeterministic [the guessing stage] Polynomial
[the verification stage]).  Cook's Theorem says that (SP) is as hard
as any NP problem--- the usual terminology is to say (SP) is NP-complete.
Since we showed in section~\ref{SSP} that (SSP) could be used to solve
(SP), we essentially proved that (SSP) is also NP-complete.
\pq {\it Computers and Intractability}, by Garey and Johnson, is strongly
recommended for more information.