Security Glossary


Advanced Encryption Standard (AES)

Advanced Encryption Standard (AES) is a 128-bit block cipher with a choice of a 128-bit, 192-bit, or 256-bit key.


American National Standards Institute (ANSI)

American National Standards Institute (ANSI) is one of the main organizations responsible for furthering technology standards within the USA. ANSI is also a key player with the International Organization for Standardization (ISO).


Authentication

Authentication refers to the verification of the authenticity of either a person or of data. An example is a message authenticated as originating from its claimed source. Authentication techniques usually form the basis for all forms of access control to systems and data.


Authorization

Authorization is the process whereby a person approves a specific event or action. In companies with access rights hierarchies, it is important that audit trails identify both the creator and the authorizer of new or amended data. It is often an unacceptably high-risk situation for one person to have the power both to create new entries and to authorize those same entries.


Block Cipher

A block cipher is a type of cipher that works on a block of data. For example, the DES block cipher works on a block size of 64 bits and the AES block cipher works on a block size of 128 bits.

Most block ciphers operate by alternately performing a reversible non-linear substitution on groups of bits in the block (often using a small, carefully designed look-up table known as an S-box), then permuting bits or small groups of bits, and then mixing in key information, all in a series of “rounds” that are repeated a number of times with different parts of the key or with sub-keys derived from the key.

Block Cipher Modes of Operation

Since block ciphers only work on relatively small blocks of data such as 64 or 128 bits, some form of unambiguous padding is required for messages that are not exact multiples of the block size, and a scheme for handling multiple blocks is needed.

One way to pad is to add a one to the end of the message, and then fill with zeroes until the next block boundary.
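At the byte level, this padding rule is commonly implemented by appending the byte 0x80 (whose leading bit is a one) followed by zero bytes; the scheme is unambiguous because the receiver can always strip trailing zeros back to the 0x80 marker. A minimal Python sketch, assuming 16-byte blocks:

```python
def pad(message: bytes, block_size: int = 16) -> bytes:
    """Append 0x80 (a one bit) then zero bytes up to the next block boundary."""
    padded = message + b"\x80"
    remainder = len(padded) % block_size
    if remainder:
        padded += b"\x00" * (block_size - remainder)
    return padded

def unpad(padded: bytes) -> bytes:
    """Strip the zero fill and the 0x80 marker to recover the message."""
    stripped = padded.rstrip(b"\x00")
    if not stripped.endswith(b"\x80"):
        raise ValueError("invalid padding")
    return stripped[:-1]
```

Note that a message already a multiple of the block size gains a full padding block, so that padding can always be removed unambiguously.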

The simplest mode for handling multiple blocks of data is just to encrypt each block individually using the same secret key. This is called Electronic Codebook (ECB) mode, since it is equivalent to using a hypothetical (albeit humongous) code book with 2^128 input-output pairs recorded in it (for the case of a 128-bit block cipher like AES). Though this efficiently scrambles the contents of each block, it is unsuitable for use in most cases because repeated message blocks are encrypted exactly the same way, a situation that is all too common in real messages.
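The codebook analogy, and the ECB weakness, can be illustrated with a toy 8-bit "block cipher": a secret random permutation of the 256 possible byte values. This sketch is purely illustrative (a real cipher such as AES works on 128-bit blocks), but the structural leak is the same:

```python
import random

# The "secret key" is a random permutation of all 256 byte values:
# literally a (tiny) codebook. Seeded here only to make the example repeatable.
rng = random.Random(42)
codebook = list(range(256))
rng.shuffle(codebook)

def ecb_encrypt(plaintext: bytes) -> bytes:
    # Each block (here, a single byte) is looked up independently.
    return bytes(codebook[b] for b in plaintext)

message = b"ATTACKATDAWNATTACKATDAWN"
ciphertext = ecb_encrypt(message)

# Identical plaintext blocks encrypt identically, leaking message structure:
assert ciphertext[:12] == ciphertext[12:24]
```

An eavesdropper cannot read the blocks, but can see that the second half of the message repeats the first, which is exactly the information leak ECB mode is criticized for.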

Popular modes of operation that overcome this problem include Cipher Block Chaining (CBC), Counter (CTR), and Galois/Counter Mode (GCM). The NIST-recommended block cipher modes are documented in Special Publication (SP) 800-38 parts A, B, C, D, and E.



The Computer Emergency Response Team (CERT) is recognized as the Internet's official emergency team. It was established in the USA by the Defense Advanced Research Projects Agency (DARPA) in 1988, following the Morris worm incident, which crippled approximately 10% of all computers connected to the Internet.

CERT is located at the Software Engineering Institute (SEI), a US government funded research and development center operated by Carnegie Mellon University, and focuses on security breaches, 
denial-of-service incidents, providing alerts, and establishing incident-handling and avoidance guidelines. CERT also covers hardware and component security deficiencies that may compromise existing systems.

CERT is the publisher of Information Security alerts, training, and awareness campaigns.


Checksum

Checksum is a technique whereby the individual binary values of a string of storage locations on your computer are totaled, and the total retained for future reference. On subsequent accesses, the summing procedure is repeated, and the total compared to the one derived previously. A difference indicates that an element of the data has changed during the intervening period. Agreement provides a high degree of assurance (but not total assurance) that the data has not changed during the intervening period.

A checksum is also used to verify that a network transmission has been successful. If the counts agree it is assumed that the transmission was completed correctly.

In programmable logic, a checksum refers to the number that results from adding up every element of a pattern in a design. Typically either a four- or eight-digit hex number, it is a quick way to identify a pattern, since it is unlikely that any two randomly selected patterns will have the same checksum. Because they are linear functions, however, checksums are virtually useless in the face of a malicious adversary, who can easily find two messages with the same checksum.
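The weakness against a malicious adversary is easy to demonstrate: a simple additive checksum cannot even distinguish messages whose bytes have merely been reordered. A minimal Python sketch:

```python
def additive_checksum(data: bytes) -> int:
    """Sum the individual byte values, keeping a 16-bit running total."""
    return sum(data) % 65536

# Two meaningfully different messages with identical checksums,
# because "100" and "001" are the same bytes in a different order:
a = b"pay alice 100 dollars"
b = b"pay alice 001 dollars"
assert a != b
assert additive_checksum(a) == additive_checksum(b)
```

This is why checksums are appropriate for catching accidental corruption but never for detecting deliberate tampering; a cryptographic hash is required for the latter.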

See also, Cyclic Redundancy Check (CRC), Hash Function, and Message Digest.


Cipher

A cipher is the generic term used to describe a means of encrypting data. In addition, the term cipher can refer to the encrypted text itself (ciphertext, as opposed to the unencrypted plaintext). Encryption ciphers use an algorithm (a complex mathematical calculation required to scramble the text) and a key. Knowledge of the key allows the encrypted data to be decrypted.

Ciphers scramble bits or digits or characters or blocks of bits, whereas codes replace natural language words or phrases with another word or symbol. Modern block ciphers like AES use alternating non-linear substitutions and permutations repeated for a number of “rounds” to encrypt the data. AES, for example, does byte-wide operations on the contents of a 16-byte data block for 10, 12, or 14 rounds, depending upon the key size chosen. Modern ciphers such as AES can be very resistant to mathematical cryptanalysis, requiring an infeasible number of messages encrypted under the same key and a practically infinite amount of computing power to break them.


Codes

Codes are a technique for encrypting data, usually in a natural language such as English, by substituting each word or phrase with a secret word or symbol. Because codes require the cumbersome distribution of large code books (essentially a dictionary-like look-up table) to all the participants, they are seldom used today. Ciphers are used instead; they work at the alphabet or binary level and require only a relatively short (for example, 256-bit) key to be shared by the users.

Codes can be broken through the use of word frequency analysis, and by correctly guessing plaintext words from the message. For example, it may be known that a weather report is sent at a certain time each day, and by examining several of these messages from known locations the code for “rain” can be guessed. Codes were traditionally used both for confidentiality, and to make telegraph messages, which were charged by length, shorter. Sometimes codes are cascaded with a cipher, a weak form of double-encryption.


Cloning

In FPGAs, cloning is the act of copying a design without making any changes. No understanding of the design or the ability to modify the design is required.


Configuration

The act of programming an FPGA. For SRAM-based FPGAs this must be done at each system power-up to make it functional. Configuration of SRAM FPGAs requires the use of an external configuration device, typically a PROM (see the entry for PROM) or another type of nonvolatile memory, which must be present in the system.

Since they are nonvolatile, flash- and antifuse-based FPGAs only require configuring once, usually during the system assembly process. Flash FPGAs have the option of being reconfigured, but antifuse FPGAs are intrinsically one-time programmable.

Corrupt Data

Corrupt data is data that has been received, stored, or changed, such that it cannot be read or used by the program that originally created the data.


Complex Programmable Logic Device (CPLD)

A complex programmable logic device is a relatively simple, low-density programmable logic solution. It typically contains macrocells that are interconnected through a central global routing pool. This type of architecture provides moderate speed and predictable performance. CPLDs are traditionally targeted towards low-end consumer products.


CRC

See Cyclic Redundancy Check (CRC).


Cryptography

Cryptography is primarily concerned with maintaining the privacy of communications, and modern methods use a number of techniques to achieve this. Encryption is the transformation of data into another, usually unrecognizable, form. The only means to read the data is to decrypt it using the secret key. Other common cryptographic services include ensuring data integrity, authentication of data sources, and digital signatures.

Cyclic Redundancy Check (CRC)

A class of algorithms for computing a short digest value from an arbitrarily long message, similar to a checksum or hash. CRC may also refer to the resulting digest value itself. The “cyclic” in CRC refers to the underlying cyclic codes describing the mathematics of the algorithm. More precisely, CRC algorithms use linear operations in a Galois Field (usually a binary extension field) which are similar to polynomial division using a generator polynomial.

Common CRC algorithms and their generator polynomials have been standardized for many uses, such as detection of bit errors in data transmission. CRC codes are efficient in detecting large bursts of errors, which suits some types of storage media or transmission channels. Examples of some standardized CRC algorithms are CRC-16-CCITT, which is used by Bluetooth (personal area wireless network), CRC-32-IEEE, which is used in 802.3 (wired Ethernet), and MPEG-2 (video).

Because they are linear operations, they are unsuitable for use in the presence of malicious attacks. 
An attacker can easily create messages with arbitrary CRC digest values. Cryptographic hash functions must be used instead of a CRC in applications such as digital signatures, data integrity, and authentication where there might be non-random errors (malicious attacks).
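Both properties, burst-error detection and linearity, can be observed with the standard CRC-32-IEEE implementation in Python's zlib. The XOR identity below, which holds for equal-length messages, is exactly what makes CRC digests predictable and forgeable by an adversary:

```python
import zlib

a = b"transfer $100 to Alice"
b = b"transfer $900 to Alice"  # a single-byte (short burst) corruption of a

# CRC-32 is guaranteed to detect any single error burst of 32 bits or fewer:
assert zlib.crc32(a) != zlib.crc32(b)

# ...but it is a linear function: for equal-length messages,
#   crc(a XOR b) == crc(a) XOR crc(b) XOR crc(all-zero message)
# so an attacker can compute exactly how flipping message bits changes the CRC.
xored = bytes(x ^ y for x, y in zip(a, b))
zeros = bytes(len(a))
assert zlib.crc32(xored) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zeros)
```

The identity lets an attacker adjust a tampered message so its CRC matches the original, which is precisely why a keyed MAC or cryptographic hash must be used against malicious errors.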

See also, Hash Function.


Data Encryption

Data encryption is a means of scrambling data so it can be read only by the person(s) holding the key—a password of some sort. Without the key, the cipher (hopefully) cannot be broken and the data remains secure. Using the key, the cipher is decrypted and the data is returned to its original value or state.

For example, using the DES cipher, a key is randomly generated from approximately 72,000,000,000,000,000 (2^56) possible key variations and used to encrypt the data. The same key must be made known to the receiver so the data can be decrypted at the receiving end. DES can be broken in a matter of hours using a brute-force search because the number of possible keys is low by today's standards.

See also, Public Key Cryptography.

Data Encryption Standard (DES)

An unclassified cryptographic algorithm adopted by the U.S. National Bureau of Standards (NBS, now called the National Institute of Standards and Technology, NIST) for public and government use as Federal Information Processing Standard (FIPS) 46. It is a 64-bit block cipher with a 56-bit effective key length.

DES is a data encryption standard for the scrambling of data to protect its confidentiality. It was developed by IBM in cooperation with the United States National Security Agency (NSA) and published in 1977 by the NBS. It was extremely popular and, because at the time it was thought to be so difficult to break, with approximately 72,000,000,000,000,000 possible key variations, was banned from export from the USA. However, restrictions by the US Government on the export of encryption technology to the countries of Europe and a number of other countries were lifted in 2000.

DES was cracked in 1997 in 96 days by the DESCHALL project, and again in 1998 in 41 days by the distributed.net project; both efforts used thousands of distributed personal computers and showed that DES was susceptible to brute-force attacks. One of the final blows to the short 56-bit key length of DES came in 1998, when the Electronic Frontier Foundation (EFF) and Cryptography Research, Inc. (CRI) recovered DES keys first in 56 hours and later in only 22 hours, using a custom-designed computer called DES Cracker. The industry then turned to Triple DES, which applies DES three times, as a short-term standard to secure transactions. Triple DES's generally sluggish performance prompted the search for a new standard. NIST has since standardized the Advanced Encryption Standard (AES), based on the Rijndael algorithm, as recommended for all new block cipher applications, although Triple DES is still used extensively in the finance industry for legacy reasons.


Decryption

The process by which encrypted data is restored to its original form in order to be understood/usable by another computer or person.

Denial of Service

Denial of service (DoS) attacks deny service to valid users trying to access a site. Consistently ranked as the single greatest security problem for IT professionals, a DoS attack is an Internet attack against a website whereby a client is denied the level of service expected. In a mild case, the impact can be unexpectedly poor performance. In the worst case, the server can become so overloaded that the system crashes.

DoS attacks are not primarily intended for theft or corruption of data, and are often executed by persons who nurse a grudge against the target organization. Common types include flooding attacks (such as SYN floods), amplification and reflection attacks, and distributed denial-of-service (DDoS) attacks launched simultaneously from many compromised machines.

Differential Power Analysis (DPA)

An analysis technique that relies on multiple measurements of a security device's instantaneous power consumption in order to recreate a secret being manipulated inside the device. Simple and Differential Power Analysis were first reported by Paul Kocher et al in 1998. Generally, this class of techniques uses statistical methods to amplify the effects of small unintentional leakages of the secret information in power consumption measurements, buried in large amounts of noise.

For example, if the same secret key is used to process multiple independent blocks of data, a DPA attack might be mounted to determine the secret key using anywhere from a handful of power consumption traces to over a million, depending on the magnitude of the leak, the amount of noise that may be obscuring the secret data, and what countermeasures are used. Systems that handle large amounts of data using the same key, or which can repeatedly be given random or chosen input data that is then processed using the secret key, are especially vulnerable to DPA.

Diffie-Hellman Key Exchange

The Diffie-Hellman key exchange algorithm, named after Whitfield Diffie and Martin Hellman, was the first public key algorithm ever published, in 1976. The third inventor was Ralph Merkle. With it, they revolutionized the field of cryptology, and made secure communication over the Internet feasible.

It is based upon the difference in difficulty of a particular function and its inverse, namely the ease of exponentiation and the difficulty of computing the discrete logarithm, both in a finite field. When the numbers involved are large (that is, over one thousand bits), the difference in difficulty is approximately 30 orders of magnitude, and grows with the size of the numbers.

The Diffie-Hellman protocol allows two entities (computers or people) who do not have nor have ever had a secure channel between them to compute a common secret using public information they send to each other. Anyone eavesdropping on the conversation would find it computationally infeasible to learn the shared secret, even though they see all the messages. This is because each of the parties to the computation holds one secret they do not transmit, but use in the exponentiation formula to compute a value that is practically impossible to reverse; and this is the value that is sent over the insecure channel.

Prior to this invention, secret communications always involved having a shared secret key. This shared key had to be transmitted securely between the parties by a trusted courier or some similar means before encrypted communication over an insecure medium such as radio or telegraph could be done using the shared secret key. Since each possible pair of entities might need a unique shared key, the system did not scale well to large groups where the number of combinations can be exceedingly large. It can thus be claimed that public key cryptography made practical encrypted communications between large numbers of parties, such as shopping with credit cards on the Internet, that was not feasible before.
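The arithmetic of the exchange can be sketched in a few lines of Python using modular exponentiation. The prime below is far too small for real security (deployed systems use primes of 2048 bits or more); it only illustrates the protocol:

```python
import secrets

# Public parameters, agreed in the open (toy-sized for illustration only).
p = 4294967291  # the prime 2^32 - 5
g = 2           # public base

# Each party picks a private exponent that is never transmitted...
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

# ...and sends only g^secret mod p over the insecure channel.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Both sides arrive at the same shared secret; an eavesdropper who sees
# only p, g, alice_public, and bob_public faces the discrete log problem.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared
```

The equality holds because (g^a)^b = (g^b)^a = g^(ab) in the finite field; only someone holding one of the private exponents can compute g^(ab) efficiently.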

Digital Signatures

With the advent of public key cryptography a number of new cryptographic services were born, with digital signatures perhaps being the most important.

The concept of digital signatures is that the signer performs a computation using a secret key that only the signer knows, but which can be confirmed by anyone having the matching public key.

Using the RSA cryptosystem, this is done mainly by interchanging the usual role of the private and public keys: In “normal” encryption, any sender encrypts the message using the recipient's public key and the recipient decrypts it using the private key that only the recipient knows. In the RSA digital signature algorithm, the signer “encrypts” the message using the private key that only the signer knows, and any verifier can “decrypt” the signature and verify it is the same as the message using the freely available public key.

Since only the signer has a copy of the private key, it is difficult for the signer to repudiate any valid signatures. This is different from symmetric (shared key) systems where at least two parties must be in possession of a key for it to have any use.

In practice, the whole message is not signed. Because of computational efficiency, and to reduce the size of the signature that has to be transmitted along with the message, a hybrid scheme is used. The message is first hashed; that is, a short digest is computed from the message, and it is this digest that is signed using the private key. The verifier also hashes the received message, and verifies the signature matches the hash using the public key. Standards such as PKCS#1 specify other details important to the security of the system such as how padding is done, and the use of random nonces.

There are variations of this hybrid signature scheme using the ElGamal and elliptic curve cryptosystems.
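The hash-then-sign flow above can be illustrated with textbook RSA numbers (p = 61, q = 53, giving n = 3233, e = 17, d = 2753). These parameters are hopelessly small, and the sketch reduces the digest modulo n instead of applying the padding that PKCS#1 mandates; it only shows the interchange of the key roles:

```python
import hashlib

# Textbook RSA toy parameters (never use key sizes like this in practice).
n, e, d = 3233, 17, 2753  # n = 61 * 53; 17 * 2753 = 46801 = 15 * 3120 + 1

def digest_of(message: bytes) -> int:
    # Hash the message, then squeeze the digest into the modulus range
    # (a real scheme pads the digest per PKCS#1 rather than reducing it).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the signer knows the private exponent d.
    return pow(digest_of(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public pair (n, e) can check the signature.
    return pow(signature, e, n) == digest_of(message)

msg = b"release funds"
sig = sign(msg)
assert verify(msg, sig)
# Verifying a different message against this signature will (with
# overwhelming probability) fail, since its digest differs.
```

Signing the short digest rather than the whole message is what keeps the signature small and the exponentiation cheap, as described above.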


Disabling

Disabling is the process by which hardware or software is deliberately prevented from functioning in some way. For hardware, it may be as simple as switching off a piece of equipment, or disconnecting a cable. It is more commonly associated with software, particularly shareware or promotional software, which has been supplied to a user at little or no cost, to try before paying the full purchase or registration fee. Such software may be described as “crippled”, in that certain functions, such as saving or printing files, are not permitted. Some in-house development staff may well disable parts of a new program, so that the user can try out the parts that have been developed, while work continues on the disabled functions.

Disabling is also often used as a security measure. For example, the risk of virus infection through the use of infected floppy diskettes or USB thumb drives can be greatly reduced by disconnecting a cable within the PC, thereby disabling the drive. Even greater protection is achieved by removing the drive altogether.


Electromagnetic Analysis (EMA)

A form of side-channel analysis where the unintentional information leakage from the cryptographic system is via electromagnetic (EM) emissions. Electromagnetic emissions have been a well-known source of leakage, prompting the US government to specify EM requirements for secure applications in what are called TEMPEST requirements. In one example of EM leakage, the van Eck radiation of a display terminal is read from a distance of hundreds of meters using simple equipment.

Many power analysis (PA) classifications have an EMA analog where a similar attack can be performed using essentially the same method for EMA as for PA. For instance, differential electromagnetic analysis (DEMA) is the analog of differential power analysis (DPA), and can be used to extract the AES key, for example, from an unprotected device using an RF antenna and amplifier instead of a current monitor. One important difference is that in EMA the usable signal is often more strongly modulated on harmonics of the fundamental frequencies due to the better propagation properties of higher frequencies; therefore demodulation is often used to bring these harmonic-related signals back to baseband before completing the analysis. Often, it is possible to focus the area of the attack by the placement of the antenna, resulting in an improved signal-to-noise ratio (from the analyst's perspective).

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is a public key cryptographic system defined using elliptic curve polynomials in finite fields. The important principle is related to the Diffie-Hellman problem of finding discrete logarithms in finite fields, but instead of exponentiation the group operator is scalar point multiplication. Since some of the most efficient (non-quantum) algorithms available for finding discrete logarithms do not work on elliptic curves, the key sizes required for elliptic curves can be much shorter than for the Diffie-Hellman (or RSA) cryptosystems for a roughly equivalent security strength.

As an example, for a security strength of around 128 bits (that is, comparable to a brute-force attack on AES-128 requiring approximately 2^128 operations), ECC requires a key size of 256 bits, whilst RSA requires around 3072 bits. As a result, ECC is generally substantially more computationally efficient than RSA. ECC's “hard problem” is, however, susceptible to Shor's algorithm running on a quantum computer. When suitable quantum computers are available, ECC will become ineffective at providing security.


Encryption

The process by which data is reversibly transformed into an unreadable or unintelligible form for confidentiality, transmission, or other security purposes.


Entropy

In information theory, entropy is a measure of the uncertainty of a system. For example, if all the bits of an n-bit binary number are unbiased (equal probability of a one or zero) and independent (not correlated with any other bits) and are unknown, then the number “contains” n bits of entropy and is said to have full entropy.

In this case, there would be no better method of guessing the number than a brute force search attempting every possible value (2^n values), with an expected match after about half of the values have been tried. However, if the bits were known to be biased (for example: 1/3 were randomly selected as zero, and 2/3 as one), then the entropy would be less than n bits and a more efficient search could be performed that started by guessing more ones than zeroes, with an expected match much earlier than in the unbiased case.

In cryptographic applications it is usually critically important that random numbers, such as those used for secret keys, have full entropy.
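The bias example can be made concrete with the binary entropy function H(p) = -p*log2(p) - (1-p)*log2(1-p), which gives the bits of entropy carried by a single bit that is one with probability p. A short Python sketch:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of one bit that is a one with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a perfectly predictable bit carries no entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

# An unbiased bit carries a full bit of entropy...
print(binary_entropy(0.5))              # 1.0
# ...while a bit that is one two-thirds of the time carries less, so an
# n-bit number built from such bits holds only about 0.918 * n bits.
print(round(binary_entropy(2 / 3), 3))  # 0.918
```

This is why a 256-bit key generated from a biased source can offer far fewer than 256 bits of effective security.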

There is a beautiful and unexpected relationship between entropy as used in information theory and entropy as used in the physical sciences (such as thermodynamics), but in most practical applications the two uses are distinct.


Fault Analysis

Fault attacks attempt to break the security of a cryptographic implementation by injecting energy, in the form of voltage glitches on the power supply, or light or concentrated electromagnetic energy, to generate a fault in the device. Faults can also be induced by operating the device outside its normal temperature or voltage ranges. Faults can be used by an adversary in different ways, depending on the precise fault and the design of the system. For example, if a microcontroller can always be made to take a certain branch, whether or not a passcode is matched properly, the passcode protection may be rendered ineffective. Another broad class of fault attacks is called differential fault analysis (DFA): if the correct output and one or more faulted outputs of a cryptographic calculation using the same secret key can be obtained, the key may be extracted. Fault analysis is a subset of the general class of active side-channel analysis.


Firmware

Firmware is a sort of halfway house between hardware and software. Firmware often takes the form of a device that is attached to or built into a computer, such as a ROM chip, which performs some software function but is not a program in the sense of being installed and run from the computer's storage media.

Flash FPGA

A flash-based FPGA uses flash memory technology to control the switching of the interconnect and the operation of the logic elements. Flash-based FPGAs are nonvolatile, live on power-up, and reprogrammable. They are relatively secure from reverse engineering or cloning since the programming bitstream only needs to be loaded once, during the initial configuration. This can be performed either in a trusted location or, using strong cryptographic techniques, in less trusted locations.

Most flash FPGAs also allow for secure field upgrades using encrypted bitstream files and a decryption key which was loaded in the nonvolatile memory during the initial configuration process. The discovery of the possibly millions of configuration bit values stored in the internal nonvolatile flash memory cells is considered a very difficult problem, thus contributing to the security of flash FPGAs.


Field Programmable Gate Array (FPGA)

A field programmable gate array is a very complex programmable logic device (PLD). The FPGA usually has an architecture that comprises a large number of simple logic blocks, a number of input/output pads, and a method to make the desired connections between the elements. The largest programmable logic devices have gate counts running into the millions, and modern devices often have many ancillary hardware blocks such as microprocessor units (MPUs), phase-locked loops (PLLs), static random access memory (SRAM), specialized digital signal processing (DSP) elements, and embedded nonvolatile memory (eNVM).

These devices are user customizable and programmable on an individual device basis. They are valued by designers for their flexibility.



Hacker

A hacker is an individual whose primary aim in life is to penetrate the security defenses of large, sophisticated, computer systems. A truly skilled hacker can penetrate a system right to the core and withdraw without leaving a trace of the activity. Hackers are a threat to all computer systems that allow access from outside the organization's premises, and the fact that most hacking is just an intellectual challenge should not allow it to be dismissed as a prank. Clumsy hacking can do extensive damage to systems even when such damage was not intentional.

In 2015 the US government issued a report from the Defense Cybersecurity Culture and Compliance Initiative (DC3I) that indicated there were around 100,000 attempted malicious network attacks on Department of Defense assets every day.

Hash Function

A cryptographic hash, also called a message digest, is a publicly-known function that takes as its input a message of (almost) any length and compresses it into a random-like short message called a digest or fingerprint. “Hash” may refer to either the function or the output digest value itself.

Commonly used digest output lengths are from 160 to 512 bits. Hash functions are important components of integrity, authentication, and digital signature schemes, amongst other uses.

A good cryptographic hash must have several properties:

  1. Pre-image resistance—given an output digest, it must be infeasible to find any input message that hashes to that digest.
  2. Second pre-image resistance—given an input message, it must be infeasible to find a different input message with the same output digest.
  3. Collision resistance—it must be infeasible to find any two distinct input messages with the same output digest.

These imply a strong one-way-ness property for cryptographic hash functions. For a good hash function, if even one bit of the input message is changed, roughly one-half of the output bits change pseudo-randomly.
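This avalanche behaviour is easy to observe with Python's hashlib: flipping a single input bit changes roughly half of the 256 output bits of SHA-256. A short sketch:

```python
import hashlib

def sha256_int(message: bytes) -> int:
    """Return the SHA-256 digest of a message as a 256-bit integer."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

original = b"pay alice 100 dollars"
flipped = bytes([original[0] ^ 0x01]) + original[1:]  # flip one input bit

# XOR the two digests and count the positions where they differ.
differing_bits = bin(sha256_int(original) ^ sha256_int(flipped)).count("1")
print(differing_bits)  # close to 128, i.e. about half of the 256 output bits
```

The two inputs differ in a single bit, yet the digests look completely unrelated, which is exactly the pseudo-random behaviour a good hash must exhibit.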

Commonly-used hash functions are MD5, SHA-1, and the SHA-2 family of hashes, including SHA-256, SHA-384, and SHA-512. Though still in widespread use, MD5 is considered broken, and SHA-1 has some serious weaknesses. The US government agency NIST recently completed a competition for a new family of hash functions, called SHA-3, intended to have better security than the current standard hash functions. An algorithm called Keccak was selected as the winner. It uses different principles than most prior hash functions, and is very efficient in hardware implementations.

Cryptographic hashes are related to, but not the same as hashes used in computer science for creating tables for looking up data by value. Those hash functions do not have the three security properties (above) required for a cryptographic hash and as a result must never be used in a cryptographic (adversarial) setting.

See also, Cyclic Redundancy Check (CRC) and Security Strength.

HEX / Hexadecimal

Hexadecimal, or hex, is a base 16 numbering system (as opposed to the usual decimal base 10). Hex is a useful way to express binary computer numbers. A byte is normally expressed as having 8 binary bits. Two hex characters of four bits each, called nibbles, represent eight binary digits, also known as a byte. For human consumption, such as when displayed or printed, nibbles are represented using the sixteen symbols 0-9 and a-f (or A-F).
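The byte/nibble relationship maps directly onto Python's built-in conversions, as a small sketch shows:

```python
# One byte = 8 bits = two hex nibbles.
value = 0b10101111           # 175 in decimal
print(format(value, "02x"))  # "af": high nibble 1010 -> 'a', low nibble 1111 -> 'f'

# Whole byte strings convert the same way, two hex characters per byte.
data = bytes([0xDE, 0xAD, 0xBE, 0xEF])
print(data.hex())            # "deadbeef"
assert bytes.fromhex("deadbeef") == data
```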


In-Application Programming (IAP)

IAP is the ability of a microcontroller to run an application that reconfigures (reprograms) its own nonvolatile program code storage. Some flash FPGAs with a built-in microcontroller natively support both IAP and ISP.

See also the entry for “In-System Programming (ISP)”.

In-System Programming (ISP)

ISP is the ability to program and reprogram an FPGA that is mounted on a circuit as part of a functional system. Flash and SRAM-based FPGA technologies support ISP.

Intellectual Property (IP)

Intellectual property is defined as creative, technical, and intellectual products, often associated with custom circuit designs implemented in ASIC or programmable logic architectures.

Invasive Attack

An invasive attack is an attack on a semiconductor to determine its functionality that requires physical entry to the part. Typical methods include probing, etching, and FIB (focused ion beam) intrusion.

See also the entries for “Noninvasive Attack” and “Semi-Invasive Attack”.


Malicious Code

Malicious code includes any and all programs (including macros and scripts) that are deliberately coded in order to cause an unexpected (and usually unwanted) event on a PC or other system. Note that although antivirus definitions (vaccines) are released weekly or monthly, they operate retrospectively: someone's PC has to become infected with the virus before the antivirus definition can be developed. In May 2000, when the Love Bug was discovered, although the antivirus vendors worked around the clock, the virus had already infected tens of thousands of organizations around the world before the vaccine became available.

Message Authentication Code

A Message Authentication Code (MAC) is similar to a hash function in that it computes a random-like output digest from any size input message, but unlike a hash, which is a public function that anyone can compute, a MAC uses a secret key so that only those in possession of the secret can correctly create or verify it.
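Python's standard library implements HMAC, the most common MAC construction, which builds a MAC from a hash function and a secret key. A minimal sketch:

```python
import hashlib
import hmac

key = b"shared secret key"            # known only to sender and receiver
message = b"transfer 100 to alice"

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag; compare_digest avoids timing side channels.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# A tampered message produces a different tag, and without the key an
# attacker cannot compute a valid tag for it.
tampered = hmac.new(key, b"transfer 999 to alice", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, tampered)
```

Unlike a plain hash, which anyone can recompute over a modified message, the tag here can only be produced or checked by a holder of the secret key.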

Message Digest

See Hash Function.

Modes of Operation

See Block Cipher Modes of Operation.


National Institute of Standards and Technology (NIST)

NIST, formerly the National Bureau of Standards (NBS), is the government agency that sets weights and measures for the United States. It is an agency of the Commerce Department. In security and cryptography, NIST works closely with the National Security Agency (NSA), a part of the Defense Department, to set government standards and make recommendations for private sector use.


Nonce

A nonce is a number used only once. Nonces are an important element of many protocols because they help protect against replay attacks: by incorporating a unique nonce into each run of the protocol, the recipient can reject data replayed from an earlier run, which, by definition, used a different nonce. Nonces are also often required for initialization vectors, such as those used with some block cipher modes of operation or stream ciphers. If the same initialization vector is used with the same key on more than one message, the security of the cipher mode can be very seriously compromised. Nonces are also used in some types of digital signatures.

Common ways of generating nonces are by counting, using a time stamp, or using a sufficiently large random number whose chance of repeating is vanishingly small. The best choice depends upon the circumstances, because each of these has its own difficulties and advantages. For instance, in many systems it is very difficult to be sure of a secure time source. With a counter, the issue is to make sure that it is never reset or a count value used twice, even if the power supply is tampered with. In other systems there may not be a good source of entropy with which to create sufficiently large random numbers.
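The counter and large-random-number approaches can be sketched as follows; the 96-bit nonce size is just a common choice for illustration, not a requirement.

```python
import itertools
import secrets

# Counter-based nonces: guaranteed unique as long as the counter is
# never reset and no count value is ever reused.
counter_nonces = itertools.count(1)
n1, n2 = next(counter_nonces), next(counter_nonces)
assert n1 != n2

# Random nonces: with 96 bits, the chance of a repeat is vanishingly
# small, provided a good entropy source backs the generator.
r1 = secrets.token_bytes(12)  # 12 bytes = 96 bits
r2 = secrets.token_bytes(12)
assert r1 != r2
```

The `secrets` module draws from the operating system's cryptographic random source, which matters for the entropy caveat discussed above.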

Noninvasive Attack

A noninvasive attack is an attack on a semiconductor to determine its functionality that does not require physical entry to the part. Types of attacks include actively varying voltage levels to gain access, and passive side-channel analysis.

See also the entries for “Invasive Attack”, “Semi-Invasive Attack” and “Side-Channel Analysis”.


Nonvolatile Memory

A device is nonvolatile if it does not lose its contents when its power is removed. Nonvolatile memory is useful in microcomputer circuits because it can provide instructions for a CPU as soon as the power is applied, before secondary devices, such as disk, can be accessed. Nonvolatile memories include metal-mask read-only memory (ROM), fusible-link programmable ROM (PROM), ultraviolet-erasable electrically-programmable ROM (UV-EPROM), and electrically-erasable PROM (EEPROM), including "flash" memory, a special type of EEPROM where the memory is erased in large blocks rather than by individual bytes or words, making it much faster and also less expensive.



Overbuilding

Overbuilding occurs when unscrupulous contract manufacturers (CMs) build more units than a program or contract calls for and sell the excess on the gray market.


Power Analysis

See Side-Channel Analysis (a super-set of power analysis), Simple Power Analysis, and Differential Power Analysis (DPA) (both sub-types).

Public Key Cryptography

Public key cryptography is based upon the revolutionary principle that instead of using a shared secret key for two or more parties to communicate privately, as in all ciphers and codes before 1976, a key can have two parts: a public part and a secret part. The public part may be communicated to anyone and does not have to be kept secret. It can be used for encryption, thus allowing anyone in the world to encrypt a message intended for a given recipient. Only the recipient, namely the holder of the secret part of the key, can perform the decryption.

The first public key scheme, the Diffie-Hellman key exchange algorithm, was published by Whitfield Diffie, Martin Hellman, and Ralph Merkle, who used mathematics based upon the difficulty of the discrete logarithm problem to generate a shared secret key between two parties that had no prior secret communication. This was later expanded into the ElGamal encryption system for enciphering messages. Shortly after, Ron Rivest, Adi Shamir, and Len Adleman published the now well-known RSA encryption scheme named after them, based upon the difficulty of factoring large composite numbers (the products of two large primes).
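The Diffie-Hellman idea can be sketched in a few lines of Python. The small 32-bit prime below is for illustration only and offers no real security; actual deployments use standardized groups of 2048 bits or more.

```python
import secrets

# Toy Diffie-Hellman sketch -- NOT secure, illustration only.
p = 0xFFFFFFFB  # small prime modulus (largest prime below 2**32)
g = 5           # generator

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both sides compute the same shared secret without ever sending it:
# (g^b)^a = (g^a)^b mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

Recovering `a` from the public value `A` requires solving the discrete logarithm problem, which is what makes the scheme hard to break at realistic key sizes.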

Besides greatly simplifying key distribution between anonymous parties, public key cryptography also introduced a new cryptographic service called digital signatures. The holder of the secret key “signs” a message with a message-dependent code only they can generate, and anyone in possession of the public key can verify the integrity of the data and the correctness of the signature. Since only one person holds the private key (unlike in symmetric key systems where at least two people have the key), it makes it much more difficult for the signer to later repudiate their signature.

Though attributed to the inventors mentioned above, who were the first to publish their results, it is now known that public key cryptography had been invented a few years earlier by James Ellis, Clifford Cocks, and Malcolm Williamson, employees of the Government Communications Headquarters (GCHQ), a British government agency, which kept their results secret and largely failed to recognize the importance of the discoveries.


Random Numbers

Random numbers are used extensively in cryptography, for example for generating secret keys and nonces. In most implementations, they are binary numbers. The random numbers must be unknown and unpredictable to an adversary. An n-bit binary number that is completely unpredictable and unknown to an adversary is said to contain n bits of entropy; if the adversary has a better than 50-50 chance of guessing some of the bits, the entropy is reduced.

True random numbers are derived from an unpredictable physical source, most often some form of electrical noise, although radioactive decay and some other physical processes are also sufficiently random, though less practical. If each bit generated by the physical process is unbiased and uncorrelated with all the other bits, then it has one bit of entropy. By gathering many such bits, one can accumulate a large amount of entropy.
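One classic way to remove bias from such a physical source is the von Neumann corrector, which examines bit pairs and keeps only the unequal ones. The sketch below simulates a biased source to show the idea; the 70% bias and the fixed seed are arbitrary illustration values.

```python
import random

def von_neumann_debias(bits):
    """Von Neumann corrector: turn independent but biased bits into
    unbiased ones by keeping only unequal pairs (01 -> 0, 10 -> 1)
    and discarding equal pairs (00, 11)."""
    out = []
    for b1, b2 in zip(bits[0::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# Simulate a biased physical source producing ones 70% of the time.
rng = random.Random(42)
raw = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]

unbiased = von_neumann_debias(raw)
ratio = sum(unbiased) / len(unbiased)
# The output is close to 50/50, at the cost of discarding many raw bits.
```

The correction works because, for independent bits, the pairs 01 and 10 are equally likely regardless of the bias; the price is that a large fraction of the raw bits is thrown away.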

Pseudo-random numbers are derived from a deterministic computational process. With good algorithms pseudo-random bits can be computationally indistinguishable from true random bits. However, no matter how many such bits are generated, the entropy content is limited by the lesser of the initial true random seed used to initialize the computation process and the number of bits of internal state storage. If an adversary were able to learn the internal state of a pseudo-random generator (by guessing or other means) he could predict all future values, and may even learn something about past values.

Important standards related to random numbers include the NIST SP 800-90 series on random bit generation.

Reverse Engineering

Reverse engineering is the act of examining a design to understand exactly how it works, often with the intent to copy it. The copied design may then be altered to differentiate it from the original, whether to improve upon it, to evade legal action over the theft, or to insert a "Trojan Horse". Reverse engineering is also sometimes used to determine whether any patents are being violated. Some applications of reverse engineering are legal, depending on the subject and the legal jurisdiction, while others may be considered theft.


Security Strength

Security strength is a rough measure, expressed as a base-2 logarithm, of the work effort required to attack a given cryptographic problem. For a well-designed block cipher, the best approach an attacker has is a brute-force search over all the possible keys. In this case the security strength, measured in bits, is the same as the length of the key (in bits). For example, AES-128 has an estimated security strength of 128 bits since the best known attack is a brute-force search of all 2^128 keys.

For a well-designed hash function, the security strength varies depending upon which of the security properties is being depended upon in its usage (see the entry for Hash Function). For pre-image resistance and second-pre-image resistance, the security strength is the same as the digest output size (in bits). For collisions, the security strength is very nearly half the number of bits in the output. The reduced strength is due to the Birthday Attack, which is applicable in this situation.

For public key algorithms, the security strength is a complicated function of the key size but also depends upon the most efficient attack algorithm known. Since the most efficient attacks on RSA or ElGamal do not work on elliptic curve algorithms, shorter keys can be used with elliptic curve cryptography for a given security strength. For elliptic curve algorithms, the keys must be roughly twice as long as for symmetric algorithms such as AES. RSA, Diffie-Hellman, and ElGamal all require comparable (to each other) but much longer keys. For example, a one-thousand-bit RSA key is roughly equivalent in security strength to an 80-bit symmetric key and a 160-bit elliptic curve key.

Not all block ciphers and hash functions have the ideal security strength shown above. If attacks are known that exploit a weakness in the algorithm to reduce the work factor to find the key (or pre-image, or collision, etc.), then the security strength is correspondingly downgraded. For instance, the MD5 hash algorithm, designed in 1992, has a digest size of 128 bits and should therefore have a collision-resistance security strength of 64 bits (which is itself marginal), but attacks found by 2006 had reduced the work factor to less than 2^24 (roughly one trillion times easier), making it unsuitable for cryptographic applications; the best attack algorithm known can find an MD5 collision in less than one minute on a standard notebook computer.

Security strength is often chosen according to the length of time the algorithm or secret data will be used. For short-term (ephemeral) use, 80 bits may be enough for strong security, but for data that has to last a few years, 100 bits or more is recommended, and for data that may have to be kept secret for several decades, 128 bits is recommended. This is because attacks only get better, and computing equipment keeps getting faster and cheaper due to Moore's Law.

Grover's algorithm, applicable to quantum computing, is expected to reduce the security strength of most cryptographic algorithms by a square-root factor, i.e., by about halving the security strength measured in bits. For example, to maintain a security strength of 128 bits in a post-quantum world, one should use AES-256. Shor's algorithm, which is applicable to many public key algorithms such as ECC, RSA, and DH (but not most block ciphers or hashes) has a more devastating effect on the security strength, making these algorithms next to worthless.

Semi-Invasive Attack

A semi-invasive attack is an attack on a cryptographic device such as an integrated circuit which may involve removing all or part of the package, but does not require internal probing or cutting of circuit lines. Instead, the attack is carried out using optical or electron microscope observations or by injecting (temporary) faults optically or electromagnetically, which do not require the active device to be touched. This family of attacks is generally less expensive to conduct than invasive attacks but more expensive than other types of active fault attack or passive side-channel analysis.

See also, Invasive Attack, Differential Power Analysis (DPA), and Side-Channel Analysis.

Side-Channel Analysis

Passive side-channel analysis is a noninvasive (or occasionally a semi-invasive) analysis technique which attempts to break the security of a cryptographic system by monitoring information unintentionally leaked via side-channels. These side-channels could be power consumption, electromagnetic emissions, optical emissions, thermal signatures, or timing of response times, for example. As all “real world” implementations of cryptographic systems have unintended side-channels, they represent a serious threat to the security provided by these systems.

Active side-channel analysis attempts to break the security by using light, electrons, electromagnetic energy, or other active sources of energy to probe or disrupt the target system. Sometimes active and passive techniques are combined.

See also, Simple Power Analysis, Differential Power Analysis (DPA), Electromagnetic Analysis (EMA), and Fault Analysis.

Simple Power Analysis

Simple power analysis is a side-channel analysis technique based upon one or just a few measurements of a security device's power consumption. Information about secrets being manipulated inside the device is unintentionally leaked via the instantaneous power consumption of the device. In some cases, a secret key can be read more-or-less directly from simple observations of a single oscilloscope trace.

See also, Differential Power Analysis (DPA).


SRAM FPGA

An SRAM FPGA is an FPGA that utilizes SRAM (Static Random Access Memory) technology to configure the interconnect and to define the logic. SRAM FPGAs are reprogrammable, volatile, and require a boot-up process to initialize. SRAM FPGAs are generally considered less secure than flash- or antifuse-based FPGAs because the design configuration bitstream has to be loaded from an external component at each power-up cycle.


Tamper Detection

Tamper detection raises an alarm when any of a number of possible tamper sensors is triggered. Common tamper detectors for high-end security integrated circuits include voltage, clock, and temperature alarms; internal redundancy violations; and physical tampering alarms such as a failure of a mesh covering important circuits.

See also, Zeroization, which is one possible response to a tamper detection alarm.

Tamper Resistant Packaging

Often used in smart card systems, tamper resistant packaging is designed to render electronics inoperable if the product is physically (invasively) attacked. Tamper evident packaging can also be used to deter tampering attacks.

See also, Zeroization, and Tamper Detection.

Timing Analysis

Timing analysis uses detectable, data-dependent variations in the time taken to perform calculations to determine secrets contained in the data. The timing may be observed by monitoring external signals, unintended side-channel leakage, network response times, cache hits, or other means. To prevent timing analysis, constant-time forms of common cryptographic algorithms are used.
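As an illustration of why constant-time forms matter, the sketch below contrasts a naive early-exit byte comparison, whose running time reveals how many leading bytes matched, with a comparison that always inspects every byte; Python's standard library provides the same guarantee via hmac.compare_digest. The byte strings are arbitrary examples.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: returns as soon as a mismatch is found,
    so the elapsed time leaks how many leading bytes matched."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare every byte regardless of mismatches, so the running
    time does not depend on where (or whether) the inputs differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"expected-mac-value"
assert constant_time_equal(secret, b"expected-mac-value")
assert not constant_time_equal(secret, b"expected-mac-XXXXX")
assert hmac.compare_digest(secret, b"expected-mac-value")
```

An attacker measuring the naive version over many guesses could recover a secret MAC one byte at a time, which is exactly the class of attack constant-time code defends against.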



Volatile Memory

As applied to memory technology, volatile memory loses its data when power is removed. SRAM and DRAM technologies are volatile, while flash, EEPROM, and fuse-type memories are nonvolatile. The inability of an SRAM-based FPGA to maintain its configuration when power is removed is a function of the volatile memory technology upon which it is based. Thus, SRAM-based FPGAs require additional external nonvolatile memory components, and the sensitive data must be securely transported from the external device to the FPGA at each power-up cycle.



Zeroization

Active zeroization is used to erase critical information, followed by verification that the erase operation was successful. It can be used as one of many possible responses to a tamper detection alarm. Passive zeroization is erasure of volatile memory by removal of the power source; verification may be infeasible in this case.

See also, Tamper Detection.