2. FPGA Bitstream Protection Techniques and Technologies
To counter the above threats, FPGA vendors and researchers have developed several techniques to secure bitstreams. Key methods include encryption, authentication, obfuscation, and robust key management, often used in combination for defense-in-depth [4, 6, 10].
This section outlines these methods:
A. Bitstream Encryption (Confidentiality Protection)
Encryption is the primary mechanism to protect bitstream confidentiality. Modern FPGAs include on-chip decryption engines, typically implementing the Advanced Encryption Standard (AES), often at 256-bit strength [4, 10]. The encrypted bitstream is stored externally (in flash or PROM), and on power-up the FPGA decrypts the stream on-the-fly using a secret key stored in on-chip fuses, battery-backed RAM (BBRAM), or flash [4]. Because decryption happens within the FPGA's secure hardware, the plaintext bitstream never appears on any external interface. As long as the encryption key remains secret, an attacker who reads the bitstream data from SPI flash or intercepts it during loading sees only ciphertext, preventing reverse engineering or cloning of the design.
FPGA vendors typically allow the key to be stored in non-volatile one-time programmable fuses or in battery-backed RAM on the device. High-end FPGAs often use eFUSE or on-chip flash for permanent key storage, avoiding the need for a battery, while lower-cost FPGAs sometimes rely on a battery-backed SRAM key (BBRAM), which works but requires maintaining battery power to retain the key when the device is off. Either way, the key is not meant to be readable from outside the FPGA. During manufacturing, the secret key is injected into the FPGA's secure memory, and from then on the device automatically decrypts any incoming encrypted bitstream using that key. Some devices even support unique keys per device, so that each FPGA has its own key and decrypts a bitstream tailored for it; this limits the impact of a leaked key to a single device [6].
Encryption alone, however, does not guarantee integrity: it only protects against unauthorized readout. If an attacker obtains the encryption key (via a side-channel attack or chip tampering), the encryption is completely defeated. Moreover, without additional measures a modified ciphertext may still load, so an attacker who can tweak the encrypted bitstream (even without fully decrypting it) might cause specific, potentially malicious changes in the FPGA configuration if there is no integrity check [6]. For these reasons, encryption is commonly paired with authentication. Nonetheless, enabling AES encryption is a fundamental first step to thwart casual cloning and reverse engineering: it forces the adversary to undertake a much more sophisticated attack (extracting or cracking the key) rather than simply copying bytes from a memory chip.
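To make the data flow concrete, the following toy Python sketch models on-the-fly decryption of an externally stored bitstream. It is purely illustrative: a keystream built from SHA-256 in counter mode stands in for the AES-256 engine (Python's standard library has no AES), and the key would in reality live in eFUSE or BBRAM, not in a function argument.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode: an illustrative stand-in for the AES-256
    # engine inside the FPGA's configuration logic (stdlib has no AES).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_bitstream(bitstream: bytes, key: bytes) -> bytes:
    # Done once at the design house; the result is what external flash holds.
    return bytes(b ^ k for b, k in zip(bitstream, keystream(key, len(bitstream))))

# XOR stream cipher: decryption (inside the FPGA) is the same operation.
decrypt_bitstream = encrypt_bitstream
```

Without the key, the flash contents are indistinguishable from random bytes; the plaintext configuration exists only after the on-chip decryption step.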
B. Bitstream Authentication and Integrity Checking
Authentication ensures that a bitstream originates from a trusted source and has not been modified in transit. This defends against tampering and unauthorized use [4, 6]. An authenticated bitstream means the FPGA will only load configuration data that carries a valid cryptographic signature or keyed hash, proving it comes from a trusted source and has not been tampered with. This protects against malicious modifications, trojans, or use of unauthorized bitstreams. Two common implementations are hash-based message authentication codes (HMAC) and digital signatures.
C. HMAC/SHA Authentication
Cryptographic hash functions (e.g., SHA-256, used as HMAC-SHA) are computed over the bitstream and verified at boot; a mismatch aborts configuration. This method is widely used in devices such as Xilinx UltraScale+ and Intel Stratix 10 [4]. The bitstream file includes an HMAC digest computed with a secret key known to the device. During configuration, the FPGA recomputes the digest using the stored key and compares it to the expected value; if authentication fails (digest mismatch), the device aborts configuration, preventing any unauthorized or corrupted bitstream from running. It is best practice to use encryption and authentication together in an "encrypt-then-authenticate" scheme.
D. Digital Signatures and Secure Boot Chains
High-end FPGAs support secure boot with public-key cryptography (RSA or ECDSA) [4, 6]. The FPGA or its boot ROM holds the corresponding public key and verifies the signature before allowing the configuration to load; because the cryptography is asymmetric, even an attacker who intercepts the bitstream cannot forge a new valid one without the private signing key. Secure boot typically works as a chain of trust: a small on-chip ROM bootloader verifies a signed first-stage bootloader, which then verifies the FPGA bitstream or application code, and so on. This prevents booting rogue code. In summary, authentication complements encryption by defending against bitstream tampering and reuse of unauthorized code: encryption stops reads of the bitstream, while authentication stops loading of rogue bitstreams. Designers concerned with security should enable these features so that any attempt to modify a bitstream, whether by flipping bits or inserting trojans, is detected and blocked.
E. Obfuscation and Design Concealment Techniques
Beyond cryptographic protection, various obfuscation techniques can make it harder for an attacker to reverse-engineer or misuse a bitstream [6]. These techniques do not rely on secret keys but on making the design representation intrinsically difficult to understand or modify. They serve as a secondary line of defense, especially where full encryption is not available (e.g., some low-cost FPGAs) or to augment security in depth. Key obfuscation approaches include:
F. Bitstream Scrambling
FPGA vendors historically used simple scrambling of the bitstream format, such as a fixed XOR mask or bit permutation, to prevent casual interpretation; older FPGAs (before AES support) often had a proprietary bitstream encoding [6]. However, determined adversaries can reverse-engineer these schemes, which are not cryptographically secure. Scrambling might stop a naive attacker but offers little resistance to serious reverse engineering, and most vendors have phased it out in favor of real encryption.
G. Logic Encryption / Logic Locking
This is a design-time obfuscation in which extra "key" inputs or gates are added to the design so that the circuit functions correctly only when the proper secret key bits are applied [6]. For instance, a designer can insert key-gated logic or lock critical functional blocks with a user-defined key. The correct key is programmed into the FPGA (for example, stored in internal registers or supplied at runtime); without it, the circuit malfunctions or outputs wrong results. This can thwart an attacker who manages to copy the bitstream, since the cloned FPGA would not operate correctly without the secret key. Logic locking has been researched extensively and can increase the difficulty of reverse engineering. However, sophisticated attackers can sometimes defeat it via SAT-based attacks, or bypass the locking logic if they extract the key, so it is not a foolproof method [10]. It is a useful supplemental protection, especially when combined with encryption, since the logic-locking key adds another layer of security.
H. Hardware Camouflaging and Dummy Logic
Designers can introduce camouflaged logic elements (look-alike dummy gates or routing that do not affect functionality) to confuse reverse engineering [6]. Dummy routes and unused logic may be deliberately inserted so that an extracted netlist contains misleading or extraneous circuitry, acting as a smokescreen against netlist-reconstruction tools. The goal is to make the true design intent harder to discern. Some FPGAs or design flows allow placing decoy state machines or mixing up LUT configurations so that only correct initialization yields a working design.
I. Partitioning and Partial Reconfiguration Obfuscation
An emerging idea, proposed in academic work, is to split the design into parts that are only combined at runtime via partial reconfiguration or multi-boot sequences, possibly under authentication [8, 10]. For instance, one approach uses a two-stage configuration in which the full design becomes functional only after a second-stage bitstream is loaded that "unlocks" certain features using a PUF-generated key. By dividing the bitstream, an attacker must defeat multiple layers (and possibly multiple keys) to reconstruct the whole design. Overall, obfuscation techniques increase the effort required to clone or understand an FPGA's bitstream. They are typically used in addition to encryption and authentication, not as a replacement (except in low-end devices where encryption is unavailable, in which case clever obfuscation and secure protocols may be the only option). Note that obfuscation is often "security through complexity": it raises the bar but provides no mathematical guarantees. Cryptographic protection therefore remains the cornerstone of bitstream security, with obfuscation as a valuable adjunct in the security toolbox.
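The two-stage unlock idea can be sketched as follows. All function names are hypothetical, and XOR with a hashed PUF response stands in for whatever cipher a real scheme would use; the point is that the second-stage blob is useless on any chip whose PUF response differs.

```python
import hashlib

def puf_key(puf_response: bytes) -> bytes:
    # Derive an unlock key from the device-unique PUF response.
    return hashlib.sha256(b"stage2-unlock" + puf_response).digest()

def make_stage2(secret_config: bytes, puf_response: bytes) -> bytes:
    # Prepared per device at provisioning time (toy: config <= 32 bytes).
    k = puf_key(puf_response)
    return bytes(c ^ x for c, x in zip(secret_config, k))

def apply_stage2(stage2_blob: bytes, puf_response: bytes) -> bytes:
    # On the device: only the chip with the right PUF recovers the config.
    k = puf_key(puf_response)
    return bytes(c ^ x for c, x in zip(stage2_blob, k))
```

A cloned board receives the same stage-2 blob but a different silicon PUF response, so the unlocking configuration it derives is wrong and the design stays non-functional.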
J. Key Management and Physical Security (Securing the “Secrets”)
The strength of encryption and authentication in FPGA security ultimately hinges on protecting the keys and the configuration process from disclosure or manipulation. Robust key management and physical anti-tamper measures are therefore critical elements of bitstream security:
K. On-Chip Key Storage
Keys are stored in eFUSE, on-chip flash, or BBRAM. Best practice is to use one-time programmable memory such as eFUSE, so the key is non-volatile, tamper-resistant, and cannot be inadvertently erased or read out [4, 6]. The key is typically written once at manufacturing (and sometimes can be updated by blowing new fuses to invalidate the old key, depending on device capabilities). Once programmed, the key is not directly accessible through any user interface. Designers should also ensure that the key programming interface itself is secure, e.g., by using encrypted key programming files and performing key injection in a trusted environment.
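The write-once, no-read-back semantics of such a key slot can be modeled as a small class. This is a toy abstraction: real eFUSE arrays are programmed bit by bit, and the comparison method below merely stands in for "the key is only ever used internally, never exported".

```python
class EFuseKeySlot:
    """Toy model of one-time programmable key storage."""

    def __init__(self) -> None:
        self._key = None  # unblown fuses

    def program(self, key: int) -> None:
        # Fuses can be blown exactly once; there is no erase operation.
        if self._key is not None:
            raise PermissionError("key slot is one-time programmable")
        self._key = key

    def matches(self, candidate: int) -> bool:
        # The only exposed operation uses the key internally; the raw
        # value never crosses the boundary of the slot.
        return self._key is not None and candidate == self._key
```

The crucial property is the absence of any read accessor: user logic can cause the key to be used, but never observe it.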
L. Physical Unclonable Functions (PUFs)
PUFs leverage inherent manufacturing variations in silicon to generate unique identifiers or cryptographic keys for each chip. Modern FPGAs integrate PUF circuits to either generate the bitstream key on the fly or protect it. The PUF output is used to encrypt or “wrap” the actual AES key, ensuring that even if an attacker reads out the non-volatile memory, they obtain only PUF-encrypted data, which cannot be decrypted off-chip. This approach effectively turns the FPGA into its own root of trust, making each device physically unclonable [4, 8].
This mitigates the risk of stored key extraction because the key is never stored in plaintext form: it is derived from silicon each time. One caveat is that PUF responses can be noisy and require helper data or error correction, but vendors have incorporated robust circuits to make PUF-derived keys reliable for encryption use [8].
However, recent studies have shown that certain PUF architectures, like Transient Effect Ring Oscillator (TERO) PUFs, are vulnerable to side-channel attacks. Tebelmann et al. demonstrated that electromagnetic (EM) analysis could successfully extract frequency-domain information from TERO PUFs using Short-Time Fourier Transform (STFT) techniques, reducing entropy and revealing exploitable patterns [11].
To address such concerns, Aghaie and Moradi introduced a side-channel resistant architecture known as TI-PUF, which uses threshold implementation (TI) masking. Their implementation makes it possible to protect strong PUF designs against side-channel leakage during response generation, enhancing resistance against EM and power attacks in FPGA-based applications [11].
M. Tamper Detection and Response
High-end FPGAs offer voltage/glitch detection, temperature sensors, and active tamper pins that zeroize cryptographic keys when triggered [4, 6]. Additionally, an enclosure might carry tamper-evident seals or an active mesh that triggers key erasure if someone attempts to probe the chip. If a device detects a tamper event (cover removal, sudden clock/voltage glitches, etc.), it can lock down or wipe critical storage to prevent an attacker from gleaning the bitstream or keys. Implementing such measures adds a layer of protection, especially against skilled adversaries with physical access.
N. Side-Channel Attack Countermeasures
Side-channel attacks such as Differential Power Analysis (DPA) and Electromagnetic Analysis (EMA) can extract secret keys or PUF responses from physical leakage. To mitigate these, FPGA vendors have implemented hardware-level defenses such as current masking, randomized clocking, dummy cycles, and noise injection in cryptographic cores [4, 5, 6]. Developers may also configure FPGAs to decrypt only once at boot, or insert non-deterministic delays to prevent repeatable measurements.
Recent academic work expands on these protections. The TI-PUF architecture, proposed by Aghaie and Moradi, represents a breakthrough in side-channel resistance. By applying threshold implementation masking to any PUF design, TI-PUFs maintain full functionality while preventing intermediate variable leakage during evaluation. Their design has demonstrated high resistance to state-of-the-art SCA attacks in practice and is implementable on commercial FPGAs [11].
O. Secure Configuration Protocols
Configuration interfaces such as JTAG must be secured or disabled in production, and encrypted-only bitstreams and authenticated partial reconfigurations should be enforced [4, 10]. The interface through which an FPGA is programmed can be a vulnerability if not secured: JTAG, for instance, is a common programming and debug interface, and locking it down is recommended so that attackers cannot use it to read back configuration memory or reprogram the device with a custom bitstream. Vendors allow JTAG access to be password-protected or permanently disabled for security. Likewise, for devices that support partial reconfiguration or remote update, the configuration ports (e.g., SPI, PCAP) should be secured; many devices can be set to accept only encrypted bitstreams or to require a valid authentication header, preventing an attacker from loading arbitrary partial bitstreams. In essence, key management and physical security are about safeguarding the root secrets (keys) and hardening the device against direct attacks. Even the best encryption algorithm fails if the key is compromised. FPGA developers should therefore choose devices with proven secure key storage (eFUSE/PUF), enable tamper sensors where needed, lock debug ports, and remain mindful of side-channel threats. By combining confidentiality (encryption), integrity and authenticity (authentication), and hardware security measures, the FPGA's bitstream can be protected against a wide spectrum of attacks.