I’ve been studying fully homomorphic encryption (FHE) lately, which is the miraculous ability to perform arbitrary computations on encrypted data without learning anything about the underlying message. It is the most comprehensive private-computing solution that could exist (and it does exist!).

The first FHE scheme by Craig Gentry was based on ideal lattices and was considered very complex (I never took the time to learn how it works). Some later schemes (GSW = Gentry-Sahai-Waters) are based on matrix multiplication and are conceptually much simpler. Even more recent FHE schemes build on GSW or use it as a core subroutine.

All of these schemes inject random noise into the ciphertext, and each homomorphic operation amplifies the noise. Once the noise grows too large, you can no longer decrypt the message, so you occasionally need to apply a process called “bootstrapping” that reduces the noise. Bootstrapping also tends to be the performance bottleneck of any FHE scheme, and this bottleneck is why FHE is not yet considered practical.

To help tame noise growth, many FHE schemes, including GSW, use a technical construction dubbed the *gadget decomposition*. Despite the terribly vague name, it provides a crucial limit on noise growth. When it appears in a paper, it is usually treated as “well known in the literature,” and the details you would need to implement it are omitted. It’s one of *those* topics.

So in this post I’ll give some details. The code from this post is on GitHub.

## Binary digit decomposition

To build an FHE scheme, you need to support two homomorphic operations on ciphertexts: addition and multiplication. Most FHE schemes support one of the two operations trivially. If ciphertexts are numbers, as in RSA, you multiply them as numbers and the underlying messages are multiplied, but no addition is known to be possible. If ciphertexts are vectors, as in the “Learning With Errors” (LWE) scheme, the basis of many FHE schemes, you add them as vectors and the underlying messages are added. (The “error” in LWE is synonymous with “random noise”; I will use the term “noise”.) In LWE and most FHE schemes, a ciphertext hides the underlying message by adding random noise, and adding two ciphertexts adds the corresponding noise terms. After too many unmitigated additions, the noise grows so large that it overwhelms the message. Then you either stop computing, or you apply a bootstrapping operation to reduce the noise.
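To make the additive homomorphism concrete, here is a toy LWE-flavored sketch. The parameters, encoding, and helper names are my own illustrative choices (this is not a secure or standard instantiation); the point is only that adding ciphertexts adds both the messages and the noises.

```python
import random

# Toy parameters (illustrative only, NOT secure).
q = 1 << 32        # ciphertext modulus
n = 16             # secret-key dimension
delta = q // 16    # scale: messages live in Z_16, stored in the high bits

def keygen():
    return [random.randrange(2) for _ in range(n)]   # binary secret key

def encrypt(m, s):
    a = [random.randrange(q) for _ in range(n)]      # random mask
    e = random.randrange(-8, 9)                      # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + delta * m + e) % q
    return (a, b)

def add(c1, c2):
    # Adding ciphertexts componentwise adds the messages *and* the noises.
    a = [(x + y) % q for x, y in zip(c1[0], c2[0])]
    return (a, (c1[1] + c2[1]) % q)

def decrypt(c, s):
    a, b = c
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return round(noisy / delta) % 16                 # rounding removes the noise
```

Each addition grows the hidden noise term; once the accumulated noise exceeds `delta / 2`, the rounding in `decrypt` lands on the wrong message.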

Most FHE schemes also allow you to multiply a ciphertext by an unencrypted constant $x$, but then the noise grows by a factor of $x$, which is undesirable if $x$ is large. So you must either limit the coefficients of your linear combinations to some upper bound, or use a variant of the gadget decomposition.

The simplest version of the gadget decomposition works as follows. Instead of encrypting a message $m \in \mathbb{Z}_q$, you encrypt $2^i m$ for every choice of $0 \leq i < k$, and then to multiply by a constant $x < 2^k$, you write the binary digits of $x = \sum_{i=0}^{k-1} b_i 2^i$ and compute $\sum_{i=0}^{k-1} b_i \,\mathrm{Enc}(2^i m)$. If the noise in each encryption is $E$, and summing ciphertexts sums the noise, then this trick reduces the noise growth from $O(xE)$ to $O(kE)$, at the cost of tracking $k$ ciphertexts. (Calling the noise $E$ is a bit of an abuse, since in reality the error is sampled from a random distribution, but I hope you see my point.)

Some people call the first map $\mathrm{PowersOf2}(m) = (m, 2m, 4m, \dots, 2^{k-1}m)$, and for the sake of this article we’ll call the act of writing a number $x$ in terms of its binary digits $\mathrm{Bin}(x) = (b_0, b_1, \dots, b_{k-1})$ (note that the first digit is the least significant, i.e., this is a little-endian representation). Then PowersOf2 and Bin convert an ordinary product into a dot product, shifting the powers of 2 from one side to the other: $mx = \langle \mathrm{PowersOf2}(m), \mathrm{Bin}(x) \rangle$.

This inspired a “proof by meme” that I couldn’t resist making.

As a worked example, if the message is $m = 7$ and $x = 6$, then $\mathrm{PowersOf2}(7) = (7, 14, 28)$ and $\mathrm{Bin}(6) = (0, 1, 1)$ (again, little-endian), and the dot product is $7 \cdot 0 + 14 \cdot 1 + 28 \cdot 1 = 42 = 7 \cdot 6$.
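In code, this worked example can be sketched as follows (the helper names `powers_of_2`, `bin_digits`, and `dot` are mine):

```python
def powers_of_2(m, k):
    return [m << i for i in range(k)]        # (m, 2m, 4m, ..., 2^(k-1) m)

def bin_digits(x, k):
    return [(x >> i) & 1 for i in range(k)]  # little-endian binary digits of x

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The dot product recovers the ordinary product m * x.
m, x, k = 7, 6, 3
assert dot(powers_of_2(m, k), bin_digits(x, k)) == m * x  # 42
```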

## General gadget construction

It is possible to generalize the binary digit decomposition to other bases, or to tuples of messages instead of a single message, or to include only a subset of the digits for approximate decompositions. I was wondering about FHE schemes that do all three. In my search for clarity I came across a nice paper by Genise, Micciancio, and Polyakov called “Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More,” in which they give a nice general definition.

**Definition:** For any finite additive group $A$, an *$A$-gadget* of size $w$ and quality $\beta$ is a vector $g \in A^w$ such that any group element $u \in A$ can be written as an integer combination $u = \langle g, x \rangle$, where $x = (x_1, \dots, x_w) \in \mathbb{Z}^w$ has norm at most $\beta$.

The main groups considered in my case are $A = \mathbb{Z}_q$, where $q$ is usually $2^{32}$ or $2^{64}$, i.e., unsigned integer sizes on computers, for which we get modulus operations for free. In this case, a $\mathbb{Z}_q$-gadget is a vector $g \in (\mathbb{Z}_q)^w$, and the representation of a group element $u \in \mathbb{Z}_q$ is an integer vector $x \in \mathbb{Z}^w$.

Here $q$ and $A$ are fixed, while $w$ and $\beta$ are traded off to make the chosen gadget more efficient (smaller $w$) or better at reducing noise (smaller $\beta$). An example of how this can play out is shown in the next section, by generalizing the binary digit decomposition to an arbitrary base $B$. This allows you to use fewer digits ($w = \lceil \log_B q \rceil$) to represent a number, but each digit may be as large as $B - 1$, and so the quality is $\beta = B - 1$.
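The tradeoff can be sketched in a few lines (the function name `gadget_params` is mine): raising the base shrinks the size $w$ while inflating the quality $\beta$.

```python
import math

def gadget_params(q_log, base_log):
    """Size w and quality beta of the base-B digit gadget over Z_(2^q_log)."""
    w = math.ceil(q_log / base_log)  # number of base-B digits
    beta = (1 << base_log) - 1       # worst-case digit value, B - 1
    return w, beta

# For q = 2^32: base 2 gives 32 tiny digits; base 256 gives 4 large ones.
assert gadget_params(32, 1) == (32, 1)
assert gadget_params(32, 8) == (4, 255)
```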

One common construction converts an $A$-gadget into an $A^n$-gadget using the Kronecker product. Let $g \in A^w$ be an $A$-gadget of quality $\beta$. Then the following matrix is an $A^n$-gadget of size $nw$ and quality $\beta$:

$$G = I_n \otimes g = \begin{pmatrix} g & & & \\ & g & & \\ & & \ddots & \\ & & & g \end{pmatrix}$$

The empty spaces represent zeros, for clarity.

An example with $q = 16$ and $B = 2$. The $\mathbb{Z}_{16}$-gadget is $g = (1, 2, 4, 8)$. It has size $4$ and quality $\beta = 1$. Then to get a $(\mathbb{Z}_{16})^2$-gadget, we construct

$$G = I_2 \otimes g = \begin{pmatrix} 1 & 2 & 4 & 8 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 2 & 4 & 8 \end{pmatrix}$$

Now given a vector $u = (9, 3) \in (\mathbb{Z}_{16})^2$, we decompose it as follows, with the little-endian binary representation of each entry concatenated into a single vector: $x = (1, 0, 0, 1, 1, 1, 0, 0)$.

And finally, $Gx = (9, 3) = u$.

To hew more closely to the definition, if we were to write the matrix above as a “vector” gadget, it would be listed in column order from left to right: $g' = ((1,0), (2,0), (4,0), (8,0), (0,1), (0,2), (0,4), (0,8))$. Since the vector $x$ can at worst be all 1s, its norm is at most $\beta = 1$, as argued above.
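The same example in code (a sketch; the function names are mine):

```python
def block_gadget(g, n):
    """The Kronecker product I_n ⊗ g, as an n x (n*w) matrix of lists."""
    w = len(g)
    G = [[0] * (n * w) for _ in range(n)]
    for i in range(n):
        for j in range(w):
            G[i][i * w + j] = g[j]
    return G

def decompose_vector(u, k):
    """Concatenate the k little-endian binary digits of each entry of u."""
    return [(entry >> i) & 1 for entry in u for i in range(k)]

def matvec(G, x, q):
    return [sum(a * b for a, b in zip(row, x)) % q for row in G]

g = [1, 2, 4, 8]                  # the Z_16 gadget from above
G = block_gadget(g, 2)            # a (Z_16)^2 gadget of size 8
x = decompose_vector([9, 3], 4)   # [1, 0, 0, 1, 1, 1, 0, 0]
assert matvec(G, x, 16) == [9, 3]
```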

## Signed representation in base B

As we have seen, the gadget decomposition trades noise reduction for a larger ciphertext size. With integers modulo $q = 2^{32}$, this can be tuned a bit more by using a larger base. Instead of PowersOf2 we can define PowersOfB, where $B = 2^b$ and $b$ divides 32. For example, with $B = 2^8$, we would only need to track 4 ciphertexts. And the gadget decomposition of the number we multiply by would be the little-endian digits of its base-$B$ representation. The cost here is that the maximum entry of the decomposed representation is 255.

We can tune this a little further by using a *signed* base-$B$ representation. To the best of my knowledge this is not the same as what computer programmers mean by a signed integer, nor does it have anything to do with the *two’s complement* representation of negative numbers. Instead of the usual base-$B$ digits $d_i \in \{0, 1, \dots, B-1\}$ for a number $x = \sum_i d_i B^i$, one chooses signed digits $d_i \in \{-B/2, \dots, B/2 - 1\}$.

Computing the digits is a little more involved: it works by shifting any large digit down by $B$, and “absorbing” the impact of that shift into the next, more significant digit. For example, if $B = 16$ and $x = 1023$ (all 1 bits up through the 10th bit), then the little-endian unsigned base-16 representation of $x$ is $(15, 15, 3)$. To get the corresponding *signed* base-16 representation, subtract 16 from the first digit and add 1 to the second digit; repeating the shift for the second digit results in $(-1, 0, 4)$. This works in general because of the following “add zero” identity, where $d_i$ and $d_{i+1}$ are two consecutive digits of the unsigned base-$B$ representation of a number:

$$d_i B^i + d_{i+1} B^{i+1} = (d_i - B) B^i + (d_{i+1} + 1) B^{i+1}$$

So if $d_i \geq B/2$, apply the identity and carry the 1 up to the next-higher coefficient.

The upshot of all this is that the maximum absolute value of a coefficient of the signed representation is half that of the unsigned representation, which reduces noise growth at the cost of a slightly more complicated representation (implementation-wise). Another side effect is that the largest representable number is less than $2^{32} - 1$. If you try to apply this algorithm to such a large number, the largest digit would need to absorb a carry, but there is no successor digit to carry into. Rather, if there are $k$ digits in the unsigned base-$B$ representation, the maximum number representable in the signed version has all digits equal to $B/2 - 1$. In our example with $B = 2^8$ and 32 bits, the largest digit is 127. The formula for the maximum representable integer is $(B/2 - 1)\frac{B^k - 1}{B - 1}$:

```
max_digit = base // 2 - 1
max_representable = (
    max_digit * (base ** (num_bits // base_log) - 1) // (base - 1)
)
```

A simple Python implementation computes the signed representation, with the code copied below, in which $B$ is the `base` and $b = \log_2(B)$ is the `base_log`.

```
from typing import List


def signed_decomposition(
        x: int, base_log: int, total_num_bits=32) -> List[int]:
    result = []
    base = 1 << base_log
    digit_mask = (1 << base_log) - 1
    base_over_2_threshold = 1 << (base_log - 1)
    carry = 0
    for i in range(total_num_bits // base_log):
        # the next unsigned digit, plus any carry from the previous digit
        unsigned_digit = (x >> (i * base_log)) & digit_mask
        if carry:
            unsigned_digit += carry
            carry = 0
        signed_digit = unsigned_digit
        if signed_digit >= base_over_2_threshold:
            # shift this digit down by B and carry the 1 upward
            signed_digit -= base
            carry = 1
        result.append(signed_digit)
    return result
```
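As a sanity check, here is an equivalent arithmetic formulation (my own sketch, using `divmod` instead of bit masks), together with the two properties a signed decomposition must satisfy: digits bounded by $B/2$ in absolute value, and reconstruction of the input mod $q$ (only mod $q$, since a carry out of the top digit is dropped).

```python
def signed_digits(x, base, num_digits):
    """Signed base-B digits, little-endian, each in [-B/2, B/2)."""
    digits = []
    for _ in range(num_digits):
        x, d = divmod(x, base)
        if d >= base // 2:
            d -= base   # shift this digit down by B...
            x += 1      # ...and absorb the shift into the next digit
        digits.append(d)
    return digits

# Matches the worked example: 1023 in signed base 16 is (-1, 0, 4).
assert signed_digits(1023, 16, 3) == [-1, 0, 4]

# The two defining properties, checked with q = 2^32 and B = 2^8.
q = 1 << 32
for x in [0, 1, 1023, 0xDEADBEEF]:
    ds = signed_digits(x, 256, 4)
    assert all(-128 <= d < 128 for d in ds)
    assert sum(d * 256**i for i, d in enumerate(ds)) % q == x
```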

In a future article I’d like to demonstrate the gadget decomposition in action in a practical setting called *key switching*, which allows you to convert an LWE ciphertext encrypted under one key $s$ into an LWE ciphertext of the same message under a different key $s'$. This increases the noise, so the gadget decomposition is used to mitigate the noise growth. Key switching is used in FHE because some operations (like bootstrapping) have a side effect of switching the encryption key.

Until then!