The Gadget Decomposition in FHE

Lately I've been studying fully homomorphic encryption (FHE), which is the miraculous ability to perform arbitrary computations on encrypted data without learning any information about the underlying message. It is the most comprehensive private computing solution that can exist (and it does exist!).

The first FHE scheme by Craig Gentry was based on ideal lattices and was considered very complex (I never took the time to learn how it worked). Some later schemes (GSW = Gentry-Sahai-Waters) are based on matrix multiplication and are conceptually much simpler. Newer FHE schemes build on GSW or use it as a core subroutine.

All of these schemes inject random noise into the ciphertext, and each homomorphic operation amplifies the noise. Once the noise grows too large, you can no longer decrypt the message, so every so often you must apply a process called “bootstrapping” that reduces the noise. Bootstrapping also tends to be the performance bottleneck of any FHE scheme, and this bottleneck is why FHE is not yet considered practical.

To help limit noise growth, many FHE schemes, including GSW, use a technique called the gadget decomposition. Despite the terribly vague name, it is a crucial tool for controlling noise growth. When it appears in a paper, it is usually treated as “well known in the literature,” and the details you would need to implement it are omitted. It's one of those topics.

So in this post I'll give some details. The code from this post is on GitHub.

Binary digit decomposition

To build an FHE scheme, you need to be able to apply two homomorphic operations to ciphertexts: addition and multiplication. Most FHE schemes admit one of the two operations trivially. If the ciphertexts are numbers, as in RSA, you multiply them as numbers and that multiplies the underlying messages, but there is no known way to add them homomorphically. If the ciphertexts are vectors, as in the “Learning With Errors” (LWE) scheme, the basis of many FHE schemes, you add them as vectors and that adds the underlying messages. (Here the “error” in LWE is synonymous with “random noise,” and I'll use the term “noise.”) In LWE and most FHE schemes, a ciphertext hides the underlying message by adding random noise, and adding two ciphertexts adds the corresponding noise. After too many unmitigated additions, the noise grows so large that it obstructs the message. At that point you either stop computing, or apply a bootstrapping operation to reduce the noise.

Most FHE schemes also allow you to multiply a ciphertext by an unencrypted constant $A$, but then the noise scales by a factor of $A$, which is undesirable if $A$ is large. So you either need to limit the coefficients of your linear combinations by some upper bound, or use a version of the gadget decomposition.

The simplest version of the gadget decomposition works as follows. Instead of encrypting a message $m \in \mathbb{Z}$, you encrypt $m, 2m, 4m, \dots, 2^{k-1} m$ for some choice of $k$, and then to multiply by any constant $A < 2^k$ you write the binary digits of $A = \sum_{i=0}^{k-1} a_i 2^i$ and compute $\sum_{i=0}^{k-1} a_i \textup{Enc}(2^i m)$. If the noise in each encryption is $E$, and summing ciphertexts sums their noise, then this trick reduces the noise growth from $O(AE)$ to $O(kE) = O(\log(A)E)$, at the cost of tracking $k$ ciphertexts. (Calling the noise $E$ is a bit of an abuse; in reality the error is sampled from a random distribution, but I hope you get the point.)

Some people call the mapping $\textup{PowersOf2}(m) = m \cdot (2^0, 2^1, 2^2, \dots, 2^{k-1})$, and for the sake of this article we'll call the operation of writing a number $A$ in terms of its binary digits $\textup{Bin}(A) = (a_0, \dots, a_{k-1})$ (note that the first digit is the least significant, i.e., it is a little-endian representation). Then PowersOf2 and Bin expand an integer product into a dot product, while shifting powers of 2 from one side to the other.

$\displaystyle A \cdot m = \langle \textup{Bin}(A), \textup{PowersOf2}(m) \rangle$

This inspired the following “proof by meme” that I couldn't resist including.

[meme image: powersof2]

A worked example: if the message is $m = 7$ and $A = 100$, $k = 8$, then $\textup{PowersOf2}(7) = (7, 14, 28, 56, 112, 224, 448, 896)$ and $\textup{Bin}(A) = (0, 0, 1, 0, 0, 1, 1, 0)$ (again, little-endian), and the dot product is

$\displaystyle 28 \cdot 1 + 224 \cdot 1 + 448 \cdot 1 = 700 = 7 \cdot 2^2 + 7 \cdot 2^5 + 7 \cdot 2^6$
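
Here is a minimal Python sketch of this identity (the helper names powers_of_2 and bin_digits are mine, not from the post's GitHub code):

def powers_of_2(m: int, k: int) -> list:
    """Return (m * 2^0, m * 2^1, ..., m * 2^(k-1))."""
    return [m << i for i in range(k)]

def bin_digits(a: int, k: int) -> list:
    """Return the k little-endian binary digits of a."""
    return [(a >> i) & 1 for i in range(k)]

m, a, k = 7, 100, 8
dot = sum(ai * pi for ai, pi in zip(bin_digits(a, k), powers_of_2(m, k)))
assert dot == a * m == 700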

General gadget construction

The binary digit decomposition can be generalized to different bases, to vectors of messages instead of a single message, or to include only a subset of the digits for variable approximation. I've been wondering about an FHE scheme that does all three. In my search for clarity I came across a nice paper of Genise, Micciancio, and Polyakov called “Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More,” in which they state a nice general definition.

Definition: For any finite additive group $A$, an $A$-gadget of size $w$ and quality $\beta$ is a vector $\mathbf{g} \in A^w$ such that any group element $u \in A$ can be written as an integer combination $u = \sum_{i=1}^w g_i x_i$ where $\mathbf{x} = (x_1, \dots, x_w)$ has norm at most $\beta$.

The main groups of interest in my case are $A = (\mathbb{Z}/q\mathbb{Z})^n$, where $q$ is usually $2^{32}$ or $2^{64}$, i.e., the sizes of unsigned ints on computers, for which we get modulus operations for free. In this case, a $(\mathbb{Z}/q\mathbb{Z})^n$-gadget is a matrix $G \in (\mathbb{Z}/q\mathbb{Z})^{n \times w}$, and the representation $x \in \mathbb{Z}^w$ of $u \in (\mathbb{Z}/q\mathbb{Z})^n$ satisfies $Gx = u$.

Here $n$ and $q$ are fixed, and $w, \beta$ are traded off to make the chosen gadget either more efficient (smaller $w$) or better at reducing noise (smaller $\beta$). An example of how this can work is shown in the next section by generalizing the binary digit decomposition to an arbitrary base $B$. This allows you to use fewer digits to represent the number $A$, but each digit may be as large as $B$, and so the quality is $\beta = O(B\sqrt{w})$.
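
As a rough back-of-the-envelope illustration (my own sketch, not from the paper), here is how size and quality trade off for a few power-of-two bases when $q = 2^{32}$:

import math

q_bits = 32
for b in [1, 4, 8, 16]:       # base B = 2^b
    B = 1 << b
    w = q_bits // b           # number of digits needed to represent A
    beta = B * math.sqrt(w)   # crude quality bound of order B * sqrt(w)
    print(f"B = 2^{b:<2} -> size w = {w:<2}, quality beta ~ {beta:.1f}")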

One commonly used construction is to convert an $A$-gadget into an $A^n$-gadget using the Kronecker product. Let $\mathbf{g} \in A^w$ be an $A$-gadget of quality $\beta$. Then the following matrix is an $A^n$-gadget of size $nw$ and quality $\sqrt{n}\beta$:

$\displaystyle G = I_n \otimes \mathbf{g}^\top = \begin{pmatrix} g_1 & \dots & g_w & & & & & & & \\ & & & g_1 & \dots & g_w & & & & \\ & & & & & & \ddots & & & \\ & & & & & & & g_1 & \dots & g_w \end{pmatrix}$

The blank spaces represent zeros, left empty for readability.

An example with $A = \mathbb{Z}/16\mathbb{Z}$. An $A$-gadget is $\mathbf{g} = (1, 2, 4, 8)$. It has size $4 = \log_2(q)$ and quality $\beta = 2 = \sqrt{1 + 1 + 1 + 1}$. Then to make an $A^3$-gadget, we construct

$\displaystyle G = I_3 \otimes \mathbf{g}^\top = \begin{pmatrix} 1 & 2 & 4 & 8 & & & & & & & & \\ & & & & 1 & 2 & 4 & 8 & & & & \\ & & & & & & & & 1 & 2 & 4 & 8 \end{pmatrix}$

Now given a vector $(15, 4, 7) \in A^3$, we write its representation as follows, with each little-endian binary representation concatenated into a single vector:

$\displaystyle \mathbf{x} = (1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0)$

And finally,

$\displaystyle G\mathbf{x} = (15, 4, 7)$

To be pedantic about the definition, if we wanted to write the matrix above as a “vector” gadget, it would be read off in column order from left to right, $\mathbf{g} = ((1,0,0), (2,0,0), \dots, (0,0,8)) \in A^{wn}$. Since the vector $\mathbf{x}$ can at worst be all 1s, its norm is at most $\sqrt{12} = \sqrt{nw} = \sqrt{n}\beta = 2\sqrt{3}$, as claimed above.
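
A quick numpy sketch of this example (variable names are mine) builds $G = I_3 \otimes \mathbf{g}^\top$ with np.kron and checks that $G\mathbf{x} = (15, 4, 7)$:

import numpy as np

q, n = 16, 3
g = np.array([1, 2, 4, 8])            # the A-gadget for A = Z/16Z
G = np.kron(np.eye(n, dtype=int), g)  # shape (3, 12): the A^3-gadget

# Little-endian binary digits of 15, 4, and 7, concatenated.
x = np.array([1, 1, 1, 1,  0, 0, 1, 0,  1, 1, 1, 0])
assert ((G @ x) % q == np.array([15, 4, 7])).all()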

Signed representation in base B

As we've seen, the gadget decomposition trades noise reduction for larger ciphertext size. With integers modulo $q = 2^{32}$, we can tune this a bit more by using a larger base. Instead of PowersOf2 we can define PowersOfB, where $B = 2^b$ is chosen so that $B$ divides $2^{32}$. For example, with $b = 8$, $B = 256$, we would only need to track 4 ciphertexts. And the gadget decomposition of the number we multiply by would be the little-endian digits of its base-$B$ representation. The cost here is that the maximum entry of the decomposed representation is 255.
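
As a minimal sketch (assuming $q = 2^{32}$ and $b = 8$, as in the running example), the unsigned base-$B$ decomposition needs only 4 digits, each potentially as large as 255. The helper below is illustrative, not from the post's repo:

def unsigned_decomposition(x: int, base_log: int, total_num_bits: int = 32) -> list:
    """Little-endian base-B digits of x, where B = 2**base_log."""
    digit_mask = (1 << base_log) - 1
    return [(x >> (i * base_log)) & digit_mask
            for i in range(total_num_bits // base_log)]

print(unsigned_decomposition(2**11 - 1, base_log=8))  # [255, 7, 0, 0]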

We can fine-tune this a bit more by using a signed base-$B$ representation. To my knowledge this is not the same as what computer programmers mean by a signed integer, nor does it have anything to do with the two's complement representation of negative numbers. Instead of the usual base-$B$ digits $n_i \in \{0, 1, \dots, B-1\}$ for a number $N = \sum_{i=0}^k n_i B^i$, the signed representation chooses $n_i \in \{-B/2, -B/2 + 1, \dots, -1, 0, 1, \dots, B/2 - 1\}$.

Computing the digits is a little more involved, and it works by subtracting $B$ from large digits and “absorbing” the effect of that shift into the next more significant digit. For example, if $B = 256$ and $N = 2^{11} - 1$ (all 1 bits through bit 10), then the unsigned little-endian base-$B$ representation of $N$ is $(255, 7) = 255 + 7 \cdot 256$. The corresponding signed base-$B$ representation subtracts $B$ from the first digit and adds 1 to the second digit, resulting in $(-1, 8) = -1 + 8 \cdot 256$. This works in general because of the following “add zero” identity, where $p$ and $q$ are two consecutive digits in the unsigned base-$B$ representation of a number.

$\displaystyle \begin{aligned} pB^{k-1} + qB^k &= pB^{k-1} - B^k + qB^k + B^k \\ &= (p - B)B^{k-1} + (q + 1)B^k \end{aligned}$

Then if $q + 1 \geq B/2$, you repeat the process, carrying the 1 into the next higher coefficient.

The upshot of all this is that the maximum absolute value of a coefficient in the signed representation is half that of the unsigned representation, which reduces the noise growth at the cost of a slightly more complicated representation (from an implementation standpoint). Another side effect is that the largest representable number is less than $2^{32} - 1$. If you tried to apply this algorithm to such a large number, the largest digit would need a carry, but there is no next digit to carry into. Rather, if there are $k$ digits in the unsigned base-$B$ representation, the maximum number representable in the signed version has all digits equal to $B/2 - 1$. In our example with $B = 256$ and 32 bits, the largest digit is 127. The formula for the maximum representable integer is $\sum_{i=0}^{k-1} (B/2 - 1) B^i = (B/2 - 1)\frac{B^k - 1}{B - 1}$.

base_log = 8            # b = log2(B), as in the running example
base = 1 << base_log    # B = 256
num_bits = 32           # q = 2^32
max_digit = base // 2 - 1
max_representable = (
    max_digit * (base ** (num_bits // base_log) - 1) // (base - 1)
)

A simple Python implementation computes the signed representation; the code is copied below, in which $B = 2^b$ is the base and $b = \log_2(B)$ is base_log.

from typing import List

def signed_decomposition(
    x: int, base_log: int, total_num_bits=32
) -> List[int]:
    result = []
    base = 1 << base_log
    digit_mask = (1 << base_log) - 1
    base_over_2_threshold = 1 << (base_log - 1)
    carry = 0

    for i in range(total_num_bits // base_log):
        # Extract the i-th unsigned base-B digit, plus any carry from the
        # previous digit.
        unsigned_digit = (x >> (i * base_log)) & digit_mask
        if carry:
            unsigned_digit += carry
            carry = 0

        # If the digit is at least B/2, subtract B and carry 1 into the
        # next more significant digit.
        signed_digit = unsigned_digit
        if signed_digit >= base_over_2_threshold:
            signed_digit -= base
            carry = 1
        result.append(signed_digit)

    return result
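
For example, on the number from earlier ($N = 2^{11} - 1$ with $B = 256$), the signed decomposition turns the 255 digit into $-1$ and carries into the next digit:

digits = signed_decomposition(2**11 - 1, base_log=8)
print(digits)  # [-1, 8, 0, 0]

# The signed digits still reconstruct the original number.
assert sum(d * 256**i for i, d in enumerate(digits)) == 2**11 - 1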

In a future article I'd like to show the gadget decomposition in action in a practical setting called key switching, which allows one to convert an LWE ciphertext encrypted under a key $s_1$ into an LWE ciphertext encrypted under a different key $s_2$. This operation increases noise, so the gadget decomposition is used to limit the noise growth. Key switching is used in FHE because some operations (like bootstrapping) have the side effect of switching the encryption key.

Until then!
