
Rescale to next function #247

Open
safiia opened this issue Aug 5, 2024 · 1 comment
safiia commented Aug 5, 2024

Description
When I use rescale_to_next(), it does not work as it should: it turns a ciphertext at level 1 (<mod_level=1>) into None instead of rescaling it to level 0 (<mod_level=0>).
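
For reference, here is a minimal standalone version of the same call sequence, outside my model code (the context parameters n, scale and qi_sizes below are illustrative, not my exact setup):

    import numpy as np
    from Pyfhel import Pyfhel

    # Illustrative CKKS context; n, scale and qi_sizes are example values only
    HE = Pyfhel()
    HE.contextGen(scheme='ckks', n=2**14, scale=2**30, qi_sizes=[60, 30, 30, 30, 60])
    HE.keyGen()

    enc_index = HE.encryptFrac(np.array([2.0], dtype=np.float64))
    enc_i = HE.encryptFrac(np.array([1.0], dtype=np.float64))

    comparison_result = HE.square(HE.sub(enc_index, enc_i))    # scale_bits doubles to 60
    print("before rescale:", comparison_result)
    comparison_result = HE.rescale_to_next(comparison_result)  # this is where I get None back
    print("after rescale:", comparison_result)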

Code To Reproduce Error

    def forward(self, encrypted_indices, relin_key):
        print(f"Forward pass of EncryptedEmbedding with input length: {len(encrypted_indices)}")
        result = []
        for enc_index in encrypted_indices:
            if enc_index is None:
                print("Warning: enc_index is None. Skipping.")
                continue

            one_hot_vector = [self.HE.encryptFrac(np.array([0.0], dtype=np.float64)) for _ in range(len(self.embedding_matrix))]
            for i in range(len(self.embedding_matrix)):
                enc_i = self.HE.encryptFrac(np.array([float(i)], dtype=np.float64))
                print("enc_i",enc_i)
                print("enc_index",enc_index)
                comparison_result = self.HE.square(self.HE.sub(enc_index, enc_i))
                print("comparison_result", comparison_result)
                comparison_result = self.HE.rescale_to_next(comparison_result)
                print("comparison_result", comparison_result)
                one_hot_element = self.HE.sub(self.HE.encryptFrac(np.array([1.0], dtype=np.float64)), comparison_result)
                print("one_hot_element", one_hot_element)

                for j in range(len(self.embedding_matrix)):
                    print("i=", i, "j=", j)
                    one_hot_element, one_hot_vec_elem = self.align_scales(one_hot_element, self.one_hot_vectors[i][j])
                    # print("one_hot_element", one_hot_element)
                    # print("one_hot_vec_elem", one_hot_vec_elem)
                    multiplied_element = self.custom_rescale_to_next(self.HE.multiply(one_hot_element, one_hot_vec_elem))
                    print("multiplied_element", multiplied_element)
                    one_hot_vector[j] = self.HE.add(one_hot_vector[j], multiplied_element)
            print(f"Size of one_hot_vector: {len(one_hot_vector)}")
            for idx, vec in enumerate(one_hot_vector):
                print(f"one_hot_vector[{idx}]: scale={vec.scale}, mod_level={vec.mod_level}")

            embedded_vector = self.encrypted_matrix_vector_multiplication(self.HE, self.embedding_matrix, one_hot_vector, relin_key)
            result.append(embedded_vector)
        return result

Result:
enc_i <Pyfhel Ciphertext at 0x7b2ba1e1f0b0, scheme=ckks, size=2/2, scale_bits=30, mod_level=0>
enc_index <Pyfhel Ciphertext at 0x7b2fc82e5d50, scheme=ckks, size=2/2, scale_bits=30, mod_level=0>
comparison_result <Pyfhel Ciphertext at 0x7b2fc82e5d50, scheme=ckks, size=3/3, scale_bits=60, mod_level=1>
comparison_result None

Pyfhel/Pyfhel.pyx in Pyfhel.Pyfhel.Pyfhel.sub()

RuntimeError: scheme type mistmatch in sub terms ({ctxt._scheme} VS {ctxt_other._scheme}

I expect comparison_result to be rescaled (to level 0) instead of becoming None, but I think the rescale function's behaviour is the cause.
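
If rescale_to_next is meant to modify the ciphertext in place rather than return a new one, I would expect something like the following to show the rescaled result on the original object (sketch only, reusing the names from the snippet above):

    # Sketch: if the rescale happens in place, inspecting the same object
    # (rather than the return value) should show the rescaled ciphertext.
    comparison_result = self.HE.square(self.HE.sub(enc_index, enc_i))
    self.HE.rescale_to_next(comparison_result)   # no reassignment
    print("comparison_result after rescale:", comparison_result)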

My Setup:

  • OS: Ubuntu
  • Python: 3.8
  • C compiler version: (AMD64)
  • Pyfhel Version: 3.4.2
safiia added the bug label Aug 5, 2024
ShokofehVS commented

I was reading some issues with somewhat similar descriptions. Could it be due to running out of available rescalings? If so, the problem can be solved following #224.
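
If it is indeed a matter of running out of rescalings, one way to get more of them is to generate the CKKS context with more intermediate primes, so that every multiplication/square in the loop still has a prime left to rescale into, along these lines (sizes here are only an example):

    from Pyfhel import Pyfhel

    HE = Pyfhel()
    HE.contextGen(
        scheme='ckks',
        n=2**15,                                # larger n leaves room for a longer modulus chain
        scale=2**30,
        qi_sizes=[60, 30, 30, 30, 30, 30, 60],  # each extra 30-bit prime allows one more rescaling
    )
    HE.keyGen()
    HE.relinKeyGen()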
