I noticed that the embeddings produced by most models have a norm of 1. At first I assumed this was because the default value of the normalize_embeddings argument of the encode() method is True, but it is actually False. It turns out that if modules.json contains a module of type sentence_transformers.models.Normalize (2_Normalize), the output embeddings are always normalized, regardless of the value of the normalize_embeddings argument. This happens because the forward() method applies every loaded module, including the Normalize module.
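To illustrate the mechanism, here is a minimal stdlib-only sketch (not the real sentence-transformers code) of how forward() chains all loaded modules: FakeModel and l2_normalize are hypothetical stand-ins, and Normalize here mimics sentence_transformers.models.Normalize. Because Normalize sits in the module pipeline, the embedding is already unit length before encode() ever checks the normalize_embeddings flag.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

class Normalize:
    """Stand-in for sentence_transformers.models.Normalize."""
    def __call__(self, features):
        features["sentence_embedding"] = l2_normalize(features["sentence_embedding"])
        return features

class FakeModel:
    """Hypothetical sketch of a SentenceTransformer-like pipeline."""
    def __init__(self, modules):
        # In the real library, this list is built from modules.json.
        self.modules = modules

    def forward(self, features):
        # Every loaded module runs, including any Normalize module.
        for module in self.modules:
            features = module(features)
        return features

    def encode(self, embedding, normalize_embeddings=False):
        features = self.forward({"sentence_embedding": list(embedding)})
        out = features["sentence_embedding"]
        if normalize_embeddings:
            # Redundant when a Normalize module already ran in forward().
            out = l2_normalize(out)
        return out

model = FakeModel([Normalize()])
emb = model.encode([3.0, 4.0], normalize_embeddings=False)
# The norm is 1.0 even though normalize_embeddings is False.
print(math.sqrt(sum(x * x for x in emb)))
```

The key point is that the flag only controls an extra normalization step after forward(); it cannot undo the one performed by a Normalize module inside the pipeline.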