Merge pull request #7698 from realm/tg/file-map-cache
RCORE-2141 RCORE-2142 Clean up a bunch of old encryption cruft
tgoyne authored Jun 6, 2024
2 parents 948949a + 7ab83e8 commit 42e4a85
Showing 54 changed files with 2,001 additions and 3,288 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -8,6 +8,11 @@
### Fixed
* <How do the end-user experience this issue? what was the impact?> ([#????](https://github.com/realm/realm-core/issues/????), since v?.?.?)
* Fix some client resets (such as migrating to flexible sync) potentially failing with AutoClientResetFailed if a new client reset condition (such as rolling back a flexible sync migration) occurred before the first one completed. ([PR #7542](https://github.com/realm/realm-core/pull/7542), since v13.11.0)
* Encrypted files on Windows had a maximum size of 2GB even on x64 due to internal usage of `off_t`, which is a 32-bit type on 64-bit Windows ([PR #7698](https://github.com/realm/realm-core/pull/7698), since the introduction of encryption support on Windows in v3.0.0).
* The encryption code no longer behaves differently depending on the system page size, which should entirely eliminate a recurring source of bugs related to copying encrypted Realm files between platforms with different page sizes. One known outstanding bug was [RNET-1141](https://github.com/realm/realm-dotnet/issues/3592), where opening files on a system with a larger page size than the writing system would attempt to read sections of the file which had never been written to ([PR #7698](https://github.com/realm/realm-core/pull/7698)).
* There were several complicated scenarios which could result in stale reads from encrypted files in multiprocess usage. These were very difficult to hit and would typically lead to a crash, either due to an assertion failure or a `DecryptionFailed` exception being thrown ([PR #7698](https://github.com/realm/realm-core/pull/7698), since v13.9.0).
* Encrypted files had some benign data races in which we could `memcpy` a block of memory while another thread was writing to a limited range of it. It is logically impossible to ever read from that range when this happens, but Thread Sanitizer quite reasonably complains about it. We now perform a slower operation when running with TSan which avoids this benign race ([PR #7698](https://github.com/realm/realm-core/pull/7698)).
* Tokenizing strings for full-text search could pass values outside the range [-1, 255] to `isspace()`, which is undefined behavior ([PR #7698](https://github.com/realm/realm-core/pull/7698), since the introduction of FTS in v13.0.0).
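The `isspace()` entry above refers to a well-known C library pitfall: the `<cctype>` classification functions take an `int` that must be either EOF or representable as `unsigned char`, so passing a plain `char` holding a negative value (common for UTF-8 bytes when `char` is signed) is undefined behavior. The usual fix is a wrapper that casts through `unsigned char`; a minimal sketch (the wrapper name is illustrative, not Realm's actual code):

```cpp
#include <cctype>

// Safe wrapper: casting through unsigned char maps bytes >= 0x80 into
// [128, 255] instead of negative ints, which would be UB for isspace().
inline bool is_space_safe(char c)
{
    return std::isspace(static_cast<unsigned char>(c)) != 0;
}
```

With this wrapper, a high UTF-8 continuation byte such as `'\xC3'` is classified correctly instead of invoking undefined behavior.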

### Breaking changes
* Any `stitch_` prefixed fields in the `BsonDocument` returned from `app::User::custom_data()` are being renamed on the server to have a `baas_` prefix instead ([PR #7769](https://github.com/realm/realm-core/pull/7769)).
@@ -21,6 +26,7 @@
* Removed references to `stitch_` fields in access tokens in sync unit tests ([PR #7769](https://github.com/realm/realm-core/pull/7769)).
* Added back iOS simulator testing to evergreen after Jenkins went away ([PR #7758](https://github.com/realm/realm-core/pull/7758)).
* `realm-trawler -c` did not work on Realms using SyncClient history ([PR #7734](https://github.com/realm/realm-core/pull/7734)).
* `File::Map`'s move constructor and assignment operator left `m_fd` unchanged, which appears to have never actually resulted in problems with how it was used ([PR #7698](https://github.com/realm/realm-core/pull/7698)).
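The `File::Map` entry describes a classic move-semantics bug: a move constructor that forgets to transfer one member leaves the moved-from object still "owning" that resource, risking double-close or stale use. `File::Map` itself is not shown in this excerpt, so as a generic illustration of the bug class, here is a hypothetical descriptor-owning wrapper whose move operations transfer every member via `std::exchange`:

```cpp
#include <utility>

// Hypothetical descriptor-owning wrapper; -1 means "no descriptor held".
// The original bug: a move that copies other members but leaves m_fd in
// the source, so two objects believe they own the same descriptor.
struct MapLike {
    int m_fd = -1;

    MapLike() = default;
    explicit MapLike(int fd)
        : m_fd(fd)
    {
    }

    // Correct move: take ownership AND reset the source, so exactly one
    // object ever considers itself responsible for closing m_fd.
    MapLike(MapLike&& other) noexcept
        : m_fd(std::exchange(other.m_fd, -1))
    {
    }

    MapLike& operator=(MapLike&& other) noexcept
    {
        if (this != &other)
            m_fd = std::exchange(other.m_fd, -1);
        return *this;
    }
};
```

`std::exchange` makes the "transfer and reset" idiom a single expression, which is hard to get wrong member-by-member.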

----------------------------------------------

36 changes: 14 additions & 22 deletions src/realm/alloc.cpp
@@ -119,9 +119,7 @@ char* Allocator::translate_less_critical(RefTranslation* ref_translation_ptr, re
     RefTranslation& txl = ref_translation_ptr[idx];
     size_t offset = ref - get_section_base(idx);
     char* addr = txl.mapping_addr + offset;
-#if REALM_ENABLE_ENCRYPTION
-    realm::util::encryption_read_barrier(addr, NodeHeader::header_size, txl.encrypted_mapping, nullptr);
-#endif
+    util::encryption_read_barrier(addr, NodeHeader::header_size, txl.encrypted_mapping);
     auto size = NodeHeader::get_byte_size_from_header(addr);
     bool crosses_mapping = offset + size > (1 << section_shift);
     // Move the limit on use of the existing primary mapping.
@@ -135,27 +133,21 @@
     }
     if (REALM_LIKELY(!crosses_mapping)) {
         // Array fits inside primary mapping, no new mapping needed.
-#if REALM_ENABLE_ENCRYPTION
-        realm::util::encryption_read_barrier(addr, size, txl.encrypted_mapping, nullptr);
-#endif
+        util::encryption_read_barrier(addr, size, txl.encrypted_mapping);
         return addr;
     }
-    else {
-        // we need a cross-over mapping. If one is already established, use that.
-        auto xover_mapping_addr = txl.xover_mapping_addr.load(std::memory_order_acquire);
-        if (!xover_mapping_addr) {
-            // we need to establish a xover mapping - or wait for another thread to finish
-            // establishing one:
-            const_cast<Allocator*>(this)->get_or_add_xover_mapping(txl, idx, offset, size);
-            // reload (can be relaxed since the call above synchronizes on a mutex)
-            xover_mapping_addr = txl.xover_mapping_addr.load(std::memory_order_relaxed);
-        }
-        // array is now known to be inside the established xover mapping:
-        addr = xover_mapping_addr + (offset - txl.xover_mapping_base);
-#if REALM_ENABLE_ENCRYPTION
-        realm::util::encryption_read_barrier(addr, size, txl.xover_encrypted_mapping, nullptr);
-#endif
-        return addr;
-    }
+    // we need a cross-over mapping. If one is already established, use that.
+    auto xover_mapping_addr = txl.xover_mapping_addr.load(std::memory_order_acquire);
+    if (!xover_mapping_addr) {
+        // we need to establish a xover mapping - or wait for another thread to finish
+        // establishing one:
+        const_cast<Allocator*>(this)->get_or_add_xover_mapping(txl, idx, offset, size);
+        // reload (can be relaxed since the call above synchronizes on a mutex)
+        xover_mapping_addr = txl.xover_mapping_addr.load(std::memory_order_relaxed);
+    }
+    // array is now known to be inside the established xover mapping:
+    addr = xover_mapping_addr + (offset - txl.xover_mapping_base);
+    util::encryption_read_barrier(addr, size, txl.xover_encrypted_mapping);
+    return addr;
 }
 } // namespace realm
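The cross-over mapping lookup above is a get-or-create pattern: a lock-free acquire load on the fast path, and a mutex-protected slow path after which a relaxed reload suffices because the mutex already provides the synchronization. A standalone sketch of the same shape (names are illustrative, not Realm's):

```cpp
#include <atomic>
#include <mutex>

struct LazySlot {
    std::atomic<int*> ptr{nullptr};
    std::mutex init_mutex;
    int storage = 0;

    int* get_or_create()
    {
        // Fast path: acquire pairs with the release store below, so a
        // non-null result also publishes the pointee's initialization.
        int* p = ptr.load(std::memory_order_acquire);
        if (!p) {
            std::lock_guard<std::mutex> lock(init_mutex);
            // Re-check under the lock; another thread may have won the race.
            // Relaxed is enough here: the mutex already synchronized.
            p = ptr.load(std::memory_order_relaxed);
            if (!p) {
                storage = 42; // stand-in for "establish the mapping"
                ptr.store(&storage, std::memory_order_release);
                p = &storage;
            }
        }
        return p;
    }
};
```

The relaxed reload after `get_or_add_xover_mapping()` in the real code is justified the same way: the call synchronizes on a mutex, so no additional ordering is needed from the atomic itself.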
55 changes: 18 additions & 37 deletions src/realm/alloc.hpp
@@ -171,7 +171,7 @@ class Allocator {
     // into equal chunks.
     struct RefTranslation {
         char* mapping_addr;
-        uint64_t cookie;
+        uint64_t cookie = 0x1234567890;
         std::atomic<size_t> lowest_possible_xover_offset = 0;

         // member 'xover_mapping_addr' is used for memory synchronization of the fields
@@ -183,14 +183,12 @@
 #if REALM_ENABLE_ENCRYPTION
         util::EncryptedFileMapping* encrypted_mapping = nullptr;
         util::EncryptedFileMapping* xover_encrypted_mapping = nullptr;
+#else
+        static inline util::EncryptedFileMapping* const encrypted_mapping = nullptr;
+        static inline util::EncryptedFileMapping* const xover_encrypted_mapping = nullptr;
 #endif
-        explicit RefTranslation(char* addr)
+        explicit RefTranslation(char* addr = nullptr)
             : mapping_addr(addr)
-            , cookie(0x1234567890)
-        {
-        }
-        RefTranslation()
-            : RefTranslation(nullptr)
         {
         }
         ~RefTranslation()
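The hunk above collapses two constructors into one: the `cookie` initialization moves to a default member initializer, and a defaulted `char* addr = nullptr` argument makes the separate default constructor redundant. A reduced illustration of the same refactor:

```cpp
#include <cstdint>

// After the refactor: a default member initializer plus one constructor
// with a defaulted argument cover both the "empty" and "with address" cases.
struct Translation {
    char* mapping_addr;
    std::uint64_t cookie = 0x1234567890; // NSDMI replaces ", cookie(...)" in the ctor

    explicit Translation(char* addr = nullptr)
        : mapping_addr(addr)
    {
    }
};
```

Besides being shorter, this keeps the invariant (`cookie` always set) in exactly one place instead of in every constructor's initializer list.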
@@ -222,7 +220,7 @@
     };
     // This pointer may be changed concurrently with access, so make sure it is
     // atomic!
-    std::atomic<RefTranslation*> m_ref_translation_ptr;
+    std::atomic<RefTranslation*> m_ref_translation_ptr{nullptr};

     /// The specified size must be divisible by 8, and must not be
     /// zero.
@@ -252,7 +250,7 @@
     char* translate_critical(RefTranslation*, ref_type ref) const noexcept;
     char* translate_less_critical(RefTranslation*, ref_type ref) const noexcept;
     virtual void get_or_add_xover_mapping(RefTranslation&, size_t, size_t, size_t) = 0;
-    Allocator() noexcept;
+    Allocator() noexcept = default;
     size_t get_section_index(size_t pos) const noexcept;
     inline size_t get_section_base(size_t index) const noexcept;
@@ -271,11 +269,9 @@
     // used to detect if the allocator (and owning structure, e.g. Table)
     // is recycled. Mismatch on this counter will cause accesors
     // lower in the hierarchy to throw if access is attempted.
-    std::atomic<uint_fast64_t> m_content_versioning_counter;
-
-    std::atomic<uint_fast64_t> m_storage_versioning_counter;
-
-    std::atomic<uint_fast64_t> m_instance_versioning_counter;
+    std::atomic<uint_fast64_t> m_content_versioning_counter{0};
+    std::atomic<uint_fast64_t> m_storage_versioning_counter{0};
+    std::atomic<uint_fast64_t> m_instance_versioning_counter{0};

     inline uint_fast64_t get_storage_version(uint64_t instance_version)
     {
@@ -547,14 +543,6 @@ inline bool Allocator::is_read_only(ref_type ref) const noexcept
     return ref < m_baseline.load(std::memory_order_relaxed);
 }

-inline Allocator::Allocator() noexcept
-{
-    m_content_versioning_counter = 0;
-    m_storage_versioning_counter = 0;
-    m_instance_versioning_counter = 0;
-    m_ref_translation_ptr = nullptr;
-}
-
 // performance critical part of the translation process. Less critical code is in translate_less_critical.
 inline char* Allocator::translate_critical(RefTranslation* ref_translation_ptr, ref_type ref) const noexcept
 {
@@ -566,30 +554,23 @@ inline char* Allocator::translate_critical(RefTranslation* ref_translation_ptr,
         if (REALM_LIKELY(offset < lowest_possible_xover_offset)) {
             // the lowest possible xover offset may grow concurrently, but that will not affect this code path
             char* addr = txl.mapping_addr + offset;
-#if REALM_ENABLE_ENCRYPTION
-            realm::util::encryption_read_barrier(addr, NodeHeader::header_size, txl.encrypted_mapping,
-                                                 NodeHeader::get_byte_size_from_header);
-#endif
+            util::encryption_read_barrier(addr, NodeHeader::header_size, txl.encrypted_mapping);
+            size_t size = NodeHeader::get_byte_size_from_header(addr);
+            util::encryption_read_barrier(addr, size, txl.encrypted_mapping);
             return addr;
         }
-        else {
-            // the lowest possible xover offset may grow concurrently, but that will be handled inside the call
-            return translate_less_critical(ref_translation_ptr, ref);
-        }
+        // the lowest possible xover offset may grow concurrently, but that will be handled inside the call
+        return translate_less_critical(ref_translation_ptr, ref);
     }
     realm::util::terminate("Invalid ref translation entry", __FILE__, __LINE__, txl.cookie, 0x1234567890, ref, idx);
     return nullptr;
 }

 inline char* Allocator::translate(ref_type ref) const noexcept
 {
-    auto ref_translation_ptr = m_ref_translation_ptr.load(std::memory_order_acquire);
-    if (REALM_LIKELY(ref_translation_ptr)) {
-        return translate_critical(ref_translation_ptr, ref);
-    }
-    else {
-        return do_translate(ref);
-    }
+    if (auto ptr = m_ref_translation_ptr.load(std::memory_order_acquire); REALM_LIKELY(ptr)) {
+        return translate_critical(ptr, ref);
+    }
+    return do_translate(ref);
 }
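The rewritten `translate()` uses a C++17 if-statement with initializer so the loaded pointer is scoped to the branch and the single atomic load feeds both the test and the call. A reduced, self-contained version of the same shape (function names are illustrative):

```cpp
#include <atomic>

inline int fast_path(int v)
{
    return v * 2;
}

inline int slow_path()
{
    return -1;
}

inline int dispatch(const std::atomic<int*>& source)
{
    // C++17 if-with-initializer: 'p' exists only inside the if/else,
    // and one acquire load is reused for both the null test and the call.
    if (auto p = source.load(std::memory_order_acquire); p != nullptr) {
        return fast_path(*p);
    }
    return slow_path();
}
```

Dropping the `else` after a returning branch, as the diff also does, keeps the fallback at the function's top indentation level without changing behavior.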


