LakeSoul is a cloud-native Lakehouse framework that supports scalable metadata management, ACID transactions, efficient and flexible upsert operations, schema evolution, and unified streaming & batch processing.
LakeSoul supports multiple compute engines for reading and writing lakehouse tables, including Spark, Flink, Presto, and PyTorch, and covers batch, streaming, MPP, and AI workloads. LakeSoul supports storage systems such as HDFS and S3.
LakeSoul was originally created by DMetaSoul and was donated to the Linux Foundation AI & Data as a sandbox project in May 2023.
LakeSoul implements incremental upserts at both row and column level and allows concurrent updates.
LakeSoul uses an LSM-tree-like structure to support updates on hash-partitioned tables with primary keys, achieving very high write throughput while providing optimized merge-on-read performance (refer to Performance Benchmarks). LakeSoul scales metadata management and achieves ACID control by using PostgreSQL.
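The LSM-tree-style write path can be pictured as a base snapshot plus small, ordered upsert delta files that readers merge by primary key at query time. A minimal, purely illustrative sketch (the dict-based structures and names here are assumptions for clarity, not LakeSoul's actual on-disk format):

```python
# Illustrative merge-on-read: a base snapshot keyed by primary key,
# plus ordered upsert delta files; later deltas win per column.

def merge_on_read(base, deltas):
    """base:   dict mapping primary key -> row (dict of column values)
    deltas: list of dicts, oldest first; each maps primary key -> partial row
    """
    merged = {pk: dict(row) for pk, row in base.items()}
    for delta in deltas:              # apply deltas in commit order
        for pk, partial in delta.items():
            # upsert: update existing columns, or insert a new row
            merged.setdefault(pk, {}).update(partial)
    return merged

base = {1: {"id": 1, "name": "a", "score": 10},
        2: {"id": 2, "name": "b", "score": 20}}
deltas = [{2: {"score": 25}},                        # partial column update
          {3: {"id": 3, "name": "c", "score": 30}}]  # newly inserted row
result = merge_on_read(base, deltas)
```

Because upserts only append small deltas instead of rewriting the base file, writes stay fast; the merge cost is deferred to reads and periodically eliminated by compaction.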
LakeSoul uses Rust to implement its native metadata and IO layers, and provides C/Java/Python interfaces to connect with multiple computing frameworks across big data and AI.
LakeSoul supports concurrent batch and streaming reads and writes. Both reads and writes support CDC semantics, and together with automatic schema evolution and an exactly-once guarantee, this makes constructing real-time data warehouses easy.
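CDC semantics mean that a changelog stream of insert/update/delete events, applied in order and exactly once, reproduces the source table. A minimal sketch of that idea (the operation names and keyed-dict state are illustrative assumptions; Flink and LakeSoul represent change kinds internally, e.g. via RowKind):

```python
# Illustrative CDC apply: replay a changelog onto a table keyed by primary key.

def apply_cdc(events):
    table = {}
    for op, row in events:
        pk = row["id"]
        if op in ("insert", "update"):
            table[pk] = row           # upsert by primary key
        elif op == "delete":
            table.pop(pk, None)       # delete is a no-op if the key is absent
    return table

events = [("insert", {"id": 1, "v": "a"}),
          ("insert", {"id": 2, "v": "b"}),
          ("update", {"id": 1, "v": "a2"}),
          ("delete", {"id": 2, "v": "b"})]
state = apply_cdc(events)
```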
LakeSoul supports multiple workspaces and RBAC. LakeSoul uses Postgres's RBAC and row-level security policies to implement permission isolation for metadata. Together with Hadoop users and groups, physical data isolation can be achieved as well. LakeSoul's permission isolation is effective for SQL/Java/Python jobs.
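The Postgres mechanism this builds on can be sketched as follows; the table, column, role, and setting names below are hypothetical placeholders, not LakeSoul's actual metadata schema:

```sql
-- Illustrative sketch: restrict which metadata rows a workspace role sees.
ALTER TABLE table_info ENABLE ROW LEVEL SECURITY;

-- Hypothetical policy: a session only sees rows belonging to its workspace.
CREATE POLICY workspace_isolation ON table_info
    USING (workspace = current_setting('app.current_workspace'));

GRANT SELECT ON table_info TO workspace_role;
```

With such policies in place, any engine that reads metadata through Postgres inherits the isolation automatically, which is why it applies uniformly to SQL, Java, and Python jobs.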
LakeSoul supports automatic disaggregated compaction, automatic table lifecycle maintenance, and automatic cleanup of redundant data, reducing operational costs and improving usability.
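Redundant-data cleanup can be thought of as removing files that a later compaction has superseded, once a retention window has passed (so in-flight snapshot reads are not broken). A small sketch under those assumptions; the file-tuple layout and retention rule are illustrative, not LakeSoul's actual cleanup policy:

```python
# Illustrative cleanup: a superseded file is safe to delete only after
# the retention window has expired.
import datetime

def expired_files(files, now, retention):
    """files: list of (path, committed_at, superseded) tuples."""
    cutoff = now - retention
    return [path for path, committed_at, superseded in files
            if superseded and committed_at < cutoff]

now = datetime.datetime(2024, 1, 10)
files = [("base-0.parquet", datetime.datetime(2024, 1, 1), True),   # old base
         ("delta-1.parquet", datetime.datetime(2024, 1, 9), True),  # recent, still retained
         ("base-1.parquet", datetime.datetime(2024, 1, 9), False)]  # current base
stale = expired_files(files, now, datetime.timedelta(days=7))
```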
For a more detailed feature list, please refer to our doc page: Documentations
Follow the Quick Start to quickly set up a test environment.
Please find tutorials on the doc site:
- Check out Examples of Python Data Processing and AI Model Training on LakeSoul to see how LakeSoul connects AI to the Lakehouse to build a unified, modern data infrastructure.
- Check out the LakeSoul Flink CDC Whole Database Synchronization Tutorial on how to sync an entire MySQL database into LakeSoul in real time, with automatic table creation, automatic DDL sync, and an exactly-once guarantee.
- Check out Flink SQL Usage on using Flink SQL to read or write LakeSoul in both batch and streaming mode, with support for Flink Changelog Stream semantics and row-level upsert and delete.
- Check out the Multi Stream Merge and Build Wide Table Tutorial on how to merge multiple streams with the same primary key (but different non-key columns) concurrently, without joins.
- Check out the Upsert Data and Merge UDF Tutorial on how to upsert data and use Merge UDFs to customize merge logic.
- Check out Snapshot API Usage on how to do snapshot reads (time travel), snapshot rollback, and cleanup.
- Check out the Incremental Query Tutorial on how to do incremental queries in Spark in batch or stream mode.
Please find usage documentation on the doc site: Usage Doc
- Data Science and AI
- Native Python Reader (without PySpark)
- PyTorch Dataset and distributed training
- Meta Management (#23)
- Multi-Level Partitioning: multiple range partitions and at most one hash partition
- Concurrent write with auto conflict resolution
- MVCC with read isolation
- Write transaction (two-stage commit) through Postgres Transaction
- Schema Evolution: Column add/delete supported
- Table operations
- LSM-tree-style upsert for hash-partitioned tables
- Merge on read for hash partitions with upsert delta files
- Copy on write update for non-hash-partitioned tables
- Automatic Disaggregated Compaction Service
- Data Warehousing
- Spark Integration
- Table/Dataframe API
- SQL support with catalog except upsert
- Query optimization
- Shuffle/Join elimination for operations on primary key
- Merge UDF (Merge operator)
- Merge Into SQL support
- Merge Into SQL with match on Primary Key (Merge on read)
- Merge Into SQL with match on non-pk
- Merge Into SQL with match condition and complex expression (Merge on read when match on PK) (depends on #66)
- Flink Integration and CDC Ingestion (#57)
- Table API
- Batch/Stream Sink
- Batch/Stream source
- Stream Source/Sink for ChangeLog Stream Semantics
- Exactly Once Source and Sink
- Flink CDC
- Auto Schema Change (DDL) Sync
- Auto Table Creation (depends on #78)
- Support sink multiple source tables with different schemas (#84)
- Table API
- Hive Integration
- Export to Hive partition after compaction
- Apache Kyuubi (Hive JDBC) Integration
- Realtime Data Warehousing
- CDC ingestion
- Time Travel (Snapshot read)
- Snapshot rollback
- Automatic global compaction service
- MPP Engine Integration (depends on #66)
- Presto
- Trino
- Cloud and Native IO (#66)
- Object storage IO optimization
- Native merge on read
- Multi-layer storage classes support with data tiering
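Several of the features above (MVCC read isolation, time travel/snapshot read, snapshot rollback) rest on the same idea: each commit carries a monotonically increasing timestamp, and a snapshot at time T sees exactly the commits with timestamp ≤ T. A minimal sketch of that semantics, with illustrative data structures (LakeSoul keeps the real commit metadata in PostgreSQL):

```python
# Illustrative MVCC snapshot read: replay commits up to the requested
# timestamp; later commits are invisible to this snapshot.

def snapshot_read(commits, as_of):
    """commits: list of (ts, {pk: row}) pairs sorted by ts ascending."""
    table = {}
    for ts, delta in commits:
        if ts > as_of:
            break                 # commits after as_of are not visible
        for pk, row in delta.items():
            table[pk] = row
    return table

commits = [(100, {1: "v1"}),
           (200, {1: "v2", 2: "w1"}),
           (300, {2: "w2"})]
snap = snapshot_read(commits, as_of=200)
```

Rollback to a snapshot is the same operation viewed from the write side: the table is reset to the state visible at the chosen timestamp.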
Please feel free to open an issue or discussion if you have any questions.
Join our Discord server for discussions.
Email us at [email protected].
LakeSoul is open sourced under the Apache License v2.0.