2 comments

  • arthi1899 5 hours ago

    Nice approach with BLAKE3 for hashing. I've been working on file tooling too (offline converter with Rust/Tauri) and the performance difference with modern hashing algorithms is significant. How are you handling large file sets — streaming or loading into memory?

    • pmig 4 hours ago

      Thanks. If you are referring to how we handle large OCI images for our OCI-compatible container registry: we create a temporary volume and stream/cache the layers there before streaming them on to S3-compatible storage. This avoids keeping large layers in memory, which previously led to memory exhaustion.
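
      A minimal sketch of that staging pattern (not the actual distr code; `stageLayer` and the file naming are illustrative assumptions): stream the incoming blob to a temp file with a fixed-size copy buffer, then rewind it so the S3 uploader can consume it as a plain `io.Reader`.

      ```go
      package main

      import (
      	"fmt"
      	"io"
      	"os"
      	"strings"
      )

      // stageLayer streams a layer from src into a temporary file so the
      // whole blob never sits in memory at once. The returned file is
      // rewound and ready to be re-read (e.g. by an S3-compatible
      // uploader); the caller closes and removes it. Hypothetical helper,
      // not the registry's real API.
      func stageLayer(src io.Reader) (*os.File, int64, error) {
      	tmp, err := os.CreateTemp("", "layer-*")
      	if err != nil {
      		return nil, 0, err
      	}
      	// io.Copy uses a fixed-size buffer internally, so memory use
      	// stays constant regardless of layer size.
      	n, err := io.Copy(tmp, src)
      	if err == nil {
      		// Rewind so the next consumer reads from the start.
      		_, err = tmp.Seek(0, io.SeekStart)
      	}
      	if err != nil {
      		tmp.Close()
      		os.Remove(tmp.Name())
      		return nil, 0, err
      	}
      	return tmp, n, nil
      }

      func main() {
      	// Stand-in for a layer upload body.
      	layer := strings.NewReader("example layer bytes")
      	tmp, size, err := stageLayer(layer)
      	if err != nil {
      		panic(err)
      	}
      	defer os.Remove(tmp.Name())
      	defer tmp.Close()
      	fmt.Println(size) // prints 19
      }
      ```

      The same shape also lets you hash the layer digest during the copy (via an `io.TeeReader`) without a second pass over the data.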

      If you want to dig even deeper, this specific implementation was done in this PR: https://github.com/distr-sh/distr/pull/1478