
Storage System

  • Tools
  • General
  • Storage
  • Cloud
  • Self Hosted
wleci
2 May, 2026, 08:01

Hey guys, quick question regarding massive storage scaling. I’m working in digital forensics and I’m constantly dealing with huge binary disk images, usually between 1TB and 20TB per single file. Because the data is super sensitive, everything has to stay self-hosted. Does anyone have experience hooking up something like MinIO (S3) to Appwrite or a custom Claude API setup to handle files of this size? I'm specifically wondering if Appwrite's storage layer will choke on multi-terabyte objects, and if there's a way to let an LLM index/access that local S3 data without the traditional upload process. Would love some advice on how to architect this so it actually scales. Thanks!
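
For reference, the kind of ingest path I'm picturing is a multipart upload straight to MinIO over its S3 API. Rough sketch below using boto3 (just one possible SDK choice; endpoint, credentials, bucket, and paths are placeholders). The part size has to be large because the S3 API caps a multipart upload at 10,000 parts, and a single S3 object tops out around 5 TiB, which is part of why I'm nervous about the 20TB images:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Self-hosted MinIO speaks the S3 API, so boto3 can talk to it directly.
# Endpoint, credentials, bucket, and file paths are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.internal:9000",
    aws_access_key_id="FORENSICS_ACCESS_KEY",
    aws_secret_access_key="FORENSICS_SECRET_KEY",
)

# Multipart transfer settings: S3 allows at most 10,000 parts per object,
# so the part size has to grow with the image size
# (512 MiB parts reach roughly the 5 TiB per-object ceiling).
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MiB
    multipart_chunksize=512 * 1024 * 1024,  # 512 MiB per part
    max_concurrency=8,                      # upload parts in parallel
    use_threads=True,
)

s3.upload_file(
    Filename="/evidence/case-042/disk.img",
    Bucket="forensic-images",
    Key="case-042/disk.img",
    Config=config,
)
```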

TL;DR
Developer working in digital forensics needs advice on setting up a storage system for handling massive binary disk images between 1TB and 20TB per file. Specifically asking about integrating MinIO (S3) with Appwrite or a custom Claude API setup. Looking for insights on whether Appwrite's storage layer can handle multi-terabyte objects and how to let an LLM index and access local S3 data without a traditional upload step. Solution: Consider distributing the load across multiple servers and exploring MinIO's distributed mode for scalability.
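
On the access side, one hedged sketch (again boto3 against a placeholder self-hosted MinIO endpoint, nothing Appwrite- or Claude-specific): rather than routing whole images through an application server or a model API, hand out short-lived presigned URLs and read only the byte ranges an indexing step actually needs, then pass just the extracted slices or text to the LLM. Bucket and key names are placeholders:

```python
import boto3

# Placeholder endpoint, credentials, bucket, and object key.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.internal:9000",
    aws_access_key_id="FORENSICS_ACCESS_KEY",
    aws_secret_access_key="FORENSICS_SECRET_KEY",
)

# Short-lived presigned URL: lets a downstream tool fetch the object directly
# from MinIO without streaming terabytes through an application server.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "forensic-images", "Key": "case-042/disk.img"},
    ExpiresIn=3600,  # valid for one hour
)

# Ranged read: pull only the slice an indexing pass needs (e.g. a partition
# table or a carved region) instead of downloading the whole disk image.
chunk = s3.get_object(
    Bucket="forensic-images",
    Key="case-042/disk.img",
    Range="bytes=0-1048575",  # first 1 MiB
)["Body"].read()

print(url)
print(len(chunk))
```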