Hey guys, quick question about massive storage scaling. I work in digital forensics and constantly deal with huge binary disk images, usually between 1TB and 20TB per file. Because the data is highly sensitive, everything has to stay self-hosted. Does anyone have experience hooking up something like MinIO (S3) to Appwrite, or a custom Claude API setup, to handle files of this size? Specifically, I'm wondering whether Appwrite's storage layer will choke on multi-terabyte objects, and whether there's a way to let an LLM index/access that local S3 data without the traditional upload process. Would love some advice on how to architect this so it actually scales. Thanks!
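
For context, this is roughly the "traditional upload" flow I'd like to avoid (or at least confirm will hold up at this scale): plain multipart uploads into a self-hosted MinIO bucket with boto3. The endpoint, keys, bucket, and paths below are placeholders, not my real setup.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder self-hosted MinIO endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.internal:9000",
    aws_access_key_id="forensics-access-key",
    aws_secret_access_key="forensics-secret-key",
)

# Single-request PUTs won't work for multi-terabyte images, so this streams
# the file as a multipart upload in fixed-size chunks instead of loading it
# into memory. The S3 API allows at most 10,000 parts per object, so a 20TB
# image needs parts of at least ~2GB each.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,        # switch to multipart above 64 MiB
    multipart_chunksize=2 * 1024 * 1024 * 1024,  # 2 GiB parts
    max_concurrency=8,
    use_threads=True,
)

s3.upload_file(
    Filename="/evidence/case-042/disk.img",  # placeholder path
    Bucket="disk-images",
    Key="case-042/disk.img",
    Config=config,
)
```

One thing I'm already unsure about even before Appwrite enters the picture: AWS's S3 API caps a single object at about 5TB, and while MinIO documents a higher per-object limit, I don't know if a 20TB image can realistically live as one object, or whether I should be splitting images and tracking the pieces myself.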