Hi, we are seeing a reproducible failure on large uploads with Appwrite 1.8.0 using S3-compatible storage via RustFS.
An upload of about 10.7 GB consistently fails around the 1651st 5 MB chunk with a 500 error. The logs also show: "Invalid document structure: Attribute "metadata" has invalid type. Value must be a valid string and no longer than 75000 chars."
Our current understanding is that Appwrite stores multipart upload state in the internal files.metadata field, and that this field becomes too large when many chunks are uploaded. We also checked the current state and found:
- Appwrite 1.9.0 is out, but we could not find a clear fix for this in the release notes.
- In the 1.9.0 source, files.metadata still seems to be limited to 75000 chars.
- The 1.9.0 migration even appears to set bucket metadata to 65534.

So our question is: is this understanding actually correct, or are we missing something? If the upload logic is unchanged, it looks like this issue would happen even earlier in 1.9.0, not later.
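To put numbers behind this suspicion, here is a quick back-of-the-envelope sketch. The per-chunk metadata growth is inferred from the observed failure point, not read from the Appwrite source, so treat the figures as estimates:

```python
# Rough arithmetic behind the suspicion that files.metadata fills up
# at a fixed rate per uploaded chunk. The growth rate is INFERRED from
# the observed failure at chunk 1651, not taken from the Appwrite code.

GB = 1024 ** 3
MB = 1024 ** 2

file_size = int(10.7 * GB)   # the ~10.7 GB upload that fails
chunk_size = 5 * MB          # 5 MB chunk size used by the upload
limit_180 = 75_000           # files.metadata limit per the 1.8.0 error message
limit_190 = 65_534           # limit the 1.9.0 migration appears to set

# Total number of chunks the upload needs (ceiling division)
total_chunks = -(-file_size // chunk_size)
print(total_chunks)                  # ~2192 chunks in total

# If metadata hit 75000 chars at chunk 1651, each chunk adds roughly:
per_chunk = limit_180 / 1651
print(round(per_chunk, 1))           # ~45.4 bytes of metadata per chunk

# At the same growth rate, a 65534-char limit would be hit around:
print(int(limit_190 / per_chunk))    # chunk ~1442, i.e. earlier than 1651
```

So under these assumptions the upload fails roughly three quarters of the way through, and a lower limit in 1.9.0 would only move the failure point forward.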
Is there an official fix or recommended workaround for this?
Thanks and best regards, Matthias