If those are document IDs, you could iterate over the new data and issue a get request for each one. Get Document API calls are cached, so they should be faster than List Documents from a server-side processing standpoint.
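A rough sketch of that idea with the node-appwrite SDK, assuming the incoming values really are document IDs (the endpoint, project, database, and collection IDs below are placeholders):

```ts
import { Client, Databases } from "node-appwrite";

const client = new Client()
  .setEndpoint("https://cloud.appwrite.io/v1") // placeholder endpoint
  .setProject("<PROJECT_ID>")
  .setKey("<API_KEY>");
const databases = new Databases(client);

// For each incoming ID, try a single Get Document call; a failed lookup
// means that paper no longer exists in the collection.
async function findMissing(ids: string[]): Promise<string[]> {
  const missing: string[] = [];
  for (const id of ids) {
    try {
      await databases.getDocument("<DATABASE_ID>", "<COLLECTION_ID>", id);
    } catch {
      // Most likely a 404: the document has been removed.
      missing.push(id);
    }
  }
  return missing;
}
```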
I see. Unfortunately, those are not document IDs but custom IDs, which means I can only use listDocuments. I will take note of this. I may consider converting the custom IDs into document IDs. Thanks for your input @Steven
What if you enforced "unique" on an index and just failed the createDocument request?
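If insertion ever becomes part of the flow, a minimal sketch of that suggestion, assuming a unique index already exists on a hypothetical customId attribute (all IDs and names are placeholders):

```ts
import { Client, Databases, ID } from "node-appwrite";

const client = new Client()
  .setEndpoint("https://cloud.appwrite.io/v1")
  .setProject("<PROJECT_ID>")
  .setKey("<API_KEY>");
const databases = new Databases(client);

try {
  // With a unique index on "customId", creating a duplicate fails instead
  // of silently adding a second record.
  await databases.createDocument("<DATABASE_ID>", "<COLLECTION_ID>", ID.unique(), {
    customId: "paper-123",
  });
} catch (err) {
  // A conflict error here signals that the record already exists.
  console.log("Duplicate customId, document not created:", err);
}
```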
I wonder if the performance would be better with fewer calls 🤣
Thanks @VincentGe. The thing is, I'm not trying to insert docs, but merely verifying whether any of my existing papers have been removed.
🤔 Interesting. Large-volume data handling is really something we haven't done a ton with. I wonder if we can batch this operation with GraphQL
@Steven From what I remember, one of the cool things about our new GraphQL API is that we can batch many calls at once 👀
Yeah, but in this case there's a bit too much data to batch into one GraphQL request
🤔
Uh… just a random thought, @Said H: why don’t you combine your Appwrite instance with something like Meilisearch? It’s optimized for searching and very powerful at that… It could help lessen the burden on Appwrite?
Hey @Olivier Pavie, thank you for the suggestion. I will take a look at Meilisearch 🙂
I just went through the Meilisearch tutorial, @Olivier Pavie, and I noticed that we feed our JSON data into the system. That gives me an idea that we could do the same in Appwrite, yeah?
- We can create a String attribute, set the length to the max, and dump our JSON object string containing >5k records inside. That way we only need to make one query call to Appwrite, and we can do the record comparison afterward.
- Or we can create a String attribute and treat it as an array, inserting the record 5k times. I'm not sure whether there is any limit on the number of array elements we can insert per doc. Would you have any idea @Steven or @VincentGe ? (A rough sketch of both attribute setups follows this list.)
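For reference, a hedged sketch of how both options could be set up with the node-appwrite SDK; the attribute keys, sizes, and IDs below are made up for illustration, and the actual size and array limits should be checked against the Appwrite docs:

```ts
import { Client, Databases } from "node-appwrite";

const client = new Client()
  .setEndpoint("https://cloud.appwrite.io/v1")
  .setProject("<PROJECT_ID>")
  .setKey("<API_KEY>");
const databases = new Databases(client);

// Option 1: one large string attribute holding the whole JSON dump.
await databases.createStringAttribute(
  "<DATABASE_ID>",
  "<COLLECTION_ID>",
  "recordsJson", // hypothetical attribute key
  1000000,       // size; pick something comfortably above your payload
  false          // not required
);

// Option 2: a string attribute flagged as an array, one element per record ID.
await databases.createStringAttribute(
  "<DATABASE_ID>",
  "<COLLECTION_ID>",
  "recordIds",   // hypothetical attribute key
  64,            // max length of each element
  false,         // not required
  undefined,     // no default value
  true           // array attribute
);
```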
It depends on what you want to do. I would assume Meilisearch's full-text search is a little smarter than MariaDB's
It has smarter search, but I won't need that. I just need to be able to read the records
I am inclined to create a string array attribute to handle this large data, but I worry there is a size limit on that
I can do one call to the doc, then process the comparison in the code.
It's a fast comparison, since each record will have its own unique ID
So I will end up having one doc containing 5k array records,
but I only need to make one call to this one doc
I can give it a try, if you are unsure whether there is a limit on the max array length 🙂
It might be best to use a very large string attribute and store a JSON string that is essentially a lookup table: {"id1": 1, "id2": 1}
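A minimal sketch of that lookup-table comparison, assuming the table is stored as a JSON string in a hypothetical recordsJson attribute on a single document (all IDs are placeholders):

```ts
import { Client, Databases } from "node-appwrite";

const client = new Client()
  .setEndpoint("https://cloud.appwrite.io/v1")
  .setProject("<PROJECT_ID>")
  .setKey("<API_KEY>");
const databases = new Databases(client);

// One Get Document call fetches the stored lookup table, e.g. {"id1": 1, "id2": 1, ...}.
// Any ID present in the stored table but missing from the fresh data was removed.
async function findRemoved(currentIds: string[]): Promise<string[]> {
  const doc = await databases.getDocument(
    "<DATABASE_ID>",
    "<COLLECTION_ID>",
    "<LOOKUP_DOC_ID>"
  );
  const lookup: Record<string, number> = JSON.parse(doc.recordsJson);

  const fresh = new Set(currentIds);
  return Object.keys(lookup).filter((id) => !fresh.has(id));
}
```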
I see. This is fine too
I can use this, and the comparison will work in my case as well
Alright, thanks everyone 🙂
[SOLVED] Compare large data