I would like to understand the solution for this production use case. The project will be set up like this: Appwrite with the Database and Auth modules.
The final production Docker environment is deployed as a Docker Swarm service, with the volumes mounted as sshfs volumes.
MariaDB will run on an external server; Redis and Telegraf will each be scaled to a single replica, but will be allocated higher resource limits.
I have two types of functions. Event-driven: these functions are triggered by events, and I don't really need high availability for them; for now, setting the replicas up to 3 seems to do the job.
Direct execution: these functions are used mostly between the app and the database and will be executed at up to 1000 req/sec. From what I can see now, the appwrite
main container running on 4 GB RAM and 2 vCPUs is able to process only about 5-10 functions at once. If the rest of the functions were queued, that would at least be a half-solution, but for now the remaining functions simply time out in their internal logic at the point where they connect to the database: `An internal curl error has occurred within the executor! Error Msg: Operation timed out`
So I have two questions:
- Is there any way (even if it won't be 100% accurate) to calculate how much resource is needed per request? ( resource = request * n / second )
- Is scaling and adding replicas to the appwrite main container the right way to allow more function requests per second and more database accesses per second?
Thanks a lot in advance
@Meldiron would you be able to chime in on this?
Thanks
A bit more information about the function: I tried it once with no logic at all, just returning JSON. In that case I was able to execute 1000 requests in 28 seconds (~35 req/sec).
Then, when I added the database data fetching, 10 requests took 5 seconds; the 11th execution hit the database timeout, and from there the server needed to rest for 1-5 minutes before it could execute any functions again.
I'm attaching the logs of these two containers (none of the others logged anything unusual). Appwrite-executor:
[Error] Type: Exception
[Error] Message: An internal curl error has occurred within the executor! Error Msg: Operation timed out
[Error] File: /usr/src/code/app/executor.php
[Error] Line: 544
Appwrite:
An internal curl error has occurred within the executor! Error Msg: Operation timed out
Thanks again
- Regarding Appwrite Functions, benchmarks can give pretty accurate results. You can scale Appwrite up with Docker Swarm and have a server running only the executor and the functions worker. Give this server limited resources, then run benchmarks with a tool such as https://k6.io/ and see how many sync executions it can handle reliably. With some arithmetic, you can then calculate approximately how much resources your server will need. Make sure to add some headroom to be safe.
- To allow more overall requests, you need to scale the appwrite container. Alongside that, you need to scale other containers based on which action you want to be able to handle more of. Some examples:
  - Functions: executor + worker-functions
  - Webhooks: appwrite, worker-webhooks
  - Messages: appwrite, worker-mails, worker-messaging
  - Database queries: appwrite, Redis, MariaDB
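The "some arithmetic" step can be sketched as follows. This is a rough illustration only, assuming throughput scales linearly with resources (in practice it doesn't, which is why an explicit safety margin is included); the numbers plugged in are the ~35 req/sec on 2 vCPU / 4 GB RAM reported earlier in this thread:

```python
# Rough capacity estimate from a benchmark result.
# Assumption (simplistic): throughput scales linearly with resources.
# Real scaling is sub-linear, hence the explicit safety margin.

def estimate_resources(bench_rps, bench_vcpu, bench_ram_gb,
                       target_rps, safety_margin=1.5):
    """Scale the benchmarked server's resources up to a target request rate."""
    factor = (target_rps / bench_rps) * safety_margin
    return {
        "vcpu": bench_vcpu * factor,
        "ram_gb": bench_ram_gb * factor,
    }

# Numbers from this thread: ~35 req/sec observed on 2 vCPU / 4 GB RAM,
# with a target of 1000 req/sec.
needed = estimate_resources(bench_rps=35, bench_vcpu=2, bench_ram_gb=4,
                            target_rps=1000)
print(f"~{needed['vcpu']:.0f} vCPU, ~{needed['ram_gb']:.0f} GB RAM")
# → ~86 vCPU, ~171 GB RAM (with the 1.5x safety margin)
```

These totals would be spread across Swarm nodes rather than provisioned on one machine, and a real benchmark with k6 should replace the 35 req/sec figure before trusting the result.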
`Operation timed out` usually means your runtime with your code got frozen. It is possible that one execution is freezing the others, depending on the runtime and the code you are running.
You can scale the executor horizontally. Having multiple executors (on separate servers) will spawn multiple containers.
Hey @Meldiron, thanks for your detailed answer.
- That sounds like a good approach; I will try it. P.S. thanks for making me familiar with the k6 benchmark tool; I've used a variety of other benchmarking tools and this one looks quite cool.
- Great.
And for the last thing, I just want to make sure: for example, to have at most 10 instances of the same function container, I would need to write my YAML file something like this?
    appwrite-executor:
      image: appwrite/appwrite:1.2.1
      deploy:
        mode: replicated
        replicas: 10
        placement:
          max_replicas_per_node: 1
Does that look right?
But does that mean it would also be better to do the same for the appwrite main image?
Like this?
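Presumably a fragment like the following is what's meant here — a hypothetical sketch mirroring the executor snippet above, with a placeholder replica count:

```yaml
# Hypothetical fragment for the main appwrite service; the replica
# count is a placeholder, and the rest of the service definition
# (ports, env, networks) is omitted.
appwrite:
  image: appwrite/appwrite:1.2.1
  deploy:
    mode: replicated
    replicas: 3
    placement:
      max_replicas_per_node: 1
```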
Also, it doesn't look like we're setting a number of workers in the executor. Should we be?
Which workers are you referring to?
HTTP server workers. It's a Swoole thing.
Got it
I believe the new executor will use a coroutine-style HTTP server, so there will be no need for workers. Currently it might be unstable, since our framework is not yet stable in coroutine style. I'm sure we will sort it out before Appwrite 1.4 and release the new executor.
For now, it should not be a problem. The existing solution might just be slower than the new one will be.
Looks correct, yes. Good max_replicas_per_node rule.
Regarding the functions worker: it handles async=true function executions (as well as event-triggered ones). Those can also be scaled, and it's fine to have multiple of them on the same machine.
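Scaling the functions worker in the same stack file could look like this hypothetical fragment (the service name assumes the standard Appwrite 1.2 compose file; note there is no max_replicas_per_node constraint here, since multiple workers per machine are fine):

```yaml
# Hypothetical fragment; replica count is a placeholder and the rest
# of the service definition (entrypoint, env, networks) is omitted.
appwrite-worker-functions:
  image: appwrite/appwrite:1.2.1
  deploy:
    mode: replicated
    replicas: 4   # several workers may share one machine
```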
Thanks
[Solved] - Requests per second