
I just read on the Appwrite Twitter account that you're cooking up a new runtime designed for machine learning tasks. Because Appwrite is scalable with Docker Swarm, does that mean that if an ML task is given to the swarm, it is automatically distributed across the swarm nodes to speed up the learning process? Technically, would this mean we could easily scale ML tasks across multiple server machines? Also, would the ML runtime support distributed training of large language models like LLaMA?

Hi - The runtime will basically be a Python runtime with all the system libraries needed for machine learning, so it becomes easier for ML devs to work with. Of course, it will be open to community feedback so we can see how to improve it and make it better. That said, no: in the first iteration of the release, the ML runtime will not support distributed training of large language models like LLaMA, but it may be added in later iterations if community feedback suggests it.
It will scale exactly the same as any other Appwrite Function, BUT it will have proper access to the host machine's GPU, which is almost a necessity for machine learning.
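
For a concrete picture, here's a minimal sketch of what an ML-flavoured Function could look like. This assumes the runtime follows the standard Appwrite Python Function signature (`def main(context)` with `context.res.json(...)`) and ships with, or lets you install, PyTorch; the matrix-multiply workload is just a hypothetical placeholder to exercise the device:

```python
import torch


# Standard Appwrite Python Function entrypoint (open-runtimes style).
def main(context):
    # If the host GPU is exposed to the runtime, PyTorch will see it via CUDA.
    has_gpu = torch.cuda.is_available()
    device = "cuda" if has_gpu else "cpu"

    # Hypothetical tiny workload, just to confirm compute runs on the
    # selected device; a real Function would load a model and run inference.
    x = torch.rand(1024, 1024, device=device)
    y = x @ x

    return context.res.json({
        "gpu_available": has_gpu,
        "device": device,
        "checksum": float(y.sum()),
    })
```

Since it scales like any other Function, each execution would land on a single node; that's why GPU access matters more here than cross-node distribution.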

Thank you Jyoti! I would strongly support the ability to distribute LLM training over multiple Appwrite swarm nodes.

Thanks for the feedback! We will definitely consider it ❤️
