I just read on the Appwrite Twitter account that you're cooking up a new runtime designed for machine learning tasks. Since Appwrite scales with Docker Swarm, does that mean that if an ML task is given to the swarm, it's automatically distributed across the swarm nodes to speed up training? In practical terms, would we be able to easily scale ML tasks across multiple server machines? Also, would the ML runtime support distributed training of large language models like LLaMA?
Hi - The runtime will basically be a Python runtime with all the system libraries needed for machine learning, so it becomes easier for ML devs to work with. Of course, it will be open to community feedback so we can see how we want to improve it and make it better. That said, no: in the first iteration of the release, the ML runtime will not support distributed training of large language models like LLaMA, but it may be added in later iterations if community feedback suggests it.
It will scale exactly the same as any other Appwrite Function, BUT it will have proper access to the host machine's GPU, which is almost a necessity for machine learning.
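For illustration, here's a minimal sketch of what checking GPU access inside such a function could look like. This assumes the standard Appwrite Python Function entry point (`def main(context)`) and that PyTorch ships in the runtime image; the actual preinstalled libraries weren't stated above, so treat both as assumptions.

```python
import torch  # assumption: PyTorch is preinstalled in the ML runtime image


def main(context):
    # Check whether this function execution can see a GPU on its host node.
    gpu_available = torch.cuda.is_available()

    # Return the result as JSON using the standard Appwrite response helper.
    return context.res.json({
        "gpu_available": gpu_available,
        "device_count": torch.cuda.device_count() if gpu_available else 0,
    })
```

Note this only probes the single node the execution lands on; it doesn't imply any coordination across swarm nodes, which matches the point above that scaling works like any other Appwrite Function.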
Thank you, Jyoti! 🙂 I would strongly support the ability to distribute LLM training across multiple Appwrite swarm nodes.
Thanks for the feedback! We will definitely consider it ❤️