Building AI-powered applications comes with responsibility toward your users and their data. Whether you're using AI development tools to build with Appwrite or integrating AI capabilities into your applications, following these best practices helps you build trustworthy and secure experiences.
Protect user data
When sending data to AI providers like OpenAI, Anthropic, or others, be mindful of what information leaves your application.
- Avoid sending personal data to AI providers unless necessary for the feature. Strip personally identifiable information (PII) like names, emails, and addresses from prompts before sending them to an LLM.
- Review provider data policies to understand how each AI provider handles the data you send. Some providers use input data for model training unless you opt out.
- Use Appwrite permissions to control which users and roles can trigger AI-powered features. Appwrite's permission system lets you restrict access at the database, storage, and function level.
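As a minimal sketch of stripping PII before a prompt leaves your application, the following redacts email addresses and phone numbers with regular expressions. The patterns are illustrative, not exhaustive; production code should use a dedicated PII-detection library or service.

```python
import re

# Illustrative patterns only -- real PII detection needs a richer rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before sending to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt
```

Run this on every prompt at the server boundary, not in the client, so the raw data never reaches the AI provider.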
Secure your API keys
AI provider API keys grant access to paid services and should be treated with the same care as any other secret.
- Store API keys as environment variables in your Appwrite Functions. Never hardcode keys in your source code or expose them to the client side.
- Use scoped keys when your AI provider supports them. Restrict keys to only the permissions and models your application needs.
- Rotate keys regularly and revoke any keys that may have been exposed.
Learn more about managing secrets in Appwrite Functions.
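A sketch of reading a provider key from an environment variable inside an Appwrite Function, assuming a Python runtime and an `OPENAI_API_KEY` variable configured in the function's settings (the variable name is an example):

```python
import os

def main(context):
    # Read the key from the function's environment -- never hardcode it.
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        # Fail fast with a server-side error; never echo the key itself.
        return context.res.json({"error": "Missing API key"}, 500)
    # ... call the AI provider with api_key here ...
    return context.res.json({"ok": True})
```

Because the key only exists server-side, it is never shipped to the client or committed to source control.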
Validate inputs and outputs
AI models can produce unexpected or inappropriate results. Build safeguards into your application to handle these cases.
- Validate user inputs before sending them to an AI provider. Set character limits, sanitize content, and reject suspicious prompts to reduce the risk of prompt injection attacks.
- Review AI outputs before displaying them to users or storing them in your database. Implement content filters or moderation layers for user-facing features.
- Handle errors gracefully when AI providers are unavailable or return unexpected responses. Your application should function even when AI features fail.
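The input checks above can be sketched as a single validation step. The length limit and denylist phrases below are assumptions to tune for your use case, and a denylist is only one layer of defense, not a complete fix for prompt injection:

```python
MAX_PROMPT_LENGTH = 2000  # example limit; tune for your use case

# Phrases often seen in prompt-injection attempts. Illustrative only.
SUSPICIOUS_PHRASES = ("ignore previous instructions", "system prompt")

def validate_prompt(prompt: str) -> str:
    """Return a cleaned prompt, or raise ValueError if it is rejected."""
    cleaned = prompt.strip()
    if not cleaned:
        raise ValueError("Prompt is empty")
    if len(cleaned) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt exceeds length limit")
    lowered = cleaned.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("Prompt rejected by content filter")
    return cleaned
```

Catch the `ValueError` in your request handler and return a friendly error, so a rejected prompt degrades gracefully instead of crashing the feature.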
Be transparent with users
Users should understand when they are interacting with AI-generated content or AI-powered features.
- Disclose AI usage in your application. Let users know when content is generated by an AI model or when their input is processed by an AI service.
- Provide opt-out options where possible. Give users control over whether their data is used in AI-powered features.
- Set expectations about AI limitations. AI-generated content can be inaccurate, and users should understand that responses may not always be correct.
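One way to honor an opt-out is to gate each AI call behind a user preference. In this sketch the `ai_features_enabled` flag and the `call_ai_summary` helper are assumptions; with Appwrite you might store the flag in the user's preferences:

```python
def summarize_if_opted_in(user_prefs: dict, text: str) -> str:
    """Only invoke the AI feature when the user has opted in."""
    if not user_prefs.get("ai_features_enabled", False):
        # Respect the opt-out: fall back to non-AI behavior.
        return text
    return call_ai_summary(text)

def call_ai_summary(text: str) -> str:
    # Placeholder for a real provider call (hypothetical).
    return "[AI summary] " + text[:50]
```

Defaulting the flag to `False` means users are never processed by the AI feature without an explicit choice.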
AI-assisted development
When using AI development tools like Cursor, VS Code, or Claude Code to build with Appwrite, keep the following in mind.
- Review generated code before committing. AI-generated code may contain security vulnerabilities, incorrect API usage, or outdated patterns.
- Keep API keys out of prompts when chatting with AI assistants. Avoid pasting secrets, credentials, or sensitive configuration into AI chat interfaces.
- Use official documentation as the source of truth. Point your AI tools to Appwrite's Markdown documentation for accurate and up-to-date context.
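As a rough guard against pasting secrets into an AI chat, you can scan text for key-like strings first. The patterns below (an OpenAI-style `sk-` prefix and generic `api_key=` assignments) are assumptions; real secret scanners use far richer rule sets:

```python
import re

# Example patterns only -- extend for the providers and formats you use.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style keys
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic key assignments
]

def contains_secret(text: str) -> bool:
    """Return True if the text appears to contain a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A check like this could run in a pre-commit hook or a clipboard helper before text is shared with an assistant.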