# Research & Decisions

**Feature**: Anonymous Desire Aggregator | **Date**: 2025-10-09
## 1. Semantic Analysis Provider
- **Decision**: Per the user's directive, the project will use Google's Gemini 2.0 Flash model, accessed via the `@google/generative-ai` Node.js library (a minimal client sketch closes this section).
- **Rationale**: This decision aligns with the user's specific requirement for a powerful, cloud-based, free-to-use LLM without usage limits. Using a managed cloud service significantly simplifies the backend architecture by removing the need to maintain a self-hosted model. The Gemini Flash family of models is designed for high speed and efficiency, making it suitable for a real-time application. This approach also preserves the privacy-first architecture: user data is held only transiently in the backend's memory during the API call and is never persisted on the server.
- **Alternatives Considered**:
  - Self-Hosting Open-Source Models: Rejected because it contradicts the user's explicit choice of a specific cloud-based model.
  - Other Cloud Providers: Rejected to adhere to the user's directive to use a Google Gemini model.
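
For reference, a minimal sketch of reaching the chosen model through the official SDK. The `GOOGLE_AI_API_KEY` variable name and the `analyze` helper are illustrative assumptions, not part of the final design:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

// Read the key from the environment; the variable name is an assumption.
const apiKey = process.env.GOOGLE_AI_API_KEY;
if (!apiKey) throw new Error("GOOGLE_AI_API_KEY is not set");

const genAI = new GoogleGenerativeAI(apiKey);
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

// Hypothetical helper: send a single-turn request and return the text reply.
async function analyze(text: string): Promise<string> {
  const result = await model.generateContent(text);
  return result.response.text();
}
```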
## 2. Integration Best Practices
- **Decision**: The Node.js backend will contain a dedicated, stateless `LLMService` responsible for all communication with the Gemini API.
- **Implementation Details**:
  - **Prompt Engineering**: The service will construct a structured prompt instructing the Gemini model to perform semantic clustering on the list of raw text desires. The prompt will request a JSON object as output that maps each unique desire to a canonical group name (see the service sketch after this list).
  - **API Key Management**: The Google AI API key will be managed securely as an environment variable in the backend Docker container and will not be exposed to the frontend.
  - **Resiliency**: The service must implement error handling, including retries with exponential backoff for transient network errors, and report the failure to the client if the LLM API call fails permanently (a retry helper sketch follows below).
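
A sketch of what such a service might look like, combining the prompt construction, JSON output, and environment-based key handling described above. The `LLMService` name matches the decision, but `clusterDesires`, the prompt wording, and the JSON output constraint via `generationConfig` are illustrative assumptions rather than the final implementation:

```typescript
import { GoogleGenerativeAI, GenerativeModel } from "@google/generative-ai";

// Illustrative stateless service; method name and prompt text are assumptions.
export class LLMService {
  private model: GenerativeModel;

  constructor(apiKey: string) {
    // The key arrives from the container environment and never reaches the frontend.
    this.model = new GoogleGenerativeAI(apiKey).getGenerativeModel({
      model: "gemini-2.0-flash",
      // Ask the SDK to constrain the response to JSON.
      generationConfig: { responseMimeType: "application/json" },
    });
  }

  // Maps each raw desire string to a canonical group name.
  async clusterDesires(desires: string[]): Promise<Record<string, string>> {
    const prompt = [
      "Cluster the following desires by semantic similarity.",
      "Return only a JSON object mapping each input string to a canonical group name.",
      "Desires:",
      ...desires.map((d) => `- ${d}`),
    ].join("\n");

    const result = await this.model.generateContent(prompt);
    return JSON.parse(result.response.text());
  }
}
```

Because the service holds no state between calls, multiple requests can share one instance, and no user data outlives the request that carried it.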
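
And a minimal retry sketch for the resiliency requirement; the attempt count and base delay are placeholder values to be tuned, not recommendations:

```typescript
// Generic retry helper with exponential backoff.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Permanent failure: rethrow so the caller can report it to the client.
      if (attempt >= maxAttempts) throw err;
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage: const groups = await withRetries(() => llmService.clusterDesires(desires));
```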