# ForFarm Project Setup Guide

This guide provides instructions for setting up and running the ForFarm project using different methods.

## Prerequisites

Ensure you have the following tools installed:

- **Go**: Version 1.23 or later (see `backend/go.mod`).
- **Node.js**: Version 20 or later (see `frontend/next.dockerfile`).
- **pnpm**: Node package manager (`npm install -g pnpm`).
- **Docker**: Latest version.
- **Docker Compose**: Latest version (often included with Docker Desktop).
- **kubectl**: Kubernetes command-line tool.
- **gcloud**: Google Cloud SDK (if deploying to GKE).

## Configuration

Environment variables are used for configuration.

- **Backend:**
  - Copy `backend/sample.env` to `backend/.env`.
  - Fill in the required values in `backend/.env` (see the example sketch below this list):
    - `DATABASE_URL`: Connection string for your PostgreSQL database (e.g., `postgres://postgres:@Password123@localhost:5433/postgres?sslmode=disable` for the local Docker Compose setup).
    - `RABBITMQ_URL`: Connection string for RabbitMQ (e.g., `amqp://user:password@localhost:5672/` for local Docker Compose).
    - `JWT_SECRET_KEY`: A strong, random secret key (at least 32 characters).
    - `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`: For Google OAuth.
    - `OPENWEATHER_API_KEY`: Your OpenWeatherMap API key.
    - `GEMINI_API_KEY`: Your Google AI Gemini API key.
    - `GCS_BUCKET_NAME`: Your Google Cloud Storage bucket name.
    - `GCS_SERVICE_ACCOUNT_KEY_PATH`: (Optional) Path to your GCS service account key JSON file if not using Application Default Credentials (ADC). Leave empty if using ADC (recommended for GKE with Workload Identity).
- **Frontend:**
  - Copy `frontend/sample.env` to `frontend/.env`.
  - Fill in the required values in `frontend/.env`:
    - `NEXT_PUBLIC_BACKEND_URL`: URL of the running backend API (e.g., `http://localhost:8000` for local/Compose).
    - `NEXT_PUBLIC_GOOGLE_CLIENT_ID`: Your Google Client ID for OAuth on the frontend.
    - `NEXT_PUBLIC_GOOGLE_MAPS_API_KEY`: Your Google Maps API Key.
    - (Other `NEXTAUTH_*` variables might be needed if you integrate `next-auth` fully.)
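
For reference, a minimal `backend/.env` might look like the sketch below. The connection strings are the Docker Compose examples from above; every other value is an illustrative placeholder that you must replace with your own credentials.

```bash
# backend/.env - illustrative placeholders only
DATABASE_URL='postgres://postgres:@Password123@localhost:5433/postgres?sslmode=disable'
RABBITMQ_URL='amqp://user:password@localhost:5672/'
JWT_SECRET_KEY='replace-with-a-random-string-of-at-least-32-characters'
GOOGLE_CLIENT_ID='your-google-client-id.apps.googleusercontent.com'
GOOGLE_CLIENT_SECRET='your-google-client-secret'
OPENWEATHER_API_KEY='your-openweathermap-api-key'
GEMINI_API_KEY='your-gemini-api-key'
GCS_BUCKET_NAME='your-gcs-bucket'
GCS_SERVICE_ACCOUNT_KEY_PATH=''   # leave empty when using ADC / Workload Identity
```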

## Running Locally (Manual Setup)

This method requires running services like Postgres and RabbitMQ separately.

- **Start Database:** Run PostgreSQL (e.g., using Docker: `docker run --name some-postgres -e POSTGRES_PASSWORD=yourpassword -p 5432:5432 -d postgres:16-alpine`). Ensure the `DATABASE_URL` in `backend/.env` points to it.
- **Start RabbitMQ:** Run RabbitMQ (e.g., using Docker: `docker run --name some-rabbit -p 5672:5672 -p 15672:15672 -d rabbitmq:3-management-alpine`). Ensure `RABBITMQ_URL` in `backend/.env` points to it.
- **Backend Migrations:**

  ```bash
  cd backend
  go run cmd/forfarm/main.go migrate
  ```

- **Run Backend API:**

  ```bash
  cd backend
  # For live reloading (requires air - go install github.com/cosmtrek/air@latest)
  # air
  # Or run directly
  go run cmd/forfarm/main.go api
  ```

  The backend should be running on `http://localhost:8000`.

- **Run Frontend:**

  ```bash
  cd frontend
  pnpm install
  pnpm dev
  ```

  The frontend should be running on `http://localhost:3000`.

- **(Optional) Seed Dummy Data:** Load the dummy data from the `/backend/dummy` directory into the database, either manually or by running:

  ```bash
  make seed
  ```

## Installation Steps (Detailed)

- **Clone the Repository:**

  ```bash
  git clone https://github.com/your-username/ForFarm.git # Replace with your repo URL
  cd ForFarm
  ```

- **Environment Variables:**
  - Copy the example environment file: `cp .env.example .env`
  - Edit the `.env` file: Fill in the required values, especially for:
    - `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB` (you can keep defaults for local setup)
    - `JWT_SECRET_KEY` (generate a strong, random secret; see the example below)
    - `GOOGLE_CLIENT_ID` (if using Google OAuth)
    - `NEXT_PUBLIC_GOOGLE_CLIENT_ID` (Frontend Google Client ID)
    - `NEXT_PUBLIC_GOOGLE_MAPS_API_KEY` (Required for maps)
    - `OPENWEATHER_API_KEY` (Required for weather features)
    - `GEMINI_API_KEY` (Required for AI chatbot features)
    - `RABBITMQ_URL` (Keep default if using the docker-compose setup)
    - (Optionally adjust the `RATE_LIMIT_*` variables)
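
  One way to generate a suitably strong `JWT_SECRET_KEY` (assuming `openssl` is installed; any secure random generator works just as well):

  ```bash
  # Produces a random base64 string comfortably longer than 32 characters
  openssl rand -base64 48
  ```
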
- **Build and Run Services:** Use Docker Compose to build the images and start the backend, frontend, and database containers.

  ```bash
  docker compose up --build -d
  ```

  - `--build`: Forces Docker to rebuild images if Dockerfiles have changed.
  - `-d`: Runs containers in detached mode (in the background).

- **Run Backend Database Migrations:** Apply the necessary database schema changes. Open a new terminal in the project root and navigate to the backend directory:

  ```bash
  cd backend
  make migrate
  cd ..
  ```

  This command uses Go to connect to the database (running in Docker) and applies the migrations located in `backend/migrations`.

- **Install Frontend Dependencies:** Navigate to the frontend directory and install its dependencies using pnpm.

  ```bash
  cd frontend
  pnpm install
  cd ..
  ```

  (Docker Compose might handle this during the build if configured in `next.dockerfile`, but running it explicitly ensures dependencies are up-to-date.)

Access the Application:
- Frontend: Open your browser and navigate to http://localhost:3000 (or the port specified by
FRONTEND_PORTin your.env). - Backend API: The API is accessible at http://localhost:8000 (or the port specified by
BACKEND_PORT). You can use tools like Postman orcurlto interact with it.
- Frontend: Open your browser and navigate to http://localhost:3000 (or the port specified by
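
  A quick smoke test with `curl` (the exact routes depend on the backend's API; hitting the root simply confirms the server is up and responding):

  ```bash
  # -i prints the response status line and headers
  curl -i http://localhost:8000/
  ```
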
- **(Optional) Seed Dummy Data:** Load the dummy data from the `/backend/dummy` directory into the database, either manually or by running:

  ```bash
  make seed
  ```

## Running with Docker Compose

This is the recommended way for local development and testing the containerized setup.

- Ensure the `.env` files are configured as described in the Configuration section. Use the Compose service names as hostnames in the URLs (e.g., `DATABASE_URL='postgres://postgres:@Password123@db:5432/postgres?sslmode=disable'`, `RABBITMQ_URL=amqp://user:password@rabbitmq:5672/` - note the service names `db` and `rabbitmq` instead of `localhost`).
- **Build and Start:**

  ```bash
  docker compose up --build -d # -d runs in detached mode
  ```

- **Run Migrations (First time or after changes):**

  ```bash
  docker compose exec backend /app/api migrate
  # Or if using source mount and go is available:
  # docker compose exec backend go run cmd/forfarm/main.go migrate
  ```

- **Access Services:**
  - Frontend: `http://localhost:3000`
  - Backend API: `http://localhost:8000`
  - RabbitMQ Management: `http://localhost:15672` (user/password from `.env`)
  - Database: Connect via `localhost:5433` using credentials from `.env` (see the example after this list).
- **View Logs:** `docker compose logs -f [service_name]` (e.g., `docker compose logs -f backend`)
- **Stop:** `docker compose down`
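
For example, assuming the default `postgres` user from the sample configuration, you can connect to the Compose database from the host with `psql` (the container's port 5432 is published on `localhost:5433`):

```bash
# Adjust user and database name to match your .env; enter the password when prompted
psql -h localhost -p 5433 -U postgres -d postgres
```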

## Development Workflow

- **Live Reload (Backend):** While `docker compose up -d` keeps the backend running, for active Go development with live reload, stop the backend service (`docker compose stop backend`) and run:

  ```bash
  cd backend
  make live
  ```

  This uses `air` (configured in `.air.toml`) to automatically rebuild and restart the Go application when code changes.
- **Live Reload (Frontend):** The `pnpm dev` command used in the frontend Dockerfile typically includes hot module replacement (HMR). Changes made to frontend code should reflect in the browser automatically when running `docker compose up`. If not, check the Next.js configuration.

## Deploying to Google Kubernetes Engine (GKE)

This requires a configured GKE cluster and an authenticated gcloud CLI.

- **Prerequisites:**
  - Create a GKE cluster.
  - Configure `kubectl` to connect to your cluster (`gcloud container clusters get-credentials YOUR_CLUSTER_NAME --zone YOUR_ZONE --project YOUR_PROJECT_ID`).
  - Enable the Google Container Registry (GCR) or Artifact Registry API.
- **Configure GCS:**
  - Create a GCS bucket (`YOUR_GCS_BUCKET_NAME`).
  - Authentication:
    - (Recommended) Workload Identity: Set up Workload Identity to grant your Kubernetes Service Account permission to access the GCS bucket without key files. This involves creating a GCP Service Account, granting it `roles/storage.objectAdmin` on the bucket, creating a K8s Service Account (e.g., `backend-sa`), and binding them (a rough sketch of these commands follows below). Update `backend-deployment.yaml` to use `serviceAccountName: backend-sa`.
    - (Alternative) Service Account Key: Create a GCP Service Account, grant it permissions, and download its JSON key file.
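
  A rough sketch of the Workload Identity setup, assuming a GCP service account named `forfarm-gcs-sa`, the `forfarm` namespace (created via `k8s/namespace.yaml`), and the `backend-sa` K8s Service Account mentioned above. Adjust the names, project ID, and bucket to your environment:

  ```bash
  # Create the GCP service account and grant it access to the bucket
  gcloud iam service-accounts create forfarm-gcs-sa --project=YOUR_PROJECT_ID
  gcloud storage buckets add-iam-policy-binding gs://YOUR_GCS_BUCKET_NAME \
    --member="serviceAccount:forfarm-gcs-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"

  # Create the K8s Service Account (requires the forfarm namespace to exist)
  # and bind it to the GCP service account
  kubectl create serviceaccount backend-sa -n forfarm
  gcloud iam service-accounts add-iam-policy-binding \
    forfarm-gcs-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:YOUR_PROJECT_ID.svc.id.goog[forfarm/backend-sa]"
  kubectl annotate serviceaccount backend-sa -n forfarm \
    iam.gke.io/gcp-service-account=forfarm-gcs-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com
  ```
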
- **Build and Push Docker Images:**
  - Authenticate Docker with GCR/Artifact Registry (`gcloud auth configure-docker YOUR_REGION-docker.pkg.dev`).
  - Build the images:

    ```bash
    # Backend
    docker build -t YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/backend:latest -f backend/go.dockerfile ./backend
    # Frontend
    docker build -t YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/frontend:latest -f frontend/next.dockerfile ./frontend
    ```

  - Push the images:

    ```bash
    docker push YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/backend:latest
    docker push YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/frontend:latest
    ```

  - Update the `k8s/*.yaml` files: Replace `YOUR_GCR_PROJECT_ID/forfarm-backend:latest` and `YOUR_GCR_PROJECT_ID/forfarm-frontend:latest` with your actual image paths (one way to do this is shown below).
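
  For example, using GNU `sed` (the placeholder strings and manifest file names are assumptions based on the files referenced later in this guide; on macOS use `sed -i ''`):

  ```bash
  sed -i 's#YOUR_GCR_PROJECT_ID/forfarm-backend:latest#YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/backend:latest#' k8s/backend-deployment.yaml
  sed -i 's#YOUR_GCR_PROJECT_ID/forfarm-frontend:latest#YOUR_REGION-docker.pkg.dev/YOUR_GCR_PROJECT_ID/forfarm/frontend:latest#' k8s/frontend-deployment.yaml
  ```
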
- **Create Kubernetes Secrets:**
  - Encode Secrets: Base64 encode all values needed in `k8s/secrets.yaml`.

    ```bash
    echo -n "your_password" | base64
    # For GCS key file (if using):
    cat path/to/your-gcs-key.json | base64 | tr -d '\n' # Ensure no newlines in output
    ```

  - Update `k8s/secrets.yaml`: Paste the base64 encoded values into the `data` section.
  - Apply Secrets:

    ```bash
    kubectl apply -f k8s/namespace.yaml
    kubectl apply -f k8s/secrets.yaml -n forfarm
    ```

    Alternatively, create the secrets imperatively (safer, as values aren't stored in YAML):

    ```bash
    kubectl create secret generic forfarm-secrets -n forfarm \
      --from-literal=POSTGRES_PASSWORD='your_db_password' \
      --from-literal=RABBITMQ_PASSWORD='your_rabbit_password' \
      # ... add other secrets ...
      # If using key file:
      # --from-file=GCS_SERVICE_ACCOUNT_KEY_JSON=/path/to/your-gcs-key.json
    ```

- **Create ConfigMap:**
  - Update `k8s/configmap.yaml`: Replace placeholders like `YOUR_GOOGLE_CLIENT_ID`, `YOUR_GOOGLE_MAPS_API_KEY`, and `YOUR_GCS_BUCKET_NAME`. Adjust service URLs if needed.
  - Apply ConfigMap:

    ```bash
    kubectl apply -f k8s/configmap.yaml -n forfarm
    ```

- **Apply Deployments, Services, PVCs:**

  ```bash
  # Apply database and message queue first
  kubectl apply -f k8s/postgres-pvc.yaml -n forfarm # Only if using self-hosted postgres
  kubectl apply -f k8s/postgres-deployment.yaml -n forfarm
  kubectl apply -f k8s/postgres-service.yaml -n forfarm
  kubectl apply -f k8s/rabbitmq-pvc.yaml -n forfarm
  kubectl apply -f k8s/rabbitmq-deployment.yaml -n forfarm
  kubectl apply -f k8s/rabbitmq-service.yaml -n forfarm

  # Wait for DB and RabbitMQ to be ready (check pods: kubectl get pods -n forfarm -w)

  # Apply backend and frontend
  kubectl apply -f k8s/backend-deployment.yaml -n forfarm
  kubectl apply -f k8s/backend-service.yaml -n forfarm
  kubectl apply -f k8s/frontend-deployment.yaml -n forfarm
  kubectl apply -f k8s/frontend-service.yaml -n forfarm
  ```

  Note: The `initContainer` in `backend-deployment.yaml` should handle migrations. To confirm the rollouts finished, see the check below.
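
  A quick way to verify the rollouts (the deployment names here are assumptions drawn from the manifest file names; use `kubectl get deployments -n forfarm` to see the actual names):

  ```bash
  kubectl get deployments -n forfarm
  kubectl rollout status deployment/backend -n forfarm
  kubectl rollout status deployment/frontend -n forfarm
  ```
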
- **Setup Ingress:**
  - Update `k8s/ingress.yaml`: Replace `your-domain.com` with your domain. Configure TLS and managed certificates if needed (this requires creating a `ManagedCertificate` resource in GKE; see the sketch below).
  - Apply Ingress:

    ```bash
    kubectl apply -f k8s/ingress.yaml -n forfarm
    ```

  - Get Ingress IP: Wait a few minutes, then run `kubectl get ingress forfarm-ingress -n forfarm` and note the `ADDRESS`.
  - Configure DNS: Point your domain's A record(s) to the Ingress IP address.
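
  A minimal `ManagedCertificate` sketch, applied via a heredoc (the resource name `forfarm-cert` is an assumption; your Ingress must reference it through the `networking.gke.io/managed-certificates` annotation):

  ```bash
  cat <<'EOF' | kubectl apply -n forfarm -f -
  apiVersion: networking.gke.io/v1
  kind: ManagedCertificate
  metadata:
    name: forfarm-cert
  spec:
    domains:
      - your-domain.com
  EOF
  ```
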
- **Alternative: Cloud SQL:** Instead of running Postgres in K8s, consider using Cloud SQL. Create a Cloud SQL instance, configure its user/database, and update the `DATABASE_URL` in your `k8s/configmap.yaml` to point to the Cloud SQL proxy or private IP. You won't need the `postgres-*.yaml` files. A rough sketch of the instance setup is shown below.
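
  Sketch of creating the instance with `gcloud` (the instance, database, tier, and region values are placeholders; pick values appropriate for your project and budget):

  ```bash
  gcloud sql instances create forfarm-db \
    --database-version=POSTGRES_16 \
    --tier=db-g1-small \
    --region=YOUR_REGION
  gcloud sql databases create forfarm --instance=forfarm-db
  gcloud sql users set-password postgres --instance=forfarm-db --password='your_db_password'
  ```
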

## Troubleshooting

- **Docker Compose:** Use `docker compose logs -f <service_name>` to check logs. Use `docker compose exec <service_name> sh` to get a shell inside a container.
- **Kubernetes:** Use `kubectl get pods -n forfarm`, `kubectl logs <pod_name> -n forfarm [-c <container_name>]`, and `kubectl describe pod <pod_name> -n forfarm`.
- **Migrations:** Check the `goose_db_version` table in your database to see applied migrations (see the example query below).
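
  For instance, with the Docker Compose setup (assuming the `db` service name and default `postgres` user shown earlier):

  ```bash
  docker compose exec db psql -U postgres -d postgres \
    -c 'SELECT version_id, is_applied, tstamp FROM goose_db_version ORDER BY id DESC LIMIT 5;'
  ```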