---
title: Grammo
emoji: 📚
colorFrom: purple
colorTo: yellow
sdk: docker
pinned: false
license: gpl-3.0
short_description: AI Translation and Grammar Correction
---
# Grammo Backend

Django REST API backend for Grammo, an AI-powered translation and grammar correction service.
## Overview

The Grammo backend provides a RESTful API for translation and grammar correction services. It leverages LangChain and HuggingFace models to process language requests, with LangGraph managing conversation state across sessions.
## Features

- 🌐 **Translation Service** - Natural, contextually appropriate translations between languages
- ✏️ **Grammar Correction** - Fixes grammar, spelling, and punctuation errors
- 💬 **Session Management** - Maintains conversation context using Django sessions and LangGraph checkpoints
- 🎭 **Customizable Modes** - Supports Default and Grammar modes
- 🎨 **Tone Control** - Configurable tone (Default, Formal, Casual) for responses
- 🔒 **Security** - CORS support, CSRF protection, secure session management
- 📦 **HuggingFace Integration** - Uses the GPT-OSS-Safeguard-20B model via the HuggingFace API
## Tech Stack

- **Django 5.2.7** - Web framework
- **Django REST Framework** - API development
- **LangChain** - AI agent orchestration
- **LangGraph** - Conversation state management
- **HuggingFace** - Language model integration (GPT-OSS-Safeguard-20B)
- **Python 3.14+** - Programming language
- **SQLite** - Database (development)
- **Uvicorn** - ASGI server
## Prerequisites

- Python 3.14 or higher
- pip (Python package manager)
- HuggingFace API Token ([get one here](https://huggingface.co/settings/tokens))
## Installation

```bash
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Create a `.env` file in the backend directory (or copy from the example):

```bash
cp .env.example .env   # or: touch .env
```

At minimum, set the variables below (see Environment Variables for details):
```env
# Required
SECRET_KEY=your-secret-key-here
HUGGINGFACEHUB_API_TOKEN=your-huggingface-api-token

# Common
DEBUG=True
BUILD_MODE=development  # change to "production" for deployment
```

To generate a Django secret key:

```bash
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
```

Run the database migrations:

```bash
python manage.py migrate
```

## Environment Variables

Create a `.env` file in the backend directory. The backend loads variables from this file using python-dotenv.
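For reference, the loading step in `backend/settings.py` typically looks like the sketch below. This is a minimal illustration assuming python-dotenv and a conventional `BASE_DIR`; the project's actual file may differ.

```python
# settings.py fragment (sketch): load .env with python-dotenv before reading settings.
import os
from pathlib import Path

from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent.parent

# Populate os.environ from backend/.env (silently does nothing if the file is absent)
load_dotenv(BASE_DIR / ".env")

SECRET_KEY = os.environ["SECRET_KEY"]               # raises KeyError if unset
DEBUG = os.environ.get("DEBUG", "True") == "True"
BUILD_MODE = os.environ.get("BUILD_MODE", "development")
```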
```env
# Django Secret Key (generate one using the command above)
SECRET_KEY=your-secret-key-here

# HuggingFace API Token (any of these will be picked up; preferred shown first)
HUGGINGFACEHUB_API_TOKEN=your-huggingface-api-token
# HF_TOKEN=your-huggingface-api-token
# HF_API_TOKEN=your-huggingface-api-token

# Debug mode (default: True)
DEBUG=True

# App build mode: "development" (default) or "production"
BUILD_MODE=development

# Port only used when running `python app.py` (Hugging Face Spaces)
# PORT=7860
```

When `BUILD_MODE=production`, the following become relevant:
```env
# Allowed hosts (comma-separated, no spaces)
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com

# CSRF trusted origins (comma-separated)
CSRF_TRUSTED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
```

Notes:

- Most security and CORS flags are derived automatically from `BUILD_MODE` in `backend/settings.py`:
  - In development: permissive defaults for local usage
  - In production: `CORS_ALLOW_ALL_ORIGINS=False`, secure cookies, HSTS, content-type nosniff, and SSL redirect are enabled
- Do not set `SESSION_COOKIE_SECURE`, `CSRF_COOKIE_SECURE`, `CORS_ALLOW_ALL_ORIGINS`, or `SECURE_*` directly via env; they are computed from `BUILD_MODE`.
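As an illustration of that derivation, a simplified version of the mapping might look like this. It is a sketch only: the flag names mirror Django settings, but the function and exact values are assumptions, not the project's code.

```python
# Illustrative sketch: deriving security/CORS flags from BUILD_MODE.
import os

def security_flags(build_mode: str) -> dict:
    prod = build_mode == "production"
    return {
        "CORS_ALLOW_ALL_ORIGINS": not prod,      # permissive only in development
        "SESSION_COOKIE_SECURE": prod,
        "CSRF_COOKIE_SECURE": prod,
        "SECURE_SSL_REDIRECT": prod,
        "SECURE_CONTENT_TYPE_NOSNIFF": prod,
        "SECURE_HSTS_SECONDS": 31536000 if prod else 0,  # one year of HSTS in production
    }

flags = security_flags(os.environ.get("BUILD_MODE", "development"))
```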
## Running the Server

**Option 1: Django Development Server (with warnings)**

```bash
python manage.py runserver
```

The server will run on http://localhost:8000

**Option 2: Uvicorn ASGI Server (production-like, no warnings)**

```bash
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000 --reload
```

For production:

```bash
# Set DEBUG=False in .env
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000

# With multiple workers:
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000 --workers 4
```

The backend can also be run as a standalone script:

```bash
python app.py
```

This uses the `PORT` environment variable (defaults to 7860) and is configured for HuggingFace Spaces deployment.
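The `PORT` handling can be sketched as follows. This is illustrative only; the actual `app.py` and its server invocation may differ.

```python
# Sketch: resolve the port the standalone entry point should bind to.
import os

def resolve_port(env) -> int:
    """PORT env var if set, otherwise 7860 (the HuggingFace Spaces default)."""
    return int(env.get("PORT", 7860))

# In app.py this would feed an ASGI server call along the lines of:
#   uvicorn.run("backend.asgi:application", host="0.0.0.0", port=resolve_port(os.environ))
port = resolve_port({})
```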
## API Endpoints

All endpoints are prefixed with `/api/v1/`.
### GET /hello/

Health check endpoint.

Response:

```json
{
  "message": "Hello from Grammo!"
}
```

### POST /chat/

Send a message to start or continue a chat session.
Request Body:

```json
{
  "message": "Translate this text to French",
  "chatSession": 0,
  "mode": "default",
  "tone": "default"
}
```

Parameters:

- `message` (required): The user's message
- `chatSession` (optional): Session identifier to maintain conversation context
- `mode` (optional): `"default"` or `"grammar"` - Determines how the message is processed
- `tone` (optional): `"default"`, `"formal"`, or `"casual"` - Sets the tone of the response
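To make the accepted values concrete, here is a small client-side validator for this payload. It is a sketch mirroring the documented parameters; the server's actual validation in `api/views.py` may differ.

```python
# Sketch: validate a /api/v1/chat/ payload against the documented parameters.
VALID_MODES = {"default", "grammar"}
VALID_TONES = {"default", "formal", "casual"}

def validate_chat_payload(payload: dict) -> list[str]:
    """Return a list of validation errors (empty means the payload is valid)."""
    errors = []
    if not payload.get("message"):
        errors.append("message is required")
    if payload.get("mode", "default") not in VALID_MODES:
        errors.append("mode must be 'default' or 'grammar'")
    if payload.get("tone", "default") not in VALID_TONES:
        errors.append("tone must be 'default', 'formal', or 'casual'")
    return errors
```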
Response (Success):

```json
{
  "status": "success",
  "response": "**Original**: \nTranslate this text to French\n**Output**: \nTraduisez ce texte en français\n___\n**Explanation**: \n> Direct translation maintaining the original meaning"
}
```

Response (Error):

```json
{
  "status": "error",
  "response": "Invalid message."
}
```

### POST /end/

End the current chat session and clear conversation history.
Request Body:

```json
{}
```

Response (Success):

```json
{
  "status": "success",
  "message": "Session ended successfully"
}
```

Response (Error):

```json
{
  "status": "error",
  "response": "No active session."
}
```

## Project Structure

```
backend/
├── agent_manager/        # AI agent management module
│   └── __init__.py       # LangChain agent setup, session management
├── api/                  # Django REST API application
│   ├── views.py          # API view handlers (chat, hello, end)
│   ├── urls.py           # URL routing
│   └── apps.py           # App configuration
├── backend/              # Django project settings
│   ├── settings.py       # Django configuration
│   ├── urls.py           # Main URL configuration
│   ├── asgi.py           # ASGI application
│   └── wsgi.py           # WSGI application
├── app.py                # Standalone entry point (HuggingFace Spaces)
├── manage.py             # Django management script
├── requirements.txt      # Python dependencies
├── Dockerfile            # Docker configuration for deployment
└── README.md             # This file
```
## Session Management

- Sessions are managed using Django's session framework
- Session data is stored in the cache backend (in-memory for development)
- Each session maintains its own LangGraph agent with conversation checkpointing
- Sessions expire after 24 hours of inactivity or when explicitly ended
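Under these assumptions, the relevant `settings.py` fragment might look like this sketch. The values are chosen to match the behaviour described here, not copied from the project.

```python
# settings.py fragment (sketch): cache-backed sessions with a 24-hour lifetime.
CACHES = {
    "default": {
        # In-memory cache: fine for development, lost on restart
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_COOKIE_AGE = 60 * 60 * 24  # 24 hours
```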
## Agent Architecture

- Uses LangChain's `create_agent` with a structured output wrapper
- Structured output ensures consistent JSON responses for translation/correction tasks
- Agents are cached per session key for efficient memory usage
- Supports task types: `translation`, `correction`, `follow-up`, `invalid`
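The per-session caching pattern amounts to a keyed factory. Here is a minimal sketch, with a placeholder factory standing in for the real LangChain/LangGraph agent construction in `agent_manager`:

```python
# Sketch: cache one agent per Django session key.
_agents: dict[str, object] = {}

def make_agent(session_key: str):
    """Placeholder factory; the real module would build a LangGraph agent
    with a conversation checkpointer tied to this session."""
    return {"session": session_key, "history": []}

def get_agent(session_key: str):
    """Return the cached agent for a session, creating it on first use."""
    if session_key not in _agents:
        _agents[session_key] = make_agent(session_key)
    return _agents[session_key]

def end_session(session_key: str) -> bool:
    """Drop the agent (and its conversation state); True if one existed."""
    return _agents.pop(session_key, None) is not None
```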
## Database

- Uses SQLite by default (suitable for development)
- No models are currently defined, but Django is configured for future database needs
- Run `python manage.py migrate` to set up the database schema

## Caching

- In-memory cache is used for sessions (development)
- Note: For production, consider switching to Redis or another persistent cache backend
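That switch to Redis can be as small as this `settings.py` fragment. This is a sketch using Django's built-in Redis cache backend (available since Django 4.0); the URL is a placeholder.

```python
# settings.py fragment (sketch): Redis-backed cache so sessions survive restarts.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # placeholder; point at your Redis
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```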
## CORS

- CORS is configured to allow cross-origin requests
- In production, configure `CORS_ALLOW_ALL_ORIGINS` and `ALLOWED_HOSTS` appropriately
## Deployment

### HuggingFace Spaces

The backend includes a Dockerfile configured for HuggingFace Spaces deployment.

1. Set environment variables in your Space settings:
   - `SECRET_KEY`
   - `HUGGINGFACEHUB_API_TOKEN`
   - `BUILD_MODE=production`
   - `DEBUG=False`
   - `ALLOWED_HOSTS=your-space-name.hf.space`
   - `CSRF_TRUSTED_ORIGINS=https://your-space-name.hf.space`
2. Push your code to the Space repository
3. The API will be available at `https://your-space-name.hf.space/api/v1/`
### General Production Checklist

- Set production environment variables (see Environment Variables):
  - `BUILD_MODE=production`, `DEBUG=False`
  - `ALLOWED_HOSTS` and `CSRF_TRUSTED_ORIGINS`
- Configure a proper database (PostgreSQL recommended)
- Set up Redis or another cache backend for sessions
- Use a production ASGI server (Uvicorn with multiple workers or Gunicorn with Uvicorn workers)
- Configure a reverse proxy (Nginx, Apache) with SSL/TLS
- Set up static file serving or use a CDN
## Testing

To test the API endpoints:

```bash
# Health check
curl http://localhost:8000/api/v1/hello/

# Send a chat message
curl -X POST http://localhost:8000/api/v1/chat/ \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, translate this to Spanish", "mode": "default", "tone": "default"}'

# End session
curl -X POST http://localhost:8000/api/v1/end/
```

## Troubleshooting

- **Module not found errors**: Ensure your virtual environment is activated and dependencies are installed
- **Secret key errors**: Make sure `SECRET_KEY` is set in your `.env` file
- **HuggingFace API errors**: Verify your `HUGGINGFACEHUB_API_TOKEN` is valid
- **CORS errors**: Check `CORS_ALLOW_ALL_ORIGINS` and `ALLOWED_HOSTS` settings
- **Session not persisting**: Ensure the cache backend is configured correctly
## Notes

- The application uses in-memory session storage for development. For production, consider using Redis.
- The HuggingFace model (`openai/gpt-oss-safeguard-20b`) is used for all language processing tasks.
- Conversation state is managed per Django session using LangGraph's checkpoint system.
- The structured output wrapper ensures responses follow a consistent JSON schema.
## License

This project is licensed under GPL-3.0. See the LICENSE file for details.