A full-stack fake-news analysis platform with live global news ingestion, claim verification, and a local TensorFlow LSTM model — wrapped in a React 19 dashboard for feed browsing and real-time inference.
Veritas AI is a full-stack platform that ingests live global news via RSS, lets you verify arbitrary claims against aggregated evidence, and runs a locally-hosted TensorFlow LSTM model to classify text as real or fake news.
The FastAPI backend exposes a clean REST API for feed browsing, per-article lookup, claim verification, and model inference. It runs in local-model mode when TensorFlow artifacts are present, and otherwise falls back to a lexical heuristic classifier (heuristic-fallback mode) — so it never hard-crashes.
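The heuristic-fallback mode could look something like the sketch below. This is purely illustrative: the README only says fallback mode uses lexical heuristics, so the word list, threshold, and function name here are assumptions, not the project's actual rules.

```python
# Hypothetical sketch of a lexical-heuristic fallback classifier.
# The sensational-word list and threshold are illustrative assumptions.
SENSATIONAL = {"shocking", "miracle", "exposed", "hoax", "secret", "unbelievable"}

def heuristic_verdict(text: str, threshold: float = 0.15) -> str:
    """Label text 'fake' when sensational words exceed a density threshold."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return "real"
    ratio = sum(w in SENSATIONAL for w in words) / len(words)
    return "fake" if ratio >= threshold else "real"
```

The point of a fallback this simple is availability: it keeps `POST /api/model/infer` answering even when the trained artifacts are missing, at the cost of accuracy.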
The frontend is a Vite + React 19 + TypeScript dashboard with three pages: a live Home feed, an AI Verification page, and a Model Showcase with metrics and demo inference. The model is trained on True.csv and Fake.csv, tracked via Git LFS.
- **Live global feed** — continuously pulls from global RSS sources. Browse by topic, genre, source, or full-text search with pagination via `GET /api/feed`.
- **Claim verification** — submit any text claim to `POST /api/verify` and receive aggregated evidence. Fact-check arbitrary statements against live sources.
- **Local LSTM inference** — a TensorFlow/Keras LSTM model trained on True.csv + Fake.csv runs entirely on-device. Call `POST /api/model/infer` with any text for a fake/real verdict.
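These endpoints can be exercised with a minimal stdlib-only client sketch. Hedged: the request field names `claim_text` and `text` come from the API description, but the feed filter parameter names (`topic`, `q`, `page`) are assumptions about the query string.

```python
"""Minimal client sketch for the Veritas AI REST API (illustrative only)."""
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8000"

def feed_url(base: str = BASE, **filters: str) -> str:
    """Build a GET /api/feed URL with query-string filters (names assumed)."""
    query = urllib.parse.urlencode(filters)
    return f"{base}/api/feed" + (f"?{query}" if query else "")

def verify_request(claim: str, base: str = BASE) -> urllib.request.Request:
    """Build a POST /api/verify request carrying the claim text."""
    body = json.dumps({"claim_text": claim}).encode()
    return urllib.request.Request(
        f"{base}/api/verify", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )

def infer_request(text: str, base: str = BASE) -> urllib.request.Request:
    """Build a POST /api/model/infer request for a fake/real verdict."""
    body = json.dumps({"text": text}).encode()
    return urllib.request.Request(
        f"{base}/api/model/infer", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
```

With the backend running locally, each request would be sent with `urllib.request.urlopen(...)` and the JSON response decoded from the body.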
**Python 3.11 required.** The model service auto-detects its environment. In local-model mode, it loads `detector_model.keras`, `detector_tokenizer.json`, and `detector_config.json`. When those aren't available it seamlessly enters heuristic-fallback mode using lexical heuristics. Check which mode is active via `GET /api/model/metrics`.
The metrics endpoint exposes F1, precision, recall, ROC-AUC, overall score, prediction count, and average inference latency (`last_prediction_ms`, `avg_prediction_ms`).
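A small helper can turn that metrics payload into a one-line status summary. Hedged: only `last_prediction_ms` and `avg_prediction_ms` are documented key names; `mode`, `f1`, `precision`, `recall`, and `roc_auc` are assumed spellings for the fields the endpoint is described as exposing.

```python
def summarize_metrics(payload: dict) -> str:
    """Render a GET /api/model/metrics response as a one-line summary.

    Key names other than avg_prediction_ms are assumptions about the schema.
    """
    parts = [f"mode={payload.get('mode', 'unknown')}"]
    for key in ("f1", "precision", "recall", "roc_auc"):
        if key in payload:
            parts.append(f"{key}={payload[key]:.3f}")
    if "avg_prediction_ms" in payload:
        parts.append(f"avg={payload['avg_prediction_ms']:.1f}ms")
    return " ".join(parts)
```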
`True.csv` and `Fake.csv` are tracked with Git LFS. After cloning, run `git lfs pull` to fetch the full training data before running `final_train.py`.
Vite + React 19 + TypeScript + React Router frontend with three pages: Home feed, AI Verification, and Model Showcase. API base URL configurable via `VITE_API_BASE_URL`.
`final_train.py` trains the LSTM model. `simple_predict.py` runs standalone inference. `showcase_local_model.py` demos against the live backend API.
"status":"ok"claim_texttext → fake/real verdictgit clone https://github.com/sourya2007/Veritas-AI.git
cd Veritas-AI
git lfs install
git lfs pull
`git lfs pull` is what fetches the real data.

**Backend (macOS/Linux):**

```bash
python3.11 -m venv .venv-tf
source .venv-tf/bin/activate
pip install -r backend/requirements.txt
cd backend
python -m uvicorn app.main:app --reload --port 8000
```
**Backend (Windows PowerShell):**

```powershell
py -3.11 -m venv .venv-tf
.\.venv-tf\Scripts\Activate.ps1
pip install -r backend\requirements.txt
Set-Location backend
python -m uvicorn app.main:app --reload --port 8000
```
Verify the backend is up:

```bash
curl http://localhost:8000/health
```

**Frontend:**

```bash
cd frontend
npm install
npm run dev
```
The app runs at http://localhost:5173. To override the API URL, create `frontend/.env.local` and set `VITE_API_BASE_URL=http://localhost:8000`.

**Training:**

```bash
python final_train.py
```
Outputs: `detector_model.keras` · `detector_model.h5` · `detector_tokenizer.json` · `detector_config.json` · `ml/artifacts/metrics.json` · `training_results.txt`

**Standalone inference:**

```bash
python simple_predict.py
```
`simple_predict.py` writes its output to `results_demo.txt`. For the API showcase, start the backend and then run `python showcase_local_model.py`.

## Contributing

Veritas AI is open-source. Star the repo, open issues, or submit a pull request to help fight misinformation.