Full-Stack · Fake News Detection · Open Source

Veritas AI

A full-stack fake-news analysis platform with live global news ingestion, claim verification, and a local TensorFlow LSTM model — wrapped in a React 19 dashboard for feed browsing and real-time inference.

View on GitHub Get Started
React 19 + TypeScript · FastAPI · TensorFlow / Keras · LSTM Model · RSS Ingestion · Claim Verification · Git LFS · Python 3.11
What is Veritas AI?

Read the news.
Verify the truth.

Veritas AI is a full-stack platform that ingests live global news via RSS, lets you verify arbitrary claims against aggregated evidence, and runs a locally hosted TensorFlow LSTM model to classify text as real or fake news.

The FastAPI backend exposes a clean REST API for feed browsing, per-article lookup, claim verification, and model inference. It runs in local-model mode when the TensorFlow artifacts are present, and falls back to a lexical heuristic classifier (heuristic-fallback mode) when they are not, so the service never hard-crashes.

The frontend is a Vite + React 19 + TypeScript dashboard with three pages: a live Home feed, an AI Verification page, and a Model Showcase with metrics and demo inference. The LSTM model is trained on True.csv and Fake.csv, both tracked via Git LFS.

Inference Preview · Model Active
Fake
HOAX: Government to seize all private savings accounts starting Monday morning.
POST /api/model/infer · local-model mode
Real
Federal Reserve holds interest rates steady amid ongoing inflation concerns.
POST /api/model/infer · local-model mode
Verifying
Government confirms national inflation dropped this quarter.
POST /api/verify · evidence aggregation…
Real
WHO releases updated seasonal influenza vaccine guidelines.
POST /api/model/infer · local-model mode
Capabilities

Everything in one platform

📡

Live News Feed Ingestion

Continuously pulls from global RSS sources. Browse by topic, genre, source, or full-text search with pagination via GET /api/feed.

🔍

Claim Verification

Submit any text claim to POST /api/verify and receive aggregated evidence. Fact-check arbitrary statements against live sources.

🧠

Local LSTM Inference

A TensorFlow/Keras LSTM model trained on True.csv + Fake.csv runs entirely on-device. Call POST /api/model/infer with any text for a fake/real verdict.

⚙️

Dual Runtime Modes

The model service auto-detects its environment. In local-model mode (Python 3.11 required), it loads detector_model.keras, detector_tokenizer.json, and detector_config.json. When those artifacts aren't available it falls back to heuristic-fallback mode, which classifies with lexical heuristics instead of the LSTM. Check which mode is active via GET /api/model/metrics.
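The mode selection and fallback can be pictured like this. This is a toy sketch, not the backend's actual code: the keyword set and scoring rule are invented for illustration, and only the artifact filenames and mode names come from the project:

```python
import os

# Illustrative keyword list; the real fallback heuristics are unspecified here.
SENSATIONAL = {"hoax", "shocking", "secret", "exposed", "miracle", "seize"}

def pick_mode(artifact_dir: str = ".") -> str:
    """Local-model mode only when every required artifact is on disk."""
    needed = ("detector_model.keras", "detector_tokenizer.json", "detector_config.json")
    present = all(os.path.exists(os.path.join(artifact_dir, f)) for f in needed)
    return "local-model" if present else "heuristic-fallback"

def heuristic_verdict(text: str) -> str:
    """Toy lexical fallback: sensational vocabulary flags text as fake."""
    words = {w.strip(".,:!?").lower() for w in text.split()}
    return "fake" if words & SENSATIONAL else "real"

print(heuristic_verdict("HOAX: Government to seize all private savings"))  # fake
```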

📊

Live Model Metrics

The metrics endpoint exposes F1, precision, recall, ROC-AUC, overall score, prediction count, and average inference latency (last_prediction_ms, avg_prediction_ms).
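The latency fields suggest simple running bookkeeping. Here is a hedged sketch of one way to maintain them; the field names match the endpoint, but the incremental-mean implementation is an assumption:

```python
class InferenceMetrics:
    """Running latency stats like those in GET /api/model/metrics."""

    def __init__(self) -> None:
        self.prediction_count = 0
        self.last_prediction_ms = 0.0
        self.avg_prediction_ms = 0.0

    def record(self, elapsed_ms: float) -> None:
        self.prediction_count += 1
        self.last_prediction_ms = elapsed_ms
        # Incremental mean: no need to store every latency sample.
        self.avg_prediction_ms += (elapsed_ms - self.avg_prediction_ms) / self.prediction_count

m = InferenceMetrics()
for ms in (10.0, 20.0, 30.0):
    m.record(ms)
# m.avg_prediction_ms == 20.0, m.last_prediction_ms == 30.0
```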

🗄️

Git LFS Datasets

True.csv and Fake.csv are tracked with Git LFS. After cloning, run git lfs pull to fetch the full training data before running final_train.py.

🖥️

React 19 Dashboard

Vite + React 19 + TypeScript + React Router frontend with three pages: Home feed, AI Verification, and Model Showcase. API base URL configurable via VITE_API_BASE_URL.

🔧

CLI Scripts

final_train.py trains the LSTM model. simple_predict.py runs standalone inference. showcase_local_model.py demos against the live backend API.

Tech Stack

Built with the right tools

Frontend

  • React 19
  • TypeScript
  • Vite
  • React Router
  • CSS

Backend

  • FastAPI
  • Pydantic
  • Uvicorn
  • Python 3.11
  • RSS Parsing

Machine Learning

  • TensorFlow / Keras
  • LSTM Architecture
  • scikit-learn
  • pandas
  • numpy

Data & Tooling

  • True.csv + Fake.csv
  • Git LFS
  • Node.js 20+ / npm
  • .keras / .h5 artifacts
  • detector_config.json
Structure & API

Clean layout,
clear endpoints

Veritas-AI/
├── frontend/                  Vite + React app
├── backend/                   FastAPI app & routes
├── ml/artifacts/              metrics & model artifacts
├── docs/
├── final_train.py             trains LSTM model
├── simple_predict.py          local inference demo
├── showcase_local_model.py    CLI vs API demo
├── detector_model.keras       primary artifact
├── detector_model.h5          secondary artifact
├── detector_tokenizer.json
├── detector_config.json       max_len, threshold
├── True.csv                   Git LFS
└── Fake.csv                   Git LFS
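The two detector_config.json fields shown in the tree (max_len, threshold) plausibly drive inference like this. A sketch under assumptions: max_len pads or truncates the tokenized input, and threshold cuts the model's fake probability; the helper names are hypothetical:

```python
import json

def load_config(path: str = "detector_config.json") -> dict:
    """Read max_len, threshold, etc. from the config artifact."""
    with open(path) as f:
        return json.load(f)

def pad_ids(token_ids: list[int], max_len: int, pad_id: int = 0) -> list[int]:
    """Truncate or right-pad a token-id sequence to exactly max_len."""
    return token_ids[:max_len] + [pad_id] * max(0, max_len - len(token_ids))

def verdict(fake_probability: float, threshold: float) -> str:
    """Apply the decision threshold to the model's output probability."""
    return "fake" if fake_probability >= threshold else "real"

ids = pad_ids([5, 9, 2], max_len=6)   # [5, 9, 2, 0, 0, 0]
print(verdict(0.83, threshold=0.5))   # fake
```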
API Endpoints

GET /health · Health check; returns {"status":"ok"}
GET /api/feed · Live news feed. Query params: topic, genre, source, search, page, page_size
GET /api/articles/{article_id} · Single article by ID
POST /api/verify · Verify a claim with evidence aggregation. Body: claim_text
POST /api/model/infer · Run LSTM inference. Body: text → fake/real verdict
GET /api/model/metrics · F1, precision, recall, ROC-AUC, mode, avg_prediction_ms
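The two POST endpoints take small JSON bodies. Here is a minimal stdlib-only sketch that builds (but does not send) the requests; the body field names text and claim_text come from the endpoint list, everything else is illustrative:

```python
import json
from urllib import request

def build_post(base: str, path: str, body: dict) -> request.Request:
    """Build a JSON POST request for the Veritas AI API without sending it."""
    return request.Request(
        url=base + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

infer = build_post("http://localhost:8000", "/api/model/infer",
                   {"text": "WHO releases updated vaccine guidelines."})
verify = build_post("http://localhost:8000", "/api/verify",
                    {"claim_text": "Inflation dropped this quarter."})
# Send with request.urlopen(infer) once the backend is running on port 8000.
```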
Get Started

Up and running
in minutes

⚠ Python Note: TensorFlow wheels are not available for Python 3.14 in this setup. Use Python 3.11 for model training and local-model inference. Also requires Node.js 20+ and Git LFS installed.
1
Clone the repo and pull LFS datasets
git clone https://github.com/sourya2007/Veritas-AI.git
cd Veritas-AI
git lfs install
git lfs pull
If dataset files look tiny after clone, git lfs pull is what fetches the real data.
2
Backend setup — macOS / Linux
python3.11 -m venv .venv-tf
source .venv-tf/bin/activate
pip install -r backend/requirements.txt
cd backend
python -m uvicorn app.main:app --reload --port 8000
3
Backend setup — Windows (PowerShell)
py -3.11 -m venv .venv-tf
.\.venv-tf\Scripts\Activate.ps1
pip install -r backend\requirements.txt
Set-Location backend
python -m uvicorn app.main:app --reload --port 8000
Health check: curl http://localhost:8000/health
4
Frontend setup (new terminal)
cd frontend
npm install
npm run dev
Runs at http://localhost:5173. Override API URL: create frontend/.env.local and set VITE_API_BASE_URL=http://localhost:8000.
5
(Optional) Train the LSTM model yourself
python final_train.py
Outputs: detector_model.keras · detector_model.h5 · detector_tokenizer.json · detector_config.json · ml/artifacts/metrics.json · training_results.txt
6
Run standalone prediction demo
python simple_predict.py
Writes prediction logs to results_demo.txt. For API showcase: start backend then run python showcase_local_model.py.
Open Source

Read. Verify. Trust.

Veritas AI is open-source. Star the repo, open issues, or submit a pull request to help fight misinformation.