A New Lens for the News
An interactive memoir of the News Advance project, an AI-powered tool built to combat misinformation and foster a more informed readership.
The Spark: Confronting the Information Problem
The digital world gives us more information than ever, but finding the truth is harder. The News Advance project was born from a need to fix this.
Information Overload
A constant flood of news, opinions, and stories makes it tough to tell the difference between real journalism and fake news.
Filter Bubbles
Algorithms show us what we already agree with, creating echo chambers that shield us from different points of view.
The Mission
To build a smart helper that uses AI to help people check articles, understand bias, and spot potential misinformation.
Our Solution: A Hybrid AI in Action
News Advance combines specialized AI models and powerful LLMs to provide a multi-layered analysis of any news article.
AI-Powered Summarization
A custom-trained BART model provides fast, high-quality summaries that capture the main idea of an article without the choppy feel of simpler methods.
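One practical detail behind abstractive summarization with BART is its fixed input window (roughly 1024 tokens), so long articles must be split before they reach the model. The sketch below shows a simple word-bounded chunker; the helper name, the 400-word limit, and the pipeline usage in the comment are illustrative assumptions, not the project's actual preprocessing code.

```python
from typing import List

def chunk_for_bart(text: str, max_words: int = 400) -> List[str]:
    """Split an article into word-bounded chunks small enough for BART's
    ~1024-token input window (400 words is a conservative heuristic)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Each chunk could then be summarized with a fine-tuned model, e.g.:
#   from transformers import pipeline
#   summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
#   parts = [summarizer(c, max_length=130, min_length=30)[0]["summary_text"]
#            for c in chunk_for_bart(article)]
```

Chunking on word boundaries keeps sentences mostly intact, which is what gives abstractive output its fluid feel compared with extractive snippets.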
Nuanced Bias & Sentiment Analysis
Using a local LLM via Ollama, the system detects political leaning and emotional tone, providing detailed explanations for its conclusions to ensure transparency.
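Ollama exposes a local REST endpoint (`/api/generate` on port 11434) that can be asked to constrain its output to JSON, which makes structured bias/sentiment verdicts straightforward to parse. Below is a minimal payload-builder sketch; the model name, prompt wording, and response schema are assumptions for illustration, not the project's exact prompts.

```python
def build_bias_payload(article_text: str) -> dict:
    """Build an Ollama /api/generate request asking for a structured
    bias + sentiment verdict. Model name and JSON schema are illustrative."""
    prompt = (
        "Analyse the following news article. Respond with JSON containing "
        '"political_leaning" (left/center/right), "sentiment" '
        '(positive/neutral/negative), and a short "explanation".\n\n'
        + article_text
    )
    return {
        "model": "llama3",   # any locally pulled model
        "prompt": prompt,
        "format": "json",    # ask Ollama to constrain output to JSON
        "stream": False,     # return one complete response object
    }

# Sending the payload (requires a running Ollama server):
#   import requests, json
#   resp = requests.post("http://localhost:11434/api/generate",
#                        json=build_bias_payload(article), timeout=120)
#   verdict = json.loads(resp.json()["response"])
```

Because the model runs locally, article text never leaves the machine, which is where the privacy claim comes from.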
Unified User Experience
All analyses are presented on a single, clean interface. Users see the article, its summary, a bias spectrum visualization, and a sentiment breakdown, turning them into critical analysts.
The Blueprint: Our Tech Stack
We chose best-in-class, open-source tools to build a powerful and flexible system. Hover over an icon to learn more.
Django 5.2
Backend Framework
Python 3.8+
Programming Language
SQLite / PostgreSQL
Database
HTML/CSS/JS
Frontend Interface
Transformers
AI/NLP Library
Ollama
Local LLM Framework
Newspaper3k
Data Gathering
BeautifulSoup4
Data Parsing
Building the Brains: The AI Journey
Our AI development was a story of planned success and strategic adaptation when faced with unexpected challenges.
Path 1: A Custom-Trained Success
Our first goal was to build a custom summarization model. We fine-tuned the BART architecture on the BBC News Summary dataset. The result was a big win, proving we could build our own specialized AI solutions.
Model Performance (ROUGE Scores)
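ROUGE scores measure n-gram overlap between a generated summary and a human reference. To make the metric concrete, here is a simplified pure-Python ROUGE-1 F1 (lowercased whitespace tokens, no stemming); production evaluation would use a proper library such as `rouge-score`, so this is a teaching sketch rather than the project's evaluation code.

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between a reference summary
    and a generated one (lowercase whitespace tokens, no stemming)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)
```

For example, candidate "the cat sat" against reference "the cat sat on the mat" gives recall 3/6 and precision 3/3, so F1 = 2/3.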
Path 2: The Strategic Pivot
When we tried to build a bias detection model, we hit a wall: no good, neutral training data existed. Instead of giving up, we pivoted to a better solution.
Lack of high-quality, neutral datasets for political bias.
Integrated Ollama to run powerful LLMs locally for nuanced, explainable, and private analysis.
This strategic pivot let us deliver a bias-analysis feature far superior to what we had originally envisioned.
Live AI Analysis ✨
Paste in text from a news article below and get a real-time analysis powered by the Gemini API.
Summary
Bias Assessment
Sentiment
Confidence
Potential Fallacies & Manipulative Language
Sorry, something went wrong. Please try again.
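The live demo above calls the Gemini API over REST. A minimal sketch of the request body is shown below; the prompt wording is illustrative, and the model name and endpoint in the comment follow Google's public `generateContent` interface (an API key is required, and error handling is omitted).

```python
def build_gemini_request(article_text: str) -> dict:
    """Request body for Gemini's generateContent REST endpoint.
    The analysis prompt wording is an illustrative assumption."""
    prompt = (
        "Summarize the following article, assess its political bias and "
        "sentiment, and list any logical fallacies or manipulative "
        "language:\n\n" + article_text
    )
    return {"contents": [{"parts": [{"text": prompt}]}]}

# Sending the request (API key and model per Google's public docs):
#   import requests
#   url = ("https://generativelanguage.googleapis.com/v1beta/models/"
#          "gemini-1.5-flash:generateContent?key=" + API_KEY)
#   reply = requests.post(url, json=build_gemini_request(article)).json()
#   text = reply["candidates"][0]["content"]["parts"][0]["text"]
```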
Future Horizons
The journey isn't over. Our future plans are ambitious, aiming to make News Advance an even more essential tool.
To increase our impact, we'll create a RESTful API. This will let other news apps, researchers, and developers use our analysis tools in their own projects, helping us spread our mission.
We plan to train models to do even more detailed analysis, like identifying specific logical fallacies (e.g., personal attacks) or common propaganda tricks.
A top priority is to build out our fact-checking feature. This means creating a system that can automatically pull out key claims from an article and check them against trusted fact-checking sites like PolitiFact, Snopes, and Reuters in real time.
Our models will continuously scan and analyze new content to detect misleading narratives as they emerge, allowing users to respond to misinformation faster than ever before.
We're developing tools to detect and surface trending misinformation narratives across the web, allowing users to understand what's gaining traction and why.
Users will receive alerts when potentially false or manipulated content is detected, helping them stay informed and avoid sharing harmful misinformation.
For complex or nuanced topics, we'll enable collaboration with subject matter experts to ensure our analyses remain accurate, fair, and contextually aware.