Feature Deep Dive

Stop Using Black-Box AI
For Systematic Reviews

Lumina gives you unmatched transparency. Instantly generate PRISMA-trAIce- and RAISE-compliant methods sections and download a complete audit trail of every AI decision. Publishing with AI has never been safer.

The "Black-Box" Problem

Most AI screening tools give you a ranked list of papers but hide the underlying mechanics. When peer reviewers ask "What model did you use? What were the hyperparameters? How often did human reviewers' final choices differ from the AI?", you're left guessing. This hurts reproducibility and violates emerging open-science guidelines.

The Lumina Solution

We believe in Open Science. Lumina tracks every parameter—from embedding models to active learning variables—and snapshots them. With one click, you get a polished Transparency Report that demonstrates the rigor of your methodology to journal editors.

Everything You Need to Publish

Copy & Paste

Auto-Generated Methods Paragraph

Writing the methodology section for your systematic review is a breeze. Lumina automatically generates a highly structured text block detailing exactly how the AI was used, suitable for direct insertion into your manuscript.

  • Details embedding models and classifier types
  • Documents hyperparameters and Rocchio configurations
  • Reports the exact Human-AI agreement rate
"Title and abstract screening was supported by an active learning system (Lumina, v1.0). Text representation utilized a custom sentence-transformer embedding model. The prioritization algorithm employed was an SVM Classifier..."
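To make the idea concrete, here is a minimal sketch of how a methods paragraph could be assembled from a project's recorded settings. The function, field names, and values are illustrative assumptions, not Lumina's actual API:

```python
# Hypothetical sketch: rendering a methods-section block from a screening
# configuration. All names here are illustrative, not Lumina internals.

def methods_paragraph(config: dict, agreement_rate: float) -> str:
    """Render a methods paragraph from recorded screening settings."""
    return (
        f"Title and abstract screening was supported by an active learning "
        f"system ({config['tool']}, v{config['version']}). "
        f"Text representation utilized the {config['embedding_model']} "
        f"sentence-transformer embedding model. "
        f"The prioritization algorithm employed was a {config['classifier']}. "
        f"Human reviewers agreed with AI recommendations on "
        f"{agreement_rate:.1%} of screened records."
    )

config = {
    "tool": "Lumina",
    "version": "1.0",
    "embedding_model": "all-MiniLM-L6-v2",
    "classifier": "SVM classifier",
}
print(methods_paragraph(config, agreement_rate=0.942))
```

Because the paragraph is generated from the same snapshot the tool trains on, the reported settings cannot drift from the settings actually used.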
Total Accountability

Line-by-Line AI Audit Trail

For every paper screened, Lumina exports the AI's internal state. This creates a bulletproof CSV file that you can attach to your publication as supplementary material.

  • Tracks AI recommendation vs Human decision
  • Exports the exact AI 'Reasoning' for each abstract
  • Records the specific model version used at that moment
audit_log_project_14.csv (22 columns; excerpt)
paper_id, semantic_score, relevance_score, ai_recommendation, ai_reasoning, human_decision...
1492, 0.824, 0.991, Include, "Matches primary outcome metrics regarding CBT in adolescents.", Included
1493, 0.412, 0.201, Exclude, "Study population focuses on adults, not adolescents.", Excluded
1495, 0.791, 0.884, Include, "Validates CBT effectiveness, unclear if age range matches.", Excluded
... and 4,997 more rows
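The same log also yields the Human-AI agreement rate reported in the methods paragraph. The sketch below, using a few rows shaped like the excerpt above (column names assumed from that excerpt), shows one way such a rate could be computed:

```python
# Illustrative rows mirroring the audit-log excerpt; the real export has
# 22 columns, only the decision columns are needed here.
rows = [
    {"paper_id": 1492, "ai_recommendation": "Include", "human_decision": "Included"},
    {"paper_id": 1493, "ai_recommendation": "Exclude", "human_decision": "Excluded"},
    {"paper_id": 1495, "ai_recommendation": "Include", "human_decision": "Excluded"},
]

def agreement_rate(rows: list[dict]) -> float:
    """Fraction of records where the human decision matched the AI recommendation."""
    matches = sum(
        # "Included" starts with "Include", "Excluded" with "Exclude"
        r["human_decision"].startswith(r["ai_recommendation"])
        for r in rows
    )
    return matches / len(rows)

print(f"Human-AI agreement: {agreement_rate(rows):.0%}")  # 2 of 3 rows agree
```

Attaching both the raw CSV and a derived statistic like this lets reviewers recompute the number themselves.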
Immutable Science

Configuration Snapshots

AI models update frequently. Lumina takes a permanent 'snapshot' of the exact neural network, weighting scheme, and hyperparameters at the moment your project begins training. Even if we upgrade our system tomorrow, your study's reported methodology will perfectly reflect the tools you actually used.

project_config.json

Embedding Model: all-MiniLM-L6-v2
Algorithm: SVM (Linear)
Sampling Strategy: Active Learning (Uncertainty + Relevance)
Query Expansion: Rocchio Algorithm (α = 1.0)
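One simple way to make such a snapshot tamper-evident is to serialize it canonically and fingerprint it. This is a sketch of the general technique, not Lumina's internal implementation; the field names follow the card above:

```python
import hashlib
import json

# Freeze the configuration the moment training starts.
snapshot = {
    "embedding_model": "all-MiniLM-L6-v2",
    "algorithm": "SVM (Linear)",
    "sampling_strategy": "Active Learning (Uncertainty + Relevance)",
    "query_expansion": {"method": "Rocchio", "alpha": 1.0},
}

# Canonical serialization: sorted keys make the hash independent of
# dictionary insertion order.
canonical = json.dumps(snapshot, sort_keys=True)
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

# Store the config alongside its fingerprint; any later edit to the
# config will no longer match the recorded hash.
with open("project_config.json", "w") as f:
    json.dump({"config": snapshot, "sha256": fingerprint}, f, indent=2)

print(fingerprint[:12])
```

Publishing the fingerprint with the study lets anyone verify, years later, that the configuration file accompanying the paper is the one the screening actually ran with.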

Compliant by Design

Our Transparency Report is closely modeled on the latest reporting recommendations from the medical and academic research communities.

Frequently Asked Questions

Why is AI transparency important for systematic reviews?

To maintain reproducibility. If someone else tries to replicate your systematic review in 5 years, they need to know exactly how your automation prioritized papers, what heuristics were applied, and how much influence the AI had over your final human decisions.

Do Covidence or Rayyan offer this level of transparency?

Currently, most legacy screening tools operate as 'black boxes'. They may give you a relevance score, but they don't provide a publishable audit log of the model's exact parameters or reasoning, making it difficult for researchers to comply with emerging PRISMA-trAIce guidelines.

Is the AI Transparency Report included in the free trial?

The feature is part of our Pro and Team tiers, but you can fully test and view the Transparency Report during your 14-day free trial.

Start Screening Safely

Try Lumina today and see how easy it is to conduct an AI-accelerated systematic review without compromising on academic rigor.