Co-Researcher v2.1 — Claude Code · Gemini CLI · OpenAI Codex · OpenCode

Your agent just cited a paper that doesn't exist.
Give it a protocol.

Fifteen research protocols — literature review, critical analysis, hypothesis testing, systematic review — installed natively into your AI CLI.

Install in 30 seconds · View on GitHub →

The problem

A model trained on everything hasn't been trained to do research.

It invents citations that sound plausible. It conflates correlation with causation. It calls a literature review "comprehensive" after sampling a fraction of the field. These aren't random hallucinations — they're method gaps. The model was never given the protocol that trained researchers follow.

The principle

Systemic Honesty

Co-Researcher's core rule: accuracy over output count. Every skill requires the agent to flag uncertainty, refuse unverified sources, and distinguish what the evidence shows from what you might wish it showed. When it doesn't know, it says so.

Fifteen research protocols.

Each skill is a defined workflow. Type the command, and the agent follows the same steps a trained researcher takes: systematic search, explicit coding, verified citations, uncertainty quantification.
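A session might start with a single command. The topic and phrasing here are hypothetical; any research question works:

    /research how does remote work affect code review quality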

/research

Research Orchestration

Intelligent multi-agent coordination. Analyzes your question, selects the right agents, executes a phased research plan.

/analyze

Critical Analysis

Fallacy detection, bias identification, contradictory evidence handling. Evaluates the strength of an argument, not just its surface coherence.

Literature Review

Systematic search, citation chaining, hallucination detection, gap analysis. Refuses to cite sources it cannot verify.

Hypothesis Testing

Variable mapping, falsification criteria, experimental controls. Distinguishes testable claims from unfalsifiable ones.

Quantitative Analysis

Statistical method selection, effect size interpretation, Simpson's paradox detection, power analysis. A worked example of the paradox follows the skill list.

Qualitative Research

Thematic analysis, coding strategy, leading-question detection, theoretical saturation assessment.

/review

Peer Review

Manuscript critique with methodological rigor scoring. Structures feedback the way a journal reviewer would.

/ethics

Ethics Review

IRB compliance assessment, participant privacy risk, dual-use research concerns.

Systematic Review

PRISMA-standard protocol, inclusion/exclusion criteria, Risk of Bias assessment.

/synthesize

Research Synthesis

Narrative synthesis with explicit uncertainty quantification. Distinguishes strong evidence from weak consensus.

/methodology

Research Methodology

Design selection and validation — matches your research question to appropriate methods, sampling strategies, and validity controls.

/grant

Grant Writing

Funding strategy, Specific Aims development, alignment with agency priorities.

Lateral Thinking

Cross-domain analogies, constraint satisfaction, first-principles reasoning for novel research problems.

Academic Writing

Eliminates AI-isms from research prose — hedging, passive-voice defaults, vague transitions. Produces writing that reads like a human expert wrote it.

Multi-Source Investigation

Triangulates complex claims across three or more independent sources. Checks credibility, funding sources, and institutional bias for every source it cites.
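Simpson's paradox, named in the Quantitative Analysis card above, is easiest to see with numbers. The sketch below uses invented toy data and Python's standard library (3.10+ for statistics.correlation); it illustrates the trap the skill checks for, not Co-Researcher's internals.

    # Simpson's paradox: every subgroup trends positive, yet the
    # pooled data trends negative. The numbers are invented purely
    # to show the sign flip.
    from statistics import correlation

    group_a = ([1, 2, 3], [8, 9, 10])  # cohort A: rising trend
    group_b = ([6, 7, 8], [1, 2, 3])   # cohort B: rising trend

    print(correlation(*group_a))  # 1.0, positive within cohort A
    print(correlation(*group_b))  # 1.0, positive within cohort B

    pooled_x = group_a[0] + group_b[0]
    pooled_y = group_a[1] + group_b[1]
    print(correlation(pooled_x, pooled_y))  # about -0.86: the sign flips

Which number is "the" answer depends on whether the grouping variable is a confounder. Forcing that judgment, instead of reporting whichever correlation comes out first, is the point of the skill.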

Systemic Honesty is a protocol constraint, not a model instruction.

Telling a model "don't fabricate" improves outputs until the task gets hard. Co-Researcher embeds verification into the method itself — at search, at citation, at synthesis — so the constraint doesn't depend on a single instruction holding under pressure.

Never fabricate a citation. If a source cannot be verified, say so.

Distinguish what the evidence shows from what it suggests.

Quantify uncertainty — "likely," "insufficient evidence," "conflicting findings."

Refuse to produce a "comprehensive" review when coverage is partial.

Flag methodological limits before presenting conclusions.
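The first rule is also the most mechanical, which makes it easy to show what verification at the citation step can mean. A minimal sketch, assuming Crossref's public REST API as the verifier; Co-Researcher's own check may look different:

    # Verify that a DOI resolves before allowing it into a citation
    # list. Illustrative only: uses Crossref's public endpoint and
    # the Python standard library.
    from urllib.request import urlopen
    from urllib.error import URLError

    def doi_exists(doi: str) -> bool:
        """True only if Crossref can resolve the DOI."""
        try:
            with urlopen(f"https://api.crossref.org/works/{doi}") as resp:
                return resp.status == 200
        except URLError:
            return False  # unknown DOI or unreachable: treat as unverified

    # Protocol behavior: an unverified source is named as unverified,
    # never silently cited.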

One command in the tool you already use.

Co-Researcher installs as a native plugin or extension. Your existing agent gets research-grade methods added to its repertoire.

Claude Code

/plugin marketplace add poemswe/co-researcher

Gemini CLI

gemini extension install https://github.com/poemswe/co-researcher

OpenAI Codex

Tell Codex: "Fetch and follow https://raw.githubusercontent.com/poemswe/co-researcher/main/.codex/INSTALL.md"

From source

git clone https://github.com/poemswe/co-researcher

After installing, run /using-co-researcher to orient Claude to all available skills.

22 test cases. Six rubrics. Every agent output preserved.

Claude and Codex scored across literature search, critical analysis, quantitative reasoning, and research design. Not demos — actual outputs with full rubric breakdowns.

Open Benchmark Arena →