
🚀 Getting started


Prerequisites

- Python 3.12
- Node.js and npm (for the UI)
- Azure CLI (az) with access to an Azure AI Foundry / Azure OpenAI deployment
- Optional: Homebrew on macOS (for the LaTeX compilers)

Setup

1. Clone + install

git clone https://github.com/saadmsft/nanoresearch.git
cd nanoresearch
python3.12 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

2. Configure Azure (no API keys)

cp .env.example .env
# Edit:
#   AZURE_OPENAI_ENDPOINT=https://<your-foundry>.services.ai.azure.com/
#   AZURE_OPENAI_DEPLOYMENT=gpt-5.1
#   AZURE_OPENAI_API_VERSION=2024-12-01-preview
az login

Verify access:

nanoresearch health --azure --no-local
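The .env file is plain KEY=VALUE lines. If you want to sanity-check what will be loaded without pulling in a dotenv library, a minimal stdlib parser is enough (a sketch for illustration, not nanoresearch's own loader):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first '=' only, so values may contain '='.
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = """
# Azure settings
AZURE_OPENAI_ENDPOINT=https://example.services.ai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-5.1
"""
print(parse_env(sample)["AZURE_OPENAI_DEPLOYMENT"])  # -> gpt-5.1
```

Pointing this at your real .env is a quick way to spot a stray quote or a missing value before a run fails with an auth error.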

3. Backend

nanoresearch serve            # http://127.0.0.1:8000

4. Frontend

cd ui
npm install
npm run dev                   # http://localhost:5173

Open http://localhost:5173. Tell the assistant your name and field, e.g.:

Hi! I’m Mia, an ecologist. I prefer field studies, 6-month timeline. Start a run on canopy cover and breeding-bird species richness in city parks.

Optional add-ons

Local SDPO planner

Required for the full tri-level co-evolution (the planner learns from your feedback via SDPO). Adds ~3 GB of wheels and downloads ~15 GB of Qwen weights.

pip install -e ".[local]"
huggingface-cli download Qwen/Qwen2.5-7B-Instruct \
  --local-dir data/models/Qwen2.5-7B-Instruct

# Quick smoke test (loads the model)
pytest -m local_model

LaTeX compiler

# Heavy option (full MacTeX):
brew install --cask mactex

# Light option (single binary, recommended):
brew install tectonic

NanoResearch tries pdflatex first, then falls back to tectonic. If neither is installed, you’ll still get a .tex source ready to compile elsewhere.
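The fallback order can be sketched as follows (a simplified illustration, not nanoresearch's actual code; the which parameter defaults to shutil.which and is injectable so the logic is testable):

```python
import shutil
from typing import Callable, Optional

def pick_compiler(
    which: Callable[[str], Optional[str]] = shutil.which,
) -> Optional[str]:
    """Return the first available LaTeX compiler, preferring pdflatex."""
    for candidate in ("pdflatex", "tectonic"):
        if which(candidate):  # which() returns the path, or None if absent
            return candidate
    # Neither found: the caller keeps the .tex source and skips PDF output.
    return None
```

Running pick_compiler() in your venv is a quick way to confirm which compiler a paper build will use.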

Useful CLI commands

nanoresearch settings                  # print resolved config
nanoresearch health --azure            # AAD round-trip to GPT-5.1
nanoresearch health --azure --local    # also load Qwen on MPS
nanoresearch serve --port 8001         # alternative port
nanoresearch serve --access-log        # verbose HTTP logs

Inspect a run

After a run, the full audit trail is on disk:

# All events for the latest run:
ls -t runs/ | head -1 | xargs -I {} cat runs/{}/events.jsonl

# Generated project + paper:
ls runs/workspaces/proj-*/
ls runs/papers/proj-*/

# Your accumulated stores:
ls data/users/<id>/skills/
ls data/users/<id>/memories/
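Since events.jsonl is one JSON object per line, a few lines of stdlib Python are enough to filter it (a sketch; the "type" field name here is illustrative, not a guarantee about the actual event schema):

```python
import json
from pathlib import Path

def load_events(path: Path) -> list[dict]:
    """Parse a JSON-lines event log, skipping blank lines."""
    return [
        json.loads(line)
        for line in path.read_text().splitlines()
        if line.strip()
    ]

def event_types(events: list[dict]) -> list[str]:
    """Collect a per-event field, e.g. its type (field name assumed)."""
    return [event.get("type", "?") for event in events]
```

For example, event_types(load_events(Path("runs/<run_id>/events.jsonl"))) gives a one-line timeline of a run.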

Or via the API:

curl http://127.0.0.1:8000/api/users/<id>/skills    | jq
curl http://127.0.0.1:8000/api/users/<id>/memories  | jq
curl http://127.0.0.1:8000/api/runs/<run_id>        | jq
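The same endpoints are easy to script. Here is a hedged sketch of fetching one run record; the fetch callable is injected so the URL logic is testable offline, and in real use it would wrap urllib.request.urlopen against the server above:

```python
import json
from typing import Callable

BASE = "http://127.0.0.1:8000"

def get_run(run_id: str, fetch: Callable[[str], str]) -> dict:
    """Build the /api/runs/<run_id> URL, fetch it, and decode the JSON body."""
    return json.loads(fetch(f"{BASE}/api/runs/{run_id}"))
```

A real fetch could be lambda url: urllib.request.urlopen(url).read().decode(), with the server from step 3 running.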

Troubleshooting

| Symptom | Fix |
| --- | --- |
| "Tenant provided in token does not match resource tenant" | Run az login --tenant <correct-tenant> |
| 400 BadRequest from OpenAlex | Network is blocked or a proxy is interfering; set HTTPS_PROXY if needed |
| "pdflatex not found" warning | Install tectonic or MacTeX; the paper still ships as .tex |
| Stage II fails with ModuleNotFoundError | The generated code requested a package outside the allow-list; the debug loop should patch it on retry |
| UI shows endless 404 polling | Stale run_id in localStorage: open DevTools → Application → Local Storage and clear nano.runId |

See security.html for sandbox details and api.html for the full HTTP surface.