flowchart TD
    A([Research Goal]) --> B{Methodology\nAnalyzer}
    B -->|JTBD 0.87| C[Job-to-be-Done]
    B -->|Journey 0.71| D[User Journey]
    B -->|Hypothesis 0.62| E[Hypothesis\nTesting]
    B -->|Segmentation 0.55| F[Segmentation]
    C --> G[Survey\nGenerator]
    D --> G
    E --> G
    F --> G
    G --> H[4 Sections · 18 Questions]
    H --> I[Pre-flight\nSimulation]
    I --> J{Quality Check\nn=50 personas}
    J -->|Q07 leading\nflagged| G
    J -->|Clean| K[Field\nDeployment]
    K --> L[Response\nStream]
    L --> M[AI Analysis\nEngine]
    M --> N[Opportunity\nScores]
    M --> O[Report\nDraft]
    N --> P[Chat Insights]
    O --> Q[PDF · PPTX · XLS]
      
sequenceDiagram
    participant U as Team
    participant CL as Claude
    participant OE as OAIRA Engine
    participant SIM as Simulation
    participant FLD as Field

    U->>CL: /oaira-study-design
    Note over U,CL: Q3_Churn.pdf attached
    CL->>OE: analyze_goal(context)
    OE-->>CL: JTBD selected · score 0.87
    CL->>OE: generate_survey(JTBD, n=18)
    OE-->>CL: draft ready · 4 sections
    CL->>SIM: pre_flight(personas=50)
    SIM-->>CL: flags · Q07 leading · Q14 double-barreled
    CL->>OE: revise(flags)
    OE-->>CL: survey v2 · clean
    CL-->>U: ready to field
    U->>CL: ship it
    CL->>FLD: deploy()
    FLD-->>OE: responses streaming
    OE-->>U: live analytics
      
graph LR
    A[Claude.ai\nClaude Code] -->|/oaira skill| B[OAIRA\nEngine]
    C[ChatGPT\nCursor] -->|MCP tools| B
    D[Your Product\nREST API] --> B
    B --> E{Methodology\nRouter}
    E --> F[Survey\nBuilder]
    E --> G[Interview\nAgent]
    E --> H[Deep Research\nPipeline]
    F --> I[Simulation\nLayer n=50]
    G --> I
    H --> I
    I --> J[Field\nDeployment]
    J --> K[Analysis\nEngine]
    K --> L[oaira.worksona.io]
    L --> M[Analytics\nDashboard]
    L --> N[Report\nExport]
    L --> O[Team\nManagement]
      

Orchestrated AI Research Agents · Atomic47 Labs

A47 OAIRA
Claude · /oaira-study-design
Q3_Churn_Research.pdf · 248 KB · 1 file
"Understand why power users churn at month 4. Design and run a complete JTBD study — simulate with AI personas, field it, and produce a board-ready report."
Methodology · Survey · Simulate · Field · Analyze · Report

Modern market research, headless.

OAIRA lives where your team already works. Drop in the /oaira skill and you're running full market research from inside Claude — attach a brief, state the goal, and get a methodology-validated, simulation-tested study back in the same thread. When a project needs the complete environment, oaira.worksona.io delivers: survey builder, agentic simulation, AI interviewer, streaming analytics, and polished exports. One engine. Chat and app, same data, same rigour.

8 · Methodologies, encoded
4 · Surfaces — one engine
50× · Pre-flight AI personas
6 · Claude skills in the suite
The problem

Weeks from question to defensible answer.

The team that should be running research lives inside Claude, Cursor, and Linear. The research function still runs inside Typeform, Sheets, and a Friday-morning deck.

01 · Decisions on instruments nobody validated. A Typeform written at midnight, no methodology check, no pilot — feeding the deck on Thursday.
02 · The methodology question never gets asked. Nobody on the team knows which framework to reach for, so the work picks itself.
03 · Qual is rationed. Interviews don't scale. Most of the signal you'd want never gets gathered.
04 · The output dies inside a PDF. Nobody opens it twice. The study stops the day the deck ships.
The shift

Research is a capability your tools call — not a department you brief.

The Old Shape

  • Research as a project.
  • Owned by specialists.
  • Briefed weeks in advance.
  • Delivered as a deck.
  • Stops when the deck ships.

The New Shape

  • Research as a function call.
  • Owned by the team asking.
  • Triggered on the question.
  • Delivered as structured data.
  • Compounds across studies.
One engine, four ways in

A research platform that runs headless
inside the tools you already use.

Every capability a REST endpoint and an MCP tool. Survey, simulate, field, analyse, report — callable from chat. A polished web app when the moment calls for it. Same engine. Same data.
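What "every capability is a REST endpoint" could look like from a script — a minimal sketch, not documented API. The base URL path, header names, and payload fields are assumptions; only the capability name `create_survey` comes from the tool list above.

```python
# Hypothetical sketch: shaping a call to one OAIRA capability as plain HTTP.
# Endpoint paths and payload fields are assumptions, not documented API.
import json

BASE = "https://oaira.worksona.io/api/v1"  # assumed base URL

def build_request(capability: str, payload: dict) -> dict:
    """Describe a POST to one capability endpoint without sending it."""
    return {
        "method": "POST",
        "url": f"{BASE}/{capability}",
        "headers": {"Authorization": "Bearer <API_KEY>",
                    "Content-Type": "application/json"},
        "body": json.dumps(payload),
    }

req = build_request("create_survey", {
    "goal": "Understand why power users churn at month 4",
    "methodology": "jobs_to_be_done",
})
print(req["url"])  # https://oaira.worksona.io/api/v1/create_survey
```

The same shape would cover `run_simulation`, `field_study`, `analyze`, and `report` — one verb per capability, callable from any HTTP client.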

01 · Claude.ai / Claude Code

Drop in the /oaira skill.

Draft a survey, run a simulation, query results — without leaving the chat. Each skill is a focused, composable action that calls the OAIRA engine directly.

02 · Any MCP-enabled agent

Point the agent at the MCP server.

ChatGPT, your own agent, any MCP-compatible host. Every capability becomes a tool the agent can call. The instrument is wherever the work is.

▸ oaira.mcp / tools
create_survey · run_simulation
field_study · analyze · report
// callable from any agent
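A minimal sketch of how an MCP-style host might register and dispatch these tools by name. The five tool names are from the list above; the handler body and its parameters are placeholders, not the real engine.

```python
# Illustrative MCP-style tool dispatch. Tool names come from the OAIRA
# tool list; the handler logic is a placeholder, not the real engine.
from typing import Callable

TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a function under a tool name, as an MCP server would."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("run_simulation")
def run_simulation(study_id: str, personas: int = 50) -> dict:
    # Placeholder: the real tool would call the OAIRA engine.
    return {"study_id": study_id, "personas": personas, "status": "queued"}

# An agent host resolves a tool call by name and forwards the arguments:
result = TOOLS["run_simulation"]("study-42", personas=50)
print(result["status"])  # queued
```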
03 · Your own product

Embed wherever your users already are.

Survey by URL, iframe, or API. Onboarding, NPS, post-purchase moments. The respondent never leaves your surface.

04 · oaira.worksona.io

The full web app, when the moment calls for it.

Admin, builder, analytics, reports. Complex builds, client readouts, final reports. Polished interface. Same engine as the API.

Methodology is code, not advice

You say. OAIRA does. You get.

Step 01 · You say

"I want to understand why power users keep churning at month 4."

  • Plain-language goal. No methodology, no schema. Just the question you actually have.
  • Inside Claude or the app. Wherever you happen to be working — chat is the input surface.
Time invested: 30 seconds
Step 02 · OAIRA does

Picks framework. Writes survey. Validates design.

▸ methodology-analyzer
  jobs_to_be_done 0.87 · user_journey 0.71 · exploratory 0.62
▸ generating survey
  4 sections · 18 questions
▸ pre-flight (n=50)
  flagged: Q07 leading frame
Time elapsed: ~4 minutes
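The routing step above — picking a framework from fit scores — can be sketched in a few lines. The scores are the ones shown in the run; the threshold value and the fall-back-to-user behaviour are assumptions, not documented engine logic.

```python
# Illustrative methodology router: rank frameworks by fit score and
# auto-pick only above a threshold. Threshold value is an assumption.
SCORES = {"jobs_to_be_done": 0.87, "user_journey": 0.71, "exploratory": 0.62}
THRESHOLD = 0.75  # assumed: below this, ask the user instead of auto-picking

def pick_methodology(scores: dict[str, float]):
    """Return the top-scoring framework, or None if nothing clears the bar."""
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score >= THRESHOLD else None

print(pick_methodology(SCORES))  # jobs_to_be_done
```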
Step 03 · You get

A methodologically sound study. Leading frames already fixed.

  • Ready to ship or revise. Further refinement in the same chat.
  • Eight frameworks, encoded. JTBD · Journey · Gap · Hypothesis · Comparative · Sentiment · Segmentation · Discovery.
  • Available to every PM, marketer, operator. Not just the research function.
Total time to live study: hours
The app

Full UX when the moment calls for it.

AI builder for surveys. AI analyst for results. AI author for reports. Streaming, observable, inspectable. One coherent system from first question to final readout.

Survey library
Survey builder

01 / 08

AI-Guided Survey Design

Describe your research goal. The AI recommends a methodology, walks you through guided steps, and generates a complete structured survey — with instrument design encoded as executable workflow, not advice.

8 methodologies · AI co-builder · real-time methodology scoring

Simulations
New simulation

02 / 08

Agentic Simulation

Every study runs against AI personas before it touches a human. Catch leading questions, double-barreled items, and scope problems in a 4-minute dry run — not after the fieldwork budget is gone.

cost: $3.40 of $200 budget
flags: Q07 leading · Q14 double-barreled
status: complete · 4m 12s
# next: revise · re-run · ship
AI interviewer lab
Voice recognition

03 / 08

Autonomous AI Interviewer

Deploy a conversational AI agent that conducts open-ended qualitative interviews. Adapts probing in real time. Tracks coverage across the research brief. Extracts structured answers from the full conversation.

Qual that scales. Text and voice. Coverage tracking.

Analytics dashboard

04 / 08

Chat With Your Data

Studies become conversations, not deliverables. Methodology-specific analysis on arrival — opportunity scores for JTBD, friction rates for Journey, segment profiles for segmentation. Streaming, citation-backed answers to ad-hoc questions.

you → which segment is most at risk of churning?
segment-C (power users, year-2+)
churn risk: high (0.72)
top unmet: "predict workflow blockers"
opportunity score 14.2
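The opportunity score shown is consistent with the standard JTBD/ODI formula (Ulwick): importance plus the gap between importance and satisfaction, both on a 0–10 scale. The example inputs below are illustrative, chosen to reproduce the 14.2 in the answer above; the source doesn't state OAIRA's exact inputs.

```python
# Standard JTBD opportunity score (Ulwick/ODI):
#   opportunity = importance + max(importance - satisfaction, 0)
# Both inputs on a 0-10 scale, so scores range 0-20; >10 signals unmet need.
def opportunity(importance: float, satisfaction: float) -> float:
    return importance + max(importance - satisfaction, 0.0)

# Illustrative inputs: importance 7.8, satisfaction 1.4 → 7.8 + 6.4
print(round(opportunity(7.8, 1.4), 1))  # 14.2
```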
Deep research pipelines
New research run

05 / 08

Deep Research Pipelines

An 8-phase agentic workflow — planning through finalization — that ingests documents, performs semantic search, tracks citation chains, and produces confidence-scored synthesis. The analyst runs overnight. You read the brief in the morning.

8 phases · citation chains · confidence scoring · /oaira-corpus for ingestion
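The 8-phase shape above can be sketched as a simple sequential runner. Only "planning" and "finalization" are named in the source; the six middle phase names are assumptions inferred from the capabilities listed (ingestion, semantic search, citation chains, confidence scoring).

```python
# Hypothetical 8-phase pipeline runner. Phase names other than
# "planning" and "finalization" are assumptions, not the real workflow.
PHASES = ["planning", "ingestion", "semantic_search", "extraction",
          "citation_tracking", "synthesis", "confidence_scoring",
          "finalization"]

def run_pipeline(corpus: list[str]) -> dict:
    """Walk the phases in order, carrying shared state between them."""
    state = {"corpus": corpus, "completed": []}
    for phase in PHASES:
        # Each real phase would transform state; here we only record progress.
        state["completed"].append(phase)
    return state

state = run_pipeline(["Q3_Churn.pdf"])
print(len(state["completed"]))  # 8
```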

AI personas · respondent management
Human respondents
Respondent pools

06 / 08

Respondent Management

Human respondents and AI personas in the same system. Build reusable pools. Segment by profile. Edge personas — skeptical, power, churned — persist across studies and reuse across teams.

Real + synthetic · reusable pools · campaign tracking

Skills

A growing suite of Claude skills
— drop them in, compose them.

Install the /oaira skill bundle in Claude and every capability below is one command away — inside the same chat you already use for specs, tickets, and roadmap.

01 / 06
/oaira-study-design

Turns a research goal into a methodology recommendation and a complete draft survey. The whole pre-field workflow in one command.

02 / 06
/oaira-simulate

Runs an AI-persona simulation against any study. Flags problematic questions. Estimates real-fieldwork cost before you spend it.

03 / 06
/oaira-interview

Spins up an autonomous AI interviewer following a research brief. Tracks coverage. Extracts structured responses from the full conversation.

04 / 06
/oaira-analyze

Runs methodology-specific analytics on submissions. Streaming, citation-backed answers to ad-hoc questions against your live study data.

05 / 06
/oaira-report

Drafts a structured, multi-section report in your voice. Exports to PDF, PPTX, or Excel. The report writes itself.

06 / 06
/oaira-enrich

Layers OAIRA findings onto specs, briefs, roadmaps, and decks without leaving the host app. /oaira-corpus ingests source documents for deep research.

Who this is for · Market research teams

Research teams that want modern tooling.

◆ A good fit if you feel the platform gap.

  • You have the methodology. OAIRA handles the orchestration. Study design, simulation, fielding, analysis, reporting — one platform instead of four stitched together.
  • Chat when you want fast. App when you need deep. Ask your data a question in plain language and get a streaming, citation-backed answer. Open the full web app for report work, team sharing, and client readouts.
  • Pre-flight is the rigour you always wanted to enforce. Every study runs against 50 AI personas before it touches a real respondent. Leading frames and double-barreled items caught in four minutes — not after fieldwork.
  • Qual that actually scales. An autonomous AI interviewer — text and voice — that tracks coverage against your research brief and extracts structured responses from the full conversation.
  • Findings that compound, not rot in a PDF. Structured outputs, live analytics, cross-study memory. The next study starts with everything the last one learned.

◇ Probably not the right fit if…

  • You need a full-service managed engagement. We're a platform, not an agency. No consultant deliverable or relationship manager at the end.
  • You need panel recruitment or sample sourcing. OAIRA manages AI personas and your own respondent pools — not third-party panel access or biometric data collection.
  • Your org requires on-premise data storage. We're cloud-native. If data residency requirements are strict, check with us first before committing.
  • You can't invest any time in setup. The platform rewards small, sustained input — especially early on. If there's truly no bandwidth, the timing isn't right yet.
Who this is for · Product teams

Product teams in AI-native tools — feeling the research gap.

◆ A good fit if you feel the gap weekly.

  • You run product, growth, or research on a team where Claude, Cursor, and ChatGPT are the daily workflow.
  • You have live questions every quarter that should get researched — and currently don't, because the cost is too high.
  • You're comfortable with API-first tooling. JSON doesn't scare you. MCP sounds like opportunity.
  • You want rigour without a research function — or you have one, and want to give them 10× leverage.
  • ~30 minutes a week, for the first few weeks, shaping the platform to your stack.

◇ Probably not yet if…

  • You need a managed engagement. We're a platform, not an agency. No consultant deliverable at the end.
  • You need panel ops or biometrics. We do quant, qual via AI, simulation, deep research — not panel recruitment.
  • You're early-stage with one persona. A focused agency or a single Typeform serves you better right now.
  • You can't spare any time for setup. The platform rewards small, sustained input — but it requires some.
How we build

Principles we ship against.

Files, not platforms.

Studies, personas, findings are diffable, portable, version-controllable artifacts. You own them.

API-first, UX-second.

Every capability is a tool before it is a screen. The headless layer is not a feature — it is the foundation.

Methodology is code.

Rigour is something you ship, not something you hope for. Eight frameworks, encoded as executable workflows.

Composable, not monolithic.

OAIRA is a layer in a stack — yours. Every output feeds the next tool in the chain.

The design partner program

A working OAIRA, wired into your stack, with the team that built it on the line.

Free through the program. Hands-on setup — we scaffold your workspace, install the Claude skills, wire up the MCP server, and stand up your first study with you. A direct line to the team. Weekly conversations. The skills, methodologies, and integrations you need get built.

View the full deck → or Apply directly →
Cohort opens summer 2026 · ~8 teams · Free during program · Lifetime founder rate · Reply within 3 working days
David
Worksona · Atomic47 Labs · founder
david@atomic47.co