Document QA Check

Treat a bot as a checklist: each task is one check run against a document you upload, and the final task compiles a pass/fail report.

What this bot does

You upload a document (an insurance claim, a construction estimate, a blueprint, a contract) and the bot runs a checklist against it. Each task is one individual check. The final task rolls the results into a pass/fail report with every failure called out in plain English.

This pattern fits any manual QA work you do today that is repeatable and describable. If you can write down what a human reviewer looks for, you can put it in a bot and stop doing it by hand. Insurance claims, construction estimates, architectural blueprints, lease agreements, compliance filings, invoice audits: same shape, different checks.

Soul

You are a claims QA reviewer for a property insurance carrier. You read
submitted claim documents and check them against our intake standards.

On every check you run, you must:
- Reference the exact field or section of the document you are checking
- Return a clear PASS or FAIL for the check
- On FAIL, quote the offending text or describe the missing field
- Never guess at missing information; if a required field is absent, FAIL

Never:
- Relax a check because the document "looks fine overall"
- Combine multiple checks into one result
- Approve a check that depends on data outside the document

Tasks

Each task is one check. All checks can run in parallel in stage 1 because they are independent. Stage 2 compiles the results.

Stage 1, checks (no tools, parallel)

  • Task A: Claimant identity. “Read the uploaded document. Check that it contains a full legal name, a policy number, and a date of loss. PASS or FAIL. On FAIL, list which of the three fields are missing or malformed.”
  • Task B: Date of loss validity. “Check that the date of loss is a real calendar date, not in the future, and within the last 180 days. PASS or FAIL. Quote the date you found.”
  • Task C: Incident description. “Check that the incident description is at least two sentences, names a covered peril (fire, water, wind, theft, collision, or liability), and names the property location. PASS or FAIL. Quote the description.”
  • Task D: Photo evidence. “Check that the document references at least three attached photos of the damage. PASS or FAIL. List the photo references you found.”
  • Task E: Itemised loss schedule. “Check for an itemised list of lost or damaged items with per-item quantities and dollar amounts. PASS or FAIL. If present, confirm the items sum to the total loss figure stated elsewhere in the document.”
  • Task F: Signatures and attestation. “Check for a signed attestation by the claimant and a date next to the signature. PASS or FAIL. Quote the attestation text.”
  • Task G: Exclusions check. “Flag any language in the document that suggests a policy exclusion applies (flood in a non-flood policy, wear-and-tear, intentional act, business use on a personal policy). PASS if none apply. FAIL and quote the language if any do.”
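If these checks ever migrate from prompt-only tasks to tool-backed ones, the mechanical parts of Task B and Task E reduce to a few lines of deterministic code. A minimal sketch in Python — the function names and the item record shape are assumptions for illustration, not part of the bot spec; the 180-day window comes from Task B:

```python
from datetime import date, timedelta

def check_date_of_loss(date_of_loss: date, today: date, window_days: int = 180):
    """Task B logic: a real date, not in the future, within the window."""
    if date_of_loss > today:
        return ("FAIL", f"date of loss {date_of_loss} is in the future")
    if today - date_of_loss > timedelta(days=window_days):
        return ("FAIL", f"date of loss {date_of_loss} is older than {window_days} days")
    return ("PASS", f"date of loss {date_of_loss}")

def check_loss_schedule(items, stated_total):
    """Task E logic: per-item qty x amount must sum to the stated total."""
    if not items:
        return ("FAIL", "no itemised loss schedule found")
    computed = sum(i["qty"] * i["amount"] for i in items)
    if abs(computed - stated_total) > 0.005:  # tolerance for rounding
        return ("FAIL", f"items sum to {computed:.2f}, document states {stated_total:.2f}")
    return ("PASS", f"items sum to stated total {stated_total:.2f}")
```

Checks like C (description quality) and G (exclusion language) stay with the model; only the arithmetic and date logic benefit from being deterministic.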

Stage 2, compile (no tools)

  • Task H: Report. “Given the PASS/FAIL from tasks A through G, produce a QA report. Start with an overall verdict: READY FOR ADJUSTER if every check passed, or RETURN TO SUBMITTER if any check failed. Then list each check by name with its PASS or FAIL and the supporting quote or note from that task. Keep it under 400 words.”
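Task H's verdict rule is simple enough to state as code, which is a useful sanity check when you tune the prompt. A minimal sketch, assuming the stage-1 results arrive as a mapping from check name to a (status, note) pair — that shape is an assumption, not something the bot guarantees:

```python
def compile_report(results: dict) -> str:
    """Task H logic: overall verdict first, then each check with its note."""
    any_fail = any(status == "FAIL" for status, _ in results.values())
    verdict = "RETURN TO SUBMITTER" if any_fail else "READY FOR ADJUSTER"
    lines = [f"Verdict: {verdict}"]
    for name, (status, note) in results.items():
        lines.append(f"- {name}: {status}: {note}")
    return "\n".join(lines)
```

The point of the sketch: the verdict is a pure function of the seven statuses, so if the report ever disagrees with its own line items, the compile prompt (not the checks) is what needs tightening.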

Input data

The claim file is uploaded when you run the bot. If you also want structured fields passed alongside the file:

{
  "claim_id": "CLM-2026-04837",
  "submitter_email": "adjuster@example.com"
}

For text-only documents, paste the body into input data instead of uploading a file. For JSON payloads (for example, structured claim data from an intake form), the same pattern works: each task reads the JSON and returns PASS or FAIL.
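For the JSON-payload case, a per-task check is just field validation. A hedged sketch, assuming a flat payload with illustrative field names (your intake form's fields will differ):

```python
import json

REQUIRED_FIELDS = ["claim_id", "submitter_email", "date_of_loss"]  # illustrative

def check_payload(raw: str):
    """Check a JSON intake payload for required top-level fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return ("FAIL", f"payload is not valid JSON: {e}")
    missing = [f for f in REQUIRED_FIELDS if not data.get(f)]
    if missing:
        return ("FAIL", "missing fields: " + ", ".join(missing))
    return ("PASS", "all required fields present")
```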

Schedule

Most QA bots run on demand rather than on a schedule. Trigger them when a new document lands. If you review a queue on a regular cadence, schedule the bot to sweep the queue every morning.

What to tune

  • If a check keeps failing documents that should pass, tighten its prompt to spell out exactly what counts as a PASS and what counts as a FAIL.
  • If the report buries the important failures, ask task H to lead with the first FAIL before listing the rest.
  • To reuse the pattern for a different document type, rewrite the soul and the stage 1 check list. The shape (parallel checks in stage 1, compile in stage 2) stays the same.
  • Add a stage 3 with the SendEmail tool to auto-route the report to the submitter on FAIL or to the next-stage reviewer on PASS.
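The routing rule in the last bullet can be sketched as a tiny function. SendEmail is the bot tool that actually delivers; the helper below only decides the recipient and subject line, and every name in it is illustrative:

```python
def route_report(verdict: str, submitter: str, reviewer: str):
    """Stage 3 logic: failures go back to the submitter, passes move forward."""
    if verdict == "READY FOR ADJUSTER":
        return (reviewer, "QA passed: claim ready for adjuster review")
    return (submitter, "QA failed: corrections needed before resubmission")
```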