AI Disclosure
Last updated: April 27, 2026
USDWatch uses AI to help organize messy school-district facts into a working case file. The goal is not to make a legal decision for you. The goal is to help you understand evidence strength, gaps, records to request, meeting questions, and next steps.
The evaluation runs through intake, evidence extraction, records research, gap analysis, and final synthesis. It is not a single generic prompt.
Agent workers focus on different jobs: reading intake, extracting evidence, building timelines, finding missing records, and drafting practical next steps.
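The staged flow described above can be pictured as a simple pipeline. This is only an illustrative sketch: the stage names mirror the description, but the function bodies and orchestration here are hypothetical, not USDWatch's actual implementation.

```python
# Hypothetical sketch of the multi-step evaluation pipeline.
# Each stage takes the case file so far and returns an enriched copy.
def intake(case):           return {**case, "intake_done": True}
def extract_evidence(case): return {**case, "evidence": []}
def research_records(case): return {**case, "records": []}
def analyze_gaps(case):     return {**case, "gaps": []}
def synthesize(case):       return {**case, "packet": "draft"}

def run_pipeline(case):
    stages = [intake, extract_evidence, research_records,
              analyze_gaps, synthesize]
    for stage in stages:
        case = stage(case)
    return case
```

The point of the staged shape is that each worker's output becomes the next worker's input, rather than one prompt trying to do everything at once.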
Your case is scoped to your workspace. Attorney, advocacy, media, or parent-group routing is manual opt-in only.
Is USDWatch just a general-purpose chatbot? Short answer: no.
USDWatch can use large language models, but the product is the workflow around them: evidence intake, document parsing, structured schemas, model routing, agent run persistence, workspace isolation, and packet generation.
A general chatbot waits for you to know what to ask. USDWatch tries to ask the case-file questions for you: what happened, what evidence supports it, what is missing, what records could confirm it, and what a parent can do next.
The system reads your narrative and guided intake fields: who was impacted, age or grade band, issue categories, urgency, safety concerns, retaliation concerns, prior actions, and desired outcomes.
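As an illustration, the guided intake fields listed above could be modeled as a structured record. The field names and types below are assumptions for the sketch, not USDWatch's real schema.

```python
from dataclasses import dataclass, field

# Hypothetical model of the guided intake record; actual field
# names, types, and validation in USDWatch may differ.
@dataclass
class IntakeRecord:
    narrative: str
    impacted_party: str                 # who was impacted
    grade_band: str                     # age or grade band
    issue_categories: list[str] = field(default_factory=list)
    urgency: str = "routine"
    safety_concern: bool = False
    retaliation_concern: bool = False
    prior_actions: list[str] = field(default_factory=list)
    desired_outcomes: list[str] = field(default_factory=list)
```

Capturing intake as a typed record rather than free text is what lets later stages test for missing fields instead of guessing.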
Uploaded documents in the Evidence Locker are parsed when possible. The system looks for dates, actors, claims, decisions, notices, records, policy references, and inconsistencies.
We may convert document text into embeddings and store them in a vector database so related passages can be found later. That is not the same as training a new neural network on your case. It is more like creating a searchable evidence map.
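To make the "searchable evidence map" idea concrete, here is a toy sketch. Real deployments use a learned embedding model and a vector database; the three-dimensional vectors and document names below are purely illustrative.

```python
import math

# Toy "evidence map": each passage is stored as a vector, and
# related passages are found by cosine similarity. Nothing is
# trained on the case; vectors are only used for retrieval.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

evidence_map = {
    "email_2024_03_01":  [0.9, 0.1, 0.0],
    "incident_report_7": [0.8, 0.2, 0.1],
    "lunch_menu_april":  [0.0, 0.1, 0.9],
}

def related_passages(query_vec, top_k=2):
    ranked = sorted(evidence_map,
                    key=lambda k: cosine(evidence_map[k], query_vec),
                    reverse=True)
    return ranked[:top_k]
```

A query vector close to the email and incident report surfaces those two passages and leaves the unrelated lunch menu behind, which is the whole job of the map: retrieval, not training.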
The system compares the story and available evidence against the kinds of records a parent would usually need: communications, incident reports, policies, meeting notes, IEP or 504 records, agency letters, and timeline anchors.
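The gap-analysis step described above amounts to a set difference: compare what is on file against what a case would usually need. The record-type names below come from the list above, but the function itself is a minimal sketch, not the production logic.

```python
# Hypothetical gap analysis: flag which of the usual record
# types are missing from the case file.
EXPECTED_RECORDS = {
    "communications", "incident_reports", "policies",
    "meeting_notes", "iep_or_504_records", "agency_letters",
}

def find_gaps(records_on_file):
    return sorted(EXPECTED_RECORDS - set(records_on_file))
```

The output of a step like this is what feeds the records-request drafts in the final packet: each named gap becomes a candidate request.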
The output is a packet: case summary, timeline, evidence checklist, records request drafts, meeting questions, escalation options, and next-step plan. It is informational, not legal advice.
USDWatch is built to route different jobs to different models. Cheaper, faster models can handle extraction and classification. Stronger reasoning models can handle deeper review and synthesis. The configured stack can include DeepInfra-hosted NVIDIA/Nemotron-family models, fallback language models, and document/OCR providers depending on the task and deployment settings.
Model providers can change as quality, cost, and privacy options improve. The important promise is that model use stays purposeful: send the model the minimum useful context needed for the job, then return structured outputs that the app can display and test.
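Task-based routing like this is often just a lookup table. The table below is an assumption for illustration; the tier names are placeholders, not USDWatch's configured models.

```python
# Illustrative routing table only: map each job to a model tier.
# Tier names are hypothetical placeholders.
ROUTES = {
    "extraction":     "fast-cheap-model",
    "classification": "fast-cheap-model",
    "deep_review":    "strong-reasoning-model",
    "synthesis":      "strong-reasoning-model",
}

def route(task, default="fast-cheap-model"):
    """Return the model tier for a task, falling back to a default."""
    return ROUTES.get(task, default)
```

Keeping routing declarative like this is what makes provider swaps cheap: the table changes, the calling code does not.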
AI may misunderstand school jargon, family history, disability context, state-specific rules, or the difference between a rumor and a document-backed fact.
USDWatch tries to show evidence strength, confidence, and gaps so the output does not read like a legal conclusion.
A qualified attorney, advocate, clinician, or emergency professional may see issues that a case evaluation misses. Use USDWatch as preparation, not as final authority.
USDWatch does not build or train a custom public model from your case file. The product may store structured outputs, extracted text, embeddings, logs, and evaluation artifacts needed to operate your workspace.
Treat it as a working draft and checklist. Before filing anything formal, review the facts, attach source records, and consider qualified legal or advocacy help.
Normal evaluation is automated. Human review may happen for support, safety, security, operations, abuse prevention, or if you opt into attorney, advocacy, media, or parent-group support.
Families are uploading sensitive facts. You deserve to know when AI is involved, what it is doing, and where the limits are.
AI disclosure expectations are becoming normal. We watch consumer-protection guidance, AI risk-management frameworks, and transparency rules as they evolve.