The AI-in-Gov Council of the College of Engineering and Computing invites proposals for Summer 2026 research projects. This seed fund supports focused, empirical work on the development and evaluation of AI systems intended for policy, governance, and public-sector applications. The goal is to generate rigorous evidence, methods, and prototypes that advance our understanding of how AI systems behave and fail in high-stakes institutional settings.
Funding and Timeline
Awards support summer salary, student stipends, research materials, and related project costs.
Budget: Up to $25,000 per project
Number of Awards: Up to three projects
Period of Performance: May 25 to August 24, 2026 (three months)
Proposal Deadline: April 15, 2026, 11:59 PM ET
Project Scope
We seek technically rigorous summer projects with clearly defined deliverables, empirical grounding, and realistic scope. Proposals may focus on model evaluation, system design, measurement methodology, or benchmarking infrastructure. Work should be completable within the three-month performance period and should produce artifacts (datasets, metrics, prototypes, or technical reports) that others can build on.
The research questions below illustrate priority areas. Proposals may extend beyond them provided they demonstrate comparable depth and evaluation rigor.
High-priority Themes
Reliability and Failure Characterization
- How do large language models fail on domain-specific government texts (regulatory language, legal memoranda, policy briefs), and how do those failure modes differ from the ones observed on general-purpose benchmarks?
- What lightweight evaluation protocols can reliably surface brittleness in AI-assisted workflows before deployment in public-sector settings?
- How should uncertainty be represented and communicated in AI-generated summaries or recommendations intended for non-technical decision-makers?
Integrity and Provenance
- Under what conditions do retrieval-augmented generation systems produce unfaithful or unsupported citations, and what metrics best detect these failures in document-grounded policy contexts?
- How can automated consistency checking across large regulatory or policy corpora be made robust to ambiguity, implicit cross-references, and evolving statutory language?
- What are the practical tradeoffs between privacy preservation and output utility in LLM workflows operating over sensitive government records?
Human-AI Decision Architecture
- How do public-sector professionals actually use, override, or defer to AI-generated outputs in structured decision tasks, and what design variables shape that behavior?
- What audit trail structures are sufficient to reconstruct and evaluate AI-informed decisions in accountability-sensitive government contexts?
- When AI systems simplify complex policy language for readability, what semantic or normative content is most at risk of distortion, and how can that risk be measured?
Expected Outcomes
Expected project outcomes include, but are not limited to:
- Prototype systems, detectors, or evaluation pipelines
- Curated datasets and reproducible metrics
- Quantitative analyses and technical reports
- Implementation guidelines or evaluation frameworks for government-relevant AI workflows
Proposal Requirements
Proposals should be no more than three pages, single-spaced, in 11-point font. Each proposal should include the following:
- Project Title
- Team Members (include students as relevant)
- Project Summary (250 to 300 words)
- Research Objectives
- Proposed Methods or Approaches
- Expected Outcomes
- Related Work (demonstrating relevant expertise or prior results)
- Budget (line items with estimated amounts)
Eligibility
Proposals are open to faculty within the College of Engineering and Computing. Interdisciplinary collaborations, including co-PIs from other colleges, are encouraged where they strengthen the proposed work.
Submission
Please submit your proposal using the linked form: Call for Proposals: 2026 CEC AI-in-Gov Summer Research Seed Fund.
For questions about scope, eligibility, or the review process, please contact Peng Warweg at [email protected].