GIAG Research Output • 2026

Publications

Research newsletters, practitioner articles, and working papers from the Government IT/AI Governance Initiative. All publications are freely available for download.

All Publications

Newsletter • Issue 3
Human Oversight Quality in Agentic AI
Government AI in Practice — April 16, 2026
Agentic AI · Human Oversight · WP-2 Findings

Newsletter • Issue 2
When Policy Moves Faster Than Organizations Can Learn
Government AI in Practice — Late March 2026
M-25-21 · Implementation Fidelity · Federal Compliance

Newsletter • Issue 1
What We Don’t Know About NIST AI RMF in Practice
Government AI in Practice — March 2026
NIST AI RMF · Federal Agencies · Research Agenda

Research Article
Your AI Governance Framework Won’t Save You. Your Contract Might.
March 23, 2026
Procurement · Contract Governance · Vendor Risk

Research Article
The AI Threshold Problem Government IT Can’t Measure
February 6, 2026
AI Thresholds · Digital Sovereignty · State CIOs

Working Paper • WP-2
When Humans Must Intervene
GIAG Research Series — April 2026
Agentic AI · Human Oversight · EU AI Act

Working Paper • WP-1
Implementation Fidelity: Why AI RMF Adoption Metrics Are Measuring the Wrong Thing
GIAG Research Series — March 2026
NIST AI RMF · Governance Metrics · M-25-21

Technical Methods • D-1
Functional Sizing as a Foundation for AI Governance Measurement
GIAG Research Series — March 2026
Function Point Analysis · COSMIC · Governance Metrics

Industry Observations • 4 posts
Implementation Fidelity & RMF Practice
April 2026
NIST AI RMF · Documentation vs. Operations · Risk Registers · GQM

Industry Observations • 4 posts
Agentic AI & Deployment Risk
March – April 2026
Scope Expansion · Integration Depth · Governance Design

Industry Observations • 2 posts
Contract Governance & Procurement
February – March 2026
Vendor Risk · Agentic AI Contracts · Dispute Architecture

Industry Observations • 3 posts
AI Measurement & Functional Sizing
February – April 2026
Functional Sizing · IFPUG / COSMIC · Procurement Baselines

Industry Observations • 2 posts
CIO Leadership & Accountability
March – April 2026
CIO Role · Accountability Architecture · Governance Ownership

What We Publish and Why

ThinkCapital publications span three formats. The Government AI in Practice newsletter delivers research analysis and field observations to practitioners on a regular schedule. Short-form research articles address specific governance questions with enough depth to be useful without requiring a full working paper. The GIAG Research Series working papers and technical methods documents are the most substantive output — intended for researchers, senior practitioners, and policy audiences who need the underlying argument and evidence, not just the conclusions.

All publications are freely available. Working papers and technical methods documents may be cited with attribution for non-commercial research and professional purposes.

Subscribe to Government AI in Practice

The newsletter is published on Substack. New issues go to subscribers first, with archived issues available here. If the research questions GIAG is working on are relevant to your work, the newsletter is the fastest way to stay current.

Subscribe on Substack →

Research Newsletter

3 Issues

Research Articles

2 Articles

Short-form analytical pieces on specific government IT and AI governance questions, distributed through professional networks. Each develops a focused argument grounded in the same measurement discipline as the longer GIAG working papers.

Research Article

Your AI Governance Framework Won’t Save You. Your Contract Might.

March 23, 2026

Treats the Pentagon–Anthropic–OpenAI sequence of late February and early March 2026 as a live case study in AI governance architecture. The dispute was not resolved by NIST RMF compliance, OMB memoranda, or any risk management documentation. It was resolved by contract language.

“The operational governance that actually constrains AI behavior in deployment does not live in policy frameworks. It lives in contract terms, technical configurations, and vendor relationships.”

Examines the supply chain risk designation as a governance architecture story, and draws the implication for every CIO whose AI contract language has not received the same scrutiny as their risk documentation.

Procurement · M-25-22 · Contract Governance · Vendor Risk · Supply Chain · CIO Decision-Making
Download PDF  ·  2 pages
Research Article

The AI Threshold Problem Government IT Can’t Measure

February 6, 2026

Government IT leaders face competing mandates to modernize with AI while maintaining digital sovereignty. The problem is not lack of metrics — agencies are accumulating AI KPIs — but that measurement frameworks built for earlier technology generations cannot price what sovereignty actually costs.

“At what threshold does AI process automation become mission-critical enough to require sovereign controls? You can’t measure jurisdictional control in the same framework you use to measure server utilization.”

Develops the threshold question that matters for state CIO investment decisions and argues for measurement frameworks built around decision logic rather than activity metrics.

AI Thresholds · Digital Sovereignty · Measurement · State CIOs · ROI Frameworks · Mission-Critical AI
Download PDF  ·  1 page

Working Papers & Technical Methods

GIAG Research Series

The GIAG Research Series documents the theoretical and empirical foundations of the initiative’s research streams. Working papers develop the core arguments. Technical methods papers document the measurement approaches applied. These are the reference documents underlying the newsletter analysis and practitioner articles.

Working Paper  •  WP-2

When Humans Must Intervene: A Decision-Grounded Framework for Human Oversight in Government and Commercial Agentic AI Deployments

GIAG Research Series  —  April 2026  ·  Stream Two: Human Oversight Quality

Establishes a decision-level standard for mandatory human intervention in agentic AI deployments — one that operates independently of system risk classification. Identifies five decision characteristics that consistently require a human in the execution chain before action proceeds: irreversibility, consequence transfer, distributional novelty, value conflict, and legal or regulatory significance. Any one criterion is sufficient to trigger mandatory review.

“Nominal oversight — human review that exists on paper but provides no genuine control — is more dangerous than its absence, because it creates documented accountability that is not backed by actual human judgment.”

Provides a five-phase implementation framework for building durable intervention architecture in government and commercial settings, with direct attention to the reviewer quality problem: the gap between oversight presence and oversight substance. Proposes measurement criteria for distinguishing genuine human control from rubber-stamp compliance. Designed to be operationally deployable at the decision-type level without changes to existing AI system architecture.
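
The paper’s single-trigger rule, in which any one of the five decision characteristics mandates a human in the execution chain, can be sketched as a simple predicate. The field and function names below are illustrative placeholders, not the paper’s terminology:

```python
from dataclasses import dataclass, fields

@dataclass
class DecisionProfile:
    """Illustrative flags for the five WP-2 decision characteristics."""
    irreversible: bool            # the action cannot be undone once executed
    consequence_transfer: bool    # costs fall on parties outside the system
    distributional_novelty: bool  # the situation lies outside familiar cases
    value_conflict: bool          # competing legitimate objectives are at stake
    legal_significance: bool      # legal or regulatory effect attaches to the action

def requires_human_review(profile: DecisionProfile) -> bool:
    """Any single criterion is sufficient to trigger mandatory review."""
    return any(getattr(profile, f.name) for f in fields(profile))

# A routine, fully reversible decision proceeds without mandatory review;
# a single irreversible action is enough to require human intervention.
routine = DecisionProfile(False, False, False, False, False)
irreversible = DecisionProfile(True, False, False, False, False)
```

Note that the rule is a disjunction, not a weighted score: the framework deliberately avoids letting low values on four criteria offset a high value on the fifth.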

Agentic AI · Human Oversight · Decision-Level Governance · Irreversibility · Consequence Transfer · Distributional Novelty · NIST AI RMF · OMB M-24-10 · EU AI Act · Reviewer Quality · Implementation Framework

Working Paper  •  WP-1

Implementation Fidelity: Why AI RMF Adoption Metrics Are Measuring the Wrong Thing

GIAG Research Series  —  March 2026

Defines implementation fidelity as the degree to which a governance framework changes actual decision behavior — and distinguishes it from documentation compliance, adoption rates, and reporting scores, which current practice conflates with it.

“Current AI RMF adoption metrics count governance documentation activity. They measure the governance equivalent of lines of code: technically precise, functionally uninformative about what the governance system delivers.”

Draws on the software measurement community’s resolution of the lines-of-code problem to argue that the same conceptual move is required in AI governance. Develops the measurement framework for GIAG Stream One and introduces three concepts that current practice incorrectly treats as proxies for implementation fidelity.

NIST AI RMF · Implementation Fidelity · Governance Metrics · Measurement Frameworks · Function Points · Capers Jones · M-25-21

Technical Methods  •  D-1

Functional Sizing as a Foundation for AI Governance Measurement

GIAG Research Series  —  March 2026  ·  Applying Function Point Analysis and COSMIC to AI System Scope and Complexity

Documents the application of Function Point Analysis and the COSMIC functional size measurement method to the problem of AI system scope characterization. Argues that governance frameworks built on adoption metrics fail at the same structural level that pre-FPA software metrics failed.

“Adoption rates, documentation scores, and compliance checklists in AI governance represent the same category of failure as lines-of-code metrics. They describe activity at the implementation layer without reaching the functional layer where governance either works or does not work.”

Applies Albrecht’s FPA methodology — validated by Capers Jones at Software Productivity Research across 250+ enterprise assessments — to AI system scope characterization, then develops COSMIC-based extensions for the internal computational behavior that FPA alone does not address.
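
For readers unfamiliar with FPA, the core move the paper builds on can be shown in miniature: an unadjusted function point count weights the five IFPUG base functional component types rather than counting implementation artifacts such as lines of code. The weights below are the standard IFPUG average-complexity weights; the example counts are invented, and extending this sizing to AI system scope is the paper’s contribution, not shown here:

```python
# Standard IFPUG average-complexity weights per base functional component type.
IFPUG_AVG_WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def unadjusted_function_points(counts: dict[str, int]) -> int:
    """Weighted sum of component counts -> unadjusted function points (UFP)."""
    return sum(IFPUG_AVG_WEIGHTS[t] * n for t, n in counts.items())

# Invented example: a small AI-assisted intake service.
example = {"EI": 3, "EO": 2, "EQ": 1, "ILF": 2, "EIF": 1}
# 3*4 + 2*5 + 1*4 + 2*10 + 1*7 = 53 UFP
```

The point of the weighting is that size is measured at the functional layer (what the system delivers to users) rather than the implementation layer, which is exactly the distinction the paper argues AI governance metrics currently lack.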

Function Point Analysis · COSMIC · Software Measurement · AI Scope · Governance Metrics · IFPUG · SPR

Industry Observations

LinkedIn Commentary  •  14 posts

Short-form practitioner commentary on government AI governance, published to LinkedIn and professional networks. Each post addresses a specific observation from the field or an inference from the GIAG research program. These are brief contributions to the ongoing professional conversation — not extended arguments — grouped here by subject area for reference.

Implementation Fidelity & RMF Practice
Apr 15, 2026 · RMF Practice · A Diagnostic for AI Risk Registers in Government: Three Questions Most Agencies Cannot Answer
Apr 11, 2026 · RMF Practice · Federal AI Governance Produces Two Kinds of Records. Most Agencies Cannot Tell Them Apart.
Apr 9, 2026 · Governance Timing · Three Governance Clocks, and All of Them Are Running Behind
Apr 3, 2026 · Implementation Fidelity · Documentation Compliance vs. Operational Compliance: The RMF Gap
Agentic AI & Deployment Risk
Apr 8, 2026 · Scope Management · AI Scope Expansion Does Not Look Like a Decision. It Looks Like a Series of Reasonable Accommodations.
Apr 6, 2026 · Integration Risk · The Integration Depth Problem: When AI Becomes Load-Bearing
Mar 27, 2026 · Scope Management · Scope Expansion Without Review Triggers Is a Deployment Risk, Not a Policy Gap
Mar 2, 2026 · Agentic AI · Government CIOs: We May Be Governing the Wrong AI
Contract Governance & Procurement
Mar 3, 2026 · Vendor Risk · Guardrails Were Never the Issue. This Week’s AI Contract Dispute Revealed Something More Important.
Feb 19, 2026 · Procurement · When the Governance Gap Becomes a Contract Dispute
AI Measurement & Functional Sizing
Apr 22, 2026 · Governance Verification · GQM as Governance Verification: The Measurement Gap in NIST AI RMF
Mar 27, 2026 · Functional Sizing · Can We Size AI Systems? Adapting Functional Measurement for Non-Deterministic Software
Feb 6, 2026 · Measurement · The AI Threshold Problem Government IT Can’t Measure
CIO Leadership & Accountability
Apr 23, 2026 · CIO Accountability · The CIO Cannot Be Neutral on AI Governance and Expect the Accountability to Land Somewhere Useful
Mar 2, 2026 · CIO Strategy · Government CIOs: We May Be Governing the Wrong AI

All posts are freely downloadable as PDFs. Each was originally published on LinkedIn; some also appeared on Substack. Follow Michael Bragen on LinkedIn for current commentary as new observations are published.

Citation and use. Working papers and technical methods documents are copyright ThinkCapital LLC. They may be cited and shared for non-commercial research and professional purposes with attribution. Suggested format: Bragen, M. (2026). [Title]. ThinkCapital GIAG Research Series. ThinkCapital LLC. thinkcapital.org/publications.html. For other uses, contact via the Engage page.

Get Involved

Participate in This Research

GIAG is conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct experience in federal, state, or local government AI deployment or oversight.

Participation is a single 30–45 minute interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.

Express Interest in Participating
View Research Program

Accounts of difficulty or partial implementation are as valuable as accounts of success. Direct experience within the past 18 months is the primary qualifier.