About These Publications
What We Publish and Why
ThinkCapital publications span three formats. The Government AI in Practice newsletter delivers research analysis and field observations to practitioners on a regular schedule. Short-form research articles address specific governance questions with enough depth to be useful without requiring a full working paper. The GIAG Research Series working papers and technical methods documents are the most substantive output — intended for researchers, senior practitioners, and policy audiences who need the underlying argument and evidence, not just the conclusions.
All publications are freely available. Working papers and technical methods documents may be cited with attribution for non-commercial research and professional purposes.
Subscribe to Government AI in Practice
The newsletter is published on Substack. New issues go to subscribers first, with archived issues available here. If the research questions GIAG is working on are relevant to your work, the newsletter is the fastest way to stay current.
Research Newsletter
3 Issues
Research Articles
2 Articles
Short-form analytical pieces on specific government IT and AI governance questions, distributed through professional networks. Each develops a focused argument grounded in the same measurement discipline as the longer GIAG working papers.
Your AI Governance Framework Won’t Save You. Your Contract Might.
March 23, 2026
Treats the Pentagon–Anthropic–OpenAI sequence of late February and early March 2026 as a live case study in AI governance architecture. The dispute was not resolved by NIST RMF compliance, OMB memoranda, or any risk management documentation. It was resolved by contract language.
Examines the supply chain risk designation as a governance architecture story, and draws the implication for every CIO whose AI contract language has not received the same scrutiny as their risk documentation.
The AI Threshold Problem Government IT Can’t Measure
February 6, 2026
Government IT leaders face competing mandates to modernize with AI while maintaining digital sovereignty. The problem is not lack of metrics — agencies are accumulating AI KPIs — but that measurement frameworks built for earlier technology generations cannot price what sovereignty actually costs.
Develops the threshold question that matters for state CIO investment decisions and argues for measurement frameworks built around decision logic rather than activity metrics.
Working Papers & Technical Methods
GIAG Research Series
The GIAG Research Series documents the theoretical and empirical foundations of the initiative’s research streams. Working papers develop the core arguments. Technical methods papers document the measurement approaches applied. These are the reference documents underlying the newsletter analysis and practitioner articles.
When Humans Must Intervene: A Decision-Grounded Framework for Human Oversight in Government and Commercial Agentic AI Deployments
GIAG Research Series — April 2026 · Stream Two: Human Oversight Quality
Establishes a decision-level standard for mandatory human intervention in agentic AI deployments — one that operates independently of system risk classification. Identifies five decision characteristics that consistently require a human in the execution chain before action proceeds: irreversibility, consequence transfer, distributional novelty, value conflict, and legal or regulatory significance. Any one criterion is sufficient to trigger mandatory review.
Provides a five-phase implementation framework for building durable intervention architecture in government and commercial settings, with direct attention to the reviewer quality problem: the gap between oversight presence and oversight substance. Proposes measurement criteria for distinguishing genuine human control from rubber-stamp compliance. Designed to be operationally deployable at the decision-type level without changes to existing AI system architecture.
Implementation Fidelity: Why AI RMF Adoption Metrics Are Measuring the Wrong Thing
GIAG Research Series — March 2026
Defines implementation fidelity as the degree to which a governance framework changes actual decision behavior — and distinguishes it from documentation compliance, adoption rates, and reporting scores, which current practice conflates with it.
Draws on the software measurement community’s resolution of the lines-of-code problem to argue that the same conceptual move is required in AI governance. Develops the measurement framework for GIAG Stream One and introduces three concepts that current practice incorrectly treats as proxies for implementation fidelity.
Functional Sizing as a Foundation for AI Governance Measurement
GIAG Research Series — March 2026 · Applying Function Point Analysis and COSMIC to AI System Scope and Complexity
Documents the application of Function Point Analysis and the COSMIC functional size measurement method to the problem of AI system scope characterization. Argues that governance frameworks built on adoption metrics fail at the same structural level that pre-FPA software metrics failed.
Applies Albrecht’s FPA methodology — validated by Capers Jones at Software Productivity Research across 250+ enterprise assessments — to AI system scope characterization, then develops COSMIC-based extensions for the internal computational behavior that FPA alone does not address.
Industry Observations
LinkedIn Commentary • 14 posts
Short-form practitioner commentary on government AI governance, published to LinkedIn and professional networks. Each post addresses a specific observation from the field or an inference from the GIAG research program. These are brief contributions to the ongoing professional conversation — not extended arguments — grouped here by subject area for reference.
Posts are freely downloadable as PDFs. Each was originally published on LinkedIn; some were also published on Substack. Follow Michael Bragen on LinkedIn for current commentary as new observations are published.
Citation and use. Working papers and technical methods documents are copyright ThinkCapital LLC. They may be cited and shared for non-commercial research and professional purposes with attribution. Suggested format: Bragen, M. (2026). [Title]. ThinkCapital GIAG Research Series. ThinkCapital LLC. thinkcapital.org/publications.html — For other uses, contact via the Engage page.
Get Involved
Participate in This Research
GIAG is conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct experience in federal, state, or local government AI deployment or oversight.
Participation is a single 30–45 minute interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.
Accounts of difficulty or partial implementation are as valuable as accounts of success. Direct experience within the past 18 months is the primary qualifier.