Board & Advisory

I sit at the intersection of data, AI, governance, and enterprise transformation — the four domains boards now treat as interdependent.

Operate. Publish. Teach.

Why this moment, and why this seat.

More than 62% of public-company directors now dedicate full-board agenda time to AI, and boardroom posture is shifting from awareness to integrated governance. The translators those conversations require, directors fluent in economics, architecture, risk surface, and operating model at once, are still rare in the seats that need them most.

My operating tenure, research program, and patent portfolio were built across exactly those four dimensions. I hold five U.S. patents spanning private AI, SOX automation, and massively parallel system architectures. I have published a four-paper research program on Context Compilation Theory, the systems layer between retrieval and reasoning that determines whether enterprise AI is reliable. I currently lead five concurrent enterprise AI transformation programs at a publicly traded global infrastructure operator: a $30M portfolio, 120+ matrixed contributors, daily engagement across the C-suite, and CEO-level platform usage.

That combination — operator at scale, inventor of record, and published researcher — is designed for exactly the kind of director conversation most boards are now trying to have.

Committee fit.

Technology Committee

Enterprise AI strategy. Reference architectures. Platform economics and model routing. Agentic-system reliability. The operating model that turns AI investment into measurable capability. I have architected and operate governed AI platforms at global public-company scale, and I hold U.S. patents in private AI and massively parallel system architectures.

I can evaluate technology decisions at the depth an engineering team makes them and at the altitude a board must defend them. The translation between those two altitudes is the single most frequent failure mode in board-level AI conversations — and it is exactly the translation I make every week in my operating role.

Audit & Risk Committee

AI governance design. Model risk management. Evidentiary traceability and provenance of agentic outputs. Third-party AI risk. The control environment for AI-assisted decisions. I hold a U.S. patent in SOX automation and have worked inside the audit stack for regulated enterprises. My research program includes dedicated papers on evidence blocks and a governed lifecycle for agentic engineering work.

I can read an AI control narrative the way an auditor reads a control narrative, and I can translate that between engineering teams and the committee without losing either audience. That is the voice audit and risk committees need in 2026, and it is rare.

Cybersecurity & Data Committee

Private AI for regulated environments. Data sovereignty. Master Data Management as a control surface, not just a reporting input. Identity and access in agentic systems. The architecture of trust for AI that touches customer data. My Northrop Grumman tenure — Northrop Grumman Fellow, Chief Enterprise Architect, Chief Data Scientist, Top Secret cleared — was spent inside security-first environments. That posture is not a style I adopted; it is the operating default I built from.

The boards most exposed to cyber and data risk benefit most from directors who know how to reason about security before they know how to reason about AI. I bring that ordering by construction.

Innovation Committee

How to separate experimentation from production at enterprise scale. How to build an intake model for AI that does not create shadow-AI chaos. How to operationalize responsible AI without stopping the business. How to allocate innovation budget against a coherent platform thesis rather than against a proliferation of pilots.

Every enterprise AI program I have run was designed against exactly those constraints. I can tell the difference between a pilot that will scale and a pilot that will die — at the pilot-proposal stage, before the committee funds it.

Compensation & Human Capital Committee

AI's effect on organizational design. Operating-model shifts between technology and business functions. The economics of augmented roles. The cultural operating system that makes AI adoption durable. I have built and run the kind of internal AI community program that most enterprises attempt and few execute: a 1,000+ member community with 170–178 live attendees per session inside a global public company. I have also written publicly on what separates programs that compound from programs that do not.

The compensation committees that will matter most this decade are the ones that can govern the re-wiring of human work. That is a conversation I can carry.

What I bring to a boardroom.

A director who can translate between a $3,000 token pipeline and a $1M vendor contract, between a retrieval stack and a board-level risk statement, and between a research paper and a quarterly operating plan, is not yet common on boards. I can be that director.

  • Plain-English translation. I take the most complex AI topic in a board pack and return three sentences the committee can actually govern against.

  • Pattern recognition at scale. I have seen the failure modes at public-company scale, not the glossy versions. Boards benefit most from directors who recognize the failure mode before the business unit does.

  • Research gravitas. Five U.S. patents, a four-paper research program on Context Compilation Theory, two books, and sixteen Pluralsight courses are the external validators that let me bring a point of view to the room with appropriate authority — not borrowed, not downloaded, not paraphrased.

  • Operator credibility, present tense. I am currently leading five concurrent enterprise AI transformation programs, a $30M budget, and 120+ matrixed contributors. My instincts are sharp in real time, not retrospectively.

  • Quiet rigor. I do not bring showmanship to a boardroom. I bring the posture a Northrop Grumman Fellow brings to a mission-critical design review: careful, direct, and focused on the decision the board actually has to make.

How I engage.

I accept a small number of engagements per year, at three depths.

Board director.
Full independent director roles where the company’s exposure to AI, data, and enterprise transformation is material to strategy, risk, or both. Committee fit: Technology · Audit & Risk · Cybersecurity & Data · Innovation · Compensation & Human Capital.
Advisory board.
Ongoing advisory-board membership for private companies, private-equity portfolios, and technology companies where an AI-literate, director-grade voice is additive to an existing board. Typical cadence: quarterly meetings, interim engagement on material decisions, inclusion in annual strategy cycles.
Committee advisor.
Fixed-engagement committee advisory for public-company audit, risk, technology, or innovation committees that want durable AI counsel without a full director seat. Typical cadence: twice-annual committee presence, targeted working sessions on specific board-level AI decisions.

Across all three, my engagement style is the same: materials read in advance, specific questions delivered in writing, no grandstanding in the room, and follow-up memos the committee can cite.

Board bio.

A one-page, committee-ready board bio is available on request at brian@brianletort.ai, or as a download from the Executive Bio page.

The board bio is tailored to the company and committee and is written in the fact-based, action-verb-forward style that director-search partners expect. It does not narrate a career. It communicates value, metrics, governance exposure, intellectual capital, external stakeholder interaction, and committee fit — in one page.

To engage.

Write to brian@brianletort.ai.

Operate. Publish. Teach.