Generative AI • Formal Methods • HCI

Halley Young

Novelty, diversity, and verification in LLMs

I build algorithms and tools that expand people’s option space—producing many high‑quality, meaningfully different candidates under human‑meaningful constraints—so AI augments creativity and engineering without collapsing into a single “safe” style.

About

One program, many domains: novelty, diversity, and verification

I’m a postdoctoral researcher at Microsoft Research NYC. My work sits at the intersection of generative modeling, human‑AI interaction, and formal methods. Across creative domains (music, poetry, identity) and technical domains (code and verification), I focus on a single question: How do we generate many valid possibilities—and let humans steer them—while retaining guarantees when they matter?

Option Space

Diversity, novelty, and anti‑homogenization: keep the set of “good answers” broad rather than collapsing to the mean.

Control

Steering interfaces and “counter‑egos”: systems that amplify decisions instead of replacing the decision‑maker.

Guarantees

Structure, constraints, and verification: when reliability matters, make it explicit and checkable.

If you’re looking for a one‑liner: I work on distribution design under constraints—for art, for code, and for systems that people can trust.

Research Program

A common toolkit: constraints/specifications → many valid candidates → human steering

The distribution, not just the best guess

Many deployed systems optimize for a single “best” output, which can inadvertently encourage homogenization and reduce user agency. I instead treat generation as set construction: define what counts as valid, produce a diverse set of valid candidates, and expose controllable levers so people can navigate the space.
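The set-construction view above can be sketched in a few lines. This is a minimal illustration, not a method from any specific paper: `is_valid`, `distance`, and the greedy farthest-point selection are assumptions standing in for domain-specific validity predicates, similarity measures, and diversity-aware search.

```python
def select_diverse(candidates, is_valid, distance, k):
    """Generation as set construction, in miniature:
    (1) filter by an explicit validity predicate,
    (2) keep a small set whose members are maximally spread out
        under a user-chosen distance (greedy farthest-point selection).
    """
    valid = [c for c in candidates if is_valid(c)]
    if not valid:
        return []
    chosen = [valid[0]]
    while len(chosen) < min(k, len(valid)):
        # Pick the valid candidate farthest from everything already chosen.
        best = max(
            (c for c in valid if c not in chosen),
            key=lambda c: min(distance(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen
```

Swapping in a different `distance` (edit distance over lyrics, interval profiles over melodies) changes which "region of the space" the returned set covers, which is exactly the lever a steering interface would expose.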

Option Space

Diversity & novelty as first‑class objectives

I build evaluation metrics and generation methods that resist collapse to a statistical mean, especially in settings where innovation, exploration, and perspective matter.

  • Generative collisions / novelty metrics for text and structured outputs
  • Sampling and search methods that produce many high‑quality alternatives, not one answer
  • Diversity‑aware objectives aligned with real user goals
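As one concrete example of a collapse-resistant metric, distinct-n is a widely used corpus-level diversity proxy for generated text (the specific metrics in my work differ; this is illustrative):

```python
def distinct_n(texts, n=2):
    """Fraction of n-grams across a candidate set that are unique.

    1.0 means no n-gram is reused anywhere in the set; values near 0
    signal collapse toward a few stock phrasings.
    """
    ngrams = []
    for t in texts:
        toks = t.split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)
```

Tracking a metric like this over a candidate set, rather than scoring each output alone, is what makes diversity a first-class objective instead of an afterthought.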

Control

Steering without erasing the human

I focus on controllability that feels human‑meaningful: compositional levers, identity‑grounded distributions, and collaborations that treat AI as a “counter‑ego,” not a replacement.

Guarantees

Constraints, structure, and verification

When outputs must satisfy hard rules—musical form, program semantics, safety constraints—explicit structure and formal methods become the backbone of trustworthy generation.

Work

Methods, systems, and public scholarship

Method · Relational‑Constraint Generation

Global coherence with checkable structure

Generative models for sequence data that obey relational constraints—useful when “locally good” isn’t enough and outputs must satisfy global structure.
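A toy version of the "locally good isn't enough" point: check a global relational constraint on a generated sequence, and reject candidates that break it. The AABA form and exact-match test below are illustrative stand-ins for the richer relational constraints in the paper.

```python
def satisfies_form(sections, form="AABA"):
    """Sections sharing a form letter must match; a piece can be
    locally plausible bar-by-bar yet fail this global pattern."""
    reference = {}
    for label, section in zip(form, sections):
        if label in reference and reference[label] != section:
            return False
        reference.setdefault(label, section)
    return len(sections) == len(form)

def generate_until_valid(sampler, check, max_tries=1000):
    """Rejection-sample candidates until one satisfies the constraint
    (a baseline; constraint-aware generation bakes the check into
    sampling instead of filtering after the fact)."""
    for _ in range(max_tries):
        candidate = sampler()
        if check(candidate):
            return candidate
    return None
```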

NeurIPS 2022 paper

Method · Program‑Synthesis‑Guided Generation

Latent structure discovered by synthesis

Using program synthesis to discover and enforce latent structure, making generation more interpretable and controllable.

ICML 2019 / arXiv

Method · Compositional Steering

Control without changing model parameters

Steering music transformer generation by composing controllable mechanisms, keeping the model’s capabilities while adding user levers.

HAI‑GEN 2022 paper
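The shape of sampling-time composition can be sketched as follows. The additive scheme and the `steer` name are an illustrative assumption, not the paper's exact mechanism: each control function scores every vocabulary item (e.g. "prefer the target key", "avoid large leaps"), and scores are added to the frozen model's logits before softmax.

```python
import math

def steer(base_logits, control_fns, weights):
    """Compose steering signals at sampling time; model weights stay frozen.

    Returns a probability distribution over the vocabulary after adding
    weighted control scores to the base logits.
    """
    steered = [
        l + sum(w * f(i) for f, w in zip(control_fns, weights))
        for i, l in enumerate(base_logits)
    ]
    # Numerically stable softmax over the steered logits.
    z = max(steered)
    exps = [math.exp(s - z) for s in steered]
    total = sum(exps)
    return [e / total for e in exps]
```

Because controls compose additively, users can stack, reweight, or drop levers without retraining, which is the sense in which capability is preserved while controllability expands.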

System · AI for Creativity

aiforcreativity.net

Essays and public‑facing projects on identity, authorship, and the future of creative work with generative systems.

System · Identity → Generative Distributions

From statements of self to usable creative option spaces

Methods for turning individual identity statements into “desirable” artistic distributions—supporting exploration rather than prescribing a single style.

HAI‑GEN 2024 paper · Project page

Demos · Music Generation Experiments

SoundCloud

Practical demonstrations of controllable music generation: tools that help musicians iterate, explore, and keep authorship.

Featured · JuGeo — Judgment Geometry

Novelty, diversity, and verification in one system · thehalleyyoung.github.io/jugeo

Proof‑carrying Python verification via sheaf‑theoretic judgment geometry. Multi‑channel evidence (solver proofs, runtime witnesses, AI proposals) with trust accounting and cohomological obstruction analysis.

  • AI‑generated proofs as first‑class evidence alongside Z3 and Lean
  • Diverse evidence channels: many valid perspectives per judgment, not a single proof term
  • 960+ modules, 93 Lean proofs, 13 decidable SMT fragments
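To make "trust accounting" concrete, here is a hypothetical toy only: JuGeo combines channels via sheaf-theoretic gluing, not a weighted vote, and the `Evidence` type, channel names, and threshold below are all invented for illustration. The sketch shows the basic idea of weighing heterogeneous evidence while letting fully trusted refutations veto.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    channel: str   # e.g. "smt", "lean", "runtime", "llm" (illustrative)
    verdict: bool  # did this channel accept the judgment?
    trust: float   # prior weight in [0, 1] for this channel

def accept(evidence, threshold=0.9):
    """Toy trust accounting over heterogeneous evidence channels:
    accept when trust-weighted agreement clears a threshold and no
    fully trusted channel rejects the judgment."""
    if any(e.trust >= 1.0 and not e.verdict for e in evidence):
        return False  # a proof-level refutation vetoes everything
    total = sum(e.trust for e in evidence)
    if total == 0:
        return False
    agree = sum(e.trust for e in evidence if e.verdict)
    return agree / total >= threshold
```

The point of multi-channel evidence is visible even in this caricature: a cheap, partially trusted AI proposal can raise confidence, but only a solver- or Lean-level artifact carries veto weight.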

Selected Publications

Unified by controllability, structure, and the production of many valid candidates

Across venues, the through‑line is consistent: make the implicit structure explicit, generate a set of valid candidates, and give people meaningful control over which region of the space they explore.

Halley Young
HAI‑GEN @ IUI Workshops 2024

Shows how identity statements can be converted into usable, steerable distributions—supporting exploration and agency rather than prescribing a single “correct” output.

Joint work with Osbert Bastani and Maxwell Du
NeurIPS 2022

Develops methods for measuring and enforcing global coherence in generated sequences, maintaining both local quality and structural integrity.

Joint work with Jesse Engel, Vincent Dumoulin, Pablo Castro, and Anna Huang
HAI‑GEN @ IUI Workshops 2022

Introduces steering mechanisms that add user control without retraining or parameter manipulation, preserving model capabilities while expanding controllability.

Joint work with Osbert Bastani and Mayur Naik
ICML 2019

Uses program synthesis to discover latent structure that can guide generation, laying groundwork for interpretable and controllable generative modeling.

Experience

Roles spanning controllable generation, creative AI, and reliable systems

2023–Present

Microsoft Research, NYC

Postdoctoral Researcher

Developing algorithms and evaluation methods for diverse, steerable generation and for reliability‑oriented tooling that connects LLMs to checkable artifacts.

2021–2023

Google Brain / Magenta

Research Intern

Built controllable generative music models and tooling focused on user agency, avoiding both de‑agentification and homogenization.

2019–2020

Riffit

Software Engineer / Researcher

Research in automated music production for therapeutic purposes, resulting in a patent.

2020–2021

University of Pennsylvania

Teaching Assistant

TA for Graduate AI; Head TA for Graduate Applied ML. Emphasized both technical rigor and real‑world impacts of deployment.

Education

2017–2023

PhD in Computer Science

University of Pennsylvania

Research on controllable generative models and structure‑aware generation (including neurosymbolic approaches). Chateaubriand Fellow.

2014–2017

B.A. magna cum laude, Computer Science

New York University

Selected honors include the Faculty Memorial Prize and the Computer Science Prize for Outstanding Performance in Computer Science.

2013–2014

B.A. Candidate

University of Chicago

Awarded the Lilian Gertrude Selz Prize.

Skills

Programming Languages

Python (PyTorch, JAX)
Julia
C
Matlab
Haskell

Research Areas

Generative Models
Novelty & Diversity Metrics
Controllability
Neurosymbolic Models
Program Synthesis
Formal Methods
Human‑AI Interaction
Music & Creative AI

Contact

Reach out for collaboration, speaking, or advising

© 2025–2026 Halley Young • Last updated March 2026