
I'm a third-year computer science PhD student at DIRO, Université de Montréal, advised by Bang Liu. I study the structured use of LLMs within larger pipelines and agent systems, from both robustness and sociotechnical perspectives. Here's my resume (and the academic version).

Previously I was a speech scientist at Cobalt Speech & Language, a company that designs custom speech recognition, text-to-speech, and dialogue models. While at Cobalt I worked on some neat projects, including language modeling for recognizing air traffic control speech and creating an online training system for ASR models.

I graduated from BYU with a BS in Applied and Computational Mathematics (ACME) with an emphasis in linguistics and a minor in computer science. ACME's rigorous curriculum includes graduate-level courses in algorithms, analysis, optimization, statistics, data science, optimal control, and machine learning.

During my undergrad I interned with Cobalt Speech, as well as Emergent Trading, an automated trading firm that made the news for reporting a problem in a Eurodollar exchange rule that unfairly favored larger competitors. (I developed the analysis tools that were used to track the issue down and determine how a competitor was taking advantage of the rule.)

contact (he/him)

Around the web I'm known by the username kylrth. I prefer to be contacted through the Matrix protocol (@kyle:kylrth.com). (If you'd like an account on my Matrix server, follow the instructions here.) My GPG public key is here.

Matrix  /  email  /  GitHub  /  LinkedIn  /  phone  /  WhatsApp  /  Signal  /  Session

research

Here's my academic CV.

Several of my projects center on giving LLM systems formal structure for procedural (how-to) knowledge. Explicit structure for procedure planning lets developers and users "open the hood" on the planning and action phases of a pipeline, and lets us build "skill libraries" of procedures that can be adapted to new tasks.

analogy-augmented generation

We developed a simple formalism for procedural knowledge, built a retrieval system around this formal structure, and designed a custom LLM pipeline that retrieves known procedures and adapts them to new tasks by analogy. We tested this system on three datasets: cooking recipes, coding tutorials, and step-by-step math solutions. This version is on arXiv, and we're now building a version that uses a more complex, graph-based formalism.
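To make the idea concrete, here is a minimal sketch of what a structured procedure record and retrieval step might look like. This is illustrative only: the `Procedure` fields and the token-overlap similarity are invented for this example and are not the formalism or retriever from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Procedure:
    # A goal plus an ordered list of steps. (Hypothetical schema,
    # much simpler than the paper's actual formalism.)
    goal: str
    steps: list[str] = field(default_factory=list)

def retrieve(query: str, library: list[Procedure]) -> Procedure:
    # Rank stored procedures by Jaccard similarity between the
    # query tokens and each procedure's goal tokens.
    def score(p: Procedure) -> float:
        q, g = set(query.lower().split()), set(p.goal.lower().split())
        return len(q & g) / len(q | g) if q | g else 0.0
    return max(library, key=score)

library = [
    Procedure("bake a loaf of bread", ["mix", "knead", "proof", "bake"]),
    Procedure("solve a quadratic equation", ["identify a, b, c", "apply the formula"]),
]
best = retrieve("how do I bake sourdough bread", library)
```

The retrieved procedure would then be handed to the LLM pipeline for adaptation to the new task, rather than used verbatim.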

RedHarbour

I am currently leading a project to design an open-source, privacy-oriented, extensible chat UI and LLM application framework. The application framework defines a set of "skills" as Python code that calls upon resources such as LLMs, retrievers, local files, conversational memory, multi-modal models, or web services. These sandboxed skill scripts can be developed and shared by humans, or automatically generated and fine-tuned using LLMs. The project is designed to logically separate the skill system from the local UI application so the former can be generally applicable in new environments. This will build on the formal representations we have developed for AAG.
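As a rough illustration of the skill idea, the sketch below shows a skill as a plain Python function plus a declaration of the resources it needs, with the sandbox injecting only those resources. Every name here (`Skill`, `requires`, the stub resources) is invented for illustration; this is not RedHarbour's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    # Hypothetical skill record: a name, a list of declared
    # resources, and the function the sandbox will run.
    name: str
    requires: list[str]          # e.g. ["llm", "retriever", "files"]
    run: Callable[..., str]

def summarize_file(resources: dict) -> str:
    # The sandbox passes in only the resources the skill declared.
    text = resources["files"].read("notes.txt")
    return resources["llm"].complete(f"Summarize:\n{text}")

summarize = Skill("summarize_file", ["llm", "files"], summarize_file)

# Stub resources so the sketch is runnable outside any sandbox.
class StubLLM:
    def complete(self, prompt: str) -> str:
        return "summary: " + prompt[:20]

class StubFiles:
    def read(self, path: str) -> str:
        return "meeting notes..."

result = summarize.run({"llm": StubLLM(), "files": StubFiles()})
```

Declaring resources up front is what makes sandboxing tractable: the runtime can grant a skill exactly the capabilities it lists and nothing else.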

code search

In this work we developed a pipeline that first searches over the unique functions called in the codebase, then uses a standard static analysis tool to find all snippets that call the top-ranked functions, and finally filters those snippets with several dense models of varying sizes (and runtimes), keeping each snippet based on the models' confidence. The pipeline's thresholds are optimized for search speed using simulated annealing over a small dataset of query-snippet pairs, subject to constraints on false positives and false negatives. This system has been deployed in production on the Moderne platform, where it performs high-quality code search on industrial-scale codebases.
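The threshold-tuning step can be sketched with a generic simulated annealing loop. The cost function below is a toy stand-in: the speed and error models are fabricated smooth functions of two thresholds, not the project's real measurements, and the penalty term mimics the false-positive constraint.

```python
import math
import random

random.seed(0)

def cost(thresholds):
    # Toy objective: expected search time plus a large penalty when
    # the (made-up) false-positive rate exceeds its budget.
    t_small, t_large = thresholds
    time = 1.0 - 0.6 * t_small + 0.4 * t_large   # fabricated speed model
    fp = max(0.0, 0.3 - t_large)                 # fabricated error model
    penalty = 100.0 * max(0.0, fp - 0.05)
    return time + penalty

def anneal(x, steps=5000, temp0=1.0):
    best, best_c = x, cost(x)
    cur, cur_c = x, best_c
    for i in range(steps):
        temp = temp0 * (1 - i / steps) + 1e-6
        # Propose a small Gaussian perturbation, clipped to [0, 1].
        cand = tuple(min(1.0, max(0.0, v + random.gauss(0, 0.1))) for v in cur)
        c = cost(cand)
        # Accept downhill moves always; uphill moves with a
        # temperature-dependent probability.
        if c < cur_c or random.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
    return best, best_c

best, best_c = anneal((0.5, 0.5))
```

In the real pipeline the cost would come from timing the cascade on the query-snippet dataset, with the constraint terms measured rather than modeled.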

other stuff