Ryan Bahlous-Boldi
(bah-LOOSE BOWL-dee)
I'm an undergrad at UMass Amherst researching how intelligence emerges in adaptive artificial systems that evolve and learn.
My focus includes enabling agents to adapt in dynamic, open-ended environments, understanding how complex behaviors arise from simple rules, and designing systems that capture the flexibility and generality of human intelligence.
I am advised by Lee Spector and Scott Niekum, with whom I work on lexicase selection, genetic programming, reinforcement learning, preference learning, and lots of other weird things!
In the past, I worked with Stefanos Nikolaidis at the University of Southern California and Katia Sycara at Carnegie Mellon University.
My research interests include reinforcement learning, evolutionary computation, deep learning, and cognitive science.
CV / Google Scholar / GitHub / LinkedIn / Education / Talks / Blog
Email: r bahlous bold [at] umass [dot] edu
Representative Papers
Published under the names Ryan Bahlous-Boldi and Ryan Boldi. For a full list, see Google Scholar.
* = equal contribution
Dominated Novelty Search: Rethinking Local Competition in Quality-Diversity
Ryan Bahlous-Boldi*, Maxence Faldor*, Luca Grillotti, Hannah Janmohamed, Lisa Coiffard, Lee Spector, Antoine Cully
Under Submission, 2025
PDF
TL;DR: We propose a new class of quality-diversity algorithms that are simply genetic algorithms with fitness augmentations.
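As a generic illustration of what a fitness augmentation can look like (a standard novelty-style bonus based on descriptor-space neighbors, not necessarily the specific augmentation proposed in this paper), here is a small NumPy sketch; the function name and parameters are illustrative.

```python
import numpy as np

def augmented_fitness(fitness, descriptors, k=5, weight=1.0):
    """Raw fitness plus a novelty bonus.

    fitness: shape (n,) raw fitness values.
    descriptors: shape (n, d) behavior descriptors.
    The bonus is the mean distance to each individual's k nearest neighbors
    in descriptor space, so individuals in sparse regions are rewarded.
    """
    dists = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # ignore distance to self
    nearest = np.sort(dists, axis=1)[:, :k]    # k nearest neighbors
    return fitness + weight * nearest.mean(axis=1)
```

An otherwise ordinary genetic algorithm can then select parents directly on these augmented scores.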
Pareto Optimal Learning from Preferences with Hidden Context
Ryan Bahlous-Boldi, Li Ding, Lee Spector, Scott Niekum
Pluralistic Alignment Workshop @ NeurIPS 2024 & Under Submission, 2024
PDF
TL;DR: We frame reward function inference from diverse groups of people as a multi-objective optimization problem.
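As a toy illustration of the multi-objective framing (a generic Pareto-front filter, not the method from the paper): score each candidate reward function against each group's preferences and keep the candidates that no other candidate dominates. The scores and names below are made up.

```python
def pareto_front(scores):
    """Return the indices of non-dominated candidate reward functions.

    scores[i][g] measures how well candidate i explains the preferences
    of group g (higher is better).
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

# Example: three candidate reward functions scored for two groups of labelers.
print(pareto_front([[0.9, 0.2], [0.5, 0.8], [0.4, 0.4]]))  # -> [0, 1]
```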
Solving Deceptive Problems Without Explicit Diversity Maintenance
Ryan Boldi, Li Ding, Lee Spector
Agent Learning in Open Endedness @ NeurIPS 2023 & GECCO '24 Companion, 2024
PDF / DOI
TL;DR: We present an approach that uses lexicase selection to solve deceptive problems by optimizing a series of defined objectives, implicitly maintaining population diversity.
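For the curious, here is a minimal sketch of plain lexicase selection (the standard algorithm, not this paper's specific setup); the error-matrix layout and function names are illustrative.

```python
import random

def lexicase_select(population, errors):
    """Select one parent with standard lexicase selection.

    population: list of candidate solutions.
    errors: errors[i][j] is the error of population[i] on training case j
            (lower is better).
    """
    candidates = list(range(len(population)))      # work with indices
    case_order = list(range(len(errors[0])))
    random.shuffle(case_order)                     # consider cases in random order

    for case in case_order:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:                   # a unique winner remains
            break

    return population[random.choice(candidates)]   # break remaining ties randomly
```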
Informed Down-Sampled Lexicase Selection: Identifying Productive Training Cases for Efficient Problem Solving
Ryan Boldi, Martin Briesch, Dominik Sobania, Alexander Lalejini, Thomas Helmuth, Franz Rothlauf, Charles Ofria, Lee Spector
Evolutionary Computation (MIT Press), 2024
PDF / DOI
TL;DR: We develop methods to identify the most productive training cases for lexicase selection, improving computational efficiency while maintaining solution quality.
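A rough sketch of the down-sampling idea (plain random down-sampling of cases each generation, reusing the `lexicase_select` sketch above; the informed variant in the paper chooses the subsample more deliberately, and the sampling rate here is made up):

```python
import random

def down_sampled_lexicase(population, errors, rate=0.1):
    """Select one parent per population slot using a random subsample of cases.

    Evaluating only a `rate` fraction of the training cases each generation
    cuts evaluation cost roughly in proportion to the sampling rate.
    """
    num_cases = len(errors[0])
    sample = random.sample(range(num_cases), max(1, int(rate * num_cases)))
    sub_errors = [[row[j] for j in sample] for row in errors]
    return [lexicase_select(population, sub_errors) for _ in population]
```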
News

Dec 19, 2024: I am excited to announce that I have been selected as a 2025 HRI Pioneer by the IEEE/ACM International Conference on Human-Robot Interaction. Looking forward to seeing you all in Melbourne!
Oct 28, 2024: Some of our work on Pareto Optimal Preference Learning (POPL) was accepted to the Pluralistic Alignment workshop at NeurIPS 2024.
Jun 3, 2024: Excited to be at Carnegie Mellon University's Robotics Institute this summer, working with Katia Sycara on emergent communication between diverse agents in multi-agent reinforcement learning settings.
Mar 29, 2024: Happy to announce that I was selected as a 2024 Goldwater Scholar! I am grateful for the support of my mentors, friends, and family. This year, 438 scholarships were awarded to undergrads in the US, with only 30 going to students in the field of Computer Science.
Mar 10, 2024: Three short papers accepted to GECCO 2024! Among them are work on integrating Quality Diversity Optimization with Reinforcement Learning and an extension of our work on the interaction between selection and the down-sampling of training sets. I am excited to present this work in Melbourne, Australia this July!
© 2025 Ryan Bahlous-Boldi
Last Updated: March 2025
Design adapted from Jon Barron