<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>gaussian processes | Katarzyna (Kasia) Kobalczyk</title><link>https://kasia-kobalczyk.github.io/tag/gaussian-processes/</link><atom:link href="https://kasia-kobalczyk.github.io/tag/gaussian-processes/index.xml" rel="self" type="application/rss+xml"/><description>gaussian processes</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Tue, 12 May 2026 00:00:00 +0000</lastBuildDate><image><url>https://kasia-kobalczyk.github.io/media/icon_hu657cd6d5c75f57c23416a235dc5083f6_34177_512x512_fill_lanczos_center_3.png</url><title>gaussian processes</title><link>https://kasia-kobalczyk.github.io/tag/gaussian-processes/</link></image><item><title>LILO: Bayesian Optimization with Natural Language Feedback</title><link>https://kasia-kobalczyk.github.io/publications/articles/lilo/</link><pubDate>Tue, 12 May 2026 00:00:00 +0000</pubDate><guid>https://kasia-kobalczyk.github.io/publications/articles/lilo/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="image" srcset="
/publications/articles/lilo/featured_hufafc4e48bd692fae3f5d9cfd87e87488_1199464_1ab6e35098bb176d20a9885dc65cb2cf.webp 400w,
/publications/articles/lilo/featured_hufafc4e48bd692fae3f5d9cfd87e87488_1199464_43a2ef5263968feef7bc91f2c0098e23.webp 760w,
/publications/articles/lilo/featured_hufafc4e48bd692fae3f5d9cfd87e87488_1199464_1200x1200_fit_q100_h2_lanczos_3.webp 1200w"
src="https://kasia-kobalczyk.github.io/publications/articles/lilo/featured_hufafc4e48bd692fae3f5d9cfd87e87488_1199464_1ab6e35098bb176d20a9885dc65cb2cf.webp"
width="760"
height="564"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Many real-world optimization problems are guided by complex, subjective preferences that are difficult to express as explicit closed-form objectives. In response, we introduce Language-in-the-Loop Optimization (LILO), a Bayesian optimization (BO) framework that employs a large language model (LLM) to translate free-form natural language feedback and prior knowledge from a decision maker into structured preference signals, going beyond the restrictive scalar or pairwise feedback formats typically assumed in preferential BO. The LLM-derived preferences are integrated into a Gaussian process proxy model, enabling principled acquisition-driven exploration with calibrated uncertainty. By placing the LLM in a supporting role rather than using it as the optimizer itself, LILO preserves the sample efficiency and stability of BO while providing a flexible and expressive feedback interface. Across synthetic and real-world benchmarks, LILO consistently outperforms both conventional preference-based BO methods and LLM-only optimizers, with particularly strong gains in feedback-limited regimes.&lt;/p></description></item></channel></rss>