We study online linear optimization with matrix variables constrained by the operator norm, a setting whose geometry makes the design of efficient, data-dependent adaptive algorithms challenging. We extend the gradient-based prediction scheme to adaptive matrix online learning and cast algorithm design as the construction of a family of smoothed potentials for the nuclear norm. We define a notion of admissibility for such smoothings and prove that any admissible smoothing yields a regret bound matching the best-known guarantees of one-sided Shampoo. We instantiate this framework with two efficient methods that avoid quadratic projections: an adaptive Follow-the-Perturbed-Leader (FTPL) method based on Gaussian stochastic smoothing, and Follow-the-Augmented-Matrix-Leader (FAML), which uses a deterministic hyperbolic smoothing. Using the online-to-nonconvex conversion, we derive two matrix-based optimizers, Pion and Leon, with convergence guarantees in nonsmooth nonconvex settings.
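The abstract mentions FAML's deterministic hyperbolic smoothing of the nuclear norm only at a high level. As one concrete illustration (a minimal sketch, not necessarily the paper's exact construction), the snippet below applies the standard hyperbolic smoothing of the absolute value, replacing each singular value $s$ with $\sqrt{s^2 + \mu^2}$, to the spectrum; the names `smoothed_nuclear_norm` and `mu` are hypothetical.

```python
# Sketch: hyperbolic smoothing of the nuclear norm (assumed form, for illustration).
# Smooths f(X) = sum_i sigma_i(X) into f_mu(X) = sum_i sqrt(sigma_i(X)^2 + mu^2),
# which is differentiable everywhere; for spectral functions f(X) = sum_i h(sigma_i),
# the gradient is U diag(h'(sigma_i)) V^T.
import numpy as np

def smoothed_nuclear_norm(X: np.ndarray, mu: float = 1e-2):
    """Return the hyperbolically smoothed nuclear norm of X and its gradient."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    value = np.sum(np.sqrt(s**2 + mu**2))
    # h'(s) = s / sqrt(s^2 + mu^2); scale the columns of U by h'(s).
    grad = (U * (s / np.sqrt(s**2 + mu**2))) @ Vt
    return value, grad

# As mu -> 0, the smoothed value approaches the nuclear norm itself.
X = np.random.default_rng(0).standard_normal((4, 3))
val, g = smoothed_nuclear_norm(X, mu=1e-3)
print(val, np.linalg.norm(X, "nuc"))  # nearly coincide for small mu
```

Since the smoothing is differentiable, its gradient can drive a smoothed follow-the-leader update directly, avoiding the SVD-based projection onto the operator-norm ball that a projected method would require.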