Preprint, working paper. Year: 2023

Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels

Abstract

Surrogate kernel-based methods offer a flexible solution to structured output prediction by leveraging the kernel trick in both input and output spaces. In contrast to energy-based models, they avoid paying the cost of inference during training, while enjoying statistical guarantees. However, without approximation, these approaches can only be applied to limited amounts of training data. In this paper, we propose to equip surrogate kernel methods with sketching-based approximations, seen as low-rank projections of both the input and output feature maps. We showcase the approach on Input Output Kernel Ridge Regression (also known as Kernel Dependency Estimation) and provide excess-risk bounds that can in turn be directly transferred to the final predictive model. An analysis of the time and memory complexity shows that sketching the input kernel mostly reduces training time, while sketching the output kernel reduces inference time. Furthermore, we show that Gaussian and sub-Gaussian sketches are admissible, in the sense that they induce projection operators ensuring a small excess risk. Experiments on different tasks consolidate our findings.
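As a minimal illustration of the input-side approximation, the sketch below applies a Gaussian (sub-Gaussian) sketch to the input Gram matrix of plain kernel ridge regression, reducing the n x n linear system to an s x s one with s << n. The toy data and all variable names are hypothetical; the paper's full estimator additionally sketches the output kernel and performs a decoding step over candidate outputs, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.1):
    """Pairwise Gaussian (RBF) kernel between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy data standing in for (input, output-feature) pairs.
n, d, p = 300, 5, 3
X = rng.normal(size=(n, d))
Y = np.hstack([np.sin(X[:, :1]), np.cos(X[:, 1:2]), X[:, 2:3] ** 2])

K = rbf_kernel(X, X)           # n x n input Gram matrix
s, lam = 50, 1e-3              # sketch size s << n, ridge parameter

# Gaussian sketch matrix S (s x n), one example of a sub-Gaussian sketch.
S = rng.normal(scale=1.0 / np.sqrt(s), size=(s, n))
SK = S @ K                     # s x n

# Sketched KRR: restrict the coefficients to the sketched span, i.e. solve
# the s x s system (S K K S^T + n*lam * S K S^T) beta = S K Y
# instead of the n x n system (K + n*lam I) alpha = Y.
beta = np.linalg.solve(SK @ K @ S.T + n * lam * SK @ S.T, SK @ Y)

# Predictions: f(X_test) = K(X_test, X) S^T beta; here on the training points.
F = K @ S.T @ beta
train_mse = ((F - Y) ** 2).mean()
```

Solving the sketched system costs O(s^3) plus O(s * n^2) to form the matrices, instead of O(n^3) for the exact solve, consistent with the claim that sketching the input kernel mostly reduces training time.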
Main file: ScalingOVK_Preprint.pdf (533.38 KB). Origin: files produced by the author(s).

Dates and versions

hal-04001898 , version 1 (23-02-2023)


Cite

Tamim El Ahmad, Luc Brogat-Motte, Pierre Laforgue, Florence d'Alché-Buc. Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels. 2023. ⟨hal-04001898⟩