Multilinear multitask learning

Bernardino Romera-Paredes, Min Hane Aung, Nadia Bianchi-Berthouze, Massimiliano Pontil

Research output: Contribution to conference › Paper › peer-review

68 Citations (Scopus)


Many real-world datasets occur in, or can be arranged into, multi-modal structures. With such datasets, the tasks to be learnt can be referenced by multiple indices. Current multitask learning frameworks are not designed to preserve this information. We propose multilinear algebra as a natural way to model such a set of related tasks. We present two learning methods: the first adapts a convex relaxation method used in the context of tensor completion; the second is based on the Tucker decomposition and alternating minimization. Experiments on synthetic and real data indicate that the multilinear approaches provide a significant improvement over other multitask learning methods. Overall, our second approach yields the best performance on all datasets.
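The core idea can be illustrated with a small sketch: per-task weight vectors indexed by two modes (e.g. subject × condition) are stacked into a 3-way tensor, and a low multilinear rank is imposed via a Tucker decomposition. The sketch below uses a plain truncated higher-order SVD (HOSVD) in NumPy as a stand-in for the paper's alternating-minimization procedure; all variable names, shapes, and ranks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: a simple, non-iterative Tucker approximation."""
    factors = [np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

# Toy setup (assumed dimensions): d features, tasks indexed by two modes.
rng = np.random.default_rng(0)
d, m1, m2 = 8, 5, 4
W = rng.standard_normal((d, m1, m2))   # stacked task weight vectors

core, factors = hosvd(W, ranks=(3, 3, 2))

# Reconstruct the low-multilinear-rank approximation of the weight tensor.
W_hat = core
for mode, U in enumerate(factors):
    W_hat = mode_multiply(W_hat, U, mode)
print(W_hat.shape)  # (8, 5, 4)
```

In an actual multitask learner, `core` and `factors` would be fit to the training losses of all tasks jointly (the paper's alternating minimization) rather than to a pre-formed `W`; the sketch only shows how the shared multilinear structure couples the tasks.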

Original language: English
Number of pages: 9
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, United States
Duration: 16 Jun 2013 – 21 Jun 2013


Conference: 30th International Conference on Machine Learning, ICML 2013
Country/Territory: United States
