---
_id: '999'
abstract:
- lang: eng
  text: 'In multi-task learning, a learner is given a collection of prediction tasks
    and needs to solve all of them. In contrast to previous work, which assumed that
    annotated training data is available for all tasks, we consider a new setting in
    which for some tasks, potentially most of them, only unlabeled training data is
    provided. Consequently, to solve all tasks, information must be transferred between
    tasks with labels and tasks without labels. Focusing on an instance-based transfer
    method, we analyze two variants of this setting: when the set of labeled tasks
    is fixed, and when it can be actively selected by the learner. We state and prove
    a generalization bound that covers both scenarios and derive from it an algorithm
    for making the choice of labeled tasks (in the active case) and for transferring
    information between the tasks in a principled way. We also illustrate the effectiveness
    of the algorithm on synthetic and real data.'
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Pentina A, Lampert C. Multi-task learning with labeled and unlabeled tasks.
    In: Vol 70. ML Research Press; 2017:2807-2816.'
  apa: 'Pentina, A., &#38; Lampert, C. (2017). Multi-task learning with labeled and
    unlabeled tasks (Vol. 70, pp. 2807–2816). Presented at the ICML: International
    Conference on Machine Learning, Sydney, Australia: ML Research Press.'
  chicago: Pentina, Anastasia, and Christoph Lampert. “Multi-Task Learning with Labeled
    and Unlabeled Tasks,” 70:2807–16. ML Research Press, 2017.
  ieee: 'A. Pentina and C. Lampert, “Multi-task learning with labeled and unlabeled
    tasks,” presented at the ICML: International Conference on Machine Learning, Sydney,
    Australia, 2017, vol. 70, pp. 2807–2816.'
  ista: 'Pentina A, Lampert C. 2017. Multi-task learning with labeled and unlabeled
    tasks. ICML: International Conference on Machine Learning, PMLR, vol. 70, 2807–2816.'
  mla: Pentina, Anastasia, and Christoph Lampert. <i>Multi-Task Learning with Labeled
    and Unlabeled Tasks</i>. Vol. 70, ML Research Press, 2017, pp. 2807–16.
  short: A. Pentina, C. Lampert, in:, ML Research Press, 2017, pp. 2807–2816.
conference:
  end_date: 2017-08-11
  location: Sydney, Australia
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2017-08-06
corr_author: '1'
date_created: 2018-12-11T11:49:37Z
date_published: 2017-06-08T00:00:00Z
date_updated: 2025-06-04T08:19:03Z
day: '08'
department:
- _id: ChLa
ec_funded: 1
external_id:
  arxiv:
  - '1602.06518'
  isi:
  - '000683309502093'
intvolume: '70'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/1602.06518
month: '06'
oa: 1
oa_version: Submitted Version
page: 2807 - 2816
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
  isbn:
  - '9781510855144'
publication_status: published
publisher: ML Research Press
publist_id: '6399'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Multi-task learning with labeled and unlabeled tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 70
year: '2017'
...
---
_id: '1098'
abstract:
- lang: eng
  text: Better understanding of the potential benefits of information transfer and
    representation learning is an important step towards the goal of building intelligent
    systems that are able to persist in the world and learn over time. In this work,
    we consider a setting where the learner encounters a stream of tasks but is able
    to retain only limited information from each encountered task, such as a learned
    predictor. In contrast to most previous works analyzing this scenario, we do not
    make any distributional assumptions on the task generating process. Instead, we
    formulate a complexity measure that captures the diversity of the observed tasks.
    We provide a lifelong learning algorithm with error guarantees for every observed
    task (rather than on average). We show, in terms of our task complexity measure,
    sample complexity reductions in comparison to solving every task in isolation.
    Further, our algorithmic framework can naturally be viewed as learning a representation
    from encountered tasks with a neural network.
acknowledgement: This work was in part funded by the European Research Council under
  the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement
  no 308036.
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Ruth
  full_name: Urner, Ruth
  last_name: Urner
citation:
  ama: 'Pentina A, Urner R. Lifelong learning with weighted majority votes. In: Vol
    29. Neural Information Processing Systems Foundation; 2016:3619-3627.'
  apa: 'Pentina, A., &#38; Urner, R. (2016). Lifelong learning with weighted majority
    votes (Vol. 29, pp. 3619–3627). Presented at the NIPS: Neural Information Processing
    Systems, Barcelona, Spain: Neural Information Processing Systems Foundation.'
  chicago: Pentina, Anastasia, and Ruth Urner. “Lifelong Learning with Weighted Majority
    Votes,” 29:3619–27. Neural Information Processing Systems Foundation, 2016.
  ieee: 'A. Pentina and R. Urner, “Lifelong learning with weighted majority votes,”
    presented at the NIPS: Neural Information Processing Systems, Barcelona, Spain,
    2016, vol. 29, pp. 3619–3627.'
  ista: 'Pentina A, Urner R. 2016. Lifelong learning with weighted majority votes.
    NIPS: Neural Information Processing Systems, Advances in Neural Information Processing
    Systems, vol. 29, 3619–3627.'
  mla: Pentina, Anastasia, and Ruth Urner. <i>Lifelong Learning with Weighted Majority
    Votes</i>. Vol. 29, Neural Information Processing Systems Foundation, 2016, pp.
    3619–27.
  short: A. Pentina, R. Urner, in:, Neural Information Processing Systems Foundation,
    2016, pp. 3619–3627.
conference:
  end_date: 2016-12-10
  location: Barcelona, Spain
  name: 'NIPS: Neural Information Processing Systems'
  start_date: 2016-12-05
date_created: 2018-12-11T11:50:08Z
date_published: 2016-12-01T00:00:00Z
date_updated: 2025-06-03T11:35:58Z
day: '01'
ddc:
- '006'
department:
- _id: ChLa
ec_funded: 1
file:
- access_level: open_access
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:12:42Z
  date_updated: 2018-12-12T10:12:42Z
  file_id: '4961'
  file_name: IST-2017-775-v1+1_main.pdf
  file_size: 237111
  relation: main_file
- access_level: open_access
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:12:43Z
  date_updated: 2018-12-12T10:12:43Z
  file_id: '4962'
  file_name: IST-2017-775-v1+2_supplementary.pdf
  file_size: 185818
  relation: main_file
file_date_updated: 2018-12-12T10:12:43Z
has_accepted_license: '1'
intvolume: '29'
language:
- iso: eng
month: '12'
oa: 1
oa_version: Published Version
page: 3619-3627
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: Neural Information Processing Systems Foundation
publist_id: '6277'
pubrep_id: '775'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Lifelong learning with weighted majority votes
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 29
year: '2016'
...
---
OA_place: publisher
_id: '1126'
abstract:
- lang: eng
  text: 'Traditionally, machine learning has focused on the problem of solving a single
    task in isolation. While being quite well understood, this approach disregards
    an important aspect of human learning: when facing a new problem, humans are able
    to exploit knowledge acquired from previously learned tasks. Intuitively, access
    to several problems simultaneously or sequentially could also be advantageous
    for a machine learning system, especially if these tasks are closely related.
    Indeed, results of many empirical studies have provided justification for this
    intuition. However, theoretical justifications of this idea are rather limited.
    The focus of this thesis is to expand the understanding of the potential benefits
    of information transfer between several related learning problems. We provide
    theoretical analysis for three scenarios of multi-task learning: multiple kernel
    learning, sequential learning and active task selection. We also provide a PAC-Bayesian
    perspective on lifelong learning and investigate how the task generation process
    influences the generalization guarantees in this scenario. In addition, we show
    how some of the obtained theoretical results can be used to derive principled
    multi-task and lifelong learning algorithms and illustrate their performance on
    various synthetic and real-world datasets.'
acknowledgement: "First and foremost I would like to express my gratitude to my supervisor,
  Christoph Lampert. Thank you for your patience in teaching me all aspects of doing
  research (including English grammar), for your trust in my capabilities and endless
  support. Thank you for granting me freedom in my research and, at the same time,
  having time and helping me cope with the consequences whenever I needed it. Thank
  you for creating an excellent atmosphere in the group; it was a great pleasure and
  honor to be a part of it. There could not have been a better and more inspiring
  adviser and mentor. I thank Shai Ben-David for welcoming me into his group at the
  University of Waterloo, for inspiring discussions and support. It was a great pleasure
  to work together. I am also thankful to Ruth Urner for hosting me at the Max-Planck
  Institute Tübingen, for the fruitful collaboration and for taking care of me during
  that not-so-sunny month of May. I thank Jan Maas for kindly joining my thesis committee
  despite the short notice and providing me with insightful comments. I would like
  to thank my colleagues for their support, entertaining conversations and endless
  table soccer games we shared together: Georg, Jan, Amelie and Emilie, Michal and
  Alex, Alex K. and Alex Z., Thomas, Sameh, Vlad, Mayu, Nathaniel, Silvester, Neel,
  Csaba, Vladimir, Morten. Thank you, Mabel and Ram, for the wonderful time we spent
  together. I am thankful to Shrinu and Samira for taking care of me during my stay
  at the University of Waterloo. Special thanks to Viktoriia for her never-ending
  optimism and for being so inspiring and supportive, especially at the beginning
  of my PhD journey. Thanks to the IST administration, in particular Vlad and Elisabeth,
  for shielding me from most of the bureaucratic paperwork.\n\nThis dissertation would
  not have been possible without funding from the European Research Council under
  the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement
  no 308036."
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
citation:
  ama: Pentina A. Theoretical foundations of multi-task lifelong learning. 2016. doi:<a
    href="https://doi.org/10.15479/AT:ISTA:TH_776">10.15479/AT:ISTA:TH_776</a>
  apa: Pentina, A. (2016). <i>Theoretical foundations of multi-task lifelong learning</i>.
    Institute of Science and Technology Austria. <a href="https://doi.org/10.15479/AT:ISTA:TH_776">https://doi.org/10.15479/AT:ISTA:TH_776</a>
  chicago: Pentina, Anastasia. “Theoretical Foundations of Multi-Task Lifelong Learning.”
    Institute of Science and Technology Austria, 2016. <a href="https://doi.org/10.15479/AT:ISTA:TH_776">https://doi.org/10.15479/AT:ISTA:TH_776</a>.
  ieee: A. Pentina, “Theoretical foundations of multi-task lifelong learning,” Institute
    of Science and Technology Austria, 2016.
  ista: Pentina A. 2016. Theoretical foundations of multi-task lifelong learning.
    Institute of Science and Technology Austria.
  mla: Pentina, Anastasia. <i>Theoretical Foundations of Multi-Task Lifelong Learning</i>.
    Institute of Science and Technology Austria, 2016, doi:<a href="https://doi.org/10.15479/AT:ISTA:TH_776">10.15479/AT:ISTA:TH_776</a>.
  short: A. Pentina, Theoretical Foundations of Multi-Task Lifelong Learning, Institute
    of Science and Technology Austria, 2016.
corr_author: '1'
date_created: 2018-12-11T11:50:17Z
date_published: 2016-11-01T00:00:00Z
date_updated: 2026-04-09T10:49:34Z
day: '01'
ddc:
- '006'
degree_awarded: PhD
department:
- _id: ChLa
doi: 10.15479/AT:ISTA:TH_776
ec_funded: 1
file:
- access_level: open_access
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:14:07Z
  date_updated: 2018-12-12T10:14:07Z
  file_id: '5056'
  file_name: IST-2017-776-v1+1_Pentina_Thesis_2016.pdf
  file_size: 2140062
  relation: main_file
file_date_updated: 2018-12-12T10:14:07Z
has_accepted_license: '1'
language:
- iso: eng
month: '11'
oa: 1
oa_version: Published Version
page: '127'
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
  issn:
  - 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
publist_id: '6234'
pubrep_id: '776'
status: public
supervisor:
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
title: Theoretical foundations of multi-task lifelong learning
type: dissertation
user_id: ba8df636-2132-11f1-aed0-ed93e2281fdd
year: '2016'
...
---
_id: '1425'
abstract:
- lang: eng
  text: 'In this work we aim at extending the theoretical foundations of lifelong
    learning. Previous work analyzing this scenario is based on the assumption that
    learning tasks are sampled i.i.d. from a task environment, or is limited to strongly
    constrained data distributions. Instead, we study two scenarios in which lifelong
    learning is possible, even though the observed tasks do not form an i.i.d. sample:
    first, when they are sampled from the same environment, but possibly with dependencies,
    and second, when the task environment is allowed to change over time in a consistent
    way. In the first case we prove a PAC-Bayesian theorem that can be seen as a direct
    generalization of the analogous previous result for the i.i.d. case. For the second
    scenario we propose to learn an inductive bias in the form of a transfer procedure.
    We present a generalization bound and show on a toy example how it can be used
    to identify a beneficial transfer algorithm.'
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Pentina A, Lampert C. Lifelong learning with non-i.i.d. tasks. In: Vol 2015.
    Neural Information Processing Systems Foundation; 2015:1540-1548.'
  apa: 'Pentina, A., &#38; Lampert, C. (2015). Lifelong learning with non-i.i.d. tasks
    (Vol. 2015, pp. 1540–1548). Presented at the NIPS: Neural Information Processing
    Systems, Montreal, Canada: Neural Information Processing Systems Foundation.'
  chicago: Pentina, Anastasia, and Christoph Lampert. “Lifelong Learning with Non-i.i.d.
    Tasks,” 2015:1540–48. Neural Information Processing Systems Foundation, 2015.
  ieee: 'A. Pentina and C. Lampert, “Lifelong learning with non-i.i.d. tasks,” presented
    at the NIPS: Neural Information Processing Systems, Montreal, Canada, 2015, vol.
    2015, pp. 1540–1548.'
  ista: 'Pentina A, Lampert C. 2015. Lifelong learning with non-i.i.d. tasks. NIPS:
    Neural Information Processing Systems, Advances in Neural Information Processing
    Systems, vol. 2015, 1540–1548.'
  mla: Pentina, Anastasia, and Christoph Lampert. <i>Lifelong Learning with Non-i.i.d.
    Tasks</i>. Vol. 2015, Neural Information Processing Systems Foundation, 2015,
    pp. 1540–48.
  short: A. Pentina, C. Lampert, in:, Neural Information Processing Systems Foundation,
    2015, pp. 1540–1548.
conference:
  end_date: 2015-12-12
  location: Montreal, Canada
  name: 'NIPS: Neural Information Processing Systems'
  start_date: 2015-12-07
date_created: 2018-12-11T11:51:57Z
date_published: 2015-01-01T00:00:00Z
date_updated: 2025-06-03T11:41:45Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
intvolume: '2015'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://papers.nips.cc/paper/6007-lifelong-learning-with-non-iid-tasks
month: '01'
oa: 1
oa_version: None
page: 1540 - 1548
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: Neural Information Processing Systems Foundation
publist_id: '5781'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Lifelong learning with non-i.i.d. tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2015
year: '2015'
...
---
_id: '1857'
abstract:
- lang: eng
  text: 'Sharing information between multiple tasks enables algorithms to achieve
    good generalization performance even from small amounts of training data. However,
    in a realistic scenario of multi-task learning not all tasks are equally related
    to each other, hence it can be advantageous to transfer information only between
    the most related tasks. In this work we propose an approach that processes multiple
    tasks in a sequence with sharing between subsequent tasks instead of solving all
    tasks jointly. Subsequently, we address the question of curriculum learning of
    tasks, i.e. finding the best order in which tasks should be learned. Our approach
    is based on a generalization bound criterion for choosing the task order that
    optimizes the average expected classification performance over all tasks. Our
    experimental results show that learning multiple related tasks sequentially can
    be more effective than learning them jointly, that the order in which tasks are
    solved affects the overall performance, and that our model is able to automatically
    discover a favourable order of tasks.'
article_processing_charge: No
arxiv: 1
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Viktoriia
  full_name: Sharmanska, Viktoriia
  id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
  last_name: Sharmanska
  orcid: 0000-0003-0192-9308
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Pentina A, Sharmanska V, Lampert C. Curriculum learning of multiple tasks.
    In: IEEE; 2015:5492-5500. doi:<a href="https://doi.org/10.1109/CVPR.2015.7299188">10.1109/CVPR.2015.7299188</a>'
  apa: 'Pentina, A., Sharmanska, V., &#38; Lampert, C. (2015). Curriculum learning
    of multiple tasks (pp. 5492–5500). Presented at the CVPR: Computer Vision and
    Pattern Recognition, Boston, MA, United States: IEEE. <a href="https://doi.org/10.1109/CVPR.2015.7299188">https://doi.org/10.1109/CVPR.2015.7299188</a>'
  chicago: Pentina, Anastasia, Viktoriia Sharmanska, and Christoph Lampert. “Curriculum
    Learning of Multiple Tasks,” 5492–5500. IEEE, 2015. <a href="https://doi.org/10.1109/CVPR.2015.7299188">https://doi.org/10.1109/CVPR.2015.7299188</a>.
  ieee: 'A. Pentina, V. Sharmanska, and C. Lampert, “Curriculum learning of multiple
    tasks,” presented at the CVPR: Computer Vision and Pattern Recognition, Boston,
    MA, United States, 2015, pp. 5492–5500.'
  ista: 'Pentina A, Sharmanska V, Lampert C. 2015. Curriculum learning of multiple
    tasks. CVPR: Computer Vision and Pattern Recognition, 5492–5500.'
  mla: Pentina, Anastasia, et al. <i>Curriculum Learning of Multiple Tasks</i>. IEEE,
    2015, pp. 5492–500, doi:<a href="https://doi.org/10.1109/CVPR.2015.7299188">10.1109/CVPR.2015.7299188</a>.
  short: A. Pentina, V. Sharmanska, C. Lampert, in:, IEEE, 2015, pp. 5492–5500.
conference:
  end_date: 2015-06-12
  location: Boston, MA, United States
  name: 'CVPR: Computer Vision and Pattern Recognition'
  start_date: 2015-06-07
corr_author: '1'
date_created: 2018-12-11T11:54:23Z
date_published: 2015-06-01T00:00:00Z
date_updated: 2025-06-11T07:19:52Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/CVPR.2015.7299188
external_id:
  arxiv:
  - '1412.1353'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://arxiv.org/abs/1412.1353
month: '06'
oa: 1
oa_version: Preprint
page: 5492 - 5500
publication_status: published
publisher: IEEE
publist_id: '5243'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Curriculum learning of multiple tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
---
_id: '1706'
abstract:
- lang: eng
  text: We consider the problem of learning kernels for use in SVM classification
    in the multi-task and lifelong scenarios and provide generalization bounds on
    the error of a large margin classifier. Our results show that, under mild conditions
    on the family of kernels used for learning, solving several related tasks simultaneously
    is beneficial over single-task learning. In particular, assuming that the considered
    family of kernels contains one that yields low approximation error on all tasks,
    the overhead associated with learning such a kernel vanishes as the number of
    observed tasks grows, and the complexity converges to that of learning when this
    good kernel is given to the learner.
alternative_title:
- LNCS
article_processing_charge: No
arxiv: 1
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Shai
  full_name: Ben David, Shai
  last_name: Ben David
citation:
  ama: 'Pentina A, Ben David S. Multi-task and lifelong learning of kernels. In: Vol
    9355. Springer; 2015:194-208. doi:<a href="https://doi.org/10.1007/978-3-319-24486-0_13">10.1007/978-3-319-24486-0_13</a>'
  apa: 'Pentina, A., &#38; Ben David, S. (2015). Multi-task and lifelong learning
    of kernels (Vol. 9355, pp. 194–208). Presented at the ALT: Algorithmic Learning
    Theory, Banff, AB, Canada: Springer. <a href="https://doi.org/10.1007/978-3-319-24486-0_13">https://doi.org/10.1007/978-3-319-24486-0_13</a>'
  chicago: Pentina, Anastasia, and Shai Ben David. “Multi-Task and Lifelong Learning
    of Kernels,” 9355:194–208. Springer, 2015. <a href="https://doi.org/10.1007/978-3-319-24486-0_13">https://doi.org/10.1007/978-3-319-24486-0_13</a>.
  ieee: 'A. Pentina and S. Ben David, “Multi-task and lifelong learning of kernels,”
    presented at the ALT: Algorithmic Learning Theory, Banff, AB, Canada, 2015, vol.
    9355, pp. 194–208.'
  ista: 'Pentina A, Ben David S. 2015. Multi-task and lifelong learning of kernels.
    ALT: Algorithmic Learning Theory, LNCS, vol. 9355, 194–208.'
  mla: Pentina, Anastasia, and Shai Ben David. <i>Multi-Task and Lifelong Learning
    of Kernels</i>. Vol. 9355, Springer, 2015, pp. 194–208, doi:<a href="https://doi.org/10.1007/978-3-319-24486-0_13">10.1007/978-3-319-24486-0_13</a>.
  short: A. Pentina, S. Ben David, in:, Springer, 2015, pp. 194–208.
conference:
  end_date: 2015-10-06
  location: Banff, AB, Canada
  name: 'ALT: Algorithmic Learning Theory'
  start_date: 2015-10-04
corr_author: '1'
date_created: 2018-12-11T11:53:35Z
date_published: 2015-01-01T00:00:00Z
date_updated: 2025-09-23T09:36:51Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-3-319-24486-0_13
ec_funded: 1
external_id:
  arxiv:
  - '1602.06531'
  isi:
  - '000367595100013'
intvolume: '9355'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://arxiv.org/abs/1602.06531
month: '01'
oa: 1
oa_version: Preprint
page: 194 - 208
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: Springer
publist_id: '5430'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Multi-task and lifelong learning of kernels
type: conference
user_id: 317138e5-6ab7-11ef-aa6d-ffef3953e345
volume: 9355
year: '2015'
...
---
_id: '2160'
abstract:
- lang: eng
  text: Transfer learning has received a lot of attention in the machine learning
    community over recent years, and several effective algorithms have been developed.
    However, relatively little is known about their theoretical properties, especially
    in the setting of lifelong learning, where the goal is to transfer information
    to tasks for which no data have been observed so far. In this work we study lifelong
    learning from a theoretical perspective. Our main result is a PAC-Bayesian generalization
    bound that offers a unified view on existing paradigms for transfer learning,
    such as the transfer of parameters or the transfer of low-dimensional representations.
    We also use the bound to derive two principled lifelong learning algorithms, and
    we show that these yield results comparable with existing methods.
article_processing_charge: No
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Pentina A, Lampert C. A PAC-Bayesian bound for Lifelong Learning. In: Vol
    32. ML Research Press; 2014:991-999.'
  apa: 'Pentina, A., &#38; Lampert, C. (2014). A PAC-Bayesian bound for Lifelong Learning
    (Vol. 32, pp. 991–999). Presented at the ICML: International Conference on Machine
    Learning, Beijing, China: ML Research Press.'
  chicago: Pentina, Anastasia, and Christoph Lampert. “A PAC-Bayesian Bound for Lifelong
    Learning,” 32:991–99. ML Research Press, 2014.
  ieee: 'A. Pentina and C. Lampert, “A PAC-Bayesian bound for Lifelong Learning,”
    presented at the ICML: International Conference on Machine Learning, Beijing,
    China, 2014, vol. 32, pp. 991–999.'
  ista: 'Pentina A, Lampert C. 2014. A PAC-Bayesian bound for Lifelong Learning. ICML:
    International Conference on Machine Learning vol. 32, 991–999.'
  mla: Pentina, Anastasia, and Christoph Lampert. <i>A PAC-Bayesian Bound for Lifelong
    Learning</i>. Vol. 32, ML Research Press, 2014, pp. 991–99.
  short: A. Pentina, C. Lampert, in:, ML Research Press, 2014, pp. 991–999.
conference:
  end_date: 2014-06-26
  location: Beijing, China
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2014-06-21
corr_author: '1'
date_created: 2018-12-11T11:56:03Z
date_published: 2014-05-10T00:00:00Z
date_updated: 2024-10-09T20:55:36Z
day: '10'
department:
- _id: ChLa
intvolume: '32'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://dl.acm.org/citation.cfm?id=3045003
month: '05'
oa: 1
oa_version: Submitted Version
page: 991 - 999
publication_status: published
publisher: ML Research Press
publist_id: '4844'
quality_controlled: '1'
scopus_import: '1'
status: public
title: A PAC-Bayesian bound for Lifelong Learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 32
year: '2014'
...
