DOI: https://doi.org/10.48448/sqvk-y475

What Context Features Can Transformer Language Models Use?

Technical paper, ACL-IJCNLP 2021, August 3, 2021, Thailand
