A. (Arianna) Bisazza, PhD

Publications

A Primer on the Inner Workings of Transformer-based Language Models

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Encoding of lexical tone in self-supervised models of spoken language

Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

Non Verbis, Sed Rebus: Large Language Models Are Weak Solvers of Italian Rebuses

Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off

Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models

Inseq: An Interpretability Toolkit for Sequence Generation Models


Press/media

Can Word-level Quality Estimation Inform and Improve Machine Translation Post-editing?