A. (Arianna) Bisazza, PhD

Publications

A Primer on the Inner Workings of Transformer-based Language Models

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Encoding of lexical tone in self-supervised models of spoken language

Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off

Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models

Inseq: An Interpretability Toolkit for Sequence Generation Models
