A. (Arianna) Bisazza, PhD

Publications

A Primer on the Inner Workings of Transformer-based Language Models

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off

Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models

Inseq: An Interpretability Toolkit for Sequence Generation Models

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

Wave to Syntax: Probing spoken language models for syntax

Lees meer