Audio Description is a narrated commentary designed to aid vision-impaired audiences in perceiving key visual elements in a video. While short-form video understanding has advanced rapidly, maintaining coherent long-term visual storytelling remains an unresolved problem. Existing methods rely solely on frame-level embeddings, effectively describing object-based content but lacking contextual information across scenes. We introduce DANTE-AD, an enhanced video description model that leverages a dual-vision Transformer-based architecture to address this gap. DANTE-AD sequentially fuses frame- and scene-level embeddings to improve long-term contextual understanding. We propose a novel, state-of-the-art method of sequential cross-attention to achieve contextual grounding for fine-grained audio description generation. Evaluated on a broad range of key scenes from well-known movie clips, DANTE-AD outperforms existing methods across both traditional NLP metrics and LLM-based evaluations.
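To make the dual-vision idea concrete, the sketch below shows one way sequential cross-attention over frame- and scene-level embeddings could be wired up in PyTorch. It is a minimal illustration under our own assumptions: the module name DualVisionFusion, the embedding dimension, and the frame-then-scene ordering of the two attention stages are illustrative choices, not the authors' released implementation.

# Illustrative sketch only; names and dimensions are assumptions, not the official DANTE-AD code.
import torch
import torch.nn as nn

class DualVisionFusion(nn.Module):
    """Sequentially fuses frame-level and scene-level embeddings via cross-attention."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Stage 1: query tokens attend to per-frame embeddings.
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Stage 2: the frame-grounded output attends to scene-level embeddings.
        self.scene_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, queries, frame_emb, scene_emb):
        # queries:   (B, Q, dim) decoder-side query tokens
        # frame_emb: (B, F, dim) frame-level visual embeddings
        # scene_emb: (B, S, dim) scene-level (long-range) embeddings
        x, _ = self.frame_attn(queries, frame_emb, frame_emb)
        x = self.norm1(queries + x)   # residual + norm after frame attention
        y, _ = self.scene_attn(x, scene_emb, scene_emb)
        return self.norm2(x + y)      # residual + norm after scene attention

if __name__ == "__main__":
    fusion = DualVisionFusion()
    q = torch.randn(2, 32, 768)       # query tokens
    frames = torch.randn(2, 64, 768)  # frame-level features
    scenes = torch.randn(2, 8, 768)   # scene-level features
    print(fusion(q, frames, scenes).shape)  # torch.Size([2, 32, 768])

The fused tokens would then condition a text decoder that generates the audio description; the residual connections keep frame-level grounding available even after the scene-level attention stage.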
@inproceedings{Deganutti:DANTE-AD:ArXiv:2025,
AUTHOR = "Deganutti, Adrienne and Hadfield, Simon and Gilbert, Andrew",
TITLE = "DANTE-AD: Dual-Vision Attention Network for Long-Term Audio Description",
BOOKTITLE = "ArXiv abs/X.X",
YEAR = "2025",
}