
FFN Fusion: Rethinking Sequential Computation In Large Language Models

LLM seminar event about the paper "FFN Fusion: Rethinking Sequential Computation In Large Language Models" by NVIDIA.
[Event poster: presenter name, talk title, time, and place of the event.]

Title: FFN Fusion: Rethinking Sequential Computation In Large Language Models

Presenter: Mustapha Abdullahi

Abstract: The authors introduce FFN Fusion, an architectural optimization technique that reduces sequential computation in large language models by identifying and exploiting natural opportunities for parallelization. Their key insight is that sequences of Feed-Forward Network (FFN) layers, particularly those remaining after the removal of specific attention layers, can often be parallelized with minimal accuracy impact. They develop a principled methodology for identifying and fusing such sequences, transforming them into parallel operations that significantly reduce inference latency while preserving model behavior. Applying these techniques to Llama-3.1-405B-Instruct, they create Llama-Nemotron-Ultra-253B-Base (Ultra-253B-Base), an efficient and soon-to-be publicly available model that achieves a 1.71X speedup in inference latency and 35X lower per-token cost while maintaining strong performance across benchmarks. Through extensive experiments on models from 49B to 253B parameters, they demonstrate that FFN Fusion becomes increasingly effective at larger scales and can complement existing optimization techniques like quantization and pruning. Most intriguingly, they find that even full transformer blocks containing both attention and FFN layers can sometimes be parallelized, suggesting new directions for neural architecture design.
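To make the core idea in the abstract concrete, below is a minimal sketch of fusing a sequence of residual FFN layers into one wider FFN that evaluates them in parallel and sums their outputs. It assumes simple two-matrix FFNs without gating; the names SimpleFFN, fuse_ffns, run_sequential, and run_fused are illustrative, not the authors' implementation, and the parallel form is only an approximation of the sequential stack (the paper's point is that this approximation is often accurate for low-dependency FFN sequences).

```python
# Illustrative sketch of FFN Fusion on plain (non-gated) FFNs.
# Not the paper's code; names and shapes are assumptions for the example.
import torch
import torch.nn as nn


class SimpleFFN(nn.Module):
    """FFN(x) = W_down @ act(W_up @ x), no bias, no gating."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))


def fuse_ffns(ffns: list[SimpleFFN]) -> SimpleFFN:
    """Build one wider FFN whose output equals the sum of the individual
    FFN outputs on the same input (valid because the activation is
    applied element-wise to the concatenated hidden units)."""
    d_model = ffns[0].up.in_features
    d_hidden_total = sum(f.up.out_features for f in ffns)
    fused = SimpleFFN(d_model, d_hidden_total)
    with torch.no_grad():
        # Stack up-projections along the hidden dimension ...
        fused.up.weight.copy_(torch.cat([f.up.weight for f in ffns], dim=0))
        # ... and down-projections along their input (hidden) dimension.
        fused.down.weight.copy_(torch.cat([f.down.weight for f in ffns], dim=1))
    return fused


def run_sequential(x: torch.Tensor, ffns: list[SimpleFFN]) -> torch.Tensor:
    """Original sequential residual stack: x <- x + FFN_i(x), layer by layer."""
    for f in ffns:
        x = x + f(x)
    return x


def run_fused(x: torch.Tensor, fused: SimpleFFN) -> torch.Tensor:
    """Parallel approximation: every FFN reads the same input and the
    outputs are summed, computed here as a single wider FFN pass."""
    return x + fused(x)


if __name__ == "__main__":
    torch.manual_seed(0)
    ffns = [SimpleFFN(64, 256) for _ in range(3)]
    fused = fuse_ffns(ffns)
    x = torch.randn(2, 64)
    # The fused module exactly reproduces the parallel sum-of-FFNs form;
    # how closely that matches run_sequential(x, ffns) is the empirical
    # question the paper studies per block sequence.
    parallel_sum = x + sum(f(x) for f in ffns)
    assert torch.allclose(run_fused(x, fused), parallel_sum, atol=1e-5)
```

Because the fused layer is a single, wider matrix-multiply pair instead of several dependent ones, it removes sequential synchronization points and is what yields the reported latency gains on multi-GPU inference.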

Paper link:

Disclaimer: The presenter is not one of the paper's authors.
