Discussion
Mamba-3
robofanatic: > Mamba-3 is a new state space model (SSM) designed with inference efficiency as the primary goal — a departure from Mamba-2, which optimized for training speed. The key upgrades are a more expressive recurrence formula, complex-valued state tracking, and a MIMO (multi-input, multi-output) variant that boosts accuracy without slowing down decoding.

Why can’t they simply say: “Mamba-3 focuses on being faster and more efficient when making predictions, rather than just being fast to train like Mamba-2”?
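(For readers wondering what a “recurrence formula” means in the SSM context, here is a minimal sketch of a plain diagonal linear SSM step. This is illustrative only — it is not Mamba-3’s actual parameterization, which the abstract says is more expressive and complex-valued.)

```python
import numpy as np

def ssm_scan(a, b, c, x):
    """Minimal diagonal linear SSM (illustrative, not Mamba-3's real recurrence).

    Per time step t:
        h_t = a * h_{t-1} + b * x_t   (state update: the "recurrence formula")
        y_t = c . h_t                 (readout)

    a, b, c: per-channel state parameters, shape [d_state];
    x: scalar input sequence of length seq_len.
    """
    h = np.zeros_like(a, dtype=float)
    ys = []
    for x_t in x:
        h = a * h + b * x_t      # decay old state by a, inject new input via b
        ys.append(float(c @ h))  # project state down to a scalar output
    return ys

# Toy run with a 2-dimensional state and a unit impulse input.
y = ssm_scan(np.array([0.5, 0.9]), np.array([1.0, 1.0]),
             np.array([1.0, 1.0]), [1.0, 0.0, 0.0])
# → [2.0, 1.4, 1.06]: the impulse decays through the state at rates 0.5 and 0.9.
```

Decoding speed comes from this structure: each step touches only the fixed-size state `h`, not the whole history, so per-token cost stays constant as the sequence grows.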
esquire_900: This is sort of what their first sentence states? Except your version implies they are fast in both training and inference, whereas they imply they are focusing on inference and trading away training speed for it. It's a nice opening as it is, imo.
nl: I'm looking forward to comparing this to Inception 2 (the text diffusion model), which in my experience is very fast and reasonably high quality.
E-Reverance: The first sentence basically does though, no?
robofanatic: Of course, my only objection was the language. LLMs are now mature enough to leave the jargon behind and talk in simple, easy-to-understand terms.