Mixtral 8x22B just released, already running on MLX

In classic Mistral fashion, they released it on Twitter/X last night as a magnet link.

And in classic AI community fashion, it’s already been ported to Apple’s MLX, thanks to Prince Canuma.

Initial evals place it between Claude 3 Opus and GPT-4 Turbo, and with the quantized MLX port you should be able to run it on Macs with at least 96GB of RAM, though Prince recommends 128-192GB for the best experience.
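If you want to try it yourself, here's a minimal sketch using the `mlx-lm` package (`pip install mlx-lm`). The Hugging Face repo name is my assumption, so check the mlx-community org for the actual quantized upload:

```python
# Minimal sketch of running a quantized Mixtral 8x22B via mlx-lm.
# NOTE: the repo name below is an assumption -- look up the actual
# quantized Mixtral 8x22B model on the mlx-community Hugging Face org.
from mlx_lm import load, generate

# Downloads the weights on first run; even 4-bit quantized, this model
# needs a lot of unified memory, hence the 96GB+ RAM requirement.
model, tokenizer = load("mlx-community/Mixtral-8x22B-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain mixture-of-experts models in one paragraph.",
    max_tokens=256,
    verbose=True,  # stream tokens and print generation stats
)
print(response)
```

There's also a CLI equivalent if you'd rather not write any code: `python -m mlx_lm.generate --model <repo> --prompt "..."`.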
