Ollama taps Apple’s MLX framework to make local AI models faster on Macs | shared by The New Stack