Apple aims to run AI models directly on iPhones, other devices

If you look at the last few months of open-source projects and papers published by Apple research (MLX, LLM in a Flash), it's clear that this is the direction they want to move in. They also specifically called out ML research as a group that could benefit from the maxed-out 192GB RAM config when they announced the Mac Studio with the M2 Ultra.

I'm excited about this. As it stands, I feel the Neural Engine in Apple's chips is underutilized, and when it is used, it's generally Apple's own applications making use of it. I imagine that once they build their own LLM into the OS, they'll open up some part of it for direct use on the dev side, without the need to bring in your own model and convert it for Apple's ecosystem, which can be tiring and quite a bit of work. Unsure whether that's in their short-term or long-term plans, but I expect to see some of this opened up to developers, which would finally make it much easier to implement locally run AI features in iOS/macOS apps!
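For context on why that workflow is a pain today: the usual path is converting a model to Core ML offline (e.g. with coremltools) and then loading it in your app, opting into the Neural Engine via MLModelConfiguration. A rough sketch of the app side, where MyConvertedLLM stands in for whatever class Xcode generates from your converted .mlpackage:

```swift
import CoreML

// Sketch of the current workflow: load a model you converted yourself
// (e.g. with coremltools) and ask Core ML to use the Neural Engine.
// "MyConvertedLLM" is a hypothetical class name; Xcode generates one
// for each .mlpackage you add to the project.
let config = MLModelConfiguration()
config.computeUnits = .all  // allow scheduling across CPU, GPU, and Neural Engine

do {
    let model = try MyConvertedLLM(configuration: config)
    // ... build an input and call model.prediction(input:) ...
} catch {
    print("Model failed to load: \(error)")
}
```

Even then, whether your layers actually run on the Neural Engine is up to Core ML's scheduler; `.all` is a request, not a guarantee, which is part of why it feels underutilized from the outside.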

I personally see them opening this up while also allowing you to bring your own models, or maybe they'll start with that. I like the idea of an on-device LLM. I'm hoping it can be more of a personal AI that learns what I'm doing on the phone and offers ideas for how to better utilize my local data, so it could look at my Notes, Reminders, Calendar, etc., and learn.

I know they have some of that now, but I think they could do more with a local LLM, since it would let me actually chat with it to correct or add info about the assumptions it made. I'm sure there's more I'm missing, as I'm definitely not an AI/ML expert!
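Worth noting the raw data for that kind of personal assistant is already reachable on-device. A hypothetical sketch of the Calendar side using EventKit (the real system framework; wiring the results into a local LLM is the speculative part):

```swift
import EventKit

// Hypothetical sketch: collect upcoming Calendar events as context an
// on-device assistant could reason over. EKEventStore is real EventKit
// API (requestFullAccessToEvents is iOS 17+); the LLM hand-off is imagined.
let store = EKEventStore()
store.requestFullAccessToEvents { granted, _ in
    guard granted else { return }
    let weekAhead = Calendar.current.date(byAdding: .day, value: 7, to: Date())!
    let predicate = store.predicateForEvents(withStart: Date(), end: weekAhead, calendars: nil)
    let context = store.events(matching: predicate)
        .map { "\($0.title ?? "Untitled") on \($0.startDate!)" }
        .joined(separator: "\n")
    // 'context' is what you'd feed the hypothetical local model
    print(context)
}
```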
