Apple releases ‘MGIE’, a revolutionary AI model for instruction-based image editing

It’s interesting that Apple is leveraging and contributing to open-source models as part of its AI work: the example code combines Vicuna-7B and LLaVA-7B with a model that comprehends and executes plain-English instructions for photo editing. Imagine if Siri could edit a photo when you simply tell it what you want to see.
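The core idea behind MGIE is a two-stage pipeline: a multimodal LLM first expands a terse user request into a more explicit editing directive, which then guides the actual image edit. Here's a minimal sketch of that flow in plain Python; every function name and the rewrite rules are hypothetical stand-ins for illustration, not Apple's actual API.

```python
# Hypothetical sketch of an MGIE-style two-stage pipeline: an MLLM-like
# step rewrites a vague instruction into a concrete directive, and an
# editor step applies it. All names here are illustrative stand-ins.

def expand_instruction(instruction: str) -> str:
    """Stand-in for the MLLM step (e.g. a LLaVA-style model) that turns
    a vague request into an explicit editing directive."""
    rules = {
        "make it healthier": "replace the pepperoni with vegetable toppings",
        "brighten it": "increase exposure and lift the shadows",
    }
    return rules.get(instruction.lower(), instruction)

def edit_image(image_path: str, directive: str) -> str:
    """Stand-in for the diffusion-based editor; here it just reports
    the planned edit and returns an output path."""
    print(f"Applying to {image_path}: {directive}")
    return image_path.replace(".jpg", "_edited.jpg")

out = edit_image("pizza.jpg", expand_instruction("Make it healthier"))
```

The point of the first stage is that "make it healthier" is ambiguous on its own; the language model supplies the visual specifics the editor needs.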

It feels like 2024 will be the year that LLMs are more deeply integrated into applications, with more of these bridge-type models allowing them to execute tasks and actions inside the application itself.