
The Foundation Models framework provides access to Apple's on-device large language model that powers Apple Intelligence, so you can perform intelligent tasks specific to your use case (see the documentation: https://developer.apple.com/documentation/FoundationModels).
First, verify that the framework is supported and enabled on the user's device (only Apple Intelligence–capable devices qualify) and that you've accounted for any additional battery usage before you start crafting prompts or calling the API.
import FoundationModels

// The shared on-device system model.
let systemModel = SystemLanguageModel.default

// Bail out (and hide any model-backed UI) if the model can't run here.
guard systemModel.isAvailable else {
    return
}
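If you want to tell the user why the model is unavailable, you can switch over availability instead of checking the Boolean. Here's a minimal sketch; showMessage is a hypothetical helper standing in for your own UI:

switch systemModel.availability {
case .available:
    break // Safe to create a session and send prompts.
case .unavailable(.deviceNotEligible):
    showMessage("This device doesn't support Apple Intelligence.")
case .unavailable(.appleIntelligenceNotEnabled):
    showMessage("Turn on Apple Intelligence in Settings to use this feature.")
case .unavailable(.modelNotReady):
    showMessage("The model is still downloading. Try again in a bit.")
case .unavailable(let reason):
    showMessage("Model unavailable: \(reason)")
}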
Once you've confirmed that the device can invoke the model, calling it really can be as simple as three lines of code:
import FoundationModels

// Instructions steer the model's behavior for the whole session.
let session = LanguageModelSession(instructions: "You are a helpful assistant.")

// respond(to:) runs the on-device model and suspends until the reply is ready.
let response = try await session.respond(to: "Give me a fun fact about space.")
let answer = response.content
// "Olympus Mons on Mars is the tallest volcano, and the tallest mountain, in the entire Solar System."
Privacy and zero cost are obvious benefits here, but being able to remove a whole layer of your infrastructure, and even build prompts dynamically, is really powerful. And because there's no network round trip, responses come back fast compared with calling a hosted API such as OpenAI's.
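Because a prompt is just a Swift string, you can assemble it on the fly from whatever state your app already has. A small sketch, with made-up trip values for illustration:

let destination = "Lisbon"
let days = 3
let prompt = """
    Suggest a \(days)-day itinerary for \(destination). \
    Keep each day to three activities.
    """
let itinerary = try await session.respond(to: prompt)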
The framework becomes even more powerful when you can pass Swift structs in and get typed Swift structs back, which significantly lowers the burden compared to solutions that only handle text input and output, as the sketch below shows.
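This is the framework's guided generation feature: you mark a struct @Generable and ask the session to generate that type directly. The SpaceFact type and its fields here are my own example, not part of the framework:

import FoundationModels

// Guided generation: the framework constrains decoding so the
// model's output lands directly in this struct.
@Generable
struct SpaceFact {
    @Guide(description: "A one-sentence fun fact about space")
    var fact: String
    @Guide(description: "How surprising the fact is, from 1 to 5")
    var surpriseRating: Int
}

let session = LanguageModelSession(instructions: "You are a helpful assistant.")
let response = try await session.respond(
    to: "Give me a fun fact about space.",
    generating: SpaceFact.self
)
let spaceFact = response.content // A fully typed SpaceFact, no JSON parsing needed.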
It’s incredible how quickly you can prototype intelligent features in an app. I’m looking forward to what developers come up with once iOS 26 and macOS 26 are released.