Apple’s research team has created an AI model that surpasses GPT-4 in understanding context, potentially transforming Siri’s capabilities. Meanwhile, 200 musicians have signed an open letter criticizing major tech companies for using artists’ work to train AI without permission. Additionally, researchers at Anthropic have managed to ‘trick’ their own model into saying things it shouldn’t by asking increasingly risky questions. Let’s dive in!

Key Points

  • ReALM outperforms GPT-4 in understanding context while being far more efficient.
  • Apple converts everything you say to Siri into text for better comprehension.
  • ReALM could run directly on your device, keeping interactions quick and private.


Apple’s ReALM could make Siri smarter and faster. It resolves ambiguous references, such as the people, objects, or on-screen items you’re referring to, by converting them into plain text, improving Siri’s understanding and responsiveness. ReALM is more efficient than GPT-4 because it grasps this context with far fewer resources. That efficiency could allow Siri to work directly on your iPhone or iPad, safeguarding your information and speeding up responses. Unlike multimodal models, ReALM doesn’t need to analyze images to understand you: it turns both spoken words and on-screen visual cues into text that a language model can easily process. Apple researchers have shown that this approach, along with fine-tuning for specific tasks, significantly outperforms traditional methods, including OpenAI’s GPT-4.
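To make the idea concrete, here is a minimal sketch of the general technique the paragraph describes: flattening on-screen entities into plain text so a language model can pick out the referent. This is not Apple’s implementation; the entity fields, the ordering scheme, and the toy keyword resolver (standing in for the actual model) are all hypothetical.

```python
# Sketch of ReALM-style reference resolution via text conversion.
# Hypothetical data shapes; a real system would hand the rendered
# text to a fine-tuned language model rather than match keywords.

def screen_to_text(entities):
    """Render on-screen entities as a numbered, top-to-bottom text list."""
    ordered = sorted(entities, key=lambda e: (e["top"], e["left"]))
    return "\n".join(
        f"{i}. [{e['type']}] {e['text']}" for i, e in enumerate(ordered, 1)
    )

def resolve(query, entities):
    """Toy stand-in for the model: map a query keyword to an entity type."""
    keywords = {"number": "phone", "address": "address", "link": "url"}
    for word, etype in keywords.items():
        if word in query.lower():
            for e in entities:
                if e["type"] == etype:
                    return e["text"]
    return None

screen = [
    {"type": "text",  "text": "Joe's Pizza",      "top": 0, "left": 0},
    {"type": "phone", "text": "415-555-0132",     "top": 1, "left": 0},
    {"type": "url",   "text": "joespizza.example", "top": 2, "left": 0},
]

print(screen_to_text(screen))
print(resolve("call that number", screen))  # 415-555-0132
```

The key design choice, per the article, is that once the screen is serialized as text, resolving “that number” becomes an ordinary language task rather than an image-understanding one.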

Why It Matters

Apple’s shift to a text-driven method for Siri’s understanding is exciting. It means interacting with Siri could feel more like chatting with a friend, with fewer privacy concerns if processing stays on your device. As Apple prepares to unveil its comprehensive AI strategy at WWDC 2024, the future looks promising for Siri on our devices. This advancement could fundamentally change how we use AI, making it an integral part of our daily lives.
