Thanks to everyone who participated in Monday’s poll! I was surprised to learn that most people interact with an assistant daily, with Alexa being the most popular choice, especially since she typically requires a separate device.
Personally, I interact with Alexa and ChatGPT daily. Siri, on the other hand, I only use for quick tasks like turning off the lights or sending texts by voice. I recently joined the Apple Intelligence Beta, but so far, I’m underwhelmed. Apple is gradually adding features, though, so I’m not ruling them out just yet.
I’ve been trying to wrap my head around where all of this is going and how it’s going to impact my kids and…everything else. We’re at a turning point where the most common assistants handle basic tasks, while others can read and write better than many college freshmen.
I’ve been using ChatGPT for several years now to write and edit code. It keeps getting better, and its competitors (Claude, in particular) are doing some really interesting things. I also use it to edit this newsletter. It’s not as good an editor as my wife, but it also doesn’t have all of its time taken up by a two- and a four-year-old.
I believe we’re just beginning to see how these assistants will help us daily. Up until now, Alexa could give you the weather or play music, and Siri could send texts. Now, ChatGPT can write plausible homework answers and code small programs.
One of the most impressive demos I’ve seen recently was Google’s NotebookLM, which can create podcasts from text. Here’s a podcast it generated about this newsletter.
The biggest challenge these assistants face is their lack of context and understanding of motivation. They can read your questions and generate responses, but without knowing the “why” behind your request, they often miss the mark.
I’ve been exploring this idea while building my own assistant to help track family tasks. It lets family members assign tasks to each other to reduce nagging and improve coordination. The biggest challenge I’ve encountered is that the assistant needs a lot of context to be effective. For instance, saying “Sally wants Billy to take out the trash” isn’t enough. It needs to know existing tasks, the trash schedule, and Billy’s availability.
Sure, the assistant can add the task to a list and assign it to Billy, but any basic to-do list app can do that. To truly add value, the assistant needs enough context to coordinate between family members effectively.
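To make this concrete, here’s a minimal sketch of what “enough context” might look like. All the names and fields below are hypothetical, not from my actual app: the point is that the assistant needs the existing task list, the trash schedule, and Billy’s availability before it can do anything smarter than a plain to-do app.

```python
from dataclasses import dataclass

# Hypothetical sketch: the context a family-task assistant would need
# before it can do more than blindly append "take out the trash" to a list.
@dataclass
class FamilyContext:
    existing_tasks: list[str]
    trash_pickup_days: set[str]          # e.g. {"Tuesday", "Friday"}
    availability: dict[str, set[str]]    # person -> days they're free

def plan_task(request: str, assignee: str, ctx: FamilyContext) -> str:
    """Pick a workable day for the task instead of just adding it."""
    if request in ctx.existing_tasks:
        return f"'{request}' is already on the list."
    free_days = ctx.availability.get(assignee, set())
    # Only days that are both pickup days and free for the assignee work.
    workable = sorted(ctx.trash_pickup_days & free_days)
    if not workable:
        return f"{assignee} isn't free on any pickup day; needs rescheduling."
    ctx.existing_tasks.append(request)
    return f"Assigned '{request}' to {assignee} on {workable[0]}."

ctx = FamilyContext(
    existing_tasks=["water the plants"],
    trash_pickup_days={"Tuesday", "Friday"},
    availability={"Billy": {"Friday", "Saturday"}},
)
print(plan_task("take out the trash", "Billy", ctx))
```

Even this toy version shows the gap: “Sally wants Billy to take out the trash” only becomes actionable once the assistant can cross-reference the schedule and availability, which is exactly the context today’s assistants don’t have.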
I had hoped Apple Intelligence would be closer to achieving this level of context, but so far, it’s not. The same seems to be true for Google Gemini, which hasn’t attracted much enthusiasm.
As we continue integrating these assistants into our lives, it’s clear we’re on the cusp of something bigger. The gap between simple task execution and real contextual understanding is still wide, but it’s narrowing. I’m excited to see where things will be in a few years when our assistants do more than just follow instructions.
Spot on, times are a-changin’! *Legacy* voice assistants like Siri and Alexa are architecturally very different and need to be redone to work with LLMs. Microsoft Copilot for business knows your schedule, reads your email, and knows who you work with. Google could do the same for consumers with Gemini if they choose. NotebookLM, for instance, will whip through your Google Drive like a hot knife through butter.