
I’m counting on Apple to do AI right

[Embedded video: Apple's 1987 "Knowledge Navigator" concept]
An article yesterday at MacRumors.com asks “Should Apple Kill Siri and Start Over?” My answer to that would be yes. Siri was (and is) terrible, and it has been a huge embarrassment for Apple. It fell far short of the vision Apple described in the 1987 video above, a vision that still holds up.

I remember watching this video in 1987 at an Apple promotional event, and I’ve never forgotten it. Almost 40 years later, Apple at last is in a position to make the “Knowledge Navigator” a reality.

There’s an important hardware angle here. AIs need a lot of computing power, and they run better on graphics processors than on CPUs. Apple’s M1, M2, and M3 chips are generously supplied with GPU cores. I’m writing this on a 2023 Mac mini with the M2 Pro chip: ten CPU cores and sixteen (!) GPU cores. AIs run very well on Apple’s high-end M2 chips, but it seems Apple will skip M3 models for some of its computers and go straight to M4 chips, which are engineered specifically for AI workloads.
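The GPU-versus-CPU point is directly visible on Apple Silicon: frameworks such as PyTorch expose the M-series GPU cores through Apple's Metal Performance Shaders (MPS) backend. Here is a minimal sketch, assuming PyTorch 1.12 or later is installed; the matrix size is just illustrative:

```python
import torch

# Use Apple's GPU cores via the Metal Performance Shaders (MPS)
# backend when available; otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# A large matrix multiply, the core operation of neural networks,
# runs on whichever device was selected.
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
z = x @ y

print(f"matmul ran on: {z.device}")
```

On an M-series Mac this puts the work on the GPU cores mentioned above; on any other machine the same code quietly falls back to the CPU.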

As for software, we still don’t know much about what Apple is planning. We should hear a lot more at Apple’s annual developers conference, which starts June 10.

It’s interesting that, in its 1987 visionary video, Apple showed its “Knowledge Navigator” being used by a Berkeley professor. Sure, there are plenty of people who would use an AI for sports statistics or investment research. But to really advance human knowledge, we need an AI that has read everything. The best material is behind paywalls — all the daily newspapers, academic papers new and old, new books plus all the older books that have been digitized, and even much of the daily chatter on the web. That’s going to cost a lot of money, but if anyone can figure out how to ethically acquire all that material and pay for it, Apple can.

A major failing of current AIs is that they don’t attribute anything. My guess is that’s because the people building the currently available AIs don’t want us to know where they are stealing their training material. But if an AI is to be trusted, and if its answers are to be properly weighed for reliability, then it must tell us where it got its information, with citations and footnotes.

AI development is moving very fast, and acquiring, licensing, and figuring out the economics of the training material is a huge undertaking. My guess is that Google will try to gouge and steal, while Apple will do a more trustworthy job. In a few years, I expect to have a truly useful Apple AI running on Apple hardware.
