The standout feature of the upcoming Copilot+ PC laptops, running the Snapdragon X Plus/Elite SoCs, is no doubt the NPU: the Neural Processing Unit that handles much of the A.I. logic on the device itself. With up to 45 TOPS of NPU performance, it's one of the beefier NPUs released so far. At least for now, as no doubt the competition is at work, ready to launch their 100 TOPS SoCs soon as well.
But what I haven't seen so far is just what these TOPS are capable of from a programming perspective. I know what a decent and capable CPU can do for my current workflow: mainly faster compilation and build times.
And since I don't really do any kind of game development or graphics-intensive web design, my GPU requirements aren't astronomical, and a decently powered Intel Iris seems to do the job just fine.
And even having more RAM is pretty beneficial with very large VS Code projects that are constantly hot-refreshing in the browser.
But I don't quite know yet what an NPU can do for me, outside of the marketed features, such as Recall and voice/video optimization in the OS itself.
Theoretically though, there is one use case where I could see myself upgrading my current Asus Zenbook to something more modern. And it has everything to do with the NPU.
For the past year now, I've been experimenting with the OpenAI API for my company to see just where and how we can leverage more and more API features, whether in customer support or in finding patterns with larger data sets. And, while progress has been made, it hasn't necessarily been the smoothest process overall.
For one, it takes a while to figure out the type of data needed for fine-tuning in order to get a specific kind of result. And while the OpenAI API is, for the most part, very cost effective, it would be nice to not have to worry about pricing and just let loose and ramp up the process.
And theoretically, an NPU should be able to handle at least small language model (SLM) level training on device without collapsing in on itself.
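To gut-check that theory, here's a rough back-of-envelope calculation. Every number in it is an illustrative assumption, not a measurement, and there's a real caveat: the 45 TOPS figure is quoted for INT8, while training typically runs at higher precision, so actual throughput would likely be lower.

```python
# Back-of-envelope: could a 45 TOPS NPU handle on-device SLM fine-tuning?
# All figures are illustrative assumptions, not benchmarks.

def training_time_hours(params, tokens, tops, utilization):
    """Rough training-time estimate.

    Assumes ~6 ops per parameter per token for a full training step
    (forward pass ~2 ops/param, backward pass ~4 ops/param), a common
    rule of thumb for transformer training cost.
    """
    total_ops = 6 * params * tokens
    effective_ops_per_sec = tops * 1e12 * utilization
    return total_ops / effective_ops_per_sec / 3600

# Hypothetical scenario: fine-tuning a 1B-parameter SLM on 10M tokens,
# assuming the NPU sustains 20% of its 45 TOPS peak on this workload.
hours = training_time_hours(params=1e9, tokens=1e7, tops=45, utilization=0.2)
print(f"~{hours:.1f} hours")  # roughly 1.9 hours under these assumptions
```

Under those (generous) assumptions, a modest fine-tuning run lands in the "leave it running over lunch" range rather than "leave it running overnight," which is at least plausible for on-device work.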
As far as support for on-device training goes, there are a few platforms that already support it, such as TensorFlow Lite and ONNX Runtime. And you don't need an NPU to run those today, as they can rely on the GPU.
But seeing as how "NPU" is pretty new to my vocabulary, I don't know just yet whether a mid-level GPU is pretty good or whether 45 TOPS is incredibly good. But it definitely has my curiosity piqued just a bit. We've already seen some of the capabilities, such as image recognition with Recall and image generation with Cocreator on the device itself, so we have a vague idea of overall performance when it comes to the A.I. workload.
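For a rough sense of scale, here's a sketch comparing the NPU's quoted peak against an integrated GPU for SLM inference. The numbers are ballpark spec-sheet figures, and comparing INT8 TOPS against FP32 TFLOPS is not apples-to-apples, so treat this purely as an order-of-magnitude illustration.

```python
# Illustrative peak-throughput comparison for generating tokens with a
# hypothetical 3B-parameter SLM. All figures are rough assumptions.

def tokens_per_second(params, ops_per_sec, utilization):
    # ~2 ops per parameter per generated token is a common rule of
    # thumb for transformer decoding cost.
    return (ops_per_sec * utilization) / (2 * params)

PARAMS = 3e9  # hypothetical 3B-parameter SLM

# Peak compute, in ops/sec (ballpark public spec figures):
npu_int8 = 45e12   # Snapdragon X Elite NPU: 45 TOPS (INT8)
igpu_fp32 = 2e12   # Intel Iris Xe-class iGPU: ~2 TFLOPS (FP32)

for name, peak in [("NPU (INT8)", npu_int8), ("iGPU (FP32)", igpu_fp32)]:
    rate = tokens_per_second(PARAMS, peak, utilization=0.1)
    print(f"{name}: ~{rate:.0f} tokens/s at 10% utilization")
```

On paper that's a ~20x gap in favor of the NPU, though in practice token generation is usually bound by memory bandwidth rather than raw compute, so the real-world difference would be smaller.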
If it becomes possible to work out of a coffee shop with an all-day battery laptop, with no internet, and still be able to perform intensive work, then sign me up.
Having said that, these are some of the first A.I.-focused laptops to hit the market, so while I am excited to see what they can do, I'm also cautiously optimistic about the whole thing. Mainly because in order to actually use the NPU, you'd need full support from the various developer tools, frameworks, SDKs, etc. And so far, we haven't heard too much from those folks in terms of future plans.
Because a powerful engine that can run forever at the highest efficiency is great. But if it's not attached to a car, then we're not going anywhere.
Walter Guevara is a Computer Scientist, software engineer, startup founder and previous mentor for a coding bootcamp. He has been creating software for the past 20 years.