ARM isn’t content to offer processor designs that are kinda-sorta ready for AI. The company has unveiled Project Trillium, a combination of hardware and software ingredients designed explicitly to speed up AI-related technologies like machine learning and neural networks. The highlights, as usual, are the chips: ARM ML promises to be far more efficient for machine learning than a regular CPU or graphics chip, with two to four times the real-world throughput. ARM OD, meanwhile, is all about object detection. It can spot “virtually unlimited” subjects in real time at 1080p and 60 frames per second, and focuses on people in particular — on top of recognizing faces, it can detect the direction a face is pointing, as well as poses and gestures.
The software component, ARM NN, serves as a go-between for neural network frameworks like Google’s TensorFlow and ARM-based processors.
It’s going to be a while before you see this technology in action. ARM isn’t offering previews until April, with wider availability in the middle of 2018. And remember, ARM doesn’t actually make finished chips. It’s up to Qualcomm, Samsung and other companies to translate these designs into real products. The aim, however, is clear: ARM wants more devices that can handle AI tasks locally, rather than depending on a cloud-based helper like Alexa, Google Assistant or Siri. The company also expects Project Trillium to expand beyond mobile devices to include home theater, smart speakers and other categories where AI might come in handy.
Via: The Verge
Source: ARM
Read Original: Engadget