Xilinx has taken a big step towards enabling wider market adoption of edge artificial intelligence (AI), and particularly embedded vision, by entering the system-on-module (SOM) market, complete with a pre-built software stack and an app store of ready-to-deploy applications.
The first product in the company’s new portfolio of SOMs is the Kria K26 SOM, specifically targeting vision AI applications in smart cities and smart factories, along with a low-cost, out-of-the-box-ready development kit, the Kria KV260 AI vision starter kit. The company said it is addressing the rising complexity of vision AI as well as the challenges of implementing AI at the edge. The products are priced with mass-market appeal in mind: the starter kit sells online for $199, and the Kria K26 SOM for $250 in its commercial-grade version or $350 for the industrial-grade variant.
Xilinx said that today’s vision AI is complex, and many developers do not have hardware-level expertise. By moving to a higher level of abstraction, with pre-built hardware platforms, industry-standard software stacks, and a library of apps, the company aims to give millions more software developers access to AI capabilities without requiring chip-level design expertise.
This is part of the Kria development experience: to provide a self-enabled path through exploration, design, and ultimately production deployment via a vast set of online resources. Hobbyists, makers, and commercial developers alike can accelerate through each phase of the design cycle with tutorial videos, training courses, and a broad ecosystem of providers offering accelerated applications, design services, and more.
To this end, the company said it has invested heavily in its tool flows to make adaptive computing more accessible to AI and software developers without hardware expertise. The Kria SOM portfolio takes this accessibility to the next level by coupling the hardware and software platform with production-ready, vision-accelerated applications. These turnkey applications eliminate all the FPGA hardware design work: software developers only need to integrate their custom AI models and application code, and optionally modify the vision pipeline, using familiar design environments such as the TensorFlow, PyTorch, or Caffe frameworks, as well as the C, C++, OpenCL, and Python programming languages, enabled by the Vitis unified software development platform and libraries.
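To make the division of labor concrete, here is a minimal sketch of the kind of software-only work that remains for a developer in this model: the accelerated application handles the FPGA vision pipeline, while the developer writes pre- and post-processing and application logic in plain Python. The function names and shapes below are illustrative assumptions, not a Xilinx API.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to size x size and scale pixels to [0, 1].

    Typical of the input conditioning a developer keeps in software
    before handing a tensor to an accelerated model.
    """
    h, w, _ = frame.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def postprocess(logits: np.ndarray, labels: list) -> str:
    """Map the model's raw output vector to the most likely class label."""
    return labels[int(np.argmax(logits))]

# Example with a synthetic 480x640 RGB frame and a fake 3-class output.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tensor = preprocess(frame)   # float32 array of shape (224, 224, 3)
label = postprocess(np.array([0.1, 2.5, 0.3]), ["car", "person", "bike"])
```

In a real deployment the `tensor` would be consumed by the accelerated pipeline rather than a Python model, but the surrounding code stays in the developer's language of choice.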
The new Kria SOMs also enable customization and optimization for embedded developers with support for the standard Yocto-based PetaLinux, and, for the first time, Xilinx is announcing an upcoming collaboration with Canonical to provide support for Ubuntu Linux, the distribution highly popular with AI developers. This gives AI developers a familiar environment and interoperability with existing applications. Customers can develop in either environment and take either approach to production; both will come pre-built with a software infrastructure and helpful utilities.
“For smart vision applications, developers and innovators want the Ubuntu experience they’re used to from cloud to desktop,” said Thibaut Rouffineau, vice president of marketing, Canonical/Ubuntu. “Together with Xilinx, we’re excited to provide Kria SOM customers with out-of-the-box productivity, frictionless transition from development to production, and guaranteed stability and security in the field.”
Saving nine months of development time when adding AI
The introduction of ready-to-deploy modules and apps is part of a growing trend to take the mystery out of embedding edge AI and vision AI and to make them accessible to wider end markets.
According to embedded computer vision technology analyst Jeff Bier, not every company has a machine learning or computer vision department, resulting in a big knowledge and skills gap when it comes to implementing edge AI. Bier, founder of the Edge AI and Vision Alliance, which is holding the 2021 Embedded Vision Summit next month, explained in a recent briefing with embedded.com that more and more vendors are striving to make the technology accessible.
As an example, he said, not many companies have the resources or the skills to run a deep neural network (DNN) on the data they capture from a camera. Hence, he said, many semiconductor companies are offering more software reference designs as well.
In a press briefing at the launch event for the new SOMs, Chetan Khona, director of industrial, vision and healthcare at Xilinx, said, “Production ready systems are important for rapid deployment [of embedded vision AI]. Customers are able to save up to nine months in development time by using a module-based design rather than a device-based design.” He added that with the starter kit, users can get started within an hour, “with no FPGA experience needed.” To get going, the user connects the camera, cables, and monitor, inserts the programmed microSD card, and powers up the board; they can then select an accelerated application of their choice and run it.
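On the board itself, selecting and running an accelerated application comes down to a few commands over a serial or SSH session. The sketch below uses the `xmutil` utility that ships with the Kria software stack; the specific app name is an assumption for illustration, and exact command syntax should be checked against Xilinx’s Kria documentation for the release in use.

```shell
# List the accelerated applications installed on the SOM
sudo xmutil listapps

# Unload whatever app firmware is currently loaded (if any)
sudo xmutil unloadapp

# Load an accelerated app; "kv260-smartcam" is an illustrative name
sudo xmutil loadapp kv260-smartcam
```

With the app firmware loaded, the corresponding application can then be launched from userspace, which is what makes the claimed no-FPGA-experience, under-an-hour start plausible.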
A senior Xilinx executive, Kirk Saban, said, “Xilinx’s entrance into the burgeoning SOM market builds on our evolution beyond the chip-level business that began with our Alveo boards for the data center and continues with the introduction of complete board-level solutions for embedded systems.” Saban, who is vice president, product and platform marketing at Xilinx, added, “The Kria SOM portfolio expands our market reach into more edge applications and will make the power of adaptable hardware accessible to millions of software and AI developers.”
The Kria K26 SOM is built on the Zynq UltraScale+ MPSoC architecture, which features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. The SOM also features 4GB of DDR4 memory and 245 I/Os, which allow it to adapt to virtually any sensor or interface. Xilinx said that with 1.4 tera-ops of AI compute, the Kria K26 SOM enables developers to create vision AI applications offering more than 3X higher performance at lower latency and power compared to GPU-based SOMs, which is critical for smart vision applications such as security, traffic and city cameras, retail analytics, machine vision, and vision-guided robotics.
As part of the new approach of offering accelerated applications for software-based design, Xilinx also announced the first embedded app store for edge applications. Building out beyond its Alveo catalog of apps for the data center, the Xilinx app store now offers customers a wide selection of apps for Kria SOMs from Xilinx and its ecosystem partners. Khona explained further, “We are building up the library of applications. The app store is not yet automated, so you need to contact the IP vendor [if it is from an ecosystem partner].” The apps available directly from Xilinx are open-source accelerated applications, provided at no charge, and range from smart camera tracking and face detection to natural language processing with smart vision.
Xilinx highlighted some early applications that have already deployed the Kria SOM. These include Kutleng Engineering Technologies, which deployed tracking cameras for wildlife safety in South Africa; Kutleng said it was able to fast-track the launch of several new products within just two months using vision functions available through Xilinx’s accelerated applications. In addition, Optimized Solutions Limited in India deployed AI-based vision for multi-object detection, recognition, and identification in a smart city application using the Kria SOM.