Using AI to create GUIs for embedded devices




Delivering a pleasant, fluid user interface is one of the biggest concerns for developers of modern software applications, whether for embedded systems, mobile devices, or desktop computers. Developing a graphical user interface (GUI) is a costly step in the development process: it requires good design work to create the application's user interface/user experience (UI/UX) in addition to the coding itself, and it often goes through rounds of modification and rework as the team refines the user experience.

This article shows how to reduce the time spent developing and refining these interfaces using KnowCode AI, a tool that uses deep learning to understand prototype images and convert them into GUIs for embedded devices. The approach identifies the layout components drawn by the designer, generates an XML markup file, and finally transforms that XML into a project ready to run on embedded systems, mobile devices, or desktops. The whole process takes only a few minutes and uses open source technologies.

The Difficulties in Creating a User Interface

When the idea for a new application arises, the user interface and experience are extremely important elements. Projects commonly need many hours of work dedicated to deciding how the application should be presented to the user and to developing this initial executable interface.

In other words, transforming a good application idea into source code involves considerable cost and time, especially on embedded devices, which have less processing power and memory than conventional computers.

The KnowCode project emerged with the objective of reducing the time spent developing software interfaces, assisting in the initial and costly process of turning screen designs into executable, functional code. The idea is a tool that uses deep learning to understand the images, identify the components drawn by the designer, convert each image into an XML markup file, and then transform that file into a project ready to run on system-on-chip devices, cell phones, or computers, using open source technologies such as TotalCross.

KnowCode Execution Process

KnowCode has two main execution modules: KnowCode-AI and KnowCode-XML. KnowCode-AI is based on a deep neural network trained on a data set of more than 60,000 images. KnowCode-XML is an open source library that allows developers to create the GUI using Android XML and run it with a low footprint on Linux ARM devices with the TotalCross SDK.

The process starts with the prototype images, which can be created with any design tool, such as Figma, Adobe XD, or Photoshop. KnowCode-AI is not tied to any particular design tool, which means it can even process screenshots of existing systems to migrate from one technology to another. The next step is to run the algorithm on the prototype images; it detects every element on the screen, such as Button, TextView, and ImageView.

The output of KnowCode-AI is an XML markup file that serves as input to the next module, which transforms the XML into an executable for various device platforms such as Android, Linux, Linux ARM, and Windows.

This step uses the TotalCross SDK and the KnowCode-XML library, both free and open source, which enable the creation of an executable project on the most diverse platforms. In other words, KnowCode with its two modules, KnowCode-AI and KnowCode-XML, lets the developer start with a screen image and end with a project that runs on different platforms, ready for functionality to be added to each screen in the Java language.

Sample Application Walk-Through

The Home Appliance application, illustrated in Figure 1, is a design created to demonstrate the complete process of using the KnowCode tool. The code for this application is available on GitHub.


Figure 1. A design made to test the neural network. (Source: TotalCross)

First, the screen design is submitted to the neural network. Figure 2 shows the network's output markings, which identify 14 of the 17 components in the image.


Figure 2. The design with markings made by the neural network. (Source: TotalCross)

The next step is to add the missing components and adjust the markings whose contours differ slightly from the actual components on the screen. Figure 3 illustrates the user correcting the markings.

In Figure 3, the user adjusts the neural network's markings, changing what was detected and creating new markings simply by dragging the mouse and entering the component type.


Figure 3. User adjustments to the neural network markings. (Source: TotalCross)

Each time a new screen design is tested and the user corrects the markings, two XML files are generated: one feeds the neural network and the other describes the screen that will be used to generate the application.

The first XML file lets the network learn to make more precise markings in subsequent training sessions: the script saves the image together with the corrected markings so the model improves continuously.

The other file is an Android XML layout that represents the screen itself (Figure 4). We chose this technology because we didn't want to introduce a new format unnecessarily; why not apply one of the most established technologies to a different purpose (running on Linux ARM)?


Figure 4. Android XML output. (Source: TotalCross)
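
To give a sense of the format, the fragment below is a hypothetical sketch of the kind of Android XML a screen like this might produce. The IDs, sizes, and texts are illustrative, not actual KnowCode-AI output, but the elements are the standard Android layout components named above (TextView, Button, ImageView).

    <!-- Hypothetical sketch of a generated layout; real KnowCode-AI
         output will differ in structure, IDs, and attributes. -->
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:id="@+id/title"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Home Appliance" />

        <ImageView
            android:id="@+id/deviceImage"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_below="@id/title" />

        <Button
            android:id="@+id/powerButton"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_below="@id/deviceImage"
            android:text="Power" />
    </RelativeLayout>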

To run the Android XML on the device, we just need to create a new project with the TotalCross SDK, import the XML files, and add the KnowCode-XML library to load the XML in the application, as sketched below.
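
As a minimal sketch of that step, the class below shows a TotalCross application loading a generated layout. The XML file name is a placeholder, and the XmlContainerFactory call follows the KnowCode-XML samples; treat the exact package and method names as assumptions and check the library's repository for the current API.

    // Minimal sketch: a TotalCross application that displays a
    // KnowCode-generated Android XML layout.
    import com.totalcross.knowcode.parse.XmlContainerFactory; // KnowCode-XML (assumed package)
    import totalcross.sys.Settings;
    import totalcross.ui.Container;
    import totalcross.ui.MainWindow;

    public class HomeApplianceApp extends MainWindow {
        public HomeApplianceApp() {
            setUIStyle(Settings.MATERIAL_UI); // consistent material look on all platforms
        }

        @Override
        public void initUI() {
            // Parse the Android XML (placeholder path) and build the
            // equivalent TotalCross UI, then show it as the current screen.
            Container screen = XmlContainerFactory.create("xml/homeappliance.xml");
            swap(screen);
        }
    }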

Figure 5 illustrates the complete KnowCode flow, from acquisition of the images to the conclusion of the project.


Figure 5. The KnowCode flow. (Source: TotalCross)

Suboptimal Use Cases

KnowCode-AI performs well when the input is a screen image with an aspect ratio commonly used in the market, such as 3:4, 16:9, 18:9, or 21:9. Screens with aspect ratios far from these usually result in a low hit rate in the network's markings. This is not an impediment to continuing with the project, but many more adjustments will be necessary, increasing the final UI creation time.

Conclusion

GUI creation is a big challenge for designers and developers, and on embedded devices the challenge is even bigger: the adoption of low-level technologies like C/C++ introduces a huge time gap between the design prototype and a real application ready to run on the device.

KnowCode brings a new approach to reducing this gap, using computer vision to convert prototype images or screenshots of existing systems into real applications built on high-level, established, open source technologies such as TotalCross and Android XML. This approach reduces GUI development time by up to 80% while maintaining the same performance, even on low-end devices. If you want to try the results of KnowCode AI, just send your GUI prototypes to devteam@totalcross.com and we will send you the executable files and the source code of the application.

We welcome your comments and suggestions. =)


Bruno Muniz is CEO at TotalCross. An entrepreneur for over 12 years, he has founded four companies, of which TotalCross is the fourth. Bruno has over 15 years of experience in software development, especially in mobile applications. He has a master's degree in computer science and is a startup enthusiast who is heavily involved in the startup scene in Brazil. Bruno considers himself an open source noob but is always learning =P
Iaggo Quezado is a developer at TotalCross. He is a programmer, gamer, and computer science student who enjoys long gaming and programming sessions in a room glowing with RGB lighting.
Patrick Martins is a programmer at TotalCross focused on machine learning, data science, and computer vision. He is a computer engineering student, maker, and aquarist.
