Using Linux with critical applications: Like mixing oil and water?




The rise of Linux has been nothing short of meteoric. Originally developed for personal computers based on the Intel x86 architecture, it now runs on the tiniest embedded devices and yet also powers all 500 of the world’s fastest supercomputers, according to Top500, an independent benchmarking project. It is commonplace in mobile phones (in the form of Android), consumer electronics, in-vehicle infotainment (IVI), networking equipment… the list is almost endless.



Linux is simply an operating system like any other, but its free, open source status offers obvious commercial advantages. Strictly speaking, the term “Linux” refers specifically to the kernel – the part of the operating system that facilitates interactions between the hardware and the software. Familiar Linux desktop distributions such as Debian, Fedora and Ubuntu supplement that Linux kernel with supporting system software and libraries provided by the GNU project.

The relative rarity of Linux on desktop machines is a reminder that its rise to prominence is not evenly distributed across all sectors and environments. In particular, the world of safety-critical embedded software presents a fundamental blocking issue because of the traceability demands of the standards with which such systems must comply. As embedded projects have evolved into complex computer systems whose many modules are brought together by build technology, the safety requirements placed upon them have become increasingly difficult to verify. No one person can know exactly how every module is built, nor how the code in one module interacts with another.

The Siren call of embedded Linux

Imagine a start-up with a great idea for a safety critical application. It could be an industrial controller, perhaps, or an innovative automotive braking system, but consider for now the specific example of a life-changing medical device. There’s a world of pain ahead in seeking FDA approval in the USA, while the EU’s MDR/IVDR compliance rules demand adherence to best practice without much clarity about just what that is. But right now, there’s no certainty that the idea can work at all – and the main focus is the science; software is merely a means to an end. A proof of concept (PoC) seems like a logical place to start. Worry about certification later.

It’s likely the budget will be tight in such a situation, and the selection of Linux is a no-brainer. You can download it in a few minutes, and there’s an internet full of advice, support and guidance. It costs nothing—no development seat fee, no licensing—and if you need help, there’s a massive pool of developer expertise to call upon. What could possibly go wrong?  

What are the problems?

Some of the challenges can be dealt with quite easily. For example, standard Linux does not have the real-time capabilities that are likely to be necessary for many applications in these sectors. The development of Real-Time Linux in the form of the PREEMPT_RT patch represents the Linux Foundation’s answer to those concerns, but questions remain about its suitability for hard real-time applications, for which alternatives may need to be sought (see “The Real-Time Linux Kernel: A Survey on PREEMPT_RT,” ACM Computing Surveys). It has also yet to be refined to the point where it can be merged into the mainline Linux kernel, although that is the ultimate aim of its developers.

What about the quality of that pool of developer expertise? Developers of the Linux kernel and system software are predominantly software professionals. Many contributors to the kernel outside the Linux Foundation are paid to contribute as part of their regular employment, so their professionalism is not in question. In fact, open source software (OSS) is potentially more “correct” (Open Source Initiative, 2018): because the source code is accessible to everyone, it is continuously analysed by a large community, and anyone can fix bugs as they are found. This in turn means end users do not have to wait for the next official release.

So far, so good. But the functional safety standards such projects must adhere to have core objectives that must be met: IEC 61508 (International Electrotechnical Commission, 2010), ISO 26262 (International Organization for Standardization, 2018), and IEC 62304 (International Electrotechnical Commission, 2006) respectively for our industrial, automotive, and medical examples. These standards lay down a managed development lifecycle that ensures rigour in terms of quality assurance, configuration management, change management and verification/validation:

Quality Management: At the heart of any critical system is a robust quality management process, usually to ISO 9001 (International Organization for Standardization, 2015) as a minimum. As a generalisation, quality management within an open source software environment tends to be less rigid. That said, compliance with a standard such as ISO 9001 ensures only that processes are defined and followed, and that the quality of the product is repeatable, not necessarily high.

Change Management: The downside of anyone being able to fix bugs as they are found in open source developments is that it bypasses formal change control processes.

Configuration Management: Typically, both open source projects and more traditional developments use a repository (e.g. Git, SVN) that provides a controlled environment not only for the latest development version but also for builds, candidates and actual releases.

Verification and Validation: A key problem area for mission-critical software is in the verification and validation activities. The safety case requires evidence to support the argument that the software is fit for purpose; that is, it meets the documented requirements.

In common with many sector specific functional safety standards, ISO 26262 describes a V-model for automotive developments (Figure 1).


Figure 1: Traditional sequence for the application of an automated tool chain to the ISO 26262 process guidelines. (Source: LDRA)

This requires traceability of the requirements through the full lifecycle, verification and validation of the design, and verification and validation of the implementation. These stages are all difficult to achieve when adopting open-source solutions.

Functional safety standards typically also recommend the adoption of language subsets (often known colloquially as coding standards) such as the popular guidelines described by MISRA. Empirical evidence suggests that adoption of such guidelines is rare within the open-source community, perhaps because the guidelines themselves are not open source.

The net result is that you can develop application software of exemplary quality in line with the functional standard of choice. However, if your operating system doesn’t also achieve that level of quality—and just as importantly, doesn’t provide evidence of that quality—then your system cannot be compliant.

The ELISA project

The Linux Foundation is no stranger to this conundrum. In February 2019, the Enabling Linux in Safety Applications (ELISA) open-source project was launched with the aim of helping companies to “build and certify Linux-based safety-critical applications and systems whose failure could result in loss of human life, significant property damage or environmental damage.” While that is a laudable aim, and the project is backed by some very significant supporters such as Arm, BMW and Toyota, it remains an aspiration for tomorrow and is of no use if your development project is for today.

One-shot adoption

Right now, an open-source operating system cannot be used, uncontrolled and incrementally, within a mission-critical project. However, this does not mean that OSS cannot be used at all (with the proviso that, if intellectual property is a concern, many OSS licences require that any derived code is also made available as OSS and contributed back to the community).

Linux is no different from any other externally developed software package (e.g. a library or driver): it should be treated as Software of Unknown Pedigree (SOUP), with a full verification and validation process applied to it—and reapplied in the event of any changes to the base package, which should be introduced only through a managed change-control process.

The traditional application of formal test tools is illustrated in the ISO 26262 ‘V’ model diagram. A team facing the task of building a standards-compliant application on an open-source operating system will be required to follow a more pragmatic approach not only because the code already exists, but because the code itself may well be the only available detailed “design document.” In other words, the existing code effectively defines the functionality of the system rather than any documentation.

In general, for a library or a driver, that might be a practical proposition. But thinking back to the medical device example, will such a development team really want to spend their time reverse engineering an operating system? Even if the OS footprint is minimised through the use of Yocto, for example, such an exercise would demand a knowledge of low-level software that would at best be a major distraction and at worst might be well outside an application developer’s comfort zone.

A pragmatic solution for today

If all proof-of-concept developments were to be completed on certifiable operating systems, then many of them would never even start. Any pragmatic solution must acknowledge that the use of Linux in some form or another is almost inevitable.

What happens after that depends very much on timescales. How long will the development take? And when will ELISA come of age?

Portability is one of the major advantages of any Portable Operating System Interface (POSIX) compliant development. While many Linux implementations are not fully POSIX compliant, it is entirely possible to limit the use of their functionality to those features that are, and still have a more than adequate toolkit to build a practical system.

Even when application development moves from a sandbox “hack it and see” approach to a formally documented, compliant development lifecycle, it is entirely practical for even an extended development team to continue to leverage Linux in this way. This means performing all the verification and validation activity demanded by the functional safety standard of choice, including requirements tracing, the application of coding standards, unit test (Figure 2), structural coverage analysis and whatever else is required.


Figure 2: Performing unit test with the LDRA tool suite. (Source: LDRA)

Eventually a day of reckoning will come when the product has to be readied for market. It may be possible for ELISA to be deployed by then, in which case the road ahead is clear. If not, however, there are several POSIX-conformant, commercially available RTOSes, such as those from QNX and Lynx, that are certified for use. In principle, it is then a simple task to acquire a licence for the commercial RTOS of choice, recompile your application, re-run the dynamic analysis tests in the new environment, and hit the market!

To make this a practical proposition, there are two key considerations to address early in the project.

POSIX compliance and conformance: The terms “compliance” and “conformance” may seem like synonyms, but beware! Figure 3 highlights the possible degrees of subtle mismatch, capturing in a nutshell the blurring of boundaries between what is defined by the POSIX specification and what is implemented in practice.


Figure 3: The Open Group’s illustration of architecture compliance and conformance. (Source: LDRA)

That raises questions about assumed portability. For example, if you’ve developed a system deploying an RTOS that includes non-conformant features, any change of RTOS is likely to involve at least a partial rewrite.

Now suppose that your original system used a fully POSIX-conformant RTOS and your selected replacement is conformant, but only partially so. How can you be sure that the new OS implements all of the features leveraged in the code base?

Fully automated test and requirements traceability: “Re-run the dynamic analysis tests in the new environment” is something of a throwaway phrase; if all of those tests were performed manually then, even if things go smoothly, the implied overhead could be considerable.

Now suppose that the shift to the certified RTOS of choice has necessitated a partial re-write. Keeping track of any implications for requirements, design and test could easily become a project management headache at exactly the time in the project when it is least welcome.   

Ensuring a fully integrated, automated approach to test and requirements traceability can minimise that impact, making the identification of necessary retests easy and their execution a simple matter of re-running them.


Figure 4: Impact analysis of changing requirements with the LDRA tool suite. (Source: LDRA)

Conclusions

Right now, the use of Linux as an operating system of choice for the most safety-critical applications is not an option. But that doesn’t mean it can’t be used during the development of such an application.

That might change if ELISA or a similar development makes it an attainable aim. But if not, the portability inherent in POSIX offers an option for the transition from proof of concept to a certifiable project.

Making that a practical proposition requires careful use of POSIX features, and a seamless mechanism for retest if and when the time comes to port the application from Linux to a standards-compliant alternative.  


Mark Pitchford is a technical specialist with LDRA Software Technology. Mark has over 30 years’ experience in software development for engineering applications and has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety- and security-critical environments, working with standards and frameworks such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0. Mark earned his Bachelor of Science degree at Nottingham Trent University and has been a Chartered Engineer for over 20 years.


The post Using Linux with critical applications: Like mixing oil and water? appeared first on Embedded.com.




