Security & safety for embedded medical devices
01 August 2020
The security & safety of embedded medical devices has become one of the top priorities when it comes to designing connected medical equipment. Yet it wasn’t too long ago that, while many would have acknowledged the need for something to be done about the security of medical device designs, little was actually done.
This article was originally featured in the August 2020 issue of EPDT magazine.
Due to the long design cycles for medical devices, implementation of security measures received a lower priority – and often had to wait for the next generation or two of product releases. However, as Marten L Smith, Business Development Manager in the Medical Products Group at microcontroller & integrated circuit manufacturer Microchip Technology, explains, with the advent of connected devices that kind of thinking has evaporated – especially given the many recent IoT (Internet of Things) device security breaches…
There is a never-ending stream of news involving cybersecurity attacks and security vulnerabilities in all types of connected equipment. And unfortunately, medical devices are not exempt from these types of attacks and announcements. As more and more medical devices feature connectivity, becoming part of what is sometimes now being called the Internet of Medical Things (IoMT), the vulnerability of these devices – and indeed patient data – continues to rise. Concerns over legal liability, as well as the protection of company brand, intellectual property and revenue streams, are very real.
Security for embedded medical device designs
There are many different types of security threats that need to be addressed. The types of security measures that need to be designed into an embedded medical device depend on the application and on how the device is going to be used.
Remote patient monitoring and patient compliance are among the biggest reasons to connect medical devices to the cloud. Unfortunately, once a device is connected to the cloud, it can become a target for hackers. Two common threats to cloud-connected medical devices are the denial-of-service (DoS) attack and the man-in-the-middle attack.
An example of a denial-of-service attack is when hackers take control of a connected remote patient monitor system and flood a cloud server with many redundant requests, in order to overload the server and prevent legitimate requests from being fulfilled. A worst-case scenario is a distributed denial-of-service attack (DDoS). An example of this would be where all connected patient monitors in a hospital are hacked into and made to send so many redundant requests that they overload the one cloud server that supports all the monitors.
An example of a man-in-the-middle attack is when a hacker gains access to a connected infusion pump delivering a morphine drip to a patient. The hacker can intercept the communications between the pump and the server, as well as send false communications to either one of them. Among other things, the hacker would potentially be able to control the pump, and either not deliver medication to the patient, or deliver an overdose. Both scenarios could be disastrous for the patient and everyone involved.
Traditional countermeasures for these and other types of attack have been fully software-based. However, these countermeasures are now being implemented in faster and more cost-effective hardware. An ideal hardware implementation integrates many of the traditional software functions into dedicated chips.
There are many different types of security chips. Medical device designers need to decide very early in the design cycle which functions and layers of security are necessary for their design, and then select the security chips that implement those functions.
Some examples of different kinds of security chips are cryptography-enabled microcontrollers (MCU) and microprocessors (MPU), as well as secure elements. Combined with well-architected firmware and a managed cloud security architecture, these chips offer security features and countermeasures that provide confidentiality, data integrity and authentication to connected medical devices.
One type of chip that mitigates both the denial-of-service attack and the man-in-the-middle attack is generally referred to as a secure element, or CryptoAuthentication™ device. Typically, these are small chips (for instance, in 8-pad UDFN or 8-pin SOIC packages) that are easily added into the design of a connected medical device. A secure element chip is designed to work as a companion to the medical device design’s MCU.
These secure element chips offer features such as a high-quality (hardware/true) random number generator, hardware-based cryptography, secure key storage and secure boot capabilities for microcontrollers. They also offer countermeasures, such as side channel attack protection and active anti-tampering, which can reduce potential backdoors linked to medical device software weaknesses.
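The secure boot capability mentioned above follows a simple principle: before the MCU runs its application firmware, it checks the image against a reference value held in protected storage, and refuses to boot anything that has been modified. The sketch below illustrates that gate in C; the checksum function is a deliberately simplified stand-in for the cryptographic digest and signature check a real secure-boot flow would use, and both function names are illustrative, not from any vendor library.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for a cryptographic digest (a real secure-boot
 * flow would use e.g. SHA-256 plus a signature verification). */
uint32_t image_digest(const uint8_t *image, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (sum << 1) ^ image[i];   /* order-sensitive mixing */
    return sum;
}

/* Boot gate: allow the application to run only if the computed digest
 * matches the reference value provisioned into protected storage. */
int secure_boot_ok(const uint8_t *image, size_t len, uint32_t reference)
{
    return image_digest(image, len) == reference;
}
```

Any single-bit change to the firmware image changes the computed digest, so a tampered image fails the gate and the device can fall back to a recovery state instead of executing compromised code.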
A simple analogy for a secure element chip is a vault that protects secrets. These secrets are inserted into the vault during the manufacturing process. In the case of a secure element chip, the secrets are called keys. A key acts as the credential that the secure element presents to the server, which then grants the medical device a connection. This process of granting a connection is called authentication.
The process of inserting these keys into the chip is called key provisioning. Provisioning can be viewed as a pre-programming process. It is done in the chip manufacturer’s secure facility. This secure provisioning process does not allow any human to see or access the keys, so they are never exposed.
The result is that the medical device is physically secure when it is out being used in hospitals, clinics or the patient’s home. Secure provisioning can also eliminate the threat of third-party board manufacturers potentially stealing the keys stored in the secure element chip.
When the medical device is in use and wants to connect to the cloud, it goes through an authentication process with the server. During this process, the cloud server sends a challenge to the secure element chip, which returns a response derived from a secret key. If the secure element’s response is correct, the device is granted access to the server.
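The challenge-response exchange described above can be sketched in a few lines of C. This is an illustration only: the toy keyed hash below stands in for the hardware HMAC-SHA256 (or ECDSA) computation a real secure element performs, and the function names are hypothetical. The point is the shape of the protocol – the key never leaves the device; only the response to a fresh challenge does.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy keyed hash (FNV-1a over key then message), standing in for the
 * HMAC-SHA256 a real secure element computes in hardware. */
static uint32_t toy_mac(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len)
{
    uint32_t h = 2166136261u;                        /* FNV offset basis */
    for (size_t i = 0; i < key_len; i++) { h ^= key[i]; h *= 16777619u; }
    for (size_t i = 0; i < msg_len; i++) { h ^= msg[i]; h *= 16777619u; }
    return h;
}

/* Device side: the secure element derives a response from its stored key.
 * The key itself is never transmitted. */
uint32_t device_respond(const uint8_t *stored_key, size_t key_len,
                        const uint8_t *challenge, size_t ch_len)
{
    return toy_mac(stored_key, key_len, challenge, ch_len);
}

/* Server side: recompute the expected response from its own copy of the
 * key and compare it with what the device sent back. */
int server_verify(const uint8_t *known_key, size_t key_len,
                  const uint8_t *challenge, size_t ch_len,
                  uint32_t response)
{
    return toy_mac(known_key, key_len, challenge, ch_len) == response;
}
```

Because each connection uses a fresh random challenge, a man-in-the-middle who records one exchange cannot replay the response later: the next challenge will be different, and only a party holding the key can answer it.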
Secure authentication to a cloud server is a complex and sometimes unfamiliar process for medical device designers to implement on their own. To address this challenge, the secure element chip can also be pre-configured and pre-provisioned in the chip vendor’s factory with credentials to authenticate to popular cloud services, such as Amazon Web Services (AWS) IoT Core, Microsoft Azure IoT Hub or Google IoT Core. Using a provisioning service can eliminate the complexity, design delays and high cost usually associated with medical device designers trying to do this themselves.
Despite the many security concerns facing designers today, solutions to improve medical device protection are readily available. Secure element, or CryptoAuthentication™, chips provide a comparatively simple and cost-effective way to implement cloud authentication and improve the overall security of today’s connected medical devices.
Safety topics for embedded medical device designs
Safety requirements have expanded in many industries. For our purposes, we can use the definition that safety processes and functions detect failures in an electrical and/or electronic system or product, and prevent injury, harm or potential life-threatening events. New safety issues and requirements are appearing in many different application areas, including automobiles, industrial electronic systems, home appliances – and of course, medical devices.
Much has already been done in industry to design safe and robust medical devices; however, since electronic systems can fail, there needs to be a way to safely handle such failures.
Simply put, the goal of functional safety is to detect failures and to react appropriately, so people don’t get hurt. This is accomplished in two basic ways: the first way is to reduce systematic errors during the design process; the second is for the design to be able to detect random failures and default to a safe mode when it does so.
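The second mechanism – detect a random failure and default to a safe mode – often comes down to a plausibility check. The C sketch below shows the idea for a hypothetical infusion pump: the state names, function and limits are illustrative assumptions, not taken from any standard or product.

```c
/* Hypothetical pump states (illustrative names only). */
typedef enum { PUMP_RUNNING, PUMP_SAFE_STOP } pump_state_t;

/* Plausibility limits: flow readings outside the physically possible
 * range indicate a sensor or ADC fault, not a real flow value.
 * The numbers are illustrative. */
#define FLOW_MIN_ML_H   0
#define FLOW_MAX_ML_H 500

/* Detect a random failure in the flow sensor path and, if one is
 * found, default to the safe state (stop delivery, raise an alarm). */
pump_state_t check_flow_sensor(int flow_ml_h, pump_state_t current)
{
    if (flow_ml_h < FLOW_MIN_ML_H || flow_ml_h > FLOW_MAX_ML_H)
        return PUMP_SAFE_STOP;   /* implausible reading: fail safe */
    return current;
}
```

The check cannot prevent the sensor from failing – that is the point made above about total failure rates – but it converts an unsafe failure (delivering medication on the basis of a garbage reading) into a safe one (a controlled stop).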
One thing to bear in mind is that functional safety does not reduce total failure rates. That must be addressed by the design and manufacturing quality processes of both the individual components used in the design and the medical device itself. The goal of designing to functional safety standards is to convert unsafe failures to safe failures. It is also a process by which the designer can define an allowable level of unsafe failures.
Safety issues with medical devices have generated their fair share of negative news headlines. Ensuring the safety of medical devices is critical for designers, manufacturers and healthcare providers, as medical devices can negatively impact the health of the patients who rely on them. Like security, safety needs to be considered at the beginning of each medical device design.
There are many functional safety standards for many different types of industries. The safety of an embedded medical device design may depend on designing to more than one of these standards. For example, medical device designers have found that they not only need to ensure their designs are based on the IEC 62304 standard for software life cycle processes – but that they also need to incorporate the industrial IEC 61508 functional safety standard into their design process. In fact, the IEC 62304 standard encourages those designing to this standard to also use IEC 61508 as a source for good software methods, techniques and tools.
It is important to keep in mind that functional safety is not just the hardware or software in the medical device, but includes an entire design process and ecosystem for the design to be safe. For example, MCU parts designed for functional safety applications are supported by hardware integrated functions, software diagnostic test libraries, safety manuals and FMEDA (failure modes, effects & diagnostic analysis) reports, depending on the standard and the level of safety they support.
The features of this ecosystem not only aid in meeting the safety standards, but they can also play a significant role in detecting failures during the design’s run-time operation. Functional safety considerations should play a significant part in the selection of the MCUs that designers use for safe medical device designs.
Here are a few examples of functional safety MCU product and support features that can be important in detecting failures for a medical device design:
• Diagnostic libraries that run both at reset and at run-time, to ensure that there are no failures in the system.
• MCU development tools need to be considered safe, so they do not introduce failures into the system. Development tool qualification to functional safety standards is an important part of this.
• Designing with MCUs that have integrated and intelligent peripherals helps to increase reliability and monitoring for safety-critical applications.
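As an illustration of the first bullet, a typical entry in a diagnostic library is a memory self-test. The C sketch below is a minimal march-style RAM test, assuming nothing beyond standard C; real MCU diagnostic libraries (for instance, those shipped with functional-safety-qualified parts) are more thorough and are themselves qualified to the relevant standard.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Minimal march-style RAM test: write and read back complementary
 * patterns in each cell to catch stuck-at faults. The test is
 * destructive, so a real diagnostic library either runs it at reset,
 * before the RAM is in use, or saves and restores the region when
 * running it periodically at run-time. */
bool ram_test(volatile uint8_t *region, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        region[i] = 0x55;
        if (region[i] != 0x55) return false;   /* stuck bit detected */
        region[i] = 0xAA;
        if (region[i] != 0xAA) return false;
    }
    return true;
}
```

On a failure, the system would not continue normal operation but would transition to its defined safe state, in line with the fail-safe principle described earlier.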
In terms of medical devices, safety should never be compromised. Together, these hardware and software features help ensure that medical devices operate as intended, with a safe shutdown if any exception or issue arises.
Both secure and safe
In the past, both the security and safety of embedded medical devices were typically afterthoughts in the design process. Failing to design security and safety functions into a medical device is no longer acceptable. Public safety, along with a company’s reputation, legal exposure and financial success, depends on designs being both secure and safe.