When it comes to producing Software as a Medical Device (SaMD), we know that there are standards to guide us along the path of producing the right device in the right way (IEC 62304, IEC 62366-1, IEC 82304, IEC/TR 80002-1, IEC 80001-1, IEC 81001-5-1 - I could go on).
But what about the basics? I was talking to a colleague recently about what I would recommend as early steps in bedding quality management into software production in a young company, and I realised that there is plenty I would recommend that the standards say nothing about. The standards are concerned with many of the outputs and consequences of good software production and quality foundations, but don't tell you how to go about doing it in an efficient, scalable and sustainable way.
So, here are some basics to consider if you're starting out on a software production journey (and these go for any software production, not just SaMD):
1. Adopt a complete software configuration management (SCM) approach
SCM is required by the software standard (IEC 62304) anyway, but getting this right early helps people move fast and not make mistakes.
a) Use a distributed source code management platform such as Git
It's pretty standard to use Git as your source code repository, but other alternatives are available (such as Mercurial). You can host Git repositories yourself, or use online service providers.
The advantages of starting out with Git are that your source code can be worked on by multiple people at the same time, in a very agile way, allowing different features to be developed concurrently.
When choosing your SCM platform there are plenty of providers out there hungry for your business, and they all provide great services and are well worth it.
These tools all support the concepts discussed here, as well as having work tickets, wikis to capture information and lots of integrations to other tools.
b) Control changes to source code using a workflow, such as GitFlow
I would say always use GitFlow. Perhaps there are aspects of source code change control that you'd like to do slightly differently, and that's OK, but I'd strongly suggest that you get your head around GitFlow and use it as your inspiration. GitFlow has bedded in as the industry standard, and for good reason. With this change control paradigm, your developers can work on multiple features at the same time while maintaining a working baseline codebase that is protected from unmanaged changes, and you'll have an always-stable "master" codebase to work from, which your test and release process can segregate from the product in development.
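The branch flow is easiest to see in a throwaway local repository. This is a minimal sketch: the branch and tag names follow GitFlow conventions, but the file contents and commit messages are invented for illustration.

```shell
#!/bin/sh
# A throwaway local repository demonstrating the GitFlow branch flow.
# Branch/tag names follow GitFlow conventions; file contents are invented.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # start on the "master" baseline branch
git config user.email demo@example.com
git config user.name "Demo User"

echo "v0" > app.txt
git add app.txt
git commit -qm "initial release baseline"

git checkout -qb develop                  # integration branch for ongoing work
git checkout -qb feature/login            # each feature gets its own branch
echo "login" >> app.txt
git commit -qam "add login feature"

git checkout -q develop                   # feature done: merge back (via review)
git merge -q --no-ff feature/login -m "merge feature/login into develop"

git checkout -q master                    # release: promote develop to master
git merge -q --no-ff develop -m "release develop to master"
git tag v1.0.0
```

The `--no-ff` merges keep an explicit merge commit for each feature and release, which is exactly the point where a code review record can live.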
GitFlow is very compatible with the concepts required in IEC 62304 and ISO 13485 when you use code review, which is what we'll discuss next:
c) Establish code review early on
All of the major SCM providers offer this, variously called a "merge request" or a "pull request". These tools capture a code review as the step between the development of a feature and the "merge" of the code into the main production codebase, so they fit very nicely into an IEC 62304 software development system, providing the kind of control you need over the production code. To get the best out of merge requests, consider:
any rules you want about numbers of reviewers, or what their skills are - you can usually put people into groups that can help you achieve this (e.g. have a group of senior developers, or approvers)
attaching any acceptance criteria you have for requirements to the request
using checklists that ensure all of your requirements are met (e.g. has the coding standard been followed, have the requirements been satisfied, has the documentation been updated)
integrating automated testing (see "continuous integration" later on) into the requirements for a review to be completed
One thing I've seen in the industry is that retrofitting code review to established development practices is extremely hard, so not doing it early can be very painful in the long run.
The myth that needs dispelling about code review is that it slows development down. This idea is wrong: the slowing effect is only true locally, on the act of changing the code, and it fails to recognise the time cost of letting bugs slip past the development process. Code review is good at catching certain types of bugs (and really good if you include automated testing in the review process - more on that later), and at making sure code remains maintainable and scalable. When you know no-one is looking at your code, it's much easier to commit sins in the name of progress! Remember, the cost of fixing a bug goes up exponentially the further down the production process it gets before it's found.
d) Control build outputs
Again, this requirement of SCM is right there in IEC 62304, so you'll have to do this eventually if you're producing SaMD.
The key is to not just put the built software on someone's hard drive, on a disk (some younger readers may not know what I mean!), or in some shared drive, but instead to store outputs in an appropriate industry standard package management repository, such as:
Linux packages (RPM, DEB or similar) in a standard Linux repository,
Python packages in a PyPI repository, or
Docker containers in a Docker repository
There is a step to consider here: what format should the build output take? I'm encouraging the use of a package-type output, rather than loose components that require a further complex process to install and use in the fully integrated system.
Again, services such as GitLab and Bitbucket can help here, as they provide many of the most popular repository types to push your build outputs to. Failing that, it's easy to set up these repositories on a server in your facility, or on a cloud server hosted by your favourite cloud provider.
The obvious benefit of a standard place to keep software items in a packaged format is that everyone always knows they are getting the latest version without having to hunt around, and the installation and integration process is repeatable and fast. It's also quite easy to simply "mirror" and freeze a package repository for production once you're ready for release, and be confident that it contains everything you need.
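The "mirror and freeze" idea can be sketched with a local directory standing in for the package repository. The paths and package name here are invented; a real setup would use a PyPI, APT or Docker registry as discussed above.

```shell
#!/bin/sh
# Sketch: a local directory standing in for a package repository, showing
# versioned package outputs and a frozen "release mirror".
# The layout and the package name are invented for illustration.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p repo                        # stand-in for the package repository

# The build produces a versioned package file, not a loose binary on a share.
version=1.2.1
echo "package payload" > "myapp-$version.tar.gz"

# "Push" the output to the repository, where everyone fetches from.
cp "myapp-$version.tar.gz" repo/

# At release time, freeze a mirror of exactly what was built and tested.
cp -R repo release-mirror
ls release-mirror
```

Because every artifact carries its version in its name and lives in one known place, the frozen mirror is a complete, reproducible record of a release.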
e) Keep versioning and change information
These are points of fine detail, but are still going to be useful in service of complying with the SaMD standard.
Versioning information is not just a nice to have, but is required by the MDD and MDR if you look carefully: device labelling guidance says that the software version should be used as device lot/batch number, and so it is part of the UDI and needs to go on the label. (And this is true in the US as well as the UK and EU.)
GitFlow includes the idea of software versions, and it's generally a good idea to use "semantic versioning" (e.g. "1.2.1"), as it's industry standard, and can support important regulatory and QMS production concepts, such as minor fixes, small changes and "substantial" changes (I'd suggest that a substantial change is a major version number change).
Also, encourage developers to keep a changelog, as this promotes writing down the changes made in a way that's very local to the code, and can happen before they've finished their work. The longer you leave it to ask a group of engineers what they have changed, the less likely it is you'll get the truth!
If you're using good code review practices, you can build in checks for "changelog updated?" and "version number updated?" in your merge request checklist.
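As a worked example of the semantic versioning convention, here is a small shell sketch; the `bump` helper is invented for illustration, mapping the regulatory change categories onto version components as suggested above.

```shell
#!/bin/sh
# Bump a "major.minor.patch" semantic version string.
# The bump() helper is invented for illustration.
bump() {  # usage: bump <version> major|minor|patch
  v=$1
  major=${v%%.*}        # text before the first dot
  rest=${v#*.}
  minor=${rest%%.*}     # text between the dots
  patch=${rest#*.}      # text after the second dot
  case $2 in
    major) echo "$((major + 1)).0.0" ;;        # "substantial" change
    minor) echo "$major.$((minor + 1)).0" ;;   # small change
    patch) echo "$major.$minor.$((patch + 1))" ;;  # minor fix
  esac
}

bump 1.2.1 patch   # → 1.2.2 (minor fix)
bump 1.2.1 minor   # → 1.3.0 (small change)
bump 1.2.1 major   # → 2.0.0 (substantial change)
```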
2. Get a "one-click build" and "continuous integration" (CI) working early on
As a concept, a "one-click build" is the ability for someone to be able to:
get the source code
type one command, and
the built software item will then pop out, with no other intervention needed.
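The three steps above might look something like this in practice. This is a minimal sketch: the project is a trivial invented stand-in for a real compile-and-package process.

```shell
#!/bin/sh
# Sketch of a "one-click build": one command takes you from source to a
# versioned, packaged output with no manual steps.
# The project here is an invented stand-in for a real build.
set -e
src=$(mktemp -d)
cd "$src"

# Stand-in for "get the source code": one script plus a version file.
echo "1.0.0" > VERSION
printf 'echo hello from myapp\n' > myapp.sh

# Everything below would normally live in a single build script or Makefile,
# so "type one command" is all a developer ever needs to do.
version=$(cat VERSION)
mkdir -p out
tar -czf "out/myapp-$version.tar.gz" myapp.sh
ls out
```

The key design choice is that the version comes from the source tree itself, so the packaged output is always labelled consistently with what was built.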
The "CI" part is integration of this tooling into source code management, such that this happens automatically, every time someone makes a change to the code and commits it to the repository. There are CI pipeline tools offered by the various source code management companies already mentioned, and these can be surprisingly easy to get going once your one-click build works, and offer the opportunity to build the software and then push it to the software repository in one step.
Without one-click builds, the build process can be unrepeatable and take time away from your development team, and an automated pipeline is the best way to demonstrate a repeatable process.
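To make the pipeline idea concrete, here is a sketch of the stages as a plain shell script, with invented stand-in commands. The point is the gating: each stage must succeed before the next runs, so a failing test stops the output ever being published.

```shell
#!/bin/sh
# Sketch of a CI pipeline as a plain script: build, test, publish.
# Stage bodies are invented stand-ins for real build tooling.
set -e
work=$(mktemp -d)
cd "$work"

echo "stage: build"
printf 'echo hello\n' > myapp.sh
tar -czf myapp-1.0.0.tar.gz myapp.sh           # the one-click build

echo "stage: test"
[ "$(sh myapp.sh)" = "hello" ]                  # automated test gates the rest

echo "stage: publish"
mkdir -p repo
cp myapp-1.0.0.tar.gz repo/                     # push to the package repository
echo "pipeline passed"
```

The hosted CI tools express the same idea declaratively, triggering the stages automatically on every commit.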
For even better results, take things to the next level with "continuous delivery" (CD): include automated testing within the automated pipeline, and the process gets even faster, giving developers feedback on whether they have caused a bug before they have finished and moved on to the next piece of work. So, the next thing to talk about is to...
3. Enable and use automated testing
Building software that can be automatically tested is the best way to ensure that a lot of testing happens. If you build something that can't be tested automatically, it is very hard to find the time to make it testable later, as the manual testing effort increases constantly, leaving less and less time to get this done.
There is of course an art to choosing what you should automate and what you can't, but if you look at the requirements of IEC 62304, and build your software in a testable way, then you'll find that you can automate:
some unit verification,
integration testing, and
system testing.
The one thing that automated testing is of course not particularly suited to is validation.
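As a minimal illustration of an automated test that a pipeline (or a merge request check) could run, here is an invented `clamp` function with assertions; any failing assertion makes the script, and hence the pipeline, fail.

```shell
#!/bin/sh
# Sketch of a tiny automated test suite. The clamp() function and its tests
# are invented for illustration of the technique.
set -e

# Unit under test: clamp an integer into a [min, max] range.
clamp() {  # usage: clamp <value> <min> <max>
  if [ "$1" -lt "$2" ]; then echo "$2"
  elif [ "$1" -gt "$3" ]; then echo "$3"
  else echo "$1"
  fi
}

# Automated checks: any failure exits non-zero and fails the pipeline.
[ "$(clamp 5 0 10)" = "5" ]
[ "$(clamp -3 0 10)" = "0" ]
[ "$(clamp 42 0 10)" = "10" ]
echo "all tests passed"
```

It is the non-zero exit code on failure that lets the computer, rather than a person, say "no, you made a mistake".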
[NOTE: If you're interested in reading more about really good testing strategies, I'd recommend Agile Testing by Lisa Crispin and Janet Gregory]
The watchword here is scalability: Can the system grow without the manual effort becoming a burden? Can the release frequency be increased without it being impossible for a test team to keep up? Will we need to hire 20 testers just to scale the business?
There are many advantages of automated testing:
It encourages developers to think about how software will be tested and to write tests themselves
It reduces the cycle time for a developer to discover they have made a mistake
Integrated into the GitFlow and review process, it means you can prevent a lot of bugs from ever reaching the production codebase
Making a computer tell people "no, you made a mistake" rather than a person helps maintain healthier team relationships, and can prevent an "us and them" relationship between development and QA
If you have automated testing built into the CI/CD system, then you can move at surprising pace, with little or no churn caused by mistakes leaking out beyond the end of the development process.
[If you want to get really advanced, think about how much quality assurance activity you can hang on the CI/CD pipeline: deployment to test systems? load testing? static analysis? installation and upgrade testing? rollback testing? backup and restore testing?]
I've talked about a small number of approaches here that I consider a baseline for any development activity. Yes, it's possible to proceed with only some of them, or even none, but scaling up will be extremely difficult, and SaMD standards compliance may be a struggle to achieve.
More than that though, I advocate that including these basic foundational practices will speed up any software production, whether regulated or not. That is because these practices help to speed up the whole process of delivering great software to users, rather than just the process of writing code, because they build in and assure quality, rather than deferring it to quality control, which can result in re-work and churn.
There is a lot I haven't covered here that is needed to cover all of the requirements of IEC 62304 (not to mention all of those other software standards), and there are also other great foundational practices you can adopt, such as SOLID, scalable, flexible micro-services architectures, privacy and security by design, defence-in-depth, coding standards, static analysis and so on. Please, go ahead and read up on these things and form your view of how to build up the foundations of good SaMD development processes, and perhaps I'll write more on those in the future.