Document Management

Today many companies are going digital, striving to go paperless and reinventing how individuals find information, record data and make decisions. When undertaking these efforts it is often good to go back to basics and make sure we are all on the same page before we proceed.

When talking about document management we are really discussing three major types or functions:

  • Functional Documents provide instructions so people can perform tasks and make decisions safely, effectively, compliantly and consistently. This usually includes things like procedures, process instructions, protocols, methods and specifications. Many of these require some sort of training decision. Functional documents should involve a process to ensure they stay up to date with current practices and relevant standards (periodic review).
  • Records provide evidence that actions were taken and decisions were made in keeping with procedures. This includes batch manufacturing records, logbooks and laboratory data sheets and notebooks. Records are a popular target for electronic alternatives.
  • Reports provide specific information on a particular topic in a formal, standardized way. Reports may include data summaries, findings and actions to be taken.

Oftentimes these types are all engaged in a lifecycle. An SOP directs us to write a protocol (two documents), we execute the protocol (a record), and then we write a report. This fluidity allows us to combine the types.

Throughout these types we need to apply good change management and data integrity practices (ALCOA).

All of these types follow a very similar path for their lifecycle.

[Figure: document lifecycle]

Everything we do is risk based. Some questions to ask when developing and improving this system include:

  • What are the risks of writing procedures at a low level of detail versus a high level of detail (how much variability do we allow individuals performing a task)? Both have advantages and disadvantages, and it is not a one-size-fits-all approach.
  • What are the risks in verifying (witnessing) non-critical tasks? How do we identify critical tasks?
  • What are the risks in not having evidence that a procedure-defined task was completed?
  • What are the risks in relation to archiving and documentation retrieval?

There is very little difference between paper documents and records and electronic documents and records as far as the above is concerned. Electronic records require the same attention to generation, distribution and maintenance; you are just looking at a different set of safeguards and activities to make it happen.

Knowledge management as continuous improvement

An effective change management system includes active knowledge management: leveraging existing process and product knowledge; capturing new knowledge gained during implementation of the change; and transferring that knowledge in appropriate ways to all stakeholders.

Any quality system (any system, really) has among its major functions transforming data into information, acquiring and creating knowledge, and disseminating and using information and knowledge. A major part of process improvement is taking implicit knowledge and making it explicit. This is one of the reasons we spend so much time developing standardized work.

And yet I, like many quality professionals, have found myself sitting at a table or standing in front of a visual management board, and have some type of leader ask why we spend so much time training when we should be able to hire anyone with an appropriate level of experience and have them just do the job.

Systems are made up of four things – process, organization, people and technology. When folks think of change management, or root cause analysis, or similar quality processes and tools they tend to think the system is only about the activity we are engaging in. But change management (or root cause analysis or data integrity) is not that simple.

This is really two-fold. A person assessing a change does not just need to be knowledgeable about how the change control process works; they need to be able to analyze the change against each and every process within the systems they represent, to understand how moving the levers and adjusting the buffers within this change influences everything they do, even if that’s only to make a concrete and definitive no-impact statement.

So when we improve our change management system (or similar quality processes), we are improving both how we manage change and how folks apply that thinking to all their other activities.

In less mature systems we end up having a lot of tacit knowledge in one person. You have that great SME who understands master data in the ERP and how changes impact it. One way to transfer that tacit knowledge to another person is a lot of socialization. It is experiential, active and a “living thing,” involving capturing knowledge by spending a lot of time with that person and having shared experiences, which results in acquired skills and common mental models.

For example, my master-data guru needs to be involved in each and every change that might possibly involve the ERP or master data. I might have reached the point where the procedure has large sections giving detailed instructions on when to involve the master-data guru in a change. The master-data guru spends a lot of time justifying no impact. Otherwise I might have change after change forget to update master data, which leads to deviations.

At this stage of maturity I’ve recognized I need a master data guru. I’ve identified the individual(s). Depending on maturity I either involve the master-data guru on every change or I’ve advanced enough that I have a decision tool that drives changes to the master-data guru.

So now either the master-data guru is becoming a pain point or we’ve had one too many changes that led to deviations because we failed to change master data in the ERP correctly. So we enter a process improvement cycle.

What we need to do here is make the master-data guru’s tacit knowledge explicit; we need to externalize this knowledge. We start building the tools that better define what never has impact, what always has impact, and what might have impact or be really unique. When a change has no impact, the change owner is able to note that and move on (no master-data guru involvement necessary). When it has definite impact the change owner is able to identify the actions required, knowing exactly what procedures to follow and how to execute those within a change. We still have a set of changes that will trigger the master-data guru’s involvement, but those are smaller in number and more complex in scope.
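Just to make this concrete, here is a minimal sketch (in Python, purely hypothetical) of what an externalized decision aid might look like. The change categories and rules are placeholders standing in for whatever the guru actually knows; the point is that “never has impact,” “always has impact,” and “ask the guru” become explicit, reviewable rules rather than one person’s judgment.

```python
# Hypothetical sketch of a master-data impact decision aid.
# Categories and rules are placeholders for the SME's externalized knowledge.

NO_IMPACT_CATEGORIES = {"document_wording", "training_curriculum", "room_layout"}
ALWAYS_IMPACT_CATEGORIES = {"bill_of_materials", "item_master", "supplier_change"}

def classify_master_data_impact(change_category: str) -> str:
    """Return 'no impact', 'defined impact', or 'refer to SME' for a change."""
    if change_category in NO_IMPACT_CATEGORIES:
        # Change owner documents no impact and moves on; no guru involvement.
        return "no impact"
    if change_category in ALWAYS_IMPACT_CATEGORIES:
        # Change owner follows the standard master-data update procedure.
        return "defined impact"
    # Anything the explicit rules do not cover still routes to the guru.
    return "refer to SME"

for category in ["document_wording", "bill_of_materials", "new_erp_module"]:
    print(category, "->", classify_master_data_impact(category))
```

In practice this lives in a decision tree, a form, or the change management software itself; the code is just the shortest way to show the shape of the externalized knowledge.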

The steps we took to get here also allow us to more easily develop and train master-data gurus. Maybe we have a skills matrix and it is on development plans. Our training program now has the tools to allow internalization, the process of understanding and absorbing explicit knowledge into tacit knowledge held by the individual.

At this point I have the tools for my average change owner to know when to change master data and how to do it (this might not involve them actually doing master data management; it is really knowing when to execute, and the outputs from and inputs back into the change management system) AND I have better mechanisms for producing master data experts. That’s the beauty here: the knowledge level required to execute change management properly is usually an expert-level competency. By making that knowledge explicit I am serving multiple processes and interrelated systems.

[Figure: knowledge management as a circular process, six stages]

To break down the process:

  1. Capture all the knowledge. Interview the SME(s), evaluate the use of the system, and gather together all the procedures, training, user manuals and PowerPoint slides.
  2. Assess what is valuable, what needs to be transferred
  3. Share this knowledge, make sure others can understand it
  4. Contextualize into standard tools (job aids, user guides, checklists, templates, etc.)
  5. Apply the knowledge. Train others and also update your system processes (and maybe technology) to make sure the knowledge is used.
  6. Update – make sure the knowledge is sustained and regularly updated.

Change management has lots of inputs and outputs, as do data integrity and many other quality systems. Understanding these interrelationships, and ensuring knowledge is appropriately captured and utilized, is a big way we improve and thrive.

Computer verification and validation – or what do I actually find myself worrying about every day

[Image: xkcd “Voting Software”, https://xkcd.com/2030/]

This xkcd comic basically sums up my recent life. WFI system? Never seems to be a problem. Bioreactors? Work like clockwork. Cell growth? We’ve got that covered. The list goes on. And then we get to pure software systems, and I spend all my time and effort on them. I wish it was just my company, but let’s be frank, this stuff is harder than it should be, and don’t trust a single software company or consultant who wants to tell you otherwise.

I am both terrified and ecstatic as everything moves to the cloud. Terrified because these are the same people who can’t get stuff like time clocks right, ecstatic because maybe when we all have the exact same problem we will see some changes (misery loves company, this is why we all go to software conferences).

So, confessional moment done, let us turn to a few elements of a competent computer system validation (CSV) program.

Remember your system is more than software and hardware

Any system is made up of Process, Technology, People and Organization. All four need to be evaluated, planned for, and tested every step of the way. Too many computer systems fall flat because they focus on technology and maybe a little process.

Utilize a risk-based approach regarding the impact of a computer system on product quality, patient and consumer safety, and related data integrity.

Risk assessments allow for a detailed, analytical review of potential risks posed by a process or system. Not every computer system has the same expectations on its data. Health authorities recognize that, and accept a risk based approach. This is reflected across the various guidances and regulations, best practices (GAMP 5, for instance) and the ISOs (14971 is a great example).

Some of the benefits of taking this risk based approach include:

  • Help to focus verification and validation efforts, which will allow you to better focus resources on the higher-risk items
  • Determine which aspects of the system, and/or the business process around the system, require risk mitigation controls to reduce risk related to patient safety, product quality, data integrity, or business risk
  • Build a better understanding of systems and processes from a quality and risk-based perspective
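To make the prioritization concrete, here is a minimal sketch of one way to score system functions and bucket verification effort. The 1–3 scales, the multiplication, and the thresholds are illustrative assumptions of mine, not something lifted from GAMP 5 or ISO 14971; use whatever scheme your own risk management procedure defines.

```python
# Illustrative risk scoring for computer system functions.
# The 1-3 scales and thresholds are assumptions, not taken from GAMP 5 or ISO 14971.

def risk_priority(severity: int, likelihood: int, detectability: int) -> str:
    """Combine 1-3 ratings into a low/medium/high priority for V&V effort."""
    # Convention assumed here: a higher detectability rating means harder to detect.
    score = severity * likelihood * detectability
    if score >= 18:
        return "high"    # focus validation effort and mitigation controls here
    if score >= 6:
        return "medium"
    return "low"

functions = {
    "electronic signature on batch release": (3, 2, 3),
    "audit trail review report": (2, 2, 2),
    "UI color scheme configuration": (1, 1, 1),
}

for name, ratings in functions.items():
    print(f"{name}: {risk_priority(*ratings)}")
```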

Don’t short the user requirements

A good user requirement process is critical. User requirements should include, among other things:

  • Technical Requirements: Should include things like capacity, performance, and hardware requirements.
  • System Functions: Should include things like calculations, logical security, audit trails, use of electronic signature.
  • Data: Should describe the data handling, definition of electronic records, required fields.
  • Environment: Should describe the physical conditions that the system will be required to operate in.
  • Interface: What and how will this system interface with other systems?
  • Constraints: Discuss compatibility, maximum allowable periods for downtime, and user skill levels.
  • Lifecycle Requirements: Include mandatory design methods or special testing requirements.

Evaluate each of people, process, technology and organization.

These user requirements will be critical for performing a proper risk assessment. Said risk assessment is often iterative.
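One way to keep user requirements usable for that risk assessment, and traceable to later testing, is to capture them as structured records rather than free prose. The sketch below is illustrative only; the field names (id, category, gxp_impact, verified_by) are assumptions about what a URS template might track, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative structure for a user requirement; field names are assumptions,
# not a prescribed URS format.

@dataclass
class Requirement:
    id: str                  # e.g. "URS-FUNC-001"
    category: str            # Technical, System Functions, Data, Environment, ...
    text: str
    gxp_impact: bool         # feeds the risk assessment
    verified_by: List[str] = field(default_factory=list)  # test case IDs for traceability

urs = [
    Requirement("URS-FUNC-001", "System Functions",
                "The system shall apply electronic signatures to batch approval.",
                gxp_impact=True, verified_by=["OQ-014"]),
    Requirement("URS-ENV-002", "Environment",
                "Server room conditions shall stay within vendor-specified limits.",
                gxp_impact=False),
]

# Simple completeness check: every GxP-impacting requirement should trace to a test.
untested = [r.id for r in urs if r.gxp_impact and not r.verified_by]
print("GxP requirements without test coverage:", untested)
```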

Build and test your system to mitigate risk

  • Eliminate risk through process or system redesign
  • Reduce risk by reducing the probability of a failure occurring (redundant design, more reliable solution)
  • Reduce risk by increasing the in-process detectability of a failure
  • Reduce risk by establishing downstream checks or error traps (e.g., fail-safe, or controlled fail state)
  • Increased rigor of verification testing may reduce the likelihood by providing new information to allow for a better assessment

After performing verification and validation activities, return to your risk assessment.

Apply a lifecycle approach once live

  • Apply proper change management
  • Perform periodic reviews of the system. This should include: current range of functionality, access and training, process robustness (do the current operating procedures provide the desired outcome), incident and deviation review, change history (including upgrades), performance, reliability, security and a general review of the current verified/validated state.
  • Ensure the risk assessment is returned to. On a periodic basis, refresh it based on new knowledge gained from the periodic review and other activities.

Do not separate any of this from your project management and development methodology

Too many times I’ve seen the hot new development lifecycle treat all of this as an afterthought to be done when the software is complete. That approach is expensive, and oh so frustrating.