Why ‘First-Time Right’ is a Dangerous Myth in Continuous Manufacturing

In manufacturing circles, “First-Time Right” (FTR) has become something of a sacred cow: a philosophy so universally accepted that questioning it feels almost heretical. Yet as continuous manufacturing processes increasingly replace traditional batch production, we need to critically examine whether this cherished doctrine serves us well or creates dangerous blind spots in our quality assurance frameworks.

The Seductive Promise of First-Time Right

Let’s start by acknowledging the compelling appeal of FTR. As commonly defined, First-Time Right is both a manufacturing principle and a KPI that denotes the percentage of end-products leaving production without quality defects. The concept promises a manufacturing utopia: zero waste, minimal costs, maximum efficiency, and delighted customers receiving perfect products every time.

The math seems straightforward. If you produce 1,000 units and 920 are defect-free, your FTR is 92%. Continuous improvement efforts should steadily drive that percentage upward, reducing the resources wasted on imperfect units.
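
That calculation can be made concrete with a short sketch (the function name and the error check are my own, not an established convention):

```python
def first_time_right(units_produced: int, defect_free: int) -> float:
    """Return the First-Time Right rate as a percentage."""
    if units_produced <= 0:
        raise ValueError("units_produced must be positive")
    return 100.0 * defect_free / units_produced

# 920 defect-free units out of 1,000 produced
print(first_time_right(1000, 920))  # 92.0
```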

This principle finds its intellectual foundation in Six Sigma methodology, which lends it an air of scientific inevitability. Yet even Six Sigma acknowledges that perfection remains elusive. This subtle but crucial nuance often gets lost when organizations embrace FTR as an absolute expectation rather than an aspiration.

First-Time Right in biologics drug substance manufacturing refers to the principle and performance metric of producing a biological drug substance that meets all predefined quality attributes and regulatory requirements on the first attempt, without the need for rework, reprocessing, or batch rejection. In this context, FTR emphasizes executing each step of the complex, multi-stage biologics manufacturing process correctly from the outset: starting with cell line development, through upstream (cell culture/fermentation) and downstream (purification, formulation) operations, to the final drug substance release.

Achieving FTR is especially challenging in biologics because these products are made from living systems and are highly sensitive to variations in raw materials, process parameters, and environmental conditions. Even minor deviations can lead to significant quality issues such as contamination, loss of potency, or batch failure, often requiring the entire batch to be discarded.

In biologics manufacturing, FTR is not just about minimizing waste and cost; it is critical for patient safety, regulatory compliance, and maintaining supply reliability. However, due to the inherent variability and complexity of biologics, FTR is best viewed as a continuous improvement goal rather than an absolute expectation. The focus is on designing and controlling processes to consistently deliver drug substances that meet all critical quality attributes, recognizing that, despite best efforts, some level of process variation and deviation is inevitable in biologics production.

The Unique Complexities of Continuous Manufacturing

Traditional batch processing creates natural boundaries: discrete points where production pauses, quality can be assessed, and decisions about proceeding can be made. In contrast, continuous manufacturing operates without these convenient checkpoints, as raw materials are continuously fed into the manufacturing system, and finished products are continuously extracted, without interruption over the life of the production run.

This fundamental difference requires a complete rethinking of quality assurance approaches. In continuous environments:

  • Quality must be monitored and controlled in real-time, without stopping production
  • Deviations must be detected and addressed while the process continues running
  • The interconnected nature of production steps means issues can propagate rapidly through the system
  • Traceability becomes vastly more complex
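
As a minimal illustration of real-time monitoring, the sketch below flags readings outside three-sigma limits around a baseline. The class, the limits, and the pH-style values are hypothetical assumptions for illustration, not a validated control strategy:

```python
class ControlMonitor:
    """Flag readings outside +/- 3 sigma of a known baseline (illustrative only)."""

    def __init__(self, baseline_mean: float, baseline_sigma: float):
        self.mean = baseline_mean
        self.sigma = baseline_sigma

    def check(self, value: float) -> bool:
        """Return True if the reading counts as a deviation."""
        return abs(value - self.mean) > 3 * self.sigma

# Hypothetical example: monitoring a pH setpoint of 7.0 with sigma 0.05
monitor = ControlMonitor(baseline_mean=7.0, baseline_sigma=0.05)
for reading in [7.02, 6.98, 7.21]:
    if monitor.check(reading):
        print(f"deviation detected: {reading}")
```

In a real continuous process the baseline itself would drift and the limits would come from process characterization, but the point stands: detection must run alongside production, not after it.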

Regulatory agencies recognize these unique challenges, acknowledging that understanding and managing risks is central to any decision to greenlight continuous manufacturing in a production-ready environment. When manufacturing processes never stop, quality assurance cannot rely on the same methodologies that worked for discrete batches.

The Dangerous Complacency of Perfect-First-Time Thinking

The most insidious danger of treating FTR as an achievable absolute is the complacency it breeds. When leadership becomes fixated on achieving perfect FTR scores, several dangerous patterns emerge:

Overconfidence in Automation

While automation can significantly improve quality, it is important to recognize the irreplaceable value of human oversight. Automated systems, no matter how advanced, are ultimately limited by their programming, design, and maintenance. Human operators bring critical thinking, intuition, and the ability to spot subtle anomalies that machines may overlook. A vigilant human presence can catch emerging defects or process deviations before they escalate, providing a layer of judgment and adaptability that automation alone cannot replicate. Relying solely on automation creates a dangerous blind spot: one where the absence of human insight can allow issues to go undetected until they become major problems. True quality excellence comes from the synergy of advanced technology and engaged, knowledgeable people working together.

Underinvestment in Deviation Management

If perfection is expected, why invest in systems to handle imperfections? Yet robust deviation management (the processes used to identify, document, investigate, and correct deviations) becomes even more critical in continuous environments where problems can cascade rapidly. Organizations pursuing FTR often underinvest in the very systems that would help them identify and address the inevitable deviations.

False Sense of Process Robustness

Process robustness refers to the ability of a manufacturing process to tolerate the variability of raw materials, process equipment, operating conditions, environmental conditions, and human factors. An obsession with FTR can mask underlying fragility in processes that appear to be performing well under normal conditions. When we pretend our processes are infallible, we stop asking critical questions about their resilience under stress.

Quality Culture Deterioration

When FTR becomes dogma, teams may become reluctant to report or escalate potential issues, fearing they’ll be seen as failures. This creates a culture of silence around deviations, precisely the opposite of what’s needed for effective quality management in continuous manufacturing. When perfection is the only acceptable outcome, people hide imperfections rather than address them.

Magical Thinking in Quality Management

The belief that we can eliminate all errors in complex manufacturing processes amounts to what organizational psychologists call “magical thinking”: the delusional belief that one can do the impossible. In manufacturing, this often manifests as pretending that doing more tasks with fewer resources will not hurt work quality.

This is a pattern I’ve observed repeatedly in my investigations of quality failures. When leadership subscribes to the myth that perfection is not just desirable but achievable, they create the conditions for quality disasters. Teams stop preparing for how to handle deviations and start pretending deviations won’t occur.

The irony is that this approach actually undermines the very goal of FTR. By acknowledging the possibility of failure and building systems to detect and learn from it quickly, we actually increase the likelihood of getting things right.

Building a Healthier Quality Culture for Continuous Manufacturing

Rather than chasing the mirage of perfect FTR, organizations should focus on creating systems and cultures that:

  1. Detect deviations rapidly: Continuous monitoring through advanced process control systems is essential for regulating critical parameters throughout the production process. The question isn’t whether deviations will occur but how quickly you’ll know about them.
  2. Investigate transparently: When issues occur, the focus should be on understanding root causes rather than assigning blame. The culture must prioritize learning over blame.
  3. Implement robust corrective actions: Each deviation should be thoroughly documented, including when and where it occurred, who identified it, a detailed description of the nonconformance, the initial actions taken, the results of the investigation into the cause, the actions taken to correct and prevent recurrence, and a final evaluation of the effectiveness of those actions.
  4. Learn systematically: Each deviation represents a valuable opportunity to strengthen processes and prevent similar issues in the future. The organization that learns fastest wins, not the one that pretends to be perfect.
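
One way to sketch the documentation fields from step 3 is a simple record structure. The class and field names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DeviationRecord:
    """Minimal deviation record mirroring the fields listed in step 3 (illustrative)."""
    occurred_at: datetime
    location: str
    identified_by: str
    description: str
    initial_actions: str
    root_cause: str = ""           # filled in after the investigation
    corrective_actions: str = ""   # actions to correct and prevent recurrence
    effectiveness_check: str = ""  # final evaluation of those actions

# Hypothetical entry captured at detection time; later fields stay empty until investigated
record = DeviationRecord(
    occurred_at=datetime(2024, 3, 1, 14, 30),
    location="Line 2, purification skid",
    identified_by="operator on shift",
    description="Conductivity reading outside alert limit",
    initial_actions="Diverted flow, notified quality",
)
```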

Breaking the Groupthink Cycle

The FTR myth thrives in environments characterized by groupthink, where challenging the prevailing wisdom is discouraged. When leaders obsess over FTR metrics while punishing those who report deviations, they create the perfect conditions for quality disasters.

This connects to a theme I’ve explored repeatedly on this blog: the dangers of losing institutional memory and critical thinking in quality organizations. When we forget that imperfection is inevitable, we stop building the systems and cultures needed to manage it effectively.

Embracing Humility, Vigilance, and Continuous Learning

True quality excellence comes not from pretending that errors don’t occur, but from embracing a more nuanced reality:

  • Perfection is a worthy aspiration but an impossible standard
  • Systems must be designed not just to prevent errors but to detect and address them
  • A healthy quality culture prizes transparency and learning over the appearance of perfection
  • Continuous improvement comes from acknowledging and understanding imperfections, not denying them

The path forward requires humility to recognize the limitations of our processes, vigilance to catch deviations quickly when they occur, and an unwavering commitment to learning and improving from each experience.

In the end, the most dangerous quality issues aren’t the ones we detect and address; they’re the ones our systems and culture allow to remain hidden because we’re too invested in the myth that they shouldn’t exist at all. First-Time Right should remain an aspiration that drives improvement, not a dogma that blinds us to reality.

From Perfect to Perpetually Improving

As continuous manufacturing becomes the norm rather than the exception, we need to move beyond the simplistic FTR myth toward a more sophisticated understanding of quality. Rather than asking, “Did we get it perfect the first time?” we should be asking:

  • How quickly do we detect when things go wrong?
  • How effectively do we contain and remediate issues?
  • How systematically do we learn from each deviation?
  • How resilient are our processes to the variations they inevitably encounter?
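
The first two questions lend themselves to simple metrics. Here is a sketch of mean time-to-detect and mean time-to-remediate, assuming each deviation carries occurred/detected/remediated timestamps; the data and helper are hypothetical:

```python
from datetime import datetime

# (occurred, detected, remediated) timestamps for two hypothetical deviations
deviations = [
    (datetime(2024, 1, 5, 8, 0), datetime(2024, 1, 5, 8, 20), datetime(2024, 1, 5, 11, 0)),
    (datetime(2024, 1, 9, 14, 0), datetime(2024, 1, 9, 14, 10), datetime(2024, 1, 9, 16, 0)),
]

def mean_minutes(pairs):
    """Average the (start, end) gaps in minutes."""
    deltas = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes((occ, det) for occ, det, _ in deviations)   # detection lag
mttr = mean_minutes((det, rem) for _, det, rem in deviations)   # remediation lag
print(f"mean time to detect: {mttd:.0f} min, mean time to remediate: {mttr:.0f} min")
```

Tracking these trends over time says far more about quality-system health than a single FTR percentage.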

These questions acknowledge the reality of manufacturing-that imperfection is inevitable-while focusing our efforts on what truly matters: building systems and cultures capable of detecting, addressing, and learning from deviations to drive continuous improvement.

The companies that thrive in the continuous manufacturing future won’t be those with the most impressive FTR metrics on paper. They’ll be those with the humility to acknowledge imperfection, the systems to detect and address it quickly, and the learning cultures that turn each deviation into an opportunity for improvement.

Leveraging Learning Logs to Foster a Learning Culture for CQV Maturity

Achieving maturity in commissioning, qualification, and validation (CQV) processes is vital for ensuring regulatory compliance, operational excellence, and product quality. However, advancing maturity requires more than adherence to protocols; it demands a learning culture that encourages reflection, adaptation, and innovation. Learning logs—structured tools for capturing experiences and insights—can play a transformative role in this journey. By introducing learning logs into CQV workflows, organizations can bridge the gap between compliance-driven processes and continuous improvement.


What Are Learning Logs?

A learning log is a reflective tool used to document key events, challenges, insights, and lessons learned during a specific activity or process. Unlike traditional record-keeping methods that focus on compliance or task completion, learning logs emphasize understanding and growth. They allow individuals or teams to capture their experiences in real time and revisit them later to extract deeper meaning. For example, a learning log might include the date of an event, the situation encountered, results achieved, insights gained, and next steps. Over time, these entries provide a rich repository of knowledge that can be leveraged for better decision-making.
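
A minimal sketch of such an entry, using the fields just described (the class and field names are my own; a spreadsheet with the same columns would work equally well):

```python
from dataclasses import dataclass

@dataclass
class LearningLogEntry:
    """One reflective entry; fields mirror those described above (illustrative)."""
    date: str
    situation: str
    results: str
    insights: str
    next_steps: str

log: list[LearningLogEntry] = []
log.append(LearningLogEntry(
    date="2024-02-12",
    situation="Unexpected alarm during skid commissioning",
    results="Traced to a mis-scaled transmitter range",
    insights="Range checks belong in the pre-commissioning checklist",
    next_steps="Propose checklist update at the next learning conversation",
))

# Revisiting entries later is where the value lies, e.g. filtering by theme
commissioning_lessons = [e for e in log if "commissioning" in e.situation.lower()]
```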

The structure of a learning log can vary depending on the needs of the team or organization. Some may prefer simple spreadsheets to track entries by project or event type, while others might use visual tools like Miro boards for creative pattern recognition. Regardless of format, the key is to keep logs practical and focused on capturing meaningful “aha” moments rather than exhaustive details. Pairing learning logs with periodic team discussions—known as learning conversations—can amplify their impact by encouraging reflection and collaboration.

Learning logs are particularly effective because they combine assessment with reflection. They help individuals articulate what they’ve learned, identify areas for improvement, and plan future actions. This process fosters critical thinking and embeds continuous learning into daily workflows. In essence, learning logs are not just tools for documentation; they are catalysts for organizational growth.


Applying Learning Logs to CQV

In pharmaceutical CQV processes—where precision and compliance are paramount—learning logs can serve as powerful instruments for driving maturity. These processes often involve complex activities such as equipment commissioning, installation and operational qualification (IQ/OQ), and product/process validation. Introducing learning logs into CQV workflows enables teams to capture insights that go beyond standard deviation reporting or audit trails.

During commissioning, for instance, engineers can use learning logs to document unexpected equipment behavior and the steps taken to resolve issues. These entries create a knowledge base that can inform future commissioning projects and reduce repeat errors. Similarly, in qualification phases, teams can reflect on deviations from expected outcomes and adjustments made to protocols. Validation activities benefit from logs that highlight inefficiencies or opportunities for optimization, ensuring long-term consistency in manufacturing processes.

By systematically capturing these reflections in learning logs, organizations can accelerate knowledge transfer across teams. Logs become living repositories of troubleshooting methods, risk scenarios, and process improvements that reduce redundancy in future projects. For example, if a team encounters calibration drift during equipment qualification and resolves it by updating SOPs, documenting this insight ensures that future teams can anticipate similar challenges.


Driving CQV Maturity Through Reflection

Learning logs also help close the loop between compliance-driven processes and innovation by emphasizing critical analysis. Reflective questions such as “What worked? What failed? What could we do differently?” uncover root causes of deviations that might otherwise remain unaddressed in traditional reporting systems. Logs can highlight overly complex steps in protocols or inefficiencies in workflows, enabling teams to streamline operations.

Moreover, integrating learning logs into change control processes ensures that past insights inform future decisions. When modifying validated systems or introducing new equipment, reviewing previous log entries helps predict risks and avoid repeating mistakes. This proactive approach aligns with the principles of continuous improvement embedded in GMP practices.


Cultivating a Learning Culture

To fully realize the benefits of learning logs in CQV workflows, organizations must foster a culture of reflection and collaboration. Leaders play a crucial role by modeling the use of learning logs during team meetings or retrospectives. Encouraging open discussions about log entries creates psychological safety where employees feel comfortable sharing challenges and ideas for improvement.

Gamification can further enhance engagement with learning logs by rewarding teams for actionable insights that optimize CQV timelines or reduce deviations. Linking log-derived improvements to KPIs—such as reductions in repeat deviations or faster protocol execution—demonstrates their tangible value to the organization.


The Future of CQV: Learning-Driven Excellence

As pharmaceutical manufacturing evolves with technologies like AI and digital twins, learning logs will become even more dynamic tools for driving CQV maturity. Machine learning algorithms could analyze log data to predict validation risks or identify recurring challenges across global sites. Real-time dashboards may visualize patterns from log entries to inform decision-making at scale.

By embedding learning logs into CQV workflows alongside compliance protocols, organizations can transform reactive processes into proactive systems of excellence. Teams don’t just meet regulatory requirements—they anticipate challenges, adapt seamlessly, and innovate continuously.

Next Step: Start small by introducing learning logs into one CQV process this month—perhaps equipment commissioning—and measure how insights shift team problem-solving approaches over time. Share your findings across departments to scale what works and build momentum toward maturity.

Building a Data-Driven Culture: Empowering Everyone for Success

Data-driven decision-making is an essential component for achieving organizational success. Simply adopting the latest technologies or bringing on board data scientists is not enough to foster a genuinely data-driven culture. Instead, it requires a comprehensive strategy that involves every level of the organization.

This holistic approach emphasizes the importance of empowering all employees—regardless of their role or technical expertise—to effectively utilize data in their daily tasks and decision-making processes. It involves providing training and resources that enhance data literacy, enabling individuals to understand and interpret data insights meaningfully.

Moreover, organizations should cultivate an environment that encourages curiosity and critical thinking around data. This might include promoting cross-departmental collaboration where teams can share insights and best practices regarding data use. Leadership plays a vital role in this transformation by modeling data-driven behaviors and championing a culture that values data as a critical asset. By prioritizing data accessibility and encouraging open dialogue about data analytics, organizations can truly empower their workforce to harness the potential of data, driving informed decisions that contribute to overall success and innovation.

The Three Pillars of Data Empowerment

To build a robust data-driven culture, leaders must focus on three key areas of readiness:

Data Readiness: The Foundation of Informed Decision-Making

Data readiness ensures that high-quality, relevant data is accessible to the right people at the right time. This involves:

  • Implementing robust data governance policies
  • Investing in data management platforms
  • Ensuring data quality and consistency
  • Providing secure and streamlined access to data

By establishing a strong foundation of data readiness, organizations can foster trust in their data and encourage its use across all levels of the company.

Analytical Readiness: Cultivating Data Literacy

Analytical readiness is a crucial component of building a data-driven culture. While access to data is essential, it’s only the first step in the journey. To truly harness the power of data, employees need to develop the skills and knowledge necessary to interpret and derive meaningful insights. Let’s delve deeper into the key aspects of analytical readiness:

Comprehensive Training on Data Analysis Tools

Organizations must invest in robust training programs that cover a wide range of data analysis tools and techniques. This training should be tailored to different skill levels and job functions, ensuring that everyone from entry-level employees to senior executives can effectively work with data.

  • Basic data literacy: Start with foundational courses that cover data types, basic statistical concepts, and data visualization principles.
  • Tool-specific training: Provide hands-on training for popular data analysis tools and the specialized business intelligence platforms your organization has adopted.
  • Advanced analytics: Offer more advanced courses on machine learning, predictive modeling, and data mining for those who require deeper analytical skills.

Developing Critical Thinking Skills for Data Interpretation

Raw data alone doesn’t provide value; it’s the interpretation that matters. Employees need to develop critical thinking skills to effectively analyze and draw meaningful conclusions from data.

  • Data context: Teach employees to consider the broader context in which data is collected and used, including potential biases and limitations.
  • Statistical reasoning: Enhance understanding of statistical concepts to help employees distinguish between correlation and causation, and to recognize the significance of findings.
  • Hypothesis testing: Encourage employees to formulate hypotheses and use data to test and refine their assumptions.
  • Scenario analysis: Train staff to consider multiple interpretations of data and explore various scenarios before drawing conclusions.

Encouraging a Culture of Curiosity and Continuous Learning

A data-driven culture thrives on curiosity and a commitment to ongoing learning. Organizations should foster an environment that encourages employees to explore data and continuously expand their analytical skills.

  • Data exploration time: Allocate dedicated time for employees to explore datasets relevant to their work, encouraging them to uncover new insights.
  • Learning resources: Provide access to online courses, webinars, and industry conferences to keep employees updated on the latest data analysis trends and techniques.
  • Internal knowledge sharing: Organize regular “lunch and learn” sessions or internal workshops where employees can share their data analysis experiences and insights.
  • Data challenges: Host internal competitions or hackathons that challenge employees to solve real business problems using data.

Fostering Cross-Functional Collaboration to Share Data Insights

Data-driven insights become more powerful when shared across different departments and teams. Encouraging cross-functional collaboration can lead to more comprehensive and innovative solutions.

  • Interdepartmental data projects: Initiate projects that require collaboration between different teams, combining diverse datasets and perspectives.
  • Data visualization dashboards: Implement shared dashboards that allow teams to view and interact with data from various departments.
  • Regular insight-sharing meetings: Schedule cross-functional meetings where teams can present their data findings and discuss potential implications for other areas of the business.
  • Data ambassadors: Designate data champions within each department to facilitate the sharing of insights and best practices across the organization.

By investing in these aspects of analytical readiness, organizations empower their employees to make data-informed decisions confidently and effectively. This not only improves the quality of decision-making but also fosters a culture of innovation and continuous improvement. As employees become more proficient in working with data, they’re better equipped to identify opportunities, solve complex problems, and drive the organization forward in an increasingly data-centric business landscape.

Infrastructure Readiness: Enabling Seamless Data Operations

To support a data-driven culture, organizations must have the right technological infrastructure in place. This includes:

  • Implementing scalable hardware solutions
  • Adopting user-friendly software for data analysis and visualization
  • Ensuring robust cybersecurity measures to protect sensitive data
  • Providing adequate computing power for complex data processing
  • Building a clear and implementable qualification methodology around data solutions

With the right infrastructure, employees can work with data efficiently and securely, regardless of their role or department.

The Path to a Data-Driven Culture

Building a data-driven culture is an ongoing process that requires commitment from leadership and active participation from all employees. Here are some key steps to consider:

  1. Lead by example: Executives should actively use data in their decision-making processes and communicate the importance of data-driven approaches.
  2. Democratize data access: Break down data silos and provide user-friendly tools that allow employees at all levels to access and analyze relevant data.
  3. Invest in training and education: Develop comprehensive data literacy programs that cater to different skill levels and job functions.
  4. Encourage experimentation: Create a safe environment where employees feel comfortable using data to test hypotheses and drive innovation.
  5. Celebrate data-driven successes: Recognize and reward individuals and teams who effectively use data to drive positive outcomes for the organization.

Conclusion

To build a truly data-driven culture, leaders must take everyone along on the journey. By focusing on data readiness, analytical readiness, and infrastructure readiness, organizations can empower their employees to harness the full potential of data. This holistic approach not only improves decision-making but also fosters innovation, drives efficiency, and ultimately leads to better business outcomes.

Remember, building a data-driven culture is not a one-time effort but a continuous process of improvement and adaptation. By consistently investing in these three areas of readiness, organizations can create a sustainable competitive advantage in today’s data-centric business landscape.

PDCA and OODA

PDCA (and its variants) is a tried-and-true model for process improvement. In the PDCA model, a plan is structured in four steps: P (plan), D (do), C (check), A (act). The intention is to create a structured cycle that allows the process to flow in accordance with the objectives to be achieved (P), execute what was planned (D), check whether the objectives were achieved, with emphasis on verifying what went right and what went wrong (C), and identify factors of success or failure to feed a new round of planning (A).

Conceptually, the organization becomes a fast-turning wheel, endlessly learning from mistakes and refining its processes in pursuit of strategic objectives and the maximum efficiency and effectiveness of the system.
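
The cycle can be sketched as a generic loop. The function names and the toy defect-rate scenario below are illustrative assumptions, not a prescribed implementation:

```python
def pdca_cycle(plan, do, check, act, state, max_cycles=5):
    """Cycle Plan-Do-Check-Act until check passes or cycles run out (illustrative)."""
    for _ in range(max_cycles):
        objective = plan(state)                 # P: structure the objective
        result = do(objective)                  # D: execute what was planned
        ok, lessons = check(objective, result)  # C: what went right / wrong
        state = act(state, lessons)             # A: feed findings into new planning
        if ok:
            break
    return state

# Toy run: halve a defect rate each cycle until it reaches 2%
final = pdca_cycle(
    plan=lambda s: s["defect_rate"] / 2,
    do=lambda target: target,                   # assume execution hits the target
    check=lambda target, result: (result <= 0.02, result),
    act=lambda s, lessons: {"defect_rate": lessons},
    state={"defect_rate": 0.08},
)
print(final)  # {'defect_rate': 0.02}
```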

The OODA Loop

The OODA loop, or cycle, was designed by John R. Boyd and consists of four phases: Observe, Orient, Decide, and Act (OODA).

  • Observe: Based on implicit guidance and control, observations are made regarding unfolding circumstances, outside information, and dynamic interaction with the environment (including the result of prior actions).
  • Orient: Observations from the prior stage are deconstructed into separate component pieces; then synthesized and analyzed in several contexts such as cultural traditions, genetic heritage, and previous experiences; and then combined to inform the next phase.
  • Decide: In this phase, hypotheses are evaluated, and a decision is made.
  • Act: Based on the decision from the prior stage, action is taken to achieve a desired effect or result.

While the PDCA loop improves a known system, making it more effective or efficient depending on the desired effect, the OODA loop strives to model a framework for situational awareness.

Boyd’s concentration on circumstances specific to military situations meant that for years the OODA loop did not receive widespread interest. Recent adaptations, however, have tried to expand it to address the needs of operating in volatile, uncertain, complex, and ambiguous (VUCA) situations. I especially like seeing it applied to resilience and business continuity.

Enhanced Decision-Making Speed and Agility

The OODA loop enables organizations to make faster, more informed decisions in rapidly changing environments. By continuously cycling through the observe-orient-decide-act process, organizations can respond more quickly to market crises, threats, and emerging opportunities.

Improved Situational Awareness

The observation and orientation phases help organizations maintain a comprehensive understanding of their operating environment. This enhanced situational awareness allows organizations to identify trends, threats, and opportunities more effectively.

Better Adaptability to Change

The iterative nature of the OODA loop promotes continuous learning and adaptation. This fosters a culture of flexibility and responsiveness, enabling organizations to adjust their strategies and operations as circumstances evolve.

Enhanced Crisis Management

In high-pressure situations or crises, the OODA loop provides a structured approach for rapid, effective decision-making. This can be invaluable for managing unexpected challenges or emergencies.

Improved Team Coordination and Communication

The OODA process encourages clear communication and coordination among team members as they move through each phase. This can lead to better team cohesion and more effective execution of strategies.

Data-Driven Culture

The OODA loop emphasizes the importance of observation and orientation based on current data. This promotes a data-driven culture where decisions are made based on real-time information rather than outdated assumptions.

Continuous Improvement

The cyclical nature of the OODA loop supports ongoing refinement of processes and strategies. Each iteration provides feedback that can be used to improve future observations, orientations, decisions, and actions.

Complementary Perspectives

PDCA is typically used for long-term, systematic improvement projects, while OODA is better suited for rapid decision-making in dynamic environments. Using both allows organizations to address both strategic and tactical needs.

Integration Points

1. Observation and Planning
   • OODA’s “Observe” step can feed into PDCA’s “Plan” phase by providing real-time situational awareness.
   • PDCA’s structured planning can enhance OODA’s orientation process.
2. Execution
   • PDCA’s “Do” phase can incorporate OODA loops for quick adjustments during implementation.
   • OODA’s “Act” step can trigger a new PDCA cycle for more comprehensive improvements.
3. Evaluation
   • PDCA’s “Check” phase can use OODA’s observation techniques for more thorough assessment.
   • OODA’s rapid decision-making can inform PDCA’s “Act” phase for faster course corrections.
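Integration point 2 can be sketched in code: a PDCA cycle whose “Do” phase runs fast OODA loops to make quick corrections against the plan’s goal. This is an illustrative toy, not a prescribed implementation; all function names and the numeric example are hypothetical.

```python
def run_pdca_with_ooda(plan, observe, orient, decide, act, check, loops=3):
    """A PDCA cycle whose Do phase embeds rapid OODA loops (illustrative sketch)."""
    target = plan()                          # Plan: set the improvement goal
    state = None
    for _ in range(loops):                   # Do: execute with fast OODA adjustments
        observation = observe(state)         # Observe current conditions
        gap = orient(observation, target)    # Orient the observation against the goal
        decision = decide(gap)               # Decide on a quick correction
        state = act(observation, decision)   # Act, producing a new state
    return check(state, target)              # Check: evaluate the outcome vs. the plan

# Hypothetical example: close a quality gap of 10 in corrections of at most 4.
done = run_pdca_with_ooda(
    plan=lambda: 10,                         # goal: reach a quality score of 10
    observe=lambda state: state or 0,        # read the current score (start at 0)
    orient=lambda obs, target: target - obs, # how far from the goal?
    decide=lambda gap: min(gap, 4),          # correct by at most 4 per loop
    act=lambda obs, step: obs + step,        # apply the correction
    check=lambda state, target: state >= target,
)
# → True: three OODA passes (4, then 4, then 2) close the gap within one PDCA cycle
```

The point of the sketch is the nesting: the slow, deliberate PDCA frame supplies the target and the final evaluation, while the fast inner loop handles tactical course corrections.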

Unlocking Hidden Potential: The Art of Assessing Team Capability

For managers, it is critical to understand and nurture the capabilities of team members. I spend a lot of time on this blog talking about capability and competence, frankly because they are elusive concepts, invisible to the naked eye. We can only perceive them through their manifestations – the tangible outputs and results produced by our team. This presents a unique challenge: how do we accurately gauge a team member’s highest level of capability?

The Evidence-Based Approach

The key to unraveling this mystery lies in evidence. We must adopt a systematic, iterative approach to testing and challenging our team members through carefully designed project work. This method allows us to gradually uncover the true extent of their competence.

Step 1: Initial Assessment

The journey begins with a quick assessment of the team member’s current applied capability. This involves examining the fruits of their labor – the tangible outcomes of their work. As managers, we must rely on our intuitive judgment to evaluate these results. I strongly recommend that this assessment include a conversation with the individual as well.

Step 2: Incremental Complexity

Once we have established a baseline, the next step is to marginally increase the complexity of the task. This takes the form of a new project, slightly more challenging than the previous one. Crucially, we must promise a project debrief upon completion. This debrief serves as a valuable learning opportunity for both the team member and the manager.

Step 3: Continuous Iteration

If the project is successful, it becomes a springboard for the next challenge. We continue this process, incrementally increasing the complexity with each new project, always ensuring a debrief follows. This cycle persists until we reach a point of failure.
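Steps 1 through 3 amount to a simple probing loop: raise the complexity one notch at a time, debrief after every project, and stop at the first failure. The sketch below illustrates that logic only; `perform`, `debrief`, and the numeric complexity levels are hypothetical stand-ins, not something you would literally automate.

```python
def probe_capability(perform, debrief, start=1, step=1, max_level=10):
    """Raise project complexity until the first failure (Steps 1-3, sketched).
    Returns (highest level proven, level at which failure occurred)."""
    proven = None
    level = start
    while level <= max_level:
        succeeded = perform(level)   # run a project at this complexity
        debrief(level, succeeded)    # debrief after every project (Step 2)
        if not succeeded:
            return proven, level     # the point of failure reveals the limit
        proven = level               # evidence: this level is within capability
        level += step                # marginally increase complexity (Step 3)
    return proven, None              # no failure observed within the range

# Hypothetical team member who handles projects up to complexity 4:
result = probe_capability(lambda level: level <= 4,
                          debrief=lambda level, ok: None)
# → (4, 5): capability proven at level 4, the limit surfaced at level 5
```

Note that the loop never skips the debrief, even on success – mirroring the point above that the debrief is where both parties learn, not just a formality after failure.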

The Point of Failure: A Revelatory Moment

When a team member encounters failure, we gain invaluable insights into their competence. This moment of truth illuminates both their strengths and limitations. We now have a clearer understanding of where they excel and where they struggle.

However, this is not the end of the journey. After allowing some time for reflection and growth, we must challenge them again. This process of continual challenge and assessment should persist throughout the team member’s tenure with the organization.

The Role of Deliberate Practice

This approach aligns closely with the concept of deliberate practice, which is fundamental to the development of expertise. By providing our team members with guided practice, observation opportunities, problem-solving challenges, and experimentation, we create an environment conducive to skill development.

Building Competence

Remember, competence is a combination of capability and skill. While we cannot directly observe capability, we can nurture it through this process of continual challenge and assessment. By doing so, we also develop the skill component, as team members gain more opportunities for practice.

The Manager’s Toolkit

To effectively implement this approach, managers should cultivate several key attributes:

1. Systems thinking: Understanding the interdependencies within projects and anticipating consequences.
2. Judgment: Making rapid, wise decisions about when to increase complexity.
3. Context awareness: Taking into account the unique circumstances of each team member and project.
4. Interpersonal skills: Motivating and leading team members through challenges.
5. Communication: Constructing and delivering clear, persuasive messages about project goals and expectations.

By embracing this evidence-based, iterative approach to assessing capability, managers can unlock the hidden potential within their teams. It’s a continuous journey of discovery, challenge, and growth – one that benefits both the individual team members and the organization as a whole.