Section 3: Continuous Quality Improvement (CQI) in Child Welfare

Continuous quality improvement (CQI) is an evidence-based system used by many industries and agencies.  It can be simply defined as a data-driven process that employs specific values and tools for setting goals, planning, and implementing and measuring change.  The ultimate goal of CQI is to enable organizations to improve their overall performance on an ongoing basis.

Long entrenched in Japan, Western Europe, and the United States, CQI has served as a fundamental building block of manufacturing, health care, and other industries, whose implementation of CQI decades ago led to dramatic improvements in efficiency, worker satisfaction, and customer service (Sollecito and Johnson, 2012). However, as a systematic, data-informed process comparable to those used in other sectors, CQI is still a developing field for the Nation’s child welfare programs.

The 1994 Amendments to the Social Security Act (SSA) authorized the Children’s Bureau, within the Department of Health and Human Services, to ensure substantial conformity with the State plan requirements of titles IV-B and IV-E of the SSA. Existing regulations governing the Child and Family Services Plan (CFSP) require States to describe the quality assurance (QA) systems they have in place that “regularly assess the quality of services under the CFSP and assure that there will be measures to address identified problems.”

During both rounds of the Child and Family Services Reviews (CFSRs), the Children’s Bureau found that the majority of States met basic requirements for QA systems that evaluated the quality of their services at the time of the review. However, the Program Improvement Plan phase revealed that many State QA systems needed extensive expansion and refinement to adequately assess and measure service and practice improvements on an ongoing basis, specifically with regard to CFSR outcomes and systemic factors. Thus, a major intent of the CFSRs has been to illuminate and help States enhance their ability to monitor, self-evaluate, and improve their practices and programs.

Although States and jurisdictions have made efforts to improve the quality of their child welfare services for many years, some have found it challenging to implement comprehensive reform, sustain changes made, and effectively monitor practice. With the CFSRs as a major catalyst, and as States become more informed about implementation science and evidence-based practices, an understanding has grown that changes in practice must be implemented and measured, on a continual basis, through a formalized, comprehensive, agency-wide system.

The Children’s Bureau is committed to supporting States in achieving improved outcomes for families and children, and to that end is promoting sound, comprehensive CQI systems in States' child welfare agencies. The Children’s Bureau supports the quality improvement efforts of these agencies by providing information through online publications, trainings, and technical assistance and support services.

  • Note: Information Memorandum 12-07, published by the Children’s Bureau, specifically provides information on establishing and maintaining CQI systems in State child welfare agencies. It is available online at LINK.

The essence of CQI is an organization-wide, systematic focus on better meeting the needs of the recipients of the agency’s services. A comprehensive, well-integrated CQI program, diffused into all facets of a child welfare agency, offers an exciting approach for more effective responses to demands that agencies work more efficiently, use resources more wisely, and achieve more positive outcomes.

Module Structure

This e-training module is designed to provide an introduction and overview of CQI and its benefits. The first section, Defining CQI and Building a CQI Framework, provides a discussion of several key elements that “set the stage” or complement an agency’s ability to implement a strong CQI system.

The second section, Effective Leadership and Creating a Learning Environment, deals with leadership qualities, attitudes, and activities that are key to the implementation of a successful, agency-wide CQI program. Additionally, the section discusses the necessary features of an effective learning environment.

The third section, Functional Components and Processes of CQI, describes the five key components of an effective CQI system. These include:

  1. A strong foundational administrative structure
  2. Quality data collection
  3. An effective QA case record review process
  4. Effective data analysis and data dissemination procedures
  5. Procedures to enable feedback to stakeholders and decision-makers and adjustment of programs and processes

The fourth section, Implementing Systems Change, covers the involvement of internal and external stakeholders in creating change, assessment of needs/goals, selecting intervention strategies, developing an implementation plan, implementing innovations, and sustaining the change.

The final section of the module provides Additional Resources for further study of numerous aspects of leadership, systems building, and the CQI process. The listing for each resource provides a summary and Web link.

Defining CQI and Building a CQI Framework

Continuous quality improvement (CQI) is the complete process of identifying, describing, and analyzing strengths and problems and then testing, implementing, learning from, and revising solutions. CQI relies on a proactive organizational and/or system culture that supports continuous learning; it must be firmly grounded in the overall mission, vision, and values of a child welfare agency or system. Perhaps most important, CQI depends upon the active inclusion and participation of staff at all levels of the agency/system, children, youth, families, and stakeholders throughout the process (Using CQI to Improve Child Welfare Practice, 2005).

More simply defined, CQI is an organization-wide system that involves the identification, dissemination, and measurement of best practices, as well as enhancement of processes and systems, to improve overall agency functioning and ensure more positive outcomes.

A key principle of CQI is that those who are closest to the work are the true experts in the field. Consumers, such as parents, children, and youth, and external stakeholders, such as courts, Tribes, and service providers, have much to contribute from completely different perspectives and should be incorporated by agency leaders into all phases of the CQI process. Youth, birth parents, and foster parents should be assigned to working CQI teams or committees to ensure a holistic perspective.

Some State child welfare agencies are very open and transparent, and fully involve stakeholders, families, and youth in their assessments and strategic planning activities, including their Child and Family Services Plan and Program Improvement Plan assessment and feedback. Consumers and stakeholders are made to feel such a part of these agencies that, when an agency celebrates its successes, the stakeholders and families take special pride and feel that they, too, have succeeded.

This section discusses the activities that make up a well-functioning CQI system, focusing on the importance of aligning CQI with an agency's mission, values, vision, and practice. It also outlines distinct differences between quality assurance (QA) and CQI activities, and discusses the critical importance of integrating staff, external stakeholders, and consumers into CQI processes. Finally, it defines and explains the development of outcomes and measures of success that should be used to realistically measure the results and outcomes of the CQI initiative's various components and determine whether changes are needed to improve practice and processes. It also stresses the importance of using measures as a “bridge” to connect data and intended outcomes.

Aligning CQI and an Agency's Mission, Vision, Values, and Practice

Continuous quality improvement (CQI) systems help align the agency’s practices, procedures, policies, training, and services with mission, vision, and values. The agency should focus on specific practice standards and outcomes that are grounded in its mission and values. For example, an agency whose mission reflects a strong focus on family engagement may be receiving consistent feedback from consumers that families do not feel valued and included. To ensure that the agency is adhering to its mission, practice would need to be changed to be more family-centered with a strong focus on promoting family engagement.

Other elements necessary to effectively carry out the mission and vision include:

  • Involving all key staff, external stakeholders, and consumer groups in the determination of change initiatives and measures
  • Providing maximum data/information access to stakeholders and consumers and all levels of staff
  • Using data to inform all major decisions
  • Using results to continually improve services
  • Integrating CQI activities into all aspects of systems and services, and into the agency’s core beliefs and values
  • Ensuring that child and family outcomes and measures are continually assessed

Developing and implementing a practice model, or a conceptual map of how an agency will operate and partner with consumers and stakeholders in its services, has provided an opportunity for some States to align their mission, vision, values, policies, and practice. At the same time, these States might examine their existing CQI program and, as needed, expand or modify it to ensure that their entire child welfare system coincides with the agency’s current values and standards and that all critical factors are aligned and in sync.

For instance, an agency’s practice standards might state that workers should, in visits with children, see and visit with the child alone, observe the child’s environment to make certain he or she is safe, ask foster parents if they have needs, etc. If practice is closely aligned with mission, vision, and values, agency standards in both policy and procedures will reflect those factors; management will be focused on meeting articulated standards; and the agency will be training and coaching toward achieving better practice in those areas. While standards alone do not ensure quality practice, they are a framework that reflects the agency’s values, principles, and approaches that help ensure positive outcomes for families and children.

When mission, values, vision, policies, and practice are synchronized, what is in writing, what is being coached, what is included in training, and what is assessed in case record reviews are aligned and consistent. There is consistent messaging to all staff across all programs, as well as to external stakeholders and consumers, which is critically important in making needed changes.

CQI provides the means to reach the goal of organizational excellence and encompasses the pursuit of knowledge and skills necessary to effectively accomplish the agency’s mission. To that end, the mission, values, and vision statements should be revisited periodically to ensure that policies, practices, services, and intended outcomes are in accord with tenets articulated in the agency mission.

Quality Assurance and CQI Activities

In many child welfare agencies, quality assurance (QA) case record reviews, along with the collection and review of aggregate data, may be the only or primary components of the States’ continuous quality improvement (CQI) systems. However, an agency's CQI program can and should become much broader to include many activities at many levels where case practice is reviewed, recommendations are made and carried out with the goal of achieving better outcomes for families and children, and data are generated. States should be creative in identifying the practice elements that fit these criteria.

Normally, in the QA case review process, individual or paired staff members serve as case reviewers, and they periodically examine a group of selected cases in different areas of the State. They provide feedback about their findings, and this feedback most immediately impacts caseworkers, supervisors, and the next level of management. Over the years, with the implementation of the Child and Family Services Reviews, many agencies have moved from primarily monitoring compliance in their QA case reviews to assessing quality of services and child and family outcomes.

Some agencies with well-functioning QA case review systems may question why an expanded CQI program is needed, now that they have progressed beyond compliance monitoring. It is important to remember that a full CQI system involves analyzing data from case reviews and numerous other sources to identify what is working well and what is not so that the agency can constantly improve practice. With the relatively new focus on CQI in child welfare, agencies are broadening their improvement efforts to permeate every aspect of the agency. Thus, QA case reviews should represent an important, but not necessarily the primary, component of States’ multi-faceted, agency-wide CQI efforts that drive needed changes on a continual basis.

The table below is adapted from Alan Dever's 2003 book, Public Health Practice and Continuous Quality Improvement, and outlines the major differences between QA and CQI:

Differences Between QA and CQI

QA | CQI
is a separate activity | is an integrated activity
is reactive | is proactive
is “top down” | bridges both horizontally and vertically
improves the performance of those whose cases are being reviewed | improves performance agency-wide
focuses on meeting specific compliance and outcome criteria | focuses on improving multiple processes and outcomes
measures standards that are established by professionals | uses fluid, constantly changing standards that are established by stakeholders and consumers working alongside professionals
is event based | is based on an ongoing process
is management focused (directing) | is employee, stakeholder, and consumer focused (involving)
involves selected staff and functions | is agency-wide and crosses all functions

Thus, QA uses standards established by professionals that define acceptable or unacceptable levels of performance. As a reactive or retrospective process, it assesses practice that has already occurred. QA results direct the behavior and practice of child welfare practitioners toward improved outcomes “after the fact” through a specific event, such as a scheduled case review with selected staff, that occurs outside of other improvement processes.

QA case reviews are a necessary element of a child welfare agency’s overall CQI system. Although both QA and CQI seek to improve quality, CQI proactively tracks, analyzes, and corrects ongoing, interrelated, and interconnected processes (including QA case reviews) in an effort to constantly improve systems and practices. In CQI, it is these multiple processes and systems, not the performance of specific practitioners, that are the focus of improvement. A comprehensive CQI process sends a strong signal to agency staff, external stakeholders, and consumers that their involvement is crucial to the agency’s continued learning, exploration of new ways of doing things, and improvement.

Consequently, a well-functioning CQI system encompasses a wide range of processes and facilitates the launching of targeted activities to meet identified needs. These may be major initiatives or smaller-scale projects. When data show concerns in specific practice areas or parts of the State, the agency may choose to implement family team meetings in a targeted area, or large-scale statewide initiatives such as trauma-informed care. In another situation, mentoring and coaching frontline staff on higher quality worker-child visits statewide might have a significant positive impact on safety, placement stability, timely permanency, and other areas. Varied, smaller-scale activities can occur concurrently, as long as they are being overseen, assessed, and well-managed through an analysis of the data they are generating. 

Development of Outcomes and Measures of Success

Essential to an effective continuous quality improvement (CQI) system is accurately identifying outcome and systemic areas to be measured and tracked that will assess the status and ongoing progress of an agency’s practices, programs, and services. In its August 27, 2012, Information Memorandum, ACYF-CB-IM-12-07 (available online at: http://www.acf.hhs.gov/programs/cb/resource/im1207), the Children’s Bureau stated that it intends to publish a specific set of monitoring measures in the future. Until those are known, however, concerns that have been identified in a State’s Child and Family Services Review (CFSR) and Program Improvement Plan, unless already sufficiently addressed, are a recommended beginning point for measurement toward desired outcomes.

Generally, CFSR results revealed that all States were challenged in assessing and meeting well-being needs of children and families, as well as achieving timely permanency. Specifically, many States struggle to ensure the social and emotional well-being of children and youth. Improvement in this area can have profound positive effects on several permanency issues that reflect the essence of a State child welfare program. Thus, these should receive strong consideration as areas needing to be tracked and assessed.

Outcomes are key in assessing and refining program delivery and supporting organization-wide quality improvement. When determining outcomes, or intended results of a program or initiative, agencies should strive to measure what works and does not work. In other words, rather than defining an outcome by whether or not a child or parent was the recipient of a practice or service, it should be measured instead by whether or not service participation improved functioning or the chances of success. For instance, rather than measuring success by whether or not a parent completed parent training, success (outcome) could be gauged by whether or not the training improved skills and capacities of that parent, perhaps measured by whether another incident of child maltreatment occurred within a specified period of time.

Outcomes capture the “what” and the “who,” and are written as “change statements.” In other words, in defining outcomes, the details of the targeted initiative should be considered, as well as the recipients, intended impact, and change desired. For instance, in attempting to strengthen its youth independent living program, an agency, rather than defining its outcome goal as “prepare youth to live independently,” might consider instead the following as outcomes:

  1. Increased high school graduation rates of youth in foster care, and/or
  2. Decreased instances of youth in foster care being involved with juvenile justice

Since outcomes are broad in nature, performance indicators or measures serve as a bridge connecting intended outcomes and data collected. Measures are specific pieces of information that describe observable – or otherwise captured – characteristics or changes in factors. They are indicators that can be counted, reported, observed, or somehow detailed from data collected.

In composing measures, agencies should first calculate a baseline, or initial data that allow a comparison with subsequent data for assessing impact, and then identify targets, or the levels of achievement (quantifiable goals) they hope to reach. Measures drawn up should be as simple as possible, while still being meaningful and useful. Performance measures enable an organization to use factual data it has gathered to determine whether its programs, practices, and CQI system as a whole have had a measurable impact on consumers and whether programmatic goals have been met.
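
As a simple illustration of how a baseline, a target, and a subsequent measurement fit together, the sketch below uses Python with hypothetical counts, time frames, and thresholds; it is an illustrative example, not an agency standard or a prescribed calculation.

    # Minimal sketch: comparing a measured rate against a baseline and a target.
    # The counts, time frames, and threshold below are hypothetical examples.

    def rate(numerator, denominator):
        """Return a percentage, guarding against an empty denominator."""
        return 100.0 * numerator / denominator if denominator else 0.0

    # Baseline period: families who completed parent training, and how many had
    # another substantiated maltreatment incident within 12 months of completion.
    baseline_recurrence = rate(42, 300)    # 14.0 percent

    # Target (quantifiable goal) the agency hopes to reach.
    target_recurrence = 10.0

    # Current measurement period, using the same definition as the baseline.
    current_recurrence = rate(33, 310)     # roughly 10.6 percent

    print(f"Baseline {baseline_recurrence:.1f}%, current {current_recurrence:.1f}%, "
          f"target {target_recurrence:.1f}%")
    if current_recurrence <= target_recurrence:
        print("Target met")
    elif current_recurrence < baseline_recurrence:
        print("Improvement over baseline, target not yet met")
    else:
        print("No improvement over baseline")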

Effective Leadership and Creating a Learning Environment

Agency culture can be defined as “…the basic pattern of shared beliefs, behaviors, attitudes, and assumptions acquired over time by members of an organization” (Connor, 2006). In other words, the way employees actually perceive, think, believe, and behave determines the culture of an agency. Behaviors in the work setting evolve from staff attitudes and belief systems, with the agency’s formal policies and procedures acting as the framework and guidelines for those behaviors. For an agency’s pursuit of quality improvement to be successful, an involved, focused, and responsive management team is key. Leaders should embrace and be fully committed to a continuous quality improvement (CQI) structure and process, and their actions should set the stage for full implementation of CQI.

Staff uneasiness over altered procedures, and over change in general, can sometimes increase the challenges of implementing a CQI program. A crucial factor affecting employees’ perception of change is the degree of control and involvement they have in implementing those changes. Thus, it is critically important that management involve all levels of staff (particularly field staff) in evaluating and designing change initiatives; staff should feel a degree of ownership of practice changes and their implications.

Management should anticipate and proactively deal with any apprehension (particularly at the worker and supervisor level) surrounding implementation of a systematic, ongoing process of examining, shifting, and improving practice. Great care should be taken to minimize misinformation, reassure staff, and reduce any anxieties. For example, management in some agencies undergoing significant change has held open-door office hours in affected areas as a venue for staff to express concerns and offer suggestions.

By whatever means they can, leaders should continually solicit input, provide information, answer questions, and attempt to allay concerns; doing so will result in staff feeling much more empowered, positive, and engaged in change initiatives. Leaders who are committed, forward-thinking, enthusiastic, transparent, and sensitive to staff needs can enable employees, external stakeholders, and the organization as a whole to adapt and thrive in the challenging environment of change.

There should be unwavering constancy of purpose in communicating to staff at all levels and in all divisions the immense rewards of a well-functioning CQI system, and expectations regarding their full participation in the process. Staff should be helped to understand that CQI is not a time-limited project or initiative, but will instead be transformative and lasting. CQI will not just augment their work; it will become the way the agency does its work.

This section further explains how effective leaders will go about creating and sustaining a continuous learning environment that yields ongoing improvements. It also discusses using leadership to deal with challenges and promote change, with a focus on the Adaptive Leadership model.

Note: For more information about Adaptive Leadership, visit the Cambridge Leadership Associates Web site at LINK

Creating and Sustaining a Continuous Learning Environment

Questioning and thinking reflectively are of critical importance in implementing a continuous quality improvement (CQI) system, as is a thorough understanding of the continuous learning atmosphere instilled through CQI. Management and administration, including unit supervisors, should constantly reinforce with staff that there are always better ways to do things. They should not only encourage staff to question the status quo, but also reward curiosity, creativity, and bold thinking. Staff at every level should be constantly encouraged to seek ways to improve their own performance, independent of agency requirements.

A continuous learning environment will:

  • Provide openness and transparency about agency activities, goals, and performance
  • Promote the free sharing of information at all levels to increase knowledge
  • Encourage and enable questioning, feedback, and recommendations/input from all strata of staff to all levels of administration
  • Minimize bureaucratic controls that hinder implementation of improvements and better practices
  • Promote ownership and involvement in new practices and processes
  • Recognize and reward creative thinking
  • Encourage analysis and learning from mistakes and failures
  • Engage staff in “sense making” or reasoning about case practices and CQI activities
  • Foster understanding of, and pride in, the learning culture
  • Promote trust in leadership

As succinctly described by Michael Fullan in his 2004 article, Systems Thinkers in Action: Moving beyond the standards plateau, “A learning organization is a place where people are continually discovering how they create their reality and how they can change it.” A focus on continuous learning, combined with the commitment and involvement of staff at all levels to collaboratively examine and improve practice, will engender excitement for improving the status quo and encourage a CQI-rich environment to emerge.

Once a learning culture has been created within the agency and a comprehensive continuous quality improvement system set in motion, there must be unwavering commitment on the part of agency administrators, teams, and individual staff members to maintain the process. It may be easier to sustain interest and activity among external stakeholders and consumers, at least initially, as they may view anticipated changes more enthusiastically than do some staff who are dealing with the loss of established roles and ways of doing things. Maintaining an institutional improvement path is sometimes more daunting and time-consuming than the initial task of gaining staff enthusiasm and support, but it can be done.

In developing an environment that identifies and sustains needed change, it is particularly critical to convert small individual and project successes by field personnel into sustained performance. Ultimately, the success or failure of the enterprise will likely hinge on the degree to which leadership engages its frontline staff in the CQI activities. It may take months for new processes to feel routine and for consumers and staff to perceive the benefits of change initiatives and an integrated CQI system. Regardless, the temptation to move away from a continuous improvement mindset must be avoided. The focus should be on consistently encouraging employee buy-in and enthusiasm for meaningful change and its rewards.

Using Leadership to Deal with Challenges and Promote Change

Effective new ways of leading and managing are critical for all levels of leadership when an agency is undergoing sweeping systems change. Absolutely essential is the ability to proactively envision and frame opportunities for the agency, as well as drive performance and innovation within teams and among employees agency-wide. Leaders in today’s changing organizations must marshal resources toward adaptation and innovation in the implementation and management of their CQI programs, and must energize and inspire those around them to achieve.

Various leadership models help develop leadership knowledge, skills, and capacity to lead effectively on a day-to-day basis. Others, such as the Adaptive Leadership model, are particularly effective for significant systems change efforts; they enable organizations to adapt and flourish in complex, challenging environments. The Adaptive Leadership model presents strong evaluative skills and techniques for distinguishing the necessary from the dispensable, having courageous conversations, encouraging experimentation and creativity, tolerating risk-taking and mistakes, and dealing with loss. A capable leader continually and artfully works to move the organization from the status quo toward real change that is embraced by the entire organization.

Adaptive Leadership and other models recognize the value of individual employees and their contributions to the overall success of the organization, and stress that effectively employing a systems change leadership model will lead to much greater engagement of the workforce in the workings of the organization. These leadership models require bold new ways of thinking and responding. Even if managers have developed their own leadership styles over the years, these new skillsets and innovative ways of leading and managing can be practiced and developed.

Note: For more information about Adaptive Leadership, visit the Cambridge Leadership Associates Web site at LINK.

In the change process, one difficulty many leaders have is distinguishing technical from adaptive challenges. Technical challenges usually belong in the realm of processes or mechanics and, with the correct expertise and tools, are generally fixable. In the child welfare world, an example of a technical problem or challenge would be older foster youth attending college who are not receiving their Education and Training Voucher (ETV) checks in a timely way. As a solution, the ETV payment system, and the processes of those involved in that system, can be examined and adjustments made so the youth begin receiving their checks on time.

Adaptive challenges are those where solutions often require people to learn new behaviors or change attitudes or beliefs. The ability to distinguish technical challenges from adaptive ones and tailor efforts to meet them is a leadership skill. If technical fixes are employed for a problem and the problem persists, that is a clear indication that an underlying adaptive challenge exists.

For example, data may show that the State has issues locating and engaging absent fathers. Leadership initially sees this as a technical problem, and institutes an enhanced parent locator system statewide. However, data continue to show that absent fathers are not being contacted and engaged. After delving deeper into the issue, it becomes apparent that many staff believe that absent fathers contribute limited value to a case and their efforts can be better spent in other ways. It becomes obvious that the issue is a significant adaptive challenge, requiring education of staff so they begin to understand and think in new ways about the value of fathers and paternal relatives to the child.

Having Courageous Conversations

A critical leadership task that goes hand-in-hand with creating and sustaining a continuous learning environment is producing a culture that encourages creativity, flexible behaviors and attitudes, and the embracing of new ideas. An important step is having dialogue during solution-seeking that is probing and challenging, or having, in other words, “courageous conversations.” Since change often challenges deeply-held values and beliefs, courageous conversations are about confronting delicate issues and challenging assumptions, beliefs, and processes at the individual, unit, division, regional, and organization-wide levels. They are also about leaders listening to all voices, including dissenters, and being able to both give and receive tough messages. Openness to these conversations allows leaders to be perceived as more authentic, credible, and trustworthy.

For example, a far-reaching issue that impacts staff, stakeholders, and other community agencies is institutional racial/ethnic disparity and disproportionality. Courageous conversations may need to take place to reveal and articulate those deep-seated behaviors, beliefs, and attitudes that impair individual and organizational ability to ensure fairness and equity in dealing with families of different races or ethnicities. Such individual conversations may cause discomfort and even distress in some, but they are a necessary element to confront assumptions and prejudices, foster true learning and growing, and promote deep, effective, and lasting change.

On an individual level, a courageous conversation that might take place with a caseworker would involve a situation where a State began dual licensure of both foster and adoptive homes, using the same standards and processes, and an adoption worker expressed strong resistance. She voiced that losing the specific adoption perspective would not be good for children, when, on a deeper level, she feared her loss of status as a statewide adoption expert. A courageous conversation would need to occur with that worker to help her confront and deal with her feelings of loss regarding her position within the agency, and help her learn to be of value in the new system.

Encouraging Experimentation and Creativity

Finding true solutions to adaptive challenges necessitates the involvement of not only leaders, but staff, external stakeholders, and consumers; it is critical that leadership empower these groups to explore novel solutions. Additionally, integral to leadership is a willingness to take calculated risks and encourage innovation and experimentation in problem-solving around major challenges as well as day-to-day situations.

Leaders can foster an atmosphere of exploring unprecedented ideas and measured risk-taking by framing solution-seeking efforts as experiments. To further set the stage for experimentation, it may be necessary to disrupt existing patterns and allow uncertainty and conflicts to emerge between individuals and groups. Skilled leadership involves active orchestration of the uncertainty and discomfort toward a focused dialogue of the presenting issues so that the disturbance is productive, rather than destructive; through this “disequilibrium” and dynamic, rich interaction, the seeds of change and new ways of doing things often emerge.

Encouraged by flexible leaders, many agencies have already shown great creativity in implementing change and redesigning their continuous quality improvement (CQI) programs. Some have built capacity through imaginative partnerships with other entities, both public and private, that support their CQI programs in numerous ways and foster ongoing productive relationships. For example, an agency with limited quality assurance (QA) case review resources might develop and use its foster care review board to supplement QA case reviews, with the board providing qualitative case information around permanency and well-being items while the QA case review teams focus more on safety and in-home cases. Devising this solution might pose several adaptive challenges to be resolved, such as “turf” issues and empowerment of the review board, and might also require courageous conversations.

Other agencies have devised unique and impressive ways of educating managers and supervisors to manage by data. Some States have employed their CQI model’s principles and methodology not only in their work with families, but to enhance casework and supervision as well. Still others have used the steps of their CQI model as a logical, beneficial method of dealing with difficult internal processes, such as case transfer between units or divisions when the receiving unit is resistant to taking the case. 

Tolerating Risk-Taking and Mistakes

Another critical element of effective change leadership is a tolerance of risk-taking on the part of those who, while working through the change process, make mistakes or try new ideas that prove unsuccessful. Trial and error is often an important part of successful change, which means that those navigating the change process must develop the capacity to take risks, experience failure, and learn and adapt from those failures.

Traditionally, the field of child welfare has been one that does not tolerate mistakes because the stakes – children’s safety – are so high. This can make experimentation and implementation of innovations challenging. For example, in the 1980s, many child welfare practitioners were opposed to the implementation of family preservation services, as they felt that foster care was a better way to ensure child safety. In hindsight, there is evidence that working to keep children with their families with safety supports improves outcomes.

It should be noted, though, that tolerance of risk-taking as part of the change process does not mean tolerating risks that result in children being unsafe. The improvement process still requires informed, balanced risk-taking to move forward and improve outcomes for families and children, while continuing to ensure children’s safety. Leaders who accept and effectively deal with lack of success as part of the change process become stronger because lessons learned from unsuccessful efforts illustrate where assumptions were wrong and where future investments should be targeted. And, as the child welfare field moves toward data-based management, expanded continuous quality improvement systems, evaluation of programs and outcomes, and the use of evidence-based practice, implementing change involves less risk.

Dealing With Loss

Rather than resisting change per se, many people, instead, resist loss of their roles or of the status quo. A common factor contributing to difficulty adapting or changing is fear of, and resistance to, loss and doing things a new way. When change involves real or potential loss, even in perceptions and beliefs, it can be painful and difficult. Those affected may respond out of fear and anxiety, and these feelings, if not addressed, can slow down or even derail a thoughtful, well-managed change effort.

A key to effective leadership is the ability to anticipate and deal with the kinds of losses – from roles, job functions, status, and relevance; to beliefs, identity, and competence – that are at stake in a given situation. Capable leaders will identify, assess, provide context for, and manage losses so that people can move on to new ways of doing things. Helping people learn and appreciate that their loss is contributing to the beginning of something valuable and substantive should help move them along. 

Functional Components and Processes of CQI

An effective continuous quality improvement (CQI) system consists of five “functional components” that must be addressed by the agency. Those five functional components are listed below and described in more detail in this section:

  1. Foundational administrative structure describes the specific considerations an agency, whether State-administered, county-administered, or privatized, must address to ensure that its CQI system functions effectively, has appropriate oversight, and is consistently administered as designed.
  2. Quality data collection details the importance of collecting accurate, complete, timely, and meaningful data from a variety of sources for use in assessing and improving practice and systems, and discusses ways to help ensure that data are accurate.
  3. Case record review data and process considers the agency's quality assurance (QA) case review process, including the qualifications of QA case reviewers, how to ensure inter-rater reliability, and how to deal with special challenges that may be encountered in the QA process itself.
  4. Analysis and dissemination of quality data discusses data-based decision making, qualifications of data analysts, differing levels of data analysis, and data dissemination.
  5. Feedback to stakeholders and decision-makers and adjustment of programs and processes explains the "feedback loops," or bi-directional communication, that must exist among everyone involved in the CQI process, including all levels of the agency, external stakeholders, consumers, and decision-makers. It also shows how States use data and information to drive organizational change, at varying levels, and improve child and family outcomes.  

Quality assurance (QA) and CQI activities in State-administered systems should take place with consistency and quality statewide. The same is true for reviews in county-administered or privatized systems, or any combination of administration types. The results of QA case reviews, considered in conjunction with other CQI activities, will help ensure and sustain high quality services across the agency.

For more information about the five functional components of an effective CQI system, see Information Memorandum 12-07, published by the Children’s Bureau. It is available online at LINK.

Foundational Administrative Structure

A solid foundational administrative structure is a critical component in the development of a well-functioning continuous quality improvement (CQI) system. Strong administrative oversight and commitment by leadership is an obvious element of this component. To illustrate commitment and to promote staff and stakeholder buy-in, leadership should ensure that:

  • Agency-wide CQI standards, requirements, policies, and procedures are clear and consistent
  • The State possesses or builds the capacity to implement a strong operational CQI system
  • There is strong guidance of the CQI program

To be successful, the new systems must create linkages within the entire agency, both vertically and horizontally. By focusing on creating a strong administrative structure, with adequate resources and solid direction, the agency will help ensure the effective functioning and sustainability of its CQI system.

State agencies should have procedures in place that result in a statewide, systematic approach to implementing and overseeing the CQI process, ensuring that it is being applied fairly and consistently. There should be a well-articulated, common approach to implementing, reviewing, and adjusting any CQI process.

Some States facilitate CQI by establishing a centralized CQI unit or division that serves as a bridge to connect all areas of the agency, from upper management to support staff. In this case, agencies may assign joint responsibility for monitoring the progress of CQI and any initiatives to both field and central office CQI staff, based on the initiative. Regardless of centralization, an agency will illustrate its commitment to CQI by having designated CQI staff, adequate resources, clearly written procedures, and by expanding its CQI system to go beyond the case review process.

The agency’s CQI requirements, policies, and processes should be clearly articulated in writing, and should illustrate consistent standards and procedures for the entire State, including other public agencies that operate title IV-E programs for the State. The standards and requirements should be structured to ensure that all jurisdictions across the State are implementing and executing CQI activities as designed and intended. Additionally, there should be an approved, consistent training process for CQI staff, under the guidance of the CQI oversight division, to include any external stakeholders who are involved in conducting CQI activities.

State agencies should have capacity and resources to implement and sustain a statewide CQI program on an ongoing basis, using designated CQI staff. Designated staff who focus on CQI activities will be able to develop a high level of expertise in quality assurance (QA) case review and other CQI activities. Having such staff also helps send a message to frontline caseworkers that they are accountable for the outcomes of cases they carry.

It is important that all jurisdictions of the State be assessed and reviewed at least annually through the QA case review process, in order to ensure consistent evaluations of all areas and that practice and system adjustments are made as needed, achievements are recognized, and new goals are being set. 

Quality Data Collection

Collecting data and ensuring quality are critically important in a State’s efforts to establish a robust continuous quality improvement (CQI) system of data compilation, analysis, and dissemination. The collection of quantitative and qualitative data from varying sources is the foundation of a CQI system; a robust connection between administrative data and other sources of information is key to a plausible vision of change. If solid process and outcome data are used to identify strengths and concerns and establish strategies for improvement, and if progress and trends are tracked by repeated measuring, the results can provide management with a simple, visually compelling thermometer of the organization’s performance and health at every level and can help the agency see where it wants to go in the future.

All agencies want data that are timely, complete, understandable, and relevant to the task at hand. Data must be accurate and relevant before data analysis can yield beneficial results; for this to occur, any issues that exist with caseworkers and data entry must be identified and resolved. Furthermore, there must be an efficient process in place for the resolution of data quality issues.

An agency’s interrelated activities and processes are anchored by the quality of its information systems and their ability to produce accurate, reliable, interpretable data that are consistent in definition and usage across the State and nationally.

Collecting Data and Ensuring Quality

States should have the ability to input, collect, and extract quality information from a variety of sources, including data from Federal reporting systems, case review data, and other administrative, quantitative, and qualitative data sources. States should also be able to ensure that the quality of their data is maintained.

Data collected in relation to continuous quality improvement (CQI) should be related to both practice standards (Did monthly visits with the child occur?) and outcomes (Did the child experience repeat maltreatment?). Agencies are already collecting large amounts of aggregate data, or data compiled from several measurements, much of which feeds into Federal systems such as the Adoption and Foster Care Analysis and Reporting System (AFCARS), the National Child Abuse and Neglect Data System (NCANDS), and the National Youth in Transition Database (NYTD). States may also collect case review data and data that reflect performance in systemic areas. Many agencies collect data specific to various other areas, such as length of time to complete investigations, occurrence of team meetings with families, worker caseloads, and evidence of racial or ethnic disproportionality. Some States also access data that are available from partner agencies, such as the courts, juvenile justice, and mental health providers.
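
As a concrete illustration of turning case-level records into a practice-standard indicator and an outcome indicator, the Python sketch below uses entirely hypothetical records and field names; it is not drawn from AFCARS, NCANDS, NYTD, or any actual State system.

    # Minimal sketch: deriving a practice-standard indicator (monthly caseworker
    # visits) and an outcome indicator (repeat maltreatment) from case-level data.
    # All records and field names are hypothetical.

    cases = [
        {"case_id": "A-101", "months_in_period": 6, "visits_completed": 6, "repeat_maltreatment": False},
        {"case_id": "A-102", "months_in_period": 6, "visits_completed": 4, "repeat_maltreatment": False},
        {"case_id": "A-103", "months_in_period": 3, "visits_completed": 3, "repeat_maltreatment": True},
    ]

    total = len(cases)

    # Practice standard: a completed visit for every month in the review period.
    visited_monthly = sum(1 for c in cases if c["visits_completed"] >= c["months_in_period"])

    # Outcome: no repeat maltreatment during the review period.
    no_recurrence = sum(1 for c in cases if not c["repeat_maltreatment"])

    print(f"Monthly visits met: {visited_monthly} of {total} ({100 * visited_monthly / total:.0f}%)")
    print(f"No repeat maltreatment: {no_recurrence} of {total} ({100 * no_recurrence / total:.0f}%)")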

Data from case record reviews, in a well-functioning CQI system, will help determine whether case review instruments and ratings are completed as per instrument instructions and with consistency across reviewers. Review data should also support practice and outcome summaries. Additionally, processes to extract accurate quantitative and qualitative data from across the State’s jurisdictions should be clear and consistently implemented. These methods and processes should be documented, with a process in place to review and verify that they are being followed.

It is possible to generate so much data that an agency becomes overwhelmed as it begins the process of analysis. According to Reveal and Helfgott (2012), “There is a simple and universal answer to ‘what data do I need?’ and that is, it depends on what question(s) you are trying to answer.” For example, agencies might ask themselves “How can we increase placement stability of children in care?” or “How can we stabilize children emotionally and decrease placements in residential treatment facilities?” In other words, agencies will need to make strategic decisions about data they need based on an understanding of what they want to achieve. Thus, agencies should tie data back to their goals, key strategies, and system change efforts.

Agencies should also prioritize attention to data by focusing on the most critical data first. They should then consider data that have the broadest value, are of the greatest benefit to the majority of users, or are of value to the most diverse users. As agencies learn from their earlier efforts and become more skilled at analyzing data, they will be better able to handle analysis of more varied data.

Quantitative and Qualitative Data

States will input and extract both quantitative and qualitative data from their continuous quality improvement (CQI) systems, in order to have a more complete understanding of issues being evaluated and addressed. Quantitative data are those that are expressed by numbers and/or frequencies rather than by meaning and observation/experience. In other words, quantitative data are numerical measurements of an object or event (e.g., how many, how much, or how often), while qualitative data are descriptive of characteristics or attributes, representing what someone observes or otherwise gleans. Qualitative data and research help agencies understand action and experience as a whole and in context.

Some common sources of quantitative data are questionnaires, case record reviews, and extractions from institutional databases, while qualitative information can be obtained from activities such as focus groups, review of case files, and case-related interviews. Because qualitative data are descriptive, they may be more challenging to analyze than quantitative data. Each type of data has positive attributes, and combining both can result in a more comprehensive picture of agency functioning, as both types enable deeper understanding of various phenomena and provide new knowledge.

The following chart shows the difference in qualitative and quantitative elements for the same group, youth age 17 about to age out of foster care in a region of a State:

Aging Out Youth - Qualitative Data | Aging Out Youth - Quantitative Data
Employment readiness | 173 youth
Relationship with birth families | 83 girls, 90 boys
Support systems with caring adults | 62% (107) graduating from high school
Preparedness to live unsupervised | 48 youth college-bound
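
As a small worked example, the quantitative column above can be produced by straightforward aggregation of case-level records. The sketch below assumes a hypothetical list of youth records with illustrative field names; it shows the kind of aggregation involved rather than reproducing the regional figures above.

    # Minimal sketch: aggregating quantitative indicators for youth about to age
    # out of foster care. Records and field names are hypothetical.

    youth = [
        {"sex": "F", "graduating": True, "college_bound": True},
        {"sex": "M", "graduating": False, "college_bound": False},
        {"sex": "F", "graduating": True, "college_bound": False},
        # ...one record per youth in the region
    ]

    total = len(youth)
    girls = sum(1 for y in youth if y["sex"] == "F")
    boys = total - girls
    graduating = sum(1 for y in youth if y["graduating"])
    college_bound = sum(1 for y in youth if y["college_bound"])

    print(f"{total} youth ({girls} girls, {boys} boys)")
    print(f"{100 * graduating / total:.0f}% ({graduating}) graduating from high school")
    print(f"{college_bound} youth college-bound")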

To augment their case review systems or to delve further into specific issues, States may want to administer surveys, conduct interviews, or hold focus groups with staff, external stakeholders, or consumers to obtain more qualitative information. Collecting this additional information will go far to provide a more complete picture of overall agency strengths, needs, and functioning in terms of outcomes for children and families, and may be particularly helpful in evaluating systemic factors, such as adequacy of services in the community and training of staff and resource parents. For example, if foster parent retention is a challenge for the agency, it may choose to interview foster parents who dropped out of the program in the past year, or conduct a focus group with current foster parents, to gain valuable qualitative data about changes needed in the program in order to increase retention.

In general, internal stakeholders who would typically be interviewed or included in focus groups include caseworkers (investigation, foster care, and in-home), supervisors, foster home finders, adoption staff, information technology staff, and the local child welfare director. External stakeholders could include organizations and individuals representative of the entities that participated in the development of the State’s Child and Family Services Plan. Likely participants would be the courts, guardians and attorneys ad litem, directors and staff of community agencies who serve agency consumers, Tribal representatives, law enforcement personnel, and agency attorneys. Foster and adoptive parents and consumers, such as youth served by the agency, would also be included.

To assist in the process, the State’s CQI oversight division might develop a set of core and follow-up questions for the various groups to be used as guidelines in interviewing and facilitating focus group discussions across the State. However, if specific issues are being targeted, questions may need to be added to reflect local or regional concerns. The interviews and focus group meetings should be standardized as much as possible to help ensure consistency in the information and data obtained throughout the State. It will be important to remind participants that their responses should reflect current agency experience (within the past year or two) rather than anecdotes or information, whether positive or negative, from several years in the past.

Caseworkers and Data Entry

Caseworkers are commonly the originators of the bulk of an agency’s data in its Statewide Automated Child Welfare Information System (SACWIS) or other statewide system, and often this is where data quality begins. A study (Carrilio, 2008) on caseworkers’ use of computers and data systems found four variables relating to the accuracy of their data entry:

  1. Skills and experience with using computers (worker background and comfort level with use of computers)
  2. Perceived ease of use of the agency’s automated system (worker perception regarding user-friendliness of the system)
  3. Utility of the data (worker belief about usefulness and helpfulness of data being gathered)
  4. Attitude about the data (worker perception regarding importance of inputting and gathering the data)

Agencies already employ tools for checking the accuracy and completeness of data. States that continue to have significant data errors and inconsistencies should address any worker entry issues through training and coaching. A well-functioning help desk and other supports for direct delivery staff will also assist greatly in minimizing errors and ensuring a collective sense of responsibility for accurate data. In addition, States should examine how they define the data elements to be captured, the clarity of instructions overall, and whether data entry screens and systems are well-designed. Well-designed systems will have:

  • Clear screens
  • Well-spaced and uncluttered fields
  • Easy-to-read font sizes
  • Descriptive captions that are easy to understand
  • Information that flows in a logical order within the screen and from screen to screen
  • Ease of entry

Agency leaders should accept responsibility for the appropriate breadth, quality, and usefulness of an agency’s data, and should continually look for ways to improve data. When data are faulty or otherwise inadequate, management should ensure that effective processes are in place to identify, report, and address data errors, inconsistencies, and omissions at whatever juncture and level they may occur. Corrective mechanisms may involve instituting a vigorous data quality assurance (QA) process, training or re-training staff, re-examining skills of those analyzing the data, and/or creating partnerships with outside entities for training and technical assistance to ensure more effective data collection and analysis.

Resolution of Data Quality Issues

Child welfare agencies must stay current on the demographic data about the children, youth, and families being served at any given time. Additionally, they must be able to ascertain the practices and services being provided by frontline staff, as well as services provided by contractors and community agencies. Finally, they must be able to determine whether outcomes of services and practices are meeting agency expectations. Sound assessments of practice and outcomes depend on correct, consistent, and complete data.

Even though agencies strive for data accuracy, in the best of systems there will occasionally be inconsistencies. For example, if the number of children free for adoption differs significantly from the number of children on whom termination of parental rights (TPR) has occurred, there is an obvious error in at least one of the numbers. The State’s process should be clear about who is responsible for entering data, ensuring data accuracy, and correcting errors that are found. Additionally, the process for correcting such errors should be clear, transparent, and effective.

Three of the most common elements that contribute to poor data quality are:

  1. Duplication of information across files and systems
  2. Incomplete and missing data elements
  3. Inconsistent or untimely data entry
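
A minimal sketch of automated checks for these three issues follows; the record structure, field names, and timeliness threshold are hypothetical, and a real SACWIS or comparable system would implement such rules within its own data quality process.

    # Minimal sketch: flagging duplicate records, missing required elements, and
    # untimely data entry. All fields, records, and thresholds are hypothetical.
    from datetime import date

    REQUIRED_FIELDS = ("child_id", "case_id", "placement_type", "entry_date")
    TIMELY_ENTRY_DAYS = 30   # hypothetical standard for timely data entry

    records = [
        {"child_id": "C-1", "case_id": "K-9", "placement_type": "foster care",
         "event_date": date(2012, 11, 1), "entry_date": date(2013, 1, 5)},
        {"child_id": "C-1", "case_id": "K-9", "placement_type": "foster care",
         "event_date": date(2012, 11, 1), "entry_date": date(2013, 1, 5)},
        {"child_id": "C-2", "case_id": "K-10", "placement_type": None,
         "event_date": date(2012, 12, 28), "entry_date": date(2013, 1, 2)},
    ]

    # 1. Duplication of information across files and systems.
    seen, duplicates = set(), []
    for r in records:
        key = (r["child_id"], r["case_id"], r["event_date"])
        if key in seen:
            duplicates.append(r)
        else:
            seen.add(key)

    # 2. Incomplete and missing data elements.
    incomplete = [r for r in records
                  if any(r.get(field) in (None, "") for field in REQUIRED_FIELDS)]

    # 3. Inconsistent or untimely data entry (entered long after the event).
    untimely = [r for r in records
                if (r["entry_date"] - r["event_date"]).days > TIMELY_ENTRY_DAYS]

    print(f"Duplicates: {len(duplicates)}, incomplete: {len(incomplete)}, "
          f"untimely: {len(untimely)}")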

To be successful, data quality improvement activities need widespread support and active involvement from all levels of staff. Data quality management must be a collaborative effort that bridges the gaps between the information technology (IT) department and the program divisions.

One approach to managing data might be a collaborative model in which the program side is accountable for ensuring that there are well-defined data quality rules, elements to be captured, measures, and acceptability levels, while IT is responsible for instituting and maintaining the architectural framework to ensure ease of capturing information, that rules are observed, and measures are accurately reported. As Reveal and Helfgott (2012) explained in their article Putting the Pieces Together, “In a fact-based decision-making culture, operational, policy, and regulatory data are all treated as assets of the agency and system, not the purview of a single office.”

Case Record Review Data and Process

A critical component of any agency’s continuous quality improvement (CQI) system is the ongoing, periodic review of case files taken from a statewide case sampling of children who are or were served under the title IV-B and IV-E programs. These quality assurance (QA) case reviews should be performed by skilled QA case reviewers who collect information to assess practice, services, and outcomes for children and families, and to determine whether specific requirements have been met.

Pivotal to gaining a complete picture of the case is conducting case-related interviews with the various parties involved in the case. QA case reviews employing a comprehensive case review instrument as well as case-related interviews will yield meaningful data that can be used to make individual, unit, division, regional, and statewide practice improvements.

Through these thorough case file reviews and interviews, the State can better understand how the agency’s policies, procedures, and practices are impacting children and families. Assessment of this detailed case-level data helps in evaluating the quality of services being delivered, and how the agency can better ensure children's ongoing safety, permanency, and well-being. The State’s policies and manuals should provide clear guidance for carrying out and completing case reviews.

The QA case review process should:

  • Cover the entire State
  • Have clear, consistent written policies and processes
  • Be underpinned by strong infrastructure
  • Include interviews of case participants
  • Ensure reviewer skill
  • Promote inter-rater reliability
  • Identify, measure, and clarify practices that guide safety, permanency, and well-being in terms of daily practice

QA case review activities in all States, whether State-administered, county-administered, or privatized, should take place with consistency and quality statewide, with ongoing involvement and monitoring by the State’s CQI oversight division. Case review activities are an integral part of an agency’s CQI program as a whole. The meaningful results generated by QA case reviews, considered in conjunction with other CQI activities, will help ensure and sustain high quality services across the agency.

Case Sampling

A continuous quality improvement (CQI) system ensures that States review cases of children based on a sampling universe of children statewide who are or were recently in foster care and children statewide who are or were served in their own homes. State data systems should be able to clearly identify a relevant sample frame. The universe of cases should include the title IV-B and IV-E child population directly served by the State agency, or served through title IV-E agreements (e.g., with Indian Tribes, juvenile justice, or mental health agencies).
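
As a purely illustrative sketch, a simple random sample could be drawn from each universe along the lines shown below. The case identifiers, sample sizes, and fixed random seed are hypothetical assumptions for demonstration; actual sampling plans are governed by the State’s CQI policy and applicable review requirements.

```python
# Illustrative sketch only: universes, sample sizes, and seed are hypothetical.
import random

def draw_review_sample(foster_care_cases, in_home_cases,
                       n_foster=40, n_in_home=25, seed=2013):
    """Draw a simple random sample from each statewide sampling universe."""
    rng = random.Random(seed)  # fixed seed so the draw can be reproduced
    return {
        "foster_care": rng.sample(foster_care_cases,
                                  min(n_foster, len(foster_care_cases))),
        "in_home": rng.sample(in_home_cases,
                              min(n_in_home, len(in_home_cases))),
    }


# Example usage with placeholder case identifiers
foster_universe = [f"FC-{i:05d}" for i in range(1, 5001)]
in_home_universe = [f"IH-{i:05d}" for i in range(1, 3001)]
sample = draw_review_sample(foster_universe, in_home_universe)
print(len(sample["foster_care"]), len(sample["in_home"]))
```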

QA Case Reviewers

Ensuring the skills of quality assurance (QA) case reviewers is integral to a continuous quality improvement (CQI) system that yields quality data and functions with integrity. Reviewers are normally drawn from staff within the agency and are centrally or regionally supervised, or they may be contractor/private agency staff. Sometimes reviewers may be consultants from outside the agency who have expertise in child welfare; some States use a combination of these reviewer types. In any case, having designated staff who focus on the CQI role promotes the development of a high level of expertise in case review. Desired qualifications of quality assurance case reviewers would likely include:

  • Extensive experience in, and knowledge of, the State’s child welfare system
  • Critical thinking skills
  • Thorough knowledge of best practices and the agency’s practice standards
  • Ability to assess and synthesize large amounts of case information that may be complex and/or conflicting
  • Ability to work well as a team member
  • Ability to write well
  • Good interviewing skills 
  • Ability to be diplomatic and collegial with other staff

To help ensure objectivity, it is recommended that QA case reviewers and those who provide QA review for the instruments have no responsibility, directly or indirectly, for the cases being reviewed. The further removed reviewers are from the cases they are reviewing, the better. A process should also be in place to screen potential reviewers to ensure that there is no conflict of interest, personal interest, or bias in a case that could impact the reviewer’s objectivity. If a conflict of interest is identified with a particular reviewer or team, a process should be in place for dealing with it (e.g., substituting designated back-up cases for the case(s) in question, switching cases among different reviewers or review teams, or assigning back-up reviewers to the case(s) in question).

Caseworkers and supervisors whose cases are being reviewed are more willing to be full participants in the process if they perceive the QA case reviewers to be open, flexible, and receptive. Administrators should continue to ensure that the reviewers have the ability to put themselves in the shoes of the caseworkers and supervisors whose case(s) they are reviewing, and that reviewers are promoting, on an ongoing basis, supportive and collegial relationships with casework and other staff.

CQI is an approach that is built upon a partnership between the child welfare practitioner and the case reviewer. Each party is mutually seeking to learn, to elevate both the practice of the organization and the skills of the individual practitioner, and to achieve ongoing improvements toward more effective outcomes.

Case Review Instrument

It is critically important that agencies have a well-developed case review instrument that is clear, concise, and definitive, with instructions that systematically guide reviewers in how to assess items and complete the instrument. For agencies that are currently in the process of designing or revising their case review instrument, it is recommended that reviewers, the instrument's end-users, be involved in the process. As noted data experts Hiruma and Kaiho explained, “Research demonstrates that what is considered a quality tool from a designer’s perspective may differ from the user perspective. Engaging the end-user in the tool development process can improve alignment between the purpose of the tool and the needs of the end-user” (Wandersman, Chien, and Katz, 2011). Further, any new or revised instrument or tool should be piloted prior to its full release.

In any case, a review instrument should always include the State’s practice standards, with a focus on the outcomes that the agency wants to achieve. As an example, if an agency has a practice initiative focused on engaging non-custodial parents, it might want to ensure that the review instrument provides a focus on what the worker has done to engage a non-custodial parent. The worker's activities would be compared with the requirements and measures of the strategy or initiative. For instance, it may be that in every case it is expected that the worker complete a formal search and other specific activities to identify and locate non-custodial parents, as well as engage in specific activities to promote engagement. By expanding its case review instrument to cover those areas, the agency could use its case review process to determine whether the initiative has been properly implemented and whether the enhanced practice is having the intended effect or outcome of more non-custodial parent/child involvement and, possibly, reunifications.

Note: The Onsite Review Instrument (OSRI) used during Round 2 of the Child and Family Services Reviews (CFSRs) is a good example of a case review instrument. It is available for download on the Resources Page of the CFSR Information Portal (https://www.cfsrportal.org/node/1159).

Case-Related Interviews

It is essential that agencies, in their quality assurance (QA) case review programs, include interviews of case participants as a part of their review process. There may be others, but parties interviewed typically include:

  • The child/youth (if old enough to participate)
  • Birth parents (as applicable)
  • Caregivers
  • Caseworker and/or supervisor
  • The child’s attorney or representative (as appropriate)
  • Service providers involved in the case

The interviews add valuable case information that may be missing from the file, or that may differ from case file information. Interview information may also corroborate information found in the case record. Reviewers and interviewers should prepare for each interview by noting areas in the case file where information is missing, unclear, or needs validation by the person being interviewed.

Interviews help complete case information, and they can provide unique perspectives that would not otherwise be known. It is particularly important that parents, foster parents, and children/youth be interviewed, because they are direct consumers of agency services and can provide an otherwise unavailable qualitative assessment of practices and services such as case planning, appropriateness of a specific service, and accessibility issues. 

Inter-Rater Reliability

Inter-rater reliability in the continuous quality improvement (CQI) process refers to the degree to which the agency's quality assurance (QA) case reviewers agree with each other in their assessments and ratings of the review instrument items. It is critical to ensure inter-rater reliability to the greatest degree possible, so that practice and outcomes for the State or area can be accurately assessed and review data will be of good quality.

Some elements to help ensure inter-rater/reviewer reliability are:

  • A case review instrument that is clear, flows in a logical way, and has instructions that guide reviewers in assessing and completing instrument items.
  • Comprehensive, uniform training with refresher training sessions as needed for all case reviewers. There should be as much consistency among trainers as possible, and a component of the training should be the completion of a sample case using the case review instrument.
  • New reviewers who shadow and are mentored by experienced reviewers on a case review, case review exercise, or training session outside an actual review.
  • Reviewers who read and perform QA on each other’s instruments.

Other processes which help ensure instrument quality are:

  • Another level of QA performed on all reviewers’ instruments by someone with reviewer or QA expertise from outside the chain of command or even outside the agency, which helps to ensure consistency, quality, and objectivity.
  • Reviewers’ inclusion of verbal summary presentations, or debriefings, of their case ratings to the review team at the end of a case review to help determine and promote consistency in ratings.

One of the most effective ways to ensure inter-rater reliability and consistency is to have dedicated staff who are completely focused on QA and CQI activities. Dedicated staff persons who repeatedly perform case reviews will likely be able to develop and maintain a high level of expertise and accuracy in their case reviews. Another way to promote inter-rater reliability is to ensure that reviewers are re-trained as needed to assess instrument items and instrument completion. Additionally, CQI oversight staff should provide observation and feedback of reviewer processes, such as interviews of case participants, to help ensure quality and consistency. Monitoring of reviewer processes and results, in terms of reliability and consistency, should be ongoing.
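
One simple way to quantify inter-rater reliability is to have two reviewers rate the same instrument items and compare their ratings, reporting both percent agreement and an agreement statistic corrected for chance (Cohen’s kappa). The sketch below is illustrative only; the ratings are invented, and a State may choose different statistics or procedures for monitoring reliability.

```python
# Illustrative sketch only: the ratings below are invented for demonstration.
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    """Share of items on which the two reviewers gave the same rating."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Agreement corrected for chance, for two reviewers rating the same items."""
    n = len(ratings_a)
    observed = percent_agreement(ratings_a, ratings_b)
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Expected agreement if both reviewers rated independently at their own base rates
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(ratings_a) | set(ratings_b))
    return (observed - expected) / (1 - expected)


# Two reviewers rating the same ten items as Strength (S) or
# Area Needing Improvement (ANI)
reviewer_1 = ["S", "S", "ANI", "S", "ANI", "S", "S", "ANI", "S", "S"]
reviewer_2 = ["S", "S", "ANI", "S", "S",   "S", "S", "ANI", "S", "ANI"]
print(f"Percent agreement: {percent_agreement(reviewer_1, reviewer_2):.0%}")
print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```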

Analysis and Dissemination of Quality Data

States have long collected child welfare data from a variety of sources. Of course, the ability to regularly track, categorize, and analyze data varies from agency to agency, particularly as it relates to obtaining information about safety, permanency, and well-being for children. In recent years, though, the advantages of using data have become more apparent to those who use data extensively in managing their organizations. Both quantitative and qualitative data provide evidence to help take the emotion and guesswork out of decisions that can be difficult. Data from multiple sources can help an agency define its current status versus its desired status; identify its strengths, needs, and trends; and set strategic priorities for reaching desired goals and improving outcomes.

The process of turning data into meaningful information that can be used to make decisions is data analysis, sometimes called analytics. Analytics has become a critical component of managing performance, which normally involves setting goals, monitoring progress toward meeting the goals through use of specific measures, and making necessary adjustments along the way to improve performance. Developing an “analytics mindset” is a process that evolves over time as staff become more accustomed to managing by data. Increasing staff and stakeholder access to data is a crucial element of this mindset.

Data Analysis

Generally, data analysis is defined as the compilation, evaluation, and presentation of data to highlight useful information or suggest answers or conclusions to questions or issues that have been raised. It is essential that those who initially interpret and compile data have the requisite skills. For example, agencies may want to ensure that those analyzing data have been trained in data analysis, have a thorough understanding of the structure of the State’s child welfare system, and have knowledge of case practice. It is also helpful if analysts have knowledge of and experience with the Children’s Bureau national databases. Additionally, analysts should have the ability to interpret and transfer data results into user-friendly formats that can be understood and used by other staff as well as consumers and external stakeholders.

Some States do not have sufficient qualified staff to analyze data and have created partnerships or alliances with other entities for this purpose. For example, some agencies have developed creative partnerships with one or a consortium of universities to analyze agency data, identify key needs, and develop outcomes and performance measures. When considering these alliances, it is important for agencies to ensure that partners who lend their expertise in data analysis have a working knowledge of agency functioning, including services and goals.

A systems approach in data analysis is important in providing an understanding of how different elements in a system interact with each other. Many things are interrelated, so the whole always has to be considered. If something changes in one area, it is necessary to see what is affected in other areas that may be related. Changes in one outcome can affect other outcomes. For example, as an agency’s time to reunification decreases, the rate of re-entry into care may increase. Close examination and analysis of possible relatedness between the two activities should take place, which may lead to adjustments in practice.

Staff and Stakeholder Access to Data

In order to encourage community support and involvement of key partners like courts and law enforcement, as well as to promote understanding of agency decisions by all community partners, it is crucial that external stakeholders be “at the table” in providing feedback on data collection and data analysis as well as in receiving data, both local and statewide, about agency performance. Fully involving stakeholders in understanding where the agency is, where it wants to go and how it hopes to get there, and eliciting stakeholder input into agency planning, will:

  1. Promote ownership of the agency and its services
  2. Foster support for the work and its difficulties
  3. Ensure multiple perspectives that will enrich the continuous quality improvement (CQI) process overall

Aggregated data are data compiled from different measures so that figures relate to broad classes, groups, or categories. An agency should provide its local and statewide aggregated data to all levels of staff for their consideration in determining and assessing strengths, trends, and needs. Along with aggregated data, many agencies provide more specific non-aggregated data for staff, including at the caseworker and unit levels, which promotes ownership, illustrates transparency, and fosters the concept of “managing by data.” Additionally, having the data helps frontline staff assume responsibility for the outcomes on their caseloads by providing tangible evidence that their actions do matter.
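
As an illustration of how the same measure can be presented at several levels, the sketch below (using the pandas library) rolls a hypothetical worker-level visitation measure up to unit, regional, and statewide aggregates. The column names and figures are invented for demonstration.

```python
# Illustrative sketch only: column names and figures are invented.
import pandas as pd

case_level = pd.DataFrame({
    "region": ["North", "North", "North", "South", "South"],
    "unit":   ["N-1",   "N-1",   "N-2",   "S-1",   "S-1"],
    "worker": ["Smith", "Jones", "Lee",   "Garcia", "Chen"],
    # share of required monthly visits completed
    "monthly_visits_met": [0.92, 0.78, 0.85, 0.96, 0.88],
})

# The worker-level table is the non-aggregated view for supervisors;
# aggregated views roll the same measure up for managers.
unit_view = case_level.groupby(["region", "unit"], as_index=False)["monthly_visits_met"].mean()
region_view = case_level.groupby("region", as_index=False)["monthly_visits_met"].mean()
statewide = case_level["monthly_visits_met"].mean()

print(unit_view)
print(region_view)
print(f"Statewide: {statewide:.0%}")
```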

Feedback to Stakeholders and Decision-Makers and Adjustment of Programs and Processes

Essential to a well-functioning continuous quality improvement (CQI) system is building productive CQI teams and ensuring that information generated through the system will be effectively used to make needed improvements. A productive CQI system requires a mechanism that promotes circular feedback and communication among staff, stakeholders, and teams. These feedback loops permit an ongoing, bi-directional information exchange across all levels of the agency, which in turn facilitates the change process. Equally important is sharing data both with agency staff and with consumers and external stakeholders.

By sharing data and information and then using staff, stakeholder, and consumer feedback as a starting point, the agency can create a dialogue about improvements it should make in policies, practices, systems, planning, services, and in its CQI program as a whole. Frontline staff, particularly workers and supervisors, will show an increased understanding of how their day-to-day actions, as revealed by data, impact short- and long-term outcomes for children and families, and how their practices can be enhanced as a result. Thus, staff, youth, families, and external stakeholders should receive information and actively participate in analyzing and interpreting data, connecting data to practice, and identifying trends and key findings.

Through this process of data-based decision-making, the CQI process as a whole is subject to continued examination and evaluation and can be adjusted as needed to better meet agency needs. This ongoing adjustment is one of the key factors in an agency maintaining the momentum of effective systemic change.

CQI Teams

A key component of setting up an agency-wide continuous quality improvement (CQI) process that enables the systematic gathering of information for analysis is building a statewide CQI implementation team composed of experienced evaluators, external stakeholders, and committed staff from different levels of the agency with varying skills. This team should be tasked with leading the overall CQI initiative. The agency may also choose to form regional teams to augment the work of the statewide team and to closely monitor regional initiatives. If so, these regional teams should be composed of all levels of staff, as well as external stakeholders and agency consumers.

The external stakeholders who serve on these implementation teams may be selected based on general knowledge, skills, interest, and involvement in the agency. They may also have special areas of expertise that would be particularly helpful in implementing systems change. For instance, given a State’s array of service and practice needs, agency leadership may want to include workers from mental health agencies, both public and private, as they may be particularly helpful in bringing knowledge and resources to strengthen the State’s counseling and treatment services. Or leadership may want to appoint representatives of public health and the school systems so they can assist with ways to improve medical, dental, and educational services for agency consumers. Depending on the particular needs of the State, stakeholders with other special areas of expertise may help round out the team. Special areas of expertise may include:

  • Grant proposal writing
  • Data analysis
  • Media
  • Planning and administration
  • Criminal justice
  • Evaluation
  • Research

It is essential that external stakeholders be committed to systems change and able to give of their time and efforts to be deeply engaged in the process. Rather than being seen as “window dressing” for the systems change process, they should be viewed as true partners, willing to contribute their time, ideas, and energy to ensuring better outcomes for families and children. Likewise, they should be valued for the skills and experience they bring to the team and for their ongoing contributions in developing the change vision.

In order to foster a cooperative spirit, staff members’ contributions should be based on their knowledge, skills, and experience rather than their positions in the agency. Staff must feel comfortable speaking truthfully, particularly about barriers they see regarding CQI implementation, and have confidence that their contributions will be taken seriously. CQI requires that all employees, units, and departments share a unified purpose, direction, and commitment. De-emphasizing hierarchy on teams and encouraging cross-sector alliances will enhance productive team functioning.

Employing team structure is at the core of CQI, and open, transparent communication between team members, among various teams, and between teams, leadership, and agency staff is critical to effective functioning of the CQI program. There should be constant, bi-directional, formalized “feedback loops” for communication among staff, stakeholders, and teams. In these loops, data/information is imparted, assessment takes place, and information is fed back into planning, implementing, and overseeing the State’s CQI program.

Agency leaders are responsible for ensuring that CQI teams meet regularly and that they consistently push forward to complete planning and activities. If group enthusiasm and commitment begin to flag, one option to get teams back on track is to bring in trained facilitators to lead team meetings for a time. 

Communication Among Staff, Stakeholders, and Teams

Because continuous quality improvement (CQI) teams are populated by staff from all levels and divisions of the agency, administration should ensure that there are strong, formalized communication loops (“feedback loops”) among the various groups. There should be ongoing bi-directional communication between the administration and policy division, the training unit, the data unit, program staff, information technology staff, and legal staff. All agency divisions should routinely interact to ensure that both field and support staff are actively engaged in assessing data, brainstorming solutions, exchanging ideas, recommending changes, monitoring implementation, and reviewing results of practices and CQI processes. Ongoing interaction and communication among these agency units and levels are essential so that everyone understands the change initiative(s) and joins in a unified effort toward the same end.

It is especially critical that the policy division and training unit be strongly connected. For example, if the agency implements more thorough measures to screen infants for risk in investigations but the training unit fails to incorporate the policy into its new worker curriculum, poor implementation could significantly compromise the effectiveness of the change.

States should have firm processes for sharing, analyzing, and using data and information with staff, consumers, and external stakeholders (courts, Tribes, service providers, and others), to guide collaborations and inform the Child and Family Services Plan and other planning efforts. Leaders, staff, and external stakeholders should consider trends, comparisons, and other data findings, to drive both incremental and larger-scale change when improvements are needed in specific practices or in the CQI program as a whole.

Sharing Data with Agency Staff

Management may be unsure about how to determine which data are most relevant to the field. Ideally, the primary data going to workers and supervisors should cover the key areas they are trying to improve. For example, information on timely initial contacts and timely completion of investigations should be prioritized for investigative staff. If more foster homes are needed, data and information on recruitment and retention of foster parents should be prioritized for placement and foster home finder staff. The data should be focused and updated regularly.

Data importance will shift to coincide with changing priorities and change initiatives. For example, as goals for the frequency of visits between worker and child are met, that data should be accorded a lower priority level. On the other hand, if frequency of visits is currently met but there are fears that it may fall again, that data may continue to bear close watching.

Many agencies have developed creative ways to share data. Several agencies use data “dashboards,” presented in Web applications or portals on staff computers, which provide various types of data to staff at all levels across the State. The dashboards display the most relevant data in a clear, user-friendly format, often with charts and graphs. In some instances, data access is interactive so that high-level data can be broken down into more specific units. Some agencies even break down the information to the unit, worker, and client level. The design of the dashboards helps bring the data to life for staff, highlights the importance of accurate data entry, and emphasizes the direct connection between staff casework and consumer outcomes.

[Image: Dashboard screenshot]

Some States display key data on their State agency Web sites, so it is accessible to both employees and those outside the agency. These data are updated on at least a quarterly basis. Other agencies provide information to staff through “tips” documents intended to help in their day-to-day work; topics may be based on quality assurance (QA) case record review results or other data. If, for example, a State is having a particular problem engaging absent parents, the tip sheet would contain useful guidance on how to promote engagement.

Note: To view an example of a “Tips” document developed by the State of Minnesota to help improve visits with siblings in foster care, visit: http://www.dhs.state.mn.us/main/groups/county_access/documents/pub/dhs_id_028202.pdf

Sharing Data with Consumers and External Stakeholders

Agency leaders should institute strong bi-directional communication among not only all levels of staff, but stakeholders and consumers as well. Youth, families, and stakeholders should be an integral part of continuous quality improvement (CQI) activities. Leaders should regularly and directly share with these groups the results of data analyses as well as successes, concerns, and contemplated changes. They should also actively solicit feedback from stakeholders and families about their experiences with the agency, what is working and what is not, and how they perceive and recommend that practice be changed.

This ongoing, bi-directional communication is known as a feedback loop. To cite an example of this loop, consider a large-scale, significant issue such as insufficient services statewide for emancipating youth. As a starting point, agency leadership might solicit feedback from youth groups/alliances and youth workers, with those groups framing the issues and preparing data showing concerns and trends. Then, management might charge regional leaders with having regional/area forums with staff, external stakeholders, families, and youth to assess and plan. At these forums, mixed-population groups would be encouraged to share experiences, further identify sub-issues, brainstorm, exchange ideas, and provide group recommendations.

The recommendations from each region would be collected by agency leadership, and, based on those recommendations, a tentative plan, approved by a group composed of staff, youth, and stakeholder representatives from each region, would be formulated for moving forward. The plan would be sent to all participants as quickly as possible, with a request for any further feedback or recommendations. Additional benefits of the forums might include the forming of new, productive alliances of action-oriented individuals who would help ensure that the forums’ recommendations become reality.

In any case, staff, children, youth, families, and external stakeholders should actively and continuously provide advice and feedback throughout the agency. Those involved in working teams, regardless of their roles, should take ownership of issues and be deeply engaged in the change efforts. Their observations, analysis, and recommendations should help inform the State’s strategic priorities and initiatives. There should be constant cross-fertilization of ideas and communication among various individuals and groups, with vision shaped by the varying areas of expertise and perspective each brings, and all should have complete access to processes and information.

Data-Based Decision-Making

The concept of data-based decision-making underpins the entire process of a continuous quality improvement (CQI) system. Statistical data, if accurate, are a reliable basis for important practice and staffing decisions. If decisions and changes are based on solid data rather than on observations, anecdotal evidence, or conventional wisdom, and if practice changes are sound, the chances are greatly increased that anticipated outcomes will be achieved.

The use of analytics, or the process of examining raw data from multiple sources to determine patterns, trends, and other useful information, has become an essential component of good management and sound decision-making in today’s organizations. Most agencies now recognize the need to manage by data so they will know how they should improve, but some are unclear on how to begin. The first step is to assemble a CQI team that includes staff with analytical skills, people familiar with the work being done, and subject matter experts. This team should be tasked with identifying the questions that need to be asked, such as:

  • “What are we trying to achieve, how do we measure the results, and are agency goals and objectives clearly tied to the desired results?”
  • “How do our processes and activities relate to our goals and objectives?”
  • “What is working well and what is not?”
  • “What will success look like, and how will we know we’ve achieved it?”

With these questions as a starting point, the team should brainstorm to determine what an improved result or outcome might be, then identify roles and responsibilities, barriers to be addressed, and resources that can be leveraged. An action plan should be developed with small, incremental steps identified to test the process and further refine requirements for data analysis.

The value of data’s use in decision-making and achieving improved outcomes should be constantly shared with staff and external stakeholders in an open, transparent way. Management should lead by example, setting the vision and clearly communicating to those involved how data have informed decisions. Soliciting staff and stakeholder input into development of measures, particularly, will boost their enthusiasm for, and understanding of, the process.

Data should be analyzed on an ongoing basis to determine what is not working, and that analysis should then be used to inform further data-based decision-making to drive changes in policy, practice, support services, training, consumer services, and the CQI program as a whole. As a practice example, if data reveal that worker/child visits are meeting the agency's quantitative frequency element but review information shows that the visits are brief and superficial, then the agency understands that it has a significant missing qualitative element. The reviews may reveal, for instance, that workers are not assessing foster parent needs during the visits. As a result, the agency will work to improve the practice, using input from foster parents, staff, and others.

Data-based decision-making is a repetitive process that, depending on the outcome, may involve re-examining decisions that have already been made and making ongoing adjustments to the course(s) of action. This may require back-tracking to an earlier activity. In an environment of continual learning and striving for improvement, data should be used at every level of both staff and agency decision-making—from workers, supervisors, and regional managers to central office managers—to assess policy and case practice, determine how the CQI system is functioning, identify shortfalls and trends, pinpoint what is working well, decide on corrective actions, and anticipate and plan for various issues that may impact staff, consumers, and the community. A well-functioning CQI system enables the implementation of changes at any level, from the worker to the entire agency, in a continuous cycle of learning, adjusting, and improving. 

Workers, Supervisors, and Regional Managers

Staff at all levels should be encouraged to use data to constantly examine their own practice and the practice of those whom they supervise. For example, investigative workers, in assessing safety of children, can and do use basic data in a substantive way to inform their case decisions. First, they look at data/information on past referrals to ascertain the family’s history with the agency, then use that information to help determine threat of harm to the child and parental protective capacities so that better decisions can be made about the next course of action. Workers also use basic data to monitor and improve their performance, such as their timeliness in initiating and completing investigations, the frequency with which they see children and parents, the number of placement changes of children on their caseload, etc.

Supervisors should examine the same types of data as their workers to ensure that practice standards and outcomes are being met, and to identify areas, such as placement instability and lack of timeliness in initiating investigations, where the unit needs improvement. They will also use the data to identify individual worker strengths and challenges in different areas, so that challenges can be addressed and strengths can be recognized, analyzed, and used to help others.

A county or regional manager might look at the same practice issues as the supervisors, but would examine data from several units to determine which were functioning best regarding specific issues, how the county or regional outcomes compare with other regions or State or national standards, and how the well-functioning units could be used to help improve practice of the others. Also, the manager might consider issues in light of an area’s context, like the number of substantiated investigations or number of children coming into care, an unexpected increase in community unemployment, a rise in local drug use, or the sudden closure of a community resource. With this type of analysis, the manager would be better able to predict and plan for what may be on the horizon in terms of investigations, support needed by families, and numbers of children who might be entering care.

Central Office Managers

At the State or central office level, managers should look at the same data examined by county or regional managers in terms of practice and areas needing improvement, and should compare areas and the State as a whole with the national standards. They should also make comparisons between counties and regions regarding practice and outcomes. If, for example, one or more areas use family team meetings and others do not, management might look at ways the meetings impact caseloads, such as the time it takes to hold the meetings or the effect they have on outcomes such as children’s length of care, and use that data to determine whether and how to roll out the practice to other areas.

In this example, if management determines that family team meetings are having positive impacts but need strengthening before expansion, then training for family team facilitators might need to be more comprehensive, and stronger policies might be needed regarding the importance of relative involvement. If family engagement as a whole is an issue system-wide, then strategies to improve case planning, parental involvement, worker/parent visits, extended family connections, and so forth would need to be in place before family team meetings are expanded further.

Managers should constantly use data to analyze outcomes and then set goals for what those outcomes should be. If, for instance, managers discover that an unacceptably high percentage of children in care are having no visits with their fathers, they may decide that a closer look is needed at worker/father visits, father assessments, and services to fathers. They may also look harder at how the agency involves fathers with their children outside of visitation so that realistic goals can be set and strategies to reach those goals can be put into place.

Finally, State and central office managers should also consider how activities in one program might affect another. If, for example, there were high numbers of children coming into care, that data could be used to help examine staffing issues and how cases might be effectively handled later on by permanency and adoption workers.

Maintaining the Momentum

If management senses that the continuous quality improvement (CQI) process is weakening, it should try to pinpoint the areas that are waning. Administrators should ask themselves probing questions:

  • In which areas is the agency strong?
  • What are the assets that are supporting and reinforcing quality improvement efforts?
  • In which areas is the agency failing to sustain the quality improvement philosophy and practice?
  • Are measurement and feedback systems still clear and effective?
  • Have we truly applied lessons learned from CQI in particular areas to the rest of the agency?

Sustaining change momentum may require adjusting the CQI vision, plan, and strategies in response to the new ideas and answers offered by staff, stakeholders, and families. Leaders should adjust the change vision and strategy to reflect new learning and insights. Management should continually challenge employees to be open in their communication and, whenever feasible, integrate dissident perspectives into the vision. Leaders should recognize and celebrate accomplishments, identify consequences of not implementing a change, and provide a solid reason and motivation to act.

Staff, stakeholder, and consumer understanding of and commitment to CQI will deepen as they observe that the agency's leadership has truly embraced CQI as the process by which it makes decisions and evaluates progress. Enormous effort is involved in initiating and maintaining a CQI program, but the ongoing rewards of a successful program are even more immense.

CQI in County-Administered or Privatized States

In county-administered States, where the various counties operate with some degree of autonomy, as well as in States that have largely privatized their child welfare services, the State should maintain consistent statewide expectations regarding continuous quality improvement (CQI). The State agency should establish basic requirements that are uniform for the entire State, while considering negotiation with counties or private agencies on issues that may allow some flexibility (such as less frequent reviews for areas that consistently perform well).

The CQI focus will be on ensuring that the CQI process is comprehensive, that it contains a robust case review process, that collected data are uniform statewide, and that there is consistency throughout the State in CQI processes and casework standards and practices. In the case review process, CQI staff, whether centrally supervised or supervised from their counties/regions, will assess practice more objectively and consistently in their review of cases if they are dedicated to CQI activities. Having dedicated CQI staff who work closely with the State’s CQI administrative oversight division will also help ensure that case record reviews are conducted with necessary frequency, meet quality standards, and adhere to other policy requirements.

If the counties in county-administered States have their own CQI staff, then the State’s CQI oversight staff could either pair with the county staff for case record reviews or have close involvement in planning and overseeing the reviews in a county or region to help ensure a quality process. If that is not possible, then county reviewers could cross-review with other counties and the State’s CQI oversight division could take steps to ensure consistency and quality, such as the provision of QA for case review instruments. Likewise, in the case of private agency contractors who have their own CQI staff, the State’s CQI oversight staff might pair with the reviewers, at least on initial reviews, and/or take steps to ensure accuracy, quality, and objectivity such as providing QA review for case review instruments completed by contractors.

Oversight of county and private agency QA case reviews by the State’s oversight division should be strong enough to ensure that review-related procedures are being followed, including that:

  • County, regional, or contractor reviewers are qualified and appropriately trained
  • The necessary degree of objectivity exists
  • Inter-rater reliability is assured
  • Reviews are being completed on schedule
  • A back-up plan is in place when conflicts of interest are identified with a reviewer, private agency, or county
  • The case sample is randomly identified and the size is correct
  • Required review results are drawn up and distributed in a timely way

If CQI staff are county-based and -supervised, they should receive uniform training (and re-training as needed) coordinated by the CQI oversight division, as should private agency staff. 

It is critical to ensure that private agency contractors have a thorough understanding of the State’s CQI efforts and requirements and that they be held to the same standards as agency staff.

CQI expectations, processes, and outcomes should be clearly articulated in private agency contracts, and should be closely overseen. Contractor performance, on an ongoing basis, should be monitored by the State agency’s program staff or contract managers or both, as should action plans for contractors to correct any deficiencies. Diligent monitoring and oversight of contractor activities will help ensure quality and integrity in their CQI programs. Enthusiasm for and commitment to the CQI process should be just as high in the private agencies as it is in the State agency.

Implementing and Sustaining Systems Change

Once a well-functioning continuous quality improvement (CQI) system is in place, it will provide information and data that can be used to identify the State child welfare system’s strengths and weaknesses. This information is the foundation for a strong strategic planning process. Child and Family Services Plans are developed through the State’s strategic planning process, and the elements of the Plan should be informed by data from the CQI system and feedback from staff, consumers, and external stakeholders.

The process for effective systems change has been the subject of much research and scholarly writing in recent years. During Rounds 1 and 2 of the Child and Family Services Reviews (CFSRs), many States adopted ambitious Program Improvement Plans, yet in spite of commendable efforts, the intended improvements were not always realized. The research on effective systems change is instructive as we look to more effective systemic improvement in child welfare.

Achieving positive change requires a thorough assessment of the agency’s strengths and weaknesses based on comprehensive data. From this analysis, goals are identified and appropriate interventions selected. The interventions must then be fully implemented. A strong intervention that is inadequately implemented will not have the intended result.

A relatively new field of research, implementation science, has evolved around the study of the process of implementing new or improved practice innovations or programs. A specific knowledge base has emerged, articulated in various models, that applies broadly to many industries and settings. The models identify and describe proven, research-based steps for properly implementing new programs and systemic changes. Agencies can implement new initiatives in more sound ways by drawing on this rich body of implementation research in order to develop an Implementation Framework. A well-developed framework, in turn, can be a critical element in sustaining system change.

Implementation Framework

The Children’s Bureau has developed a three-phase model for system improvement in consultation with the National Implementation Research Network (NIRN). The three phases are:

  • Foundational Phase:  During this phase, an implementation team is formed, data are analyzed, and decisions are made about what goals and initiatives will be the focus of the systems change.
  • Planning Phase:  This is the phase in which implementation is planned, needed infrastructure is developed, and monitoring and feedback loops are designed.
  • Action Phase:  During this phase, the plans are executed and the implementation team is engaged in continuous monitoring and improvement of the change effort.

For more information on NIRN, visit http://nirn.fpg.unc.edu/.

It is important to remember that the major activities of practice and system improvement are ongoing and cyclical. One activity shapes and leads to the next, although steps are not always sequential; sometimes agencies must “backtrack” to a previous activity to re-evaluate, adjust, and even complete (or partially complete) the activity again.

Note that the implementation steps discussed in the links below relate to new practice innovations, although the concepts of implementation research have comparable application to the implementation of systemic improvements to programs and infrastructure, such as continuous quality improvement.

Note also that there are a number of models for implementing effective systems change. The Systems Change portion of the Additional Resources section contains several resources for further reading.

Foundational Phase

The foundational phase of an implementation framework includes the following elements:

  • Implementation Team
  • Assessment
  • Goal-Setting
  • Selection of Strategies
  • Readiness

Details on each element are provided below.

Implementation Team: The strategic planning process begins with the establishment of an implementation team, similar to the CQI teams discussed elsewhere in this module. The implementation team should have the authority, skills, and support to lead the change effort. The team should include key staff, courts, Tribes, and external stakeholders. The team is empowered by leadership to guide the change process from the beginning through full implementation.

The implementation team enlists support for the strategic improvement process from internal and external stakeholders and develops a communication plan for bi-directional communication with stakeholders throughout the process. Communication and collaboration with stakeholders should begin with the development of the agency’s vision and mission, and include review of the data and assessment of agency strengths and concerns, selection of priority areas for the change effort, identification of strategies, and assessment and adjustment of strategies throughout the implementation period.

Assessment: The purpose of this step is to use data to get a precise picture of strengths, needs, and challenges. Change efforts that begin with a well-completed, thorough assessment phase have been shown to have a much higher probability of success. Data from multiple sources should be used to assess need, identify cross-cutting issues, and determine goals for the State. By “drilling down” in the data, or examining data at deeper and deeper levels to identify patterns, needs become clearer and issues and target populations can be identified.

For example, in a situation where a State’s close examination of placement data reveals that older foster youth have more placements than younger children, the agency should ask “Why?” and proceed from there to answer questions such as the following:

  • What placement types show the most/least transition?
  • Are there enough and the right type of placement resources?
  • At what ages did older children come into care?

Often, the answer to one question will lead to another question. Exploration of each question leads to a more complete understanding of the issue and possible solutions.
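
The sketch below illustrates the idea of drilling down with a small, invented placement data set: a broad finding (older youth experience more placement moves) is examined first by age band and then by placement type within the oldest group. The column names, age bands, and figures are hypothetical assumptions for demonstration.

```python
# Illustrative sketch only: the data, age bands, and column names are invented.
import pandas as pd

placements = pd.DataFrame({
    "age_at_entry":    [3, 8, 12, 15, 16, 17, 14, 6, 16, 15],
    "placement_moves": [1, 2, 3, 5, 6, 4, 5, 1, 7, 4],
    "placement_type":  ["foster", "foster", "group", "group", "group",
                        "foster", "group", "kinship", "group", "foster"],
})

# Level 1: do older youth really experience more placement moves?
placements["age_band"] = pd.cut(placements["age_at_entry"],
                                bins=[0, 5, 12, 18], labels=["0-5", "6-12", "13-18"])
print(placements.groupby("age_band", observed=True)["placement_moves"].mean())

# Level 2: within the oldest group, which placement types show the most moves?
older = placements[placements["age_band"] == "13-18"]
print(older.groupby("placement_type")["placement_moves"].mean()
           .sort_values(ascending=False))
```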

Goal-setting: After more questions are asked and answered, the next steps should involve pinpointing needs, identifying goals and desired outcomes to know what success will look like, and defining measures of success. In the example above, as more information becomes apparent through deeper data analysis, the agency might develop a multi-faceted initiative with the goal of increasing placement stability for older youth in care.

Goals, both long-term and short-term, provide motivation and direction for agencies to be more focused and productive. States should use data on an ongoing basis to pinpoint areas of concern or poor outcomes for families and children, analyze why these poor outcomes may be occurring, and then set goals for what the outcome or outcomes should be. States can begin by asking the following general questions:

  • What changes will most improve outcomes for children and families?
  • What do we, as an agency, need to accomplish?
  • What are the most critical things to change initially?

An agency should address its highest priority first, selecting the goal that will have the biggest impact on enhancing outcomes within that priority area. Then, when initiatives are identified or in place to achieve that goal, it should move on to its next prioritized goal. For instance, an agency may have a need to reunify more children in its care within 12 months, as well as a need to increase safety of children in care. While both goals are important, the agency’s priority goal would likely be to improve safety of children in care. At some point after work has begun toward meeting the safety goal, the agency can begin working to enhance reunifications.

Goals selected should be measurable, attainable, and relevant. Furthermore, they should correlate with and more clearly define the agency’s mission, values, and vision, where the agency is headed, and how it will get there.

Selection of Strategies: Once it has established the goals that it wants to address, the agency and its implementation team must begin the process of strategically identifying and planning those initiatives or innovations required to accomplish these goals. As a general policy, no more than three major innovations should be implemented with overlapping activities during the same time period. By the time the priority innovation begins full implementation, the agency may be in the assessment or planning stage of its next initiative. If an agency attempts to drive too many changes simultaneously, field staff may become overwhelmed, experiencing increased stress, competing priorities, and more difficulty coping with change. From a list of many possible initiatives, leaders should choose those that will have the greatest impact and improve multiple child and family outcomes.

For example, an agency may choose to implement a trauma-informed system of care. If successfully implemented, those practice changes should result in progress toward meeting multiple goals of children in care, such as fewer moves, enhanced connections, greater educational success, and diminished behavioral or mental health issues. In selecting specific innovations, research should be done on current evidence-based or evidence-informed practice, or practice and services where effectiveness, in terms of meeting outcomes, has been demonstrated.

Note that the California Evidence-Based Clearinghouse for Child Welfare (http://www.cebc4cw.org/) maintains a list and descriptions of evidence-based and evidence-informed child welfare practices to help inform and guide agencies in their selection of practice interventions.

Once potential innovations have been identified, States should determine which are a good fit for the agency, the communities and areas being considered, the State as a whole, and the values of culturally diverse groups. Agencies should ask themselves practical questions, such as:

  • How will the innovations fit with other initiatives already underway?
  • Can the State afford to implement and sustain this initiative?
  • Are training, technical assistance, and coaching available to support implementation of the innovation?
  • Does the State have the infrastructure to support the proposed innovation?

An example of an ill-fitting initiative might be a largely rural State that wanted to implement an evidence-based home visitation program using nurses despite a chronic shortage of nurses in many parts of the State.

Readiness: The purpose of this step is to identify the technical assistance required to implement each strategy and to ensure before beginning that there is internal and external support for this effort. 

Once a strategy has been selected, the next step, which is critically important, is assessing readiness for implementation and gaining buy-in. The implementation teams and agency leadership should ensure that there is support among stakeholders about the issues, concerns, and needs. The implementation team and leadership should share data with stakeholders, get their input, and use that input in planning. Stakeholder feedback, as well as adjustments based on that feedback, should be continual throughout the process. If internal and external stakeholders do not support and embrace the identified issue, they will not be an effective part of the solution. Carefully and thoroughly determining readiness for a project prior to implementation dramatically increases the chances of its success.

Another critical step will be obtaining training and technical assistance. The State will want to ask practical questions, including:

  • Do we have the resources to access training, coaching, and technical assistance, initially and on an ongoing basis?
  • What kinds of training and technical assistance are needed and who are the potential experts in this innovation?
  • Who will assist in evaluating the implementation and outcomes of the innovation?

Planning Phase

Before implementation of a new initiative can begin, agency leadership and the implementation team must collaborate to develop an implementation plan. This plan will serve as a roadmap to guide the project. Decision-makers must decide on, and plan for, the specifics of how the new initiative will unfold, including:

  1. A rollout plan, or the scope of activities
  2. How rollout activities will be sequenced
  3. Whether specific areas (e.g., transformation zones or innovation counties) will be designated for rollout
  4. How areas will be selected
  5. Projected date for full implementation
  6. Timelines for the various activities

The implementation plan will be a living document that reflects necessary and perhaps unanticipated changes and adaptations made as activities progress, and as continuous quality improvement (CQI) teams learn by doing and feed information back to the planners. The plan will ultimately reflect the details of the project’s implementation and ongoing adjustment process.

Other elements of the planning phase include supports for implementation and feedback loops for communication, monitoring, and improvement.

Supports for Implementation: The purpose of this step is to put in place the supports and infrastructure for the new initiatives and to develop methods for measuring progress.

Decision-makers must further think through other organizational supports, including staffing, stakeholder and community support, training, space, equipment, funding, supervision, policies, and data processes. In determining staffing levels needed, staff expertise required, and other issues, leadership should ask questions such as:

  • What qualifications should be required of staff?
  • What level of staffing will be needed, and are resources available to support increased levels?
  • What coaching activities will be needed, and for how long and by whom?
  • Is funding available to support all facets of the project initially and in the long term?
  • Is space available for staff and related activities, as well as technology, equipment, and supplies?
  • What policies need to be changed or developed to support the project?

For each strategy, appropriate supports must be put in place to assure successful implementation of the initiative.

Feedback Loops for Communication, Monitoring and Improvement: The purpose of this step is to update communication protocols, develop progress measures, and establish feedback loops that will provide information on whether the intervention is operating as intended and having the desired impact on outcomes.

The implementation team should review and update the communication plan to assure that effective processes are in place to communicate findings and progress and to obtain ongoing feedback from agency staff, consumers, and external stakeholders. The plan should include a process for reporting any barriers to implementation and for describing how those barriers will be addressed.

A key part of the implementation plan at this stage is determining the quantitative and qualitative data needed for assessment and evaluation of project implementation and effectiveness, as well as clear measures of the progress of the initiative. Measurement of implementation and effectiveness includes both process measures (e.g., Did training occur as planned? Is coaching ongoing?) and outcome measures (e.g., Is placement stability improving for the target population in the innovation county?).
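
To illustrate how such measures might be calculated, the brief sketch below works through one hypothetical process measure (training completion) and one hypothetical outcome measure (placement stability in an innovation county). The case records, field names, and the definition of a "stable" placement history are assumptions made for illustration only and do not represent any particular State's data system.

```python
# Illustrative sketch: one process measure and one outcome measure computed
# from hypothetical case-level records. All field names, values, and the
# 12-month stability definition are assumptions for demonstration only.

# Process measure: did training occur as planned?
staff_training = [
    {"worker": "A", "trained": True},
    {"worker": "B", "trained": True},
    {"worker": "C", "trained": False},
]

# Outcome measure: placement stability in the innovation county, defined
# here (hypothetically) as two or fewer placements in the past 12 months.
cases = [
    {"case_id": 1, "county": "Innovation", "placements_last_12_months": 1},
    {"case_id": 2, "county": "Innovation", "placements_last_12_months": 3},
    {"case_id": 3, "county": "Comparison", "placements_last_12_months": 2},
    {"case_id": 4, "county": "Innovation", "placements_last_12_months": 2},
]

def training_completion_rate(records):
    """Share of staff who completed the planned training (process measure)."""
    return sum(r["trained"] for r in records) / len(records)

def placement_stability_rate(records, county, max_placements=2):
    """Share of the target population with a stable placement history (outcome measure)."""
    target = [r for r in records if r["county"] == county]
    stable = [r for r in target if r["placements_last_12_months"] <= max_placements]
    return len(stable) / len(target) if target else None

print(f"Training completion: {training_completion_rate(staff_training):.0%}")
print(f"Placement stability (Innovation county): "
      f"{placement_stability_rate(cases, 'Innovation'):.0%}")
```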

It is essential that the project be implemented as intended, that is, with fidelity to the model. Assuring that changes are implemented properly can be just as important as determining whether the new initiatives are effective. If an agency veers from the intended project design in its implementation, then results and outcomes will be less predictable, may be inconsistent, and likely will not be sustainable.
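
As a simple illustration of how fidelity might be monitored, the sketch below scores individual case reviews against a hypothetical model checklist and flags cases that fall below an assumed threshold. The checklist items and the threshold are assumptions for demonstration only; an actual practice model would define its own fidelity criteria.

```python
# Illustrative sketch: a simple fidelity score based on a hypothetical model
# checklist. The checklist items and the 0.8 threshold are assumptions for
# demonstration; they are not a prescribed fidelity standard.

MODEL_CHECKLIST = [
    "initial_visit_within_5_days",
    "safety_plan_documented",
    "monthly_caregiver_contact",
    "supervisor_case_review",
]

case_reviews = {
    "case-101": {"initial_visit_within_5_days": True,
                 "safety_plan_documented": True,
                 "monthly_caregiver_contact": True,
                 "supervisor_case_review": True},
    "case-102": {"initial_visit_within_5_days": True,
                 "safety_plan_documented": False,
                 "monthly_caregiver_contact": False,
                 "supervisor_case_review": True},
}

def fidelity_score(review):
    """Fraction of checklist items met for a single case."""
    return sum(review.get(item, False) for item in MODEL_CHECKLIST) / len(MODEL_CHECKLIST)

THRESHOLD = 0.8  # assumed cutoff for "implemented as intended"
for case_id, review in case_reviews.items():
    score = fidelity_score(review)
    flag = "" if score >= THRESHOLD else "  <- review drift from the model"
    print(f"{case_id}: fidelity {score:.0%}{flag}")
```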

Action Phase

The action phase of an implementation framework includes the following elements:

  • Implement, Monitor, and Adjust Interventions
  • Improve and Adjust Interventions
  • Scale-up

Details on each are provided below:

Implement, Monitor, and Adjust Interventions: The purpose of this step is to fully execute plans, review the data on progress of implementation and impact of the interventions, and make adjustments to improve outcomes.

During this stage, the plans are fully executed, including new procedures, guidelines, and practices. The implementation team is gathering information on how implementation is progressing, and is asking important questions such as “Is the model being implemented as intended (with fidelity)?” and “Are additional supports, like training and technical assistance, needed?”

Improve and Adjust Interventions: The purpose of this step is to assess whether the intervention is effective and to make adjustments, as necessary.

Based on the initial effectiveness of the innovation and on staff and stakeholder feedback, the agency will make adjustments to improve the impact of the innovation, eliminate barriers, and increase fidelity. For instance, an adjustment might be needed when information reveals that caseworkers in a new initiative frequently must work in the evenings, but agency policies do not allow staff to work flexible hours. The State could implement flex-time policies for front-line staff before rolling the initiative out to other areas.

Data from tracking and monitoring the activities and results of the initiative should be reviewed on a regular basis. Ongoing assessment and analysis of findings will validate effective practice, identify trends and needs, and allow strategies to be developed to address any challenges. This ongoing, careful analysis will enable an agency to refine or adjust processes and practice, on a continual basis, in ways that enhance both implementation and effectiveness.
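
As an illustration of this kind of regular data review, the sketch below compares an invented monthly placement stability measure against a simple rolling baseline so a CQI team can see whether the trend is moving in the desired direction. The monthly values and the three-month window are assumptions for demonstration only.

```python
# Illustrative sketch: reviewing a monthly outcome measure against a simple
# rolling average to spot trends. The monthly values are invented for
# demonstration and do not represent any actual agency data.

monthly_stability_rate = {
    "2024-01": 0.71, "2024-02": 0.73, "2024-03": 0.70,
    "2024-04": 0.75, "2024-05": 0.78, "2024-06": 0.81,
}

def rolling_average(values, window=3):
    """Average of the most recent `window` values (or fewer, early in the series)."""
    recent = values[-window:]
    return sum(recent) / len(recent)

history = []
for month, rate in monthly_stability_rate.items():
    baseline = rolling_average(history) if history else rate
    direction = "up" if rate > baseline else "down" if rate < baseline else "flat"
    print(f"{month}: {rate:.0%} (vs. 3-month baseline {baseline:.0%}, trend {direction})")
    history.append(rate)
```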

Scale-up: The purpose of this step is to determine when an intervention is ready for expansion and to plan and implement this expansion with necessary supports in place. Leadership and the implementation team will decide, based on information and data gathered, when the intervention is ready for expansion, and how it should be expanded. A realistic process is needed that outlines the steps to ensure that sufficient capacity has been developed to support the intervention in each new site. Decision-makers should ask questions such as the following:

  • What should the pace of the expansion look like?
  • How will training and technical assistance be provided to each site?
  • Are systems and resources in place to support expansion to the next site?
  • Is communication in place to prepare sites for implementation, and are communication and peer support available between sites?

Sustaining System Change

Full implementation of a new initiative can take from 2 to 4 years. Ensuring sustainability is critical to the overall process. Planning for long-term sustainability must begin during the strategic planning stage and continue throughout the process. Agency leadership should ensure that funding streams remain available, that staff, external stakeholders, and consumers continue to be involved, that all sites maintain fidelity to the intervention's design, and that there is steady progress toward meeting goals.

Articulating the connections between new behaviors and improved outcomes can be a powerful tool in assuring staff and partners of progress. For example, if moves of children in care are diminishing, as anticipated, because of more frequent and higher quality worker/child/caregiver visits, support and enthusiasm among staff and partners involved in the innovation will be bolstered.

The innovation, in all its various stages, will need to be fully integrated into the State’s systems. This includes ongoing training, regulations, policies and procedures, and, most importantly, the agency-wide continuous quality improvement (CQI) system. The new practice should be incorporated into the case record review process and data system. Bi-directional stakeholder communication, or feedback loops, should then continue through the CQI process, as should ongoing analysis, assessments, and improvements. The emphasis of CQI on data, expedient diffusion of best practices, and ongoing, cyclical improvement can then continue to guide and strengthen the implementation of the agency’s various initiatives.

Additional Resources

The following resources provide additional information and guidance to child welfare practitioners, particularly at the management level, on a variety of aspects of continuous quality improvement (CQI). The resources are grouped under four broad headings:

  • CQI Concepts and Implementation
  • Data Quality, Decision-Making, and Processes
  • Leadership
  • Systems Change

CQI Concepts and Implementation

  • A Framework for Quality Assurance in Child Welfare, (2002), National Child Welfare Resource Center for Organizational Improvement, retrieved from LINK. This framework of quality assurance (QA) systems is based on examples from existing State systems, requirements from Federal legislation, child welfare research, and national QA standards. There is a discussion of the difference between QA and CQI, and details are provided of the five main steps of the QA framework, including State examples for each step.
  • Continuous Quality Improvement, Child Welfare Information Gateway, retrieved from LINK. This article provides an overview of CQI, including planning and implementation. Additionally, State examples are provided.
  • Continuous Quality Improvement in Title IV-B and IV-E Programs, (2012), Administration for Children and Families Information Memorandum 12-07, retrieved from LINK. This Information Memorandum (IM) provides information that State child welfare agencies can use to establish and maintain CQI systems. It also provides information on claiming allowable Federal financial participation costs for CQI systems.
  • Dedhia, N., (2008), Continuous Improvement Requires a Quality Culture, retrieved from LINK. The article describes in detail the culture needed in organizations to set the stage for implementing and sustaining continuous quality improvement.
  • Dever, A., Public Health Practice and Continuous Quality Improvement, Improving Outcomes in Public Health Practice: Strategy and Methods [chapter and book], information retrieved from LINK. This chapter defines CQI in health care settings and, in chart form, clearly delineates in detail the differences between quality control/assurance and CQI, and “conventional thinking” and “CQI thinking.”
  • Getting Ready for CQI: A Webinar for Child Welfare Agency Directors and Administrators, (2013), North Carolina Department of Health and Human Services, Division of Social Services, Child Protective Services, retrieved from LINK; accompanying handouts, including a pre-implementation data gathering tool, are retrieved from LINK. This webinar presents a panel discussion of the North Carolina agency’s efforts to implement its CQI program, focusing on the four key areas of readiness, including agency climate and engagement of partners.
  • Introduction to CQI History, (2004), Loyola Medicine and Illinois Department of Public Health, retrieved from LINK. This article provides a history of CQI, presents the “Plan, Do, Study, Act” methodology and the “14 points” for management, and discusses institutional barriers to implementation in the emergency medical services field.
  • Juran and Deming, Prism Consultancy, retrieved from LINK. The article discusses CQI in the context of the work of the early pioneers of the process, Dr. J.M. Juran and Dr. W. Edwards Deming, and compares and contrasts the work of the two. Many interesting concepts are discussed, including “Rules of the Road” for overcoming employee fear of change when establishing a CQI culture.
  • Kaizen, a Model for Continuous Improvement, Aberdeenshire Council, Northeast Scotland, paper presented at International Leading Practices Symposium, Queensland, Australia, May 2008, retrieved from LINK (note: to open this file, please paste the complete link into your browser window). This paper provides an overview of improvements made in the services of a Scottish [regional governing] Council, using the Kaizen [Japanese] model of CQI. It highlights the employee contributions and combined benefits of measurable performance improvements and culture change. The article charts the Council’s CQI planning and implementation activities from 2004 to 2008 and notes that the Council’s project earned a European Excellence award.
  • McKay, M., First CQI Projects in Family Support, Mount Sinai School of Medicine, New York, New York, retrieved from LINK. This PowerPoint presentation discusses principles involved in the “Plan, Do, Check, Act” steps of CQI as applied to support for youth and families in the psychiatric setting. CQI actions for individual staff members are stressed.
  • QI 101, Loyola University Health System, retrieved from LINK. This site, in an auxiliary paper, provides a history of quality improvement, illustrated by the concepts of early CQI pioneers Joseph Juran, Philip Crosby, and W. Edwards Deming, and discusses barriers to effective CQI implementation. Additionally, the site discusses the importance of CQI and provides links to numerous other resources that recount the history, tools, and techniques developed and used in CQI.
  • Quality Improvement in Social Care, Healthcare Quality Improvement Partnership [of the United Kingdom], retrieved from LINK. This site proposes that social care [social services] systems in the U.K. should implement systematic CQI systems that mirror those of health care, and it provides a discussion of the cycles of CQI.
  • Sollecito, W., and Johnson, J., (2012), The Global Evolution of Continuous Quality Improvement: From Japanese Manufacturing to Global Health Services, retrieved from LINK. The article posits that CQI, used very successfully in other industries, remains a critical need for much of the Nation’s health care field. Detail is provided about the “evolution of the quality movement,” beginning with the Japanese auto industry in 1950.
  • Sperber, K., CQI 101: Building and Sustaining an Effective Infrastructure, retrieved from LINK. This PowerPoint presentation provides information on the formation of a CQI system from beginning to end, stressing the major components that make up each step of the system; short- and long-term benefits of a CQI program are clearly stated.
  • Tout, K., Isner, T., and Zaslow, M., (2011), Coaching for Quality Improvement: Lessons Learned from Quality Rating and Improvement Systems, retrieved from LINK. This research brief summarizes the results of a full research study done to determine whether coaching and mentoring in Quality Rating and Improvement Systems [QRIS] in early childhood settings resulted in more positive outcomes for both practitioners and children. The brief concludes with an overview of implications for coaching in QRIS early childhood settings.
  • Using CQI to Improve Child Welfare Practice, (2005), National Child Welfare Resource Center for Organizational Improvement, retrieved from LINK. This article provides a discussion of the results of a meeting of 28 national child welfare CQI experts who were brought together to develop a framework for the implementation of CQI in child welfare agencies; the article includes information on establishing a CQI-receptive culture.
  • What is Continuous Quality Improvement?, National Resource Center for Community-Based Child Abuse Prevention, retrieved from LINK. The article explains the “Plan, Do, Study, Act,” model of CQI and also discusses the differences between evidence-informed practice, evidence-based practice, and evidence-based programs.
  • Wulczyn, F., (2007), Monitoring Child Welfare Programs: Performance Improvement in a CQI Context, Chapin Hall Center for Children at the University of Chicago, retrieved from LINK. The author explains the major steps involved in a CQI program, discusses the cycle of improvement, and provides examples to clearly explain each step.

Data Quality, Decision-Making, and Processes

  • Adams, C., Crowe, P., and Neely, A., The Performance Prism in Action, retrieved from LINK. The authors illustrate the practical application of a new measurement framework for companies, used extensively in the United Kingdom, called The Performance Prism; they address the limitations of traditional measurement frameworks and present a model that incorporates extensive stakeholder involvement.
  • Carrilio, T., (2008), Accountability, Evidence, and the Use of Information Systems in Social Service Programs, Journal of Social Work, April, Volume 8, retrieved from LINK. This article discusses the importance of social workers accurately documenting service activities and outcomes, particularly with the advent of evidence-based practices; further, it describes a “multiple case study” of social workers’ use of computers and data systems.
  • Chapman, A., [report presenter], (2005), Principles of Data Quality, Global Biodiversity Information Facility, Copenhagen, Denmark, retrieved from LINK. This paper highlights the importance of data quality across occupations, although it is geared specifically to primary species-occurrence and environmental assessment data; the importance of data quality and proper documentation in business, medicine, and other fields is also emphasized.
  • Developing a Plan for Outcome Measurement, Strengthening Nonprofits – A Capacity Builder’s Resource Library, retrieved from LINK. This e-learning module discusses and provides suggestions for clarifying goals, assembling a planning team, developing outcomes, crafting logic models, and devising performance measures; additionally, more Web sites are suggested for further learning.
  • Dietrich, R., (2010), Data-based Decision Making Cultures: Four Assumptions, Association for Positive Behavior Support, retrieved from LINK. This presentation describes four assumptions necessary for data-based decision making to be effective and examines the validity of those assumptions. Additionally, it discusses how decisions based on data are increasingly regarded as an ethical obligation by some helping professions.
  • Ensuring Quality in Contracted Child Welfare Services, (2008), U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, retrieved from LINK. This article describes ways that public child welfare agencies can better monitor quality and outcomes within the State agency’s quality assurance/improvement system through contracts with service providers.
  • Fayyad, U., (2002), Datamation, Drilling Down with a Data Mining Pioneer, retrieved from LINK. The author defines drilling down/data mining and provides guidelines and tips for mining data.
  • From Data to Decisions II, Partnership for Public Service, IBM Center for the Business of Government (October 2012), retrieved from LINK. This publication discusses in detail an analytics approach to managing organizations, which allows for the unearthing of hidden problems, monitoring of progress, measuring of performance, and providing of a vision for what should be done better. Clearly described steps are articulated to help agencies begin to use data as a major component in moving forward and measuring progress.
  • Gwet, K., (2012), Handbook of Inter-Rater Reliability, Third Edition: The Definitive Guide to Measuring the Extent of Agreement Among Multiple Raters, retrieved from LINK. This book serves as a handbook for researchers, practitioners, teachers, and students, providing well-organized and readable material on inter-rater reliability for researchers and non-researchers alike.
  • Liddy, C., Wiens, M., and Hogg, W., (2011), Methods to Achieve High Interrater Reliability in Data Collection from Primary Care Medical Records, Annals of Family Medicine, retrieved from LINK. This article deals with inter-rater reliability in the medical setting and makes recommendations for increasing reliability.
  • Reveal, E., and Helfgott, K., (2012), Putting the Pieces Together: Guidebook for Fact-Based Decision Making to Improve Outcomes for Children and Families, Washington, DC: Technical Assistance Partnership for Child and Family Mental Health, retrieved from LINK. This guidebook offers helpful guidance to human services agencies and staff, from those just beginning their “managing by data” journey to those already working in a data-driven culture, with the goal of achieving better outcomes for children and families.

Leadership

  • Collins-Camargo, C., McBeath, B., and Ensign, K., (2011), Privatization and Performance-Based Contracting in Child Welfare: Recent Trends and Implications for Social Service Administrators, Administration in Social Work, Volume 35, Issue 5, 494–516, retrieved from LINK. The authors review information about privatization and performance-based contracting to reveal themes around key management tasks and competencies within these settings. These themes are then considered in light of existing literature, and implications for administrative practice are discussed.
  • Exploring Five Core Leadership Capacities: Engaging in Courageous Conversations, Ontario Ministry of Education Leadership Strategy Bulletin, Winter 2009/10, retrieved from LINK. The article defines, as one core component of desired leadership capacities, “courageous conversations” in organizations, and discusses in depth the need and benefits to organizational change and health from having such conversations.
  • Heifetz, R. A., Grashow, A., and Linsky, M., (2009), The Practice of Adaptive Leadership, Boston, Massachusetts: Harvard Business Press, with information [review] retrieved from LINK. According to the reviewer, the authors define adaptive leadership as “the practice of mobilizing people to tackle tough challenges and thrive,” with the crux of adaptive leadership practice being that if a system is faulty, it must be analyzed, diagnosed, and remedied by taking risks and challenging the status quo to provoke change. Each of the book’s five sections takes the reader through the steps involved in learning and adopting adaptive leadership practices.
  • Lichtenstein, B., and Plowman, D., (2009), The leadership of emergence: A complex systems leadership theory of emergence at successive organizational levels, retrieved from LINK. The authors describe “complexity science” and how it reframes leadership by focusing on the dynamic interactions between individuals and how those interactions can result in “emergent outcomes.” They analyze three empirical studies, leading to the development of a “Leadership of Emergence.”

Systems Change

  • Conner, D., (1993) [book updated 2006], Managing at the Speed of Change, with information retrieved from LINK [overview presented by Vinson, J., (June 2010)]. The book helps agency leaders learn how to orchestrate the transitions vital to their organizations’ success; the dynamics of change are explored and, rather than focusing on what to change, the goal of the book is to show readers how to change.
  • Franks, R., Implementation Science: What Do We Know, and Where Do We Go from Here?, Connecticut Center for Effective Practice, retrieved from LINK. This presentation provides an overview of implementation science and discusses different implementation science theories, such as those of (1) Simpson, (2) Greenhalgh, Robert, Macfarlane, Bate, and Kyriakidou, and (3) the National Implementation Research Network. The steps and stages of implementation are discussed, as well as the importance of having an implementation framework when making practice and process changes.
  • Fullan, M., (2004), Systems Thinkers in Action: Moving beyond the standards plateau, retrieved from LINK. The article aims to promote debate, within and beyond the teaching profession, on how leadership in any major field must increasingly recognize that sustained improvement, through continuous quality improvement and capacity building, is not possible unless systems are constantly moving forward.
  • Leading Fearless Change!, (2013), Russell Consulting, retrieved from LINK. This presentation offers a “natural” model of how people respond to change and discusses actions to assist others through the emotional journey of change, the origins of resistance, and how to work with resisters.
  • Positioning Public Child Welfare Guidance [PPCWG] Reflective Thinking Guides [on topics such as Strategic Partnerships, Change Management, Strategy, and many more], retrieved from LINK. The guides offer practical suggestions, including many reflective questions that agencies and leaders should ask themselves to examine their work closely and better determine how to move forward with forming partnerships, planning strategies, and managing by data.
  • Wandersman, A., Chien, V., and Katz, J., (2011), Toward An Evidence-Based System for Innovation Support (Tools, Training, Technical Assistance, Quality Improvement/Quality Assurance) for Implementing Innovations with Quality to Achieve Desired Outcomes, University of South Carolina, retrieved from LINK. This paper provides theory, research, and action steps for evidence-based innovation support systems, with the major goal of improving the practice of evidence-based support in order to build capacity to implement quality innovations.