CFSR Background

The Child and Family Services Reviews (CFSRs) are a partnership between the Federal and State governments that seeks to examine State programs from two perspectives: outcomes of services provided, and systemic factors that affect those outcomes. The review process identifies the State agency’s strengths as well as areas needing improvement, and then uses a Program Improvement Plan to help States make needed improvements and build on identified strengths. Central to the review process is the promotion of sound practice principles that support improved outcomes for both children and families. The ultimate goal of the review process is to drive program improvements by focusing on systemic changes, as well as to enhance States’ capacity to become self-evaluating.

The primary focus of the CFSRs is on outcomes for children and families and the child welfare system’s efforts to support the achievement of those outcomes. Remember that when we talk about the child welfare system, we are talking about the State child welfare agency as well as all of the other agencies that work together to help families achieve positive outcomes. In other words, we are looking at the entire system of care, a system that includes State agencies, service providers, the courts, law enforcement, foster and adoptive parents, and so on. Note, however, that because of the structure and autonomy of the education system, it is the only system that is considered separately from the child welfare system in this review.

History of the CFSRs

Federal legislation established the authority for the Child and Family Services Reviews (CFSR) process. In 1994, Congress passed amendments to the Social Security Act authorizing the U.S. Department of Health and Human Services, through the Children’s Bureau, to review State child and family service programs to ensure State conformity with titles IV-B and IV-E. Subsequently, the Adoption and Safe Families Act of 1997 (ASFA) influenced the design of the reviews by emphasizing the child welfare goals of safety, permanency, and child and family well-being.

The CFSRs are administered by the Children's Bureau. A video (length: 13:30) by Will Hornsby of the CFSR Unit provides more information on the history, status, key operating principles, and structure of the review process. You can also read the script of the video.

Watch the Video

To view this 13:30 video on a PC, you need Windows Media Player, which may be downloaded at no charge. On a Macintosh computer, the video file will be downloaded and played in QuickTime. The video may take several minutes to open, depending on your Internet connection speed.

Click here to launch the video.

Read the Script

This script contains the text of the video by Will Hornsby of the Children's Bureau.

Hello. My name is Will Hornsby, and I am a child welfare program specialist for the Children’s Bureau Child and Family Services Reviews Unit within the Administration for Children and Families. I’m going to provide a brief overview of the history of the Child and Family Services Reviews, or CFSRs; the key operating principles of the reviews; and the structure of the reviews.

Context for the Reviews

First, let me set the stage for you with a bit of history about the CFSRs.

Federal legislation established the authority for the review process. In 1994, Congress passed amendments to the Social Security Act authorizing the U.S. Department of Health and Human Services, through the Children’s Bureau, to review State child and family service programs to ensure State conformity with titles IV-B and IV-E. Subsequently, the Adoption and Safe Families Act of 1997, or ASFA, influenced the design of the reviews by emphasizing the child welfare goals of safety, permanency, and child and family well-being. ASFA established timeframes for achieving these goals and set forth the responsibility of child welfare agencies to improve outcomes by including families in case planning and by collaborating with community groups and institutions that have an impact on child welfare.

While the Children’s Bureau has the authority to assess compliance, it also is committed to the underlying philosophy of the legislation. That is, the evaluative component of the reviews is designed to be used to identify elements within child welfare systems that are working best to improve outcomes for children and families. That knowledge, in turn, is used to improve child welfare systems across the nation.

The Bureau spent a number of years designing and pilot-testing the reviews and incorporated hundreds of comments from the field into the Final Rule (published in the Federal Register in January 2000). The first round of CFSRs was launched in August 2000, when the first States began their required assessments.

We completed the first round of reviews in 2004. Between Federal fiscal years 2001 and 2004, child welfare programs in all States, the District of Columbia, and Puerto Rico were reviewed using the CFSR process. None of the States, the District of Columbia, or Puerto Rico achieved substantial conformity with respect to all seven child welfare outcomes and seven systemic factors. As a result, all were required to develop Program Improvement Plans, or PIPs, to address areas in which they were found to be out of conformity.

Since implementation of the reviews, the Children’s Bureau has taken many actions to improve the CFSR process:

  • We compiled lessons learned and recommendations from State child welfare agency administrators regarding the first round.
  • We assessed comments about the review process from various sources, including local, county, and State child welfare staff; Federal government child welfare staff; and national child welfare organizations.
  • We retained a consultant to convene a work group of State child welfare agency administrators and researchers to gather information on how the review process could be improved.
  • We established work groups consisting of National Review Team, or NRT, staff and consultants. (The NRT comprises staff from the Children’s Bureau and the Bureau’s Regional Offices who provide leadership to the review teams in planning and conducting the CFSRs.) These work groups then developed strategies for enhancing the review process in five areas: collaboration, helping States build on their prior CFSR and PIP, revising the format of the debriefing process and exit conferences, revising the case sampling strategy, and developing a process for ongoing NRT collaboration and communication.
  • We revised and improved measures for developing State data profiles.
  • We redesigned the Statewide Assessment Instrument, Onsite Review Instrument, and Stakeholder Interview Guide.
  • And finally, we automated the Onsite Review Instrument and the Stakeholder Interview Guide, which will allow for the instant compilation of preliminary review information for presentation at the exit conference and will provide a basis for the Final Reports.

Review Operating Principles

Now that we’ve covered the history of the reviews, let’s look at the key operating principles of the CFSR process. These principles were established for the first round of reviews and will be maintained as standards for the second round. The operating principles are as follows:

First, the reviews represent a partnership between the Federal and State governments. As such, the Children’s Bureau Central and Regional Offices and the State child welfare agency work together to prepare for the review. During the 9 months before the onsite review, the Federal staff, via the Child Welfare Reviews Project, convenes at least five planning conference calls with the State, and the State completes a Statewide Assessment.

The second principle is that the reviews examine State programs from two perspectives: first, the outcomes for children and families of services provided and, second, the systemic factors that affect those outcomes.

We look at seven outcomes of services provided:

Two Safety Outcomes

  • Safety Outcome 1 is that children are protected from abuse and neglect.
  • Safety Outcome 2 is that children are safely maintained in their own homes.

Two Permanency Outcomes

  • Permanency Outcome 1 is that children have permanency and stability in their living arrangements.
  • Permanency Outcome 2 is that the continuity of family relationships and connections is preserved for children.

Three Child and Family Well-Being Outcomes

  • Well-Being Outcome 1 is that families have enhanced capacity to provide for their children’s needs.
  • Well-Being Outcome 2 is that children receive appropriate services to meet their educational needs.
  • Well-Being Outcome 3 is that children receive adequate services to meet their physical and mental health needs.

Outcomes are assessed primarily through case record reviews and case-related interviews conducted during the onsite review and through national data standards for safety and permanency measures.

When assessing outcomes, we are really talking about how the child welfare system in each State is serving the child or children and family whose case is being reviewed. For example, did the agency provide appropriate services to prevent a particular child’s entry into foster care?

However, for two of the outcomes, Safety Outcome 1 and Permanency Outcome 1, decisions about substantial conformity are based on both the onsite case review findings and on data indicators. For these outcomes, six national standards have been established for the data indicators. For the State to achieve substantial conformity on these outcomes, the State data must meet these standards. In addition, the case record review must indicate that the State is in substantial conformity.

As I mentioned previously, we also examine systemic factors that affect the agency’s ability to help children and families achieve those positive outcomes. The systemic factors are the statewide information system, the case review system, the quality assurance system, staff and provider training, the service array and resource development, agency responsiveness to the community, and foster and adoptive parent licensing, recruitment, and retention.

Information about the systemic factors is obtained through the Statewide Assessment and through interviews with State and local stakeholders conducted during the onsite review.

When referring to systemic factors, we are talking about how aspects of the State child welfare system as a whole are performing and how these are affecting outcomes for children and families involved with the child welfare system. For example, how effectively has the State implemented licensing or approval standards for foster family homes and child care institutions so that these standards ensure the safety and health of children in foster care?

In addition, the reviews provide a comprehensive look at services provided to children and families, covering child protective services, foster care, adoption, family preservation and family support, and independent living. The reviews focus on how all of the State’s child welfare programming affects outcomes for children and families.

Third, the reviews are designed to identify both the State agency’s strengths and areas needing improvement for each of the outcomes and systemic factors. The reviews include a program improvement process that States use to make improvements, where needed, and build on identified State strengths.

The fourth principle is that the reviews use multiple information sources to assess State performance. These sources of information include the Statewide Assessment; data indicators; case record reviews; interviews with children, parents, foster parents, social workers, and other professionals working with a child; and interviews with State and community stakeholders. Using multiple sources of information enables reviewers to gain a comprehensive picture of a system, which often is not achieved when looking only at case records.

The fifth key operating principle is that central to this review process is the promotion of sound practice principles believed to support improved outcomes for children and families. Those principles include family-centered practice, community-based services, strengthening parental capacity to protect and provide for children, and individualizing services that respond to the unique needs of children and families.

The sixth principle is that the reviews emphasize the accountability of States to the children, families, and communities that they serve. While the review process supports States in making program improvements before having Federal funds withheld due to nonconformity, there are significant penalties associated with failure to make improvements needed to attain substantial conformity. The Children’s Bureau makes no apologies for this approach. The Bureau’s goal is to ensure that children and families receive the best services possible.

This leads directly to the seventh principle. The reviews are designed to drive program improvements through focus on improving systems. Reviewers identify State program strengths that can be used to make improvements in other program areas where and when they are needed. The Children’s Bureau provides support to States during the Program Improvement Plan development and process.

And finally, the eighth principle is that the reviews focus on enhancing States’ capacity to become self-evaluating. By conducting the Statewide Assessment and participating in the onsite review, States engage in a process for examining outcomes for children and families and the systemic factors that affect those outcomes. States then can adapt, if desired, the process for use in their own quality assurance efforts to conduct ongoing evaluations of their systems and programs.

Review Structure

So what actually happens during a Child and Family Services Review? The CFSR comprises two phases: the Statewide Assessment, which the State completes in the 6 months before the onsite review, and the onsite review.

In the first phase, the Statewide Assessment Team completes a Statewide Assessment, using data indicators to evaluate the programs under review and examine the systemic factors subject to review.

In the second phase, the Onsite Review Team examines outcomes for a sample of children and families served by the State during a specific period (known as the Period Under Review) by doing two things:

  • First, conducting case record reviews and case-related interviews. These are designed to assess the quality of services provided in a range of areas.
  • Second, conducting State and local stakeholder interviews. The interviews are designed to provide information about the systemic factors that affect the quality of those services.

States determined not to be in substantial conformity with any of the outcomes or systemic factors must develop a PIP to address each area of nonconformity.

On behalf of the Child and Family Services Review Team and the Children’s Bureau, thank you for taking the time to watch this video. We hope that you’ve found it helpful and informative and that you will take advantage of the other training modules available on this training site. The resource section of this training module provides access to relevant CFSR documents that will provide you with more specific information in each of these areas we have reviewed. For more information on the CFSRs, visit the Children’s Bureau Web site at www.acf.hhs.gov/programs/cb/ or e-mail cw@jbsinternational.com. The Children’s Bureau appreciates your interest in the Child and Family Services Reviews and welcomes your questions and suggestions.

Philosophical Context for the Reviews

The Child and Family Services Reviews (CFSRs), authorized by the 1994 Amendments to the Social Security Act and administered by the Children’s Bureau, U.S. Department of Health and Human Services, provide a unique opportunity for the Federal Government and State child welfare agencies to work as a team in assessing States’ capacity to promote positive outcomes for children and families engaged in the child welfare system.

The CFSRs are based on a number of guiding principles and concepts and rooted in the concept of collaboration between Federal and State partners.

CFSR Principles and Concepts

The CFSR process uses both qualitative and quantitative data to look at the services provided in a relatively small group of cases, and then evaluates the overall quality of those services. In other words, the process looks at the outcomes for children and families involved with the entire child welfare system by learning about and documenting the stories of those children and families. The CFSRs are based on a number of central principles and concepts, including the following:

Partnership Between the Federal and State Governments: The CFSRs are a Federal-State collaborative effort. A review team comprising both Federal and State staff conducts the reviews and evaluates State performance.

Examination of Outcomes of Services to Children and Families and State Agency Systems That Affect Those Services: The reviews examine State programs from two perspectives. First, the reviews assess the outcomes of services provided to children and families. Second, they examine systemic factors that affect the agency’s ability to help children and families achieve positive outcomes.

Identification of State Needs and Strengths: The reviews are designed to capture both State program strengths and areas needing improvement. The reviews include a program improvement process that States use to make improvements, where needed, and build on identified State strengths.

Use of Multiple Sources To Assess State Performance: The review team collects information from a variety of sources to make decisions about a State’s performance. These sources include:

  • the Statewide Assessment
  • data indicators
  • case record reviews
  • interviews with children, parents, foster parents, social workers, and other professionals working with a child
  • interviews with State and community stakeholders

Promotion of Practice Principles: Through the reviews, the Children’s Bureau promotes States’ use of practice principles believed to support positive outcomes for children and families. These are:

  • family-centered practice
  • community-based services
  • individualizing services that address the unique needs of children and families
  • efforts to strengthen parents’ capacity to protect and provide for children.

Emphasis on Accountability: The reviews emphasize accountability. While the review process includes opportunities for States to make negotiated program improvements before having Federal funds withheld because of nonconformity, there are significant penalties associated with the failure to make the improvements needed to attain substantial conformity.

Focus on Improving Systems: State child welfare agencies determined to be out of conformity through the review develop Program Improvement Plans for strengthening their systems’ capacities to create positive outcomes for children and families. The Children’s Bureau provides support to States during the Program Improvement Plan development and implementation process.

Enhancement of State Capacity To Become Self-Evaluating: Through conducting the Statewide Assessment and participating in the onsite review, States will become familiar with the process of examining outcomes for children and families and systemic factors that affect those outcomes. They can adapt this process for use in the ongoing evaluation of their systems and programs.

CFSR Collaboration

From their inception, the CFSRs were intended to promote change through collaborative principles. This begins with the collaboration between the Federal and State governments in assessing the effectiveness of child welfare agencies in serving children and families. It continues with the collaboration between child welfare agency leaders and their internal and external collaborative partners. Internal partners include staff and consultants. External partners include policymakers; other agencies serving children, youth, and families; the courts; Tribes and tribal organizations; the community; and children, youth, and families.

These collaborations are critical during the two assessment phases of the CFSR (Statewide Assessment and onsite review) and the Program Improvement Plan development, implementation, and evaluation process. The collaborative process focuses on identifying shared goals and activities and establishing a purpose, framework, and plan. Most important, that collaborative process should result in changes that promote improved outcomes for children and families.

Collaborative Principles

The overarching principles guiding the CFSR collaborative process include the following:

  • The safety, permanency, and well-being of children are a shared responsibility, and child welfare agencies should make every effort to reach out to other partners in the State who can help to achieve positive results with respect to the CFSR child welfare outcome measures and systemic factors.
  • Child welfare agencies do not serve children and families in isolation; they should work in partnership with policymakers, community leaders, courts, service providers, and other public and private agencies to improve outcomes for children and families in their States. This includes partnering with organizations that directly serve children, youth, and families and those whose actions impact family and community life.
  • Family-centered and community-based practices are integral to improving outcomes for children and families. As such, collaboration with families, including young people, is important in identifying and assessing strengths and barriers to improved outcomes for children, youth, and families.
  • Real collaboration has a purpose and a goal; it takes time and effort to promote meaningful collaboration. There also are varying degrees of collaboration, each of which can serve the CFSR process and, more importantly, children, youth, and families.

Collaborative Partners

The CFSR process defines key partners that should be engaged in the CFSR Statewide Assessment, onsite review, and Program Improvement Plan. These include partners with whom the State is required to collaborate in developing the Child and Family Services Plan (CFSP) and Annual Progress and Services Reports (APSRs), as noted at 45 CFR, Part 1357.15(1). Examples of these partners include:

  • Court representatives, including, but not limited to, Court Improvement Programs (CIPs)
  • Tribal representatives
  • Youth representatives
  • Child welfare agency internal partners, such as State and local agency staff, training staff, contract staff, supervisors, and administrators
  • Child welfare agency external partners, such as children (as appropriate); biological, foster, and adoptive parents and relative caregivers; and representatives from (1) other State and community-based service agencies, (2) State and local governments, (3) professional and advocacy organizations, and (4) agencies administering other Federal and federally assisted programs. These programs include those funded by the U.S. Departments of Education, Housing and Urban Development, and Labor; the ACF (including Head Start; the Family and Youth Services Bureau; the Office of Family Assistance and the Child Care Bureau within that Office; and the Administration on Developmental Disabilities); the Substance Abuse and Mental Health Services Administration; and the Office of Juvenile Justice and Delinquency Prevention. These programs are responsible for education, labor, developmental disabilities services, juvenile justice, mental health, substance abuse prevention and treatment, family support, services to runaway and homeless youth, domestic violence intervention, child care, Medicaid, and housing.
  • Partners that represent the diversity of the State’s population, especially in relation to those served by the child welfare system
  • Other entities related to children and families within the State, such as the Community-Based Child Abuse Prevention lead agencies, citizen review panels, Children’s Justice Act task forces, and CFSP and Promoting Safe and Stable Families partners

CFSR Structure

The CFSRs comprise two phases: the Statewide Assessment, which the State completes in the 6 months before the onsite review, and the onsite review. During the first phase, the Statewide Assessment Team completes a Statewide Assessment, using data indicators to evaluate the programs under review and examine the systemic factors subject to review. In the second phase, the review team examines outcomes for a sample of children and families served by the State during the period under review (PUR) by conducting case record reviews and case-related interviews to assess the quality of services provided in a range of areas and by conducting State and local stakeholder interviews regarding the systemic factors that affect the quality of those services.

Once a State's onsite review is complete, the Children's Bureau generates a Final Report that serves as written notice of conformity or nonconformity. It is the goal of the Children's Bureau to provide a courtesy copy of this report to the State within 30 days of the onsite review. A State that is determined not to be in substantial conformity with one or more of the seven outcomes or seven systemic factors under review then develops a Program Improvement Plan that addresses all areas of nonconformity. The State submits the Program Improvement Plan to the Children’s Bureau Regional Office for approval within 90 calendar days of receiving the written notice of nonconformity.

Once the Program Improvement Plan is approved, the State implements the plan. During this process, the Children’s Bureau Regional Office monitors the plan’s implementation and the State’s progress toward goals specific to the Program Improvement Plan. During both review phases and the Program Improvement Plan process, if necessary, States have access to training and technical assistance provided by the Children’s Bureau-funded National Resource Centers and coordinated through the Children’s Bureau Regional Offices.

Preparation for the Onsite Review

Preparation for the onsite review involves a wide range of activities, including:

  • Identifying cases to be reviewed.
  • Preparing case records for review.
  • Scheduling case-related and stakeholder interviews.
  • Assembling the review team and preparing reviewer schedules.
  • Planning travel, lodging, and other logistical arrangements.
  • Providing training to members of the review team.
  • Distributing review-related materials and technology to the review team.

Responsibility for these activities is shared among the Children’s Bureau Central Office, the Children's Bureau Regional Office, the State Central Office, and various Local Site Coordinators. The Child Welfare Reviews Project also plays a significant role in the logistical planning that takes place for an onsite review and provides a variety of other resources for the Children's Bureau and State.

Children's Bureau Central Office

The Children's Bureau Central Office has several key responsibilities in preparing for an onsite review. In addition to appointing the Federal Review Team, it collaborates closely with the Children's Bureau Regional Office, State central office, and Child Welfare Reviews Project (CWRP) on a series of review planning conference calls. It also plays a role in selecting the sample of cases that are used during the onsite review and in developing the data profiles that are used to measure a State's substantial conformity.

Appointing the Federal Review Team

During its planning for the onsite review, the Children's Bureau Central Office identifies the National Review Team (NRT) Team Leader and NRT Local Site Leaders. It also identifies other Children's Bureau staff who will serve as Federal reviewers and arranges for any training they might require.

Review Planning Conference Calls

In the weeks leading up to the onsite review, the Children's Bureau Central Office participates in a series of review planning conference calls with the Children's Bureau Regional Office and State child welfare agency staff. These calls are scheduled and facilitated by the CWRP and cover a wide range of review-related topics, including logistical information such as the locations of review sites, lodging arrangements, and the composition of the review team; the State data profile and Statewide Assessment; State policies that may affect the review process; the State's Program Improvement Plan (PIP); stakeholder interviews; the State Team Training, and more.

Sample of Cases

Case selection for an onsite review is handled differently for foster care cases and in-home services cases, although case sampling guidelines are used for both. For foster care cases, the Children's Bureau Central Office draws random samples of cases from Adoption and Foster Care Analysis and Reporting System (AFCARS) data. The cases are drawn from a "universe" consisting of the State's 6-month AFCARS submissions that correspond with the sampling period for the three review sites. The Children's Bureau Central Office then transmits those samples through the Children's Bureau Regional Office to the State in a sorted AFCARS table organized around the four foster care categories and by jurisdiction within a State. This ensures that sites selected for the onsite review have a sufficient number of targeted foster care cases for review.

To select in-home services cases, the Children's Bureau Central Office draws from a list that is provided to it by the State central office. The cases must have been open for at least 60 consecutive days during the sampling period, which extends 2 months beyond the sampling period used for foster care cases, for a total of 8 months.

Data Profiles

The Children's Bureau Central Office develops the safety and permanency data profiles used to measure substantial conformity within the Safety and Permanency outcomes and transmits them through the Children's Bureau Regional Office to the State. These data profiles are then used by the State to perform its Statewide Assessment.

The Children's Bureau Central Office, along with the Children's Bureau Regional Office, then reviews and provides feedback on the Statewide Assessment, which is a key indicator in determining a State's substantial conformity during the onsite review. The Children’s Bureau Central and Regional Offices also provide feedback on State policies and the Preliminary Assessment.

Children's Bureau Regional Office

Like the Children's Bureau Central Office, the Children's Bureau Regional Office has a number of key responsibilities during the preparation for an onsite review. The Regional Office selects the Regional Office Team Leader, who works in close collaboration with the National Review Team (NRT) Team Leader to guide the overall review. 

In addition, the Regional Office Team Leader serves as the Children's Bureau Regional Office's main representative on the series of review planning conference calls with the Children's Bureau Central Office, State central office, and Child Welfare Reviews Project (CWRP). He or she is also responsible for identifying State issues that might be relevant during the onsite review and for assisting in selecting the sample of cases that will be used. The Regional Office Team Leader also plays a role in assembling the Federal Review Team, scheduling the onsite review, and identifying stakeholders to be interviewed during the onsite review.

Identifying State Issues

The Regional Office Team Leader reviews with the State any potential policy issues relevant to the review. He or she will also collaborate in identifying State-specific systemic issues raised in the Statewide Assessment or Preliminary Assessment that may require further review on site.

Sample of Cases

The Regional Office Team Leader helps determine the composition of the sample of cases to be reviewed. He or she reviews and concurs with the State's method for selecting in-home services cases that meet the case sampling guidelines during the period under review, then transmits the State's final sample list to the Children's Bureau Central Office. The Central Office uses it and AFCARS data to draw a random sample of in-home services and foster care cases, which the Regional Office Team Leader then forwards to the State.

Federal Review Team

The Regional Office Team Leader is also instrumental in assembling the Federal Review Team. He or she consults with the Children's Bureau Central Office to assign Regional Office staff to the team, including Federal Local Site Leaders. He or she also consults with the NRT and State Team Leaders to determine the total number of reviewers needed for the review and advises CWRP of the total number of Federal consultant reviewers required to supplement the Federal review team.

In selecting the Federal consultant reviewers, the Regional Office Team Leader collaborates with CWRP to identify and address any potential conflicts of interest. The Regional Office Team Leader then collaborates with the State Team Leader to develop the Federal-State Review Team pairings and site assignments and coordinates with CWRP to plan for and participate in the State Team Training. The training takes place 2 weeks before the onsite review.

Onsite Review Scheduling

Following the training, the Regional Office Team Leader requests that the State Team Leader submit review team schedules, including schedules for case record reviews and case-related interviews, to the Children's Bureau Regional Office. The Regional Office then distributes these schedules to the NRT Team Leader, NRT Local Site Leaders, and CWRP.

The Regional Office Team Leader also collaborates with the NRT Team Leader and State Team Leader to prepare for the Monday morning team meeting, local site exit conference, and statewide exit conference.

Identifying Stakeholders

The Regional Office Team Leader collaborates with the State to identify all required State and local stakeholders. He or she reviews the stakeholder interview schedule developed by the State, submits stakeholder interview schedules to the Children's Bureau Regional Office, then distributes these to the NRT Team Leader, NRT Local Site Leaders, and CWRP.

State Central Office

The State central office and its various local agencies play a significant role in pre-review preparation. The central office will assign a senior State staff person to serve as the State Team Leader, who then provides oversight to the State Onsite Review Team and liaises with the Children's Bureau Regional Office and the Child Welfare Reviews Project (CWRP) in making logistical arrangements for the review. This includes participating in a series of review planning conference calls.

Other collaborative responsibilities that the State central office shares with the Children's Bureau include identifying and selecting the review sites used during the onsite review, assembling the State review team, identifying the in-home services cases that will be reviewed on site, and scheduling the case-related and stakeholder interviews.

Logistical Arrangements

The State central office and its local agencies are responsible for collaborating with the Children's Bureau Central Office, Children's Bureau Regional Office, and CWRP on a number of logistical arrangements for the onsite review. These include coordinating with CWRP to schedule the statewide exit conference by recommending meeting space and inviting participants.

The State central office will also collaborate with CWRP to identify:

  • Lodging arrangements for Onsite Review Team members.
  • Locations and times for nightly debriefings and the local site exit conference.
  • Space for other scheduled meetings and review activities during the week.
  • Transportation for review team members.

Review Site Selection

The State central office is also responsible for identifying the State's three review sites, one of which must be the State's largest metropolitan subdivision. The State central office selects these sites based on information obtained from the Statewide Assessment, then consults with the NRT Team Leader and Regional Office Team Leader before final site selections are made.

Once a review site is selected, the State central office assigns to it a Local Site Coordinator responsible for making local arrangements and ensuring that case records are available onsite. The Local Site Coordinator should be an administrator from the site under review, or their designee. To avoid conflicts of interest, the Local Site Coordinator does not participate in team activities such as nightly debriefings or case-related interviews, but should be available to the team during regular working hours to handle any unexpected issues that may arise.

State Review Team

The State central office is also responsible for identifying State review team members. The State review team should include staff from the State's public child welfare agency as well as representatives from external partners. To avoid conflicts of interest, State review team members should not be assigned as State Local Site Leaders or reviewers in the same site where they work or have oversight responsibilities.

Once the State's review team is selected, the State central office provides information about each team member to the Children's Bureau Regional Office. The State central office then collaborates with the Children's Bureau Regional Office to place Federal and State review team members in review pairs. Each review pair is assigned to a review site at least 6 weeks before the onsite review. 

In-Home Services Cases

The State central office collaborates with the Children's Bureau Regional Office to determine which cases in the State meet the definition of in-home services cases for inclusion in the "universe" of in-home services cases from which the review sample will be drawn. The State central office specifies the methods for identifying and compiling a list of cases that meet this definition and that fall within the period under review, then compiles this list and submits it to the Children's Bureau Regional Office 60 to 90 days before the onsite review.

Case-Related and Stakeholder Interviews

The total sample list of in-home services and foster care cases is transmitted by the Children's Bureau to the Local Site Coordinators. The local agencies managing the onsite review then examine the sample lists and schedule case-related interviews as appropriate. Following the State Team Training, which takes place 2 weeks before the review, the State central office submits the review team schedules for case record reviews and case-related interviews to the Children's Bureau Regional Office.

The State central office also collaborates with the Children's Bureau Regional Office to determine the number and composition of State and local stakeholder interviews to be conducted during the onsite review. Once this is established, the State central office makes appointments for Team Leaders to conduct interviews with stakeholders and submits a stakeholder interview schedule to the Children's Bureau Regional Office at least 2 weeks before the onsite review.

Local Site Coordinators

Local Site Coordinators are assigned by the State central office to each of the review sites. They are State staff members who are not considered part of the Onsite Review Team, but rather serve as the review team's liaison to the child welfare agency at each review site.

Local Site Coordinators have a number of responsibilities in preparing for an onsite review, and will often collaborate closely with the Child Welfare Reviews Project (CWRP) in carrying out their tasks. Of particular importance is their role in case preparation and in scheduling review week activities. They also have a number of other logistical responsibilities.

Case Preparation

Local Site Coordinators manage the process of selecting and assembling the case records that are to be reviewed at the local site. They ensure that all relevant records are ready and accessible at the beginning of the review week. The Local Site Coordinator is also responsible for ensuring that all case records are kept in a secure site for overnight storage during the review week.

Schedule Review Week Activities

Local Site Coordinators take the lead role in scheduling and reserving space for most review week activities, including the following:

  • The Monday morning team meeting, which includes local officials and Federal and State members of the Onsite Review Team. 
  • Case-related interviews for those cases selected for review. The Local Site Coordinator also confirms the interviews, orients those being interviewed to the purposes of the review, and handles any interview reschedulings that become necessary.
  • Local stakeholder interviews, which take place at stakeholders' offices or other suitable locations. As with the case-related interviews, the Local Site Coordinator is also responsible for confirming the stakeholder interviews, orienting the stakeholders to the purposes of the review, and any reschedulings that become necessary.
  • The nightly debriefings.
  • The local site exit conference (in collaboration with the NRT Local Site Leader).

Prior to the onsite review, each Local Site Coordinator finalizes the schedule of all review week activities. The schedule is reviewed and approved by the State Team Leader, who submits it to the Children's Bureau Regional Office and the Children's Bureau Central Office.

Appendix E of the Child and Family Services Reviews Procedures Manual, "Tips on Creating Onsite Review Schedules," contains comprehensive tips on how to schedule interviews, debriefings, conferences, and other required onsite events as well as sample schedules of a review week. 

Other Logistical Responsibilities

The Local Site Coordinator also has a number of other specific logistical responsibilities in preparing for an onsite review. Examples of these responsibilities include:

  • Preparing maps and other written directions for review team members to assist them in getting to the site office and scheduled appointments. He or she will also plan transportation, as required, to and from interviews.
  • Orienting local child welfare agency staff about the review.
  • Booking sleeping rooms for State review team members. 
  • Securing any releases of information or confidentiality forms needed to permit reviewers to access case records and interview individuals associated with the cases.
  • Ensuring that the technical requirements of the CFSR Data Management System are met, including securing Internet connections and power sources. He or she will also receive and secure the shipment of tablet computers that CWRP sends before the onsite review and release them to the Local Site Leader at the start of the review week.

Child Welfare Reviews Project

The Child Welfare Reviews Project (CWRP) is the Federal contractor responsible for handling much of the logistical planning that goes into preparing for an onsite review. CWRP works very closely, as required, with the Children's Bureau Central Office, Children's Bureau Regional Office, State central office, and Local Site Coordinators to ensure that all review planning needs are met. CWRP also schedules and facilitates the series of review planning conference calls, beginning 9 months before the onsite review, in which these separate groups can coordinate their activities.

During this 9-month planning period and the onsite review itself, CWRP tracks the overall status of the review and provides support wherever it is needed. CWRP also provides onsite staff at each review site for technical assistance regarding technology and logistics as necessary. 

CWRP is specifically responsible for a number of other review planning activities, including the recruitment and training of consultants to the Federal team, ensuring that transportation and lodging requirements are met, and providing review documents and technology.

Recruitment and Training

CWRP is responsible for recruiting and training individuals with experience in the child welfare field to become part of a national pool of consultants to the Federal team. Approximately 3 months before the onsite review, CWRP will provide the Children's Bureau Regional Office with the names and profiles of consultants who have indicated their availability to participate in the onsite review and who meet the necessary criteria. The Children's Bureau Regional Office then selects consultants from that list to supplement the Federal side of the onsite review team as partners of State reviewers or as Local Site Leaders.

CWRP is also responsible for designing and conducting the State Team Trainings delivered to members of the State Review Team. These trainings are held in each State approximately 2 weeks before the onsite review. CWRP also designs and conducts trainings for any cross-State participants (CSPs) in the onsite review.

Transportation and Lodging Arrangements

After obtaining review site assignments from the Children's Bureau Regional Office, CWRP is responsible for the lodging arrangements of all Federal review team members. CWRP coordinates these arrangements with State staff to ensure that the entire onsite review team is housed in the same location. CWRP also coordinates onsite transportation arrangements with the Children's Bureau Regional Office and State Team Leaders, and can arrange for rental cars for up to eight consultants who serve as Federal review team members.

In addition, CWRP coordinates with the Children's Bureau Regional Office and State Team Leaders to identify and reserve a location for the Statewide exit conference. CWRP ensures that all necessary equipment is present at the site, and provides staff to manage onsite logistics.

Review Documents and Technology

CWRP is also responsible for producing and delivering to each review site copies of the Onsite Review Instrument (OSRI) and Stakeholder Interview Guide (SIG) that are used during the onsite review, as well as any other documents or information that review team members will require while on site. These materials, along with the tablet computers that contain the CFSR Data Management System, are sent to the attention of the NRT Team Leader and NRT Local Site Leader and arrive at each local site 1 week before the review.

CWRP also produces and distributes Review Information Packages to review team members approximately 2 weeks before the onsite review. Each Review Information Package contains a copy of:

  • The Statewide Assessment.
  • The Preliminary Assessment.
  • The State Policy Submission Form (completed by the State).
  • Demographic information on the local site (if provided by the State).
  • A Review Fact Sheet containing contact information for review team leaders and Local Site Coordinators, important addresses related to the review, and the dates and times of entrance and exit conferences.
  • The Federal and State review team member pairings chart, with site assignments.
  • A preliminary schedule of review week activities (developed by the State).

Statewide Assessment

The Statewide Assessment is the first phase of the CFSRs and is conducted during the 6 months preceding the second phase, which is the onsite review. In conducting the Statewide Assessment, the Statewide Assessment Team uses data indicators and other qualitative information to assess the impact of State policies and practices on the children and families being served by the State child welfare agency.

The Statewide Assessment provides States an opportunity to examine data and qualitative information related to their child welfare programs in light of their programmatic goals and desired outcomes for the children and families they serve. The Statewide Assessment serves the following purposes:

  • Provides States the opportunity to build capacity for continuous program evaluation and improvement
  • Helps prepare the Onsite Review Team for the onsite review by providing evaluative information regarding the child welfare agency’s policies, procedures, and practices
  • Provides a basis for making decisions regarding substantial conformity with the seven systemic factors, in conjunction with the information obtained from the onsite review
  • Identifies issues that require clarification and that therefore may need to be addressed through the training of State Review Team members

The Statewide Assessment Team uses a Statewide Assessment Instrument to record the following:

  • qualitative, evaluative, and quantitative information regarding the State’s outcomes for children and families served
  • systemic factors that affect the State’s ability to provide services
  • State strengths and areas needing improvement
  • issues for further examination through the onsite review

The Statewide Assessment Instrument is designed to assist States in completing their Statewide Assessment in an evaluative manner. The instrument includes a series of narrative-style questions and instructions on documenting data indicators. The Statewide Assessment Team should complete the Statewide Assessment and should be the primary group that responds to the narrative questions. Once the Statewide Assessment is complete, the Children's Bureau will release a Preliminary Assessment that will serve as a critical component in planning for the onsite review.


Statewide Assessment Team

States must include broad representation from within and outside the child welfare agency in forming a team to conduct the Statewide Assessment. The team should include representatives of organizations consulted in developing the CFSP and APSRs and who are expected to be involved in developing and implementing the Program Improvement Plan. States also should consider including on the Statewide Assessment Team individuals from within and outside the State child welfare agency who have the skills and background to serve as case record reviewers and interviewers and who are available to serve on the Onsite Review Team.

The following are suggested participants in the Statewide Assessment Team:

  • Administrators and program specialists from the State and local child welfare agencies
  • State and local agency staff with expertise in areas examined during the Statewide Assessment, such as information systems, quality assurance, training, and licensing
  • Local child welfare agency staff who have knowledge of front-line practice and supervisory issues
  • Judges and other court-related personnel, especially staff of the State's Court Improvement Program (CIP)
  • Representatives of the major domains outside child welfare that are addressed in the Statewide Assessment, such as education, health, mental health, substance abuse treatment, domestic violence prevention, and juvenile justice
  • Tribal representatives
  • Legislative personnel who focus on child welfare issues or funding issues that affect child welfare
  • Advocacy groups and consumer representatives, including children and youth in foster care or the groups that represent them
  • Service provider representatives, including foster and adoptive families
  • University or research-related partners of the State involved in data collection and analysis, training activities, or other relevant areas
  • Partners who represent the diversity of the State's population, especially in relation to those served by the child welfare system

Members of the Statewide Assessment Team may engage in the following types of activities:

  • Participate in training or orientation sessions
  • Attend meetings related to the Statewide Assessment or the review process
  • Analyze the data related to outcomes and systemic factors
  • Collect additional data as needed
  • Gather information pertaining to the agency's performance, such as conducting or participating in focus groups, surveys, or interviews
  • Develop, review, and comment on drafts of the Statewide Assessment
  • Participate in conference calls with Federal staff during the Statewide Assessment process (Statewide Assessment Team leadership only)
  • Make recommendations pertaining to the onsite review, such as sample composition, site selection, and Onsite Review Team composition
  • Identify the State's strengths and areas needing improvement on the basis of data and information gathered for the Statewide Assessment
  • Explore strategies for possible program improvement efforts in areas identified as needing improvement, and make preliminary recommendations to the State's Program Improvement Plan Development Team

Completing the Statewide Assessment

The Statewide Assessment is completed using the Statewide Assessment Instrument, which is divided into five sections:

  • General Information, which provides information about the child welfare agency;
  • Safety and Permanency Data, which States use to examine and report on their foster care and child protective services populations using the safety and permanency profiles provided by the Children’s Bureau’s data team;
  • Narrative Assessment of Child and Family Outcomes, which States use to examine their data in relation to the three outcome areas under review;
  • Systemic Factors, where States provide narrative responses to questions about the seven systemic factors under review; and
  • State Assessment of Strengths and Needs, where States answer questions about the strengths of the agency’s programs and areas that may warrant further exploration through the onsite review.

The Statewide Assessment includes data that the Children’s Bureau extracts from the Adoption and Foster Care Analysis and Reporting System (AFCARS) and the National Child Abuse and Neglect Data System (NCANDS) Child File (the case-level component of NCANDS) and transmits to the State in report format. AFCARS data are used to develop a permanency profile of the State’s foster care populations, and NCANDS data are used to develop a safety profile of the child protective services population. These data profiles include data indicators that are used to determine substantial conformity.

For the initial review only, the Children’s Bureau could approve another source of data for the permanency profile in the absence of AFCARS data. Additionally, for both the initial and subsequent reviews, the Children’s Bureau may approve another source of data for the safety profile in the absence of NCANDS data. This source would then be used to prepare an alternative data profile.

Once compiled, the data profile (or alternative data profile) serves as the foundation for the data analysis performed by the Statewide Assessment Team. The Children’s Bureau has established national standards for each of the data indicators used to determine substantial conformity. When a State is undergoing a CFSR, the Children’s Bureau Regional Office and the State compare the State’s data for the period under review (PUR) with these national standards to determine the State’s substantial conformity with these standards. The completed Statewide Assessment will analyze the relationship between State data and practice, and the quality and effectiveness of the system under review. For example, if a State’s data show that children have frequent re-entries into foster care following reunification, the State will use the Statewide Assessment process to explore, and then document, the possible reasons why this is occurring. To do so, the State might examine the availability, accessibility, and quality of services to support family reunification.
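The comparison of State data indicators against national standards can be illustrated with a short sketch. The indicator names, standard values, and State figures below are invented for illustration only; the actual indicators and standards are those published by the Children's Bureau.

```python
# Hypothetical comparison of State data indicators against national standards.
# (standard, direction) per indicator: "min" means the State's value must meet
# or exceed the standard; "max" means it must not exceed it.
national_standards = {
    "timeliness_of_reunification": (75.0, "min"),
    "foster_care_reentry": (9.0, "max"),
}

# Hypothetical State figures for the period under review (PUR).
state_data = {
    "timeliness_of_reunification": 71.2,
    "foster_care_reentry": 12.5,
}

def meets_standard(indicator, value):
    """Return True if the State's value satisfies the national standard."""
    standard, direction = national_standards[indicator]
    return value >= standard if direction == "min" else value <= standard

# Indicators falling short are candidates for further exploration in the
# Statewide Assessment (e.g., examining reunification support services).
needs_exploration = [
    name for name, value in state_data.items()
    if not meets_standard(name, value)
]
print(needs_exploration)  # → ['timeliness_of_reunification', 'foster_care_reentry']
```

In this example both indicators fall short, so the State would document possible reasons for each in its Statewide Assessment.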

When the Statewide Assessment has been completed and accepted by the Children's Bureau Regional Office, the Children's Bureau will then release a Preliminary Assessment for the State that will serve as a critical component in planning for the onsite review.


Data Profiles

Six months before the onsite review, the Children’s Bureau Regional Office transmits to the State the AFCARS and NCANDS data profiles, unless the data are not available from the State’s submissions. This provides the State the opportunity to examine the profiles for accuracy and then decide whether it needs to correct and resubmit the data.

If the State resubmits data before the onsite review, the Children’s Bureau prepares updated data profiles on the basis of the resubmitted data. The turnaround time for doing so is generally 2–4 weeks. States that elect to resubmit data should therefore do so as early as possible after receiving the initial profiles.

The Children’s Bureau uses a specific data syntax to create the data profiles for the Statewide Assessment. States are encouraged to use this syntax to create and review their own data profiles before starting the Statewide Assessment. By doing so, States will have more time to examine the accuracy of their data and make corrections before receiving their official data profiles for the Statewide Assessment. If this data syntax is not normally used by the State, using the logic established by the syntax will enable the State to create its own data syntax that will be more compatible with that used for the review. The syntax (Data Profile Programming Logic) is available on the Children’s Bureau Web site.

Data Profiles Using Alternative Sources

If a State does not submit data to NCANDS, the Children’s Bureau Regional Office and State must agree on an alternate source of statewide data to be used in preparing the safety profile. Also, for its initial review, if the State had incomplete AFCARS data, an alternate source of data approved by the Children’s Bureau could be used to generate the permanency data profiles. In the absence of NCANDS data, the Children’s Bureau Regional Office requests that the State submit its description of the proposed alternate source of data to the Children’s Bureau Regional Office 8 months before the onsite review. This provides time for the Children’s Bureau Regional Office to approve the data and transmit them to the Children’s Bureau Central Office to prepare the profiles.

The Children’s Bureau Regional Office, in consultation with the Children’s Bureau Central Office, approves or disapproves the alternate data source, using the following criteria:

  • The data accurately represent the State’s service population.
  • The reporting definitions and timeframes of the alternate source are consistent with those of NCANDS.

Some of the data elements in the data profiles are used to determine the State’s substantial conformity. Failure to provide data from an alternate source, in the absence of NCANDS data, could result in a determination that the State is not in substantial conformity with Safety Outcome 1. When the Children’s Bureau has approved the alternate source of data for the profiles, the State transmits the data to the Children’s Bureau data team, which uses them to prepare the profiles. The State then notifies the Children’s Bureau Regional Office that it has done so. The Children’s Bureau Central Office prepares the profiles and sends them to the Children’s Bureau Regional Office, which transmits them to the State at least 6 months before the onsite review.

If the State submits the data from the alternate source to the Children’s Bureau in a timely manner, the profiles will reflect the alternate data when the Children’s Bureau transmits them to the State 6 months before the onsite review. If the State is not able to submit the alternate data in a timely manner, the Children’s Bureau updates the profiles to reflect the alternate data as soon as possible after receiving it.

Onsite Review

The onsite review is the second phase of the Child and Family Services Reviews (CFSRs) and is designed primarily to gather qualitative information. The onsite review lasts 1 week (see Module 2: The Review Week) and includes the examination of a sample of cases for outcome achievement and interviews with State and local stakeholders to evaluate the outcomes and systemic factors under review. The review takes place in three sites in the State. The State’s largest metropolitan subdivision is a required site, and the other two sites are determined on the basis of information in the Statewide Assessment.

During the onsite review, the Onsite Review Team examines case records, conducts case-related and stakeholder interviews, and participates in nightly debriefings, local exit conferences, and the statewide exit conference. The goal of the case record reviews and case-related and stakeholder interviews is to obtain qualitative information that complements the quantitative information reported through the Statewide Assessment.

The onsite review also permits the team to collect information on items/outcomes that is not reported in aggregate form through data collection, such as risk assessment and safety management and the nature of the relationship between children in care and their parents. The combination of the data reported through the Statewide Assessment and the information on child and family outcomes and statewide systemic factors gathered through the onsite review allows the review team to evaluate programs’ outcome achievement and identify areas in which the State may need technical assistance (TA) to make improvements.

The Children’s Bureau developed the following standardized instruments for collecting and recording information during the onsite review:

  • Onsite Review Instrument and Instructions (OSRI): The OSRI is used by review team members who conduct case record reviews. It contains questions to guide the case record review process and provides space for rating the 23 items and 7 outcomes under review and for documenting information to support those ratings.
  • Stakeholder Interview Guide (SIG): The SIG provides a framework for the Team Leaders and Local Site Leaders who conduct interviews with stakeholders regarding the outcomes and systemic factors under review. The guide lists the individuals whom the NRT Local Site Leader must interview and provides core and follow-up questions for each of the 23 items under the 7 outcomes and 22 items under the 7 systemic factors.
  • Preliminary Assessment and Summary of Findings Form: This form is used by the Children’s Bureau Regional Office to: (1) prepare an analysis (the Preliminary Assessment) of the State’s performance on the outcomes and systemic factors, on the basis of information from the Statewide Assessment, (2) record the preliminary findings of the onsite review, and (3) prepare the Final Report of the review.

Preparation for the onsite review includes selecting cases to be reviewed; preparing case records for review; scheduling case-related interviews and State and local stakeholder interviews; preparing reviewer schedules, making other logistical arrangements, and distributing review-related materials to the review team; and providing training. Responsibilities for these activities are shared between the Children’s Bureau Central and Regional Offices, Child Welfare Reviews Project, State central and local child welfare agencies, and Local Site Coordinators. 

Case Selection

Before selecting the in-home services and foster care samples, the Children’s Bureau Central and Regional Offices and State staff will confirm the three counties or other geographical areas where the onsite review will be conducted. These review sites are selected on the basis of a draft Statewide Assessment, and in making their selections, the Children's Bureau will ensure that each review site has at least three times more in-home services and foster care cases than need to be scheduled for the review. The Children's Bureau will also confirm that any sealed foster care or adoption records will be available if they are selected for the sample.

A total of 65 cases will be reviewed per State, unless unusual circumstances exist and specific arrangements are made between the Children's Bureau and the State to review fewer cases. The breakout of cases in the State's review sample is as follows:

  • 25 in-home cases. These will reflect the State’s in-home services population as defined in the State CFSP
  • 40 foster care cases. These will be stratified into four categories to achieve an adequate representation of cases in key program areas

In no situation will a State’s review involve more than 40 foster care cases, even if the number of in-home services cases does not reach 25. In situations where the number of in-home services cases cannot be reached and adjustments across sites are necessary, the Children's Bureau will seek to review a minimum of 5 in-home services and 10 foster care cases in each of the two non-metropolitan sites and a minimum of 10 in-home services cases in the metropolitan site. In addition, when the foster care cases from all three sites are combined, there should be 10 cases total in each of the four categories.
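The composition rules above can be sketched as a simple allocation check. The site names and the example allocation are hypothetical; only the numeric minimums come from the guidelines described here.

```python
# Sketch of the CFSR sample-composition rules: 25 in-home services cases,
# 40 foster care cases, with per-site minimums when adjustments are needed.
TOTAL_IN_HOME = 25
TOTAL_FOSTER_CARE = 40

def check_allocation(in_home, foster_care, metro_site):
    """Validate a per-site case allocation against the stated minimums.

    in_home / foster_care: dicts mapping site name -> number of cases.
    metro_site: the required largest-metropolitan-subdivision site.
    Returns a list of problems (empty if the allocation qualifies).
    """
    problems = []
    if sum(in_home.values()) > TOTAL_IN_HOME:
        problems.append("more than 25 in-home services cases")
    if sum(foster_care.values()) != TOTAL_FOSTER_CARE:
        problems.append("foster care total is not 40")
    if in_home.get(metro_site, 0) < 10:
        problems.append("metropolitan site has fewer than 10 in-home cases")
    for site in foster_care:
        if site == metro_site:
            continue
        if in_home.get(site, 0) < 5 or foster_care.get(site, 0) < 10:
            problems.append(f"non-metropolitan site {site!r} below minimums")
    return problems

# Hypothetical allocation: 10 + 8 + 7 = 25 in-home; 20 + 10 + 10 = 40 foster care.
problems = check_allocation(
    in_home={"Metro": 10, "Site B": 8, "Site C": 7},
    foster_care={"Metro": 20, "Site B": 10, "Site C": 10},
    metro_site="Metro",
)
print(problems)  # → [] (this allocation meets the minimums)
```

Note that the check covers only the per-site minimums; the separate requirement of 10 foster care cases per stratification category applies to the combined statewide sample.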

Case Sampling Guidelines

After the three review sites have been determined, the Children's Bureau draws two random samples of cases to be reviewed (a total of 150 in-home services cases and approximately 150 foster care cases) from the respective universe of cases in the three sites to be reviewed. The sample of in-home services cases is selected by family, and the sample of foster care cases is selected by child. Before the Children’s Bureau sends the sample of 150 foster care cases to the State, it randomizes the records in the sample. This is designed to preclude any bias when the State selects the cases to be reviewed at each of the three sites.

For in-home services cases, the “universe” of cases is a State-provided list of in-home services cases that were open for services for at least 60 consecutive days during the sampling period and in which no children in the family were in foster care for 24 hours or longer during any portion of the review period. The State should provide this list of in-home services cases to the Children's Bureau because that information is not currently available through the NCANDS or other national data sources. The sampling period for in-home services cases extends 2 months beyond the sampling period for foster care cases, for a total of 8 months. This is because the CFSRs review in-home services cases that were open for at least 60 days.

For foster care cases, the “universe” (list) of cases is the State’s 6-month AFCARS submissions that correspond with the sampling period for the three review sites. To ensure that sites selected for the onsite review will have a sufficient number of the targeted foster care cases for review, the Children's Bureau will sort the AFCARS foster care file by the four categories and by jurisdiction within a State. A table is then generated for each State identifying the jurisdictions and the number of cases in each of the four categories. This assists in the site selection process after sites are proposed through the Statewide Assessment.

Local Site Coordinators then schedule the 65 cases for onsite reviews across the three sites. At each review site, approximately 15-35 cases are reviewed (for example, the Onsite Review Team typically reviews up to 35 cases in the largest metropolitan subdivision and no fewer than 15 in the other two sites), unless otherwise agreed upon by the Children's Bureau and the State. The Children's Bureau, however, will review no fewer than 15 cases at any review site.

In-Home Services Cases

In-home services samples are family-based and are selected from a universe of cases provided by the State. The State should provide the universe as soon as possible after the review sites are selected. The universe of in-home services cases should include the State’s non-foster care cases for which the State’s title IV-E/IV-B agency is responsible as defined in State policy, or the families served pursuant to the State’s Child and Family Services Plan (CFSP). Juvenile justice cases, mental health cases, and other in-home services cases, even if they are not federally funded, are to be included in the State’s in-home services universe if the services the State IV-E/IV-B agency provides to them, either directly or through contractual arrangements, are provided pursuant to the State’s CFSP. This would include, for example, the requirement that a State have a pre-placement preventive services program to help children at risk of foster care placement remain safely with their families.

In determining whether an in-home services case should be included in the universe, the State should consider the following criteria:

  • Whether the State or local title IV-E/IV-B funded child welfare agency has or had ongoing responsibility for the case, as defined in State policy, or the families are served pursuant to the State’s CFSP
  • Whether the case was open for at least 60 consecutive days during the sampling period and did not have any children in the family in foster care for 24 hours or longer during any portion of the review period

The Children's Bureau Regional Office staff should determine whether the State’s in-home services cases are listed by family or by child. If a State lists its in-home services cases by child instead of by family, the Children's Bureau Regional Office will request that the State provide its list of in-home services cases with the children from each family grouped together. The ease of grouping these cases will depend on whether children from the same family have the same case number or another designation that identifies them as being from the same family.

Upon receiving the list of cases, the Children's Bureau data team will select a total of 150 in-home services cases from the three review sites, on the basis of the proportion of cases to be reviewed at each site. If 10 of the 25 in-home services cases (40 percent) scheduled to be reviewed are in county A, for example, the Children’s Bureau data team selects a sample of 60 (0.4 x 150) in-home services cases from county A’s list. If this is not possible, the Children's Bureau data team attempts to preserve the proportionality of the cases scheduled for review at each site to the extent possible. The Children's Bureau then re-randomizes the cases in each sample before transmitting these to the State.
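The proportional allocation described above can be sketched as follows; the site names and case counts are hypothetical examples, not actual CFSR data.

```python
# Illustrative sketch of the proportional sample allocation described above.
# Site names and counts are hypothetical, not actual CFSR data.

def allocate_sample(cases_per_site: dict[str, int], oversample_total: int = 150) -> dict[str, int]:
    """Allocate the oversample across sites in proportion to the number
    of cases scheduled for review at each site."""
    scheduled_total = sum(cases_per_site.values())  # e.g., 25 in-home services cases
    return {
        site: round(oversample_total * n / scheduled_total)
        for site, n in cases_per_site.items()
    }

# Example from the text: 10 of 25 in-home services cases (40 percent) are in
# county A, so county A's share of the oversample is 0.4 x 150 = 60 cases.
allocation = allocate_sample({"County A": 10, "County B": 8, "County C": 7})
print(allocation)  # {'County A': 60, 'County B': 48, 'County C': 42}
```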

After the State receives the three re-randomized samples, it verifies and finalizes the list of cases to be reviewed and schedules cases sequentially from the lists, maintaining the exact order used in the sample provided by the Children's Bureau and eliminating any ineligible cases after consultation with the Children's Bureau Regional Office.

If 25 in-home services cases cannot be scheduled on site, no substitution of foster care cases will be undertaken. At least two alternate in-home services cases should be available from the lists at each site in the event that in-home services cases are eliminated during the onsite review. If the target number of in-home services cases cannot be reached or adjustments across sites are necessary, the Children's Bureau Regional Office will seek to review a minimum of five in-home services cases for the two non-metropolitan sites.
 

Foster Care Cases

The State’s universe of foster care cases is the State’s AFCARS submission that corresponds with the sampling period for the three review sites. The universe of cases should comprise children for whom the agency has placement and care responsibility and who are considered to be in foster care on the basis of AFCARS reporting requirements. If juvenile justice or mental health cases are reported to AFCARS consistent with AFCARS requirements, they are part of the universe of cases.

From the AFCARS file or abridged AFCARS file, the Children’s Bureau data team selects approximately 150 foster care cases on the basis of the proportion of cases to be reviewed at each site. Foster care cases are stratified into four categories to achieve an adequate representation of cases in key program areas, with 10 cases reviewed in each category. The four categories are as follows:

  • Category 1: 10 cases involving children who were ages 16 or 17 as of the last day of the PUR or the date that they exited care, as applicable. These children could have any permanency goal and could have entered care either before or during the PUR.
  • Category 2: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable. These children will have a current permanency goal of adoption and will have entered care either before or during the PUR.
  • Category 3: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable, and who entered care during the PUR. These cases could have any permanency goal except adoption.
  • Category 4: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable, and who entered care prior to the PUR. These cases could have any permanency goal except adoption.

The first three categories may include children entering foster care during the PUR, which ensures a proportion of this case type that is consistent with the regulation and addresses the need to focus on State practice after the first round of Program Improvement Plan implementation. The case numbers for these categories reflect the need to focus on (1) State practice during the PUR, (2) re-entries, and (3) the second round's emphasis on the population of older youth in care.

Category 4 is intended to allow the random selection of cases with case plan goals other than adoption. These include guardianship, permanent placement with relatives, and other types of cases involving children younger than age 16 with a goal of Other Planned Permanent Living Arrangement.
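As a rough illustration, the stratification rules above can be expressed as a simple classification function; the parameter names and goal values are hypothetical and do not correspond to actual AFCARS data elements.

```python
# Hedged sketch of the four-category foster care stratification described above.
# Parameter names and goal values are hypothetical, not actual AFCARS elements.

def foster_care_category(age_at_pur_end_or_exit: int,
                         goal: str,
                         entered_during_pur: bool) -> int:
    """Return the stratification category (1-4) for a foster care case."""
    if age_at_pur_end_or_exit >= 16:
        return 1   # ages 16 or 17; any goal; entered before or during the PUR
    if goal == "adoption":
        return 2   # under 16; current permanency goal of adoption
    if entered_during_pur:
        return 3   # under 16; entered during the PUR; any goal except adoption
    return 4       # under 16; entered before the PUR; any goal except adoption

print(foster_care_category(17, "reunification", False))  # 1
print(foster_care_category(10, "adoption", True))        # 2
print(foster_care_category(5, "reunification", True))    # 3
```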

After the State receives the list of approximately 150 foster care cases divided into 12 files, 4 for each site, it schedules the cases to be reviewed according to the case order listing, eliminating ineligible cases using the established elimination guidance. Each site should have at least two cases per category remaining on the lists as alternates in the event that cases are eliminated during the onsite review. States should not substitute cases from one list to supplement another list that incurred a shortfall.

Case Elimination

In some instances, cases that are included in the State sample may need to be eliminated from consideration. This will normally happen during the case selection process, although in some instances it may become necessary during the onsite review. States should only eliminate a case after consultation with the Children's Bureau, and generally only if one or more of the following reasons apply:

  • Key individuals are unavailable during the onsite review week or are completely unwilling to be interviewed, even by telephone. The key individuals in a case are the child (if school age), the parents, the foster parents, the caseworker, and other professionals knowledgeable about the case. Before eliminating these cases, the State should determine whether sufficient information and perspectives can be obtained from the available parties.
  • The case involves out-of-county or out-of-State family members or services that may not be readily available during the review week. Children on runaway status should not be eliminated from the sample unless it has been determined that pertinent information needed to complete the OSRI cannot be obtained from other available parties, such as the guardian ad litem or other significant individuals. Local Site Coordinators should make reasonable efforts to seek the participation of key individuals in the case to ensure the validity of the random sample.
  • An in-home services case open for fewer than 60 consecutive days during the PUR.
  • An in-home services case in which any child in the family was in foster care for more than 24 hours during the PUR.
  • An in-home services case in which any child in the family was in foster care during the 8-month sampling period or entered foster care between the end of the 8-month sampling period and the first day of the onsite review.
  • A foster care case open fewer than 24 hours during the PUR.
  • A foster care case in which a child was on a trial home visit (placement at home) during the entire PUR. If the child was in a foster care placement for any portion of the PUR, the case should stay in the foster care sample.
  • A case reported to AFCARS in error, such as a foster care case that was officially closed before the PUR, resulting in no State responsibility for the case; or a case in which the target child reached the age of majority as defined by State law before the PUR.
  • A case appearing multiple times in the sample, such as a case that involves siblings in foster care in separate cases or an in-home services case that was opened more than one time during a sampling period. If siblings appear on the list, the State should select the case of the child that appears first on the list and skip the cases of the other children or other cases involving the same family.
  • A foster care case in which the child’s adoption or guardianship was finalized before the PUR and the child is no longer under the care of the State child welfare agency.
  • Situations in which case selection would result in over-representation of child welfare agency staff, such as when more than two cases in one site are from the caseload of a single caseworker.
  • Situations in which case selection would result in over-representation or under-representation of juvenile justice cases.
  • A case in which the child was placed for the entire PUR in a locked juvenile facility or other placement that does not meet the Federal definition of foster care.

The cases in the sample of approximately 150 cases that are not selected for review may serve as substitute cases to replace any selected cases that are eliminated on site or to resolve discrepancies.
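A few of the elimination reasons above are purely mechanical and could be screened programmatically, as in this hedged sketch; most of the other reasons require judgment and consultation with the Children's Bureau, and the field names here are hypothetical.

```python
# Sketch of only the rule-based screens among the elimination reasons above.
# Reasons involving judgment (availability of key individuals, over-representation,
# and so on) require consultation and are not modeled. Field names are hypothetical.

def fails_mechanical_screens(case: dict) -> bool:
    """True if the case clearly meets one of the rule-based elimination criteria."""
    if case["type"] == "in_home":
        return (case["days_open_in_pur"] < 60                       # open fewer than 60 days
                or case["any_child_in_foster_care_during_pur"])     # child in foster care
    # foster care case
    return (case["hours_open_in_pur"] < 24                          # open fewer than 24 hours
            or case["trial_home_visit_entire_pur"])                 # at home the entire PUR

print(fails_mechanical_screens(
    {"type": "in_home", "days_open_in_pur": 45,
     "any_child_in_foster_care_during_pur": False}))  # True
```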

Case Sampling Issues During the Onsite Review

The NRT Local Site Leader and the Local Site Coordinator will need to approve decisions to eliminate a case because of last-minute developments that result in insufficient information being available to review the case. If an interview with a critical party to the case is cancelled at the last minute, for example, the case should be eliminated from the sample. The NRT Local Site Leader and Local Site Coordinator then should consider whether sufficient time exists to use a substitute case.

If the State already has identified alternate cases, it should substitute those cases by following the numerical order provided in the sample. If the State has not previously identified alternate cases, it should use the original sample and sampling procedures to select substitutes. The State also may draw from these cases to resolve discrepancies between information in the Statewide Assessment and the findings of the onsite review should additional cases need to be reviewed to resolve the discrepancies.

In addition, if during the onsite review an in-home services case is found to have included an episode of foster care during the PUR, it may be reviewed as a foster care case only when an alternative in-home services case cannot be substituted. A foster care case found during the onsite review to involve a family that has received in-home services during the entire PUR may be reviewed as an in-home services case only when no alternative foster care cases can be scheduled, provided no child in the family was in foster care during the PUR.

 

Case Record Preparation

All case records to be reviewed are made available at the review sites in their entirety, including applicable information for periods preceding the PUR. Case records also should be as orderly and up to date as possible, including any files maintained separately, such as separate child protective services files or separate child and family records. Caseworkers and/or supervisors assigned to these cases also should be available for interviews.

If the child welfare agency uses electronic files instead of or in addition to paper files, the Local Site Coordinator needs to: (1) make computers and technical support available to reviewers so that they can view the electronic records, (2) obtain hard copies of the files or the portions of the files containing information relevant to the review, or (3) use a combination of these two approaches.

If necessary, the State agency obtains confidentiality statements or releases of information before the onsite review to permit reviewers to read case records and conduct case-related interviews. In addition, the Child Welfare Reviews Project requires that all consultants serving on the Review Team sign an agreement that includes a confidentiality provision.

Interview Scheduling

Case-related interviews and stakeholder interviews are key components of the onsite review. These interviews are arranged before the review week begins.

Case-Related Interviews

Each review pair is responsible for reviewing the case records they are assigned and interviewing key individuals involved in the cases. In general, the following individuals related to a case will be interviewed unless they are unavailable or completely unwilling to participate:

  • The child, if he or she is school age. Cases involving preschool-age children may be reviewed but do not require an interview with the child. Instead, the reviewers might observe the child in the home while interviewing the birth or foster parents.
  • The child’s parents.
  • The child’s foster parents, pre-adoptive parents, or other caregivers, such as a relative caregiver or group home houseparent, if the child is in foster care.
  • The family’s caseworker. When the caseworker has left the agency or is no longer available for interview, it may be necessary to schedule interviews with the supervisor who was responsible for the caseworker assigned to the family.
  • Other professionals knowledgeable about the case. When numerous service providers are involved with a child or family, it may be necessary to schedule interviews only with those most recently involved, those most knowledgeable about the family, or those who provide the primary services the family is receiving. More than one service provider may be interviewed.

As needed, on a case-by-case basis, other individuals who have relevant information about the case also may be interviewed. These individuals may include the child’s guardian ad litem, advocate, or other family members. If possible, interviews with parents, foster parents, and children should be conducted in their homes or foster homes. Service providers may be interviewed wherever it is most convenient for them and the review pair. When travel arrangements and the schedules of reviewers preclude travel to those locations, or when persons to be interviewed prefer not to have reviewers in their homes or offices, the interviews may take place in a central location or by telephone.

Case Record Interview Scheduling

The Local Site Coordinator handles all case record interview scheduling. He or she will generally allow time at the beginning of each day for reviewers to read the cases before the first interview is scheduled. Each interview is typically scheduled for 1 hour or less, and the Local Site Coordinator will build in time between interviews for any necessary travel. Additionally, he or she will prepare, in advance, maps or other written directions to the interview sites and provide these to each review pair. The interviews scheduled for each day will correspond to the case that the review pair is expected to complete that day.

In general, the caseworker will not be present at the interview. If, however, concerns exist about the safety of reviewers or other issues related to the interview, the Local Site Coordinator will take the necessary precautions, such as arranging for the interview to be held in the local child welfare agency office. If special accommodations are required to complete an interview, for example, to address language needs, the Local Site Coordinator will make those arrangements as well, including obtaining an interpreter, if needed. Before the review week begins, the Local Site Coordinator will prepare the individuals being interviewed for their interviews. This preparation will include helping them to understand the purpose of the review and confirming the time and location of the interview in writing.

Stakeholder Interviews

Stakeholder interviews can involve State or local stakeholders. The State Team Leader schedules State stakeholder interviews, in collaboration with the NRT and Children's Bureau Regional Office Team Leaders, and confirms the appointments in writing. On average, around 15 State stakeholder interviews are scheduled for the review week, unless the NRT or Children's Bureau Regional Office Team Leaders request additional interviews. Each interview is normally scheduled to last an hour or more, with time built in for travel between interviews.

Local stakeholder interviews are scheduled by the Local Site Coordinator. As with State interviews, there are usually around 15 local stakeholder interviews per review site unless the NRT or Children's Bureau Regional Office requests additional interviews. They may be conducted either at the local agency or where the stakeholders are located.

 

Substantial Conformity and Program Improvement Plans

Because child welfare agencies work with the nation’s most vulnerable children and families, the Children’s Bureau has established very high standards of performance for child and family services systems. States are expected to meet defined criteria regarding the outcomes and systemic factors examined in the Child and Family Services Reviews (CFSRs), as well as national standards established by the Children’s Bureau regarding safety and permanency. These high standards underpin the entire CFSR process and are designed to strengthen the delivery of effective services, fortify partnerships, encourage ongoing self-monitoring and continuous quality improvement (CQI), and achieve more positive outcomes overall for children and families.

At the end of a CFSR onsite review, the Children’s Bureau analyzes information from a variety of sources to determine whether a State is in substantial conformity with the seven outcomes and seven systemic factors. Substantial conformity means that the State meets Federal criteria established for each outcome or systemic factor. 

States determined by the Children’s Bureau not to have achieved substantial conformity in one or more of the assessed areas must develop and implement a Program Improvement Plan (PIP). The PIP is a critically important component of the CFSR, since each State’s PIP serves as a blueprint for addressing identified areas needing improvement in order to achieve substantial conformity across all outcomes and systemic factors. Additionally, the PIP enables a State to build ongoing capacity to evaluate the performance of its entire child welfare system.

Substantial Conformity

The Child and Family Services Review (CFSR) is a comprehensive review of a State's child and family services system. The Children’s Bureau analyzes information from a variety of sources to determine whether a State is in substantial conformity with the seven outcomes and seven systemic factors. Substantial conformity means that the State meets Federal criteria established for each outcome or systemic factor. 

The Children's Bureau uses the following documents and data collection procedures to make its determinations:

  • The Statewide Assessment, prepared by the State child welfare agency.
  • The State Data Profile, prepared by the Children’s Bureau.
  • Case record reviews of 65 cases (40 foster care and 25 in-home services cases) at three sites, one of which has to be the largest metropolitan area in the State.
  • Stakeholder interviews and focus groups (conducted at all three sites and at the State level) with stakeholders including, but not limited to: youth, parents, foster and adoptive parents, all levels of child welfare agency personnel, collaborating agency personnel, service providers, court personnel, child advocates, Tribal representatives, and attorneys.

Conformity with the outcomes is primarily based on information gathered from the sample of 65 cases examined during the State’s CFSR onsite review. For Safety Outcome 1 and Permanency Outcome 1, the Children’s Bureau also evaluates whether the State meets specific national standards.

Conformity with the systemic factors is based on an evaluation of the information contained in the Statewide Assessment and the information collected in stakeholder interviews during the onsite review.

If a State fails to achieve substantial conformity with an outcome or systemic factor, it must submit a Program Improvement Plan that addresses the areas of non-conformity. In prioritizing areas to be addressed, the State must first address, in content and timeframes, those items that specifically affect child safety.

Conformity with the Outcomes

The seven outcomes assessed in the CFSR address aspects of children’s safety, permanency, and well-being and incorporate a total of 23 items. Each item reflects a key Federal program requirement relevant to the Child and Family Services Plan (CFSP) for that outcome. In conducting their case record review, reviewers use the Onsite Review Instrument (OSRI) to obtain an item rating of Strength, Area Needing Improvement, or Not Applicable for each individual item.

Together, an outcome's individual item ratings determine its outcome rating: Substantially Achieved, Partially Achieved, Not Achieved, or Not Applicable. When all the case record reviews have been completed, each review site's substantial conformity with the outcomes will be determined in one of two ways:

  • For every outcome except Safety Outcome 1 and Permanency Outcome 1: substantial conformity is determined by the percentage of cases reviewed on site in which the outcome was determined to be Substantially Achieved. If the outcome was rated as Substantially Achieved for at least 95 percent of cases, then that outcome is considered to be in substantial conformity. Note that the threshold for substantial conformity during round 1 was 90 percent; it was raised for round 2 in the spirit of continuous quality improvement that is the foundation of the CFSRs.

  • For Safety Outcome 1 and Permanency Outcome 1, the 95 percent threshold is still used. However, these outcomes also consider the State's performance on related data indicators and composites in order to determine substantial conformity. National standards were established for two data indicators for Safety Outcome 1 and four data composites for Permanency Outcome 1.
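The 95-percent threshold described above can be sketched as a simple calculation over the case ratings:

```python
# Minimal sketch of the 95-percent substantial conformity threshold described above.

def outcome_in_substantial_conformity(ratings: list[str], threshold: float = 0.95) -> bool:
    """An outcome is in substantial conformity if it was rated
    Substantially Achieved in at least 95 percent of reviewed cases."""
    achieved = sum(r == "Substantially Achieved" for r in ratings)
    return achieved / len(ratings) >= threshold

# 62 of 65 cases Substantially Achieved -> 95.4 percent -> substantial conformity
ratings = ["Substantially Achieved"] * 62 + ["Partially Achieved"] * 3
print(outcome_in_substantial_conformity(ratings))  # True
```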

Following the onsite review, the Children's Bureau uses data gathered through the Statewide Assessment and the onsite review to make determinations of substantial conformity with the outcomes for the State as a whole.

National Standards

The Children's Bureau uses six data indicators to determine substantial conformity with two outcomes: Safety Outcome 1 (two data indicators) and Permanency Outcome 1 (four data composites). 

If the State's data fail to meet national standards, the State is required to implement strategies in its Program Improvement Plan designed to improve the State's performance on each failed item or data indicator. In prioritizing areas to be addressed, the State must first address, in content and timeframes, those items that specifically affect child safety.

Safety Outcome 1

For Safety Outcome 1, the data indicators are individual measures:

  • Absence of maltreatment recurrence: Of all children who were victims of substantiated or indicated abuse or neglect during the first 6 months of the reporting year, what percent did not experience another incident of substantiated or indicated abuse or neglect within a 6-month period?
  • Absence of child abuse and/or neglect in foster care: Of all children in foster care during the reporting period, what percent were not victims of a substantiated or indicated maltreatment by foster parents or facility staff members?

Note that safety outcomes determined to not be in substantial conformity must be given priority by the State in its Program Improvement Plan.
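As an illustration, the first indicator above might be computed as follows; the victim records and field names are hypothetical, and the 6-month window is approximated as 183 days.

```python
# Illustrative computation of the absence-of-maltreatment-recurrence indicator
# defined above. Victim records and field names are hypothetical, and the
# 6-month window is approximated here as 183 days.
from datetime import date, timedelta

def absence_of_recurrence(victims: list[dict]) -> float:
    """Of children with a substantiated or indicated incident in the first
    6 months of the reporting year, the percent with no second substantiated
    or indicated incident within 6 months of the first."""
    no_recurrence = sum(
        1 for v in victims
        if v["second_incident"] is None
        or v["second_incident"] - v["first_incident"] > timedelta(days=183)
    )
    return 100.0 * no_recurrence / len(victims)

victims = [
    {"first_incident": date(2007, 2, 1), "second_incident": None},
    {"first_incident": date(2007, 3, 10), "second_incident": date(2007, 5, 1)},
    {"first_incident": date(2007, 1, 15), "second_incident": date(2007, 11, 1)},
]
print(absence_of_recurrence(victims))  # 2 of 3 with no recurrence, about 66.7
```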

Permanency Outcome 1

The four data indicators used for Permanency Outcome 1 are composite indicators. Each composite is a set of measures that assess a different aspect of performance with regard to a specific program area.

Each measure in the composite makes a unique contribution to the total composite score, which ranges from 50 to 150 (the higher the score, the better the performance). Having multiple measures in each composite, therefore, provides a more comprehensive portrait of State performance than could be obtained through a single measure. 
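The actual scaling methodology is not described here, but as a purely illustrative sketch, multiple measures might be combined into a single composite score on the 50-150 scale by a weighted average; the weights and measure scores below are hypothetical.

```python
# Purely illustrative: combine several measure scores (each already on the
# 50-150 scale) into one composite by weighted average. The weights and
# scores are hypothetical; the CFSR's actual scaling method is not shown here.

def composite_score(measure_scores: list[float], weights: list[float]) -> float:
    """Weighted average of measure scores already on the 50-150 scale."""
    assert len(measure_scores) == len(weights)
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(measure_scores, weights)) / total_weight

# e.g., four measures weighted equally
print(composite_score([120.0, 100.0, 110.0, 90.0], [1, 1, 1, 1]))  # 105.0
```

Because every measure carries some weight, improvement on any one measure raises the overall composite score, as the text notes.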

The four composite indicators used during round 2 are: 

  • Permanency Composite 1: Timeliness and permanency of reunification
  • Permanency Composite 2: Timeliness of adoption
  • Permanency Composite 3: Permanency for children and youth in foster care for long periods of time
  • Permanency Composite 4: Placement stability

Note that the national standards were established for each of the composites as a whole, not for the individual measures that make up each composite. Therefore, States are not expected to meet any specific standard for individual measures within a composite, but rather to achieve an overall performance level within the composite itself with the understanding that improvement on any given measure will result in an increase in the overall composite score.

Permanency Composite 1

Permanency Composite 1 is concerned with the timeliness and permanency of reunifications. It consists of two principal components, A and B, each of which contributes 50 percent to the composite's overall score.

Component A pertains to the timeliness of reunification and includes three separate measures. This allows for a broader picture of State performance in regard to reunifying children in a timely manner than would be possible with any single measure. Component B pertains to permanency of reunifications and includes a single measure.

Note that, for the purposes of CFSR data measures, "reunification" occurs if the child is reported to AFCARS as discharged from foster care and the reason for discharge is either "reunification with parents or primary caretakers" or "living with relatives." The composite excludes children who were in foster care for less than 8 days, and also includes an adjustment to length of stay for children whose last placement prior to their discharge for reunification was a trial home visit that lasted longer than 30 days.
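The cohort rules in this paragraph (the 8-day exclusion and the trial home visit adjustment) can be sketched as follows, assuming the trial home visit runs until the date of discharge; the function and parameter names are hypothetical.

```python
# Sketch of the cohort rules described above: exclude stays under 8 days and,
# when the last placement before a reunification discharge was a trial home
# visit lasting longer than 30 days, count length of stay only to the start
# of that visit. Names are hypothetical; the visit is assumed to run until discharge.
from datetime import date
from typing import Optional

def adjusted_length_of_stay_days(entry: date, discharge: date,
                                 trial_home_visit_start: Optional[date]) -> Optional[int]:
    """Return the adjusted length of stay in days, or None if the case is
    excluded (fewer than 8 days in foster care)."""
    if (discharge - entry).days < 8:
        return None  # excluded from the composite
    if (trial_home_visit_start is not None
            and (discharge - trial_home_visit_start).days > 30):
        return (trial_home_visit_start - entry).days  # count only to start of visit
    return (discharge - entry).days

print(adjusted_length_of_stay_days(date(2007, 1, 1), date(2007, 10, 1),
                                   date(2007, 6, 1)))  # 151
```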

Component A

Component A of Permanency Composite 1 consists of three separate measures, described below:

  1. Of all children discharged from foster care to reunification in the year shown, and who had been in foster care for 8 days or longer, what percent was reunified in less than 12 months from the date of the most recent entry into foster care?

This measure includes two groups of children in its calculation: those discharged from foster care to a reunification in less than 12 months after the date of their removal from home, and those discharged to a reunification who were reported to AFCARS as being placed in a trial home visit within 11 months or less of their removal and remained in that placement until their discharge.

  2. Of all children exiting foster care to reunification in the year shown, and who had been in care for 8 days or longer, what was the median length of stay (in months) from the date of the most recent entry into foster care until the date of reunification?

This measure assesses a particular child's length of stay in foster care in two different ways. First, it considers the length of stay in months from the date of removal from the home until the date of discharge to reunification. Second, it considers the length of stay in months from the child's date of removal from the home to the date that the child was reported to AFCARS as being placed into a trial home visit, assuming the trial home visit lasted longer than 30 days and was the child's last placement prior to his or her discharge from foster care. 

  3. Of all children entering foster care for the first time in the second 6 months of the year prior to the year shown, and who remained in foster care for 8 days or longer, what percent was discharged from foster care to reunification in less than 12 months from the date of first entry into foster care?

There are two categories of children included in calculating this measure. The first are children who entered foster care in the second 6 months of the year prior to the year shown who were then discharged to reunification in less than 12 months from their foster care entry date. The second are children who entered foster care in the second 6 months of the year prior to the year shown who were reported to AFCARS as being placed in a trial home visit within 11 months from the foster care entry date and remained in that placement until discharge to reunification.

Component B

Component B of Permanency Composite 1 includes a single measure: 

  1. Of all children exiting foster care to reunification in the year prior to the one shown, what percent re-entered foster care in less than 12 months from the date of discharge?

Permanency Composite 2

Permanency Composite 2 concerns the timeliness of adoptions. It consists of three principal components, A through C, each of which contributes 33.3 percent to the composite's overall score.
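The equal weighting can be pictured as a simple weighted sum. This is only a sketch: the actual CFSR methodology derives scaled component scores statistically, and the scores below are hypothetical values assumed to be on a common scale.

```python
# Each of the three components contributes one third (33.3%) to the composite.
weights = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# Hypothetical component scores on a common 0-100 scale.
component_scores = {"A": 92.0, "B": 85.5, "C": 78.0}

composite = sum(weights[c] * component_scores[c] for c in weights)
print(round(composite, 1))  # prints 85.2
```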

Component A pertains to the timeliness of adoptions of children exiting foster care to adoption. Component B measures progress toward adoption of a cohort of children who have been in foster care for 17 months or more and therefore meet the ASFA requirements for the State to file a termination of parental rights (TPR). Each of these two components includes two individual measures. 

Component C, which looks at the timeliness of adoptions of a cohort of children who are considered "legally free" for adoption, contains only one individual measure. 

Component A

Component A of Permanency Composite 2 consists of two separate measures concerning the timeliness of adoptions of children exiting foster care. Those measures are described below: 

  1. Of all children who were discharged from foster care to a finalized adoption in the year shown, what percent was discharged in less than 24 months from the date of the most recent entry into foster care?
  2. Of all children who were discharged from foster care to a finalized adoption in the year shown, what was the median length of stay in foster care (in months) from the date of the most recent entry into foster care to the date of discharge?

Component B

Component B of Permanency Composite 2 consists of two separate measures of progress toward adoption of a cohort of children who meet the ASFA time-in-foster care requirement. Those measures are: 

  1. Of all children in foster care on the first day of the year shown, and who were in foster care for 17 continuous months or longer, what percent was discharged from foster care to a finalized adoption before the end of the year shown?
  2. Of all children in foster care on the first day of the year shown, and who were in foster care for 17 continuous months or longer, what percent became legally free for adoption (i.e., a Termination of Parental Rights was granted for each living parent) in less than 6 months from the beginning of the fiscal year?

Component C

Component C of Permanency Composite 2 pertains to the timeliness of adoptions of children who are considered "legally free" for adoption. It includes a single measure:

  1. Of all children who became legally free for adoption during the prior year, what percent was discharged from foster care to a finalized adoption in less than 12 months of becoming legally free?

Remember that for a child to be considered "legally free," a termination of parental rights (TPR) must have been granted for each of his or her living parents.

Permanency Composite 3

Permanency Composite 3 is concerned with the achievement of permanency for children in foster care. It consists of two separate components, A and B, each of which contributes 50 percent to the composite's total score.

Component A, which includes two separate measures, looks at how well the State achieves permanency for children who spend extended periods of time in foster care. Component B includes a single measure and looks at children who grow up in foster care and exit to emancipation.

Component A

Component A of Permanency Composite 3 examines how well the State achieves permanency for children who spend an extended period of time in foster care. It consists of two individual measures:

  1. Of all children who were discharged from foster care in the year shown who were legally free for adoption (i.e., there was a termination of parental rights (TPR) for each living parent), what percent was discharged to a permanent home prior to their 18th birthday?
  2. Of all children who were in foster care for 24 months or longer on the first day of the year shown, what percent was discharged from foster care to a permanent home prior to their 18th birthday?

For both measures, a "permanent home" is defined as having a discharge reason of adoption, reunification (including living with relative), or guardianship.

Note that guardianship is included in this permanency assessment because, nationwide, only a very small percentage of children are discharged from foster care to guardianship. These small numbers prevent the effective use of a separate composite or measure focusing on the timeliness of achieving guardianship.

A 24-month period was chosen for both measures because, nationally, about 50 percent of children in foster care have been in foster care for two years or more. Using a 24-month period allows for the complete assessment of what happens to children in foster care during a 12-month period.

Component B

Component B of Permanency Composite 3 addresses children who grow up in foster care and exit to emancipation. It consists of a single measure:

  1. In the year shown, of all children who exited foster care with a discharge reason of emancipation prior to their 18th birthday, or who reached their 18th birthday while in foster care, what percent was in foster care for three years or longer?

Note that, in AFCARS, "emancipation" is defined as "the child reached maturity according to State law by virtue of age, marriage, etc." 

The 3-year time period used for this measure was selected to exclude from consideration those children who entered foster care at age 15 or older and then exited to emancipation. This takes into consideration the large variation among States in the age of children upon entry to foster care.

Permanency Composite 4

Permanency Composite 4 is the only composite that consists of a single component. It evaluates placement stability with three individual measures:

  1. Of all children in foster care during the year shown, and who were in foster care for at least 8 days but less than 12 months, what percent had two or fewer placement settings?

Note that if a child has been in care for longer than 8 days, any placement changes that took place within the first 8 days in foster care are considered in this measure.

  2. Of all children in foster care during the year shown, and who were in foster care for at least 12 months but less than 24 months, what percent had two or fewer placement settings?
  3. Of all children in foster care during the year shown, and who were in foster care for at least 24 months, what percent had two or fewer placement settings?

Note that measure 3 is used because the Children's Bureau believes that placement stability is as important to the well-being of children in foster care for 2 years or longer as it is for children who have spent only a few months in foster care.
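Taken together, the three measures bucket children by length of stay and ask the same question of each bucket. A sketch of that logic follows; the record fields, the sample data, and the use of 365 and 730 days to approximate 12 and 24 months are illustrative assumptions:

```python
# Hypothetical records: days in care during the year and number of placement settings.
children = [
    {"days_in_care": 100, "placements": 2},
    {"days_in_care": 400, "placements": 3},
    {"days_in_care": 800, "placements": 2},
    {"days_in_care": 5,   "placements": 1},   # under 8 days: outside all three measures
]

def pct_stable(records, min_days, max_days=None):
    """Percent of children in the given length-of-stay band with two or fewer placements."""
    band = [r for r in records
            if r["days_in_care"] >= min_days
            and (max_days is None or r["days_in_care"] < max_days)]
    if not band:
        return None
    return 100 * sum(1 for r in band if r["placements"] <= 2) / len(band)

print(pct_stable(children, 8, 365))    # measure 1: at least 8 days, under 12 months
print(pct_stable(children, 365, 730))  # measure 2: 12 to 24 months
print(pct_stable(children, 730))       # measure 3: 24 months or longer
```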

Conformity with the Systemic Factors

During the development of the Statewide Assessment, the Statewide Assessment Team compiles and evaluates information on the seven systemic factors. The State's Child and Family Services Plan (CFSP) and other program requirements provide the basis for determining substantial conformity with each systemic factor. During the onsite review, State and local review team leaders conduct State and local stakeholder interviews to collect the information they need to evaluate the systemic factors and determine substantial conformity. 

Each systemic factor's overall rating is based on the ratings of the individual items that make up the systemic factor. All of the systemic factors are rated based on multiple items except for one, "Statewide Information System," which is rated based on only one item. The items themselves represent key Federal requirements relevant to the State's CFSP or other programs, and can be rated as either a Strength or Area Needing Improvement.

The final determination of each systemic factor's substantial conformity considers whether:

  • the CFSP and other program requirements attached to this systemic factor are actually in place in the State, and
  • the CFSP and other program requirements attached to this systemic factor are functioning as described in the applicable regulation or statute.

For each systemic factor, the State receives a score on a 4-point scale. A score of 3 or 4 indicates that the State is in substantial conformity for that systemic factor; a score of 1 or 2 indicates it is not in substantial conformity. The scale is described in more detail below:

  • Score 1 (Not in Substantial Conformity): None of the CFSP or program requirements is in place.
  • Score 2 (Not in Substantial Conformity): Some or all of the CFSP or program requirements are in place, but more than one of the requirements fails to function as described in each requirement.
  • Score 3 (In Substantial Conformity): All of the CFSP or program requirements are in place, and no more than one of the requirements fails to function as described in each requirement.
  • Score 4 (In Substantial Conformity): All of the CFSP or program requirements are in place and functioning as described in each requirement.
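Read as a decision rule, the scale can be sketched as a small function. The argument names are illustrative, and the sketch assumes (per the score descriptions) that a score of 3 or 4 requires every requirement to be in place:

```python
def systemic_factor_score(total_requirements, in_place, not_functioning):
    """Score a systemic factor on the 4-point scale.

    in_place: number of CFSP/program requirements that are in place.
    not_functioning: how many of those fail to function as described.
    """
    if in_place == 0:
        return 1                                   # none in place
    if in_place < total_requirements or not_functioning > 1:
        return 2                                   # not in substantial conformity
    if not_functioning == 1:
        return 3                                   # all in place, one not functioning
    return 4                                       # all in place and functioning

# A score of 3 or 4 indicates substantial conformity.
assert systemic_factor_score(5, 5, 0) == 4
assert systemic_factor_score(5, 5, 1) == 3
assert systemic_factor_score(5, 5, 2) == 2
assert systemic_factor_score(5, 0, 0) == 1
```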

Two of the seven systemic factors use slightly different methods for determining substantial conformity: the statewide information system and the quality assurance system.

Statewide Information System

The systemic factor, "statewide information system," has only one CFSP requirement subject to review. If it is determined during the onsite review that this requirement is in place but not functioning as required, the factor will receive a rating of 2, or "Not in Substantial Conformity," rather than 3. 

Quality Assurance System

There are two performance indicators, or items, associated with the "quality assurance system" systemic factor:

  • Item 30: Standards Ensuring Quality Services, and
  • Item 31: Quality Assurance System.

For this systemic factor to be in substantial conformity, it must be rated as a 3 or 4. To earn a "4" rating, both items must be in place in the State and functioning as required. 

To earn a "3" rating, both items must be in place and item 31 must be functioning as required. Item 30 does not need to be functioning as required for the systemic factor to be found in substantial conformity.

If, however, item 31 is not in place or is not functioning as required, the systemic factor must be rated either a 1 or 2 depending on the State's performance on item 30. If item 30 is in place but not functioning, the factor will be rated a 2. If item 30 is neither in place nor functioning, the factor is rated a 1.
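These rules amount to a small decision table, sketched below. One case is not spelled out in the text above (item 31 fully satisfactory but item 30 not in place); treating it as a 2, on the ground that a 3 requires both items to be in place, is an assumption noted in the comments:

```python
def qa_factor_rating(item30_in_place, item30_functioning,
                     item31_in_place, item31_functioning):
    """Rating for the 'quality assurance system' systemic factor."""
    item31_ok = item31_in_place and item31_functioning
    if item31_ok and item30_in_place:
        # Both items in place and item 31 functioning:
        # 4 if item 30 also functions as required, otherwise 3.
        return 4 if item30_functioning else 3
    if item31_ok:
        # Item 31 fine but item 30 not in place: not covered by the text;
        # assumed to be a 2 because a 3 requires both items in place.
        return 2
    # Item 31 not in place or not functioning: 2 if item 30 is in place, else 1.
    return 2 if item30_in_place else 1

assert qa_factor_rating(True, True, True, True) == 4    # substantial conformity
assert qa_factor_rating(True, False, True, True) == 3   # substantial conformity
assert qa_factor_rating(True, False, True, False) == 2
assert qa_factor_rating(False, False, False, False) == 1
```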

Program Improvement Plan

State child welfare agencies should involve their leadership, staff, and external partners to assess their CFSR findings, form a comprehensive picture of the State’s child welfare system, and further identify areas of strength as well as those needing improvement. This comprehensive picture, in turn, should be used by the State to inform the development of the State’s Program Improvement Plan (PIP), along with other information at its disposal. The PIP process has been found to be most effective when it is integrated into the collaborative planning process that States use to develop their 5-year Child and Family Services Plan (CFSP). 

The CFSR reform framework is intended to create accountability in child welfare through ongoing, effective partnerships between the Federal and State governments. The CFSR and PIP processes are designed to focus child welfare agencies on broad reform efforts that include, but also go beyond, the immediate details of day-to-day practice. The overarching goal of the PIP process is to enable States to use CFSR findings to design initiatives that will result in program improvement and better outcomes for children and families. PIP content will consist of specific strategies and measurement plans designed to facilitate the successful completion of each PIP.

For your convenience, this training module also contains a separate page of PIP resources, which are documents and tools that further explain or assist the PIP process. Many of these resources are also linked separately throughout the module itself.

PIP Process

The overall Program Improvement Plan (PIP) process consists of three general phases:

  • PIP Development and Approval. During this phase, the State and Children’s Bureau work together to develop the content of the State’s PIP and the State submits it to the Children’s Bureau for approval. This phase can begin as early as the Statewide Assessment; the State must submit the PIP within 90 days of receiving its courtesy copy of the Final Report of review findings from its onsite CFSR review. This courtesy copy is generally delivered to the State within 30 days of the statewide exit conference that officially completes the onsite review.
  • PIP Implementation. In this phase, the State implements all activities contained in its PIP, including its goals, primary strategies, action steps, benchmarks, and the measurement plan. During this time the State is required to submit to the Regional Office quarterly reports outlining the benchmarks and action steps completed and the evidence of completion for each as identified in the PIP. The State also uses quarterly reports to identify measurement plan activities that have been completed, and provides updated results as identified in the measurement plan. Also during this time, the State may receive technical assistance (TA) as necessary and may enter into discussions with the Children’s Bureau concerning the renegotiation of certain components of the PIP. The PIP implementation period is 2 years.
  • PIP Evaluation and Final Determination of Conformity. The focus of PIP evaluation is to ensure that the State has completed all action steps and benchmarks, and has achieved the approved amount of improvement in all National Standard and item-specific measures specified in its approved PIP. Once the PIP is evaluated and confirmed as completed by the Children’s Bureau, the State will receive official notification that its PIP has been closed out. During the subsequent CFSR process, the State will once again be evaluated for conformity with the CFSR outcomes and systemic factors.

While the PIP process “officially” begins after completion of the onsite review, the PIP process itself is not isolated and linear, but rather follows the cyclical CFSR process where each step informs and leads to the next. 

The PIP process is circular in nature

In this cyclical process, the Statewide Assessment is the precursor to the onsite review. The onsite review culminates in the statewide exit conference, where preliminary findings for the review are presented. Although each State is encouraged to begin developing its PIP during or after completing its Statewide Assessment, it is not until after the release of the Final Report that the PIP can be finalized, approved for implementation, and then evaluated for a final determination of conformity. Upon completion of its PIP, the State is then ready to begin the process anew with another Statewide Assessment and the next round of the CFSRs.

PIP Development and Approval

After the CFSR onsite review has concluded and all applicable data and information have been analyzed, the Children’s Bureau prepares a written Final Report to the State on the findings informing the State whether it is, or is not, operating in substantial conformity. This Final Report includes a cover letter that estimates the Federal funds that are to be withheld from the State as a financial penalty for failure to achieve substantial conformity and the date by which the State must submit its PIP. The Children’s Bureau provides the State a courtesy copy of the cover letter and Final Report within 30 days of the statewide exit conference that marks the official end of the onsite review.

Note that the “courtesy copy” is a final draft that provides advance notice to the State of the review findings before the findings are made public. It serves as the written notice to the State of the determination of substantial conformity. After reviewing the courtesy copy and cover letter, the State and the Children’s Bureau work together to finalize any issues or revisions. The Children’s Bureau Regional Office then issues the official Final Report and cover letter to the State approximately 2 weeks after the courtesy copy.

The State must submit its Program Improvement Plan (PIP) for approval within 90 days of receiving the courtesy copy of the Final Report. In preparing the PIP for approval, the State child welfare agency must involve staff and external partners to ensure that all stakeholders have collective ownership in the document and address the most meaningful priorities for the child welfare system as a whole. States should also make use of the tools provided by the Children’s Bureau, including technical assistance (TA) opportunities and the PIP Matrix, a suggested format developed by the Children’s Bureau for States to use in organizing their PIP content.

To be a useful working tool for creating systemic change, a PIP should be manageable and include clear goals and strategies, measurable action steps with time frames, realistic benchmarks that can be used to gauge progress, and the negotiated improvement that the State will make toward meeting the national standards and the item-specific measurements. Additionally, a properly and thoroughly developed PIP will generally do all of the following:

  • Be theme-based, providing a system for integrating the action steps across items, outcomes, and systemic factors.
  • Build on what the State learned through its Statewide Assessment and the onsite review.
  • Provide for the engagement of the agency’s leadership and upper management throughout the PIP implementation and monitoring process.
  • Identify the individuals responsible for the program improvement action steps, the measurement process, and the review process.
  • Identify opportunities for stakeholder involvement.

After the State completes and submits its PIP to the Children’s Bureau for approval, the Bureau reviews it and either accepts it as submitted or returns it to the State with comments.

Once the Children’s Bureau has approved the State’s PIP, it provides the State with an approval notification that identifies the target completion date for the PIP. The State signs this notification and forwards it to the Children’s Bureau Central Office. At this point, the State’s 2-year PIP implementation period begins. All financial penalties are placed on hold while the State implements its PIP.

If the Children’s Bureau does not approve the State’s PIP, it sends the State a written notification detailing the basis for the disapproval as well as a target date for resubmission by the State, which should be within 30 days. The State’s resubmission must address the areas that resulted in disapproval of the PIP. The PIP is then subject to another round of review and comment by the Children’s Bureau. If the State does not submit an approvable PIP within the specified time frame, then the financial penalties outlined in the cover letter may be reinstated.

PIP Implementation

Once the State’s Program Improvement Plan (PIP) receives Children’s Bureau approval, the State has 2 years to implement it. During this period, all financial penalties that were to be assessed against the State for failure to achieve substantial conformity are put on hold. Using the technical assistance (TA) as defined in its PIP, the State is then responsible for implementing all of the action steps it established to achieve its strategies and for completing the benchmarks it defined to measure its progress.

Over the course of the implementation period, the State is responsible for submitting quarterly progress reports to the Children’s Bureau. These progress reports are part of the ongoing PIP monitoring and evaluation process that enables the State and Children’s Bureau to verify that action steps have been completed and benchmarks are being met in a timely and effective way. At least annually, the Children’s Bureau Regional Office and the State jointly evaluate the State’s progress in implementing the PIP. It is possible for the State and Children’s Bureau to enter into discussion concerning the renegotiation of PIP benchmarks, action steps, and other contents during the implementation period.

If exceptional circumstances arise that delay the completion of the PIP within the 2-year time frame, States may request up to a 1-year extension of the PIP’s completion time. However, granting such an extension is rare and requires that the State submit a written request that provides a compelling reason for the extension, along with supporting documentation. Such an extension request must be received by the U.S. Department of Health and Human Services (HHS) at least 60 days before the approved PIP completion date, and is subject to approval by the Secretary of HHS.

Once the PIP implementation period has been completed, the State may enter a “non-overlapping year” evaluative period. This period provides the State with one additional year of data, after the 2-year PIP implementation period, which can be used for the final evaluation of the successful completion of the PIP measurement plan.

PIP Technical Assistance (TA)

The Children’s Bureau encourages every State developing a Program Improvement Plan (PIP) to include a technical assistance (TA) plan that defines how the State will effectively use TA to achieve its goals. There are a variety of Federal and non-Federal TA resources available to States to assist them throughout the PIP process.  Typically, the Children’s Bureau collaborates with the State during its PIP preparation to discuss specific TA needs. In many cases, a State requires multiple TA providers to meet its varied needs. The TA strategies that the State develops should be designed not as one-time events intended to help achieve immediate PIP goals, but rather as long-term, capacity-building efforts.

Federal TA providers include 11 National Resource Centers (NRCs) funded by the Children’s Bureau, which are organized under the Training and Technical Assistance (T/TA) Network. The Training and Technical Assistance Coordination Center (TTACC) ensures that TA assistance from the NRCs is provided to States in response to review findings and serves as a single point of coordination for individualized, onsite or offsite TA services from multiple providers. In addition, in 2008 the Children's Bureau established five regional Implementation Centers to work with States and Tribes in implementing strategies to achieve sustainable, systemic change for greater safety, permanency, and well-being for children, youth, and families.

Non-Federal TA providers generally include local or State universities or other nonprofit entities that can contribute consultant expertise to the State child welfare agency. At the same time, their assistance may result in strengthened ties between the agency and community.

PIP Renegotiation

The State may ask to renegotiate its Program Improvement Plan (PIP) with the Children’s Bureau Regional Office, as needed, especially when implementing complex strategies. This renegotiation may be considered to revise action steps or modify measurement plans (see PIP Content).

Requests for changes to the PIP should be submitted in writing (or electronically) to the Children’s Bureau Regional Office for approval. Contact information for each Regional Office is available online at: http://www.acf.hhs.gov/programs/oro.

Once a request has been received, the Children’s Bureau Regional Office Team Leader then contacts the State to discuss the issues leading to the request, the specific changes proposed, and the rationale for the adjustment. The Children’s Bureau Regional Office and State, in consultation with the Children’s Bureau Central Office, may then renegotiate the PIP as needed, but the new plan must meet the following criteria:

  • The renegotiated PIP must be designed to correct the areas of the State’s program determined not to be in substantial conformity or to achieve an acceptable standard for the data indicators.
  • Any action steps that are renegotiated in the PIP must still be completed within the allowable 2-year time frame for PIP implementation.

The terms of the renegotiated PIP must be approved by the Children’s Bureau Regional Office in consultation with the Children’s Bureau Central Office.

PIP Evaluation and Final Determination of Conformity

The focus of Program Improvement Plan (PIP) evaluation is to ensure that the State has completed all action steps and benchmarks identified in its primary strategies. Evaluation is also intended to ensure that the State has achieved the approved amount of improvement in all national standard and item-specific measurements identified in its approved PIP.

The evaluation phase may actually begin as early as the PIP development and approval process, when new results from comparisons against national standards may become available from ongoing Federal data submissions. Evaluation can last through what is referred to as a “non-overlapping data year” that follows the conclusion of the PIP implementation period. This additional year is allowed for the measurement plan element only, allowing time for results to be demonstrated following the implementation of the PIP’s action steps.

The State and Children’s Bureau Regional and Central Offices work collaboratively to complete the ongoing process of PIP evaluation and, ultimately, make a final determination of conformity. This overall process is accomplished through ongoing PIP measurement that takes into account the State’s progress through its various goals and their primary strategies, action steps, and benchmarks. It also considers the State’s progress toward meeting its improvement goals in the safety and permanency national standards and for each measured item in the PIP.

The State generally provides these measurements through regular quarterly reports, but the Children’s Bureau may also conduct an annual review to confirm that specific activities have been completed or measurement targets achieved.

Once the Children's Bureau has evaluated the PIP and confirmed that it is complete, the State receives official notification that its PIP has been closed out. If the PIP is completed successfully, then the financial penalties assessed against the State for failure to achieve substantial conformity are rescinded. The State’s ongoing conformity with CFSR requirements will then be evaluated in a subsequent CFSR process. However, if the State fails to successfully complete its PIP, those financial penalties will be assessed against the State and remain in effect until the next CFSR. 

PIP Content

Each State must work collaboratively with the Children’s Bureau to prepare its Program Improvement Plan (PIP). For each outcome, systemic factor, and national standard that is not in substantial conformity (as identified in the Final Report), the State must work in conjunction with the Children’s Bureau to specify the broad, measurable goals of improvement that it will use to address those areas in which it failed to achieve substantial conformity.

Some examples of overarching goals that States have used in their PIP documents include the following:

  • Conduct child risk and safety assessments throughout the life of the case
  • Expedite permanency for children
  • Promote family engagement
  • Recruit and retain foster homes
  • Increase access to service delivery systems for children and/or youth

In determining the specific issues to address, the State must give first priority, in both level of effort and time frame, to those items and outcome areas that affect child safety. Second priority goes to the remaining areas that were the furthest from achieving substantial conformity. However, all items and outcomes that were determined by the onsite review to be out of conformity with Federal requirements must be addressed in the State’s PIP.

With broad goals in place, the PIP document must then address the primary strategies that will be used to achieve those goals as well as the action steps required to implement each strategy. It must also identify any technical assistance (TA) that will be required to achieve each strategy. Furthermore, the PIP must address issues of measurement – specifically how the agency will measure benchmarks of progress toward completing the action steps and the measurement goals, or percentage improvement, that will be used to evaluate the impact of each strategy.

When complete, the PIP document will consist of four main components that provide sufficient detail and context to ensure that the Children’s Bureau and State agency staff have a clear understanding of issues and steps and can work in partnership throughout the PIP process:

  1. A general information section with key contact information.
  2. A recommended Program Improvement Plan Strategy Summary and TA Plan that provides information on the primary strategies and TA that the State intends to use to achieve its improvement goals.
  3. An agreement form indicating approval of the PIP by the Children’s Bureau and the State that establishes the PIP’s implementation date.
  4. A work plan that includes the Strategy, National Standards, and Item-Specific and Quantitative Measurement Plans, which describes action steps for each primary strategy and the benchmarks to be used to track their progress, specifies the safety and permanency national standards baseline performance and percentage of improvement, and identifies the item-specific measurement baseline performance and goals.

The PIP will also include a schedule for submitting regular progress reports to the Children’s Bureau as part of the PIP’s implementation.

The Children's Bureau has developed a suggested format, the PIP Matrix, which States can use to organize their PIP content. The PIP Matrix is available for download on the Children’s Bureau Web site at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf. It is also available in the PIP Resources section of this module.

Primary Strategies

The State must document its PIP's primary strategies as the broad approaches that address the key concerns from the CFSR and serve as a framework for the achievement of the PIP's goals and negotiated measures. These strategies should reflect the overarching reforms and continuing improvements that address key concerns from the CFSR Final Report. They may build on prior PIP activity, and should be integrated with the time frames of other State plans, such as the CFSP.

Wherever possible, the PIP strategies should be thematic in nature, perhaps integrating multiple outcomes and systemic factor items to address broad areas of concern. For example, a primary strategy of “Implement a Systems of Care Practice Approach” could affect multiple areas of concern linking to OSRI outcomes and systemic factor items, such as:

  1. Inconsistency of child safety services in in-home cases (Safety Outcome 2)
  2. Ineffectiveness in addressing needs and services of families and foster parents (Child and Family Well-Being Outcome 1, as well as Service Array and Resource Development)
  3. Inconsistency in involving children and families in case planning (Child and Family Well-Being Outcome 1)
  4. Inadequate staff training program for case practice skills (Staff and Provider Training)

Note that when multiple areas are addressed by a single strategy, the State should identify the outcome or systemic factor that is most directly affected by the strategy and should avoid linking the same outcomes or systemic factors to more than one key strategy.

When developing strategies that affect front-line practice, the State should be guided by the principles of family-centered practice, community-based services, individualizing services that address the unique needs of children and families, and strengthening parents’ capacity to protect and provide for their children. In some situations, a State may need to review and revise its policies and procedures to ensure that they reflect these principles and that practice is consistent with policy. 

PIP Measurement

The State must incorporate three separate measurement plans into its ongoing PIP evaluation process:

  • The Strategy Measurement Plan, where the State outlines its goals, primary strategies, action steps, and benchmarks.
  • The National Standards Measurement Plan, where the State identifies the safety and permanency national standards and progress toward meeting its improvement goals in these areas.
  • The Item-Specific and Quantitative Measurement Plan, where the State enters and reports information for each CFSR item that is to be measured in its PIP.

Each of these measurement plans uses quarterly status reports to facilitate an ongoing dialogue between the State and Children’s Bureau Regional and Central Offices.

Strategy Measurement Plan

The Strategy Measurement Plan, and its corresponding quarterly status report, is where the State outlines the goals and strategies of its Program Improvement Plan (PIP). Each primary strategy must include measurable action steps that the State can take toward improvement, and not simply suggest further study of issues identified through the CFSR. These action steps are specific activities that will be undertaken to accomplish the strategy, and each action step should be designed to generate specific program improvements.

Along with all of the other required information, States should use the Strategy Measurement Plan to detail the specific documents, reports, or other items of confirmation that can be used to provide the Children’s Bureau Regional Office with evidence of progress and evidence of completion. For example, for the benchmark: Convene work group comprising families that receive services, front-line child welfare staff, and other key partners to guide development of the plan, the evidence of completion might be: Copy of meeting minutes and list of work group participants.

The PIP document must also include a reasonable time frame for the completion of each action step, as well as the technical assistance (TA) required to support each step. The PIP should also identify the individual or individuals responsible for undertaking each action step, as well as the geographic area or areas of the State in which each action step will be undertaken. While these last two elements are not regulatory requirements, they should be included in the document whenever possible to ensure that all of the components required for successful program improvement are deployed as planned throughout the State.

Benchmarks are measurable indicators used by the State and Children’s Bureau to monitor progress in completing action steps. Because PIP evaluation and monitoring must occur throughout the process, and not simply at the end of the implementation period, States should establish realistic, measurable benchmarks for each action step to serve as interim, periodic measures of progress. These benchmarks can be quantitative (number-oriented) or qualitative (process-oriented) in nature, and are designed to help measure incremental progress toward completing the strategies which, in turn, lead toward achieving the final improvement goals.

States should use their quarterly status reports to enter and report information regarding each action step or benchmark that is due during each quarter, and note any evidence of completion. The Children’s Bureau Regional Office will determine, based on its review of a State’s report, whether the action step or benchmark has been completed satisfactorily or is incomplete. If an action step is past due, the State should explain the reason in the plan with a revised completion date. The Children’s Bureau will then review the explanation and revised date and either accept the extended due date or flag the action step for renegotiation. 
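The quarterly review logic described above can be sketched in code. This is an illustrative model only, not an official Children's Bureau tool; the status labels, quarter numbering, and field names are hypothetical:

```python
# Hypothetical sketch of the quarterly status-review logic: an action
# step is either completed, on track, accepted with a revised date, or
# flagged for renegotiation. All names and statuses are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ActionStep:
    description: str
    due_quarter: int
    completed: bool = False
    revised_due_quarter: Optional[int] = None  # set when the State proposes a new date

def quarterly_status(step: ActionStep, current_quarter: int) -> str:
    if step.completed:
        return "completed"
    if current_quarter <= step.due_quarter:
        return "on track"
    # Past due: a revised completion date may be accepted; otherwise the
    # step is flagged for renegotiation.
    if step.revised_due_quarter is not None and step.revised_due_quarter > current_quarter:
        return "extended"
    return "flag for renegotiation"

step = ActionStep("Convene work group", due_quarter=2)
print(quarterly_status(step, current_quarter=3))  # flag for renegotiation
```

In practice this determination is made by the Children's Bureau Regional Office based on its review of the State's report, not by an automated rule; the sketch only makes the decision points explicit.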

National Standards Measurement Plan

The National Standards Measurement Plan, and its corresponding quarterly status report, is where the State identifies the safety and permanency national standards, along with its performance in those areas as measured by both the Final Report and baseline. The State also uses the National Standards Measurement Plan and its quarterly status updates to identify the negotiated, and any renegotiated, improvement goals in its Program Improvement Plan (PIP).

To establish the level of needed progress regarding national standards, each State must first work with the Children’s Bureau to define a percentage of improvement that will be made in each standard found to be out of substantial conformity. The selection of national data indicators that must be addressed in the PIP, the time period that may qualify as the baseline period for measurement purposes, and the required amount of improvement are all determined pursuant to the provisions of Technical Bulletin #3, Amended (dated October 8, 2009). Technical Bulletin #3 is available online at: http://www.acf.hhs.gov/programs/cb/resource/afcars-tb3.

For other outcomes found to be out of substantial conformity, the Children’s Bureau and the State must work together to determine the most realistic way of measuring goal attainment. If the State is using the PIP Matrix for its quarterly reporting, it must enter the status of the data indicator for each reported quarter.
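The arithmetic of a negotiated improvement goal can be illustrated with a minimal sketch. This is one plausible model only (the actual selection of baselines and required amounts of improvement is governed by Technical Bulletin #3, Amended), and all figures below are hypothetical:

```python
# Hypothetical sketch: deriving an improvement goal from a baseline
# measurement and a negotiated percentage of improvement on the
# remaining gap. Figures are illustrative, not actual CFSR values.

def improvement_goal(baseline_pct: float, negotiated_improvement_pct: float) -> float:
    """Raise the baseline by the negotiated share of the gap to 100%."""
    gap = 100.0 - baseline_pct
    return round(baseline_pct + gap * (negotiated_improvement_pct / 100.0), 1)

# Example: a State performing at 82.0% on a hypothetical measure
# negotiates a 10% reduction of the remaining gap.
print(improvement_goal(82.0, 10.0))  # 83.8
```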

Item-Specific and Quantitative Measurement Plan

The Item-Specific and Quantitative Measurement Plan, and its corresponding quarterly status report, is particularly important because it allows States to enter and report information regarding each CFSR item that is to be measured in its Program Improvement Plan (PIP), including the:

  • status of the item in the Final Report;
  • performance as measured for the established baseline and the source data period for the measure;
  • negotiated improvement goal;
  • method of measuring improvement; and
  • renegotiated improvement goal, if applicable. 

The selection of item-specific measures that must be addressed in the PIP, how the baselines are established, and what the approved amount of improvement is to be are determined pursuant to the provisions of Technical Bulletin #3, Amended (dated October 8, 2009). Technical Bulletin #3 is available online at http://www.acf.hhs.gov/programs/cb/resource/afcars-tb3.

To demonstrate improvement in item-specific measures, the Children’s Bureau encourages the use of State-generated data from the State's own quality assurance (QA) or continuous quality improvement (CQI) systems, or from its Management Information Systems. To measure the degree of improvement in items, the State may use one of four methodologies:

  1. Retrospective data using collected findings that represent 12 months of data;
  2. Prospective data collected during the PIP implementation period after 12 months of data becomes available to establish the baseline;
  3. Data from national standard composite individual measures; or
  4. Data collected from the State SACWIS or other Management Information System.

The State may also propose another methodology for approval by the Children’s Bureau.

Note that if the State is using the PIP Matrix for its quarterly reporting, it must enter the status of the item for each reported quarter. The PIP Matrix is available for download on the Children’s Bureau Web site at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf. It is also available in the PIP Resources section of this module.

PIP Resources

The following resources have been developed to further explain or assist with the overall PIP Process.

  • PIP Matrix: The PIP Matrix document is in a standard format developed by the Children’s Bureau and designed to assist States in preparing PIPs for submission to the Children’s Bureau Regional Office. While it was developed to facilitate ease of review, approval, and tracking of State PIPs, it is not mandatory and States may choose to use a different format. However, all PIPs must include the information required by regulation at 45 CFR 1355.35. The PIP Matrix is available for download at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf.
  • Procedures Manual: The Child and Family Services Reviews Procedures Manual offers an overview of the purpose and structure of the reviews, as well as detailed information on planning for and conducting the reviews. The manual also discusses the Final Report and the PIP process. It is designed to assist Children’s Bureau staff and State child welfare agencies in planning for, and participating in, a CFSR. State agency administrators are strongly encouraged to share the manual with agency staff who will play active roles in the State’s CFSR, including Local Site Coordinators. The Procedures Manual is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-procedures-manual.
  • Program Improvement Plan Annual/Quarterly Status Report Form: This form is designed to be used by Children's Bureau Regional Office staff to evaluate PIP progress for States that are not using the new matrix in developing their PIPs. For States that are using the new matrix, Regional Offices should use the new matrix to evaluate State PIPs. The new matrix is available through Information Memorandum 07-08, which is available for download at http://www.acf.hhs.gov/programs/cb/resource/im0708.
  • Technical Bulletin (TB) #3: This Technical Bulletin (amended) pertains to the Children’s Bureau’s approach to determining and approving degrees of improvement and attainment of goals for PIPs during Round 2 of the Child and Family Services Reviews (CFSRs). It is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-amended-technical-bulletin-3.
  • Technical Bulletin (TB) #4: This Technical Bulletin contains updated general instructions for PIP monitoring, evaluation, and renegotiation. It includes technical information for States and Children's Bureau Regional Offices about monitoring, reporting, and using a matrix spreadsheet for PIP submissions. It is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-technical-bulletin-4.

In addition, the Children’s Bureau has released a number of Information Memoranda (IMs) that speak to the PIP process. These IMs include:

  • IM 09-01: Measuring Round One Program Improvement Plan (PIP) Improvement for Child and Family Services Reviews (CFSRs) Using Round Two Revised National Standards. This IM provides guidance on measuring Round 1 PIP improvement for the CFSRs using Round 2 revised national standards. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0901
  • IM 07-05: Measuring Program Improvement Plan (PIP) Improvement for the Child and Family Services Reviews (CFSRs) National Standards. This IM provides information on measuring PIP improvement for the CFSR national standards. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0705.
  • IM-02-04: Guidance and Suggested Format for Program Improvement Plans in Child and Family Services Reviews. This IM provides guidance and a suggested format for PIPs. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0204.
  • IM-01-07: Updated National Standards for the Child and Family Services Reviews and Guidance on Program Improvement Plans. This IM provides information and guidance for use by States and Regional Offices on updated national standards as well as guidance on PIPs. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0107

Onsite Review Team

The CFSR Onsite Review Team comprises both Federal and State staff, with trained consultant reviewers supplementing the Federal component of the team. Federal staff, in consultation with State agency officials, select the Federal and consultant reviewers. State agency officials, in consultation with Federal staff, choose the State Review Team members, who may be State agency staff or external representatives.

The overall review team is divided into four local site teams that are based at three sites around the State. Two of these teams are located at the State's major metropolitan site, and one team is located at each of the other two sites. Each of these local site teams includes a group of Local Site Leaders who work with and coordinate the reviewers.

Overseeing and coordinating all four local site teams is a group of Team Leaders. These Team Leaders include Federal and State representatives whose job is to coordinate the entire review week and prepare for the Friday exit conference.

Generally speaking, all members of the Onsite Review Team are expected to:

  • Work as partners regardless of affiliation
  • Fully participate in the review process
  • Maintain a professional demeanor at all times
  • Maintain confidentiality of case-specific information
  • Treat all review team members with respect and as valued team members
  • Complete all activities thoroughly and promptly
  • Be available for all review activities unless, for example, an interview conflicts with a debriefing session, and the Federal Local Site Leader has approved the absence in advance
  • Talk to the Federal Local Site Leader to determine whether other tasks need attention
  • Respect the team leadership and discuss any differences of opinion with the Federal Local Site Leaders in private
  • Set a positive example for other review team members
  • Be prepared to work extended hours

The Federal Review Team also may include Children’s Bureau Regional Office staff from Regions other than the one responsible for the State being reviewed and State child welfare staff from States other than the State being reviewed. These are referred to as cross-State participants.

Local Site Leaders

There will typically be between four and six Local Site Leaders (or, simply, Site Leaders) allocated to each local site team. These individuals conduct interviews with local stakeholders, and they support reviewers by answering questions regarding the instrument, assisting in case reviews as necessary, and conducting quality assurance reviews of completed instruments. The Local Site Leaders also assist in preparing for the Thursday local site exit conference and collaborate with the Team Leaders to prepare for the Friday statewide exit conference.

There are three types of Site Leaders who will be present at each local site: NRT Local Site Leaders, Federal Local Site Leaders, and State Local Site Leaders. Each site will have one NRT Local Site Leader and one State Local Site Leader, who share overall leadership responsibilities, and two or more Federal Local Site Leaders who assist them.

NRT Local Site Leader

NRT Local Site Leaders are Federal representatives from the National Review Team (NRT). There are four NRT Local Site Leaders assigned to each State Review Team, one for each local site team (two at the State's major metropolitan site and one at each of the other two sites). Each provides, in collaboration with the State Local Site Leader, overall leadership for the local site.

In addition to general leadership responsibilities that include overseeing and coordinating site activities, ensuring that daily time and task requirements are met, and problem-solving, specific duties of the NRT Local Site Leader during the review week include:

 

Federal Local Site Leader

Federal Local Site Leaders are Federal representatives who assist in providing leadership at each local site. There will usually be eight Federal Local Site Leaders assigned to each State's review team, distributed equally among the four local site teams. In some cases, the individuals filling these roles may be high-performing consultants who have served on multiple CFSRs and received advanced training designed to prepare them for leadership roles.

While the specific duties of Federal Local Site Leaders will vary from site to site, they generally will be expected to:

State Local Site Leader

State Local Site Leaders are State agency representatives who serve as the State’s lead representatives for the review team at each local site. There are four State Local Site Leaders on each State's review team, one for each local site team (two at the State's major metropolitan site and one at each of the other two sites). The State Local Site Leaders work closely with the NRT Local Site Leaders to provide overall site leadership and share most of the same responsibilities during the review week. During stakeholder interviews, though, State Local Site Leaders typically participate as note-takers rather than interviewers.

Reviewers

The reviewers who use the OSRI to conduct case record reviews at each local site work in pairs. Generally, each review pair consists of one State and one Federal representative. The State representative is typically a child welfare agency staff person or representative of the agency’s external partners in the CFSR planning process. The Federal reviewer is normally a Federal agency representative or a specially trained consultant with skills and experience in the child welfare field.

There are usually six or seven review pairs at each local site (which means 12 to 14 reviewers). Each review pair typically reviews two or three cases during the review week. Although each review pair's primary responsibility while on site is to complete one of their assigned case record reviews per day, there are other responsibilities, as well. These other responsibilities include:

For reviewers, an important task early in the review week is forming a good working relationship with your partner. While it is important to begin working on your assigned cases as quickly as possible, you should take a few minutes before you begin to review your first case to get to know one another. As you and your partner work through your first case, you’ll discover that you each have different strengths. By acknowledging these, you can make the case review process more efficient.

For example, because different States organize their cases differently, a Federal reviewer may not be as familiar with specific forms or case file organization as the State reviewer, but he or she may be much more familiar with inputting data into the automated application. You may, therefore, find it beneficial to divide work responsibilities accordingly: the State reviewer might lead the initial case record review, while the Federal reviewer handles data entry.

It is important that review pairs recognize when they are not moving through a case efficiently or are having disagreements with one another about how to proceed. In these cases, the review pair should consult a Local Site Leader for guidance before a problem becomes a crisis. Furthermore, review pairs who complete their assigned case record reviews early should be prepared, at the direction of the Local Site Leader, to assist other review pairs in their own case record reviews. In short, successful review pairs must work well together, must recognize when they need guidance, and must assist the rest of the team as necessary.

Team Leaders

The State Review Team's Team Leaders are the individuals responsible for coordinating among all four of the State's local site teams. They play a key role in every aspect of the onsite review and stay in close contact with each local site's NRT Local Site Leader and State Local Site Leader. They are specifically responsible for assembling and facilitating the Friday statewide exit conference and can also play an important role in the quality assurance process. They also handle all State-level interviews for the Stakeholder Interview Guide.

There are typically three Team Leaders for each State's review: an NRT Leader, a Regional Office Leader, and a State Leader.

NRT Leader

The National Review Team (NRT) Team Leader is a Federal agency representative who provides overall leadership for the onsite review and is a member of the NRT. The NRT comprises staff from the Children’s Bureau Central and Regional Offices who provide leadership to the review teams in planning and conducting the CFSRs.

Regional Office Leader

The Regional Office Team Leader is a Children’s Bureau Regional Office representative who assists in providing overall leadership for the onsite review.

State Leader

The State Team Leader is a State agency representative who serves as the State’s lead representative for the onsite review.