Section 5: Round 2 Archives

This section contains archived e-training modules from Round 2 of the Child and Family Services Reviews (CFSRs). It is intended to provide background and historical information about the reviews and review process. The first module, CFSR Background, provides general background information on the philosophical context and structure of the CFSRs. It also includes an overview of the onsite review team's composition.

The second module, The Review Week, provides a detailed look at the events that take place during the week of an onsite review, beginning with the arrival on site of the review team and the Monday morning team meeting, and ending with the Friday statewide exit conference. 

The third module, Data Integrity and Quality Assurance, provides an in-depth overview of the seven-step quality assurance (QA) process used during an onsite review to ensure the accuracy and integrity of the data collected by the review team. 

The fourth and fifth modules concern the Onsite Review Instrument and the Stakeholder Interview Guide, the two instruments used to collect data during an onsite CFSR.

Finally, the last module, The Automated Application, is an online version of the handbook developed for the CFSR Data Management Application, the automated version of the review instruments developed to streamline the review process.

CFSR Background

The Child and Family Services Reviews (CFSRs) are a partnership between the Federal and State governments that seeks to examine State programs from two perspectives: outcomes of services provided, and systemic factors that affect those outcomes. The review process identifies the State agency’s strengths as well as areas needing improvement, and then uses a Program Improvement Plan to help States make needed improvements and build on identified strengths. Central to the review process is the promotion of sound practice principles that support improved outcomes for both children and families. The ultimate goal of the review process is to drive program improvements by focusing on systemic changes, as well as to enhance States’ capacity to become self-evaluating.

The primary focus of the CFSRs is on outcomes for children and families and the child welfare system’s efforts to support the achievement of those outcomes. Remember that when we talk about the child welfare system, we are talking about the State child welfare agency as well as all of the other agencies that work together to help families achieve positive outcomes. In other words, we are looking at the entire system of care, a system that includes State agencies, service providers, the courts, law enforcement, foster and adoptive parents, and so on. Note, however, that because of the structure and autonomy of the education system, it is the only system that is considered separately from the child welfare system in this review.

History of the CFSRs

Federal legislation established the authority for the Child and Family Services Reviews (CFSR) process. In 1994, Congress passed amendments to the Social Security Act authorizing the U.S. Department of Health and Human Services, through the Children’s Bureau, to review State child and family service programs to ensure State conformity with titles IV-B and IV-E. Subsequently, the Adoption and Safe Families Act of 1997 (ASFA) influenced the design of the reviews by emphasizing the child welfare goals of safety, permanency, and child and family well-being.

The CFSRs are administered by the Children's Bureau. A video (length: 13:30) by Will Hornsby of the CFSR Unit provides more information on the history, status, key operating principles, and structure of the review process. You can also read the script of the video.

Watch the Video

To view this 13:30 video on a PC, you need Windows Media Player, which may be downloaded at no charge. On a Macintosh computer, the video file will be downloaded and played in QuickTime. The video may take several minutes to open, depending on your Internet connection speed.


Read the Script

This script contains the text of the video by Will Hornsby of the Children's Bureau.

Hello. My name is Will Hornsby, and I am a child welfare program specialist for the Children’s Bureau Child and Family Services Reviews Unit within the Administration for Children and Families. I’m going to provide a brief overview of the history of the Child and Family Services Reviews, or CFSRs; the key operating principles of the reviews; and the structure of the reviews.

Context for the Reviews

First, let me set the stage for you with a bit of history about the CFSRs.

Federal legislation established the authority for the review process. In 1994, Congress passed amendments to the Social Security Act authorizing the U.S. Department of Health and Human Services, through the Children’s Bureau, to review State child and family service programs to ensure State conformity with titles IV-B and IV-E. Subsequently, the Adoption and Safe Families Act of 1997, or ASFA, influenced the design of the reviews by emphasizing the child welfare goals of safety, permanency, and child and family well-being. ASFA established timeframes for achieving these goals and set forth the responsibility of child welfare agencies to improve outcomes by including families in case planning and by collaborating with community groups and institutions that have an impact on child welfare.

While the Children’s Bureau has the authority to assess compliance, it also is committed to the underlying philosophy of the legislation. That is, the evaluative component of the reviews is designed to be used to identify elements within child welfare systems that are working best to improve outcomes for children and families. That knowledge, in turn, is used to improve child welfare systems across the nation.

The Bureau spent a number of years designing and pilot-testing the reviews and incorporated hundreds of comments from the field into the Final Rule (published in the Federal Register in January 2000). The first round of CFSRs was launched in August 2000, when the first States began their required assessments.

We completed the first round of reviews in 2004. Between Federal fiscal years 2001 and 2004, child welfare programs in all States, the District of Columbia, and Puerto Rico were reviewed using the CFSR process. None of the States, the District of Columbia, or Puerto Rico achieved substantial conformity with respect to all seven child welfare outcomes and seven systemic factors. As a result, all were required to develop Program Improvement Plans, or PIPs, to address areas in which they were found to be out of conformity.

Since implementation of the reviews, the Children’s Bureau has taken many actions to improve the CFSR process:

  • We compiled lessons learned and recommendations from State child welfare agency administrators regarding the first round.
  • We assessed comments about the review process from various sources, including local, county, and State child welfare staff; Federal government child welfare staff; and national child welfare organizations.
  • We retained a consultant to convene a work group of State child welfare agency administrators and researchers to gather information on how the review process could be improved.
  • We established work groups consisting of National Review Team, or NRT, staff and consultants. (The NRT comprises staff from the Children’s Bureau and the Bureau’s Regional Offices who provide leadership to the review teams in planning and conducting the CFSRs.) These work groups then developed strategies for enhancing the review process in five areas: collaboration, helping States build on their prior CFSR and PIP, revising the format of the debriefing process and exit conferences, revising the case sampling strategy, and developing a process for ongoing NRT collaboration and communication.
  • We revised and improved measures for developing State data profiles.
  • We redesigned the Statewide Assessment Instrument, Onsite Review Instrument, and Stakeholder Interview Guide.
  • And finally, we automated the Onsite Review Instrument and the Stakeholder Interview Guide, which will allow for the instant compilation of preliminary review information for presentation at the exit conference and will provide a basis for the Final Reports.

Review Operating Principles

Now that we’ve covered the history of the reviews, let’s look at the key operating principles of the CFSR process. These principles were established for the first round of reviews and will be maintained as standards for the second round. The operating principles are as follows:

First, the reviews represent a partnership between the Federal and State governments. As such, the Children’s Bureau Central and Regional Offices and the State child welfare agency work together to prepare for the review. During the 9 months before the onsite review, the Federal staff, via the Child Welfare Reviews Project, convenes at least five planning conference calls with the State, and the State completes a Statewide Assessment.

The second principle is that the reviews examine State programs from two perspectives: first, the outcomes for children and families of services provided and, second, the systemic factors that affect those outcomes.

We look at seven outcomes of services provided:

Two Safety Outcomes

  • Safety Outcome 1 is that children are protected from abuse and neglect.
  • Safety Outcome 2 is that children are safely maintained in their own homes.

Two Permanency Outcomes

  • Permanency Outcome 1 is that children have permanency and stability in their living arrangements.
  • Permanency Outcome 2 is that the continuity of family relationships and connections is preserved for children.

Three Child and Family Well-Being Outcomes

  • Well-Being Outcome 1 is that families have enhanced capacity to provide for their children’s needs.
  • Well-Being Outcome 2 is that children receive appropriate services to meet their educational needs.
  • Well-Being Outcome 3 is that children receive adequate services to meet their physical and mental health needs.

Outcomes are assessed primarily on the basis of case record reviews and case-related interviews conducted during the onsite review and on national data standards for safety and permanency measures.

When assessing outcomes, we are really talking about how the child welfare system in each State is serving the child or children and family whose case is being reviewed. For example, did the agency provide appropriate services to prevent a particular child’s entry into foster care?

However, for two of the outcomes, Safety Outcome 1 and Permanency Outcome 1, decisions about substantial conformity are based on both the onsite case review findings and on data indicators. For these outcomes, six national standards have been established for the data indicators. For the State to achieve substantial conformity on these outcomes, the State data must meet these standards. In addition, the case record review must indicate that the State is in substantial conformity.

As I mentioned previously, we also examine systemic factors that affect the agency’s ability to help children and families achieve those positive outcomes. The systemic factors are the statewide information system, the case review system, the quality assurance system, staff and provider training, the service array and resource development, agency responsiveness to the community, and foster and adoptive parent licensing, recruitment, and retention.

Information about the systemic factors is obtained through the Statewide Assessment and through interviews with State and local stakeholders conducted during the onsite review.

When referring to systemic factors, we are talking about how aspects of the State child welfare system as a whole are performing and how these are affecting outcomes for children and families involved with the child welfare system. For example, how effectively has the State implemented licensing or approval standards for foster family homes and child care institutions so that these standards ensure the safety and health of children in foster care?

In addition, the reviews provide a comprehensive look at services provided to children and families, covering child protective services, foster care, adoption, family preservation and family support, and independent living. The reviews focus on how all of the State’s child welfare programming affects outcomes for children and families.

Third, the reviews are designed to identify both the State agency’s strengths and areas needing improvement for each of the outcomes and systemic factors. The reviews include a program improvement process that States use to make improvements, where needed, and build on identified State strengths.

The fourth principle is that the reviews use multiple information sources to assess State performance. These sources of information include the Statewide Assessment; data indicators; case record reviews; interviews with children, parents, foster parents, social workers, and other professionals working with a child; and interviews with State and community stakeholders. Using multiple sources of information enables reviewers to gain a comprehensive picture of a system, which often is not achieved when looking only at case records.

The fifth key operating principle is that central to this review process is the promotion of sound practice principles believed to support improved outcomes for children and families. Those principles include family-centered practice, community-based services, strengthening parental capacity to protect and provide for children, and individualizing services that respond to the unique needs of children and families.

The sixth principle is that the reviews emphasize the accountability of States to the children, families, and communities that they serve. While the review process supports States in making program improvements before having Federal funds withheld due to nonconformity, there are significant penalties associated with failure to make improvements needed to attain substantial conformity. The Children’s Bureau makes no apologies for this approach. The Bureau’s goal is to ensure that children and families receive the best services possible.

This leads directly to the seventh principle. The reviews are designed to drive program improvements through focus on improving systems. Reviewers identify State program strengths that can be used to make improvements in other program areas where and when they are needed. The Children’s Bureau provides support to States during the Program Improvement Plan development and process.

And finally, the eighth principle is that the reviews focus on enhancing States’ capacity to become self-evaluating. By conducting the Statewide Assessment and participating in the onsite review, States engage in a process for examining outcomes for children and families and the systemic factors that affect those outcomes. States then can adapt, if desired, the process for use in their own quality assurance efforts to conduct ongoing evaluations of their systems and programs.

Review Structure

So what actually happens during a Child and Family Services Review? The CFSR comprises two phases: the Statewide Assessment, which the State completes in the 6 months before the onsite review, and the onsite review.

In the first phase, the Statewide Assessment Team completes a Statewide Assessment, using data indicators to evaluate the programs under review and examine the systemic factors subject to review.

In the second phase, the Onsite Review Team examines outcomes for a sample of children and families served by the State during a specific period (known as the Period Under Review) by doing two things:

  • First, conducting case record reviews and case-related interviews. These are designed to assess the quality of services provided in a range of areas.
  • Second, conducting State and local stakeholder interviews. The interviews are designed to provide information about the systemic factors that affect the quality of those services.

States determined not to be in substantial conformity with any of the outcomes or systemic factors must develop a PIP to address each area of nonconformity.

On behalf of the Child and Family Services Review Team and the Children’s Bureau, thank you for taking the time to watch this video. We hope that you’ve found it helpful and informative and that you will take advantage of the other training modules available on this training site. The resource section of this training module provides access to relevant CFSR documents that will provide you with more specific information in each of these areas we have reviewed. For more information on the CFSRs, visit the Children’s Bureau Web site at www.acf.hhs.gov/programs/cb/ or e-mail cw@jbsinternational.com. The Children’s Bureau appreciates your interest in the Child and Family Services Reviews and welcomes your questions and suggestions.

Philosophical Context for the Reviews

The Child and Family Services Reviews (CFSRs), authorized by the 1994 Amendments to the Social Security Act and administered by the Children’s Bureau, U.S. Department of Health and Human Services, provide a unique opportunity for the Federal Government and State child welfare agencies to work as a team in assessing States’ capacity to promote positive outcomes for children and families engaged in the child welfare system.

The CFSRs are based on a number of guiding principles and concepts and rooted in the concept of collaboration between Federal and State partners.

CFSR Principles and Concepts

The CFSR process uses both qualitative and quantitative data to look at the services provided in a relatively small group of cases, and then evaluates the overall quality of those services. In other words, the process looks at the outcomes for children and families involved with the entire child welfare system by learning about and documenting the stories of those children and families. The CFSRs are based on a number of central principles and concepts, including the following:

Partnership Between the Federal and State Governments: The CFSRs are a Federal-State collaborative effort. A review team comprising both Federal and State staff conducts the reviews and evaluates State performance.

Examination of Outcomes of Services to Children and Families and State Agency Systems That Affect Those Services: The reviews examine State programs from two perspectives. First, the reviews assess the outcomes of services provided to children and families. Second, they examine systemic factors that affect the agency’s ability to help children and families achieve positive outcomes.

Identification of State Needs and Strengths: The reviews are designed to capture both State program strengths and areas needing improvement. The reviews include a program improvement process that States use to make improvements, where needed, and build on identified State strengths.

Use of Multiple Sources To Assess State Performance: The review team collects information from a variety of sources to make decisions about a State’s performance. These sources include:

  • The Statewide Assessment
  • Data indicators
  • Case record reviews
  • Interviews with children, parents, foster parents, social workers, and other professionals working with a child
  • Interviews with State and community stakeholders

Promotion of Practice Principles: Through the reviews, the Children’s Bureau promotes States’ use of practice principles believed to support positive outcomes for children and families. These are:

  • Family-centered practice
  • Community-based services
  • Individualizing services that address the unique needs of children and families
  • Efforts to strengthen parents’ capacity to protect and provide for children

Emphasis on Accountability: The reviews emphasize accountability. While the review process includes opportunities for States to make negotiated program improvements before having Federal funds withheld because of nonconformity, there are significant penalties associated with the failure to make the improvements needed to attain substantial conformity.

Focus on Improving Systems: State child welfare agencies determined to be out of conformity through the review develop Program Improvement Plans for strengthening their systems’ capacities to create positive outcomes for children and families. The Children’s Bureau provides support to States during the Program Improvement Plan development and implementation process.

Enhancement of State Capacity To Become Self-Evaluating: Through conducting the Statewide Assessment and participating in the onsite review, States will become familiar with the process of examining outcomes for children and families and systemic factors that affect those outcomes. They can adapt this process for use in the ongoing evaluation of their systems and programs.

CFSR Collaboration

From their inception, the CFSRs were intended to promote change through collaborative principles. This begins with the collaboration between the Federal and State governments in assessing the effectiveness of child welfare agencies in serving children and families. It continues with the collaboration between child welfare agency leaders and their internal and external collaborative partners. Internal partners include staff and consultants. External partners include policymakers; other agencies serving children, youth, and families; the courts; Tribes and tribal organizations; the community; and children, youth, and families.

These collaborations are critical during the two assessment phases of the CFSR (Statewide Assessment and onsite review) and the Program Improvement Plan development, implementation, and evaluation process. The collaborative process focuses on identifying shared goals and activities and establishing a purpose, framework, and plan. Most important, that collaborative process should result in changes that promote improved outcomes for children and families.

Collaborative Principles

The overarching principles guiding the CFSR collaborative process include the following:

  • The safety, permanency, and well-being of children are a shared responsibility, and child welfare agencies should make every effort to reach out to other partners in the State who can help to achieve positive results with respect to the CFSR child welfare outcome measures and systemic factors.
  • Child welfare agencies do not serve children and families in isolation; they should work in partnership with policymakers, community leaders, courts, service providers, and other public and private agencies to improve outcomes for children and families in their States. This includes partnering with organizations that directly serve children, youth, and families and those whose actions impact family and community life.
  • Family-centered and community-based practices are integral to improving outcomes for children and families. As such, collaboration with families, including young people, is important in identifying and assessing strengths and barriers to improved outcomes for children, youth, and families.
  • Real collaboration has a purpose and a goal; it takes time and effort to promote meaningful collaboration. There also are varying degrees of collaboration, each of which can serve the CFSR process and, more importantly, children, youth, and families.

Collaborative Partners

The CFSR process defines key partners that should be engaged in the CFSR Statewide Assessment, onsite review, and Program Improvement Plan. These include partners with whom the State is required to collaborate in developing the Child and Family Services Plan (CFSP) and Annual Progress and Services Reports (APSRs), as noted at 45 CFR 1357.15(l). Examples of these partners include:

  • Court representatives, including, but not limited to, Court Improvement Programs (CIPs)
  • Tribal representatives
  • Youth representatives
  • Child welfare agency internal partners, such as State and local agency staff, training staff, contract staff, supervisors, and administrators
  • Child welfare agency external partners, such as children (as appropriate); biological, foster, and adoptive parents and relative caregivers; and representatives from (1) other State and community-based service agencies, (2) State and local governments, (3) professional and advocacy organizations, and (4) agencies administering other Federal and federally assisted programs. These programs include those funded by the U.S. Departments of Education, Housing and Urban Development, and Labor; the ACF (including Head Start; the Family and Youth Services Bureau; the Office of Family Assistance and the Child Care Bureau within that Office; and the Administration on Developmental Disabilities); the Substance Abuse and Mental Health Services Administration; and the Office of Juvenile Justice and Delinquency Prevention. These programs are responsible for education, labor, developmental disabilities services, juvenile justice, mental health, substance abuse prevention and treatment, family support, services to runaway and homeless youth, domestic violence intervention, child care, Medicaid, and housing.
  • Partners that represent the diversity of the State’s population, especially in relation to those served by the child welfare system
  • Other entities related to children and families within the State, such as the Community-Based Child Abuse Prevention lead agencies, citizen review panels, Children’s Justice Act task forces, and CFSP and Promoting Safe and Stable Families partners

CFSR Structure

The CFSRs comprise two phases: the Statewide Assessment, which the State completes in the 6 months before the onsite review, and the onsite review. During the first phase, the Statewide Assessment Team completes a Statewide Assessment, using data indicators to evaluate the programs under review and examine the systemic factors subject to review. In the second phase, the review team examines outcomes for a sample of children and families served by the State during the period under review (PUR) by conducting case record reviews and case-related interviews to assess the quality of services provided in a range of areas and by conducting State and local stakeholder interviews regarding the systemic factors that affect the quality of those services.

Once a State's onsite review is complete, the Children's Bureau generates a Final Report that serves as written notice of conformity or nonconformity. It is the goal of the Children's Bureau to provide a courtesy copy of this report to the State within 30 days of the onsite review. A State that is determined not to be in substantial conformity with one or more of the seven outcomes or seven systemic factors under review then develops a Program Improvement Plan that addresses all areas of nonconformity. The State submits the Program Improvement Plan to the Children’s Bureau Regional Office for approval within 90 calendar days of receiving the written notice of nonconformity.

Once the Program Improvement Plan is approved, the State implements the plan. During this process, the Children’s Bureau Regional Office monitors the plan’s implementation and the State’s progress toward goals specific to the Program Improvement Plan. During both review phases and the Program Improvement Plan process, if necessary, States have access to training and technical assistance provided by the Children’s Bureau-funded National Resource Centers and coordinated through the Children’s Bureau Regional Offices.

Preparation for the Onsite Review

Preparation for the onsite review involves a wide range of activities, including:

  • Identifying cases to be reviewed.
  • Preparing case records for review.
  • Scheduling case-related and stakeholder interviews.
  • Assembling the review team and preparing reviewer schedules.
  • Planning travel, lodging, and other logistical arrangements.
  • Providing training to members of the review team.
  • Distributing review-related materials and technology to the review team.

Responsibility for these activities is shared among the Children’s Bureau Central Office, the Children's Bureau Regional Office, the State central office, and various Local Site Coordinators. The Child Welfare Reviews Project also plays a significant role in the logistical planning that takes place for an onsite review and provides a variety of other resources for the Children's Bureau and State.

Children's Bureau Central Office

The Children's Bureau Central Office has several key responsibilities in preparing for an onsite review. In addition to appointing the Federal Review Team, it collaborates closely with the Children's Bureau Regional Office, State central office, and Child Welfare Reviews Project (CWRP) on a series of review planning conference calls. It also plays a role in selecting the sample of cases that are used during the onsite review and in developing the data profiles that are used to measure a State's substantial conformity.

Appointing the Federal Review Team

During its planning for the onsite review, the Children's Bureau Central Office identifies the National Review Team (NRT) Team Leader and NRT Local Site Leaders. It also identifies other Children's Bureau staff who will serve as Federal reviewers and arranges for any training they might require.

Review Planning Conference Calls

In the weeks leading up to the onsite review, the Children's Bureau Central Office participates in a series of review planning conference calls with the Children's Bureau Regional Office and State child welfare agency staff. These calls are scheduled and facilitated by the CWRP and cover a wide range of review-related topics, including logistical information such as the locations of review sites, lodging arrangements, and the composition of the review team; the State data profile and Statewide Assessment; State policies that may affect the review process; the State's Program Improvement Plan (PIP); stakeholder interviews; and the State Team Training, among other topics.

Sample of Cases

Case selection for an onsite review is handled differently for foster care cases and in-home services cases, although case sampling guidelines are used for both. For foster care cases, the Children's Bureau Central Office draws random samples of cases from Adoption and Foster Care Analysis and Reporting System (AFCARS) data. The cases are drawn from a "universe" consisting of the State's 6-month AFCARS submissions that correspond to the sampling period for the three review sites. The Children's Bureau Central Office then transmits those samples through the Children's Bureau Regional Office to the State in a sorted AFCARS table organized around the four foster care categories and by jurisdiction within the State. This ensures that sites selected for the onsite review have a sufficient number of targeted foster care cases for review.
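
To illustrate the mechanics of this kind of draw, the following minimal Python sketch selects cases at random within each combination of foster care category and jurisdiction. It is an illustration only, not the Children's Bureau's actual sampling program; the record layout, field names, and category labels are hypothetical.

import random

def draw_sample(cases, per_stratum, seed=None):
    """Draw up to per_stratum cases at random from each
    (foster care category, jurisdiction) stratum."""
    rng = random.Random(seed)
    strata = {}
    for case in cases:
        key = (case["category"], case["jurisdiction"])
        strata.setdefault(key, []).append(case)
    sample = []
    for key in sorted(strata):
        members = strata[key]
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Example: 2 cases per stratum from a tiny hypothetical universe.
universe = [
    {"case_id": "FC-001", "category": "reunification", "jurisdiction": "Site A"},
    {"case_id": "FC-002", "category": "adoption", "jurisdiction": "Site A"},
    {"case_id": "FC-003", "category": "reunification", "jurisdiction": "Site B"},
]
selected = draw_sample(universe, per_stratum=2, seed=42)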

To select in-home services cases, the Children's Bureau Central Office draws from a list that is provided to it by the State central office. The cases must have been open for at least 60 consecutive days during the sampling period, which extends 2 months beyond the sampling period used for foster care cases, for a total of 8 months.
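
The 60-consecutive-day rule can be expressed as a simple date computation. The sketch below, with hypothetical field names and dates, shows one way a State might test a case's eligibility; it assumes each case is a single continuous service episode.

from datetime import date

# Hypothetical 8-month sampling period for in-home services cases.
PERIOD_START = date(2007, 4, 1)
PERIOD_END = date(2007, 11, 30)

def is_eligible(opened, closed=None):
    """True if a case was open at least 60 consecutive days during the
    sampling period. A still-open case is treated as open through the
    end of the period. Assumes one continuous episode per case."""
    effective_close = closed if closed is not None else PERIOD_END
    overlap_start = max(opened, PERIOD_START)
    overlap_end = min(effective_close, PERIOD_END)
    return (overlap_end - overlap_start).days >= 60

# Example: opened mid-period and closed 90 days later -> eligible.
print(is_eligible(date(2007, 5, 1), date(2007, 7, 30)))  # True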

Data Profiles

The Children's Bureau Central Office develops the safety and permanency data profiles used to measure substantial conformity within the Safety and Permanency outcomes and transmits them through the Children's Bureau Regional Office to the State. These data profiles are then used by the State to perform its Statewide Assessment.

The Children's Bureau Central Office, along with the Children's Bureau Regional Office, then reviews and provides feedback on the Statewide Assessment, which is a key source of information in determining a State's substantial conformity during the onsite review. The Children's Bureau Central and Regional Offices also provide feedback on State policies and the Preliminary Assessment.

Children's Bureau Regional Office

Like the Children's Bureau Central Office, the Children's Bureau Regional Office has a number of key responsibilities during the preparation for an onsite review. The Regional Office selects the Regional Office Team Leader, who works in close collaboration with the National Review Team (NRT) Team Leader to guide the overall review. 

In addition, the Regional Office Team Leader serves as the Children's Bureau Regional Office's main representative on the series of review planning conference calls with the Children's Bureau Central Office, State central office, and Child Welfare Reviews Project (CWRP). He or she is also responsible for identifying State issues that might be relevant during the onsite review and for assisting in selecting the sample of cases that will be used. The Regional Office Team Leader also plays a role in assembling the Federal Review Team, scheduling the onsite review, and identifying stakeholders to be interviewed during the onsite review.

Identifying State Issues

The Regional Office Team Leader reviews with the State any potential policy issues relevant to the review. He or she will also collaborate in identifying State-specific systemic issues raised in the Statewide Assessment or Preliminary Assessment that may require further review on site.

Sample of Cases

The Regional Office Team Leader helps determine the composition of the sample of cases to be reviewed. He or she reviews and concurs with the State's method for selecting in-home services cases that meet the case sampling guidelines during the period under review, then transmits the State's final sample list to the Children's Bureau Central Office. The Central Office uses that list, along with AFCARS data, to draw a random sample of in-home services and foster care cases, which the Regional Office Team Leader then forwards to the State.

Federal Review Team

The Regional Office Team Leader is also instrumental in assembling the Federal Review Team. He or she consults with the Children's Bureau Central Office to assign Regional Office staff to the team, including Federal Local Site Leaders. He or she also consults with the NRT and State Team Leaders to determine the total number of reviewers needed for the review and advises CWRP of the total number of Federal consultant reviewers required to supplement the Federal review team.

In selecting the Federal consultant reviewers, the Regional Office Team Leader collaborates with CWRP to identify and address any potential conflicts of interest. The Regional Office Team Leader then collaborates with the State Team Leader to develop the Federal-State Review Team pairings and site assignments and coordinates with CWRP to plan for and participate in the State Team Training. The training takes place 2 weeks before the onsite review.

Onsite Review Scheduling

Following the training, the Regional Office Team Leader requests that the State Team Leader submit review team schedules, including schedules for case record reviews and case-related interviews, to the Children's Bureau Regional Office. The Regional Office then distributes these schedules to the NRT Team Leader, NRT Local Site Leaders, and CWRP.

The Regional Office Team Leader also collaborates with the NRT Team Leader and State Team Leader to prepare for the Monday morning team meeting, local site exit conference, and statewide exit conference.

Identifying Stakeholders

The Regional Office Team Leader collaborates with the State to identify all required State and local stakeholders. He or she reviews the stakeholder interview schedule developed by the State, submits stakeholder interview schedules to the Children's Bureau Regional Office, then distributes these to the NRT Team Leader, NRT Local Site Leaders, and CWRP.

State Central Office

The State central office and its various local agencies play a significant role in pre-review preparation. The central office will assign a senior State staff person to serve as the State Team Leader, who then provides oversight to the State Onsite Review Team and liaises with the Children's Bureau Regional Office and the Child Welfare Reviews Project (CWRP) in making logistical arrangements for the review. This includes participating in a series of review planning conference calls.

Other collaborative responsibilities that the State central office shares with the Children's Bureau include identifying and selecting the review sites used during the onsite review, assembling the State review team, identifying the in-home services cases that will be reviewed on site, and scheduling the case-related and stakeholder interviews.

Logistical Arrangements

The State central office and its local agencies are responsible for collaborating with the Children's Bureau Central Office, Children's Bureau Regional Office, and CWRP on a number of logistical arrangements for the onsite review. These include coordinating with CWRP to schedule the statewide exit conference by recommending meeting space and inviting participants.

The State central office will also collaborate with CWRP to identify:

  • Lodging arrangements for Onsite Review Team members.
  • Locations and times for nightly debriefings and the local site exit conference.
  • Space for other scheduled meetings and review activities during the week.
  • Transportation for review team members.

Review Site Selection

The State central office is also responsible for identifying the State's three review sites, one of which must include the State's largest metropolitan subdivision. The State central office selects these sites based on information obtained from the Statewide Assessment, then consults with the NRT Team Leader and Regional Office Team Leader before final site selections are made.

Once a review site is selected, the State central office assigns to it a Local Site Coordinator responsible for making local arrangements and ensuring that case records are available onsite. The Local Site Coordinator should be an administrator from the site under review, or his or her designee. To avoid conflicts of interest, the Local Site Coordinator does not participate in team activities such as nightly debriefings or case-related interviews, but should be available to the team during regular working hours to handle any unexpected issues that may arise.

State Review Team

The State central office is also responsible for identifying State review team members. The State review team should include staff from the State's public child welfare agency as well as representatives from external partners. To avoid conflicts of interest, State review team members should not be assigned as State Local Site Leaders or reviewers in the same site where they work or have oversight responsibilities.

Once the State's review team is selected, the State central office provides information about each team member to the Children's Bureau Regional Office. The State central office then collaborates with the Children's Bureau Regional Office to place Federal and State review team members in review pairs. Each review pair is assigned to a review site at least 6 weeks before the onsite review. 

In-Home Services Cases

The State central office collaborates with the Children's Bureau Regional Office to determine which cases in the State meet the definition of in-home services cases for inclusion in the "universe" of in-home services cases from which the review sample will be drawn. The State central office specifies the methods for identifying and compiling a list of cases that meet this definition and that fall within the period under review, then compiles this list and submits it to the Children's Bureau Regional Office 60 to 90 days before the onsite review.

Case-Related and Stakeholder Interviews

The total sample list of in-home services and foster care cases is transmitted by the Children's Bureau to the Local Site Coordinators. The local agencies managing the onsite review then examine the sample lists and schedule case-related interviews as appropriate. Following the State Team Training, which takes place 2 weeks before the review, the State central office submits the review team schedules for case record reviews and case-related interviews to the Children's Bureau Regional Office.

The State central office also collaborates with the Children's Bureau Regional Office to determine the number and composition of State and local stakeholder interviews to be conducted during the onsite review. Once this is established, the State central office makes appointments for Team Leaders to conduct interviews with stakeholders and submits a stakeholder interview schedule to the Children's Bureau Regional Office at least 2 weeks before the onsite review.

Local Site Coordinators

Local Site Coordinators are assigned by the State central office to each of the review sites. They are State staff members who are not considered part of the Onsite Review Team, but rather serve as the review team's liaison to the child welfare agency at each review site.

Local Site Coordinators have a number of responsibilities in preparing for an onsite review, and will often collaborate closely with the Child Welfare Reviews Project (CWRP) in carrying out their tasks. Of particular importance is their role in case preparation and in scheduling review week activities. They also have a number of other logistical responsibilities.

Case Preparation

Local Site Coordinators manage the process of selecting and assembling the case records that are to be reviewed at the local site. They ensure that all relevant records are ready and accessible at the beginning of the review week. The Local Site Coordinator is also responsible for ensuring that all case records are kept in a secure site for overnight storage during the review week.

Schedule Review Week Activities

Local Site Coordinators take the lead role in scheduling and reserving space for most review week activities, including the following:

  • The Monday morning team meeting, which includes local officials and Federal and State members of the Onsite Review Team. 
  • Case-related interviews for those cases selected for review. The Local Site Coordinator also confirms the interviews, orients those being interviewed to the purposes of the review, and handles any interview reschedulings that become necessary.
  • Local stakeholder interviews, which take place at stakeholders' offices or other suitable locations. As with the case-related interviews, the Local Site Coordinator is also responsible for confirming the stakeholder interviews, orienting the stakeholders to the purposes of the review, and any reschedulings that become necessary.
  • The nightly debriefings.
  • The local site exit conference (in collaboration with the NRT Local Site Leader).

Prior to the onsite review, each Local Site Coordinator finalizes the schedule of all review week activities. The schedule is reviewed and approved by the State Team Leader, who submits it to the Children's Bureau Regional Office and the Children's Bureau Central Office.

Appendix E of the Child and Family Services Reviews Procedures Manual, "Tips on Creating Onsite Review Schedules," contains comprehensive tips on how to schedule interviews, debriefings, conferences, and other required onsite events as well as sample schedules of a review week. 

Other Logistical Responsibilities

The Local Site Coordinator also has a number of other specific logistical responsibilities in preparing for an onsite review. Examples of these responsibilities include:

  • Preparing maps and other written directions for review team members to assist them in getting to the site office and scheduled appointments. He or she will also plan transportation, as required, to and from interviews.
  • Orienting local child welfare agency staff about the review.
  • Booking sleeping rooms for State review team members. 
  • Securing any releases of information or confidentiality forms needed to permit reviewers to access case records and interview individuals associated with the cases.
  • Ensuring that the technical requirements of the CFSR Data Management System are met, including securing Internet connections and power sources. He or she will also receive and secure the shipment of tablet computers that CWRP sends before the onsite review and release them to the Local Site Leader at the start of the review week.

Child Welfare Reviews Project

The Child Welfare Reviews Project (CWRP) is the Federal contractor responsible for handling much of the logistical planning that goes into preparing for an onsite review. CWRP works very closely, as required, with the Children's Bureau Central Office, Children's Bureau Regional Office, State central office, and Local Site Coordinators to ensure that all review planning needs are met. CWRP also schedules and facilitates the series of review planning conference calls, beginning 9 months before the onsite review, in which these separate groups can coordinate their activities.

During this 9-month planning period and the onsite review itself, CWRP tracks the overall status of the review and provides support wherever it is needed. CWRP also provides onsite staff at each review site for technical assistance regarding technology and logistics as necessary. 

CWRP is specifically responsible for a number of other review planning activities, including the recruitment and training of consultants to the Federal team, ensuring that transportation and lodging requirements are met, and providing review documents and technology.

Recruitment and Training

CWRP is responsible for recruiting and training individuals with experience in the child welfare field to become part of a national pool of consultants to the Federal team. Approximately 3 months before the onsite review, CWRP will provide the Children's Bureau Regional Office with the names and profiles of consultants who have indicated an availability to participate in the onsite review and meet the necessary criteria. The Children's Bureau Regional Office then selects consultants from that list to supplement the Federal side of the onsite review team as partners of State reviewers or as Local Site Leaders.

CWRP is also responsible for designing and conducting the State Team Trainings delivered to members of the State Review Team. These trainings are held in each State approximately 2 weeks before the onsite review. CWRP also designs and conducts trainings for any cross-State participants (CSPs) in the onsite review.

Transportation and Lodging Arrangements

After obtaining review site assignments from the Children's Bureau Regional Office, CWRP is responsible for the lodging arrangements of all Federal review team members. CWRP coordinates these arrangements with State staff to ensure that the entire onsite review team is housed in the same location. CWRP also coordinates onsite transportation arrangements with the Children's Bureau Regional Office and State Team Leaders, and can arrange for rental cars for up to eight consultants who serve as Federal review team members.

In addition, CWRP coordinates with the Children's Bureau Regional Office and State Team Leaders to identify and reserve a location for the statewide exit conference. CWRP ensures that all necessary equipment is present at the site, and provides staff to manage onsite logistics.

Review Documents and Technology

CWRP is also responsible for producing and delivering to each review site copies of the Onsite Review Instrument (OSRI) and Stakeholder Interview Guide (SIG) that are used during the onsite review, as well as any other documents or information that review team members will require while on site. These materials, along with the tablet computers that contain the CFSR Data Management System, are sent to the attention of the NRT Team Leader and NRT Local Site Leader and arrive at each local site 1 week before the review.

CWRP also produces and distributes Review Information Packages to review team members approximately 2 weeks before the onsite review. Each Review Information Package contains a copy of:

  • The Statewide Assessment.
  • The Preliminary Assessment.
  • The State Policy Submission Form (completed by the State).
  • Demographic information on the local site (if provided by the State).
  • A Review Fact Sheet containing contact information for review team leaders and Local Site Coordinators, important addresses related to the review, and the dates and times of entrance and exit conferences.
  • The Federal and State review team member pairings chart, with site assignments.
  • A preliminary schedule of review week activities (developed by the State).

Statewide Assessment

The Statewide Assessment is the first phase of the CFSRs and is conducted during the 6 months preceding the second phase, which is the onsite review. In conducting the Statewide Assessment, the Statewide Assessment Team uses data indicators and other qualitative information to assess the impact of State policies and practices on the children and families being served by the State child welfare agency.

The Statewide Assessment provides States an opportunity to examine data and qualitative information related to their child welfare programs in light of their programmatic goals and desired outcomes for the children and families they serve. The Statewide Assessment serves the following purposes:

  • Provides States the opportunity to build capacity for continuous program evaluation and improvement
  • Helps prepare the Onsite Review Team for the onsite review by providing evaluative information regarding the child welfare agency’s policies, procedures, and practices
  • Provides a basis for making decisions regarding substantial conformity with the seven systemic factors, in conjunction with the information obtained from the onsite review
  • Identifies issues that require clarification and that therefore may need to be addressed through the training of State Review Team members

The Statewide Assessment Team uses a Statewide Assessment Instrument to record the following:

  • Qualitative, evaluative, and quantitative information regarding the State’s outcomes for children and families served
  • Systemic factors that affect the State’s ability to provide services
  • State strengths and areas needing improvement
  • Issues for further examination through the onsite review

The Statewide Assessment Instrument is designed to assist States in completing their Statewide Assessment in an evaluative manner. The instrument includes a series of narrative-style questions and instructions on documenting data indicators. The Statewide Assessment Team should complete the Statewide Assessment and should be the primary group that responds to the narrative questions. Once the Statewide Assessment is complete, the Children's Bureau will release a Preliminary Assessment that will serve as a critical component in planning for the onsite review.

 

Statewide Assessment Team

States must include broad representation from within and outside the child welfare agency in forming a team to conduct the Statewide Assessment. The team should include representatives of organizations that were consulted in developing the CFSP and APSRs and that are expected to be involved in developing and implementing the Program Improvement Plan. States also should consider including on the Statewide Assessment Team individuals from within and outside the State child welfare agency who have the skills and background to serve as case record reviewers and interviewers and who are available to serve on the Onsite Review Team.

The following are suggested participants in the Statewide Assessment Team:

  • Administrators and program specialists from the State and local child welfare agencies
  • State and local agency staff with expertise in areas examined during the Statewide Assessment, such as information systems, quality assurance, training, and licensing
  • Local child welfare agency staff who have knowledge of front-line practice and supervisory issues
  • Judges and other court-related personnel, especially staff of the State's Court Improvement Program (CIP)
  • Representatives of the major domains outside child welfare that are addressed in the Statewide Assessment, such as education, health, mental health, substance abuse treatment, domestic violence prevention, and juvenile justice
  • Tribal representatives
  • Legislative personnel who focus on child welfare issues or funding issues that affect child welfare
  • Advocacy groups and consumer representatives, including children and youth in foster care or the groups that represent them
  • Service provider representatives, including foster and adoptive families
  • University or research-related partners of the State involved in data collection and analysis, training activities, or other relevant areas
  • Partners who represent the diversity of the State's population, especially in relation to those served by the child welfare system

Members of the Statewide Assessment Team may engage in the following types of activities:

  • Participate in training or orientation sessions
  • Attend meetings related to the Statewide Assessment or the review process
  • Analyze the data related to outcomes and systemic factors
  • Collect additional data as needed
  • Gather information pertaining to the agency's performance, such as conducting or participating in focus groups, surveys, or interviews
  • Develop, review, and comment on drafts of the Statewide Assessment
  • Participate in conference calls with Federal staff during the Statewide Assessment process (Statewide Assessment Team leadership only)
  • Make recommendations pertaining to the onsite review, such as sample composition, site selection, and Onsite Review Team composition
  • Identify the State's strengths and areas needing improvement on the basis of data and information gathered for the Statewide Assessment
  • Explore strategies for possible program improvement efforts in areas identified as needing improvement, and make preliminary recommendations to the State's Program Improvement Plan Development Team

Completing the Statewide Assessment

The Statewide Assessment is completed using the Statewide Assessment Instrument, which is divided into five sections:

  • General Information, which provides information about the child welfare agency;
  • Safety and Permanency Data, which States use to examine and report on their foster care and child protective services populations using the safety and permanency profiles provided by the Children’s Bureau’s data team;
  • Narrative Assessment of Child and Family Outcomes, which States use to examine their data in relation to the three outcome areas under review;
  • Systemic Factors, where States provide narrative responses to questions about the seven systemic factors under review; and
  • State Assessment of Strengths and Needs, where States answer questions about the strengths of the agency’s programs and areas that may warrant further exploration through the onsite review.

The Statewide Assessment includes data that the Children’s Bureau extracts from the Adoption and Foster Care Analysis and Reporting System (AFCARS) and the National Child Abuse and Neglect Data System (NCANDS) Child File (the case-level component of NCANDS) and transmits to the State in report format. AFCARS data are used to develop a permanency profile of the State’s foster care populations, and NCANDS data are used to develop a safety profile of the child protective services population. These data profiles include data indicators that are used to determine substantial conformity.

For the initial review only, the Children’s Bureau could approve another source of data for the permanency profile in the absence of AFCARS data. Additionally, for both the initial and subsequent reviews, the Children’s Bureau may approve another source of data for the safety profile in the absence of NCANDS data. This source would then be used to prepare an alternative data profile.

Once compiled, the data profile or alternative data profile will serve as the foundation for the data analysis performed by the Statewide Assessment Team. The Children’s Bureau has established national standards for each of the data indicators used to determine substantial conformity. When a State is undergoing a CFSR, the Children’s Bureau Regional Office and the State compare the State’s data for the PUR with these national standards to determine the State’s substantial conformity. The completed Statewide Assessment will analyze the relationship between State data and practice, and the quality and effectiveness of the system under review. For example, if a State’s data show that children have frequent re-entries into foster care following reunification, the State will use the Statewide Assessment process to explore, and then document, the possible reasons why this is occurring. To do so, the State might examine the availability, accessibility, and quality of services to support family reunification.
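
In rough terms, the comparison against the national standards works as sketched below. The indicator names and threshold values here are placeholders, not the actual published standards, and the direction of each test is an assumption for illustration.

# Sketch of comparing State PUR data against national standards.
# Indicator names and thresholds are placeholders, not the actual
# published standards.
NATIONAL_STANDARDS = {
    # indicator: (direction, threshold) -- direction says which side passes
    "absence_of_maltreatment_recurrence_pct": (">=", 94.6),
    "foster_care_reentry_pct": ("<=", 8.6),
}

def check_indicators(state_data):
    """Return, per indicator, whether the State's PUR data meet the standard."""
    results = {}
    for name, (direction, threshold) in NATIONAL_STANDARDS.items():
        if name not in state_data:
            continue
        value = state_data[name]
        results[name] = value >= threshold if direction == ">=" else value <= threshold
    return results

# Example with hypothetical State data for the period under review.
print(check_indicators({"absence_of_maltreatment_recurrence_pct": 95.2,
                        "foster_care_reentry_pct": 10.1}))
# {'absence_of_maltreatment_recurrence_pct': True, 'foster_care_reentry_pct': False}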

When the Statewide Assessment has been completed and accepted by the Children's Bureau Regional Office, the Children's Bureau will then release a Preliminary Assessment for the State that will serve as a critical component in planning for the onsite review.

 

Data Profiles

Six months before the onsite review, the Children’s Bureau Regional Office transmits to the State the AFCARS and NCANDS data profiles, unless the data are not available from the State’s submissions. This provides the State the opportunity to examine the profiles for accuracy and then decide whether it needs to correct and resubmit the data.

If the State resubmits data before the onsite review, the Children’s Bureau prepares updated data profiles on the basis of the resubmitted data. The turnaround time for doing so is generally 2–4 weeks; States that elect to resubmit data should therefore do so as early as possible after receiving the initial profiles.

The Children’s Bureau uses a specific data syntax to create the data profiles for the Statewide Assessment. States are encouraged to use this syntax to create and review their own data profiles before starting the Statewide Assessment. By doing so, States will have more time to examine the accuracy of their data and make corrections before receiving their official data profiles for the Statewide Assessment. If the State does not normally use this syntax, it can apply the logic the syntax establishes to create its own syntax that is compatible with the one used for the review. The syntax (Data Profile Programming Logic) is available on the Children’s Bureau Web site.

Data Profiles Using Alternative Sources

If a State does not submit data to NCANDS, the Children’s Bureau Regional Office and State must agree on an alternate source of statewide data to be used in preparing the safety profile. Also, for its initial review, if the State had incomplete AFCARS data, an alternate source of data approved by the Children’s Bureau could be used to generate the permanency data profiles. In the absence of NCANDS data, the Regional Office requests that the State submit a description of its proposed alternate data source 8 months before the onsite review. This provides time for the Regional Office to approve the data and transmit them to the Children’s Bureau Central Office to prepare the profiles.

The Children’s Bureau Regional Office, in consultation with the Children’s Bureau Central Office, approves or disapproves the alternate data source, using the following criteria:

  • The data accurately represent the State’s service population.
  • The reporting definitions and timeframes of the alternate source are consistent with those of NCANDS.

Some of the data elements in the data profiles are used to determine the State’s substantial conformity. Failure to provide data from an alternate source, in the absence of NCANDS data, could result in a determination that the State is not in substantial conformity with Safety Outcome 1. When the Children’s Bureau has approved the alternate source of data for the profiles, the State transmits the data to the Children’s Bureau data team, which uses them to prepare the profiles. The State then notifies the Children’s Bureau Regional Office that it has done so. The Children’s Bureau Central Office prepares the profiles and sends them to the Children’s Bureau Regional Office, which transmits them to the State at least 6 months before the onsite review.

If the State submits the data from the alternate source to the Children’s Bureau in a timely manner, the profiles will reflect the alternate data when the Children’s Bureau transmits them to the State 6 months before the onsite review. If the State is not able to submit the alternate data in a timely manner, the Children’s Bureau updates the profiles to reflect the alternate data as soon as possible after receiving it.

Onsite Review

The onsite review is the second phase of the Child and Family Services Reviews (CFSRs) and is designed primarily to gather qualitative information. The onsite review lasts 1 week (see Module 2: The Review Week) and includes the examination of a sample of cases for outcome achievement and interviews with State and local stakeholders to evaluate the outcomes and systemic factors under review. The review takes place in three sites in the State. The State’s largest metropolitan subdivision is a required site, and the other two sites are determined on the basis of information in the Statewide Assessment.

During the onsite review, the Onsite Review Team examines case records, conducts case-related and stakeholder interviews, and participates in nightly debriefings, local exit conferences, and the statewide exit conference. The goal of the case record reviews and case-related and stakeholder interviews is to obtain qualitative information that complements the quantitative information reported through the Statewide Assessment.

The onsite review also permits the team to collect information on items and outcomes that is not reported in aggregate form through data collection, such as risk assessment and safety management and the nature of the relationship between children in care and their parents. The combination of the data reported through the Statewide Assessment and the information on child and family outcomes and statewide systemic factors gathered through the onsite review allows the review team to evaluate programs’ outcome achievement and identify areas in which the State may need technical assistance (TA) to make improvements.

The Children’s Bureau developed the following standardized instruments for collecting and recording information during the onsite review:

  • Onsite Review Instrument and Instructions (OSRI): The OSRI is used by review team members who conduct case record reviews. It contains questions to guide the case record review process and provides space for rating the 23 items and 7 outcomes under review and for documenting information to support those ratings.
  • Stakeholder Interview Guide (SIG): The SIG provides a framework for the Team Leaders and Local Site Leaders who conduct interviews with stakeholders regarding the outcomes and systemic factors under review. The guide lists the individuals whom the NRT Local Site Leader must interview and provides core and follow-up questions for each of the 23 items under the 7 outcomes and 22 items under the 7 systemic factors.
  • Preliminary Assessment and Summary of Findings Form: This form is used by the Children’s Bureau Regional Office to: (1) prepare an analysis (the Preliminary Assessment) of the State’s performance on the outcomes and systemic factors, on the basis of information from the Statewide Assessment, (2) record the preliminary findings of the onsite review, and (3) prepare the Final Report of the review.

Preparation for the onsite review includes selecting cases to be reviewed; preparing case records for review; scheduling case-related interviews and State and local stakeholder interviews; preparing reviewer schedules, making other logistical arrangements, and distributing review-related materials to the review team; and providing training. Responsibilities for these activities are shared between the Children’s Bureau Central and Regional Offices, Child Welfare Reviews Project, State central and local child welfare agencies, and Local Site Coordinators. 

Case Selection

Before selecting the in-home services and foster care samples, the Children’s Bureau Central and Regional Offices and State staff will confirm the three counties or other geographical areas where the onsite review will be conducted. These review sites are selected on the basis of a draft Statewide Assessment, and in making its selections, the Children's Bureau will ensure that each review site has at least three times more in-home services and foster care cases than need to be scheduled for the review. The Children's Bureau will also confirm that any sealed foster care or adoption records will be available if they are selected for the sample.

A total of 65 cases will be reviewed per State, unless unusual circumstances exist and specific arrangements are made between the Children's Bureau and the State to review fewer cases. The breakout of cases in the State's review sample is as follows:

  • 25 in-home cases. These will reflect the State’s in-home services population as defined in the State CFSP
  • 40 foster care cases. These will be stratified into four categories to achieve an adequate representation of cases in key program areas

In any situation, a State’s review will involve no more than 40 foster care cases, even if the number of in-home cases does not reach 25. In situations where the number of in-home services cases cannot be reached and adjustments across sites are necessary, the Children's Bureau will seek to review a minimum of 5 in-home services and 10 foster care cases in each of the two non-metropolitan sites and a minimum of 10 in-home services cases in the metropolitan site. In addition, when the foster care cases from all three sites are combined, there should be 10 cases total in each of the four categories.

Case Sampling Guidelines

After the three review sites have been determined, the Children's Bureau draws two random samples of cases to be reviewed (a total of 150 in-home services cases and approximately 150 foster care cases) from the respective universe of cases in the three sites to be reviewed. The sample of in-home services cases is selected by family, and the sample of foster care cases is selected by child. Before the Children’s Bureau sends the sample of 150 foster care cases to the State, it randomizes the records in the sample. This is designed to preclude any bias when the State selects the cases to be reviewed at each of the three sites.

For in-home services cases, the “universe” of cases is a State-provided list of in-home services cases that were open for services for at least 60 consecutive days during the sampling period and in which no children in the family were in foster care for 24 hours or longer during any portion of the review period. The State should provide this list of in-home services cases to the Children's Bureau because that information is not currently available through NCANDS or other national data sources. The sampling period for in-home services cases extends 2 months beyond the sampling period for foster care cases, for a total of 8 months. This is because the CFSRs review in-home services cases that were open for at least 60 days.

For foster care cases, the “universe” (list) of cases is the State’s 6-month AFCARS submissions that correspond with the sampling period for the three review sites. To ensure that sites selected for the onsite review will have a sufficient number of the targeted foster care cases for review, the Children's Bureau will sort the AFCARS foster care file by the four categories and by jurisdiction within a State. A table is then generated for each State identifying the jurisdictions and the number of cases in each of the four categories. This assists in the site selection process after sites are proposed through the Statewide Assessment.

Local Site Coordinators then schedule the 65 cases for onsite reviews across the three sites. At each review site, approximately 15-35 cases are reviewed (for example, the Onsite Review Team typically reviews up to 35 cases in the largest metropolitan subdivision and no fewer than 15 in the other two sites), unless otherwise agreed upon by the Children's Bureau and the State. The Children's Bureau, however, will review no fewer than 15 cases at any review site.

In-Home Services Cases

In-home services samples are family-based and are selected from a universe of cases provided by the State. The State should provide the universe as soon as possible after the review sites are selected. The universe of in-home services cases should include the State’s non-foster care cases for which the State’s title IV-E/IV-B agency is responsible as defined in State policy, or the families served pursuant to the State’s Child and Family Services Plan (CFSP). Juvenile justice cases, mental health cases, and other in-home services cases, even if they are not federally funded, are to be included in the State’s in-home services universe if the services the State IV-E/IV-B agency provides to them, either directly or through contractual arrangements, are provided pursuant to the State’s CFSP. This would include, for example, the requirement that a State have a pre-placement preventive services program to help children at risk of foster care placement remain safely with their families.

In determining whether an in-home services case should be included in the universe, the State should consider the following criteria:

  • Whether the State or local title IV-E/IV-B funded child welfare agency has or had ongoing responsibility for the case, as defined in State policy, or the families are served pursuant to the State’s CFSP
  • Whether the case was open for at least 60 consecutive days during the sampling period and did not have any children in the family in foster care for 24 hours or longer during any portion of the review period

The Children's Bureau Regional Office staff should determine whether the State’s in-home services cases are listed by family or by child. If a State lists its in-home services cases by child instead of by family, the Children's Bureau Regional Office will request that the State provide its list of in-home services cases with the children from each family grouped together. The ease of grouping these cases will depend on whether children from the same family have the same case number or another designation that identifies them as being from the same family.

Upon receiving the list of cases, the Children's Bureau data team will select a total of 150 in-home services cases from the three review sites, on the basis of the proportion of cases to be reviewed at each site. If 10 of the 25 in-home services cases (40 percent) scheduled to be reviewed are in county A, for example, the Children’s Bureau data team selects a sample of 60 (0.4 x 150) in-home services cases from county A’s list. If this is not possible, the Children's Bureau data team attempts to preserve the proportionality of the cases scheduled for review at each site to the extent possible. The Children's Bureau then re-randomizes the cases in each sample before transmitting these to the State.
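
The proportional allocation and re-randomization described above can be illustrated in a few lines of code. The following is a minimal sketch only, not the Children's Bureau's actual sampling program; the site names, universe sizes, and use of Python's random module are illustrative assumptions.

```python
import random

TOTAL_SAMPLE = 150  # total in-home services cases drawn across the three sites

# Hypothetical counts of in-home cases scheduled for review at each site
# (25 in-home cases total, per the CFSR sample breakout).
scheduled = {"County A": 10, "County B": 10, "County C": 5}

# Hypothetical universe lists provided by the State, one per site.
universe = {
    "County A": [f"A-{i:04d}" for i in range(1, 1201)],
    "County B": [f"B-{i:04d}" for i in range(1, 901)],
    "County C": [f"C-{i:04d}" for i in range(1, 501)],
}

total_scheduled = sum(scheduled.values())
samples = {}
for site, count in scheduled.items():
    # Allocate the 150-case sample in proportion to the cases scheduled
    # at each site (e.g., 10 of 25 = 40 percent -> 0.4 x 150 = 60 cases).
    n = round(TOTAL_SAMPLE * count / total_scheduled)
    site_sample = random.sample(universe[site], n)
    # Re-randomize the order before transmitting the list to the State,
    # so that scheduling cases sequentially from it introduces no bias.
    random.shuffle(site_sample)
    samples[site] = site_sample

for site, cases in samples.items():
    print(site, len(cases), cases[:3])
```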

After the State receives the three re-randomized samples, it verifies and finalizes the list of cases to be reviewed and schedules cases sequentially from the lists, maintaining the exact order used in the sample provided by the Children's Bureau and eliminating any ineligible cases after consultation with the Children's Bureau Regional Office.

If 25 in-home services cases cannot be scheduled on site, no substitution of foster care cases will be undertaken. At least two alternate in-home services cases should be available from the lists at each site in the event that in-home services cases are eliminated during the onsite review. If the target number of in-home services cases cannot be reached or adjustments across sites are necessary, the Children's Bureau Regional Office will seek to review a minimum of five in-home services cases for the two non-metropolitan sites.
 

Foster Care Cases

The State’s universe of foster care cases is the State’s AFCARS submission that corresponds with the sampling period for the three review sites. The universe of cases should comprise children for whom the agency has placement and care responsibility and who are considered to be in foster care on the basis of AFCARS reporting requirements. If juvenile justice or mental health cases are reported to AFCARS consistent with AFCARS requirements, they are part of the universe of cases.

From the AFCARS file or abridged AFCARS file, the Children’s Bureau data team selects approximately 150 foster care cases on the basis of the proportion of cases to be reviewed at each site. Foster care cases are stratified into four categories to achieve an adequate representation of cases in key program areas, and the review should include 10 cases per category. The four categories are as follows:

  • Category 1: 10 cases involving children who were ages 16 or 17 as of the last day of the PUR or the date that they exited care, as applicable. These children could have any permanency goal and could have entered care either before or during the PUR.
  • Category 2: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable. These children will have a current permanency goal of adoption and will have entered care either before or during the PUR.
  • Category 3: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable, and who entered care during the PUR. These cases could have any permanency goal except adoption.
  • Category 4: 10 cases involving children who were under age 16 as of the last day of the PUR or the date that they exited care, as applicable, and who entered care prior to the PUR. These cases could have any permanency goal except adoption.

These categories may include children entering foster care during the PUR, which ensures a proportion of this case type that is consistent with the regulation and addresses the need to focus on State practice after the first round of Program Improvement Plan implementation. The case numbers for these categories were based on the need to focus on (1) State practice during the PUR, (2) the emphasis on re-entries, and (3) the focus in the second round of reviews on the population of older youth in care.

Category 4 is intended to allow the random selection of cases with case plan goals other than adoption. These include guardianship, permanent placement with relatives, and other types of cases involving children younger than age 16 with a goal of Other Planned Permanent Living Arrangement.

After the State receives the list of approximately 150 foster care cases divided into 12 files, 4 for each site, it schedules the cases to be reviewed according to the case order listing, eliminating ineligible cases using the established elimination guidance. Each site should have at least two cases per category remaining on the lists as alternates in the event that cases are eliminated during the onsite review. States should not substitute cases from one list to supplement another list that incurred a shortfall.

Case Elimination

In some instances, cases that are included in the State sample may need to be eliminated from consideration. This will normally happen during the case selection process, although in some instances it may become necessary during the onsite review. States should only eliminate a case after consultation with the Children's Bureau, and generally only if one or more of the following reasons apply:

  • Key individuals are unavailable during the onsite review week or are completely unwilling to be interviewed, even by telephone. The key individuals in a case are the child (if school age), the parents, the foster parents, the caseworker, and other professionals knowledgeable about the case. Before eliminating these cases, the State should determine whether sufficient information and perspectives can be obtained from the available parties.
  • The case involves out-of-county or out-of-State family members or services that may not be readily available during the review week. Children on runaway status should not be eliminated from the sample unless it has been determined that pertinent information needed to complete the OSRI cannot be obtained from other available parties, such as the guardian ad litem or other significant individuals. Local Site Coordinators should make reasonable efforts to seek the participation of key individuals in the case to ensure the validity of the random sample.
  • An in-home services case open for fewer than 60 consecutive days during the PUR.
  • An in-home services case in which any child in the family was in foster care for more than 24 hours during the PUR.
  • An in-home services case in which any child in the family was in foster care during the 8-month sampling period or entered foster care between the end of the 8-month sampling period and the first day of the onsite review.
  • A foster care case open for fewer than 24 hours during the PUR.
  • A foster care case in which a child was on a trial home visit (placement at home) during the entire PUR. If the child was in a foster care placement for any portion of the PUR, the case should stay in the foster care sample.
  • A case reported to AFCARS in error, such as a foster care case that was officially closed before the PUR, resulting in no State responsibility for the case; or a case in which the target child reached the age of majority as defined by State law before the PUR.
  • A case appearing multiple times in the sample, such as a case that involves siblings in foster care in separate cases or an in-home services case that was opened more than one time during a sampling period. If siblings appear on the list, the State should select the case of the child that appears first on the list and skip the cases of the other children or other cases involving the same family.
  • A foster care case in which the child’s adoption or guardianship was finalized before the PUR and the child is no longer under the care of the State child welfare agency.
  • Situations in which case selection would result in over-representation of child welfare agency staff, such as when more than two cases in one site are from the caseload of a single caseworker.
  • Situations in which case selection would result in over-representation or under-representation of juvenile justice cases.
  • A case in which the child was placed for the entire PUR in a locked juvenile facility or other placement that does not meet the Federal definition of foster care.
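
To make the elimination criteria concrete, here is a sketch of how a few of the rules above might be checked programmatically. The record fields are hypothetical, only a handful of the published criteria are represented, and in practice a case is eliminated only after consultation with the Children's Bureau.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record layout; a real sample file would differ.
@dataclass
class SampledCase:
    case_id: str
    case_type: str                     # "in_home" or "foster_care"
    days_open_in_pur: int              # consecutive days open during the PUR
    hours_open_in_pur: int             # hours the case was open during the PUR
    child_foster_care_hours_pur: int   # hours any child spent in foster care
    trial_home_visit_entire_pur: bool

def elimination_reason(case: SampledCase) -> Optional[str]:
    """Return the first matching elimination reason, or None if the case
    stays in the sample. Only a few of the criteria above are sketched."""
    if case.case_type == "in_home":
        if case.days_open_in_pur < 60:
            return "open fewer than 60 consecutive days during the PUR"
        if case.child_foster_care_hours_pur > 24:
            return "child in foster care more than 24 hours during the PUR"
    else:  # foster care
        if case.hours_open_in_pur < 24:
            return "open fewer than 24 hours during the PUR"
        if case.trial_home_visit_entire_pur:
            return "child on a trial home visit for the entire PUR"
    return None
```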

The cases in the sample of approximately 150 cases that are not selected for review may serve as substitute cases to replace any selected cases that are eliminated on site or to resolve discrepancies.

Case Sampling Issues During the Onsite Review

The NRT Local Site Leader and the Local Site Coordinator will need to approve decisions to eliminate a case because of last-minute developments that result in insufficient information being available to review the case. If an interview with a critical party to the case is cancelled at the last minute, for example, the case should be eliminated from the sample. The NRT Local Site Leader and Local Site Coordinator then should consider whether sufficient time exists to use a substitute case.

If the State already has identified alternate cases, it should substitute those cases by following the numerical order provided in the sample. If the State has not previously identified alternate cases, it should use the original sample and sampling procedures to select substitutes. The State also may draw from these cases to resolve discrepancies between information in the Statewide Assessment and the findings of the onsite review should additional cases need to be reviewed to resolve the discrepancies.

In addition, if during the onsite review an in-home services case is found to have included an episode of foster care during the PUR, it may be reviewed as a foster care case only when an alternative in-home services case cannot be substituted. A foster care case found during the onsite review to involve a family that has received in-home services during the entire PUR may be reviewed as an in-home services case only when no alternative foster care cases can be scheduled, provided no child in the family was in foster care during the PUR.

 

Case Record Preparation

All case records to be reviewed are made available at the review sites in their entirety, including applicable information for periods preceding the PUR. Case records also should be as orderly and up to date as possible, including any files maintained separately, such as separate child protective services files or separate child and family records. Caseworkers and/or supervisors assigned to these cases also should be available for interviews.

If the child welfare agency uses electronic files instead of or in addition to paper files, the Local Site Coordinator needs to: (1) make computers and technical support available to reviewers so that they can view the electronic records, (2) obtain hard copies of the files or the portions of the files containing information relevant to the review, or (3) use a combination of these two approaches.

If necessary, the State agency obtains confidentiality statements or releases of information before the onsite review to permit reviewers to read case records and conduct case-related interviews. In addition, the Child Welfare Reviews Project requires that all consultants serving on the Review Team sign an agreement that includes a confidentiality provision.

Interview Scheduling

Case-related interviews and stakeholder interviews are key components of the onsite review. These interviews are arranged before the review week begins.

Case-Related Interviews

Each review pair is responsible for reviewing the case records they are assigned and interviewing key individuals involved in the cases. In general, the following individuals related to a case will be interviewed unless they are unavailable or completely unwilling to participate:

  • The child, if he or she is school age. Cases involving preschool-age children may be reviewed but do not require an interview with the child. Instead, the reviewers might observe the child in the home while interviewing the birth or foster parents.
  • The child’s parents.
  • The child’s foster parents, pre-adoptive parents, or other caregivers, such as a relative caregiver or group home houseparent, if the child is in foster care.
  • The family’s caseworker. When the caseworker has left the agency or is no longer available for interview, it may be necessary to schedule interviews with the supervisor who was responsible for the caseworker assigned to the family.
  • Other professionals knowledgeable about the case. When numerous service providers are involved with a child or family, it may be necessary to schedule interviews only with those most recently involved, those most knowledgeable about the family, or those who provide the primary services the family is receiving. More than one service provider may be interviewed.

As needed, on a case-by-case basis, other individuals who have relevant information about the case also may be interviewed. These individuals may include the child’s guardian ad litem, advocate, or other family members. If possible, interviews with parents, foster parents, and children should be conducted in their homes or foster homes. Service providers may be interviewed wherever it is most convenient for them and the review pair. When travel arrangements and the schedules of reviewers preclude travel to those locations, or when persons to be interviewed prefer not to have reviewers in their homes or offices, the interviews may take place in a central location or by telephone.

Case Record Interview Scheduling

The Local Site Coordinator handles all case record interview scheduling. He or she will generally allow time at the beginning of each day for reviewers to read the cases before the first interview is scheduled. Each interview is typically scheduled for 1 hour or less, and the Local Site Coordinator will build in time between interviews for any necessary travel. Additionally, he or she will prepare, in advance, maps or other written directions to the interview sites and provide these to each review pair. The interviews scheduled for each day will correspond to the case that the review pair is expected to complete that day.

In general, the caseworker will not be present at the interview. If, however, concerns exist about the safety of reviewers or other issues related to the interview, the Local Site Coordinator will take the necessary precautions, such as arranging for the interview to be held in the local child welfare agency office. If special accommodations are required to complete an interview, for example, to address language needs, the Local Site Coordinator will make those arrangements as well, including obtaining an interpreter, if needed. Before the review week begins, the Local Site Coordinator will prepare the individuals being interviewed for their interviews. This preparation will include helping them to understand the purpose of the review and confirming the time and location of the interview in writing.

Stakeholder Interviews

Stakeholder interviews can involve State or local stakeholders. The State Team Leader schedules State stakeholder interviews, in collaboration with the NRT and Children's Bureau Regional Office Team Leaders, and confirms the appointments in writing. On average, around 15 State stakeholder interviews are scheduled for the review week, unless the NRT or Children's Bureau Regional Office Team Leaders request additional interviews. Each interview is normally scheduled to last an hour or more, with time built in for travel between interviews.

Local stakeholder interviews are scheduled by the Local Site Coordinator. As with State interviews, there are usually around 15 local stakeholder interviews per review site unless the NRT or Children's Bureau Regional Office requests additional interviews. They may be conducted either at the local agency or where the stakeholders are located.

 

Substantial Conformity and Program Improvement Plans

Because child welfare agencies work with the nation’s most vulnerable children and families, the Children’s Bureau has established very high standards of performance for child and family services systems. States are expected to meet defined criteria regarding the outcomes and systemic factors examined in the Child and Family Services Reviews (CFSRs), as well as national standards established by the Children’s Bureau regarding safety and permanency. These high standards underpin the entire CFSR process and are designed to strengthen the delivery of effective services, fortify partnerships, encourage ongoing self-monitoring and continuous quality improvement (CQI), and achieve more positive outcomes overall for children and families.

At the end of a CFSR onsite review, the Children’s Bureau analyzes information from a variety of sources to determine whether a State is in substantial conformity with the seven outcomes and seven systemic factors. Substantial conformity means that the State meets Federal criteria established for each outcome or systemic factor. 

States determined by the Children’s Bureau not to have achieved substantial conformity in one or more of the assessed areas must develop and implement a Program Improvement Plan (PIP). The PIP is a critically important component of the CFSR, since each State’s PIP serves as a blueprint for addressing identified areas needing improvement in order to achieve substantial conformity across all outcomes and systemic factors. Additionally, the PIP enables a State to build ongoing capacity to evaluate the performance of its entire child welfare system. 

Substantial Conformity

The Child and Family Services Review (CFSR) is a comprehensive review of a State's child and family services system. As described above, the Children’s Bureau analyzes information from a variety of sources to determine whether a State is in substantial conformity with the seven outcomes and seven systemic factors, that is, whether the State meets the Federal criteria established for each.

The Children's Bureau uses the following documents and data collection procedures to make its determinations:

  • The Statewide Assessment, prepared by the State child welfare agency.
  • The State Data Profile, prepared by the Children’s Bureau.
  • Case record reviews of 65 cases (40 foster care and 25 in-home services cases) at three sites, one of which has to be the largest metropolitan area in the State.
  • Stakeholder interviews and focus groups (conducted at all three sites and at the State level) with stakeholders including, but not limited to: youth, parents, foster and adoptive parents, all levels of child welfare agency personnel, collaborating agency personnel, service providers, court personnel, child advocates, Tribal representatives, and attorneys.

Conformity with the outcomes is primarily based on information gathered from the sample of 65 cases examined during the State’s CFSR onsite review. For Safety Outcome 1 and Permanency Outcome 1, the Children’s Bureau also evaluates whether the State meets specific national standards.

Conformity with the systemic factors is based on an evaluation of the information contained in the Statewide Assessment and the information collected in stakeholder interviews during the onsite review.

If a State fails to achieve substantial conformity with an outcome or systemic factor, it must submit a Program Improvement Plan that addresses the areas of non-conformity. In prioritizing areas to be addressed, the State must first address, in content and timeframes, those items that specifically affect child safety.

Conformity with the Outcomes

The seven outcomes assessed in the CFSR address aspects of children’s safety, permanency, and well-being and incorporate a total of 23 items. Each item reflects a key Federal program requirement relevant to the Child and Family Services Plan (CFSP) for that outcome. In conducting their case record review, reviewers use the Onsite Review Instrument (OSRI) to obtain an item rating of Strength, Area Needing Improvement, or Not Applicable for each individual item.

An outcome's individual item ratings together determine its outcome rating of Substantially Achieved, Partially Achieved, Not Achieved, or Not Applicable. When all the case record reviews have been completed, that review site's substantial conformity with the outcomes will be determined in one of two ways:

  • For every outcome except Safety Outcome 1 and Permanency Outcome 1: substantial conformity is determined by the percentage of cases reviewed on site in which the outcome was determined to be Substantially Achieved. If the outcome was rated as Substantially Achieved for at least 95 percent of cases, then that outcome is considered to be in substantial conformity. Note that the threshold for substantial conformity during round 1 was 90 percent; it was raised for round 2 in the spirit of continuous quality improvement that is the foundation of the CFSRs.

  • For Safety Outcome 1 and Permanency Outcome 1, the 95 percent threshold is still used. However, determinations for these outcomes also take into account the State's performance on related data indicators and composites. National standards were established for two data indicators for Safety Outcome 1 and four data composites for Permanency Outcome 1.
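
The percentage test described in the first bullet above reduces to simple arithmetic. The sketch below is illustrative only; the ratings are invented, and it assumes, for the purpose of the example, that cases rated Not Applicable for an outcome drop out of the calculation.

```python
THRESHOLD = 0.95  # round 2 threshold (round 1 used 0.90)

def outcome_in_substantial_conformity(ratings):
    """ratings holds one outcome rating (a string) per case reviewed on site."""
    # Assumption for this sketch: Not Applicable cases are excluded from
    # the denominator before the percentage is computed.
    applicable = [r for r in ratings if r != "Not Applicable"]
    achieved = sum(1 for r in applicable if r == "Substantially Achieved")
    return bool(applicable) and achieved / len(applicable) >= THRESHOLD

# 62 of 65 cases substantially achieved (~95.4 percent) -> in conformity
ratings = ["Substantially Achieved"] * 62 + ["Partially Achieved"] * 3
print(outcome_in_substantial_conformity(ratings))  # True
```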

Following the onsite review, the Children's Bureau uses data gathered through the Statewide Assessment and the onsite review to make determinations of substantial conformity with the outcomes for the State as a whole.

National Standards

The Children's Bureau uses six data indicators to determine substantial conformity with two outcomes: Safety Outcome 1 (two data indicators) and Permanency Outcome 1 (four data composites). 

If the State's data fail to meet national standards, the State is required to implement strategies in its Program Improvement Plan designed to improve the State's performance on each failed item or data indicator. In prioritizing areas to be addressed, the State must first address, in content and timeframes, those items that specifically affect child safety.

Safety Outcome 1

For Safety Outcome 1, the data indicators are individual measures:

  • Absence of maltreatment recurrence: Of all children who were victims of substantiated or indicated abuse or neglect during the first 6 months of the reporting year, what percent did not experience another incident of substantiated or indicated abuse or neglect within a 6-month period?
  • Absence of child abuse and/or neglect in foster care: Of all children in foster care during the reporting period, what percent were not victims of a substantiated or indicated maltreatment by foster parents or facility staff members?
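
As a purely illustrative example, the first indicator above could be computed from child-level incident dates roughly as follows. The incident data are invented, and a real calculation would draw on the NCANDS Child File with its own reporting definitions and timeframes.

```python
from datetime import date, timedelta

# Invented data: dates of substantiated or indicated maltreatment incidents
# per child. Every child below has an incident in the first 6 months of the
# reporting year, so all are in scope for the measure.
incidents = {
    "child-1": [date(2007, 2, 1)],
    "child-2": [date(2007, 3, 10), date(2007, 7, 1)],   # recurrence
    "child-3": [date(2007, 5, 20), date(2008, 1, 15)],  # outside 6 months
}

first_half_end = date(2007, 6, 30)  # end of the first 6 months of the year
window = timedelta(days=183)        # approximately 6 months

victims = 0
without_recurrence = 0
for child, dates in incidents.items():
    first = min(d for d in dates if d <= first_half_end)
    victims += 1
    if not any(first < d <= first + window for d in dates):
        without_recurrence += 1

# 2 of 3 children had no recurrence within 6 months -> 66.7 percent
print(f"Absence of maltreatment recurrence: "
      f"{100 * without_recurrence / victims:.1f}%")
```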

Note that safety outcomes determined to not be in substantial conformity must be given priority by the State in its Program Improvement Plan.

Permanency Outcome 1

The four data indicators used for Permanency Outcome 1 are composite indicators. Each composite is a set of measures that assess a different aspect of performance with regard to a specific program area.

Each measure in the composite makes a unique contribution to the total composite score, which ranges from 50 to 150 (the higher the score, the better the performance). Having multiple measures in each composite, therefore, provides a more comprehensive portrait of State performance than could be obtained through a single measure. 

The four composite indicators used during round 2 are: 

  • Permanency Composite 1: Timeliness and permanency of reunification
  • Permanency Composite 2: Timeliness of adoption
  • Permanency Composite 3: Permanency for children and youth in foster care for long periods of time
  • Permanency Composite 4: Placement stability

Note that the national standards were established for each of the composites as a whole, not for the individual measures that make up each composite. Therefore, States are not expected to meet any specific standard for individual measures within a composite, but rather to achieve an overall performance level within the composite itself with the understanding that improvement on any given measure will result in an increase in the overall composite score.
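
The module does not spell out how the individual measures are converted into the 50–150 composite score. Purely as an illustration of the weighting idea described above, the sketch below assumes each measure has already been scaled to the 50–150 range, averages the measures within each component, and weights the components by their stated contributions (for example, 50 percent each for Permanency Composite 1, described next). All numbers are invented.

```python
def component_score(measure_scores):
    # Assumption: a component's score is the mean of its measures' scores,
    # each already expressed on the 50-150 scale.
    return sum(measure_scores) / len(measure_scores)

def composite_score(components):
    # components maps a component name to (weight, measure scores);
    # the weights are the stated contribution percentages and sum to 1.0.
    return sum(w * component_score(m) for w, m in components.values())

# Invented scores for Permanency Composite 1: Component A (three measures)
# and Component B (one measure) each contribute 50 percent.
composite_1 = composite_score({
    "A: timeliness of reunification": (0.5, [112.0, 98.0, 105.0]),
    "B: permanency of reunification": (0.5, [121.0]),
})
print(f"{composite_1:.1f}")  # 113.0
```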

Permanency Composite 1

Permanency Composite 1 is concerned with the timeliness and permanency of reunifications. It consists of two principal components, A and B, each of which contributes 50 percent to the composite's overall score.

Component A pertains to the timeliness of reunification and includes three separate measures. This allows for a broader picture of State performance in regard to reunifying children in a timely manner than would be possible with any single measure. Component B pertains to permanency of reunifications and includes a single measure.

Note that, for the purposes of CFSR data measures, "reunification" occurs if the child is reported to AFCARS as discharged from foster care and the reason for discharge is either "reunification with parents or primary caretakers" or "living with relatives." The composite excludes children who were in foster care for less than 8 days, and also includes an adjustment to length of stay for children whose last placement prior to their discharge for reunification was a trial home visit that lasted longer than 30 days.

Component A

Component A of Permanency Composite 1 consists of three separate measures, described below:

  1. Of all children discharged from foster care to reunification in the year shown, and who had been in foster care for 8 days or longer, what percent was reunified in less than 12 months from the date of the most recent entry into foster care?

This measure includes two groups of children in its calculation: those discharged from foster care to a reunification in less than 12 months after the date of their removal from home, and those discharged to a reunification who were reported to AFCARS as being placed in a trial home visit within 11 months of their removal and remained in that placement until their discharge.

  2. Of all children exiting foster care to reunification in the year shown, and who had been in care for 8 days or longer, what was the median length of stay (in months) from the date of the most recent entry into foster care until the date of reunification?

This measure assesses a particular child's length of stay in foster care in two different ways. First, it considers the length of stay in months from the date of removal from the home until the date of discharge to reunification. Second, it considers the length of stay in months from the child's date of removal from the home to the date that the child was reported to AFCARS as being placed into a trial home visit, assuming the trial home visit lasted longer than 30 days and was the child's last placement prior to his or her discharge from foster care. 

  3. Of all children entering foster care for the first time in the second 6 months of the year prior to the year shown, and who remained in foster care for 8 days or longer, what percent was discharged from foster care to reunification in less than 12 months from the date of first entry into foster care?

There are two categories of children included in calculating this measure. The first are children who entered foster care in the second 6 months of the year prior to the year shown and were then discharged to reunification in less than 12 months from their foster care entry date. The second are children who entered foster care in the second 6 months of the year prior to the year shown and were reported to AFCARS as being placed in a trial home visit within 11 months of the foster care entry date and remained in that placement until discharge to reunification.
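
The two qualifying groups for measure 1 of Component A, described above, can be expressed as a single predicate. The function below is a sketch; the field names are hypothetical rather than actual AFCARS element names, and whole-month arithmetic stands in for the exact length-of-stay rules.

```python
from datetime import date
from typing import Optional

def months_between(a, b):
    # Whole-month approximation; the real measures use exact lengths of stay.
    return (b.year - a.year) * 12 + (b.month - a.month)

def counts_toward_measure_1(entry, discharge, trial_home_visit_start):
    """True if a child discharged to reunification counts in the numerator:
    reunified in under 12 months, or placed in a trial home visit within
    11 months of removal and still in that placement at discharge."""
    if months_between(entry, discharge) < 12:
        return True
    if trial_home_visit_start is not None:
        # Assumption: the trial home visit was the child's last placement
        # before discharge, per the measure's definition.
        return months_between(entry, trial_home_visit_start) <= 11
    return False

# Reunified after roughly 17 months, but placed in a trial home visit at
# 10 months and still there at discharge -> counts toward the measure.
print(counts_toward_measure_1(date(2006, 1, 15), date(2007, 6, 1),
                              date(2006, 11, 20)))  # True
```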

Component B

Component B of Permanency Composite 1 includes a single measure: 

  1. Of all children exiting foster care to reunification in the year prior to the one shown, what percent re-entered foster care in less than 12 months from the date of discharge?

Permanency Composite 2

Permanency Composite 2 concerns the timeliness of adoptions. It consists of three principal components, A through C, each of which contributes 33.3 percent to the composite's overall score.

Component A pertains to the timeliness of adoptions of children exiting foster care to adoption. Component B measures progress toward adoption of a cohort of children who have been in foster care for 17 months or more and therefore meet the ASFA requirements for the State to file a termination of parental rights (TPR). Each of these two components includes two individual measures. 

Component C, which looks at the timeliness of adoptions of a cohort of children who are considered "legally free" for adoption, contains only one individual measure. 

Component A

Component A of Permanency Composite 2 consists of two separate measures concerning the timeliness of adoptions of children exiting foster care. Those measures are described below: 

  1. Of all children who were discharged from foster care to a finalized adoption in the year shown, what percent was discharged in less than 24 months from the date of the most recent entry into foster care?
  2. Of all children who were discharged from foster care to a finalized adoption in the year shown, what was the median length of stay in foster care (in months) from the date of the most recent entry into foster care to the date of discharge?

Component B

Component B of Permanency Composite 2 consists of two separate measures of progress toward adoption of a cohort of children who meet the ASFA time-in-foster care requirement. Those measures are: 

  1. Of all children in foster care on the first day of the year shown, and who were in foster care for 17 continuous months or longer, what percent was discharged from foster care to a finalized adoption before the end of the year shown?
  2. Of all children in foster care on the first day of the year shown, and who were in foster care for 17 continuous months or longer, what percent became legally free for adoption (i.e., a Termination of Parental Rights was granted for each living parent) in less than 6 months from the beginning of the fiscal year?

Component C

Component C of Permanency Composite 2 pertains to the timeliness of adoptions of children who are considered "legally free" for adoption. It includes a single measure:

  1. Of all children who became legally free for adoption during the prior year, what percent was discharged from foster care to a finalized adoption in less than 12 months of becoming legally free?

Remember that for a child to be considered "legally free," a termination of parental rights (TPR) must have been granted for all of his or her living parents.

Permanency Composite 3

Permanency Composite 3 is concerned with the achievement of permanency for children in foster care. It consists of two separate components, A and B, each of which contributes 50 percent to the composite's total score.

Component A, which includes two separate measures, looks at how well the State achieves permanency for children who spend extended periods of time in foster care. Component B includes a single measure and looks at children who grow up in foster care and exit to emancipation.

Component A

Component A of Permanency Composite 3 examines how well the State achieves permanency for children who spend an extended period of time in foster care. It consists of two individual measures:

  1. Of all children who were discharged from foster care in the year shown who were legally free for adoption (i.e., there was a termination of parental rights (TPR) for each living parent), what percent was discharged to a permanent home prior to their 18th birthday?
  2. Of all children who were in foster care for 24 months or longer on the first day of the year shown, what percent were discharged from foster care to a permanent home prior to their 18th birthday?

For both measures, a "permanent home" is defined as having a discharge reason of adoption, reunification (including living with relative), or guardianship.

Note that guardianship is included in this permanency assessment because, nationwide, there is only a very small percentage of children who are discharged from foster care to guardianship. These small numbers prevent the effective use of a separate composite or measure focusing on the timeliness of achieving guardianship.

A 24-month period was chosen for both measures because, nationally, about 50 percent of children in foster care have been in foster care for two years or more. Using a 24-month period allows for the complete assessment of what happens to children in foster care during a 12-month period.

Component B

Component B of Permanency Composite 3 addresses children who grow up in foster care and exit to emancipation. It consists of a single measure:

  1. In the year shown, of all children who exited foster care with a discharge reason of emancipation prior to their 18th birthday, or who reached their 18th birthday while in foster care, what percent was in foster care for three years or longer?

Note that, in AFCARS, "emancipation" is defined as "the child reached maturity according to State law by virtue of age, marriage, etc." 

The 3-year time period used for this measure was selected to exclude from consideration those children who entered foster care at age 15 or older and then exited to emancipation. This takes into consideration the large variation among States in the age of children upon entry to foster care.

Permanency Composite 4

Permanency Composite 4 is the only composite that consists of a single component. It evaluates placement stability with three individual measures:

  1. Of all children in foster care during the year shown, and who were in foster care for at least 8 days but less than 12 months, what percent had two or fewer placement settings?

Note that if a child has been in care for longer than 8 days, any placement changes that took place within the first 8 days in foster care are considered in this measure.

  2. Of all children in foster care during the year shown, and who were in foster care for at least 12 months but less than 24 months, what percent had two or fewer placement settings?
  3. Of all children in foster care during the year shown, and who were in foster care for at least 24 months, what percent had two or fewer placement settings?

Note that measure 3 is used because the Children's Bureau believes that placement stability is as important to the well-being of children in foster care for 2 years or longer as it is for children who have spent only a few months in foster care.

Conformity with the Systemic Factors

During the development of the Statewide Assessment, the Statewide Assessment Team compiles and evaluates information on the seven systemic factors. The State's Child and Family Services Plan (CFSP) and other program requirements provide the basis for determining substantial conformity with each systemic factor. During the onsite review, State and local review team leaders conduct State and local stakeholder interviews to collect the information they need to evaluate the systemic factors and determine substantial conformity. 

Each systemic factor's overall rating is based on the ratings of the individual items that make up the systemic factor. All of the systemic factors are rated based on multiple items except for one, "Statewide Information System," which is rated based on only one item. The items themselves represent key Federal requirements relevant to the State's CFSP or other programs, and can be rated as either a Strength or Area Needing Improvement.

The final determination of each systemic factor's substantial conformity considers whether:

  • the CFSP and other program requirements attached to this systemic factor are actually in place in the State, and
  • the CFSP and other program requirements attached to this systemic factor are functioning as described in the applicable regulation or statute

For each systemic factor, the State receives a score on a 4-point scale. A score of 3 or 4 indicates that the State is in substantial conformity for that systemic factor; a score of 1 or 2 indicates it is not in substantial conformity. The table below describes the scoring system in more detail:

  • Score 1 (Not in Substantial Conformity): None of the CFSP or program requirements is in place.
  • Score 2 (Not in Substantial Conformity): Some or all of the CFSP or program requirements are in place, but more than one of the requirements fails to function as described.
  • Score 3 (In Substantial Conformity): All of the CFSP or program requirements are in place, and no more than one of the requirements fails to function as described.
  • Score 4 (In Substantial Conformity): All of the CFSP or program requirements are in place and functioning as described in each requirement.

Two of the seven systemic factors use slightly different methods for determining substantial conformity: Statewide Information System and Quality Assurance System.

Statewide Information System

The systemic factor, "statewide information system," has only one CFSP requirement subject to review. If it is determined during the onsite review that this requirement is in place but not functioning as required, the factor will receive a rating of 2, or "Not in Substantial Conformity," rather than 3. 

Quality Assurance System

There are two performance indicators, or items, associated with the "quality assurance system" systemic factor:

  • Item 30: Standards Ensuring Quality Services, and
  • Item 31: Quality Assurance System.

For this systemic factor to be in substantial conformity, it must be rated as a 3 or 4. To earn a "4" rating, both items must be in place in the State and functioning as required. 

To earn a "3" rating, both items must be in place and item 31 must be functioning as required level. Item 30 does not need to be functioning as required for the systemic factor to be found in substantial conformity.

If, however, item 31 is not in place or is not functioning as required, the systemic factor must be rated either a 1 or 2 depending on the State's performance on item 30. If item 30 is in place but not functioning, the factor will be rated a 2. If item 30 is neither in place nor functioning, the factor is rated a 1.
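
Taken together, the general 4-point scale and the two special cases reduce to small decision functions. The sketch below is illustrative only; each item or requirement is reduced to two booleans (in place, functioning), and the one combination the module leaves implicit is flagged as an assumption.

```python
def general_score(in_place, functioning):
    """Default 4-point scale: parallel lists of booleans, one entry per
    CFSP/program requirement attached to the systemic factor."""
    if not any(in_place):
        return 1
    if all(in_place):
        failing = sum(1 for f in functioning if not f)
        if failing == 0:
            return 4
        if failing == 1:
            return 3
    return 2

def statewide_information_system_score(in_place, functioning):
    # Single requirement: in place but not functioning earns a 2, not a 3.
    if not in_place:
        return 1
    return 4 if functioning else 2

def quality_assurance_score(item30_in_place, item30_functioning,
                            item31_in_place, item31_functioning):
    if item31_in_place and item31_functioning:
        if item30_in_place and item30_functioning:
            return 4  # both items in place and functioning
        if item30_in_place:
            return 3  # item 30 in place but not functioning
        return 2      # assumption: item 30 not in place is left implicit
    # Item 31 not in place or not functioning: rating turns on item 30.
    if item30_in_place:
        return 2      # item 30 in place (whether or not functioning)
    return 1          # item 30 neither in place nor functioning

print(quality_assurance_score(True, False, True, True))  # 3
```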

Program Improvement Plan

State child welfare agencies should involve their leadership, staff, and external partners to assess their CFSR findings, form a comprehensive picture of the State’s child welfare system, and further identify areas of strength as well as those needing improvement. This comprehensive picture, in turn, should be used by the State to inform the development of the State’s Program Improvement Plan (PIP), along with other information at its disposal. The PIP process has been found to be most effective when it is integrated into the collaborative planning process that States use to develop their 5-year Child and Family Services Plan (CFSP). 

The CFSR reform framework is intended to create accountability in child welfare through ongoing, effective partnerships between the Federal and State governments. The CFSR and PIP processes are designed to focus child welfare agencies on broad reform efforts that include, but also go beyond, the immediate details of day-to-day practice. The overarching goal of the PIP process is to enable States to use CFSR findings to design initiatives that will result in program improvement and better outcomes for children and families. PIP content will consist of specific strategies and measurement plans designed to facilitate the successful completion of each PIP.

For your convenience, this training module also contains a separate page of PIP resources, which are documents and tools that further explain or assist the PIP process. Many of these resources are also linked separately throughout the module itself.

PIP Process

The overall Program Improvement Plan (PIP) process consists of three general phases:

  • PIP Development and Approval. During this phase, the State and Children’s Bureau work together to develop the content of the State’s PIP and the State submits it to the Children’s Bureau for approval. This phase can begin as early as the Statewide Assessment; the State must submit the PIP within 90 days of receiving its courtesy copy of the Final Report of review findings from its onsite CFSR review. This courtesy copy is generally delivered to the State within 30 days of the statewide exit conference that officially completes the onsite review.
  • PIP Implementation. In this phase, the State implements all activities contained in its PIP, including its goals, primary strategies, action steps, benchmarks, and the measurement plan. During this time the State is required to submit to the Regional Office quarterly reports outlining the benchmarks and action steps completed and the evidence of completion for each as identified in the PIP. The State also uses quarterly reports to identify measurement plan activities that have been completed, and provides updated results as identified in the measurement plan. Also during this time, the State may receive technical assistance (TA) as necessary and may enter into discussions with the Children’s Bureau concerning the renegotiation of certain components of the PIP. The PIP implementation period is 2 years.
  • PIP Evaluation and Final Determination of Conformity. The focus of PIP evaluation is to ensure that the State has completed all action steps and benchmarks, and has achieved the approved amount of improvement in all National Standard and item-specific measures specified in its approved PIP. Once the PIP is evaluated and confirmed as completed by the Children’s Bureau, the State will receive official notification that its PIP has been closed out. During the subsequent CFSR process, the State will once again be evaluated for conformity with the CFSR outcomes and systemic factors.

While the PIP process “officially” begins after completion of the onsite review, the PIP process itself is not isolated and linear, but rather follows the cyclical CFSR process where each step informs and leads to the next. 

[Figure: The PIP process is circular in nature.]

In this cyclical process, the Statewide Assessment is the precursor to the onsite review. The onsite review culminates in the statewide exit conference, where preliminary findings for the review are presented. Although each State is encouraged to begin developing its PIP during or after completing its Statewide Assessment, it is not until after the release of the Final Report that the PIP can be finalized, approved for implementation, and then evaluated for a final determination of conformity. Upon completion of its PIP, the State is then ready to begin the process anew with another Statewide Assessment and the next round of the CFSRs.

PIP Development and Approval

After the CFSR onsite review has concluded and all applicable data and information have been analyzed, the Children’s Bureau prepares a written Final Report on the findings, informing the State whether or not it is operating in substantial conformity. This Final Report includes a cover letter that estimates the Federal funds to be withheld from the State as a financial penalty for failure to achieve substantial conformity and specifies the date by which the State must submit its PIP. The Children’s Bureau provides the State a courtesy copy of the cover letter and Final Report within 30 days of the statewide exit conference that marks the official end of the onsite review.

Note that the “courtesy copy” is a final draft that provides advance notice to the State of the review findings before the findings are made public. It serves as the written notice to the State of the determination of substantial conformity. After reviewing the courtesy copy and cover letter, the State and the Children’s Bureau work together to resolve any outstanding issues and revisions. The Children’s Bureau Regional Office then issues the official Final Report and cover letter to the State approximately 2 weeks after the courtesy copy.

The State must submit its Program Improvement Plan (PIP) for approval within 90 days of receiving the courtesy copy of the Final Report. In preparing the PIP for approval, the State child welfare agency must involve staff and external partners to ensure that all stakeholders have collective ownership of the document and that it addresses the most meaningful priorities for the child welfare system as a whole. States should also make use of the tools provided by the Children’s Bureau, including technical assistance (TA) opportunities and the PIP Matrix, a suggested format developed by the Children’s Bureau for organizing PIP content.
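As a rough illustration of the time frames described above, the sketch below computes the key dates that follow a statewide exit conference. It is a minimal example only: the function and the sample date are hypothetical, and the actual deadlines are established in the Final Report cover letter.

```python
from datetime import date, timedelta

def pip_deadlines(exit_conference: date) -> dict:
    """Illustrative PIP deadlines, assuming the windows described above:
    a courtesy copy of the Final Report within 30 days of the statewide
    exit conference, the official report roughly 2 weeks later, and PIP
    submission within 90 days of the courtesy copy."""
    courtesy_copy = exit_conference + timedelta(days=30)
    return {
        "courtesy_copy_expected": courtesy_copy,
        "official_report_expected": courtesy_copy + timedelta(weeks=2),
        "pip_submission_due": courtesy_copy + timedelta(days=90),
    }

# Hypothetical example: an exit conference held on June 5, 2009.
for milestone, when in pip_deadlines(date(2009, 6, 5)).items():
    print(milestone, when)
```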

To be a useful working tool for creating systemic change, a PIP should be manageable and include clear goals and strategies, measurable action steps with time frames, realistic benchmarks that can be used to gauge progress, and the negotiated improvement that the State will make toward meeting the national standards and the item-specific measurements. Additionally, a properly and thoroughly developed PIP will generally do all of the following:

  • Be theme-based, providing a system for integrating the action steps across items, outcomes, and systemic factors.
  • Build on what the State learned through its Statewide Assessment and the onsite review.
  • Provide for the engagement of the agency’s leadership and upper management throughout the PIP implementation and monitoring process.
  • Identify the individuals responsible for the program improvement action steps, the measurement process, and the review process.
  • Identify opportunities for stakeholder involvement.

After the State completes and submits its PIP to the Children’s Bureau for approval, the Bureau reviews it and either accepts it as submitted or returns it to the State with comments.

Once the Children’s Bureau has approved the State’s PIP, it provides the State with an approval notification that identifies the target completion date for the PIP. The State signs this notification and forwards it to the Children’s Bureau Central Office. At this point, the State’s 2-year PIP implementation period begins. All financial penalties are placed on hold while the State implements its PIP.

If the Children’s Bureau does not approve the State’s PIP, it sends the State a written notification detailing the basis for the disapproval as well as a target date for resubmission by the State, which should be within 30 days. The State’s resubmission must address the areas that resulted in disapproval of the PIP. The PIP is then subject to another round of review and comment by the Children’s Bureau. If the State does not submit an approvable PIP within the specified time frame, then the financial penalties outlined in the cover letter may be reinstated.

PIP Implementation

Once the State’s Program Improvement Plan (PIP) receives Children’s Bureau approval, the State has 2 years to implement it. During this period, all financial penalties that were to be assessed against the State for failure to achieve substantial conformity are put on hold. Using the technical assistance (TA) as defined in its PIP, the State is then responsible for implementing all of the action steps it established to achieve its strategies and for completing the benchmarks it defined to measure its progress.

Over the course of the implementation period, the State is responsible for submitting quarterly progress reports to the Children’s Bureau. These progress reports are part of the ongoing PIP monitoring and evaluation process that enables the State and Children’s Bureau to verify that action steps have been completed and benchmarks are being met in a timely and effective way. At least annually, the Children’s Bureau Regional Office and the State jointly evaluate the State’s progress in implementing the PIP. It is possible for the State and Children’s Bureau to enter into discussion concerning the renegotiation of PIP benchmarks, action steps, and other contents during the implementation period.

If exceptional circumstances arise that prevent completion of the PIP within the 2-year time frame, a State may request an extension of up to 1 year. However, granting such an extension is rare and requires that the State submit a written request that provides a compelling reason for the extension, along with supporting documentation. The extension request must be received by the U.S. Department of Health and Human Services (HHS) at least 60 days before the approved PIP completion date and is subject to approval by the Secretary of HHS.

Once the PIP implementation period has been completed, the State may enter a “non-overlapping year” evaluative period. This period provides the State with one additional year of data, after the 2-year PIP implementation period, which can be used for the final evaluation of the successful completion of the PIP measurement plan.

PIP Technical Assistance (TA)

The Children’s Bureau encourages every State developing a Program Improvement Plan (PIP) to include a technical assistance (TA) plan that defines how the State will effectively use TA to achieve its goals. There are a variety of Federal and non-Federal TA resources available to States to assist them throughout the PIP process. Typically, the Children’s Bureau collaborates with the State during its PIP preparation to discuss specific TA needs. In many cases, a State requires multiple TA providers to meet its varied needs. The TA strategies that the State develops should be designed not as one-time events intended to help achieve immediate PIP goals, but rather as long-term, capacity-building efforts.

Federal TA providers include 11 National Resource Centers (NRCs) funded by the Children’s Bureau, which are organized under the Training and Technical Assistance (T/TA) Network. The Training and Technical Assistance Coordination Center (TTACC) ensures that TA from the NRCs is provided to States in response to review findings and serves as a single point of coordination for individualized, onsite or offsite TA services from multiple providers. In addition, in 2008 the Children's Bureau established five regional Implementation Centers to work with States and Tribes in implementing strategies to achieve sustainable, systemic change for greater safety, permanency, and well-being for children, youth, and families.

Non-Federal TA providers generally include local or State universities or other nonprofit entities that can contribute consultant expertise to the State child welfare agency. At the same time, their assistance may strengthen ties between the agency and the community.

PIP Renegotiation

The State may ask to renegotiate its Program Improvement Plan (PIP) with the Children’s Bureau Regional Office, as needed, especially when implementing complex strategies. This renegotiation may be considered to revise action steps or modify measurement plans (see PIP Content).

Requests for changes to the PIP should be submitted in writing (or electronically) to the Children’s Bureau Regional Office for approval. Contact information for each Regional Office is available online at: http://www.acf.hhs.gov/programs/oro.

Once a request has been received, the Children’s Bureau Regional Office Team Leader then contacts the State to discuss the issues leading to the request, the specific changes proposed, and the rationale for the adjustment. The Children’s Bureau Regional Office and State, in consultation with the Children’s Bureau Central Office, may then renegotiate the PIP as needed, but the new plan must meet the following criteria:

  • The renegotiated PIP must be designed to correct the areas of the State’s program determined not to be in substantial conformity or to achieve an acceptable standard for the data indicators.
  • Any action steps that are renegotiated in the PIP must still be completed within the allowable 2-year time frame for PIP implementation.

The terms of the renegotiated PIP must be approved by the Children’s Bureau Regional Office in consultation with the Children’s Bureau Central Office.

PIP Evaluation and Final Determination of Conformity

The focus of Program Improvement Plan (PIP) evaluation is to ensure that the State has completed all action steps and benchmarks identified in its primary strategies. Evaluation is also intended to ensure that the State has achieved the approved amount of improvement in all national standard and item-specific measurements identified in its approved PIP.

The evaluation phase may actually begin as early as the PIP development and approval process, when new results from comparisons against national standards may become available from ongoing Federal data submissions. Evaluation can last through what is referred to as a “non-overlapping data year” that follows the conclusion of the PIP implementation period. This additional year is allowed for the measurement plan element only, allowing time for results to be demonstrated following the implementation of the PIP’s action steps.

The State and Children’s Bureau Regional and Central Offices work collaboratively to complete the ongoing process of PIP evaluation and, ultimately, make a final determination of conformity. This overall process is accomplished through ongoing PIP measurement that takes into account the State’s progress through its various goals and their primary strategies, action steps, and benchmarks. It also considers the State’s progress toward meeting its improvement goals in the safety and permanency national standards and for each measured item in the PIP.

The State generally provides these measurements through regular quarterly reports, but the Children’s Bureau may also conduct an annual review to confirm that specific activities have been completed or measurement targets achieved.

Once the Children's Bureau has evaluated the PIP and confirmed that it is complete, the State receives official notification that its PIP has been closed out. If the PIP is completed successfully, then the financial penalties assessed against the State for failure to achieve substantial conformity are rescinded. The State’s ongoing conformity with CFSR requirements will then be evaluated in a subsequent CFSR process. However, if the State fails to successfully complete its PIP, those financial penalties will be assessed against the State and remain in effect until the next CFSR. 

PIP Content

Each State must work collaboratively with the Children’s Bureau to prepare its Program Improvement Plan (PIP). For each outcome, systemic factor, and national standard that is not in substantial conformity (as identified in the Final Report), the State must specify, in conjunction with the Children’s Bureau, the broad, measurable improvement goals it will use to address those areas.

Some examples of overarching goals that States have used in their PIP documents include the following:

  • Conduct child risk and safety assessments throughout the life of the case
  • Expedite permanency for children
  • Promote family engagement
  • Recruit and retain foster homes
  • Increase access to service delivery systems for children and/or youth

In determining the specific issues to address, the State must give first priority, in both level of effort and time frame, to those items and outcome areas that affect child safety. Second priority goes to the remaining areas that were the furthest from achieving substantial conformity. However, all items and outcomes that were determined by the onsite review to be out of conformity with Federal requirements must be addressed in the State’s PIP.

With broad goals in place, the PIP document must then address the primary strategies that will be used to achieve those goals as well as the action steps required to implement each strategy. It must also identify any technical assistance (TA) that will be required to achieve each strategy. Furthermore, the PIP must address issues of measurement – specifically how the agency will measure benchmarks of progress toward completing the action steps and the measurement goals, or percentage improvement, that will be used to evaluate the impact of each strategy.

When complete, the PIP document will consist of four main components that provide sufficient detail and context to ensure that the Children’s Bureau and State agency staff have a clear understanding of issues and steps and can work in partnership throughout the PIP process:

  1. A general information section with key contact information.
  2. A recommended Program Improvement Plan Strategy Summary and TA Plan that provides information on the primary strategies and TA that the State intends to use to support improvement achievement.
  3. An agreement form indicating approval of the PIP by the Children’s Bureau and the State that establishes the PIP’s implementation date.
  4. A work plan that includes the Strategy, National Standards, and Item-Specific and Quantitative Measurement Plans, which describes action steps for each primary strategy and the benchmarks to be used to track their progress, specifies the safety and permanency national standards baseline performance and percentage of improvement, and identifies the item-specific measurement baseline performance and goals.

The PIP will also include a schedule for submitting regular progress reports to the Children’s Bureau as part of the PIP’s implementation.

The Children's Bureau has developed a suggested format, the PIP Matrix, which States can use to organize their PIP content. The PIP Matrix is available for download on the Children’s Bureau Web site at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf. It is also available in the PIP Resources section of this module.

Primary Strategies

The State must document its PIP's primary strategies: the broad approaches that address the key concerns identified in the CFSR Final Report and serve as a framework for achieving the PIP's goals and negotiated measures. These strategies should reflect overarching reforms and continuing improvements. They may build on prior PIP activity, and should be integrated with the time frames of other State plans, such as the CFSP.

Wherever possible, the PIP strategies should be thematic in nature, perhaps integrating multiple outcomes and systemic factor items to address broad areas of concern. For example, a primary strategy of “Implement a Systems of Care Practice Approach” could affect multiple areas of concern linking to OSRI outcomes and systemic factor items, such as:

  1. Inconsistency of child safety services in in-home cases (Safety Outcome 2)
  2. Ineffectiveness in addressing needs and services of families and foster parents (Child and Family Well-Being Outcome 1, as well as Service Array and Resource Development)
  3. Inconsistency in involving children and families in case planning (Child and Family Well-Being Outcome 1)
  4. Inadequate staff training program for case practice skills (Staff and Provider Training)

Note that when multiple areas are addressed by a single strategy, the State should identify the outcome or systemic factor that is most directly affected by the strategy and should avoid linking the same outcomes or systemic factors to more than one key strategy.

When developing strategies that affect front-line practice, the State should be guided by the principles of family-centered practice, community-based services, individualizing services to address the unique needs of children and families, and strengthening parents’ capacity to protect and provide for their children. In some situations, a State may need to review and revise its policies and procedures to ensure a focus on these principles and to ensure that practice is consistent with policy.

PIP Measurement

The State must incorporate three separate measurement plans into its ongoing PIP evaluation process:

  • The Strategy Measurement Plan, where the State outlines its goals, primary strategies, action steps, and benchmarks.
  • The National Standards Measurement Plan, where the State identifies the safety and permanency national standards and progress toward meeting its improvement goals in these areas.
  • The Item-Specific and Quantitative Measurement Plan, where the State enters and reports information for each CFSR item that is to be measured in its PIP.

Each of these measurement plans uses quarterly status reports to facilitate an ongoing dialogue between the State and Children’s Bureau Regional and Central Offices.

Strategy Measurement Plan

The Strategy Measurement Plan, and its corresponding quarterly status report, is where the State outlines the goals and strategies of its Program Improvement Plan (PIP). Each primary strategy must include measurable action steps that the State can take toward improvement, and not simply suggest further study of issues identified through the CFSR. These action steps are specific activities that will be undertaken to accomplish the strategy, and each action step should be designed to generate specific program improvements.

Along with all of the other required information, States should use the Strategy Measurement Plan to detail the specific documents, reports, or other items of confirmation that can be used to provide the Children’s Bureau Regional Office with evidence of progress and evidence of completion. For example, for the benchmark “Convene a work group comprising families that receive services, front-line child welfare staff, and other key partners to guide development of the plan,” the evidence of completion might be “Copy of meeting minutes and list of work group participants.”

The PIP document must also include a reasonable time frame for the completion of each action step, as well as the technical assistance (TA) required to support each step. The PIP should also identify the individual or individuals responsible for undertaking each action step, as well as the geographic area or areas of the State in which each action step will be undertaken. While these are not regulatory requirements, they should be included in the document whenever possible to ensure that all of the components required for successful program improvement are deployed as planned throughout the State.  

Benchmarks are measurable indicators used by the State and Children’s Bureau to monitor progress in completing action steps. Because PIP evaluation and monitoring must occur throughout the process, and not simply at the end of the implementation period, States should establish realistic, measurable benchmarks for each action step to serve as interim, periodic measures of progress. These benchmarks can be quantitative (number-oriented) or qualitative (process-oriented) in nature, and are designed to help measure incremental progress toward completing the strategies that, in turn, lead to achieving the final improvement goals.

States should use their quarterly status reports to enter and report information regarding each action step or benchmark that is due during each quarter, and note any evidence of completion. The Children’s Bureau Regional Office will determine, based on its review of a State’s report, whether the action step or benchmark has been completed satisfactorily or is incomplete. If an action step is past due, the State should explain the reason in the plan with a revised completion date. The Children’s Bureau will then review the explanation and revised date and either accept the extended due date or flag the action step for renegotiation. 
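As a simple illustration of this quarterly reporting logic, the sketch below sorts a PIP's action steps into completed, past-due, and upcoming groups for a given quarter. The data structure and field names are hypothetical; they merely mirror the elements described above (due dates, evidence of completion, and past-due explanations).

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ActionStep:
    description: str
    due: date
    evidence_of_completion: Optional[str] = None  # None until completed

def quarterly_status(steps: list, quarter_end: date) -> dict:
    """Group action steps for a quarterly status report."""
    report = {"completed": [], "past_due": [], "upcoming": []}
    for step in steps:
        if step.evidence_of_completion:
            report["completed"].append(step)
        elif step.due <= quarter_end:
            # Past-due steps need an explanation and a revised completion
            # date; the Children's Bureau then either accepts the new date
            # or flags the step for renegotiation.
            report["past_due"].append(step)
        else:
            report["upcoming"].append(step)
    return report
```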

National Standards Measurement Plan

The National Standards Measurement Plan, and its corresponding quarterly status report, is where the State identifies the safety and permanency national standards, along with its performance in those areas as measured in both the Final Report and the established baseline. The State also uses the National Standards Measurement Plan and its quarterly status updates to identify the negotiated, and any renegotiated, improvement goals in its Program Improvement Plan (PIP).

To establish the level of needed progress regarding national standards, each State must first work with the Children’s Bureau to define a percentage of improvement that will be made in each standard found to be out of substantial conformity. The selection of national data indicators that must be addressed in the PIP, the time period that may qualify as the baseline period for measurement purposes, and the required amount of improvement are all determined pursuant to the provisions of Technical Bulletin #3, Amended (dated October 8, 2009). Technical Bulletin #3 is available online at: http://www.acf.hhs.gov/programs/cb/resource/afcars-tb3.
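The calculation below is a minimal sketch of the percentage-of-improvement idea, using hypothetical numbers; the actual indicators, baseline periods, and required amounts of improvement are governed by Technical Bulletin #3, Amended.

```python
def meets_improvement_goal(baseline: float, current: float, pct_improvement: float) -> bool:
    """Check performance against a negotiated percentage of improvement
    over baseline. Assumes higher values are better; for indicators where
    lower is better, the comparison would be inverted."""
    goal = baseline * (1 + pct_improvement / 100)
    return current >= goal

# Hypothetical example: a 5 percent improvement goal over a baseline of
# 80.0 sets the target at 84.0, so performance of 84.5 meets the goal.
print(meets_improvement_goal(baseline=80.0, current=84.5, pct_improvement=5.0))
```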

For other outcomes found to be out of substantial conformity, the Children’s Bureau and the State must work together to determine the most realistic way of measuring goal attainment. If the State is using the PIP Matrix for its quarterly reporting, it must enter the status of the data indicator for each reported quarter.

Item-Specific and Quantitative Measurement Plan

The Item-Specific and Quantitative Measurement Plan, and its corresponding quarterly status report, is particularly important because it allows States to enter and report information regarding each CFSR item that is to be measured in its Program Improvement Plan (PIP), including the:

  • status of the item in the Final Report;
  • performance as measured for the established baseline and the source data period for the measure;
  • negotiated improvement goal;
  • method of measuring improvement; and
  • renegotiated improvement goal, if applicable. 

The selection of item-specific measures that must be addressed in the PIP, how the baselines are established, and what the approved amount of improvement is to be are determined pursuant to the provisions of Technical Bulletin #3, Amended (dated October 8, 2009). Technical Bulletin #3 is available online at http://www.acf.hhs.gov/programs/cb/resource/afcars-tb3.

To demonstrate improvement in item-specific measures, the Children’s Bureau encourages the use of State-generated data from the State's own quality assurance (QA) or continuous quality improvement (CQI) systems, or its Management Information Systems. To measure the degree of improvement in items, the State may use one of four methodologies:

  1. Retrospective data using collected findings that represent 12 months of data;
  2. Prospective data collected during the PIP implementation period after 12 months of data becomes available to establish the baseline;
  3. Data from national standard composite individual measures; or
  4. Data collected from the State SACWIS or other Management Information System.

The State may also propose another methodology for approval by the Children’s Bureau.
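To make the reporting elements listed above concrete, here is a minimal sketch of a record holding the information tracked for each measured item. The field names are hypothetical stand-ins for the columns a State might maintain, whether in the PIP Matrix or another format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemMeasure:
    """One CFSR item measured in the PIP, per the list above."""
    item_number: int
    final_report_status: str            # e.g., "Area Needing Improvement"
    baseline_performance: float
    baseline_source_period: str         # data period used for the baseline
    negotiated_goal: float
    measurement_method: str             # one of the four methodologies above
    renegotiated_goal: Optional[float] = None  # if applicable

    def current_goal(self) -> float:
        """The goal in force: the renegotiated goal, if one exists."""
        return self.negotiated_goal if self.renegotiated_goal is None else self.renegotiated_goal
```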

Note that if the State is using the PIP Matrix for its quarterly reporting, it must enter the status of the item for each reported quarter. The PIP Matrix is available for download on the Children’s Bureau Web site at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf. It is also available in the PIP Resources section of this module.

PIP Resources

The following resources have been developed to further explain or assist with the overall PIP Process.

  • PIP Matrix: The PIP Matrix is a standard format developed by the Children’s Bureau and designed to assist States in preparing PIPs for submission to the Children’s Bureau Regional Office. While it was developed to facilitate ease of review, approval, and tracking of State PIPs, it is not mandatory, and States may choose to use a different format. However, all PIPs must include the information required by regulation at 45 CFR 1355.35. The PIP Matrix is available for download at http://www.acf.hhs.gov/sites/default/files/cb/pip_instruct.pdf.
  • Procedures Manual: The Child and Family Services Reviews Procedures Manual offers an overview of the purpose and structure of the reviews, as well as detailed information on planning for and conducting the reviews. The manual also discusses the Final Report and the PIP process. It is designed to assist Children’s Bureau staff and State child welfare agencies in planning for, and participating in, a CFSR. State agency administrators are strongly encouraged to share the manual with agency staff who will play active roles in the State’s CFSR, including Local Site Coordinators. The Procedures Manual is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-procedures-manual.
  • Program Improvement Plan Annual/Quarterly Status Report Form: This form is designed to be used by Children's Bureau Regional Office staff to evaluate PIP progress for States that are not using the new matrix in developing their PIPs. For States that are using the new matrix, Regional Offices should use the new matrix to evaluate State PIPs. The new matrix is available through Information Memorandum 07-08, which is available for download at http://www.acf.hhs.gov/programs/cb/resource/im0708.
  • Technical Bulletin (TB) #3: This Technical Bulletin (amended) pertains to the Children’s Bureau’s approach to determining and approving degrees of improvement and attainment of goals for PIPs during Round 2 of the Child and Family Services Reviews (CFSRs). It is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-amended-technical-bulletin-3.
  • Technical Bulletin (TB) #4: This Technical Bulletin contains updated general instructions for PIP monitoring, evaluation, and renegotiation. It includes technical information for States and Children's Bureau Regional Offices about monitoring, reporting, and using a matrix spreadsheet for PIP submissions. It is available for download at http://www.acf.hhs.gov/programs/cb/resource/cfsr-technical-bulletin-4.

In addition, the Children’s Bureau has released a number of Information Memoranda (IMs) that speak to the PIP process. These IMs include:

  • IM 09-01: Measuring Round One Program Improvement Plan (PIP) Improvement for Child and Family Services Reviews (CFSRs) Using Round Two Revised National Standards. This IM provides guidance on measuring Round 1 PIP improvement for the CFSRs using Round 2 revised national standards. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0901.
  • IM 07-05: Measuring Program Improvement Plan (PIP) Improvement for the Child and Family Services Reviews (CFSRs) National Standards. This IM provides information on measuring PIP improvement for the CFSR national standards. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0705.
  • IM-02-04: Guidance and Suggested Format for Program Improvement Plans in Child and Family Services Reviews. This IM provides guidance and a suggested format for PIPs. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0204.
  • IM-01-07: Updated National Standards for the Child and Family Services Reviews and Guidance on Program Improvement Plans. This IM provides information and guidance for use by States and Regional Offices on updated national standards as well as guidance on PIPs. Available for download at http://www.acf.hhs.gov/programs/cb/resource/im0107.

Onsite Review Team

The CFSR Onsite Review Team comprises both Federal and State staff, with trained consultant reviewers supplementing the Federal component of the team. Federal staff, in consultation with State agency officials, select the Federal and consultant reviewers. State agency officials, in consultation with Federal staff, choose the State Review Team members, who may be State agency staff or external representatives.

The overall review team is divided into four local site teams that are based at three sites around the State. Two of these teams are located at the State's major metropolitan site, and one team is located at each of the other two sites. Each of these local site teams includes a group of Local Site Leaders who work with and coordinate the reviewers.

Overseeing and coordinating all four local site teams is a group of Team Leaders. These Team Leaders include Federal and State representatives whose job is to coordinate the entire review week and prepare for the Friday exit conference.

Generally speaking, all members of the Onsite Review Team are expected to:

  • Work as partners regardless of affiliation
  • Fully participate in the review process
  • Maintain a professional demeanor at all times
  • Maintain confidentiality of case-specific information
  • Treat all review team members with respect and as valued team members
  • Complete all activities thoroughly and promptly
  • Be available for all review activities unless, for example, an interview conflicts with a debriefing session, and the Federal Local Site Leader has approved the absence in advance
  • Talk to the Federal Local Site Leader to determine whether other tasks need attention
  • Respect the team leadership and discuss any differences of opinion with the Federal Local Site Leaders in private
  • Set a positive example for other review team members
  • Be prepared to work extended hours

The Federal Review Team also may include Children’s Bureau Regional Office staff from Regions other than the one responsible for the State being reviewed and State child welfare staff from States other than the State being reviewed. These are referred to as cross-State participants.

Local Site Leaders

There will typically be between four and six Local Site Leaders (or, simply, Site Leaders) allocated to each local site team. These individuals conduct interviews with local stakeholders, and they support reviewers by answering questions regarding the instrument, assisting in case reviews as necessary, and conducting quality assurance of completed instruments. The Local Site Leaders also assist in preparing for the Thursday local site exit conference and collaborate with the Team Leaders to prepare for the Friday statewide exit conference.

There are three types of Site Leaders who will be present at each local site: NRT Local Site Leaders, Federal Local Site Leaders, and State Local Site Leaders. Each local site team will have one NRT Local Site Leader and one State Local Site Leader, who share overall leadership responsibilities, and two or more Federal Local Site Leaders who assist them.

NRT Local Site Leader

NRT Local Site Leaders are Federal representatives from the National Review Team (NRT). There are four NRT Local Site Leaders assigned to each State Review Team, one for each local site team (two at the major metropolitan site, one for each of its teams, and one at each of the other two sites). Each provides, in collaboration with the State Local Site Leader, overall leadership for the local site team.

In addition to general leadership responsibilities that include overseeing and coordinating site activities, ensuring that daily time and task requirements are met, and problem-solving, the NRT Local Site Leader carries out a number of specific duties over the course of the review week.

Federal Local Site Leader

Federal Local Site Leaders are Federal representatives who assist in providing leadership at each local site. There will usually be eight Federal Local Site Leaders assigned to each State's review team, distributed equally among the four local site teams. In some cases, the individuals filling these roles may be high-performing, specially trained consultants who have served on multiple CFSRs and received advanced training designed to prepare them for leadership roles.

While the specific duties of Federal Local Site Leaders will vary from site to site, they generally assist the NRT and State Local Site Leaders in carrying out the site leadership responsibilities described above, such as supporting reviewers and conducting quality assurance of completed instruments.

State Local Site Leader

State Local Site Leaders are State agency representatives who serve as the State’s lead representatives for the review team at the local sites. There are four State Local Site Leaders on each State's review team, one for each local site team (two at the major metropolitan site, one for each of its teams, and one at each of the other two sites). The State Local Site Leaders work closely with the NRT Local Site Leaders to provide overall site leadership and share most of the same responsibilities during the review week. During stakeholder interviews, though, State Local Site Leaders typically will participate as note-takers rather than interviewers.

Reviewers

The reviewers who use the OSRI to conduct case record reviews at each local site work in pairs. Generally, each review pair consists of one State and one Federal representative. The State representative is typically a child welfare agency staff person or representative of the agency’s external partners in the CFSR planning process. The Federal reviewer is normally a Federal agency representative or a specially trained consultant with skills and experience in the child welfare field.

There are usually six or seven review pairs at each local site (that is, 12 to 14 reviewers). Each review pair typically reviews two or three cases during the review week. Although each review pair's primary responsibility while on site is to complete one of their assigned case record reviews per day, they have other responsibilities as well, such as presenting case summaries at the nightly debriefings and, at the direction of a Local Site Leader, assisting other review pairs with their case record reviews.

For reviewers, an important task early in the review week is forming a good working relationship with your partner. While it is important to begin working on your assigned cases as quickly as possible, you should take a few minutes before you begin to review your first case to get to know one another. As you and your partner work through your first case, you’ll discover that you each have different strengths. By acknowledging these, you can make the case review process more efficient.

For example, because different States organize their cases differently, a Federal reviewer may not be as familiar with specific forms or case file organization as the State reviewer, but he or she may be much more familiar with inputting data into the automated application. You may, therefore, find it beneficial to divide work responsibilities accordingly: the State reviewer might lead the initial case record review, while the Federal reviewer handles data entry.

It is important that review pairs recognize when they are not moving through a case efficiently or are having disagreements with one another about how to proceed. In these cases, the review pair should consult a Local Site Leader for guidance before a problem becomes a crisis. Furthermore, review pairs who complete their assigned case record reviews early should be prepared, at the direction of the Local Site Leader, to assist other review pairs in their own case record reviews. In short, successful review pairs must work well together, must recognize when they need guidance, and must assist the rest of the team as necessary.

Team Leaders

The State Review Team's Team Leaders are the individuals responsible for coordinating among all four of the State's local site teams. They play a key role in every aspect of the onsite review and stay in close contact with each site's NRT Local Site Leader and State Local Site Leader. They are specifically responsible for assembling and facilitating the Friday statewide exit conference and can also play an important role in the Quality Assurance process. They also handle all State-level interviews for the Stakeholder Interview Guide.

There are typically three Team Leaders for each State's review: an NRT Leader, a Regional Office Leader, and a State Leader.

NRT Leader

The National Review Team (NRT) Team Leader is a Federal agency representative who provides overall leadership for the onsite review and is a member of the NRT. The NRT comprises staff from the Children’s Bureau Central and Regional Offices who provide leadership to the review teams in planning and conducting the CFSRs.

Regional Office Leader

The Regional Office Team Leader is a Children’s Bureau Regional Office representative who assists in providing overall leadership for the onsite review.

State Leader

The State Team Leader is a State agency representative who serves as the State’s lead representative for the onsite review.

The Review Week

Once you have been selected to attend a review, JBS will work with you to make your travel arrangements. You will also receive, 2 to 3 weeks before your scheduled review, all the information you need to prepare to go on site. This information, which will include such material as the Statewide Assessment, the Preliminary Assessment, and relevant State policies, will also be available for you to access at any time on the CFSR Information Portal.

Arriving On Site

In most cases, you will arrive at the review location on the Sunday before the review. You will check into your hotel that evening. The following morning, at around 7:30, you will typically meet the rest of the review team in the hotel lobby and travel together to the review site. There, at the Monday morning team meeting, you will meet your Local Site Leaders, receive your cases, and talk through any logistical issues.

Monday Morning Team Meeting

Once you arrive at your local site, you will participate in the Monday morning team meeting, led by the NRT Local Site Leader. During this meeting, the review team will be introduced, reviewers will meet their review partners, and cases will be assigned. Reviewers will also receive information about the case interviews that were scheduled ahead of time by the State. If there is relevant information about the review specific to that site or that State, it will also be shared here.

Logistical issues will also be dealt with at the Monday morning team meeting. These will include information such as the location of each review pair's workspace, which Local Site Leaders are responsible for handling Quality Assurance, how lunch and dinner will be handled, the time of each evening's nightly debriefing, and so on. At the end of the team meeting, each Site Leader and review pair will receive a tablet PC and supporting equipment and passwords, and the review week will be officially underway.

Day-to-Day Responsibilities

Monday through Wednesday of the review week are when the bulk of the case record reviews, case-related interviews, quality assurance activities, and stakeholder interviews take place. These are long work days, typically beginning at 8 a.m. and often going until late in the evening. Everybody on the review team is expected to participate in all scheduled or assigned activities and to remain on site throughout the work day, unless it is necessary to leave for an interview. Everybody is also expected to participate in each evening’s nightly debriefing.

By Thursday morning, all of the site's OSRIs and SIGs should be completed. The NRT Local Site Leader will have uploaded these records, along with the completed Summary of Findings Form, to the central server and, together with the other Site Leaders, will begin to prepare for the local site exit conference. Following this conference, reviewers are free to leave the review site and return home. Site Leaders will travel to the site of the statewide exit conference, which is held Friday morning.

Nightly Debriefings

On Monday, Tuesday, and Wednesday evening of the review week, the NRT Local Site Leader will convene a nightly debriefing meeting for the entire local site. The purpose of the nightly debriefing is to go over the day’s activities and to provide a forum that allows each review pair to present a brief, 10-minute summary of the case they completed that day. This summary helps the review team begin to identify themes and trends across the site and also helps Local Site Leaders ensure consistency in outcome ratings among reviewers.

In preparing these case summaries, reviewers should keep in mind that, in addition to observing the strict 10-minute time limit, the summary itself should emphasize outcomes, not items. To help prepare this summary, reviewers should use the automated Nightly Debriefing Report. When this report is first opened, the automated application will have already filled in some of the general information; reviewers will need to complete the remaining information, including details about the case history and ratings. When you are working on summarizing the case ratings, remember that you are synthesizing information from the 23 item ratings to explain the 7 outcome ratings. For example, when considering Safety Outcome 1, this statement:

Safety Outcome 1 was rated as Substantially Achieved because, during the period under review, all accepted reports were responded to in a timely manner (as documented through item 1) and there was no repeat maltreatment (as documented through item 2)

is better than this statement:

The outcome was rated as Substantially Achieved because both items 1 and 2 were rated as strengths.

The second example does not provide enough information about the ratings, while the first provides important details about the specific factors that contributed toward Outcome 1 being rated as Substantially Achieved.
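A minimal sketch of this synthesis appears below. The mapping of items 1 and 2 to Safety Outcome 1 follows the example above; the ratings, rating reasons, and the simplified two-level outcome scale are hypothetical.

```python
# Item ratings and rating reasons for the items that feed Safety Outcome 1
# (hypothetical data, following the example above).
item_ratings = {
    1: ("Strength", "all accepted reports were responded to in a timely manner"),
    2: ("Strength", "there was no repeat maltreatment during the period under review"),
}

def summarize_safety_outcome_1(ratings: dict) -> str:
    """Synthesize item-level findings into an outcome-level summary."""
    # Simplified: the actual instrument uses more rating levels than two.
    if all(rating == "Strength" for rating, _ in ratings.values()):
        outcome = "Substantially Achieved"
    else:
        outcome = "Not Substantially Achieved"
    reasons = ", and ".join(reason for _, reason in ratings.values())
    return f"Safety Outcome 1 was rated as {outcome} because {reasons}."

print(summarize_safety_outcome_1(item_ratings))
```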

Remember that, for reviewers, the point of the nightly debriefing is to present relevant information about completed cases as concisely as possible and ensure that outcome ratings are consistent across the local site. The purpose of the debriefings is not to educate other team members about all the details of a case or to critique the State’s policies or practices. Rather, you should focus on your findings regarding the actions taken by the State during the period under review. 

In addition to the case summaries, Local Site Leaders who have participated in stakeholder interviews will use the nightly debriefings to briefly summarize the interviews and address the systemic factors that were explored. Again, this allows the review team to begin to identify systemic themes. The entire team will also use each night's meeting to identify problems or concerns regarding schedules, logistical arrangements, instruments, the automated application, or any other issues that may have come up over the day.

Thursday Morning Debriefing

By Thursday morning, all case record reviews and stakeholder interviews must be finalized and the instruments transferred to the NRT Local Site Leader's computer. The NRT Local Site Leader will use information from a variety of reports to complete the Summary of Findings Form and will ensure that this and all of the site's records are uploaded to the central server.

Afterward, the NRT Local Site Leader will convene a meeting of the entire local site review team. Like the nightly debriefings, this meeting is intended to provide the entire team a chance to review that site’s findings, focusing specifically on themes and trends that have emerged across cases and interviews. However, unlike the nightly debriefings, which focus only on outcome summaries for each case completed that day, the Thursday morning debriefing involves an item-by-item review of the entire site’s findings. As each item is discussed, the review pairs and Site Leaders will be invited to share relevant information of interest from either the case record review or stakeholder interview.

The Thursday Morning Debriefing will generally last until lunchtime. It is followed by the Thursday Local Site Exit Conference.

Thursday Local Site Exit Conference

Early on Thursday afternoon, following the Thursday morning debriefing, the NRT Local Site Leader will convene the local review team for a local site exit conference. During this conference, the NRT Local Site Leader will share with the local site a preliminary report on the issues and trends identified for that site. This preliminary report will include a description of the strengths and areas needing improvement identified at the local site.

In addition to the local site's review team, attendees at this exit conference may include caseworkers, supervisors, local administrators, agency staff, and other stakeholders. Following the local site exit conference, the reviewers are dismissed to return home. Site Leaders travel to the location designated for the statewide exit conference on Friday.

Friday Statewide Exit Conference

After the local site exit conference on Thursday, reviewers are dismissed to return home while Site Leaders travel to the location designated for the statewide debriefing and exit conference. These events begin on Friday morning, when the remaining review team comes together for the final statewide debriefing facilitated by the NRT Team Leader. By this time, all the finalized instruments and reports from the local sites will have been uploaded to the central server, and the preliminary data and findings from the three review sites will have been reviewed and discussed by the NRT Local Site Leaders for inclusion in the statewide exit conference presentation, which normally takes place in the afternoon.

At the exit conference, the NRT Team Leader delivers a PowerPoint presentation on the review findings for the State. The conference also includes discussion of issues that may require resolution during the Final Report development process and a brief overview of the next steps in the review process. These steps include the preparation and submission to the State of the Final Report and the development and monitoring of a Program Improvement Plan.

Once the statewide exit conference concludes, the review week comes to an official close.

Data Integrity and Quality Assurance

The CFSRs use multiple information sources to assess State performance, including data indicators, the Statewide Assessment, case record reviews, and a variety of interviews. These interviews might be with children, parents, foster parents, social workers, or other professionals working with a child; they may also involve State or other community stakeholders. All of these sources are necessary to gather both quantitative and qualitative data and to gain a comprehensive picture of the child welfare system.

Reviewers and Local Site Leaders serve as the principal data collectors during the onsite portion of the CFSRs. Both the amount of data they collect on site and the sources of those data are very important: no amount of subsequent data analysis can make up for a lack of original data quantity or quality. In other words, the data analysis required to inform both the Final Report and the Program Improvement Plan can only be as good as the original data allow. This is why the Quality Assurance (QA) process, which ensures the validation and integrity of data collected on site, is so important.

The QA process as a whole is divided into seven steps.

  1. Preliminary QA is conducted by reviewers prior to transferring completed cases to a site leader for First-Level QA. It is also conducted by site leaders before they begin First-Level QA.
  2. First-Level QA, also called Onsite QA, is the first QA review of a case by a Local Site Leader. Cases are reviewed to ensure that data collected are clean, complete, and consistent.
  3. Second-Level QA, also called State QA, is the final local case-level QA conducted on an individual case. It is conducted by site leaders with a great deal of CFSR and child welfare-related experience who are approved to do Second-Level QA by the Children's Bureau. They may be located onsite or offsite. They conduct a final review of cases to ensure that data collected are clean, complete, and consistent.
  4. Local Site Finalization is a final confirmation that all case and stakeholder data have been finalized and uploaded to the central server.
  5. Data Validation is a final review of all State data to ensure that data are clean, complete, and consistent across the State.
  6. Third-Level QA is a post-review check of case data.
  7. Data Change Management is a process to validate and update OSRI data as applicable and appropriate.

There are a number of tools designed to support this process, called collectively the "Data Integrity Materials." 
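The ordered nature of these steps can be sketched as a simple pipeline, shown below. The stage names mirror the list above; the advance helper and the no-skipping rule are illustrative assumptions, not part of the CFSR materials.

```python
from enum import IntEnum

class QAStage(IntEnum):
    """The seven QA steps, in order, as listed above."""
    PRELIMINARY_QA = 1
    FIRST_LEVEL_QA = 2
    SECOND_LEVEL_QA = 3
    LOCAL_SITE_FINALIZATION = 4
    DATA_VALIDATION = 5
    THIRD_LEVEL_QA = 6
    DATA_CHANGE_MANAGEMENT = 7

def advance(stage: QAStage) -> QAStage:
    """Move a case to the next QA step; steps proceed strictly in order."""
    if stage is QAStage.DATA_CHANGE_MANAGEMENT:
        raise ValueError("QA process is already at its final step")
    return QAStage(stage + 1)

print(advance(QAStage.PRELIMINARY_QA))  # QAStage.FIRST_LEVEL_QA
```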

Preliminary QA

Before transferring an OSRI to a designated Local Site Leader for First-Level QA, reviewers should conduct Preliminary QA to make sure the case is complete and accurate. Preliminary QA consists of the following processes:

Review the OSRI. Review instructions and definitions and carefully re-read the Main Reason statements and follow-up questions to ensure that all documentation requested is addressed appropriately. Use the OSRI Quality Assurance Guide and the Combined QA Tip Sheet to ensure that you have responded to item questions appropriately and have correctly documented item ratings.

Review the Completeness Column. Check the Completeness column on the Overview Screen of the automated application. The Face Sheet, OSRI, Documentation, and Items Rated should indicate that all items are 100 percent complete.

Check the Unanswered Questions Navigator. The Unanswered Questions Navigator in the bottom left corner displays all unanswered item and rating documentation questions. If the case is complete, no questions should appear.

Examine the tablet reports. The automated application features various reports that allow you to review the entire instrument for clean and complete data as one continuous document, rather than going screen by screen. These include the Completed Case Report, the Preliminary Case Summary Report, and the Case QA Rating Summary Report.

Once you’ve verified, using the methods above, that the instrument is complete, you must fill out the Case Finalization Checklist and turn it in to the onsite JBS representative. At this point, Preliminary QA is complete and you are ready to proceed to First-Level QA.
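The checks described above amount to a simple completeness test, sketched below. The dictionary and list arguments are hypothetical stand-ins for what the automated application's Completeness column and Unanswered Questions Navigator display.

```python
def preliminary_qa_passes(completeness: dict, unanswered_questions: list) -> bool:
    """Return True only if every tracked section is 100 percent complete
    and the Unanswered Questions Navigator is empty."""
    sections = ("Face Sheet", "OSRI", "Documentation", "Items Rated")
    all_complete = all(completeness.get(section, 0) == 100 for section in sections)
    return all_complete and not unanswered_questions

# Hypothetical example: one section short of 100 percent fails the check.
print(preliminary_qa_passes(
    {"Face Sheet": 100, "OSRI": 98, "Documentation": 100, "Items Rated": 100},
    unanswered_questions=[],
))  # False
```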

First-Level QA

First-Level QA is the first quality assurance review of a case by a Local Site Leader. The specific steps involved in First-Level QA vary depending on your role.

First-Level QA for Reviewers

Reviewers who are going through the First-Level QA process will use the following steps:

  1. Change the status of the case: Before you can transfer your case to a Local Site Leader to begin First-Level QA, you must change its QA Status to "Ready for QA Review."
  2. Locate a site leader to transfer the case: Once the QA Status has been changed, locate the appropriate site leader or his or her designee. The site leader will transfer the case to his or her tablet to conduct First-Level QA.
  3. Case becomes locked: Once the site leader transfers your case to his or her tablet, it will become inactive on your tablet and grayed-out in the Record Summary Grid. You can still navigate the case to access information or to prepare the Nightly Debriefing Report, but you will not be able to edit any information until the site leader returns the case to you.
  4. Receive case from site leader after review: When the site leader has finished conducting QA on the case, he or she will notify you and return the active case to your tablet.
  5. Refresh the Overview Screen: The first thing you’ll want to do when the case is transferred back to your tablet is refresh the Overview Screen. When the case refreshes, you’ll see that it no longer appears as dark gray. This means it’s now unlocked and can be edited normally again. In addition, the case’s QA Status will have been changed to “QA Review Complete.”
  6. Locate stickies: Now you will need to find the comments, or "stickies," left by the site leader. These are the application’s version of the actual sticky notes that were added to the instrument during the first round of reviews, when everything was done in hard copy. The Unresolved Comment Navigator makes this easy by displaying all comments and allowing you to jump immediately to any sticky note.
  7. Address each sticky: As you work through the stickies, be sure to address each problem indicated in the note, making any corrections or additions necessary. If you don't fully understand or agree with the comments in a sticky, be sure to discuss the issue with the site leader.
  8. Respond to each sticky: After you take all necessary steps to address a sticky, be sure to write a response. Your response might be as brief as "done" or "fixed," or, if necessary, it might briefly summarize the actions you took to address the issues raised by the sticky.
  9. Return case to site leader: When you've addressed the issues in every sticky and left a response for each one, you will return the case to the site leader so that he or she can review your changes and comments.
  10. Resolving stickies: Once the site leader is satisfied that the issue has been adequately addressed, he or she will mark the comment as resolved. When a comment is resolved, it disappears from the Unresolved Comment Navigator. If, however, the site leader determines that not all of the comments were appropriately addressed, he or she will return the case to you again. This process may continue, or he or she may discuss the issues with you face to face, until all comments are successfully resolved.

When all stickies are resolved, First-Level QA is complete and the case is ready for Second-Level QA.
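The sticky-resolution loop described in the steps above can be sketched as follows. The Sticky class and helper are hypothetical; the automated application's actual data model is not documented here.

```python
from dataclasses import dataclass

@dataclass
class Sticky:
    """An electronic comment attached to a question in the instrument."""
    question: str
    comment: str
    response: str = ""      # the reviewer's reply, e.g., "done" or "fixed"
    resolved: bool = False  # set by the site leader once satisfied

def unresolved(stickies: list) -> list:
    """Mirror the Unresolved Comment Navigator: resolved stickies drop out."""
    return [s for s in stickies if not s.resolved]

# The case moves back and forth between reviewer and site leader until
# this list is empty, at which point First-Level QA is complete.
stickies = [Sticky("Item 1", "Clarify the source of the response date.")]
stickies[0].response = "Added the source to the Main Reason statement."
stickies[0].resolved = True
print(unresolved(stickies))  # []
```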

First-Level QA for Site Leaders

Specifically designated Local Site Leaders are responsible for performing First-Level QA on OSRIs as reviewers complete them. To perform First-Level QA accurately, the site leader should complete both an initial review and an in-depth review.

Initial Review

The initial review is an opportunity for the site leader to become familiar with the case and quickly review the instrument. Follow these steps to complete your initial review:

  1. Confirm Completeness and Preliminary QA. Before you transfer a case to begin First-Level QA, make sure the case is complete by checking the Completeness column. Also verify with the review pair that they have completed Preliminary QA on the case.
  2. Transfer the case. You will need to transfer the case to your tablet to conduct First-Level QA. Before transferring a case, ensure that the reviewers have changed the case's QA Status to "Ready for QA Review." Also remember that after you transfer a case, reviewers will be able to access the case but not edit it until you have returned the case to them. JBS technical support staff on site will assist you in your first transfer, which you will find to be a very simple process.
  3. Check sample appropriateness. Use the Case Elimination Guidelines, located on the ACF website, to make sure that the case belongs in the sample. If, after review, it appears that the case does not belong, you may need to consult with the NRT Local Site Leader about eliminating it from the sample.
  4. Add stickies where necessary. As you begin to review the case, add stickies to any questions that need correction, clarification, or additional information.
  5. Gain a basic understanding of the case. You’ll want to start to gain a basic understanding of the case as quickly as possible. Different site leaders employ different strategies for learning about the case details. One particularly effective strategy is to scan the Face Sheet, items 1–4 of the instrument, item 7, and item 17. Site leaders will go back over each of these items in more detail later in the QA process; however, if reviewers have appropriately completed these items, they should provide a basic overview of the case.
  6. Take basic notes. Site leaders have also found it useful to begin taking notes from the Face Sheet at this time. Noting some basic information, such as the age of the target child or the ages of the children in the home and the date that the case was opened for services, can help the site leader get an overview of the case and prepare to identify missing information.
  7. Identify missing information. Next, you’ll quickly review the case for missing information. The Combined QA Tip Sheet, particularly the sections that provide information on general errors in the body of the instrument and the Face Sheet, should be your guide.

In-Depth Review

Once you’ve completed the initial review of the case, you should begin reading through the case more carefully, paying special attention to the Rating Documentation for each item and using the item-specific errors and correlations section of the Combined QA Tip Sheet. Use the following steps to complete your in-depth review of the case:

  1. Examine Main Reason statements. Each Main Reason statement should be clear, concise, and focused. Each statement should address only the issue being assessed in the item. It must also be consistent with the item’s rating and should contain enough information to fully explain or justify the rating.
  2. Check exploratory questions. Some of the information needed to fully explain the rating is covered in the follow-up questions, so you need to make sure that the reviewers have addressed them all fully and adequately in the Main Reason statement.
  3. Check for sources. You should also ensure that the Main Reason statement indicates the source or sources of the information—for example, the case record or an interview with parents. Again, the QA Tips should be used to help you here.
  4. Check for consistency throughout. Also note that there should be no contradictory information between items throughout an instrument. The QA Tips should be used to ensure that you have reviewed each case for discrepancies in the information provided from one item to another. For example, the Face Sheet might list three siblings of a targeted foster child, but later items, such as items 12 and 13, might reference only two siblings. Similarly, item 17 might reference two fathers, but item 19 might reference only one. Inconsistency across items can be the most difficult and time-consuming problem to identify.
  5. Transfer back to Reviewers. Once you have completely reviewed the case and added the necessary stickies throughout, you should change the case's QA status to "QA Review Complete" and then transfer the case back to the reviewers so that they can begin addressing your comments.
  6. Resolve Stickies. Once the review pair has finished addressing your comments, they will inform you that the case is ready, and you will again transfer it to your tablet. At this point, you will review the stickies you added. If further changes are necessary, you can add to the sticky and send it back for further correction or meet with the review pair in person to discuss the problem. If a sticky was adequately addressed, you should resolve it to remove it from the list.

Once all of its stickies have been resolved, the case is ready to move on to Second-Level QA.

Second-Level QA

Second-Level QA, sometimes referred to as State-Level QA, is the final local case-level QA conducted on an individual case. It is conducted by site leaders who have a great deal of CFSR and child welfare-related experience and are approved to conduct Second-Level QA by the Children's Bureau. These site leaders may be located on or off site. If the site leader is located on site, then the review pair may work directly with him or her during Second-Level QA in a process identical to that used during First-Level QA. If the site leader is off site, the overall process remains the same, but the case must first be transferred to an onsite site leader's tablet before it can be moved to the offsite leader.

To conduct Second-Level QA, site leaders conduct a final review of cases (following the steps outlined in Preliminary QA and First-Level QA) to ensure that data collected are clean, complete, and consistent. In addition, they place special emphasis on consistency across cases in the site and on State-specific issues that may arise. If they discover issues that may affect all sites, they will communicate with the NRT Local Site Leader so that he or she can discuss such issues with other NRT Site Leaders in order to ensure consistency throughout the State.

During Second-Level QA, the site leader may add additional stickies to the case, which will require that reviewers go through the same process as in First-Level QA to find and address the stickies. Once all stickies from First and Second-Level QA are resolved, the case is considered to be finalized and ready for Local Site Finalization.

Local Site Finalization

Local Site Finalization is completed by a JBS representative on site on the Thursday immediately prior to the NRT Local Site Leader’s transfer of cases to the central server. This is a final confirmation that all case and stakeholder data have been finalized and uploaded to the central server.

Data Validation

Data Validation takes place on Thursday evening and Friday morning. Federal staff and consultants work together to conduct a final review of State data to ensure that data are clean, complete, and consistent across the State.

Third-Level QA

Third-Level QA is a post-review check of the data by the Final Report writer assigned to the case following the review week.

Data Change Management

Data Change Management involves Final Report writers, the CB Central and Regional Office, and JBS staff collaborating to validate and update OSRI data as applicable and appropriate. This process includes tracking and getting CB approval for any corrections that need to be made as a result of Third-Level QA.

The Onsite Review Instrument

The Onsite Review Instrument (OSRI) is the tool that reviewers use to collect information during their case record review. It is organized into a Face Sheet, which is used to document general case information, and three sections that correspond to the outcome domains of safety, permanency, and child and family well-being. Each outcome domain is further divided into individual outcomes (for example, Safety Outcome 1 and Safety Outcome 2), which are themselves divided into individual items that relate to the outcome.

The OSRI is used to rate both in-home and foster care cases. For in-home cases, the safety and well-being sections are completed for all the children in the family, and the permanency section is not applicable. For foster care cases, the safety section is completed for all the children in the family, but the permanency and well-being sections are completed only for the target child. In both instances, though, reviewers must distinguish between events that took place over the life of the case and events that took place during the period under review.

Each review pair completes one OSRI per case assigned, assessing and rating items based on information and standards provided in the instrument instructions. They draw equally from two information sources to complete the instrument: documentation from the case record itself, and case-related interviews with children, parents, foster parents, caseworkers, service providers, and other professionals knowledgeable about the case. As the reviewers complete the instrument, item and outcome ratings are assigned and rating documentation must be provided to support those ratings.

It is essential that reviewers become thoroughly familiar with the entire OSRI before arriving on site for the review week. While there are some differences between the layout of the paper instrument and the automated OSRI that will be used on site, learning the paper instrument will provide the foundation of knowledge that reviewers require to work efficiently while on site.

Structure of the OSRI

The hard copy of the OSRI contains a few elements that do not appear in the automated instrument. These elements include General Instructions at the beginning of the instrument and some slightly different formatting in a few of the questions. For the most part, though, the automated instrument and hard copy are identical.

The first part of the OSRI is the Face Sheet, which lists basic information about the case being reviewed, such as the target child’s name, the names of other children involved in the case, the reason for agency involvement, key dates of service, and individuals interviewed during the case review. These individuals are not identified by name on the Face Sheet, but rather by their relationship to the case.

Following the Face Sheet is the main body of the instrument, which is divided into three main sections that are organized by outcomes. These sections, or outcome domains, form the basis of the CFSRs: safety, permanency, and child and family well-being. For each outcome, reviewers collect information on a number of items related to that outcome. The instrument has a total of 7 outcomes and 23 corresponding items organized in the following manner:

  • There are two outcomes and four items under the safety section
  • There are two outcomes and twelve items under the permanency section
  • There are three outcomes and seven items under the child and family well-being section

Each item is identified by an item number and the area that the item assesses. Each item also has a Purpose of Assessment, which clearly identifies the purpose of the information being collected for the item. Each item’s Definitions and Instructions are incorporated into the individual questions. These instructions are very detailed and specific, and are intended to help clarify complicated issues and assist reviewers in making correct decisions regarding answers to each question.

In addition to the outcomes and item questions, the OSRI also includes Rating Documentation questions that must be completed by the reviewers before the case can be considered complete. These rating documentation questions, which include a Main Reason statement and various follow-up questions, are used to provide an explanation and sources of information that justify each item's and outcome's rating.

The Face Sheet

The Face Sheet is the first part of the OSRI. It is used to document general information about the case, such as the case type, the names of the children, the target child, the date the case was opened, and so on. It must be completed regardless of whether the case is a foster care or in-home services case.

The Face Sheet is one of the only parts of the instrument where full proper names can be used. Items 1 and 12 also have questions that require the input of a child's name, but in both of those cases only the child’s first name should be used. No surnames should ever appear anywhere in the instrument except on the Face Sheet, and for the remainder of the instrument (excepting Items 1 and 12) all proper names should be replaced with titles. Examples include "biological mother," "target child," "caseworker," "adoption agency," and so on.

Note that, unlike the items that make up the bulk of the instrument, the Face Sheet has no Rating Documentation attached to it. The Face Sheet itself is not a rated item, and as such should not be considered an “item” in the instrument at all.

Note also that there are a few differences between the paper and automated versions of the Face Sheet. Questions A through E, for example, exist only on the paper version. The chart in the electronic version of Question F has more columns than the paper copy does, and there are also follow-up questions in the electronic version (Questions K1, L1, M1, and M2) that do not exist in the paper version.

 

The Outcomes

The OSRI is divided into three sections that are themselves organized around the three outcome domains that form the basis of the CFSRs: Safety, Permanency, and Child and Family Well-Being. Each outcome domain is divided into specific outcomes: Safety Outcomes 1 and 2; Permanency Outcomes 1 and 2; and Well-Being Outcomes 1, 2, and 3. For each outcome, reviewers collect information on a number of “items” related to that outcome. Each item focuses on a specific Purpose of Assessment, and is further subdivided into individual questions.

Once all of an item’s questions are answered, the application will automatically rate the item as a Strength, an Area Needing Improvement, or Not Applicable. When an item has been rated, the reviewers must complete its Rating Documentation questions. These questions provide evidence and contextual support for the item rating. When all of an outcome’s items have been rated and its Rating Documentation completed, the outcome itself receives a rating. Once all items and outcomes have been rated, and all supporting documentation is included, the instrument is complete.

Note that while reviewers will use the OSRI to review both foster care and in-home services cases, they will complete the Permanency section only if the case under review is a foster care case. For children in foster care, reviewers should consider both Safety Outcomes (items 1 through 4) for all children in the family, but complete the Permanency Outcomes (items 5 through 16) and the Well-Being Outcomes (items 17 through 23) only as they apply to the specific child whose case is under review.

For children receiving in-home services, reviewers should apply the Safety and Well-Being Outcomes to all the children in the family who are both residing with and included in services to the family.

Safety

The safety domain is divided into two separate outcomes: Safety Outcome 1 and Safety Outcome 2. Safety Outcome 1 is "Children are, first and foremost, protected from abuse and neglect." It is composed of item 1 and item 2.

Safety Outcome 2 is "Children are safely maintained in their homes whenever possible and appropriate." It consists of item 3 and item 4.

In general, the Safety domain is concerned with the following questions:

  • Did the agency respond quickly to reports of child abuse and neglect and take immediate steps to protect the children in the home?
  • Once involved, did the agency make sure that children were not abused or neglected again?
  • Did the agency provide services to make sure that children did not enter or re-enter foster care?

Item 1

Timeliness of Initiating Investigations of Reports of Child Maltreatment

Purpose of Assessment: To determine whether responses to all accepted child maltreatment reports received during the period under review were initiated within the timeframes established by agency policies or State statute, and face-to-face contact with the child was made within those timeframes.

Item 1 investigates the agency’s timeliness of initiating investigations of reports of child maltreatment. We use it to determine whether all accepted reports during the Period Under Review:

  • were initiated within timeframes established by agency policy or State statute, and
  • involved face-to-face contact with the child during those timeframes.

An important tip to keep in mind for item 1 is:

  • This is the only item where the State’s policies are the criteria used to assess the entire item.
     

Item 2

Repeat Maltreatment

Purpose of Assessment: To determine if any child in the family experienced repeat maltreatment within a 6-month period.

Item 2 investigates repeat maltreatment. It is used to determine whether there were two or more substantiated maltreatment reports within a single 6-month period. At least one of the reports must have occurred during the PUR and involved similar circumstances.

So, if there is at least one substantiated or indicated report during the PUR, you look to see if there are any other reports received and substantiated within 6 months of that first report. Please note that the second report could have occurred prior to the PUR. This is one of the few items where you are asked to look outside the PUR.
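
For readers who find pseudocode helpful, the 6-month window can be sketched as follows. This is a minimal illustration in Python, assuming simple date values; the function and parameter names are hypothetical and are not part of the CFSR Data Management Application.

    from datetime import date, timedelta

    def has_repeat_maltreatment(report_dates, pur_start, pur_end):
        """Return True if two substantiated reports fall within a single
        6-month window and at least one of the pair is inside the PUR.
        Illustrative only; 6 months is approximated as 183 days."""
        window = timedelta(days=183)
        dates = sorted(report_dates)
        for i, first in enumerate(dates):
            for second in dates[i + 1:]:
                within_window = second - first <= window
                touches_pur = (pur_start <= first <= pur_end or
                               pur_start <= second <= pur_end)
                if within_window and touches_pur:
                    return True
        return False

    # Example: one report about 3 months before the PUR, a second during it.
    print(has_repeat_maltreatment(
        [date(2007, 1, 10), date(2007, 4, 20)],
        pur_start=date(2007, 4, 1), pur_end=date(2007, 10, 5)))  # True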
 

Item 3

Services to Family to Protect Child(ren) in the Home and Prevent Removal or Re-Entry into Foster Care

Purpose of Assessment: To determine whether, during the period under review, concerted efforts were made to provide services to the family to prevent the child(ren)’s entry into foster care or re-entry after a reunification.

Item 3 looks specifically at whether or not the child welfare system made concerted efforts to provide services to:

  • prevent the removal of a child from his or her home, or
  • prevent a re-entry into foster care following reunification

Some important tips to keep in mind for this item are:

  • It focuses on services.
  • You must consider the entire PUR, even if children came in or out of care.

This item also examines whether there was an emergency removal and, if so, whether that removal was necessary to ensure safety.
 

Item 4

Risk Assessment and Safety Management

Purpose of Assessment: To determine whether, during the period under review, the agency made concerted efforts to assess and address the risk and safety concerns related to the children in their own homes or while in foster care.

In item 4, we are determining whether all risk and safety issues have been identified and addressed for a child, regardless of whether that child is at home or in foster care.

We are assessing the child welfare system’s efforts to both:

  • assess safety and risk both initially and during the Period Under Review, and
  • address identified safety and risk needs for the child initially and during the Period Under Review.

Keep in mind this distinction between risk and safety:

  • Risk is the likelihood that a child will be maltreated in the future.
  • Safety refers to imminent danger to the child.

An important tip to keep in mind is that assessments and plans need not be formal. For example, even if your State uses a specific document to assess risk and safety and that document is not in the case file, case notes and interviews may provide evidence that informal assessments of risk and safety were made. This evidence may include child observations, discussions with the child and caregivers, and observations of surroundings. These informal assessments are acceptable provided you and your partner determine that they have adequately identified and addressed risk and safety issues.

Note, though, that you must not only decide on the adequacy of the assessments, but also how effectively risk and safety concerns were managed during the PUR. In other words, were services effective in mitigating risk and safety issues?

Permanency

The permanency domain is divided into two separate outcomes: Permanency Outcome 1 and Permanency Outcome 2. Together, they contain 12 performance items.

Permanency Outcome 1, "Children have permanency and stability in their living situations," consists of item 5, item 6, item 7, item 8, item 9, and item 10. For Items 7 through 10, please remember this very important point: Item 7 assesses the appropriateness and timeliness of the goal or goals, while Items 8, 9, and 10 assess achievement of those goals.

Permanency Outcome 2, "The continuity of family relationships and connections is preserved for children," consists of item 11, item 12, item 13, item 14, item 15, and item 16.

In general, the permanency domain is concerned with the following questions:

  • Did the agency make good decisions to return a child to parents and provide services to prevent re-entry?
  • Is the child in a stable placement now, and how many placement changes did the child experience? If appropriate, was the child placed in the same foster home as his or her siblings? Was relative placement explored, and did it happen?
  • Were a permanency goal and all subsequent goals established in a timely manner, and were the goals appropriate?
  • Did the agency make concerted efforts to achieve the goal?
  • Was the child placed close enough for parents to have ongoing contacts? Did the agency make sure that visits occurred frequently enough?

Item 5

Foster Care Re-Entries

Purpose of Assessment: To assess whether children who entered foster care during the Period Under Review were re-entering within 12 months of a prior foster care episode.

Item 5 looks at re-entries into foster care, so the main question is whether a child who entered care during the PUR is entering again within 12 months of discharge from a prior foster care episode.
If there was a re-entry, you’re asked to determine if the agency made concerted efforts to prevent this. If the child entered care prior to the PUR and remained in care during the PUR, then this item is always NA.

As with repeat maltreatment, if the child entered foster care during the PUR, you look at the 12 months before and after that entry to see whether there is a second entry into care. This is another instance where you have to look outside of the PUR.
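
The 12-month re-entry window can be sketched in the same spirit as the earlier repeat maltreatment example. Again, this is a minimal Python illustration with hypothetical names, not the Data Management Application's actual code.

    from datetime import date, timedelta

    def is_reentry(pur_entry_date, other_episode_dates):
        """Return True if any other foster care episode (for example, the
        discharge date of a prior episode or the start of a later one)
        falls within 12 months of the entry that occurred during the PUR.
        Illustrative only; 12 months is approximated as 365 days."""
        window = timedelta(days=365)
        return any(abs(pur_entry_date - d) <= window
                   for d in other_episode_dates)

    # Example: the child re-entered care 8 months after a prior discharge.
    print(is_reentry(date(2007, 9, 1), [date(2007, 1, 5)]))  # True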

Item 6

Stability of Foster Care Placement

Purpose of Assessment: To determine if the child in foster care is in a stable placement at the time of the onsite review and whether any changes in placement that occurred during the Period Under Review were in the best interest of the child and consistent with achieving the child’s permanency goals.

Item 6 addresses two issues. First, it explores whether there have been any changes in placement during the PUR, as well as the reasons for those changes. Second, it examines whether the current placement is stable.

For this item, you need to remember that moves to higher levels of care because a child’s mental/behavioral health needs have increased are not necessarily seen as moves that further the child’s case goals; rather, in this situation, you must explore why the child’s mental/behavioral health needs are increasing, whether the child is being placed in appropriate settings, the commitment of caregivers, and so on.

Some important tips to keep in mind for item 6 include the following:

  • Stability and instability are defined around the likelihood of an unplanned disruption in the foreseeable future. A foster home placement is, by design, not permanent, and a move from a foster home may or may not be considered as contributing to the achievement of the child’s case goals. It depends on the circumstances. For example, if a child is moved to be placed with a sibling or into a relative home, you would probably consider that as being in the child’s best interest and furthering case goals. However, if the foster parents requested that the child be moved because they couldn't cope with the child's behavior, then you probably would not consider the move as furthering the child’s case goals.
  • If a youth in care is held in detention in a locked facility, this does not count as a placement setting.
     

Item 7

Permanency Goal for Child

Purpose of Assessment: To determine whether appropriate permanency goals were established for the child in a timely manner.

For item 7, we need to determine:

  • what the permanency goal is,
  • whether the goal is appropriate, and,
  • whether it was established in a timely manner.

We also look at the ASFA requirement for filing for termination of parental rights (TPR). How you answer this item will determine how you complete items 8, 9, and 10. For example, if you identified concurrent goals of reunification with parents and adoption, you would complete items 8 and 9 but not item 10.

An important tip to remember for item 7 is:

  • This item does not consider whether or not the goal was achieved. Achievement is measured in items 8, 9, and 10.

Item 8

Reunification, Guardianship, or Permanent Placement with Relatives

Purpose of Assessment: To determine whether concerted efforts were made, or are being made, during the Period Under Review, to achieve reunification, guardianship, or permanent placement with relatives in a timely manner.

Item 8 determines whether the goal of reunification, guardianship, or permanent placement with relatives was achieved in a timely manner. We look at what kind of concerted efforts were made to achieve the goal, how any barriers were addressed, and so on. Generally, for a reunification, we’re looking at a 12-month period in which to achieve reunification. Note, however, that this is a general time frame; sometimes, depending on the case circumstances, a goal of reunification, guardianship, or permanent placement with relatives should or could have been achieved sooner. The instrument instructions will guide you in determining this.

Item 9

Adoption

Purpose of Assessment: To determine whether, during the Period Under Review, concerted efforts were made, or are being made, to achieve a finalized adoption in a timely manner.

As with item 8, item 9 is used to determine whether the established goal of adoption was achieved in a timely manner. For item 9, we’re looking for achievement of a legally finalized adoption within 24 months of the child coming into care.
 

Item 10

Other Planned Permanent Living Arrangement

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to ensure:

  • That the child is adequately prepared to make the transition from foster care to independent living
  • That the child, even though remaining in foster care, is in a “permanent” living arrangement
  • That the child is in a long-term care facility and will remain in that facility until transition to an adult care facility

Item 10 is used to determine what efforts are being made to achieve permanency for children with this particular permanency goal.

This goal may not be specified in the written case record using the specific term OPPLA, as some States use different terminology, such as Independent Living or Emancipation. This goal refers to a situation in which the State maintains care and custody responsibilities for the child, but places the child in a setting in which the child is expected to remain until adulthood, such as with foster parents who have made a commitment to care for the child permanently or with relatives who have made the same commitment.

In this item, you’ll be looking for formal or informal agreements around the goal of “other planned permanent living arrangement.” If a State has a policy that a signed agreement or court order is necessary to validate a planned permanent living arrangement for a child, then the presence or absence of that agreement would be noted in the instrument. If no signed agreement is required by the State, then reviewers would need to validate the living arrangement through information in the case file and/or through interviews.

There are two important tips to keep in mind for item 10:

  • This item also assesses whether or not the child has been provided with appropriate independent living services.
  • This is the second, and last, instance where a State’s policy would be considered in the item’s rating.

Item 11

Proximity of Foster Care Placement

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to ensure that the child’s foster care placement was close enough to the parent(s) to facilitate face-to-face contact between the child and the parent(s) while the child was in foster care.

Item 11 is used to assess whether the location of the child’s foster care placement makes it possible for his or her parents to visit.

Some tips to keep in mind for item 11 include:

  • Generally, a travel distance of less than 1 hour is considered close enough to facilitate face-to-face contact.
  • If the parents live separately from each other, this item is assessed using the residence of the parent with whom it’s most likely that the child will be reunified.

Item 12

Placement with Siblings

Purpose of Assessment: To determine if, during the Period Under Review, the agency made concerted efforts to ensure that siblings in foster care are placed together unless a separation was necessary to meet the needs of one of the siblings.

Item 12 asks whether all of the siblings in foster care were placed in the same home. If they were not, it explores the reason.

Keep in mind these tips for item 12:

  • A lack of resources is not justification for separating siblings, unless there is a large group of five or more children.
  • One valid reason for separating siblings in foster care would be to place them with different paternal relatives.  
  • Siblings may be separated to provide specialized services, but concerted efforts must be made to reunite them once those services are no longer needed.
  • This item looks specifically at siblings placed in foster care. Siblings who are not in foster care are addressed in a later item.
     

Item 13

Visiting With Parents and Siblings in Foster Care

Purpose of Assessment: To determine if, during the Period Under Review, the agency made concerted efforts to ensure that visitation between a child in foster care and his or her mother, father, and siblings is of sufficient frequency and quality to promote continuity in the child’s relationship with these family members.

Item 13 assesses the frequency and quality of visitation between the child in foster care and his or her parents and siblings placed separately in foster care.

Some tips to keep in mind for item 13 are:

  • It refers only to siblings placed separately in care from the target child during the PUR.
  • If visitation did not occur, then the quality of that visitation must be recorded as NA.

Item 14

Preserving Connections

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to maintain the child’s connections to his or her neighborhood, community, faith, language, extended family, tribe, school, and friends.

Item 14 looks at the efforts the agency made to maintain and reinforce the personal connections that were important to the child when he or she came into care, such as neighborhood, school, friends, and faith. This item also assesses compliance with the requirements of the Indian Child Welfare Act (ICWA).

Connections to parents or siblings separated in care should not be included in this item; that’s assessed in other items. However, this is where you would address connections with siblings not in care, such as an adult brother or sister.

Important tips for item 14:

  • If a child has been in care for a considerable length of time, connections maintained or not maintained to previous foster parents or foster siblings would be assessed in item 17A. Item 14 focuses only on connections the child had prior to coming into care.
  • Question 14B, which collects information about Tribal membership, does not affect the item’s rating.

Item 15

Relative Placement

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to place the child with relatives when appropriate.

Item 15 asks whether the child was placed with maternal or paternal relatives. If he or she was not, it explores the reason.

Here are some important tips to keep in mind for item 15:

  • Assessments of relatives can be both formal and informal.
  • This item, like most, explores placement or efforts during the PUR.
  • Even if a State has a formal assessment process, an informal assessment is adequate to rate item 15 as a Strength.

A difficult question to answer for this item is: at what point is it okay for the agency to stop looking for relatives altogether? This, of course, will depend on the circumstances of the case, and will be something you will need to discuss with your peer reviewers and Team Leaders. Many times, it is appropriate to periodically attempt to locate or reassess relatives, even if they were initially determined to not be appropriate.

Item 16

Relationship of Child in Care with Parents

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to promote, support, and maintain positive relationships between the child in foster care and his or her parents or primary caregivers through activities other than visitation.

Item 16 is used to determine what else the agency did, other than facilitate visitation, to help the child in foster care maintain positive relationships with his or her parents or primary caregivers. In other words, did the agency include the parents in the child’s medical appointments, school activities, and special events?

Keep in mind the following tip for item 16:

  • Even though the instrument instructions are clear, it’s fairly common for reviewers to include parent/child visitation in this item, even though that has already been assessed in item 13. Remember that item 16 refers to activities other than visitation.
     

Well-Being

The well-being domain is divided into three separate outcomes: Well-Being Outcome 1, Well-Being Outcome 2, and Well-Being Outcome 3. Together, they encompass seven performance items.

Well-Being Outcome 1 is "Families have enhanced capacity to provide for their children's needs." It consists of item 17, item 18, item 19, and item 20. Item 17 is itself divided into four separate sub-sections: 17A, 17B, 17C, and 17 overall.

Well-Being Outcome 2 is "Children receive appropriate services to meet their educational needs." It is the only outcome in the instrument that has only one item, item 21.

Well-Being Outcome 3 is "Children receive adequate services to meet their physical and mental health needs." It consists of item 22 and item 23.

In general, the Well-Being domain is concerned with the following questions:

  • Did the agency do a thorough assessment of the needs of the child, family, and foster family, and provide the services necessary to ensure the child’s well-being?
  • Did the agency make sure that the child’s physical, educational, and mental health needs were met?
  • Were the child and the family really involved in case planning?
  • Did the caseworker meet often enough with the child, parents, and foster family to ensure that the child was safe and that everyone was working toward the goal?

In-Home Cases and Items 21-23

It is important to understand that items 21 (Well-Being Outcome 2), 22, and 23 (Well-Being Outcome 3) are not necessarily applicable for in-home cases. The instrument gives specific instructions for when these items will be applicable for in-home situations; the items are applicable if educational, physical health, or mental health issues were one of the reasons the case came to the agency’s attention, or if these issues presented while the case was open and the caseworker would reasonably be expected to be involved. So let the instrument be your guide in determining whether these three items are applicable for in-home cases.

Records for Items 21-23

Even if State policy requires that educational, medical, dental, or mental health records be kept in the case file, if you can determine through your case-related interviews or the case record that the child’s needs were met, the actual records are not required in order to rate these items as Strengths.
 

Item 17

Needs and Services of Child, Parents, and Foster Parents

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to assess the needs of children, parents, and foster parents at the child’s entry into foster care and on an ongoing basis to identify the services necessary to achieve case goals and address the issues relevant to the family, and whether the agency provided the appropriate services to address the identified needs.

Item 17 is divided into four sections. Section 17A addresses needs assessment and services to children, 17B addresses needs assessment and services to parents, and 17C addresses needs assessment and services to foster parents. For each of these sections, we need to determine whether initial and ongoing needs assessments were conducted and whether appropriate services were provided to meet the identified needs. You’ll also need to look at what the agency did to engage the various parties in services, and whether services were effective in meeting the need.

The final section, 17 overall, is where you provide a summary of the three other sections.

Two tips to keep in mind while completing item 17 are:

  • Section 17A shouldn’t include any information related to educational, physical, or mental health needs or services, as those areas are assessed in later Well-Being items. Examples of needs and services that should be addressed include socialization activities, preparation for adoption, services to enhance self-esteem, and engagement with a mentor as a role model.
  • Your overall item rating Main Reason statement should not contain information that was not provided in 17A, 17B, or 17C. Section 17 overall is a short summary of those three sections.

Item 18

Child and Family Involvement in Case Planning

Purpose of Assessment: To determine whether, during the Period Under Review, efforts were made to involve parents and children in the case planning process on an ongoing basis.

Item 18 assesses whether parents and children were genuinely and actively involved in case planning. If they were not, it explores the reasons. In other words, were children and families involved in identifying strengths and needs, services, goals, and so on, and were they involved in assessing progress toward meeting case goals?

One tip to keep in mind for item 18 is:

  • This item does not assess foster parents’ involvement in case planning.

Item 19

Caseworker Visits With Child

Purpose of Assessment: To determine whether, during the Period Under Review, the frequency and quality of visits between caseworkers and the child(ren) in the case are sufficient to ensure the safety, permanency, and well-being of the child and promote achievement of case goals.

Item 19 addresses whether the caseworker’s visits with the child were of sufficient frequency and quality to ensure the safety, permanency, and well-being of the child and promote achievement of case goals. Keep in mind that frequency is assessed by the standards provided in the instrument instructions, not by the policy of the State being reviewed.

Item 20

Caseworker Visits With Parents

Purpose of Assessment: To determine whether, during the Period Under Review, the frequency and quality of visits between caseworkers and the mothers and fathers of the children are sufficient to ensure the safety, permanency, and well-being of the children and promote achievement of case goals.

Item 20 addresses whether the caseworker’s visits with the parents were of sufficient frequency and quality to address the child’s needs and to achieve the goals of the case. Again, frequency is assessed by the standards in the instrument, not by policies of the State.

Item 21

Educational Needs of the Child

Purpose of Assessment: To assess whether, during the Period Under Review, the agency made efforts to assess children’s educational needs at the initial contact with the child and on an ongoing basis, and whether identified needs were appropriately addressed in case planning and case management activities.

Item 21 is used to determine whether the agency identified the educational needs of the child, both initially and on an ongoing basis, and tried to arrange for services to address the identified needs.

Note that the focus of this item is on the agency's response, not on the educational services themselves. This is the one instance in which the reviews do not hold the State child welfare agency fully accountable for the delivery of services to children. States are given a little more leeway on this item because most educational systems are operated at the local level, with a separate board that oversees policy, practice, and budgetary matters. While States are expected to continue to forge relationships with educational systems, the Federal Government recognizes the different degrees of leverage that child welfare agencies will have in dealing with these systems.

A tip to keep in mind for item 21 is:

  • Unless this item is rated NA, the chart needs to be completed showing educational needs, and services provided or not provided.

Item 22

Physical Health of the Child

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to address the physical health needs of the child, including dental health needs.

Item 22 addresses the physical health needs of the child. To complete it, we need to determine whether the agency assessed the child’s health care needs both initially and on an ongoing basis, and then addressed those needs appropriately and effectively. As in item 21, unless the item is rated NA, the chart needs to be completed showing needs and services provided or not provided.

Item 23

Mental/Behavioral Health of the Child

Purpose of Assessment: To determine whether, during the Period Under Review, the agency made concerted efforts to address the mental/behavioral health needs of the child.

Item 23 addresses the mental/behavioral health needs of the child. We use it to determine whether the agency assessed the child’s mental/behavioral health needs both initially and on an ongoing basis, and then addressed those needs appropriately and effectively. As in items 21 and 22, unless the item is rated NA, the chart needs to be completed showing needs and services provided or not provided.

Period Under Review

When completing the Onsite Review Instrument (OSRI), it is essential to distinguish between events that took place over the life of the case, and events that took place during the Period Under Review, or PUR. Unless specifically indicated otherwise, all items in the OSRI pertain to the PUR. When it is necessary to look outside that period, the instructions in the OSRI will tell you to do so.

While the start date of the PUR will differ from one review to another, the end date of the PUR will be the Friday of the review week. You will be told the exact dates of the PUR on site, and they will also be provided on your tablet.

Ratings

There are two types of ratings in the OSRI: item ratings and outcome ratings. The item ratings feed into the outcome ratings, so that once reviewers have completed all of an outcome's items, that outcome receives its own distinct rating. Once all seven outcomes and 23 corresponding items in the instrument have received a rating, and rating documentation has been completed, the instrument is considered finished and should be finalized.

Item Ratings

Each item in the OSRI, with the exception of the Face Sheet, receives a rating once its questions are addressed. There are three possible ratings:

  • Strength
  • Area Needing Improvement (ANI)
  • Not Applicable (NA).

During the first round of reviews, reviewers were responsible for deciding which rating each item should receive. Now, with the automated system, item ratings are calculated automatically by the system based on the answers to each item question. If each of the questions under each item is answered appropriately and correctly, the rating that the automated system assigns to the item will also be appropriate and correct. Therefore, it is the responsibility of the reviewer to answer the questions correctly, so that the automated system can assign correct ratings for the items.

Note that if you answer all of the questions appropriately, but don’t agree with the system's automatic rating, there is a mechanism that allows site leaders to override the rating. This is a very rare occurrence, though. In most cases where there is a discrepancy between an item's automatically generated rating and the rating you expect it to have, the discrepancy is due to an incomplete or inaccurate answer on one or more of the item questions. Review your answers carefully for errors!

Outcome Ratings

The automated application automatically calculates each outcome's rating once all of that outcome's items have been rated. There are four possible outcome ratings:

  • Substantially Achieved: The required number of applicable items are rated as strengths.
  • Partially Achieved: Some applicable items are rated as strengths, but the number does not meet the level required for the outcome to be rated as substantially achieved.
  • Not Achieved: None of the applicable items is rated as a strength.
  • Not Applicable: None of the items is applicable.

To rate an outcome as substantially achieved, the following criteria must be met (a sketch of this logic for one outcome appears after the list):

  • Safety Outcome 1 and Safety Outcome 2: All applicable items must be rated as strengths. Items rated not applicable are disregarded.
  • Permanency Outcome 1: Item 7 and the corresponding item (8, 9, or 10) rated for the case must be rated as strengths. If the State is using concurrent planning and the reviewers rated two corresponding items, both must be rated as strengths. No more than one of items 5 and 6 (if applicable) may be rated as an area needing improvement. Items rated not applicable are disregarded.
  • Permanency Outcome 2: No more than one of the applicable items for this outcome is rated as an area needing improvement. Items rated not applicable are disregarded.
  • Well-Being Outcome 1: Item 17 must be rated as a strength. No more than one of the remaining applicable items may be rated as an area needing improvement. Items rated not applicable are disregarded.
  • Well-Being Outcome 2: Item 21 must be rated as a strength.
  • Well-Being Outcome 3: All applicable items must be rated as strengths. Items rated not applicable are disregarded.
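
As an illustration of how these rules translate into the automated calculation, here is a minimal sketch of the logic for one outcome, Well-Being Outcome 1, in Python. The function and rating constants are hypothetical stand-ins, not the Data Management Application's actual code.

    STRENGTH = "Strength"
    ANI = "Area Needing Improvement"
    NA = "Not Applicable"

    def rate_well_being_outcome_1(item_ratings):
        """item_ratings maps item numbers 17-20 to STRENGTH, ANI, or NA."""
        applicable = {n: r for n, r in item_ratings.items() if r != NA}
        if not applicable:
            return "Not Applicable"
        ani_count = sum(1 for r in applicable.values() if r == ANI)
        # Substantially Achieved: item 17 is a Strength and no more than
        # one remaining applicable item is an Area Needing Improvement.
        if applicable.get(17) == STRENGTH and ani_count <= 1:
            return "Substantially Achieved"
        if ani_count < len(applicable):
            return "Partially Achieved"  # at least one applicable Strength
        return "Not Achieved"            # no applicable item is a Strength

    # Example: item 17 is a Strength and only item 19 is an ANI.
    print(rate_well_being_outcome_1(
        {17: STRENGTH, 18: STRENGTH, 19: ANI, 20: STRENGTH}))
    # Substantially Achieved

The other outcomes follow the same pattern, with their own thresholds from the list above substituted in.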

Note that, while item ratings are included in the automated application, outcome ratings are not listed in the normal view. To review outcome ratings, use the Preliminary Case Summary Report.

Rating Documentation

Once an item's questions have been answered and its rating of Strength, Area Needing Improvement, or NA assigned, reviewers must complete the Reason for Rating and Documentation section. This part of the OSRI provides space to write a general statement justifying and clarifying the item's calculated rating. This general statement, referred to as a Main Reason statement, is followed by a number of specific follow-up questions designed to help further clarify the rationale for the assigned rating.

Note that, even if all 23 of the OSRI's items and all seven outcomes have been rated, the instrument is not considered completed and ready for finalization until all of its rating documentation has also been completed. For information on completing rating documentation in the automated instrument, see Module 6.4.3: Item Ratings and Rating Documentation.

Main Reason Statement

When composing your Main Reason statement, you should provide strong and clear justification for the item's rating. This information should be concise and clearly presented, should support the rating, and should not conflict with any of the information you have entered for other items. It's important to remember that the site leaders who conduct quality assurance (QA) on your completed instruments will neither have completed a case record review of the files nor have participated in your case-related interviews. Therefore, they will use the information you provide in your Main Reason statements to determine that each item's questions were answered appropriately and that the ratings are therefore correct.

Before you begin writing your Main Reason statement, though, you should take the time to review the item's follow-up questions. Your goal, wherever possible, should be to address most of the issues raised in the follow-up questions in the Main Reason statement itself. The follow-up questions can also serve as a guide for the order in which you should present information in your Main Reason statement, which in turn can help keep the statement as clear and concise as possible.

You must begin the Main Reason statement of every rated item with the phrase, “Item [number] was rated as [rating] because...” and then complete the sentence with a concise summary of why the item received the rating that it did. This summary sentence is extremely important and will not only help you and your review partner to crystallize why you’re rating the item as you are, but will make your justification information much clearer to the site leader who conducts QA on your instrument.

Following this summary sentence should be your explanatory information. This should be succinct and on-topic, but also thorough. Keep the following in mind as you compose your answer:

  • Avoid Abbreviations. The only abbreviations you should use when completing the instrument are PUR, for “period under review”; ANI for “Area Needing Improvement”; and NA for “Not Applicable.”
  • Do Not Use Names. Names are not to be used anywhere in the body of the instrument or rating documentation. The only exception to this rule is the Face Sheet, which can include names, and certain specific places in the instrument where first names are necessary for distinguishing individuals in the case (such as the chart in item 12).
  • Do Not Cut and Paste. Even though the automated application allows for cutting and pasting across items, you should avoid doing so. Each item in the instrument should be answered separately and stand on its own to ensure that its specific purpose of assessment is being addressed.
  • Explain Concerted Efforts. Several items require that you address whether concerted efforts were made by the agency. In such cases, you must detail those efforts clearly and not just state that “concerted efforts were made.” Likewise, if concerted efforts were not made, you should describe the lack of efforts clearly and note what should have taken place in terms of agency efforts.

In addition to the summary sentence and explanation that justifies the item's calculated rating, reviewers also must note the source or sources of the information they used to address each item. These sources should be listed at the end of the explanation. For example, reviewers who used the case record and an interview with the target child to answer and rate one item might conclude their Main Reason statement with:

Sources: case record, interview with target child

If you believe that an item's rating should have been different from what it is, first double-check your answers to each item question. An inaccurate response to one of the item questions is the most typical reason why the rating that is calculated may differ from what you would expect. If, after reviewing your answers, you still think the rating is incorrect, you should consult with one of the local site leaders. If necessary and appropriate, he or she can conduct a manual override of the rating and change it to what it should be. 

Follow-Up Questions

Every item in the OSRI must include a Main Reason statement with its rating documentation. This Main Reason statement is followed by additional exploratory, or follow-up, questions that are intended to help reviewers more thoroughly explore the rationale behind the item's rating. In most cases, these follow-up questions can and should be answered in the Main Reason statement itself. However, you must still enter text for each exploratory question. Otherwise, the automated application will register that you have omitted an answer and your instrument will be considered incomplete.

In these cases, your answers to follow-up questions can be as simple as “See Main Reason” or “NA,” depending on what is appropriate. Remember that you should only use "NA" if the follow-up question is actually not applicable; if you answered it in the Main Reason statement, that means it did apply to the case. If you say you answered it in the Main Reason statement, though, be sure to double-check the Main Reason and verify that the answer is actually there.
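
Conceptually, the application's completeness check behaves something like the following sketch. The data layout and function name here are assumptions for illustration, not the Data Management Application's actual code.

    def unanswered_followups(followups):
        """Given a list of {"question": ..., "answer": ...} entries,
        return the questions whose answers are still missing or blank."""
        return [f["question"] for f in followups
                if not (f.get("answer") or "").strip()]

    # Example: a "See Main Reason" entry passes; an empty answer does not.
    print(unanswered_followups([
        {"question": "Q1", "answer": "See Main Reason"},
        {"question": "Q2", "answer": ""},
    ]))  # ['Q2']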

Some follow-up questions require that you complete a chart. You must complete these charts on their own; they cannot be answered as part of the Main Reason statement.

Case Record Review

Reviewers are assigned their cases at the review week's Monday morning team meeting. Upon receiving their assignments, each review pair should make plans to begin work on each case as quickly as possible. Each review pair is generally expected to finalize one case per day, which includes taking the case through the quality assurance process. In any event, all cases at the review site must be completed before the Thursday local site exit conference.

Some tips for working through each case are as follows:

  • Confirm that the case is applicable by checking it against the case elimination guidelines provided in the OSRI Quality Assurance Guide. It is critical that you and your partner ensure that each case you review belongs in the onsite review sample before you get too far into the case review. If it appears that the case does not belong in the sample, you should meet with your NRT Local Site Leader to discuss the possibility of eliminating it from the review.
  • Review the case history to determine how the child became involved with the agency.
  • Identify the key dates within the case, being sure to focus on the period under review except where otherwise instructed by the OSRI.
  • Note areas of the instrument in which information is incomplete, missing, or requires corroboration that will need to be collected through case-related interviews.

 

Case-Related Interviews

An important part of the case review process involves review pairs conducting case-related interviews with key individuals who are involved in the case. These interviews are not conducted as "customer satisfaction surveys," but rather serve as an opportunity to confirm case record documentation or collect information that might be missing from the record. 

One of the early lessons many child welfare workers learned in relation to maintaining case files is this one: “If it isn’t written down in the case file, it didn’t happen.” In the CFSR process, though, that motto should actually be: “If it isn’t written down in the case record, it still might have happened.” It becomes the reviewers’ responsibility to ask the right questions of persons important to the case to determine whether or not it really did happen.

Thus, interview information “weighs” just as heavily as information obtained from the case file documentation. Sometimes, information obtained during an interview may conflict with the documentation contained within the case record or obtained from another interview. In these cases, you and your partner have a responsibility to pursue the issue across multiple interviews until you can determine the most accurate response to the relevant questions. The case-related interviews are critical to gathering all the information needed to correctly complete the OSRI.

Key Individuals

The following key individuals related to a case will always be interviewed unless they are unavailable or completely unwilling to participate:

  • The child, assuming he or she is school age.
  • The child's parent(s).
  • The child's foster parent(s), pre-adoptive parent(s), or other caregiver(s), such as a relative caregiver or group home houseparent (if the child is in foster care).
  • The family's caseworker. If the caseworker has left the agency or is no longer available for interview, it may be necessary to schedule interviews with the supervisor who was responsible for the caseworker assigned to the family.
  • Other professionals knowledgeable about the case. When numerous service providers are involved with a child or family, it may be necessary to schedule interviews only with those most recently involved, those most knowledgeable about the family, or those who provide the primary services the family is receiving. More than one service provider may be interviewed.

Conducting the Interview

While there is no set agenda or checklist to use during a case-related interview, there are general tips you and your review partner should follow to ensure that each interview is as productive and informative as possible. These tips can be divided into three categories: pre-interview, interview, and post-interview.

Pre-Interview

How do you get the most out of the interviews you conduct? Here’s what you should be sure to do before the interview even takes place:

1. Complete case record review quickly but thoroughly. Review the case record quickly but carefully before the interviews, noting the areas in which information is incomplete or missing or areas in which the information should be confirmed by another party.

2. Recognize that time may not be on your side. Ideally, you would have an hour and a half to two hours to review the case file before your first interview; however, this isn’t always the case, depending on the availability of participants.

3. Become very familiar with item questions. Review the questions for each item, noting especially those sections of the instrument for which you did not identify sufficient information during the case record review. This is just another reason that it’s essential for you to be completely familiar with the OSRI so that you’ll know—even in a limited amount of time—which questions to ask which participants. Remember that the OSRI does provide some guidance on where you might find information for each item. This includes where in the case file to look for information as well as appropriate interviewees for each item.

4. Prepare interview questions specific to items and interviewees. Prepare a list of questions that are specific to the items you are rating and the role of the person you are interviewing. This will help you get the most out of the responses and more easily complete the instrument using those responses. There may be some questions that you’ll always want to ask certain parties, particularly to confirm or corroborate other information.

For instance, it’s advisable always to ask birth parents about Item 17 (Needs Assessment and Service Provision), Item 18 (Involvement in Case Planning), and Item 20 (Caseworker Visits). You may also want to ask the child and caseworker (and perhaps the foster parent) about Item 14 (Maintenance of the Child’s Connections), because this information is often difficult to find or missing from the case file. You’ll need to give some thought, either before the review or as it begins, to these “core” questions that you’ll want to ask different parties, particularly in terms of corroboration of information. Thinking through and jotting down “core questions” before the review begins will help you conduct your interviews more efficiently, gather all needed information, and assess and justify your item ratings more accurately.

Experienced reviewers often have “standard” questions they ask certain parties, like the birth parent or caseworker, to ensure that all relevant information is gathered. If you haven’t already developed some standard questions to ask specific interviewees, we encourage you to think through the items and come up with your own list of questions ahead of time so that you’ll be certain to cover all the important issues.
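
One way to organize such a list is to key it to interviewee roles. The sketch below (Python, purely illustrative; neither the code nor any pairings beyond those suggested above come from the review tools) restates the pairings mentioned in this module:

    # Illustrative prep sheet: interviewee roles mapped to the OSRI items
    # this module suggests always covering with them.
    CORE_ITEMS_BY_ROLE = {
        "birth parent": [17, 18, 20],  # needs/services, case planning, caseworker visits
        "child": [14],                 # maintenance of connections
        "caseworker": [14],
        "foster parent": [14],
    }

    def items_to_cover(role):
        """Return the item numbers to be sure to address with this interviewee."""
        return CORE_ITEMS_BY_ROLE.get(role, [])

    print(items_to_cover("birth parent"))  # [17, 18, 20]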

Interview

There are also several points you should keep in mind when the interview actually begins:

1. Introduce yourself and the interview process. Let interviewees know approximately how long the interview will take. Most interviews run about 30 to 45 minutes, although the interview with the caseworker will likely take longer. Let the participant know in advance that you will need to take notes while he or she is talking. You should not tape-record any interviews.

2. Provide an overview of the review process. Provide individuals with a brief overview of the purpose of the review process and the interview. Explain that the Federal and State governments are looking at how well the State is helping children and families achieve positive outcomes. Let parents or foster parents know that you are interested in learning about their experiences because it will help to determine how the State can better support children and families.

3. Reassure participants of confidentiality. Emphasize that the comments of particular individuals will not be identified by name in any report. Reinforce participants’ confidence in confidentiality by not revealing the comments of other persons interviewed, particularly those involved with the family. Stressing confidentiality is particularly important when interviewing children, parents, or foster parents. Note, however, that if concerns arise regarding the safety of the child, such concerns become subject to mandatory reporting laws. In addition, situations that you believe put the child at risk, such as individuals living in the home without the agency's knowledge or caregivers allowing a child in foster care to visit a non-custodial parent without the State's knowledge, must be reported to the agency.

4. Explain your neutrality. Another important concept for your interviewees to understand is that you are a neutral reviewer with no ability to affect the case that you are reviewing. This is especially important when you are interviewing birth parents, who may see you as someone who can intervene on their behalf in a case plan or a case’s goals. You’ll need to be very clear that your role is not to specifically help or advocate for them, but to help the State know how to better meet the needs of families in the future. While you should acknowledge complaints raised by interviewees, you should not commit to checking on their situation or to getting back in touch with them.

5. Be flexible in your interview style and approach. As you know, your interviewees may range from a child to a grandparent to a therapist. You’ll need to be flexible in your interview style to accommodate the particular party you’re interviewing. At the same time, remain focused on what you need from each interview so that you obtain critical information while still using your limited time as efficiently as possible.

6. Get caseworker contact information. We advise you to get a phone number for the caseworker during the interview, and to ask if you may call him or her if further information is needed. It’s been the experience of many reviewers that they need to contact the caseworker again after the initial interview to ask for clarification or obtain further information, particularly if the caseworker is one of their earlier interviewees.

Post-Interview

Once the interview has concluded, you and your partner should:

1. Immediately report child safety concerns. If you hear information in an interview or observe something while interviewing that raises concern about risk or the safety of a child, immediately report that concern to your Local Site Leader (unless it is an emergency that requires you to immediately call 911). The Local Site Leader will work with you and the child welfare agency to address the issue. 

Note that you should always strive to ensure that children are not upset by these interviews, and normally, they aren’t. However, in the event that a child appears upset after an interview, be sure to immediately tell a Local Site Leader so that the State can respond to the situation by providing support to the child.

2. Record the interview results. Immediately after the interview, you should record your interview notes more completely in the appropriate sections of the OSRI. Remember that you should not tape-record interviews.

3. Schedule additional interviews as needed. You may discover that additional interviews beyond those scheduled by the Local Site Coordinator are needed in order for you and your review partner to complete a thorough case record review. If this happens, you should immediately consult with your Local Site Leader about the possibility of scheduling a new interview. Depending on where you are in the review week and with your case load, this may or may not be possible.

Finalizing the Instrument

Once reviewers have finished their case record review, interviewed everyone involved with the case, answered all of the questions in the automated instrument, and ensured that every item is rated and documented, they must perform Preliminary Quality Assurance (QA) on the entire instrument. This Preliminary QA is intended to ensure that all of the answers and ratings are accurate and complete.

Once the Preliminary QA is finished and reviewers have verified the completeness of the instrument, they are ready to transfer the record to a Local Site Leader's tablet for First-Level QA. During this step, the Local Site Leader reviews the instrument and adds stickies (electronic comments) to any items that seem problematic. Reviewers must then address these problem items before the record moves on to Second-Level QA.

Once First- and Second-Level QA are complete, the record is ready for the additional QA steps of Local Site Finalization and Data Validation. In addition, the NRT Local Site Leader will collect all of the local site's cases onto a single tablet in order to prepare the preliminary report that will serve as the foundation for the local site exit conference that takes place on Thursday. At some point prior to leaving the local site, he or she will also ensure that all of the site's records have been uploaded to the central server. The cases' outcome ratings will then become a part of both the Friday statewide exit conference and the Final Report that serves as the foundation for the State's Program Improvement Plan.
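
Taken together, the QA steps form a fixed, linear sequence. The sketch below (Python; the stage names come from this module, but the code itself is hypothetical and is not part of the Data Management Application) shows one way to picture a record advancing through that pipeline:

    # Hypothetical sketch of the QA sequence described above; the actual
    # process is managed through the Data Management Application.
    QA_PIPELINE = [
        "Preliminary QA",           # reviewers verify answers and ratings
        "First-Level QA",           # Local Site Leader reviews and adds stickies
        "Second-Level QA",
        "Local Site Finalization",
        "Data Validation",
    ]

    def next_stage(current):
        """Return the stage that follows `current`, or None when QA is complete."""
        i = QA_PIPELINE.index(current)
        return QA_PIPELINE[i + 1] if i + 1 < len(QA_PIPELINE) else None

    print(next_stage("First-Level QA"))  # Second-Level QA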

The Stakeholder Interview Guide

During the review week, Team and Local Site Leaders will interview stakeholders who are representative of the types of organizations and individuals who participated in the development of the State’s Child and Family Services Plan. These include State and local representatives of courts, administrative review bodies, children’s guardians ad litem, and other individuals or bodies assigned responsibility for representing the best interests of children. There will typically be around 15 State interviews per review, each scheduled by the State Team Leader. Local Site Coordinators schedule the local site interviews, of which there will also be, typically, around 15 per site. Each interview generally lasts for an hour or more.

To conduct these stakeholder interviews, the review team will use the Stakeholder Interview Guide (SIG). The SIG is a paper instrument (available for download here), similar to the Onsite Review Instrument, or OSRI, that focuses on the systemic factors. For each systemic factor and outcome, the SIG includes items with core questions related to the stakeholder and follow-up questions designed to explore the core issues further. The overall purpose of the stakeholder interviews is threefold:

  • To answer questions that may have been raised in the Statewide Assessment
  • To obtain information about the systemic factors under review
  • To obtain information about how the systemic factors are functioning and therefore affecting outcomes for children and families

It is important to note that the stakeholder interview process is different from the case review process, but equally important to the overall review. The case reviews focus on case practice and on collecting case-level data to assess the outcomes of safety, permanency, and well-being. The stakeholder interviews, on the other hand, assess the State’s child welfare system that supports the child welfare practice. Together, the OSRI and SIG provide a comprehensive, big-picture view of a State's entire system of care.

Stakeholders

The stakeholders interviewed during the onsite review will include both State and local representatives. While there will be times when stakeholders are interviewed individually, they will often be interviewed as part of a group.

State Stakeholders

State stakeholders are interviewed by the State Review Team Leaders at a specially designated site within the State. Examples of State stakeholders include the following:

  • State child welfare director
  • State child welfare program specialists
  • State court system representatives
  • Major tribal representatives
  • State representatives of administrative review bodies
  • Youth being served by the agency
  • State foster and/or adoptive parent association representatives

Local Stakeholders

Local stakeholders are interviewed by Local Site Leaders at each State's local sites. Examples of local stakeholders include the following:

  • Local child welfare agency administrators
  • Foster and adoptive parents
  • Juvenile court judges
  • Law enforcement representatives
  • Caseworkers from the local agency
  • Supervisors from the local agency
  • Guardians ad litem and/or legal representatives
  • Agency attorneys
  • Local representatives of administrative review bodies
  • Tribal representatives
  • Youth being served by the local agency

Additional Stakeholders

Review teams may interview additional stakeholders at both the State and local levels, as needed. Optional interviewees at the State level may include representatives of the education system, youth service agency, health department, Medicaid program, mental health agency, child welfare advocacy organization, university social work education program, major child welfare initiative or project, or other appropriate stakeholders. Optional interviewees at the local level may include representatives of youth service agencies, major child welfare initiatives or projects, major service providers, mental and physical health agencies, educational institutions (including special education or early intervention coordinators), child and family advocacy organizations, or other appropriate stakeholders.

SIG Content and Structure

The complete Stakeholder Interview Guide consists of 46 items. It picks up where the OSRI leaves off, with item 24, and through item 45 it covers seven systemic factors. Each systemic factor consists of one or more items, each with a core question and multiple follow-up questions to which stakeholders respond during an interview. The 46th item is reserved for State-specific issues. After item 46, the SIG returns to item 1 to begin addressing the outcomes of safety, permanency, and well-being that are explored in the OSRI.

Note that there are slight differences between the paper version of the SIG and the automated SIG that is used on site. The paper instrument begins with specific instructions on page 2. These instructions include a section entitled “How to Use the Questions” on page 3. Because none of these instructions are included in the automated instrument, you should make a point of reading over at least this much of the paper instrument before you arrive for the review week.

Following the instructions in the paper instrument is a chart for recording a stakeholder’s name, date of the interview, and other identifying information. There is also a Supplementary Page to be used when extra space is needed for recording purposes. These pages are also not represented in the automated instrument, because the information you would enter on them is captured when you create a new SIG.

As mentioned above, the SIG begins with item 24. Included with the item is a brief synopsis detailing the item's purpose and a list of stakeholders considered appropriate to the item. The core question and follow-up questions come next, along with a space for explanatory comments. This approach is consistent throughout the rest of the paper instrument. In the automated version, though, the core question, follow-up questions, and explanatory comments are all addressed in the same space.
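
Structurally, then, every SIG item bundles the same pieces: an item number, a synopsis, a list of appropriate stakeholders, a core question, follow-up questions, and space for comments. A minimal sketch of that structure (Python; the field names and sample text are hypothetical and are not drawn from the actual instrument):

    from dataclasses import dataclass, field

    # Hypothetical model of a SIG item's structure as described above.
    @dataclass
    class SIGItem:
        number: int
        synopsis: str            # brief statement of the item's purpose
        respondents: list        # stakeholders considered appropriate to the item
        core_question: str
        follow_ups: list = field(default_factory=list)
        comments: str = ""       # explanatory comments recorded during the interview

    # Placeholder text only; the real questions appear in the instrument itself.
    item_24 = SIGItem(
        number=24,
        synopsis="Statewide Information System",
        respondents=["State child welfare director", "program specialists"],
        core_question="<core question about the statewide information system>",
        follow_ups=["<follow-up question 1>", "<follow-up question 2>"],
    )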

Systemic Factors

While the OSRI focuses on how the agency addresses the outcomes of safety, permanency, and well-being on a case-by-case basis, the SIG focuses on the entire statewide system and explores its effectiveness across seven systemic factors. Those seven systemic factors, which span items 24 through 45, include the Statewide Information System, Case Review System, Quality Assurance System, Staff and Provider Training, Service Array and Resource Development, Agency Responsiveness to the Community, and Foster and Adoptive Parent Licensing, Recruitment, and Retention.

The items that make up each systemic factor are as follows:

Section IV: Statewide Information System

  • Item 24: Statewide Information System

Section V: Case Review System

  • Item 25: Written Case Plan
  • Item 26: Periodic Reviews
  • Item 27: Permanency Hearings
  • Item 28: Termination of Parental Rights
  • Item 29: Notice of Hearings and Reviews to Caregivers

Section VI: Quality Assurance System

  • Item 30: Standards Ensuring Quality Services
  • Item 31: Quality Assurance System

Section VII: Staff and Provider Training

  • Item 32: Initial Staff Training
  • Item 33: Ongoing Staff Training
  • Item 34: Foster and Adoptive Parent Training

Section VIII: Service Array and Resource Development

  • Item 35: Array of Services
  • Item 36: Service Accessibility
  • Item 37: Individualizing Services

Section IX: Agency Responsiveness to the Community

  • Item 38: State Engagement in Consultation with Stakeholders
  • Item 39: Agency Annual Reports Pursuant to CFSP
  • Item 40: Coordination of CFSP Services with Other Federal Programs

Section X: Foster and Adoptive Parent Licensing, Recruitment, and Retention

  • Item 41: Standards for Foster Homes and Institutions
  • Item 42: Standards Applied Equally
  • Item 43: Requirements for Criminal Background Checks
  • Item 44: Diligent Recruitment of Foster and Adoptive Homes
  • Item 45: State Use of Cross-Jurisdictional Resources for Permanent Placements
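
Because interviewers rarely move through these items in numeric order (see Moving Between Items, below), it can help to keep the factor-to-item mapping in one place. The same mapping, restated as a small Python lookup (illustrative only):

    # The seven systemic factors and the SIG items they comprise,
    # restated from the lists above for quick reference.
    SYSTEMIC_FACTORS = {
        "Statewide Information System": [24],
        "Case Review System": [25, 26, 27, 28, 29],
        "Quality Assurance System": [30, 31],
        "Staff and Provider Training": [32, 33, 34],
        "Service Array and Resource Development": [35, 36, 37],
        "Agency Responsiveness to the Community": [38, 39, 40],
        "Foster and Adoptive Parent Licensing, Recruitment, and Retention": [41, 42, 43, 44, 45],
    }

    def factor_for_item(item):
        """Return the systemic factor to which a given SIG item belongs."""
        for factor, items in SYSTEMIC_FACTORS.items():
            if item in items:
                return factor
        return None

    print(factor_for_item(36))  # Service Array and Resource Development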

Core Questions

Each item in the SIG consists of one core question and multiple follow-up questions. Each item's core question represents the central theme that should be addressed for that item during the stakeholder interview.

It is important to remember that, because each individual stakeholder will not be able to answer every core question, the core questions that are used in each interview will vary according to the stakeholder. The list of respondents that is included with each item in the paper instrument identifies those stakeholders for whom the core question is most likely to be appropriate. These are referred to as stakeholder-specific questions.

Over the course of the review week's interviews, the interviewer should strive to ask each core question two times, of two different stakeholders, in order to get more than one perspective. Keep in mind, though, that just because a specific stakeholder isn't listed as a respondent, it does not mean that the core question cannot be used with him or her. A good interviewer will recognize when an individual has knowledge that may go beyond what the instrument recognizes as typical for that particular stakeholder group and will proceed to ask even non-stakeholder-specific core questions as appropriate. Note-takers are then responsible for recording the core question's answer in the automated application.

Note that when you create a SIG in the automated application, the application will load only those core questions relevant to the stakeholder you have identified. It is possible, however, to access other, non-stakeholder-specific core questions if necessary. The best way to simplify this process is to use Advanced Navigation.
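
In effect, the application filters the full item set by the stakeholder's role. A rough sketch of that behavior (Python; hypothetical code with invented respondent lists, not the application's actual logic or the instrument's actual respondent lists):

    # Hypothetical sketch with invented respondent lists; the real lists
    # appear with each item in the paper instrument.
    ITEM_RESPONDENTS = {
        25: ["juvenile court judges", "caseworkers", "agency attorneys"],
        29: ["foster and adoptive parents", "caseworkers"],
        # ... and so on for items 24 through 45
    }

    def stakeholder_specific_items(stakeholder):
        """Items whose core question is stakeholder-specific for this interviewee."""
        return [item for item, roles in ITEM_RESPONDENTS.items() if stakeholder in roles]

    print(stakeholder_specific_items("caseworkers"))  # [25, 29]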

Follow-Up Questions

Each core question is followed by multiple follow-up questions that interviewers may use to more fully explore the various aspects of a stakeholder's response to the core question. The follow-up questions should be seen as a guide rather than a mandate or limit on what reviewers may ask and should be used as appropriate during the interviews. Interviewers may rephrase the follow-up questions or ask related questions in order to explore the item's core question more fully. For example, instead of using the listed follow-ups, an interviewer may ask “why” or “why not” as appropriate, or request that the stakeholder restate or clarify some point.

The responses to these follow-up questions are meant to support each core question response. When recording follow-up questions and answers in the automated application, note-takers should capture the follow-up with the core question so that all of the stakeholder's answers for one item appear together.

State-Specific Questions

Item 46 in the SIG is a "blank" item that is used for State-specific questions. In addition to the instrument's core questions and follow-up questions, the Regional Office Team Leader, in collaboration with the State and the Children’s Bureau, will identify State-specific issues from the Statewide Assessment that need further examination through stakeholder interviews. These State-specific questions will be pre-loaded into the automated application before the review week begins.

In many cases, there will be no State-specific questions, and item 46 will remain as a "blank" item. For this reason, many note-takers use it as a "dumping ground" for notes taken during an interview when they get lost or are otherwise unsure of where to put the material. Following the interview, when they are revising their notes for clarity, they can then cut and paste the content from item 46 to the item where it properly belongs.

Stakeholder Interviews

Stakeholder interviews are conducted by a lead interviewer who moves through a series of stakeholder-specific core questions and follow-up questions that explore a State's systemic factors. The stakeholders interviewed may be State- or local-level stakeholders. There will also be an official note-taker and one or more supporting note-takers who take notes during the interview. The four roles in each stakeholder interview, then, are interviewer, stakeholder, official note-taker, and supporting note-taker.

Before each interview, the interview team of interviewer and note-takers will meet to prepare. Following the interview, the supporting note-takers will revise and clarify their notes and then collaborate with the official note-taker to produce a final set of clean and complete notes. These final notes are then transferred to the central server along with all the other SIG and OSRI records at the end of the review week, and they serve as an integral component of the review's Final Report. Key ideas and themes raised during each day's interviews are also shared at each evening's nightly debriefing.

Interview Roles

Basically, an interview consists of three types of participants:

  • The stakeholder being interviewed. In most cases, this will be a group of people.
  • The interviewer.
  • The note-takers. One will be an "official" note-taker, while the others will be supporting note-takers.

Interviewer

State-level stakeholders are interviewed by the NRT Team Leader, while local-level stakeholders are interviewed by the NRT Local Site Leader. This person will lead the interview, asking core questions and follow-up questions of the stakeholder to fully explore the State's systemic factors. To prepare for the interview, the interviewer must be thoroughly familiar with the SIG content and structure. He or she will prepare for each interview before it takes place by setting parameters and guidelines for note-takers, and will provide an overview of the day's interviews at the nightly debriefing.

Note-Takers

The interview team's note-takers will consist of Federal and State Local Site Leaders. They share responsibility for capturing all of the information provided by the stakeholder during the interview. As with the interviewer, each note-taker is expected to be thoroughly familiar with the SIG content and structure. During the interview, the note-takers' job is to listen attentively to all questions and answers and record everything accurately, without interpretation. They are also expected to revise their notes for clarity as soon as possible after the interview is complete so that they can contribute accurately to the final notes.

Remember that, although there typically will be several note-takers at each interview, there is only one "official" version of each SIG. These official notes must be taken using the automated application, and at the end of the review week they are uploaded to the central server along with all of the site's other records. This upload takes place after the other note-takers at the interview (considered supporting note-takers) have had the opportunity to provide input on the official notes' content.

While supporting note-takers are encouraged to use the automated application to take their notes, it is not a requirement. Supporting note-takers may, if they wish, use a word processing program to type their notes, or may even choose to take hand-written notes. Keep in mind, though, that supporting note-takers who do use the automated application for note-taking must remember to label their notes as "supporting notes" when they add a new interview. This helps eliminate the possibility of their notes becoming confused with the official notes when it is time to upload the SIG to the central server.

Preparing for Stakeholder Interviews

The specific roles of interviewers and note-takers can vary from interview to interview. Sometimes, the interviewer may want note-takers to remain silent during an interview, only interrupting if some specific piece of information needs clarification. Other times, the leader may want note-takers to take a more active role and even ask follow-up questions in addition to clarification questions. The interviewer may take notes during the interview or may concentrate solely on asking questions.

Some interviewers may even develop signals, such as setting down a pen or folding arms over the chest, to indicate to note-takers that an interviewee’s answer has strayed off-topic and does not need to be recorded. Other lead interviewers may want note-takers to capture everything, even if it seems to be off-topic.

Because of these variances, it is critically important for the entire interview team to meet before the interview and clearly outline expectations and responsibilities. At this meeting, it should be determined exactly who is taking notes as well as who is the primary note-taker. Additionally, the interviewer should make clear exactly who is responsible for asking questions, either as follow-ups or clarifications. If he or she wants note-takers to ask questions, that procedure should be established as well. The interviewer should also clearly define any cues or signals he or she will use during the interview to indicate off-topic information that note-takers don’t need to capture and explain the procedure for how the team will come back together after the day's interviews to compile the final record.

Note that this "meeting" may be very informal in nature. It may take place in the lobby of the hotel, before the interview team leaves to conduct the day's interviews. It may also take place in the car on the way over to the interviews. Regardless of how this meeting takes place, it is very important that everybody involved in the interview process has a clear understanding of their specific roles and responsibilities. Additionally, be sure that you have read through and committed to memory the entire SIG (which can be downloaded in Module 9.1: The Instruments). Understanding how the various items relate to the systemic factors is critical for interviewers to ensure that they ask the correct questions and for note-takers to ensure that they take good notes.

Conducting the Interview

Most stakeholder interviews will last for approximately 1 hour. Following the arrival of the stakeholders, the interviewer will spend a few minutes explaining the purpose of the interview, including information about how the review process works, the timeframe that the review is examining, and the overall purpose of the review. The interviewer will attempt to set a comfortable, non-threatening tone for the interview and will facilitate introductions of everyone involved.

The interview itself will start with whatever core question the interviewer has chosen as a starting point. As the stakeholder responds to the question, the note-takers will take notes on what is said. The interviewer will then determine whether the stakeholder's response warrants any follow-up questions, or whether another core question should be asked. While it is possible that the interviewer will proceed through the SIG consecutively, moving from item to item in numeric sequence, it is more likely that he or she will jump around as the conversation shifts focus. In other words, just because Item 29 follows Item 28 does not mean that the interviewer can't jump ahead to Item 36. For this reason, it is very important that the note-takers are completely familiar with the SIG's content and structure and understand how to move between items quickly and efficiently.

At the conclusion of the interview, the interviewer typically will give stakeholders the opportunity to share any other information that they did not have the opportunity to discuss. The interviewer will then thank the stakeholders for their time and end the interview. At this point, the note-takers will either prepare for the next interview or begin the process of revising their notes and compiling the final record. Issues and themes raised during the interview may also be brought up at that evening's nightly debriefing.

Note-Taking During the Interview

Effective note-taking during a stakeholder interview requires attentive listening, good summarizing skills, and a thorough understanding of the SIG's content and structure. The goal of each note-taker is to capture as much of what is shared by the stakeholders as possible, as accurately as possible, with the understanding that the notes will be almost exclusively in the form of summary or paraphrase. Note-takers should not filter information during an interview. In other words, you should not be deciding during the interview what is "important" information and what is "not important." Rather, you should concentrate on capturing all the information you hear and then edit out off-topic content later.

Remember that there will only be one "official" note-taker at each interview. This official note-taker must take his or her notes using the automated application, and after the interview is over these notes will be revised (with the collaboration of the supporting note-takers) to create the final notes that will be uploaded to the central server at the end of the review week. Supporting note-takers should also use the automated application to take their notes, because this streamlines the process and makes it easy to move from item to item as necessary. However, if a supporting note-taker is more comfortable taking hand-written notes or typing notes in some other format during the interview, this is acceptable as long as appropriate steps are taken to make those notes accessible to the official note-taker when the revision process begins.

Remember, though, that while you can cut and paste notes from one item to another within the automated application, you must never cut and paste notes from Word or other outside programs into the application itself. Doing so can cause critical, irrecoverable errors that may result in the loss of all your saved data.

Recording Questions and Responses

Effective note-taking in stakeholder interviews involves capturing the essence of everything that is said. This includes the core questions and follow-up questions asked by the interviewer as well as the responses shared by the stakeholders.

In general, as you’re taking notes, remember that you should listen carefully and not filter out information as being "important" or "not important." Your goal should be to try to capture everything, as summary or paraphrase, as accurately and completely as you can. Use shortcuts, though. Don’t be afraid to abbreviate, even if you’re making up your own abbreviations, and don’t worry about spelling and punctuation. Following the interview, you will take steps to revise and clarify your notes and then work collaboratively with the rest of the interview team to compile the final, official record.

Recording Questions

The first step in good note-taking is to capture each question as it is asked by the interviewer. This is important because, at times, the questions that an interviewer asks may not reflect what is actually printed in the instrument. For example, an interviewer may ask a core question and pair it with a follow-up question, or may string several of the listed follow-up questions together, or even ask a follow-up question that is not included in the SIG itself. Therefore, in order to understand the response, you must capture the question as it is asked, not as it is written.

What this also means is that you, as a note-taker, must know the instrument well enough that you understand which item is being referenced even if the question does not match what you expect to see in the SIG. You must not only recognize the topic of discussion, but you must be able to navigate to that item quickly, capture the question, and then begin taking notes on the response. Otherwise, you may find yourself lost in the interview and miss valuable information.

A good formatting technique to keep in mind for capturing questions is to insert a double space after each question that you record. Then, on a new line, begin to record the response. By separating the question from the answer, you make it easier to skim your notes later and see where the subject breaks take place. This also makes it easier to revise your notes and move material to different items, if necessary.
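
Spelled out, the convention is simply: question, blank line, response. A trivial sketch (Python; illustrative only, with invented sample text):

    def record_pair(question, response):
        """Separate a captured question from its response with a blank line."""
        return question.strip() + "\n\n" + response.strip()

    print(record_pair(
        "Core question, captured as the interviewer actually asked it",
        "Summary or paraphrase of the stakeholder's response",
    ))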

Recording Responses

When it comes to capturing the stakeholder response to a question, the most important thing to remember is that you are creating a summary or paraphrase of what you hear, not a transcript. It won't be possible for you to capture every word, so you should focus instead on listening to what the stakeholder says and then rephrasing that in an abbreviated but accurate way. The goal should be to capture all the main points, especially as they relate to the question that was asked.

If you hear something that seems important to capture as a direct quote, you should indicate that within your notes by using quotation marks. This will serve to separate the exact language from the rest of your summary and paraphrase.

Once a response to a question has ended, you must be prepared to begin capturing the next question. This may be another item's core question, which will require you to navigate to that new item, or it may be a follow-up question for the same item. Either way, use the same double-spacing you used before to separate the new question from its response. This will improve the readability of your notes.

Moving Between Items

Obviously, the ability to move from item to item within the SIG as the discussion evolves is critical for good note-takers. You must know the instrument from beginning to end in order to recognize which item is being addressed and then use the automated application's built-in navigation tools to get to that item.

Sometimes, the interviewer will attempt to make this process easier for the note-takers by announcing when one item's questions are concluded and a new item's questions are about to begin. For example, he or she might state, "We are moving on to Item 27," and then give each note-taker a moment to get there.

Other times, though, the interviewer may not give clear instructions on what item is coming next. There may be little or no pause between items. This might be due to the interviewer forgetting to remind the note-takers of a topic change, or to the stakeholders shifting topics on their own so that the interviewer has to move on to a new line of follow-up questions. Also, depending on how the interview progresses, the interviewer may cover items out of the order in which they appear in the SIG, or may even return to items that he or she already addressed to ask additional follow-up questions. All of this can sometimes leave note-takers lost and unsure which item the interviewer is currently asking about. There are, however, strategies that can minimize the chances of this happening.

The first strategy is obvious: you must know the SIG thoroughly before the interview begins. By knowing the entire SIG, you will vastly increase your chances of keeping up with the interview if the interviewer jumps around in it.

Advanced Navigation, rather than normal navigation, is another tool that can help you. Advanced Navigation lets you move from item to item by simply clicking on boxes instead of trying to use the navigational arrows or directory view, and it can greatly streamline the process of getting around in the instrument.

Finally, remember that the Advanced Navigation’s hover tool can be tremendously helpful. If you hover the mouse over an item’s box, you will see a summary of that item’s topic. Use the hover tool continuously during the interview to preview the other items in the instrument. If you hear a question but are unsure into which item it fits, hover the mouse over the item boxes to find the best match.

Getting Lost

Despite your best efforts, at times you may find yourself lost in an interview. The discussion may have shifted topics quickly, or the interviewer may have forgotten to identify the current item, or you may have fallen behind while summarizing a lengthy stakeholder response. Regardless of the reason, you must be prepared to deal with getting lost when and if it occurs.

The most important thing in this situation is that you continue to take notes. Do not simply give up and stop recording what you hear. Many note-takers use item 46, which is for State-Specific Questions, to "dump" content when they're not exactly sure to which item it belongs. Later, when you are working on revising and clarifying your notes, you can cut and paste this "dumped" content into the appropriate item.

If you do not have time to get to item 46, simply continue to take notes in whatever item you are currently viewing. Separate the content that you know is off-topic from the rest of the content by creating a line of asterisks or some other marker to show that what follows is off-item and needs to be moved at a later point. As soon as you've regained your bearings and know which item the interviewer is addressing, navigate to that item and continue taking notes normally.

Remember, too, that note-takers may ask for clarification about which item is being discussed. Be sure, though, that you have clarified with the team leader before the interview begins how he or she would like you to ask clarification questions.

Working With Groups

In many cases, it will be a group of stakeholders who are involved in a stakeholder interview rather than a single stakeholder. Situations like this still require that note-takers record questions and responses, but they become more complicated because there may be multiple responses, sometimes contradictory, that you will need to paraphrase and summarize for each question.

There are three key points to keep in mind when taking notes during a group interview:

Create a key. It’s important to have a quick and easy method of distinguishing different speakers. While you won't be identifying anybody by name, and you won't always need to attribute specific comments to the individuals who said them, you will at times need to show that specific comments were made by different people. For example, if the interviewer asks each stakeholder for an example to illustrate a previous point, you will want to capture each of those examples separately. You may find it helpful at the beginning of the interview to create a quick key or other guide, such as a grid, to help you quickly distinguish one stakeholder from another (see the sketch following this list).

Observe the focus group’s dynamics. Think about how the various stakeholders relate to one another and take special note of places where there seem to be disagreements. If one person seems to be dominating the discussion, that’s worth capturing in your notes. Record your observations with your normal summary and paraphrase, but distinguish these notes from the rest by setting them off in parentheses or brackets.

Capture polling questions. These are questions that the interviewer uses to survey the entire group for a quick reply—for example, “Raise your hand if you agree with X.” As with any other follow-up question, you must capture the question as it’s asked, but you must also capture each individual’s reply.
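
To illustrate the first point above, a speaker key can be as simple as a short code for each participant, jotted down in the interview's opening minutes. A minimal sketch (Python; the labels and descriptors are invented for illustration):

    # Invented example of a speaker key for a group interview.
    # No names: just a code plus enough description to tell speakers apart.
    SPEAKER_KEY = {
        "S1": "juvenile court judge",
        "S2": "agency attorney",
        "S3": "foster parent (left side of table)",
    }

    def tag(speaker, summary):
        """Prefix a summarized comment with its speaker code."""
        return "[" + speaker + "] " + summary

    print(tag("S2", "disagrees with S1 about timeliness of hearings"))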

Completing Stakeholder Interviews

As soon as possible after the interview, each note-taker should spend some time revising his or her notes for clarity in order to ensure that they are as clean, complete, and accurate as possible. Following this revision, the note-takers work together collaboratively to compile the final notes. These final notes, which are framed around the official note-taker's version, are what get uploaded to the central server at the end of the review week as the official record of the interview.

Revising for Clarity

As soon as possible after the interview, you should sit down with your notes and spend some time reviewing them for clarity. This “first pass” through your notes, which should happen before you begin to work with the other note-takers to compile the final SIG record, is designed to help you better understand the main points raised by the stakeholder and ensure that any mistakes or omissions that might have occurred during the interview are corrected before you forget them. For this reason, you should try to revise your notes for clarity as soon as possible after each interview. If you wait until the end of the day, you are far more likely to forget the details of each individual interview.

If you are the official note-taker, there are a number of things that you should specifically consider while revising your notes:

Spell out abbreviations other than PUR (Period Under Review). This is particularly important for the official note-taker, since the official record that is uploaded to the central server cannot include any abbreviations other than PUR. But even if you are a supporting note-taker, you should take the time to ensure that any shorthand or spur-of-the-moment abbreviations you used to keep up with the interview are clarified, at least to the point that you will remember what they mean after some time has passed. A sketch illustrating this step appears after this list.

Insert spacing in your notes to separate questions from responses. When you are recording questions and responses during the interview, you should use spacing to separate each question from its response. This spacing helps make your notes easier to read and simplifies the process of moving content from one item to another (see below). If you missed some spacing during the actual interview, take the time to insert it as you review your notes for clarity.

Correct punctuation and other mechanical errors as necessary. As with spelling out abbreviations, this is especially important for the official note-taker. The final record should be as clean as possible when it is uploaded to the central server, and that includes correcting obvious spelling, punctuation, and grammatical mistakes. But even supporting note-takers will benefit from taking the time to correct glaring errors in their notes. Since these types of errors can cause confusion down the line, it is a good idea to correct the worst of them as soon as possible after the interview.

Cut and paste misplaced content into the correct item. Especially if a note-taker becomes lost during an interview, some content will likely be captured under the wrong item. Following the interview, this misplaced content must be moved into the correct part of the instrument. The automated application provides simple cut-and-paste functionality between items: select the text you wish to move, right-click the mouse, and select "cut." Then, navigate to the correct item, right-click, and select "paste" to drop the content there. You can cut and paste as many times as you need to, from any item to another, but never cut and paste from an outside application (such as Word or Notepad) into the application.
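
As an illustration of the abbreviation step, the sketch below expands a reviewer's personal shorthand while leaving PUR alone. It is purely hypothetical; the abbreviations are invented, and the application performs no such expansion for you:

    import re

    # Invented shorthand; PUR is deliberately not expanded, per the rule above.
    ABBREVIATIONS = {
        "cw": "caseworker",
        "fp": "foster parent",
    }

    def expand(notes):
        """Spell out personal shorthand, leaving PUR as-is."""
        for short, full in ABBREVIATIONS.items():
            notes = re.sub(r"\b" + short + r"\b", full, notes)
        return notes

    print(expand("cw confirmed fp attended the last PUR review"))
    # caseworker confirmed foster parent attended the last PUR review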

Compiling Final Notes

Once you have finished note-taking during the interview and had the opportunity to revise your notes for clarity, it will be time to work with the other supporting note-takers and the official note-taker to compile the final version of the official SIG that is uploaded to the central server at the end of the review week. Since it is likely that each note-taker involved in the interview ended up with slightly different notes, this final collaborative step ensures that the final notes include all the relevant information that was shared by the stakeholder.

Remember that there can be only one version of the final, official SIG. Therefore, it is very important that any changes to the final record that take place during this process happen only to the official note-taker's notes. All of the supporting notes should have been clearly labeled as such when they were first created and should be kept separate from the final notes.

How this collaboration actually takes place will vary from site to site. In some cases, the official note-taker may print out a copy of his or her notes for each of the supporting note-takers and have them edit the paper copy as necessary. The official note-taker would then use these edited copies to revise the official notes and meet with the interview team to review the revised record. Other times, the official note-taker may collect printed notes from each of the supporting note-takers and revise the official notes him- or herself. Whatever process is used, the end result will be the same: one final version of the official notes will be created, agreed upon as complete and accurate, and uploaded at the end of the review week as the final account of that stakeholder interview.

The Automated Application

The automated CFSR Data Management Application, commonly referred to as "the application," was developed by JBS to streamline the review process. It has numerous benefits over the paper versions of the instrument that were used for the reviews during the first round. It significantly reduces the amount of work required to complete the OSRI by filling in duplicate information across items wherever possible and by locking out items that are Not Applicable to the current case so that reviewers do not need to complete them. It also uses a built-in logic system to automatically determine item and outcome ratings.

Another key benefit of the application is in how it helps automate portions of the Data Integrity and Quality Assurance process. The system permits a simple electronic transfer of cases from reviewer to Site Leader and enables Site Leaders to add electronic comments ("stickies") easily to problem areas in a case. Reviewers can then read these comments and adjust their answers as necessary to complete their case review.
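
Conceptually, each sticky ties a comment to a specific item and must be addressed before the record can advance. A rough model (Python; hypothetical field names, not the application's internal design):

    from dataclasses import dataclass

    # Hypothetical model of a sticky: a Site Leader's comment tied to one item.
    @dataclass
    class Sticky:
        item: int
        comment: str
        resolved: bool = False

    def ready_for_second_level_qa(stickies):
        """A record advances only once every sticky has been addressed."""
        return all(s.resolved for s in stickies)

    stickies = [Sticky(17, "Clarify the basis for this item's rating")]
    print(ready_for_second_level_qa(stickies))  # False until the reviewer responds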

Finally, the application offers a wide variety of built-in report functions that enable reviewers and Site Leaders alike to easily access specific data from single cases or across an entire review site.

The application is hosted on a tablet PC, which you will use as a laptop computer while on site. The tablet PC is a lightweight, easily transportable system that will store all of your casework and allow you to seamlessly network with Site Leaders and, if necessary, the central server and CFSR Data Repository. For more information on using the tablet PC, including instructions on how to power it on and pass through its security systems, click Getting Started.

Getting Started

To begin using the tablet PC, locate the sliding latch on the right side of the screen’s front edge. When you slide this latch to the right, you will be able to lift the screen and open the tablet PC. You will notice a red nub located in the center of the keyboard, directly above the 'B' key. This is the tablet PC's built-in mouse, which you can control with your index finger. The two red-lined tabs below the keyboard are the left- and right-click buttons.

Some people are uncomfortable using the red nub as their mouse. You will be provided a mini-USB mouse on site, which you can use by inserting the USB plug into one of the two USB ports on the tablet PC. There is one USB port on either side of the computer; it does not matter which one you plug the mouse into. The mouse is plug-and-play, which means that it will work automatically once you plug it in.

To turn on the tablet PC, you can use either of its two power buttons. Each power button is oval-shaped, with a white dot in the center. One is located directly above the keyboard, in the center of the panel. The other is located directly beneath the screen, on the left-hand side.

Once the tablet PC has powered on, you will be at the Network Logon screen. This is the first level of system security. Click System Security for more information.

System Security

To ensure the confidentiality of sensitive case information, each tablet PC has three levels of security built into it. You must successfully pass each level in order to access the Application. The levels of security are: Encryption Login, Windows Login, and the USB Key.

Encryption Login

The Encryption Login screen is the first screen that will load after you turn on the tablet PC. The screen is green. At the prompt, you must first hit the Enter key and then type the encryption password. You will be provided this password on site; it will not be the same as the password used at your training.

Note that you must hit Enter before you type the password. If you try to type the password before hitting the Enter key, your login attempt will fail. After three failed attempts, the tablet PC will lock. At this point, you will need to turn off the tablet PC, wait for it to power down, and then turn it on again to re-try the encryption login.
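
The lockout behavior amounts to a simple three-strikes rule. A sketch of the logic as described (Python; purely illustrative, not the tablet's actual security code):

    MAX_ATTEMPTS = 3

    def encryption_login(attempts):
        """Simulate the lockout rule: three failed attempts lock the tablet."""
        for count, succeeded in enumerate(attempts, start=1):
            if succeeded:
                return "logged in"
            if count == MAX_ATTEMPTS:
                return "locked: power the tablet off, then on, and try again"
        return "still at the encryption prompt"

    print(encryption_login([False, False, False]))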

Also note that, once you have logged onto Windows (see below), a pop-up window will report any failed attempts to log in at the encryption screen.

Windows Login

The second level of security built into each tablet PC is the Windows login screen. You will see two login options here: "Administrator" and "CFSR Reviewer." Always click the second option, "CFSR Reviewer," regardless of your actual role on site. The "Administrator" option is only for system administrators who provide technical support.

After you click CFSR Reviewer, a textbox will open. Type the Windows login password you were provided onsite. Note that this password, like the Encryption password, will be different from what you used during your training.

Once you have logged in at this screen, the Windows desktop will display.

USB Key

The final level of security built into each tablet PC is a USB key that you will be provided on site. This USB key must be plugged into one of the tablet PC's two USB ports in order for the Application to launch. Once your desktop has displayed, plug in the USB key and wait for the computer to recognize it. This may take up to a minute. If a window opens displaying the contents of the USB key, you may close it. You do not need to access any of the files on the USB key.

Once the USB key has been inserted and recognized, you are ready to launch the Application.

Launching the Application

Once the Windows desktop loads and you have inserted your USB key into the tablet PC, you can launch the Application. To launch the Application, locate and double-click the CFSR Data Management Application icon in the upper right-hand corner of the screen.

The Data Management System (DMS) Welcome Screen will load. Click the large, orange Enter button to bypass this screen. The Application will open to its Overview Screen.

Exiting the Application and Shutting Down the Computer

Before you exit the Application, be sure that you have saved and backed up all of your recent work.

Exit the Application by using the Exit button, which is located in the upper right-hand corner of the screen. Note that you should always close the Application with the Exit button, not with the red 'X' in the topmost corner. Also, you should always exit the Application before you shut down the computer.

To shut down the computer, click the Start button in the bottom left-hand corner to open the Start menu. Click the Turn Off Computer button. The computer will shut down automatically through a process that may take up to 1 minute.

The Overview Screen

The Overview Screen is the first screen to display when you launch the application.

At the top of the Overview Screen is the blue Title Bar, which displays on every screen. The Title Bar displays the application name (CFSR Data Management Application). It also displays the name of the person or review team whose USB key started the application and the Period Under Review (PUR).

The three buttons on the right-hand side of the Title Bar allow you to minimize, reduce, or close the display window. Note that closing the display window also closes the application.

The remainder of the Overview Screen contains the Menu Bar, the Record Summary Grid, and, at the bottom, the Navigator Bars, which include the Unanswered Questions Navigator and the Unresolved Comment Navigator.

Menu Bar

The light-blue Menu Bar displays immediately below the Title Bar. Like the Title Bar, it displays no matter where you are within the application.

There are seven options available on the Menu Bar, most of them drop-down menus. These options are: Overview, OSRI, SIG, Reports, Data Management, Admin, and Exit. Note that the SIG and Admin menus are not available to Reviewers and will appear grayed-out on the screen.

Overview

Clicking Overview will return you immediately to the Overview Screen from anywhere in the application. Be sure to save any work before clicking here.

OSRI

The OSRI Menu offers five options concerning OSRI records: Edit Cases, Add New Case, Delete Existing Case, Show Comment Navigator, and Show Unanswered Questions Navigator.

SIG

The SIG Menu offers five options: Edit Interviews, Add New Interview, Delete Existing Interview, Advanced Navigation Mode, and Show Unanswered Questions Navigator. Note that this menu is not available to Reviewers.

Reports

The Reports Menu offers access to the wide variety of reports that the application can generate. It features numerous submenus from which you can select multiple report formats. You can also print and save reports as HTML files from this menu.

See Module 6.9: Reports for more information about using the Reports Menu.

Data Management

In addition to providing a variety of options for backing up, transferring, and restoring records on the tablet, the Data Management Menu also provides information on the current version of the application.

See Module 6.6.1: Data Management Menu for more information about using the Data Management Menu.

Admin

The Admin Menu is not available to Reviewers. It offers functions for site leaders and system administrators to manage the database and USB keys.

See Module 6.6.2: Admin Menu for more information about using the Admin Menu.

Exit

Clicking Exit will exit the application and return you to the desktop. You will be asked to confirm this selection. Be sure to save all work before you exit the system.

Record Summary Grid

Most of the Overview Screen is reserved for the Record Summary Grid, which summarizes all of the OSRI and SIG records currently stored on the tablet. The currently selected case will be highlighted in blue.

Each record summary is divided into seven columns, which are labeled in the gray bar at the top of the grid. When the Record Summary Grid displays more than one record, you can sort them by clicking this gray bar. The seven columns of the Record Summary Grid are: ReviewSite, ReviewTeam, Instrument, Record, Completeness, QA Status, and Action.

Information on the first four columns is detailed here. For information on the Completeness, QA Status, and Action columns, click the links below.

ReviewSite

This column lists the Review Site to which the record is assigned.

ReviewTeam

This column lists the Review Team (for OSRI records) or Interviewer (for SIG records) responsible for the record.

Instrument

This column identifies whether the record is an OSRI or SIG.

Record

This column displays the case name. For OSRI records, bracketed information indicates whether the record is a Foster Care (FC) or In-Home Services (IH) case.

Completeness Column

This column shows how much of each record has been completed. For OSRI records, this summary involves three sections: Facesheet, OSRI, and Rating Documentation. Each section is summarized with both a ratio (12 out of 12, for example) and a percentage (100 percent) that refers to how many questions in that section have been answered. Also listed is a ratio showing how many of the record’s 23 items have a rating.

For SIG records, the Completeness column shows only the total percentage of questions that have been answered.

Note that the Rating Documentation number only reflects the number of Main Reason questions that have been answered and does not include follow-up questions.
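
The ratio-and-percentage display is straightforward arithmetic. A hedged sketch of the calculation as described (Python; not the application's code):

    def completeness(answered, total):
        """Express a section's progress as the grid does: ratio plus percentage."""
        return "{0} out of {1} ({2:.0%})".format(answered, total, answered / total)

    print(completeness(12, 12))  # 12 out of 12 (100%)
    print(completeness(19, 23))  # 19 out of 23 (83%), e.g., items rated so far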

QA Status Column

The QA Status column shows the current status of the record. There are seven possibilities, accessible here through a drop-down menu. The first six (Record Created, Ready for QA Review, QA Review Complete, Ready for Debriefing, Ready for State QA Review, and State QA Review Complete) are for OSRI records. The seventh, Interview Record Complete, is used for SIG records.

You can change a record’s QA Status by clicking the down arrow and selecting a new QA Status from the menu. A pop-up window will display confirming that the QA Status change was successful, and the record’s new QA Status will display in the column. Changing a record’s QA Status is an integral part of the overall Quality Assurance process.

Once data transfers begin for a record, if you float the cursor over that record’s QA Status, a pop-up window will open to indicate that record’s transfer history. This is a quick way of viewing how many data transfers that record has undergone.
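
One way to picture the QA Status values is as a small controlled vocabulary split by instrument type. The Python sketch below is illustrative only; the valid_status helper is hypothetical, and the SIG list reflects the fact, described later in this module, that a SIG record begins at Record Created and changes once, to Interview Record Complete:

    # Illustrative model of the QA Status values and the record types they
    # apply to; this mirrors the description above, not actual app code.
    OSRI_STATUSES = [
        "Record Created",
        "Ready for QA Review",
        "QA Review Complete",
        "Ready for Debriefing",
        "Ready for State QA Review",
        "State QA Review Complete",
    ]
    SIG_STATUSES = ["Record Created", "Interview Record Complete"]

    def valid_status(instrument, status):
        """Check that a status chosen from the drop-down fits the record."""
        allowed = OSRI_STATUSES if instrument == "OSRI" else SIG_STATUSES
        return status in allowed

    print(valid_status("OSRI", "Ready for QA Review"))  # -> True
    print(valid_status("SIG", "Ready for Debriefing"))  # -> False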

Action Column

The Action column contains an Edit button, which opens that OSRI or SIG record for editing. OSRI records will open to their Face Sheet, while SIG records will open to their first Core Question.

For Local Site Leaders who have downloaded OSRI records to their tablets to perform a Quality Assurance Review, the Action column’s Edit button will be replaced by a QA Review button. Clicking this button will open that record’s Completed Case Report, which the Local Site Leader will then use to review and comment on the case.

Note that floating the cursor in this column will open a pop-up window showing the record’s Transfer History, just as it does in the QA Status column.

Inputting Data

The automated OSRI and SIG each include a variety of different question formats. These different formats require not only different types of information, but different methods for inputting the information as well.

The following types of question formats are incorporated into the application:

  • date questions
  • text questions
  • number questions
  • select one/select any questions
  • chart questions

Date Questions

Date questions require the input of specific dates in a normal date format (month/day/year). To answer a date question, click on the first (left-most) spot in the blank datefield displayed in the white textbox. Type the date using a month/day/year format.

To make corrections to a date question, position the cursor in the textbox and use the delete or backspace key to erase your errors. Type any corrections normally.

Note that some date questions, particularly those embedded within a grid question, employ a calendar tool that allows you to select a date without typing.
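
For reference, the month/day/year format that date questions expect corresponds to the pattern in this short, purely illustrative Python sketch:

    # Illustrative check of the month/day/year format for date questions.
    from datetime import datetime

    def parse_entered_date(text):
        """Parse a date typed in month/day/year format, e.g. 03/15/2007."""
        return datetime.strptime(text, "%m/%d/%Y").date()

    print(parse_entered_date("03/15/2007"))  # -> 2007-03-15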

Text Questions

Text questions require the input of blocks of text, typically full sentences or paragraphs. To enter text, position the cursor in the white textbox and type normally. Tabs and hard returns are accepted by the application, but no other type of automated formatting (for example, bullets or font changes) is accepted.

To delete or replace text, position the cursor at the appropriate spot in the white textbox and use the delete and/or backspace keys. Type any revisions normally.

As with a normal word processor, you may select blocks of text for copying, cutting, pasting, or deleting. Select text by holding down the left mouse button and dragging over the portion of text that you need. Then, release the left mouse button and right click. A pop-up menu of options (including cut, copy, paste, and delete) will display.

Number Questions

Number questions require the input of a single number. To answer them, position the cursor in the white textbox and type the number normally. Use the backspace or delete key to make corrections.

Select One/Select Any Questions

Select One and Select Any questions require you to select single or multiple items from a list of options. Simply click on the appropriate choice or choices to make your selections. To make corrections, click the Clear All Choices button. Your current selections will be removed so that you can start over.

Chart Questions

Chart questions require the input of information to a multi-cell chart. You can adjust the width of individual cells in the chart by clicking the cell’s border line in the title bar and dragging sideways. You can also adjust the height of an individual cell by clicking on the left border’s side bar and dragging up or down.

Note that expanding cells may result in the chart becoming too large to fit within the window. In this case, scroll bars will appear at the bottom or right-hand side that enable you to scroll from side to side or up and down as necessary.

Charts also feature arrow controls that allow you to scroll up, down, left, or right. Clicking on an arrow will scroll the chart in that direction.

Note that every time you click the left-hand margin of the bottom row in the chart (which displays with an asterisk beside it), a new row will be automatically created below it.

To delete a row from a chart, click one of its cells, then click the Delete Highlighted Row button. You will be prompted to confirm your selection; when you do, the row will disappear along with all of the information entered in it.

You can also clear only the cell with which you are currently working by using the Clear Active Cell button. Clicking this button will delete all information in the currently active cell, regardless of the question type (see below), and allow you to start over in your answer.

Types of Chart Questions

There are four types of chart questions: text, menu selection, yes/no, and date. To answer any of these types, click once in the question's cell. You can move across a row from cell to cell by using the Tab key. The Enter key moves you down to the next row of cells. You can also select any cell in the chart at any time by clicking in it.

Note that the Clear Active Cell button is the only way to delete answers to menu selection, yes/no, and date questions in a chart.

Text, Menu Selection, and Yes/No Questions

Text Questions

When you click in a cell that requires a text answer, the cell will change to a white textbox with a cursor. Type your answer normally, using the backspace or delete key to correct errors.

Menu Selection Questions

Cells requiring selection from a drop-down menu will have a down arrow appearing inside them. Click this arrow to open the menu, then click on the appropriate choice.

Yes/No Questions

For yes/no questions, use the mouse to check the box for yes. Leave the box blank for no.

Chart Date Questions

When you click in a cell requiring a date answer, a drop-down menu showing the current date will appear in the chart cell. Simply click on the month, day, or year to highlight it, then type the correct date over it. Note that you must click on the month, day, and year individually in order to type them. You cannot simply type the date as one entry or use the tab key to move from field to field.

Calendar Controls

You can also change the date using only the mouse. To do so, click the chart cell’s down arrow. A pop-up calendar showing the currently selected date will appear. If the month and year are correct, select a new day by clicking it on the calendar.

If you need to change the month or year, you have several options. The left and right arrows at the top of the calendar will move you back (left) or forward (right) one month. You can also click on the month’s name to reveal a pop-up menu to choose from all 12 months.

You can change the year by using the left or right arrows to move backward or forward. However, an easier way to change the year is to simply click on the year displayed at the top of the calendar. Doing so will reveal a spinner control (an up and down arrow) that allows you to move the year forward (up) or backward (down).

Once you have selected the month and year, click on the appropriate day to close the calendar control. 

The Automated Onsite Review Instrument

The Automated Onsite Review Instrument is the electronic version of the OSRI. It has the same overall structure as the paper instrument but differs in a few specific areas. It does not, for example, re-create the "General Instructions" page that begins the paper instrument, and in a few instances the automated application sub-divides item questions differently than the paper instrument does. A few of the charts, most notably Chart F on the Face Sheet, are also formatted differently in the automated instrument than in the paper version.

By and large, though, the structure of the automated instrument exactly follows that of the paper instrument. This includes listing each item's Purpose of Assessment and question instructions as well as its Rating Documentation and follow-up questions.

One notable difference between the automated instrument and its paper version is how Outcome Ratings are handled. In the paper instrument, each Outcome Rating has a separate page that must be completed manually by the reviewer. In the automated instrument, Outcome Ratings are generated automatically once the Outcome's various items have all been rated, and there is no separate Outcome Rating page. To view an Outcome Rating, you must use the Preliminary Case Summary Report.

Working With Cases

There are three ways for review pairs to work with OSRI cases while onsite: they can add a new case, edit existing cases, or delete existing cases. All three of these options are accessed through the OSRI Menu in the Menu Bar.

Add New Case

To create a new OSRI case, select Add New Case from the OSRI Menu. The Add New Case window will appear. Next, complete the following steps:

1) Enter the case's name in the Case Name area. For foster care cases, the case name will be the full name of the target child. For in-home services cases, the case name should be the family surname.

See Module 6.3: Inputting Data for more information on how to enter text.

2) In the Case Type box, select whether the case is foster care or in-home services by clicking the appropriate choice.

3) Click the arrow beside the Date Reviewed box to display the pop-up calendar control. The current date will be highlighted in red. Select a review date for the case by clicking the appropriate date.

4) To finish, click the Save and Start button. You may also click Cancel to close the Add New Case window without saving.

Once you click the Save and Start button, the Face Sheet for the new case will open. To exit the Face Sheet, click Overview to return to the Overview Screen. The new case will be displayed in the Record Summary Grid with a QA Status of Record Created.

Delete Existing Case

It is possible to delete a case from the database. However, deletions are permanent and cannot be undone. You should always back up the database before you delete any case or cases from it.

To delete cases already saved to the system, use the following steps:

1) Select Delete Existing Case from the OSRI Menu to open the Deletion Screen.

2) On the Deletion Screen, click the arrow beside the Selected Record Menu to open a list of cases available for deletion. Click the case you wish to delete, then click the Delete Selected Record button.

3) A Confirm Delete window will open asking if you wish to back up the database before deleting. If you click Yes, you will be asked to select a destination for the backup file. If you click No, the deletion will continue.

4) A Delete Record Final Warning window will open reminding you that record deletions are permanent and irreversible. Click OK to proceed with the deletion or Cancel to change your mind.

5) Once you click OK, a new window will open confirming that the record has been deleted. Click OK to continue.

6) Click Overview to return to the Overview Screen. The case will no longer be displayed in the Record Summary Grid.

Remember that you cannot undo a deletion. Once you click OK, that case is wiped from the system and can be restored only by restoring a previous version of the database.

Edit Cases

There are two ways to open a case for editing:

1) Click the Edit button in the Action column of the Record Summary Grid. This will open that case’s Face Sheet.

2) From any screen, select Edit Cases from the OSRI Menu. This will open the Face Sheet for the first case listed in the Record Summary Grid.

Once you have opened a case for editing, you can switch to another case without returning to the Overview Screen by using the Selected Record Menu, which is located near the top of the screen. Click the down arrow in the field labeled Selected Record to open this menu to display a list of all your current cases. Select a case from this list to open its Face Sheet.

If, after creating a case, you need to edit its name, case type, or date reviewed, click the Edit button located beside the Selected Record Menu. This will reopen the Add New Case window for that case, from which you can enter changes. Click the Save and Start button to close the window and return to that case’s Face Sheet. Your changes will be saved to the system.

OSRI Screen Layout

All screens in the OSRI use the same basic layout. At the very top of each OSRI screen is the Title Bar. Below it is the Menu Bar. Below the Menu Bar is a display window that shows the current reviewer or Review Team. To the right of it is the Selected Record Field, which displays the name of the currently selected case. The drop-down arrow opens a list of other cases that you can edit. To the right of the Selected Record Field is the Edit button.

Below the Selected Record field and Edit button appear the green Document Navigation Bars. If you are viewing the Face Sheet, there will only be one bar labeled Face Sheet. Everywhere else in the OSRI, three bars will display: one for the current section, one for the current outcome, and one for the current performance item. You can use the Document Navigation Bars to navigate through the instrument using either Arrow Navigation or Directory Navigation.

Below the Document Navigation Bars is the item’s Purpose of Assessment. Below that is the Save Bar and the Question Area. At the bottom of the OSRI screen are two Navigator Bars: the Unanswered Questions Navigator and the Unresolved Comment Navigator. Both can be turned on and off through the OSRI Menu and allow you to quickly access unfinished sections of the OSRI.

Save Bar

Below the Purpose of Assessment is the Save Bar, which displays buttons for saving data as well as the item’s Calculated Rating. There are generally three save options available: Save>Rating Documentation, Save>Next Item, and Save.

Save>Rating Documentation: This saves your answers and opens the item’s Rating Documentation Screen. Note that on the Face Sheet, this button does not appear because the Face Sheet is an unrated item.

Save>Next Item: This saves your answers and moves you forward one screen to the next item in the OSRI.

Save: This saves all of your answers. The application remains on the current screen.

Note that none of your answers to OSRI questions are saved to the system until you use one of the three save options available on the Save Bar. If you attempt to exit the current screen without saving, the application will prompt you with a pop-up window to first save your work.

Question Area

Below the Save Bar is the main Question Area. Because each item in the OSRI has more than one question, there are multiple Question Areas for each item. To scroll through the various questions, use the scroll bar located on the right-hand side.

Each question is identified with a designation (normally a letter or number) that corresponds to a designation used in the Questions Overview Bar (see below). To the right of the question’s designation is a colored oval that shows that question’s status. A question’s status may be Saved, Unsaved, Locked, or Blank.

Saved questions (light blue) have been answered and saved to the system using one of the three Save buttons.

Unsaved questions (yellow) have been answered or edited but are not saved.

Locked questions (dark gray) have been locked by the system and cannot be edited. These are usually questions that, due to answers already input, are either no longer applicable to the case or were automatically answered by the application.

Blank questions (tan) have not been answered.

The color used with each question’s status also corresponds to the color that appears in the Questions Overview Bar.

Below the question’s designation and status is the question itself and space in which to answer it. See Module 6.3: Inputting Data for more information on answering questions. At the bottom of each question are any special instructions/definitions relevant to it. You should read these carefully before answering the question.

Questions Overview Bar

To the left of the Questions Area is the Questions Overview Bar, which is a vertical bar made up of boxes that correspond to each of the item’s questions. Each box’s color will match the color of that question’s status. Click inside a box to jump to its matching question. If you float the cursor over a box, a pop-up window will appear showing its question’s designation, status, and answer. Note, however, that the answer will not appear until you have saved your work.

Item Ratings and Rating Documentation

Each OSRI item is rated automatically by the application once all of its questions are answered, and its Calculated Rating is shown on the Save Bar. The exception is the Face Sheet, which is unrated. There are four possible ratings for each item:

  • Incomplete (questions remain unanswered)
  • NA (Not Applicable)
  • Strength
  • ANI (Area Needing Improvement)

If an item has been overridden, a double asterisk (**) will appear beside the rating. If you float the cursor over the rating, you will be able to view the override.
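
The display rule for overrides can be made concrete with a short Python sketch; the display_rating helper is hypothetical, not the application's code:

    # Illustrative rendering of an item's Calculated Rating, with the
    # double asterisk (**) marker for overridden ratings.
    RATINGS = ["Incomplete", "NA", "Strength", "ANI"]

    def display_rating(calculated, override=None):
        """Show the calculated rating; append ** if an override exists."""
        assert calculated in RATINGS
        return f"{calculated}**" if override else calculated

    print(display_rating("Strength"))                  # -> Strength
    print(display_rating("ANI", override="Strength"))  # -> ANI**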

Once an item has been rated, the reviewer must complete that item’s Rating Documentation. This documentation is entered on the Rating Documentation Screen, which is accessed by clicking the Save>Rating Documentation button on the Save Bar.

The Rating Documentation Screen is laid out in the same format as every other OSRI screen, with two exceptions: the Purpose of Assessment is replaced by the Reason for Rating and the Save>Rating Documentation button is replaced by the Save>Item Questions button. Clicking this button returns you to the regular OSRI screen.

Answering Rating Documentation Questions

The first question on each Rating Documentation Screen is referred to as the Main Reason Statement, and it is where the review pair must provide the main justification for the item’s Calculated Rating. There are certain formatting and content issues that you must keep in mind when composing your Main Reason Statement; see Module 3.3.1: Writing the Main Reason Statement for more information. The same is true for the follow-up questions that come after the Main Reason Statement.

Remember that when you are completing the automated instrument, every question must have an answer. If you have already answered one of the follow-up questions within the Main Reason Statement, you must still enter text in the follow-up question itself to indicate this. For example, you might type "See Main Reason" or "Answered in Main Reason." If a follow-up question is not applicable to the case, you must type the letters "NA" as an answer. You cannot leave any questions blank.
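
In other words, a finished Rating Documentation section contains no blank answers. The following Python sketch, with invented data and a hypothetical unanswered_followups helper, shows the kind of check this rule implies:

    # Illustrative check that no Rating Documentation answer is left blank.
    # Entries such as "See Main Reason" or "NA" count as valid answers.
    def unanswered_followups(answers):
        """Return the designations of any questions left blank."""
        return [q for q, text in answers.items() if not text.strip()]

    answers = {
        "Main Reason": "Agency responded within required timeframes.",
        "Follow-up A": "See Main Reason",
        "Follow-up B": "NA",
        "Follow-up C": "",  # blank -- must be filled in
    }
    print(unanswered_followups(answers))  # -> ['Follow-up C']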

Also, note that there is no need to complete the Rating Documentation immediately. A review pair may choose to finish the Rating Documentation after they answer all of the OSRI’s questions, or they may complete each item’s Rating Documentation as they complete each item. Work through the instrument in the manner you find most efficient.

Overriding Item Ratings

There are times when a review pair may disagree with the rating that the automated application assigns to an item or outcome. In these cases, the system does allow for any item’s Rating to be overridden and changed. Note that review pairs can only suggest that a Rating be overridden; only Site Leaders can actually perform an override.

If a review pair believes that an item’s rating should be changed, they should indicate this in the Main Reason question on that item’s Rating Documentation Screen. Local Site Leaders will review these answers during First-Level QA and decide whether overriding the item’s rating is justified.

If an override is justifiable, then the Site Leader can use the Override button. The Override button can only be accessed through the Screen View. To override an item’s Rating, use the following steps:

  1. Navigate to the appropriate item. Click the Override button, which is located in the middle of the Save Bar.
  2. The Item Rating Override Window will open. The field at the top of the window displays the item’s current Rating. Beneath that is the Select Override Rating drop-down menu. Use this menu to select the new item rating.
  3. Enter the reason for the item rating override in the text area of the Item Rating Override Window. When you are finished, click the Save and Close button.
  4. The item’s original Calculated Rating will still display in the Save Bar. However, two asterisks (**) will display beside it to indicate that it has been overridden, and if you float the cursor over the rating a pop-up window will display both the new rating and the reason for the override.

You can further edit an item’s rating or the reasons for overriding it by clicking the Override button to reopen the Item Rating Override Window. Repeat the steps described above to make any changes.

Note that you can undo an item rating override by clicking the Undo Existing Override button, which is located beside the Save and Close button. Clicking this button will open a pop-up window asking you to confirm your selection. Click the Yes button to proceed with the undo, or the No button to cancel.

Once an override has been undone, the Item Rating Override Window will close. The item’s Calculated Rating will be returned to what it had been originally, and the asterisks marking the override will no longer be present.

OSRI Navigation

Within an individual item, the Questions Overview Bar, which is located in the Question Area, allows you to quickly jump from question to question without having to scroll up and down. Similarly, the application also features tools that enable you to quickly move from item to item. These tools include the Document Navigation Bars and the Navigator Bars.

Document Navigation Bars

The Document Navigation Bars allow for the use of Arrow Navigation to move through the instrument item by item, or Directory Navigation to select an item from a directory tree view of the entire instrument. See the links below for more information about these methods of navigation.

Navigator Bars

There are two Navigator Bars located at the bottom of the screen: the Unanswered Questions Navigator, which lets you quickly access any unanswered question in the entire instrument, and the Unresolved Comment Navigator, which enables review pairs to quickly jump to any stickies that have been placed in the instrument as a result of the Quality Assurance review.

Arrow Navigation

The green arrows located beside the Document Navigation Bars allow you to move through the various OSRI sections, outcomes, and items in single steps. Clicking a right-facing arrow will move you forward one step, and clicking a left-facing arrow will move you backward one step. If there are no steps remaining in either direction, the corresponding arrow will be grayed out.

Note that the green arrows correspond to the part of the instrument that the navigation bars identify. For example, if you are on Section I, Safety Outcome 1, Performance Item 1, then the top bar identifies the section, the middle bar identifies the outcome, and the bottom bar identifies the item. If you click the middle arrow, you will move forward one outcome, to Safety Outcome 2 (which is still in Section I but begins with Item 3). The green arrows will gray out when you can no longer move forward or backward through that section or outcome.

Directory Navigation

Directory Navigation provides a fast way to move through the OSRI in nonsequential order. To use Directory Navigation, click anywhere inside any of the Document Navigation Bars. The bars will be replaced by a directory tree that displays the entire OSRI.

The item you are currently viewing will be highlighted in blue. Scroll through the directory by using the scroll bar on the right side and click on the section, outcome, or item that you wish to access. The OSRI will jump to that screen.

Note that the directory will close once the new screen opens and will be replaced by the normal Document Navigation Bars. You can also restore the Document Navigation Bars anytime by clicking within the white space of the directory itself. To re-open Directory Navigation, click the green Navigation Bars.

Unanswered Questions Navigator

The Unanswered Questions Navigator allows you to quickly jump to any single question in the automated instrument that has not yet been answered. It is an effective tool to use toward the end of your case review as a way of verifying that you have remembered to answer every question in the instrument. It is also an important part of Preliminary QA.

The Unanswered Questions Navigator is located at the bottom of the screen. To use it, click the down arrow located on the far right. A list will open that shows all of the questions that have not yet been answered for that case. Each question is identified in brackets as being part of the Face Sheet (FS), OSRI, or Rating Documentation (Rating) as well as by its number and letter designation. To jump to one of the questions, simply click it in the list. The application will immediately jump to its screen.

Note that the Unanswered Questions Navigator works best when you have a single case open for editing. While you can use the Unanswered Questions Navigator from the Overview Screen, the list that displays there will show unanswered questions from every case on your tablet rather than just the case you are focusing on. When you have a case open, the Unanswered Questions Navigator will list only the unanswered questions from that case.
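
A minimal Python sketch of this behavior, using invented cases and a hypothetical navigator_list helper; with a case open the list is filtered to that case, while from the Overview Screen every case's unanswered questions appear:

    # Illustrative model of the Unanswered Questions Navigator list. Each
    # entry is labeled with its part of the record -- Face Sheet (FS), OSRI,
    # or Rating Documentation (Rating) -- plus its designation.
    unanswered = [
        {"case": "Smith [FC]", "part": "FS", "designation": "3a"},
        {"case": "Smith [FC]", "part": "Rating", "designation": "Item 5"},
        {"case": "Jones [IH]", "part": "OSRI", "designation": "12b"},
    ]

    def navigator_list(questions, open_case=None):
        """List unanswered questions, filtered to the open case if any."""
        if open_case:
            questions = [q for q in questions if q["case"] == open_case]
        return [f"[{q['part']}] {q['designation']}" for q in questions]

    print(navigator_list(unanswered, open_case="Smith [FC]"))
    # -> ['[FS] 3a', '[Rating] Item 5']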

Note also that the Unanswered Questions Navigator can be turned off and on through the OSRI Menu. The default setting is ON. To turn it off, select the Show Unanswered Questions Navigator option from the drop-down list. The orange highlighting around the option will disappear. You can turn on the Unanswered Questions Navigator at any time by re-selecting it from the menu.

Unresolved Comment Navigator

The Unresolved Comment Navigator displays unresolved stickies that have been placed within a case record by a site leader during the Quality Assurance process. It displays at the bottom of the screen, alongside the Unanswered Questions Navigator. It can be turned off and on by selecting Show Comment Navigator from the OSRI Menu. Orange highlighting around the menu's icon means that the Unresolved Comment Navigator is active.

To use the Unresolved Comment Navigator, click the down arrow on the right side of the bar to open a list of all item questions and rating documentation that currently have an unresolved sticky. Scroll through the list and click on the sticky you wish to view, and the application will jump to it. You can then respond to the sticky normally.

Note that the Unresolved Comment Navigator works best if the case you are working on is open. If the case is open, the stickies that display in the bar will only be for that case. If you open the Unresolved Comment Navigator on the Overview Screen, it will list all unresolved comments for all the cases on your tablet.

Also note that the Unresolved Comment Navigator does not track or identify the comments that you have viewed. They remain on the list until your site leader has resolved them, which only happens after the case is transferred back to his or her tablet. For this reason, you should work through the stickies added to the comment bar methodically, in a sensible order (top to bottom or bottom to top) so that you do not lose track of where you are.

Completing the OSRI

Before a review pair submits a completed OSRI to a site leader to begin First-Level QA, its QA Status must first be changed to Ready for QA Review.

Note that only cases that have been entirely completed and have undergone Preliminary QA should have their status changed to Ready for QA Review. At this point, you are ready to transfer the case record to the site leader who is handling First-Level QA.

The Automated Stakeholder Interview Guide

As with the OSRI, the automated application also provides an electronic version of the Stakeholder Interview Guide, or SIG. This is the instrument used to capture information during stakeholder interviews, which are conducted by Local Site and Team Leaders alongside the review week's case reviews. The purpose of the SIG is to collect information for evaluating and rating the outcomes and systemic factors that are examined during the review process.

While there are some minor differences between the content and structure of the paper SIG (available for download in Module 9.1: The Instruments) and the automated version, the automated SIG is, functionally, very similar to the automated OSRI. The main differences between the two automated instruments involve the creation of the individual SIG case and case navigation. SIG records also differ from OSRI cases in that the QA Status of a SIG record only has one change: from Record Created to Interview Record Complete. Finally, SIG records do not undergo the same sort of Quality Assurance Review as do OSRI cases.

Working with SIGs

Unlike OSRI records, which are created by review pairs, SIGs can only be created by site leaders. All SIG-related functions are handled through the SIG Menu. This menu allows site leaders to add new interviews, delete existing interviews, and edit interviews.

Add New Interview

A SIG record can be created by any site leader to take notes during a stakeholder interview. The basic process for creating a SIG is the same whether you are the "official" or a "supporting" note-taker, although supporting note-takers must remember to identify their records as "supporting" so that they are not confused with the official record at the end of the review week.

1) From the SIG Menu, select Add New Interview.

2) The Add Interview window will open. Use the drop-down menu beside the Stakeholder field to select the type of stakeholder being interviewed.

3) Enter the stakeholder’s name and title/agency in the appropriate fields.

4) Enter the correct date in the Interview Date field. Note that the default entry is the current day's date.

5) Enter any relevant comments in the Comments field. Note that this step is not required; if there are no comments to enter, you may leave this field blank.

6) Click the Save and Start button to create the SIG. You may also click the Cancel button to exit the Add Interview window without saving the record.

Once you click the Save and Start button, the record will open to the first stakeholder-specific Core Question for the stakeholder you are interviewing.

Secondary Note-Takers

It is important to remember that there can only be one "official" SIG per interview. However, each interview will most likely include a number of different note-takers who will create their own SIGs. While the information captured by these additional note-takers will eventually be compiled with the official notes to create a final version of the SIG record, it is very important that the SIG records remain distinct from one another.

For this reason, if you are a supporting note-taker, you should clearly label any SIG you create as such. In the same field where you enter the stakeholder's name, you should identify the record with your own name, the label "supporting notes," and then the stakeholder's name. For example, if your name is John, and you are interviewing Judge Yates, you would identify the SIG as "John's Supporting Notes: Judge Yates." This ensures that the "supporting notes" designation appears on the Overview Screen.
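
This naming convention is simple enough to capture in one line. A trivial Python sketch, using the module's own example names and a hypothetical supporting_label helper:

    # Illustrative helper for the supporting note-taker naming convention.
    def supporting_label(note_taker, stakeholder):
        """Build the label entered in the stakeholder name field."""
        return f"{note_taker}'s Supporting Notes: {stakeholder}"

    print(supporting_label("John", "Judge Yates"))
    # -> John's Supporting Notes: Judge Yates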

By labeling supporting SIG notes in this fashion, you make it much easier for the site leader who uploads all final OSRI and SIG records to the central server at the end of the review week to identify which SIGs are "official" and should be uploaded. If you fail to designate supporting SIGs in this way, the site leader may see several SIG records with identical names on the wireless network and accidentally upload the wrong one.

Delete Existing Interview

It is possible to delete an interview record from the database. However, deletions are permanent and cannot be undone. You should always back up the database before you delete any record or records from it. To delete SIGs already saved to the system, use the following steps:

1) Select Delete Existing Interview from the SIG Menu to bring up the Deletion Screen.

2) On the Deletion Screen, select the appropriate Interviewer and the interview you wish to delete using the Interviewer and Selected Record drop-down menus.

3) Click the Delete Selected Record button.

4) A Confirm Delete window will open asking you if you wish to back up the database before deleting. If you click Yes, you will be asked to select a destination for the backup file. If you click No, the deletion will continue.

5) A Delete Record Final Warning window will open reminding you that record deletions are permanent and irreversible. Click OK to proceed with the deletion or Cancel to change your mind.

6) Once you click OK, a new window will pop up confirming that the record has been deleted. Click OK again to continue.

7) Click Overview to return to the Overview Screen. The record will no longer be displayed in the Record Summary Grid.

Again, you cannot undo a deletion. Once you click OK, the record is wiped from the database and can be recovered only by restoring a previous version of the database.

Edit Interviews

SIG records are listed on the Overview Screen just like OSRI cases. They are designated on the Record Summary Grid with the word SIG in the Instrument column. Newly created SIG records will show a QA Status of Record Created.

To open a SIG for editing from the Overview Screen, click the Edit button in the Action column of the Record Summary Grid. This will open that case to its first stakeholder-specific Core Question. You can also open a SIG for editing by selecting Edit Interviews from the SIG Menu. This will open the first SIG Record listed on the Record Summary Grid to its first Core Question.

Once you open a SIG, you can switch to another SIG by using the Selected Record drop-down menu. Click the down arrow beside the Selected Record field to display all the records assigned to the current Interview Team. Select the SIG you wish to open from this list.

If, after creating an interview record, you need to edit its identifying information (such as the stakeholder’s name, title/agency, or interview date), click the Edit button located beside the Selected Record field. This re-opens the Add Interview window for that record, which you may use to enter any changes. Click the Save and Start button to close the window and return to that record’s first Core Question.

SIG Layout

All screens in the SIG use the same basic layout. The layout is very similar to that used for the OSRI, but it features a few significant differences.

At the very top of each SIG screen is the blue Title Bar, which names the Interviewer or Interview Team assigned to the record. Below it is the Menu Bar. Below the Menu Bar are the Interviewer and Selected Record fields. The Interviewer field features a drop-down menu that allows you to filter SIG records initiated by other Interviewers that have been transferred to your tablet for review.

To the right of the Interviewer field is the Selected Record field, which displays the name of the currently open SIG. Its drop-down menu displays SIG records on your tablet that belong to the Interviewer shown in the Interviewer field.

To the right of the Selected Record field is the Edit button. Clicking it will open the Add New Interview window, which can be used to edit a record’s identifying information. Below the Selected Record Field and Edit button are the Document Navigation Bars. These bars work similarly to the OSRI’s Document Navigation Bars.

Below the Document Navigation Bars is an Item Description for the current item. The Save Bar appears below this. There are two options for saving SIG answers: Save>Next Item and Save. The Save>Next Item option saves all of your answers and moves you forward to the next stakeholder-specific Core Question in the SIG. The Save option saves all of the questions you have answered for the current item to the system and keeps the application on its current screen.

The Question Area of a SIG contains the question’s designation in the upper left corner. Beside that is its status. A question’s status may be Saved, Unsaved, Locked, or Blank.

Saved questions (light blue) have been answered and saved to the database using one of the two Save buttons.

Unsaved questions (yellow) have been at least partially answered or edited but are not yet saved to the database.

Locked questions (dark gray) have been locked by the system and cannot be edited. Questions most typically become locked due to the SIG being transferred to another tablet.

Blank questions (tan) are unanswered.

The color used in each question’s status corresponds to its color in the Questions Overview Bar, which is located to the left of the Questions Area and functions in the same way as it does on the OSRI.

SIG Navigation

Within an individual SIG item, the Questions Overview Bar allows you to quickly jump from question to question without having to scroll up and down. The application also features tools that enable you to quickly move from item to item. These tools include the Document Navigation Bars, which, as in the OSRI, allow both Arrow Navigation and Directory Navigation, and the Unanswered Questions Navigator. The SIG also features a unique form of navigation referred to as Advanced Navigation.

Arrow Navigation

The arrows located beside the Document Navigation Bars allow you to move sequentially through the various stakeholder-specific Core Questions. Clicking a right-facing arrow will move you forward one step to the next stakeholder-specific Core Question, and clicking a left-facing arrow will move you backward one step.

Remember that when a SIG first opens, only the stakeholder-specific Core Questions are accessible through Arrow Navigation. If you wish to answer other questions that are not stakeholder specific, you must use either Directory Navigation or Advanced Navigation.

Directory Navigation

Directory Navigation in the SIG works almost exactly as it does in the OSRI. To use it, click anywhere inside the Document Navigation Bars. The bars will be replaced by a directory view of the entire SIG. The section you are currently working in will be highlighted in blue. Scroll through the directory by using the scroll bar on the right side. When you click an item, the application will load to its page and close the directory view.

Unlike the OSRI's Directory Navigation, which loads every item in the instrument, Directory Navigation in the SIG initially displays only those items specific to that SIG’s stakeholder. To access non-stakeholder-specific questions, you must float the cursor over a single section header. Doing so will expand any “hidden” items that were not included in the original listing. These hidden items will be listed in red. Clicking a hidden item will jump you to its screen.

If you float the cursor over the SIG header at the top of the directory tree, the directory will expand to reveal all hidden sections. You may then expand each section individually as described above.

Note that once the Directory Navigation window closes, all of the expanded sections will once again become hidden. Even previously hidden items that you answered will remain hidden unless you again expand the directory to include the hidden sections.

Unanswered Questions Navigator

To use this navigator bar, click the down arrow located on the far right. A menu will display showing all of the questions that have not yet been answered for that record. To jump to one of the questions, select it from the menu. The application will immediately jump to its screen as if you were using Directory Navigation.

Note that, unlike the OSRI’s Unanswered Questions Navigator, the SIG’s navigator will only display unanswered stakeholder-specific questions. There is no way to access non-stakeholder-specific questions through the SIG’s Unanswered Questions Navigator.

Advanced Navigation

Advanced Navigation is a navigation method unique to SIG records. It replaces the Document Navigation Bars with a group of icons, each of which represents one SIG item. Clicking an icon will open the item’s question screen.

Because Document Navigation Bars are the default display for the application, Advanced Navigation must be enabled by the Interviewer. To set the application to Advanced Navigation, you must first return to the Overview Screen. Then, select Advanced Navigation Mode from the SIG Menu. Once you select this option, a highlight will appear beside the selection each time you reopen the SIG Menu.

Once you have enabled Advanced Navigation, the Advanced Navigation Interface will appear in place of the Document Navigation Bars. Each item’s icon appears as a numbered box. The item you are currently viewing will be highlighted in blue. The red bars over each group of icons represent the various outcomes and systemic factors. Boxes highlighted in yellow are the Core Questions specific to that SIG’s particular stakeholder. Non-stakeholder-specific questions are not highlighted. If you float the cursor over either a red bar or one of the boxes, a pop-up window will appear with a full description of that item.

To use the Advanced Navigation Interface, simply click the icon for the item that you wish to answer. The application will jump directly to that item’s screen. There is no difference between how you access stakeholder-specific and non-stakeholder-specific items.

To deactivate Advanced Navigation, reopen the SIG Menu and select Advanced Navigation Mode again. The highlight will disappear. Use the Overview button to return to the Overview Screen, and the next time you open a SIG for editing, the Document Navigation Bars will display instead of the Advanced Navigation Interface.

Completing the SIG

A SIG interview is complete once the Interviewer is satisfied that enough questions have been answered to provide a complete picture of the stakeholder’s responses. While this will often mean that all of the stakeholder-specific questions have been completed, there is no requirement that they all be answered. An Interviewer may also use (or not use) as many of the other, non-stakeholder-specific questions as he or she wants.

The Completeness Column of the Record Summary Grid displays the total percentage of each SIG record that is completed. This percentage is calculated only from the stakeholder-specific questions; non-stakeholder-specific questions do not figure into it. Therefore, a record that is 100 percent complete has had all of its stakeholder-specific questions answered, but may have all, none, or only a few of its non-stakeholder-specific questions answered.
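
To make the calculation concrete, here is an illustrative Python sketch with invented question counts; answers to non-stakeholder-specific questions are deliberately ignored:

    # Illustrative SIG completeness calculation: only stakeholder-specific
    # questions count toward the percentage in the Completeness column.
    def sig_completeness(specific_answered, specific_total, other_answered=0):
        """Percent complete; non-stakeholder-specific answers are ignored."""
        return round(100 * specific_answered / specific_total)

    # All 20 stakeholder-specific questions answered -> 100 percent,
    # no matter how many other questions were used.
    print(sig_completeness(20, 20, other_answered=0))   # -> 100
    print(sig_completeness(20, 20, other_answered=15))  # -> 100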

Note that, unlike an OSRI, a SIG record does not have to be 100 percent complete in order to be considered “finished.” Also unlike an OSRI, the QA Status of a SIG plays no role in its finalization. SIGs do not undergo Quality Assurance as OSRIs do; rather, the notes from the official and supporting note-takers are compiled into one "official record," which is then uploaded to the central server at the end of the review week. When this official record is ready, its QA Status should be changed to Interview Record Complete as a signal to the uploading site leader that it is ready.

Data Management

Data management in the application includes such functions as database backups, data transfers, and USB key creation. All data management functions are accessed through either the Data Management or Admin Menus.

Note that many data management functions, including access to the Admin menu, are only available to site leaders.

Data Management Menu

In addition to providing information about the application’s version, this drop-down menu offers six options: Backup, Restore, Transfer Records, Re-assign Case, Unlock a Case, and Unlock a SIG.

Backup

The backup function is available to both reviewers and site leaders, and it is a critical step to ensure that no data is lost in the event of a corrupted or lost tablet. When you back up the database, you are essentially taking a “photograph” of how it exists at that moment in time. If a disaster occurs and your tablet becomes damaged, you can then restore the database to the state captured in that “photograph.” This restoration, however, will erase any new information entered since the last backup. Therefore, you should back up the database at least once per hour to ensure that the backup file reflects the most current information.

To create a backup file, make sure that your USB key is plugged into the tablet PC. Then, use the following steps:

1) Select Backup from the Data Management Menu.

2) At the Browse for Folder window, choose a destination for the backup file. Your USB key will be the default destination. Click OK to save the file.

3) A pop-up window will open stating that the backup was successful. Click OK to continue.

The backup file will be located on your USB key and will have a name beginning with the prefix cfsr_snapshot. Following that will be the date and time at which the file was generated. Note that clicking on this backup file will have no effect on the application or your database; the only way to use it is through the application's restore function.
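
The naming pattern can be sketched as follows; only the cfsr_snapshot prefix is documented here, so the exact date/time format and file extension in this Python sketch are assumptions:

    # Illustrative backup file naming; timestamp format and extension
    # are assumed, not taken from the application.
    from datetime import datetime

    def backup_filename(now=None):
        """Build a backup name: the cfsr_snapshot prefix plus date/time."""
        stamp = (now or datetime.now()).strftime("%Y-%m-%d_%H-%M-%S")
        return f"cfsr_snapshot_{stamp}.bak"

    print(backup_filename(datetime(2007, 3, 15, 10, 30, 0)))
    # -> cfsr_snapshot_2007-03-15_10-30-00.bak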

Restore

This function, available to both reviewers and site leaders, enables you to restore the database from a previously backed-up copy. The previous copy will overwrite the entire database, effectively erasing all current information and replacing it with what was previously saved. For this reason, it is important to always keep the backup file as current as possible.

To restore the database, use the following steps:

1) Select Restore from the Data Management Menu.

2) A pop-up window will display with a warning message explaining that restoring the database will delete all current data. Click OK to continue with the restoration.

3) From the Select Restore File window, navigate to the location of the backup file. If the location has more than one backup file already saved, select the most current one by choosing the file with the most recent date/time in its name, as sketched after these steps. Click the Open button to continue.

4) As with the Backup function, a pop-up window will open to show that the restoration was successful. Click OK to continue.
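
Because the date and time are embedded in each backup file's name, picking the most recent file reduces to a name comparison. A Python sketch, assuming the zero-padded timestamp naming from the backup example above, which makes text order match chronological order:

    # Illustrative selection of the most recent backup file, assuming the
    # hypothetical cfsr_snapshot_YYYY-MM-DD_HH-MM-SS naming shown earlier.
    files = [
        "cfsr_snapshot_2007-03-14_16-45-00.bak",
        "cfsr_snapshot_2007-03-15_09-10-00.bak",
        "cfsr_snapshot_2007-03-15_10-30-00.bak",
    ]

    most_recent = max(f for f in files if f.startswith("cfsr_snapshot"))
    print(most_recent)  # -> cfsr_snapshot_2007-03-15_10-30-00.bak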

Remember that the restore function will overwrite all information that currently exists on your tablet. Do not use this function except as a last resort! If you do need to restore your database, you should seek out the JBS representative who is providing technical assistance on site and ask him or her to assist you.

Transfer Records

This function is only available to site leaders; it is grayed out for reviewers. It allows a site leader to transfer OSRI or SIG records between tablets at the local site and to and from the central server. These transfers can be done wirelessly through a local area network or over a red wire that connects two tablets to each other. These record transfers are an essential part of the Quality Assurance process.

For more information on transferring records, see Module 6.7: Data Transfers.

Re-assign Case

This function allows a case to be reassigned from one review pair to another. Once a case is re-assigned, it will no longer appear with the previous review pair's listing of cases on the Record Summary Grid. Only the new review pair will be able to access and edit it.

Note that this function is typically only used as a last resort, when it is clear that a particular review pair is struggling to complete a particularly complicated case and will not have time to finish the others it has been assigned. The decision to re-assign a case should only be made after careful consultation between the site leaders and affected review pairs.

To reassign a case, use the following steps:

1) Select Re-assign Case from the Data Management Menu.

2) The Reassignment Screen will open. Click the down arrow beside the Selected Record field to open a drop-down menu of available cases, then select the case that you wish to reassign.

3) Click the down arrow in the Select User field to open a drop-down menu of all current reviewers/Review Teams. Select the case’s new reviewer or Review Team from this list.

4) Click the Re-assign Case button.

5) A Status Update window will show the reassignment’s progress. You may cancel the reassignment by clicking the Cancel button. The window will close automatically once the reassignment is complete.

6) Click Overview to return to the Overview Screen. The reassigned case will no longer display on the Record Summary Grid.

Note that the new review pair will not be able to access the reassigned case until a site leader transfers the case from the original review pair's tablet to their own.

Unlock a Case

During the Quality Assurance process, OSRI cases that are transferred from a review pair's tablet to a site leader’s tablet become “locked” on the review pair’s tablet until the site leader returns them. Locked cases are highlighted in gray on the Record Summary Grid. They also display a gray background when they are opened. Locked cases cannot be edited or altered.

While transferred cases will appear on the site leader’s tablet as active on the Record Summary Grid, they are actually locked for editing. Site leaders cannot edit any information on an OSRI during quality assurance; rather, they can only add stickies and override item ratings as necessary. In emergencies, though, Local Site Leaders can unlock a case for editing. To unlock a case, use the following steps:

1) Select Unlock a Case from the Data Management Menu.

2) The Unlock Record Screen will open. Use the Selected Record drop-down menu to choose the case that you wish to unlock.

3) Click the Unlock Record button.

4) A confirmation window will open with a warning that unlocking the record can lead to the duplication of files. Click OK to continue.

5) Another window will open confirming that the case has been unlocked. Click OK to continue.

6) Click Overview to return to the Overview Screen. Use the Edit Cases option in the OSRI Menu to access the case and edit its information.

Note that if a site leader edits a case record after unlocking it, the changes he or she makes will never be seen by the review pair, even if the record is uploaded back to the reviewer tablet. Local Site Leaders should use the unlock feature to edit OSRIs only as a last resort, in cases where the review pair has already been dismissed from the local site.

Unlock a SIG

Although the SIG does not undergo the same Quality Assurance Review that the OSRI does, SIG records can still undergo data transfers and, as with OSRI cases, become locked to the original Interviewer once a transfer takes place. If a situation arises where one person needs to edit a SIG record while it is locked on his or her tablet, he or she can unlock it for emergency editing. To unlock a SIG, use the following steps:

1) Select Unlock a SIG from the Data Management Menu.

2) The Unlock Record Screen will display. Use the Selected Record drop-down menu to choose the record that you wish to unlock.

3) Click the Unlock Record button.

4) A confirmation window will open warning that unlocking the record can lead to the duplication of files. Click OK to continue.

5) Another window will open confirming that the case has been unlocked. Click OK to continue.

6) Click Overview to return to the Overview Screen. The case will no longer appear as gray in the Record Summary Grid and can be edited normally.

Note that, as with OSRI cases, any changes made to unlocked SIGs will not be seen by the site leader to whom the SIG was transferred until another data transfer takes place.

Admin Menu

The Admin Menu is only accessible by site leaders. It is grayed out for review pairs. The menu offers two functions: USB Key Utility and Initialize Database.

USB Key Utility

This function allows site leaders to create a new USB key in the event that one is lost. When you select it from the Admin Menu, the USB Utility Screen opens. There are two buttons at the top of the screen: Read Key and Write Key. Below these buttons are three information fields that display login information for the database. This information includes the database login name (DB User), the password, and the tablet’s designation (Host). The Team Member field is a drop-down menu that allows the site leader to select a reviewer from a list of those currently on site.

To create a new USB key, plug a working USB key into one of the tablet’s USB ports. Then follow these steps:

1) Click the Read Key button. The information fields will populate with the necessary database information. Remove the key once these fields have populated.

2) Use the Team Member drop-down menu to select the reviewer or Local Site Leader for whom the new USB key is being created.

3) Plug a blank USB key into one of the tablet’s USB ports. Click the Write Key button.

4) A pop-up window will appear asking you to specify the USB key file. Navigate to the drive containing the new (blank) USB key, then click the Save button. The application will write the necessary data onto the new USB key.

5) Keeping the new USB key in the tablet, close and reopen the application. The new name or Review Team should appear in the title bar at the top of the screen.

Note that any given USB key can only contain the data for one review pair or site leader.

Initialize Database

This function allows a database administrator to completely reset the database as it exists on that tablet. This is only done in extreme cases where the database has become irreversibly corrupted or otherwise unusable. Initializing the database erases all existing files and restores the database to a pre-review condition. The Restore function can then be used to restore a previously generated backup file.

Note that initializing the database requires a network password. Consult the database administrator or onsite JBS representative for more information.

Data Transfers

Where the application is concerned, a "data transfer" refers to the act of moving OSRI or SIG records from one tablet PC to another. During the first part of the review week, most of these will be local site data transfers related to First-Level QA. These local site data transfers involve moving records from one or more tablet PCs to another. Most often, this will be from a review pair or pairs' tablet to a Site Leader's tablet. Later, during Second-Level QA and Local Site Finalization, the data transfers take place between the local site and an off-site central server, which is where the CFSR Data Repository that ultimately stores all four sites' records is located. During these processes, records move from one Site Leader's tablet to the central server.

Regardless of whether they are between two tablets at the local site or between the local site and the central server, data transfers use the Data Transfer Screen to transfer records. There are slight differences between the appearance of the Local Site and Central Server Data Transfer screens, but the basic functionality of both is identical. Both screens involve transferring records as either downloads or uploads.

Data transfers can only be initiated by Site Leaders. Reviewers cannot initiate data transfers.

Also, after a record is transferred, it becomes inactive (locked) on its original location and active on its new location. An inactive record is highlighted in dark gray on the Record Summary Grid. If it is opened, it displays with a dark gray background and cannot be edited. For an inactive record to become active again, it must be transferred back from its new location. In emergency situations, a Site Leader can unlock an OSRI or SIG record.
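
The lock-on-transfer rule can be summarized in a few lines of Python. This is a conceptual sketch with invented names, not the application's implementation:

    # Conceptual sketch of the lock-on-transfer rule: after a transfer, the
    # record is locked (inactive) at its source and active at its destination.
    class RecordCopy:
        def __init__(self, name, tablet, active=True):
            self.name, self.tablet, self.active = name, tablet, active

    def transfer(record, destination):
        """Move a record to another tablet; the source copy becomes locked."""
        record.active = False  # now shown in dark gray and uneditable
        return RecordCopy(record.name, destination, active=True)

    source = RecordCopy("Smith [FC]", "Reviewer tablet")
    copy = transfer(source, "Site Leader tablet")
    print(source.tablet, "locked:", not source.active)  # -> ... locked: True
    print(copy.tablet, "active:", copy.active)          # -> ... active: True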

Local Site Data Transfers

Local site data transfers take place between tablets located at the local review site. In all cases, the data transfer will be initiated by a Site Leader. The data transfer will take place as either a record download or a record upload, and can involve one or more source or destination computers. In other words, the Site Leader may download records from one or more tablets (either reviewer or Site Leader), or upload records to one or more tablets (again, either reviewer or Site Leader). Transfers also may take place wirelessly or over a connecting wire, although transfers over a connecting wire (referred to as "Red Wire" transfers) can only involve two tablets.

Local site data transfers are managed through the Local Site Data Transfer Screen.

Wireless Transfers

Wireless transfers take place between an initiating Site Leader's tablet and one or more other reviewer or Site Leader tablets. They take place over a local wireless network established by a wireless router. Before wireless transfers can take place, the initiating Site Leader must ensure that the wireless network is activated and that all involved tablets are connected. To activate the local wireless network, a Site Leader (or the onsite JBS representative) must plug the wireless router provided to the local site into an electrical outlet. Once the router is plugged in, all tablets in the immediate vicinity should connect to it automatically, although it may take several seconds for each connection to establish.

To check a tablet’s connection status, float the cursor over the Wireless Network icon in the tablet’s tool tray. A pop-up window showing the tablet’s active network connection will open. The network’s name should be cfsr-wlan-1. You are now ready to wirelessly connect to other tablets. To do so, select Local Site from the Data Management Menu's Transfer Records option. The Local Site Data Transfer Screen will open.

For a wireless transfer to work, all of the tablets involved must be connected to the cfsr-wlan-1 wireless network. If any of the tablets fail to connect to this network, you may need to reconnect or re-enable the wireless network.

Reconnecting to the Network

In some instances, a tablet may fail to connect to the network. This is often because the tablet is already connected to another wireless network (the hotel's, for example). In that case, another network name will display when you float the cursor over the icon, and the tablet's network connection must be set manually. To manually set a tablet's network connection:

1) Right-click the Wireless Network Connection icon to open the Wireless Network Connection Status window.

2) Click the View Wireless Networks button.

3) Select cfsr-wlan-1 from the list of available networks. If cfsr-wlan-1 does not appear as an available network, you may need to refresh the network list. To do this, click the Refresh Network List link in the upper left-hand corner.

4) Click the Connect button.

Once you click the Connect button, a Wireless Network Connection Status window will open. When it closes, the connection will be complete. You can close the Wireless Network Connection window by clicking the red X in its upper corner.

Once the wireless connection is established, you must open the Local Site Data Transfer Screen to execute a wireless transfer.

Re-enabling Wireless Connections

The tablet PCs may sometimes become disconnected from the cfsr-wlan-1 wireless network. If a tablet is not connected to cfsr-wlan-1, users can manually connect by right-clicking the “Wireless Network Connection” icon, selecting “View Available Wireless Networks”, selecting cfsr-wlan-1, and clicking “Connect”.

Other than manually connecting to the cfsr-wlan-1 wireless network as described above, users should not alter the wireless connection configurations, “disable” the wireless connection, or “repair” the wireless connection unless JBS staff are available to assist in person or over the phone.

Red Wire Transfers

Red wire transfers take place between a Site Leader's tablet and one other reviewer’s or Site Leader’s tablet through a hard-wired connection between the two tablets. They are an efficient way to quickly transfer records from tablet to tablet. The hard-wired connection is established via a red ethernet wire that will be on site.

Note that, as with all data transfers, only Site Leaders can initiate a red wire transfer. In order for the transfer to work, the other tablet must be turned on with its USB key inserted and the application open. To establish a red wire connection, first connect the two tablets to one another with the red wire, which plugs into the ethernet port located on the right-hand side of each tablet.

Once the two tablets are connected, they will create a Local Area Connection. While this connection is taking place, you will see an orange dot moving back and forth beneath the Local Area Connection icon in the bottom right-hand corner of the screen. This process may take up to a minute. When the connection has been established, you will see a yellow exclamation mark appear over the icon. If you float your cursor over it, you will see a pop-up window indicating that there is only limited connectivity available. This is normal.

With the tablets still connected, go to the Data Management Menu and select the Transfer Records option. A status window will display as the Local Site Data Transfer Screen opens. Once the screen has opened, you will be able to upload or download records between your two tablets normally. However, you will only have access to the one tablet to which you are connected.

When you have finished the data transfer, be sure to disconnect the red wire from each tablet.

The Local Site Data Transfer Screen

Local site data transfers are executed from the Local Site Data Transfer Screen. From here, Local Site Leaders can transfer OSRI records, SIG records, and the Summary of Findings Form from one or more tablet PCs located at the local site to their own.

The Local Site Data Transfer Screen is accessed by using the Transfer Records option from the Data Management Menu. When a Site Leader selects this option, the tablet verifies that a working network connection exists and then launches this screen. At the top of the screen, beneath the title bar and menu bar, is the Site Bar, which identifies the current review site. If the network connection is good, the Site Bar will be green. If it is not, the Site Bar will be red.

The central area of the Local Site Data Transfer Screen displays square tablet icons that represent the tablets currently connected to the network. Each icon appears as a tan or white box. Tan shading indicates OSRI records, while white shading indicates SIG records. Each tablet icon displays the Review Team or Site Leader’s name in the header.

Within each Tablet Icon appear record icons. These are smaller boxes identified by an individual record’s name in the header. The header’s text will be in bold if the record is active on that tablet; it will be grayed out if it is inactive (i.e., if it has already been transferred).

Each record icon is shaded with green, red, or a combination of both colors. The amount of green corresponds to how complete the record is. Red indicates that there are still incomplete items in the record. If you float the cursor over a Record Icon, a pop-up window will display basic record information including a transfer history.
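
In other words, the shading acts as a completion gauge. As a rough illustration (the numbers below are invented; the application derives the proportion internally from the record's items):

    # Hypothetical illustration of the completeness shading.
    complete_items, total_items = 45, 60
    green_fraction = complete_items / total_items
    print(f"Record icon is {green_fraction:.0%} green")  # 75% green, rest red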

If any tablet has more cases than will fit within its Tablet Icon, a scroll bar will appear on the Tablet Icon’s right-hand side. Use this scroll bar to move up and down through the stored records. If a tablet holds both OSRI and SIG records, its Tablet Icon will be divided in half. OSRIs will appear in the tan area and SIGs will appear in the white.

The bottom area of the Local Site Data Transfer Screen represents the tablet of the Local Site Leader who initiated the data transfer. It is separated from the central screen area by another green Site Bar. Below the Site Bar are three shaded areas: the tan area on the left is where OSRI records display; the white area on the right is for SIGs; and below these is a smaller purple area where the Summary of Findings Form icon will display.

The Site Bar at the bottom of the screen includes a filtering drop-down menu that Local Site Leaders at one of the two metro sites or State Team Leaders can use to select Review Sites. When the filtering menu is used, only Record Icons originating from the currently selected Review Site will display in the Leader’s Tablet Area at the bottom of the screen.

If tablets lose connectivity during a transfer, or if new tablets connect to the local wireless network, it may become necessary to refresh the Local Site Data Transfer Screen. Refreshing the screen resets the display and updates each tablet and record icon. To refresh the Local Site Data Transfer Screen, simply select Transfer Records from the Data Management Menu again.

Central Server Data Transfers

Central server data transfers occur between the tablet PC of a Site Leader and the central server, also called the CFSR Data Repository. Central server data transfers are generally initiated as part of the Second-Level QA process, which often involves QA performed by Site Leaders located off site. They are also used to transfer final SIG records to the central server at the end of the review week. They can take place as either record downloads or uploads.

Note that central server data transfers require a working Internet connection. The local wireless network used for Local Site Data Transfers does not provide Internet access. You may have wired or wireless Internet access options available on site, including access to a wireless card that will be provided by JBS. If, for some reason, Internet access is not available at your site, you may have to seek out other options. These options may include your hotel room, local restaurants, coffee houses, libraries, or other public sites.

Note that you most likely will have to manually configure your tablet's Internet connection, especially if you have been performing local site transfers.

Configuring an Internet Connection

The tablet PCs are preconfigured to connect automatically to the cfsr-wlan-1 wireless network if they are within range of the router. Users should not alter the wireless connection configurations unless JBS staff are available to assist in person or over the phone.

If a tablet is not connected to the cfsr-wlan-1 wireless network, specific steps can be taken as detailed in the "Application Transfer Instructions" document provided on site. Remember that you can also use a red wire transfer to connect two tablets if you are unable to connect to the cfsr-wlan-1 wireless network.

The Central Server Data Transfer Screen

The Central Server Data Transfer Screen is where all central server data transfers are executed. To access the Central Server Data Transfer Screen, select Transfer Records from the Data Management Menu. Then select the Between Your Tablet and Central Server option.

Once you select this option, a Data Transfer Setup pop-up window will open asking if you are using a wired connection to access the Internet. If you are using a wired connection (e.g., an ethernet cable plugged into a wall port), make sure that the wire is properly connected and then click the Yes button. The tablet will automatically adjust its settings to account for the wire. If you are using a wireless Internet connection, click the No button.

A Status Update window will open showing the tablet's progress as it connects with the central server. When the Status Update window closes, the Central Server Data Transfer Screen will display. This screen's basic layout is similar to that of the Local Site Data Transfer Screen. At the top is the application's Title Bar, followed by the Menu Bar and the Central Server Bar, which confirms the connection to the central server. If the Internet connection is good, this bar will be green. If it is not, it will be red.

The main portion of the Central Server Data Transfer Screen is where each local site’s individual “folder” space on the central server is represented. Each folder icon displays the local site’s name in its blue header. Beneath this header are three colored areas: a tan area, which displays OSRI Records; a purple area, which displays the Summary of Findings Form; and a white area, which displays SIG Records.

Generally, Site Leaders will only have access to their own local site’s central server folder. However, Site Leaders based at either of the two metro sites will be able to view and transfer to and from either metro site folder. Team Leaders will be able to view all four site folders.

The bottom of the Central Server Data Transfer Screen represents the tablet of the leader who initiated the data transfer. It is separated from the rest of the screen by another green Site Bar. This area of the screen displays record icons for all the records currently stored on the leader’s tablet. Like the Central Server Folder icons, the tablet display is divided into three colors: tan for OSRI records, white for SIG records, and purple for the Summary of Findings Form.

The green bar includes a drop-down menu that allows you to filter the displayed records by review site, so you can select which review site's records you wish to view.

Record Downloads

Record downloads occur when a Site Leader moves one or more records down from the central area of the Central Server or Local Site Data Transfer Screen to the leader's area at the bottom of the screen. The original record becomes locked, and the downloaded record becomes accessible on the leader's tablet.

There are three types of downloads: site downloads, tablet downloads, and individual downloads. All three work with OSRI records, but only tablet and individual downloads work with SIGs.

Site Downloads

Site downloads can only happen during local site wireless transfers. They only work with OSRI records. A site download transfers, at once, all eligible OSRI records located on any tablet that is connected to the network. "Eligible records" are those that have had their QA Status set to Ready for QA Review.

To execute a site download, click and hold on the green Site Bar at the top of the screen. When you drag the cursor, the headers of all eligible OSRI records will turn yellow to show that they have been selected. Drag the records to the OSRI section at the bottom of the screen, which will turn yellow. When you release the cursor, a Status Update window will open to show the download’s progress.

Tablet Downloads

Tablet downloads transfer all eligible OSRI or SIG records from one tablet or central server folder. All SIGs are eligible; OSRIs must have their QA Status set to Ready for QA Review (for local site transfers) or State QA Review Complete (for central server transfers) for a tablet download to work. To execute a tablet download, click inside a tablet or folder icon in the central screen area. When you drag the cursor, the headers of all eligible records will turn yellow. Drag the records to the correct side at the bottom of the screen (tan for OSRIs, white for SIGs) and release them there. When you release the cursor, a Status Update window will open to show the download’s progress.
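
Put another way, eligibility for a tablet download reduces to a test on record type, QA Status, and transfer context. The Python fragment below is only a sketch of that rule: the status strings are the ones given above, but the function itself is hypothetical.

    # Sketch of the tablet-download eligibility rule described above.

    def eligible_for_tablet_download(record_type, qa_status, transfer):
        """record_type: "OSRI" or "SIG"; transfer: "local site" or
        "central server". Returns True if the record may be included."""
        if record_type == "SIG":
            return True  # all SIGs are eligible
        if transfer == "local site":
            return qa_status == "Ready for QA Review"
        if transfer == "central server":
            return qa_status == "State QA Review Complete"
        return False

    # An OSRI awaiting First-Level QA at the local site is eligible:
    print(eligible_for_tablet_download("OSRI", "Ready for QA Review",
                                       "local site"))  # True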

Individual Downloads

Individual downloads allow Site Leaders to transfer a SIG record or any OSRI record regardless of its QA Status. To execute an individual download, click the individual record icon in the central screen area. When you drag the cursor, its header will turn yellow. Drag the record to the correct side at the bottom of the screen (tan for OSRIs, white for SIGs) and release it there. When you release the cursor, a Status Update window will open to show the download’s progress.

Record Uploads

Generally, record uploads occur when a Site Leader opens either the Local Site or Central Server Data Transfer Screen and then moves one or more records (either OSRI or SIG) from his or her tablet area at the bottom of the screen up to either a tablet or the Central Server in the center screen. The original record becomes locked and the uploaded record becomes accessible from its new tablet or the central server.

There are two ways to upload OSRI and SIG records: group uploads and individual uploads.

Group Uploads

A group upload moves all eligible OSRI or SIG records at once. All SIG records are eligible; OSRI records are eligible if they have had their QA Status set appropriately. The appropriate setting depends on where the OSRI record is in the overall QA process (see the sketch after this list):

Site Leader to Reviewer (First-Level QA): QA Review Complete

Site Leader to Central Server (beginning Second-Level QA): Ready for State QA Review

Site Leader to Central Server (ending Second-Level QA): State QA Review Complete
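
Expressed as a lookup table, the pairings above might look like the following sketch (the status strings are taken from this list; the table and function are illustrative only):

    # The stage-to-status pairings above as an illustrative lookup table.

    REQUIRED_QA_STATUS = {
        "to reviewer (First-Level QA)":       "QA Review Complete",
        "to server (begin Second-Level QA)":  "Ready for State QA Review",
        "to server (end Second-Level QA)":    "State QA Review Complete",
    }

    def osri_eligible_for_group_upload(stage, qa_status):
        """An OSRI joins a group upload only if its QA Status matches the
        requirement for the current stage; SIG records always qualify."""
        return qa_status == REQUIRED_QA_STATUS[stage]

    print(osri_eligible_for_group_upload("to reviewer (First-Level QA)",
                                         "QA Review Complete"))  # True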

To execute a local site group upload, click anywhere within either the OSRI or SIG side at the bottom of the screen. When you drag the cursor, the headers of all eligible record icons will turn yellow. Drag the OSRI or SIG records to their destination tablet in the central screen. Note that while group uploads only permit you to move OSRI records back to their original tablet, SIG records can be group uploaded to any leader's tablet.

If you are using a local wireless transfer and have OSRI records to upload to multiple tablets, you can upload them in one step by dragging the selected records to the green Site Bar at the top of the screen. This will automatically transfer them back to their original tablets.

For central server uploads, drag the selected records to the appropriate site folder in the central screen.

In either case, a Status Update window will open to show the upload’s progress.

Individual Uploads

Individual uploads allow the transfer of a single OSRI or SIG record. For local site data transfers, an individual upload allows the Local Site Leader to transfer an OSRI record either back to the original reviewer’s tablet or to another Local Site Leader’s tablet. Individual uploads can be performed regardless of a record’s QA Status.

To execute an individual upload, click and hold an OSRI or SIG record icon at the bottom of the screen. When you drag the cursor, that record's header will turn yellow. Drag the selected record to its destination tablet or central server folder in the central screen. Assuming that it is an eligible destination, the tablet or folder icon will also turn yellow once the cursor is positioned over it.

For SIG records, eligible destinations include the tablet of another Site Leader or the site’s folder on the central server. For OSRI records, eligible destinations include the original review pair’s tablet, another Site Leader’s tablet, or the site’s folder on the central server. It is not possible to upload an OSRI record to a new review pair's tablet unless the original review pair first reassigns the case.

When you release the cursor, a Status Update window will open to show the upload’s progress.

Quality Assurance

Quality Assurance, or QA, is a critical component of the CFSR process. The QA process is designed to ensure that, as much as possible, all cases completed during the review week consist of clean, complete data that accurately answers all questions, and that all item and outcome ratings are accurate and supported with well-reasoned Rating Documentation.  

The QA Review process as a whole is divided into seven parts:

  1. Preliminary QA
  2. First-Level QA
  3. Second-Level QA
  4. Local Site Finalization
  5. Data Validation
  6. Third-Level QA
  7. Data Change Management

There are numerous tools and functions built into the automated application to assist and streamline the QA process. These include QA tools that can be used by both reviewers and site leaders, data transfer functions, and the ability to post electronic comments, or "stickies."

QA Tools

The Quality Assurance (QA) process is an integral part of the onsite review. Review pairs must conduct an initial QA on all cases that they complete. This initial QA, referred to as Preliminary QA, helps ensure that basic, easily corrected errors (such as missing information) are caught and corrected before they take up time in the later stages of the QA process. During later stages of QA, such as First-Level and Second-Level QA, site leaders become involved in the process and work with review pairs to correct any errors or omissions in case records. 

There are a number of QA tools built into the application that are designed to streamline the QA process for both review pairs and site leaders. While many of these tools involve advanced QA functions used in First- and Second-Level QA, such as data transfers and stickies, there are also numerous resources that review pairs and site leaders can use to assist in reviewing the instrument, especially during a Preliminary QA. Two of these resources, the Completeness column and the Unanswered Questions Navigator, are essential for review pairs in determining that an OSRI record is complete and ready for QA.

Other important resources are the application's many built-in reports. Especially useful for Preliminary QA are the Completed Case Report, the Preliminary Case Summary Report, and the Case QA Rating Summary Report.

During First- and Second-Level QA, the Unresolved Comment Navigator is a critical tool for review pairs to locate and respond to the stickies that site leaders add to a case record.

Stickies

"Stickies" are electronic notes, or comments, that can be attached by a site leader to an item's question in an OSRI record. They are intended to draw attention to places where corrections, revisions, or additional input might be required by the review pair, and are an integral part of both First-Level and Second-Level QA.

To add stickies, the site leader must first transfer the OSRI record to his or her tablet. At this point, the record becomes active on the site leader's tablet and locked on the review pair's tablet. While it is locked, the review pair cannot edit the record in any way. They can, however, read it and access reports on it. The site leader should conduct QA as quickly as possible, add any necessary stickies, and then transfer the case back to the review pair's tablet. The review pair will then use the Unresolved Comment Navigator to locate the stickies and respond to them as appropriate.

When the case is again transferred to the site leader's tablet, the site leader can resolve the sticky (if the issue has been adequately addressed) or add to the comment and return the case for further clarification by the review pair. This process can continue until the issue is resolved, although it is likely that site leaders will ultimately choose to discuss persistent issues face-to-face rather than continue to exchange sticky notes back and forth with a review pair.
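
Conceptually, each sticky is a small threaded conversation attached to one question, with one special rule (described later in this module): once the review pair has replied, the thread can no longer be deleted, only resolved. The following Python sketch is purely illustrative; the application has no programming interface, and all names here are invented.

    # Illustrative model of a sticky conversation thread.

    class Sticky:
        def __init__(self, question_id, author, text):
            self.question_id = question_id
            self.messages = [(author, text)]  # first comment, shown in yellow
            self.resolved = False

        def reply(self, author, text):
            self.messages.append((author, text))

        def can_delete(self):
            # Deletion is possible only before the review pair responds.
            return len(self.messages) == 1 and not self.resolved

        def resolve(self):
            self.resolved = True

    note = Sticky("Item 3, Question A", "Site Leader", "Please add dates.")
    note.reply("Review Pair", "Dates added to the narrative.")
    assert not note.can_delete()  # after a reply: resolve, don't delete
    note.resolve()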

Adding Stickies

After a Local Site Leader transfers an OSRI record from a review pair's tablet, he or she will be able to open it for QA review by using the QA button in the Action Column of the Record Summary Grid. The case will open as a Completed Case Report, which will allow the site leader to scroll through the entire instrument in one window from start to finish. From this report, the site leader can add stickies as necessary to any single question. Each sticky will then be accessible by the review pair when the case is transferred back to their tablet.

To add a sticky to rating documentation or item questions, use the following steps:

1) Click the Add link, which is located beside the question identification.

2) The Conversation Form Window will open. The case name and the question on which you are commenting display at the top. Click the Add New Message button to add your sticky.

3) The Add/Edit Message Window will open. Type your comment in the text field. When you are finished, click the Save button to close the window.

4) The Conversation Form Window will now display your comment below the question. As the first comment in the conversation, it will be highlighted in yellow. Your name will display beside it, and Edit and Delete buttons will appear on the right-hand side. Clicking the Edit button will reopen the Add/Edit Message Window, where you can edit your comment. If you click Delete, a confirmation window will open asking you to verify your selection. If you click OK at this window, your comment will be permanently deleted from the Conversation Form Window.

5) Click Close to exit the Conversation Form Window.

Note that the presence of stickies is indicated on the Completed Case Report by orange highlighting. Clicking the View/Add link will reopen the Conversation Form Window and allow further editing or deletion of the sticky. Note, however, that once a sticky has been responded to by reviewers, it can no longer be deleted. It can only be resolved.

There is no limit to how many stickies each OSRI record can hold. There is also no limit to how many times stickies can be sent back and forth between site leaders and reviewers. However, if it becomes obvious that a case is overloaded with stickies, or if one particular sticky appears to keep getting passed back and forth without resolution, site leaders are probably well-advised to seek out a face-to-face conference with the review pair, rather than continue an unproductive exchange.

Note also that there may be times when you want to review a case as the review pair sees it, using the Screen View instead of the Completed Case Report. For example, you may want to review the instrument's instructions for a particular question when you do not have a copy of the paper instrument handy. Since the instructions are not included in the Completed Case Report, you would need to open the case in the Screen View. For information on how to do this, see Module 6.8.2.2: Adding Stickies in the Screen View.

Adding Stickies in the Screen View

There may be situations where you need to view a record’s individual item screens as a reviewer sees them. In these cases, the Screen View should be used.

To access Screen View, select Edit Cases from the OSRI Menu. This will open the Edit Cases Screen. Use the Review Team drop-down menu to select the correct reviewer, then the Selected Record drop-down menu to select the case.

The case will open automatically to its Face Sheet. The layout and navigational functions are the same as for reviewers, although the Save Bar no longer features any save buttons. Instead, it displays the Rating Documentation button, the item’s Calculated Rating, the Override button, and the Next Item button.

Use the normal navigational controls to move through an OSRI record in the Screen View. To add a sticky to rating documentation or item questions, click the Add button, which is located in the upper right-hand corner of the question’s Identification Area. The Conversation Form Window will open. Adding comments here works the same way as in the Report View.

When you close this window, the Add button will be replaced by the View button, which indicates that comments have been started. Clicking View will reopen the Conversation Form Window.

Note that while the Screen View does allow site leaders to view a case as reviewers see it, the case is locked for editing. It is possible, however, to use the Unlock a Case function to make emergency edits to case information.

Responding to Stickies

Each sticky added to an OSRI case by a site leader must be read and responded to by the review pair. The response will most likely involve edits to question answers, but must also include a written comment back to the site leader. These comments serve as an indication to the site leader that the sticky was properly addressed.

To open a sticky, click the View button located above the question. The Conversation Form Window will open. The name of the commenting site leader will display beside his or her comment, which will be highlighted in yellow. Read the comment, then click the Close button and make any necessary changes to the question. When you are finished with your edits, reopen the sticky and use the following steps to post a response to the site leader's comment:

1) Click the Add New Message button to open the Add/Edit Message Window.

2) The Add/Edit Message Window will open. Type your comment in the text field. When you are finished, click the Save button to close the window.

3) The Conversation Form Window will now display your response below the Local Site Leader’s comment. Your name will display beside it, and Edit and Delete buttons will appear on the right-hand side. Clicking the Edit button will reopen the Add/Edit Message Window, where you can edit your comment. If you click Delete, a confirmation window will open asking you to verify your selection. If you click OK at this window, your comment will be permanently deleted from the Conversation Form Window.

4) Click Close to exit the Conversation Form Window.

Use the Unresolved Comment Navigator to continue working through each sticky added by the Local Site Leader. When you are finished, return to the Overview Screen and change that case’s QA Status back to Ready for QA Review to indicate that it is once again ready for download. The site leader will transfer the case to his or her tablet and read through your comments and edits.

Resolving Stickies

Once the review pair has addressed the issues raised in the stickies that a site leader added to an OSRI, the site leader must transfer the case back to his or her tablet and determine whether or not the stickies were adequately addressed. To do this, open the case in its Report View by using the QA button in the Action Column of the Overview Screen. Existing stickies will appear with orange highlighting. Click the View button to reopen the Conversation Form Window. A message from the review pair will display below your comment indicating that they addressed the issue raised by your sticky. You must decide if their response is adequate or if the question still requires more input.

If the response is adequate, check the small box labeled Resolved at the bottom of the Conversation Form Window. A notification that the conversation has been resolved will appear and the sticky’s orange highlighting will display with a different hue. Also, the View button will be replaced by a Re-Open/View button.

If additional input is required, add a new comment for the reviewer by clicking the Add New Message button to bring up the Add/Edit Message Window. Type your comment in the text box and click the Save button when you are finished. Click the Close button to close the Conversation Form Window. You must now transfer the case back to the review pair so that they can view the sticky and make the necessary corrections.

After all of the stickies are resolved, the case will be ready to move on to either Second-Level QA or Local Site Finalization.

Reports

The automated application allows both reviewers and Site Leaders to generate a wide variety of reports that compile and organize data collected during the onsite review. These reports are accessible through the Report Menu, which also allows for the saving and printing of any report.

Note that reports may be generated for any record that currently exists on your tablet, whether that record is locked or unlocked. This means that, after the QA process has begun, review pairs can generate reports even after an OSRI record has been transferred to a Site Leader's tablet. Similarly, Site Leaders will be able to generate site- or review-based reports for any case record they have ever transferred to their tablet, whether or not that record has been returned to its original review pair.

Reports are automatically updated to reflect the most current information anytime a case record is transferred again; revised data will be included when a new report is generated.

The Reports Menu

All report functions are accessed through the Reports Menu in the Menu Bar. The Reports Menu contains the following reports: Completed Case, Preliminary Case Summary, Preliminary Rating Summary, Completed Interview Guide, Case Progress Report, Nightly Debriefing Report, Case Review Summary, Case QA Rating Summary, Overridden Item Ratings, Rankings, Trend & Issue Tracker, and Summary of Findings Form. In addition, you can save and print reports using the Save Report as HTML File and Print Report options.

Selecting Return to Document will close the Reports Menu and return you to the last record you were viewing. It will not function if you were previously viewing the Overview Screen. You can also click the Overview button to return to the Overview Screen from any open report.

Most of the report options in the Reports Menu have several sub-selections that open in another menu when you select a report. These sub-selections enable you to choose the specific criteria under which you will run any specific report. Some of the options are only available to Site Leaders, while others are also available to review pairs. The various sub-selections are described below:

Current Case
This option, available to both review pairs and Site Leaders, provides a report for either the currently open case (if the case is already open for editing) or for the record that is currently selected on the Record Summary Grid.

All My Cases
This option allows review pairs to view a report for each case record to which they have been assigned. The reports generate as one continuous document that can be navigated with the scroll bar on the right-hand side of the screen.

All Cases for Selected Reviewer
This option allows Site Leaders to view all the downloaded records from any single review pair. The reports generate as one continuous document that can be navigated with the scroll bar on the right-hand side of the screen.

All Cases for Site
This option allows a Site Leader to view all of his or her site’s records at once, as long as the records have been previously transferred to his or her tablet PC at some point. It does not matter if the record is locked or unlocked. The reports generate as one continuous document that can be navigated with the scroll bar on the right-hand side of the screen.

All Cases for Review
This option allows a Site Leader to view all of the records for an entire review at once, as long as the records have been transferred from the central server to his or her tablet PC. The reports generate as one continuous document that can be navigated with the scroll bar on the right-hand side of the screen. This function is generally only used by State Team or NRT Leaders.

Completed Case Report

The Completed Case Report can be opened from either the Record Summary Grid, by highlighting the case you wish to access, or from within an open case record. The report provides questions, answers, and ratings for the entire instrument. It is normally used by reviewers to perform Preliminary QA on a completed case and is also the default view (Report View) for Site Leaders who are performing First- or Second-Level QA.

The available sub-selections for a Completed Case Report include Current Case, All My Cases, All Cases for Selected Reviewer, All Cases for Site, and All Cases for Review.

Note that the Completed Case Report will indicate the presence in a record of both resolved and unresolved stickies by the use of orange highlighting. Clicking the View or Respond links attached to the sticky will open that sticky’s Conversation Form Window.

Preliminary Case Summary Report

The Preliminary Case Summary can be opened from either the Record Summary Grid, by highlighting the case you wish to access, or from within an open case record. It provides a summary chart of the Item and Outcome Ratings for each item and outcome in the instrument.

The available sub-selections for a Preliminary Case Summary Report include Current Case, All My Cases, All Cases for Selected Reviewer, All Cases for Site, and All Cases for Review.

Preliminary Rating Summary

The Preliminary Rating Summary provides counts of outcome and item ratings for an entire review site (Local Site Leaders choose the For Current Site sub-selection) or for the entire review (State Team Leaders choose the For Entire Review sub-selection). Either option includes percentages for each category. The report covers both foster care and in-home cases and provides individual and combined percentage ratings for each item and outcome for both case types.

Completed Interview Guide

The Completed Interview Guide report provides either the full text of an individual stakeholder interview (choose sub-selection Current Interview Only) or all of the stakeholder interviews at a local site (choose sub-selection All Interviews for Site). For Team Leaders, it can also provide the full text of all the stakeholder interviews in the review (choose sub-selection All Interviews for Review). All of the report options include questions and answers grouped by item, with responses grouped by stakeholder.

Case Progress Report

The Case Progress Report provides a brief summary of the current status of cases under review. The summarized information includes the review team, case name and type, QA status, and percentage completed. The available sub-selections for a Case Progress Report include Current Case, All Cases for Site, and All Cases for Review.

Nightly Debriefing Report

The Nightly Debriefing Report provides a summary presentation, called a "Basis for Rating," of each case's outcome ratings. This statement must be entered manually by the review pair and often serves as the foundation for the review pair's presentation at that night's nightly debriefing. Once this summary information has been entered, the available sub-selections for completed Nightly Debriefing Reports include Current Case, All My Cases, All Cases for Site, and All Cases for Review.

To enter a Basis for Rating, you must open the Nightly Debriefing Report in Data Entry Mode. To do so, first open the appropriate case normally. It does not matter if it is locked or unlocked on your tablet. Once the case is open, select Nightly Debriefing Report>Data Entry for Current Case from the Reports Menu.

Two reports will open on your tablet in a split screen. On the top will be a Completed Case Report. On the bottom will be the Nightly Debriefing Data Entry Form. Move through either display by using the scroll bars located on the side. You can also adjust the display size of each report by dragging the blue separator line up or down.

The Nightly Debriefing Report Data Entry Form features a total of nine text fields. The first two are located in Section II: Case History. They require background information about the family’s past issues and needs and information about the services that were provided in the past. The other seven text fields require that you provide a Basis for Rating for each of the individual outcome ratings. You may type information normally into these text fields. You may, however, find it faster and more efficient to use a specialized “copy and paste” function that is built into this report.

To use this function, locate the text in the Completed Case Report (top of screen) that you wish to copy. Then, ensure that the text box where you want to paste the text is displayed in the Nightly Debriefing Report Data Entry Form at the bottom of the screen. Use the mouse to highlight the text that you wish to copy, then drag it down to the text box. When you release the mouse button, the text will now display in the text box and will appear as part of the Nightly Debriefing Report.

Note that you can continue to add text to any text box by repeating these steps as often as necessary. You can, for example, combine multiple answers from the Completed Case Report to assemble a thorough Basis for Rating on the Nightly Debriefing Report.

It is also possible to access other reports while completing the Nightly Debriefing Report Data Entry Form. To do this, select the new report that you wish to open from the Reports Menu. Be sure to select the Current Case option for the report. The new report will open at the top of the screen.

The Nightly Debriefing Data Entry Form will remain open at the bottom of the screen until you use one of the two buttons located below the blue separator line: Close and Save. The Close button will close the bottom (data entry) portion of the screen, so that the Completed Case Report or other report displayed there shows as a full-screen report. From here, click Overview to return to the Overview Screen.

The Save button saves any typing or pasting that you have done to the Nightly Debriefing Report. When you open the report in non-data entry mode, your answers will appear as part of the report. A pop-up window will appear after you click Save to let you know that the save was successful; click OK to close this window and return to the split screen view.

Note that if you click Close before you have saved your work, any information that you typed or pasted into the Nightly Debriefing Data Entry Form will still automatically save to the system.

Case Review Summary

The Case Review Summary produces a simple chart listing the total number of cases from each of the four review sites. In addition to the totals, it breaks down the numbers by foster care and in-home services cases. Note that it will only count cases that are saved (either locked or unlocked) onto the current tablet.

Case QA Rating Summary Report

The Case QA Rating Summary is a summary chart that provides each item’s Calculated Rating and Main Reason Statement as well as each outcome’s rating and Basis for Rating. The Basis for Rating is taken from the data entered by the reviewer into the Nightly Debriefing Report. The available sub-selections for a Case QA Rating Summary Report include Current Case, All Cases for Site, and All Cases for Review.

Overridden Item Ratings

The Overridden Item Ratings chart provides information on all items that received an Override during the course of a review. The information provided for each instance of an override includes the review site, review team, case name, item, original Calculated Rating, Overridden Rating, and the reason for the override.

Rankings

There are two options listed under the Rankings item on the Reports Menu: Outcomes and Outcomes Sorted by Substantially Achieved.

Outcomes

The Outcomes selection provides the outcome ratings and item rankings for the entire State’s review, ranked in order from highest to lowest. It lists them for the State as a whole first, then breaks out each review site individually.

Each outcome is listed in relative order from highest ranking to lowest. The items are included under each outcome, also ranked from highest to lowest. Outcome ratings are given a numerical value in which 1 equals Substantially Achieved, 2 equals Partially Achieved, and 3 equals Not Achieved. Item ratings are given a numerical value in which 1 equals Strength and 2 equals Area Needing Improvement. Each outcome's and item's ranking is determined by averaging its numerical values across all applicable cases.

Note that only Applicable cases are included for each ranking.
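
Because every rating maps to a number, the ranking computation is a plain average over applicable cases. The worked sketch below applies the outcome scale given above to invented case data; the item scale works the same way.

    # Worked example of the ranking computation (case data are invented).

    OUTCOME_VALUES = {"Substantially Achieved": 1,
                      "Partially Achieved": 2,
                      "Not Achieved": 3}

    def average_rank(ratings, values):
        """Average the numerical values of all applicable ratings;
        lower averages rank higher. Non-applicable cases are skipped."""
        applicable = [values[r] for r in ratings if r in values]
        return sum(applicable) / len(applicable)

    ratings = ["Substantially Achieved", "Partially Achieved",
               "Substantially Achieved", "Not Applicable"]
    print(average_rank(ratings, OUTCOME_VALUES))  # 1.33... (3 applicable cases)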

Outcomes Sorted by Substantially Achieved

This option displays a chart showing, for each of the seven outcomes, the number of cases (total, foster care, and in-home services) that received a Substantially Achieved rating. For each outcome, it lists the total number of Applicable cases, the number rated Substantially Achieved, and the percentage rated Substantially Achieved. It lists numbers for the entire State review first, then breaks down the numbers by review site.
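
The percentage column is a straightforward ratio of the two counts. For example (invented numbers):

    # Sketch of the percentage shown in this chart.
    applicable_cases = 24          # Applicable cases for one outcome
    substantially_achieved = 18    # of those, rated Substantially Achieved
    print(f"{100 * substantially_achieved / applicable_cases:.1f}%")  # 75.0%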

Trend and Issue Tracker Form

The Trend and Issue Tracker is not accessible by reviewers. Site Leaders can use it to make notes on trends and issues in the review. These notes are grouped by performance item and are automatically pulled into the exit conference PowerPoint as presenter notes. Unlike the Summary of Findings Form, though, the Trend and Issue Tracker is not an official form to be submitted to anyone else.

When you select Trend and Issue Tracker from the Reports Menu, the data entry area will open at the bottom of the screen. This area features a drop-down menu from which you can choose any one of the OSRI’s items, a Summary box for typing notes, a Save button, and a Close button. The blue line dividing the data entry area from the top of the screen can be dragged up or down to resize the space in which you can work.

To use the Trend and Issue Tracker, first select from the drop-down menu the item for which you are adding a note. Then, type your note in the Summary box. Click the Save button to save your note to the Trend and Issue Tracker Form. A pop-up window will appear to notify you that your Item Summary was saved. Click OK to close the window. If you select another performance item from the drop-down menu and later return to this first item, your saved note will still appear in the Summary box.

Note that you can also open any report in the top part of the screen. Open reports by selecting them from the Reports Menu. The same process used by reviewers to copy and paste text in Nightly Debriefing Reports will work here, too.

To exit data entry for the Trend and Issue Tracker Form, click the Close button. Note that this will close the data entry with no warning message. Any unsaved information that you have entered will be lost unless you first click the Save button.

Summary of Findings Form

The Summary of Findings Form provides a summary of the findings for the entire review. The information it contains must be manually entered by one of the Site Leaders. It is important that only one Site Leader from any given review site work on the Summary of Findings Form, because only one copy of the form can be uploaded to the central server from each of the review sites. The Site Leader responsible for completing it may begin the Summary of Findings Form as early as Tuesday of the review week. Once this information has been manually entered, the available sub-selections for the Summary of Findings Form include View Local Site Report, View State Level Report, and View Combined State and Local Site Report.

To enter data onto the Summary of Findings Form, select Summary of Findings Form from the Reports Menu. Then select the Data Entry option. A split screen display will open. A site-wide Case QA Rating Summary Report will appear on top. The bottom portion of the screen is the data entry area. The screen is divided by a blue line; you can drag it up or down to increase the viewing area on either side.

Use the drop-down menu to select the performance item you need. You can also use the Previous Item and Next Item buttons to move backward and forward through each performance item. Once you have selected the correct performance item, you must enter your data into the Basis textbox. Any data that you have already typed and saved will appear here; you can edit this information, delete it, or add to it as necessary. You can also add text to the Basis textbox by using the copy-and-paste function that reviewers use in the Nightly Debriefing Report.

When you have finished entering data, click the Save button to save your text to the system. A pop-up window will briefly display to notify you that the Basis was saved.

You can open any other report to assist you in completing the Summary of Findings Form. Open another report by selecting it from the Reports Menu. That report will appear in the top portion of the screen in place of the Case QA Rating Summary Report.

Click the Close button to exit the Summary of Findings Data Entry Form. The bottom portion of the screen will close, leaving only the report that was open on top. Click the Overview button to return to the Overview Screen. Once you have completed data entry for the Summary of Findings Form, it should be uploaded along with your site’s completed cases to the central server.

The completed Summary of Findings Form should be uploaded to the central server after the conclusion of the Local Site Exit Conference. To upload the Summary of Findings Form, click and hold its record icon in the purple area at the bottom of the Central Server Data Transfer Screen. The record icon’s header will turn yellow to indicate that it has been selected. Drag the record icon up to the purple area of the site’s central server folder. The area will turn yellow; release the Summary of Findings Form there. A Status Update window will open to show the upload’s progress.

Saving and Printing Reports

The application allows you to both save reports as HTML files and print reports for offline use.

To save a report, first open the report. Then, select Save Report as HTML File from the Reports Menu. This will open the Save Web Page window. Navigate to the file location where you want the report to be saved (the default is your USB key), then click the Save button. The report will be saved as an HTML file, which can be opened for viewing in any Web browser.

To print a report, select Print Report from the Reports Menu. This opens the standard Print window. There will be a USB printer located on site; ensure that this printer is connected to your tablet, then select it from the list of available printers. Click the Print button to print the report.

To return to a previously opened record, select Return to Document from the Reports Menu. This loads the data entry form for the case or interview record that was opened prior to loading the report. This function will work only if the report was loaded when a case or interview record was open for editing.

Exit Conference PowerPoint

This is a standard PowerPoint file prepared for each review. PowerPoint files exist for both the Thursday local site exit conference and the Friday statewide exit conference. The specific PowerPoint file will be loaded onto the appropriate Site Leader’s tablet.