Quality Data Collection

Collecting data and ensuring its quality are critically important in a State’s efforts to establish a robust continuous quality improvement (CQI) system of data compilation, analysis, and dissemination. The collection of quantitative and qualitative data from varied sources is the foundation of a CQI system, and a strong connection between administrative data and other sources of information is key to a plausible vision of change. When solid process and outcome data are used to identify strengths and concerns and to establish strategies for improvement, and when progress and trends are tracked through repeated measurement, the results can provide management with a simple, visually compelling thermometer of the organization’s performance and health at every level and can help the agency see where it wants to go in the future.

All agencies want data that are timely, complete, understandable, and relevant to the task at hand. Data must be accurate and relevant before analysis can yield beneficial results; for this to occur, any issues with caseworkers and data entry must be identified and resolved. Furthermore, an efficient process must be in place for resolving data quality issues.

An agency’s interrelated activities and processes are anchored by the quality of its information systems and their ability to produce accurate, reliable, interpretable data that are consistent in definition and usage across the State and nationally.

Collecting Data and Ensuring Quality

States should be able to input, collect, and extract quality information from a variety of sources, including Federal reporting systems, case reviews, and other administrative, quantitative, and qualitative data sources. States should also be able to ensure that the quality of their data is maintained.

Data collected in relation to continuous quality improvement (CQI) should be related to both practice standards (Did monthly visits with the child occur?) and outcomes (Did the child experience repeat maltreatment?). Agencies are already collecting large amounts of aggregate data, or data compiled from several measurements, much of which feeds into systems such as the Adoption and Foster Care Analysis and Reporting System (AFCARS), the National Child Abuse and Neglect Data System (NCANDS), and the National Youth in Transition Database (NYTD). States may also collect case review data and data that reflect performance in systemic areas. Many agencies collect data specific to various other areas, such as length of time to complete investigations, occurrence of team meetings with families, worker caseloads, and evidence of racial or ethnic disproportionality. Some States also access data that are available from partner agencies, such as the courts, juvenile justice, and mental health providers.

Data from case record reviews, in a well-functioning CQI system, will help determine whether case review instruments and ratings are completed according to instrument instructions and consistently across reviewers. Review data should also support practice and outcome summaries. Additionally, processes to extract accurate quantitative and qualitative data from across the State’s jurisdictions should be clear and consistently implemented. These methods and processes should be documented, with a process in place to review and verify that they are being followed.

It is possible to generate so much data that an agency becomes overwhelmed as it begins the process of analysis. According to Reveal and Helfgott (2012), “There is a simple and universal answer to ‘what data do I need?’ and that is, it depends on what question(s) you are trying to answer.” For example, agencies might ask themselves “How can we increase placement stability of children in care?” or “How can we stabilize children emotionally and decrease placements in residential treatment facilities?” In other words, agencies will need to make strategic decisions about data they need based on an understanding of what they want to achieve. Thus, agencies should tie data back to their goals, key strategies, and system change efforts.

Agencies should also prioritize attention to data by focusing on the most critical data first. They should then consider data that have the broadest value, are of the greatest benefit to the majority of users, or are of value to the most diverse users. As agencies learn from their earlier efforts and become more skilled at analyzing data, they will become better at analyzing a greater volume and variety of data.

Quantitative and Qualitative Data

States will input and extract both quantitative and qualitative data from their continuous quality improvement (CQI) systems in order to gain a more complete understanding of the issues being evaluated and addressed. Quantitative data are numerical measurements of an object or event (e.g., how many, how much, or how often), while qualitative data describe characteristics or attributes, representing what someone observes or otherwise gleans through meaning and experience. Qualitative data and research help agencies understand action and experience as a whole and in context.

Common sources of quantitative data include questionnaires, case record reviews, and extractions from institutional databases, while qualitative information can be obtained through activities such as focus groups, reviews of case files, and case-related interviews. Because qualitative data are descriptive, they may be more challenging to analyze than quantitative data. Each type of data has positive attributes, and combining both can yield a more comprehensive picture of agency functioning, as the two types together enable deeper understanding of various phenomena and provide new knowledge.

The following chart contrasts qualitative and quantitative data elements for the same group, youth age 17 who are about to age out of foster care in one region of a State:

Aging Out Youth - Qualitative Data       Aging Out Youth - Quantitative Data
Employment readiness                     173 youth
Relationship with birth families         83 girls, 90 boys
Support systems with caring adults       62% (107) graduating from high school
Preparedness to live unsupervised        48 youth college-bound

To augment their case review systems or to delve further into specific issues, States may want to administer surveys, conduct interviews, or hold focus groups with staff, external stakeholders, or consumers to obtain more qualitative information. Collecting this additional information will go far toward providing a more complete picture of overall agency strengths, needs, and functioning in terms of outcomes for children and families, and may be particularly helpful in evaluating systemic factors, such as adequacy of services in the community and training of staff and resource parents. For example, if foster parent retention is a challenge for the agency, it may choose to interview foster parents who dropped out of the program in the past year, or conduct a focus group with current foster parents, to gain valuable qualitative data about changes needed in the program to increase retention.

In general, internal stakeholders who would typically be interviewed or included in focus groups include caseworkers (investigation, foster care, and in-home), supervisors, foster home finders, adoption staff, information technology staff, and the local child welfare director. External stakeholders could include organizations and individuals who are representative of entities who participated in the development of the State’s Child and Family Services Plan. Likely participants would be the courts, guardians and attorneys ad litem, directors and staff of community agencies who serve agency consumers, Tribal representatives, law enforcement personnel, and agency attorneys. Foster and adoptive parents and consumers, such as youth served by the agency, would also be included.

To assist in the process, the State’s CQI oversight division might develop a set of core and follow-up questions for the various groups to be used as guidelines in interviewing and facilitating focus group discussions across the State. However, if specific issues are being targeted, then questions may need to be added to reflect local/regional concerns. The interviews/focus group meetings should be standardized as much as possible to help ensure more consistency in the information/data obtained throughout the State. It will be important to remind group participants that their responses should reflect current agency information, from within the past year or two, rather than anecdotes or information (whether positive or negative) from several years in the past.

Caseworkers and Data Entry

Caseworkers are commonly the originators of the bulk of an agency’s data in its Statewide Automated Child Welfare Information System (SACWIS) or other statewide system, and often this is where data quality begins. A study (Carrilio, 2008) on caseworkers’ use of computers and data systems identified four variables related to the accuracy of their data entry:

  1. Skills and experience with using computers (worker background and comfort level with use of computers)
  2. Perceived ease of use of the agency’s automated system (worker perception regarding user-friendliness of the system)
  3. Utility of the data (worker belief about usefulness and helpfulness of data being gathered)
  4. Attitude about the data (worker perception regarding importance of inputting and gathering the data)

Agencies already employ tools for checking the accuracy and completeness of data. States that continue to have significant data errors and inconsistencies should address any worker entry issues through training and coaching. A well-functioning help desk and other supports for direct delivery staff will also assist greatly in minimizing errors and ensuring a collective sense of responsibility for accurate data. In addition, States should examine how they define the data elements to be captured, the overall clarity of instructions, and whether data entry screens and systems are well designed. Well-designed systems will have:

  • Clear screens
  • Well-spaced and uncluttered fields
  • Easy-to-read font sizes
  • Descriptive captions that are easy to understand
  • Information that flows in a logical order within the screen and from screen to screen
  • Ease of entry
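The accuracy and completeness checks described above can be sketched as simple field-level validation rules applied at or after data entry. The following is a minimal illustration only; the field names, required elements, and allowed values are hypothetical, and an agency's actual SACWIS data dictionary would define its own.

```python
from datetime import date

# Hypothetical required fields and code values; an actual system's data
# dictionary would define the real elements and constraints.
REQUIRED_FIELDS = ["case_id", "child_dob", "placement_type", "entry_date"]
VALID_PLACEMENT_TYPES = {"foster_home", "kinship", "group_home", "residential"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-entry problems found in a single case record."""
    problems = []
    # Completeness: every required element must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing required field: {field}")
    # Accuracy: coded values must come from the allowed set.
    placement = record.get("placement_type")
    if placement and placement not in VALID_PLACEMENT_TYPES:
        problems.append(f"unrecognized placement type: {placement}")
    # Plausibility: dates should not be in the future.
    entry = record.get("entry_date")
    if entry and entry > date.today():
        problems.append("entry date is in the future")
    return problems

record = {"case_id": "A-1001", "child_dob": date(2010, 5, 4),
          "placement_type": "fostr_home", "entry_date": date(2024, 1, 15)}
print(validate_record(record))  # flags the misspelled placement type
```

Running such rules at the moment of entry, rather than in a later batch audit, gives the caseworker an immediate chance to correct the error while the case details are still fresh.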

Agency leaders should accept responsibility for the appropriate breadth, quality, and usefulness of an agency’s data, and should continually look for ways to improve data. When data are faulty or otherwise inadequate, management should ensure that effective processes are in place to identify, report, and address data errors, inconsistencies, and omissions at whatever juncture and level they may occur. Corrective mechanisms may involve instituting a vigorous data quality assurance (QA) process, training or re-training staff, re-examining skills of those analyzing the data, and/or creating partnerships with outside entities for training and technical assistance to ensure more effective data collection and analysis.

Resolution of Data Quality Issues

Child welfare agencies must stay current on the demographic data about the children, youth, and families being served at any given time. Additionally, they must be able to ascertain the practices and services being provided by frontline staff, as well as services provided by contractors and community agencies. Finally, they must be able to determine whether outcomes of services and practices are meeting agency expectations. Sound assessments of practice and outcomes depend on correct, consistent, and complete data.

Even though agencies strive for data accuracy, in the best of systems there will occasionally be inconsistencies. For example, if the number of children who are legally free for adoption significantly exceeds the number of children for whom termination of parental rights (TPR) has occurred, there is an obvious error in at least one of the numbers, since TPR must precede a child becoming free for adoption. The State’s process should be clear about who is responsible for entering data, ensuring data accuracy, and correcting errors that are found. Additionally, the process for correcting such errors should be clear, transparent, and effective.
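A cross-count consistency check of this kind can be sketched as a simple comparison of two aggregate extracts. The counts and the function below are purely illustrative; a real check would query the agency's own system and use its own thresholds.

```python
def check_tpr_consistency(free_for_adoption: int, tpr_completed: int) -> list[str]:
    """Flag an inconsistency between two aggregate counts.

    A child cannot be legally free for adoption without a completed
    termination of parental rights (TPR), so the free-for-adoption count
    should not exceed the TPR count.
    """
    issues = []
    if free_for_adoption > tpr_completed:
        issues.append(
            f"{free_for_adoption} children free for adoption but only "
            f"{tpr_completed} TPRs recorded: check for missing TPR entries")
    return issues

# Hypothetical counts pulled from two different extracts of the same system.
print(check_tpr_consistency(412, 350))
```

Checks like this do not say which number is wrong; they simply route the discrepancy to whoever the State's process makes responsible for investigating and correcting it.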

Three of the most common elements that contribute to poor data quality are:

  1. Duplication of information across files and systems
  2. Incomplete and missing data elements
  3. Inconsistent or untimely data entry
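The three problems above can each be detected automatically in a periodic audit of extracted records. The sketch below, with hypothetical field names and a hypothetical 30-day timeliness standard, shows one way to count them:

```python
from collections import Counter
from datetime import date

REQUIRED = ("case_id", "child_dob", "placement_type")
TIMELINESS_DAYS = 30  # hypothetical deadline for entering an event

def audit(records: list[dict]) -> dict:
    """Count occurrences of the three common data quality problems."""
    findings = {"duplicates": 0, "incomplete": 0, "untimely": 0}
    # 1. Duplication: the same case ID appearing more than once.
    ids = Counter(r.get("case_id") for r in records)
    findings["duplicates"] = sum(n - 1 for n in ids.values() if n > 1)
    for r in records:
        # 2. Incomplete: any required element missing or empty.
        if any(not r.get(f) for f in REQUIRED):
            findings["incomplete"] += 1
        # 3. Untimely: entered long after the event occurred.
        event, entered = r.get("event_date"), r.get("entered_date")
        if event and entered and (entered - event).days > TIMELINESS_DAYS:
            findings["untimely"] += 1
    return findings

records = [
    {"case_id": "A-1", "child_dob": date(2012, 3, 1), "placement_type": "kinship",
     "event_date": date(2024, 1, 1), "entered_date": date(2024, 3, 1)},
    {"case_id": "A-1", "child_dob": date(2012, 3, 1), "placement_type": "kinship",
     "event_date": date(2024, 1, 1), "entered_date": date(2024, 3, 1)},  # duplicate
    {"case_id": "A-2", "child_dob": None, "placement_type": "foster_home"},
]
print(audit(records))  # {'duplicates': 1, 'incomplete': 1, 'untimely': 2}
```

Reporting such counts by office or region over time lets management see whether training, coaching, and help-desk supports are actually reducing the error rates.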

To be successful, data quality improvement activities need widespread support and active involvement from all levels of staff. Data quality management must be a collaborative effort that bridges the gaps between the information technology (IT) department and the program divisions.

One approach to managing data might be a collaborative model in which the program side is accountable for ensuring that there are well-defined data quality rules, elements to be captured, measures, and acceptability levels, while IT is responsible for instituting and maintaining the architectural framework to ensure that information is easy to capture, rules are observed, and measures are accurately reported. As Reveal and Helfgott (2012) explained in their article Putting the Pieces Together, “In a fact-based decision-making culture, operational, policy, and regulatory data are all treated as assets of the agency and system, not the purview of a single office.”