
Two forum responses about criminal justice research

I need two forum responses in APA format, 350 words each. The topic is research in criminal justice. These are responses to other students' posts on a specific topic, and they require peer-reviewed references. Below are the two posts that need responses:

Post 1

Validity, Reliability, and Generalizability

There are different ways in which a researcher can measure validity, depending on the specific situation and on how the researcher feels validity is best measured. The four common approaches are construct validation, criterion validation, content validation, and face validation (Bachman & Schutt, 2015). Face validity can be summarized as a big-picture, common-sense look at whether the research seems valid on its face. Nonsensical results can be an indicator that something in the study is wrong, or at least that the study requires replication and further study to confirm that it is accurate and that no intentional or unintentional errors were made at any step of the process. Content validity is used to ensure that the measure fully captures the entire meaning of the concept. Criterion validity is used to compare the data on one measure (commonly an indirect one) against the results of data collected on a different, more direct measure; the more direct measure is referred to as the criterion. The final measure of validity is construct validity, which is when a researcher shows that a particular measure is theoretically related to a series of other measures. This can be used to compare several measures against one another when no specific, direct measure is available for the criterion method of validation.

A necessary prerequisite for validity is reliability. Reliability means observing consistent data points from the method of measurement while the phenomenon being measured remains the same; it ensures that the measurement of data remains unaffected (or minimally so) by random error or chance variation. Generalizability, in turn, has two aspects. Sample generalizability refers to the ability to take the results of a study of a sample and draw the same conclusions about a greater population (Bachman & Schutt, 2015). Cross-population generalizability is the ability to take conclusions about one group and apply them to a different group. Generalizability is a key aspect of research because without it there is a limited amount of knowledge that can be obtained, and a study becomes little more than data collection. Likewise, reliability is important because without it the data being collected cannot be trusted as an indicator of any conclusions a researcher may postulate, and validity is important because without it the conclusions one reaches from studying the topic of research are invalid. Therefore, validity, reliability, and generalizability are all key components to consider when developing and implementing a research study.

Secondary Analysis

Secondary analysis is research in which the researcher uses previously collected data to draw conclusions, instead of collecting original, primary data themselves. There are four types of secondary data: surveys, "official statistics, official records, and other historical documents" (Bachman & Schutt, 2015, p. 207). The main advantages of secondary analysis are the reduced cost and time compared with collecting the data oneself and then analyzing it. One of the main disadvantages is that researchers must work within the confines of how the previous data collection was structured, so they cannot, for instance, pose any new questions to respondents, since the data have already been collected. In the words of Murphy and Schlaerth (2010), in secondary analysis the "research is divorced from situational contingencies, cultural dynamics, or any conflicts of interest" (pp. 382-383). Furthermore, researchers cannot replicate the data collection itself to ensure that the process did not introduce errors; they must rely on the integrity of the primary data to conduct their analysis.

An example of a study that utilized secondary analysis is a study by Boydell, Gladstone, and Volpe (2006), which analyzed why young people often delay seeking help after first experiencing psychosis. The authors used verbatim transcripts of original interviews with adolescents to analyze why they delay seeking help, and how actions by adults (such as parents, teachers, and police officers) affected their decisions about obtaining assistance in dealing with their mental illness. To establish the trustworthiness of the secondary analysis, they verified "the consistency of results within individual interviews, and comparing and contrasting analysis across all transcripts" (Boydell, Gladstone, & Volpe, 2006, p. 57). They found that ignoring and hiding symptoms of psychosis acted as a barrier to the adolescents seeking help, and that persuasive influence from adults helped them to ultimately seek treatment. Overall, this was an important study that can be used to help ensure that adolescents struggling with psychosis receive the medical interventions necessary to help them cope.

References

Bachman, R., & Schutt, R. K. (2015). Fundamentals of research in criminology and criminal justice. Thousand Oaks, CA: SAGE Publications, Inc.

Boydell, K. M., Gladstone, B. M., & Volpe, T. (2006). Understanding help seeking delay in the prodrome to first episode psychosis: A secondary analysis of the perspectives of young people. Psychiatric Rehabilitation Journal, 30(1), 54-60. Retrieved from https://search-proquest-com.ezproxy1.apus.edu/docv…

Murphy, J. W., & Schlaerth, C. A. (2010). Where Are Your Data? A Critique of Secondary Data Analysis in Sociological Research. Humanity & Society, 34(4), 379-390. Retrieved from https://search-proquest-com.ezproxy1.apus.edu/docv…

Post 2:

Validity

The questions posed to a sample group are valid only if they measure what they are intended to measure. When conducting research in general, and on social issues in particular, it is important that the data collected are measured as designed in order to ensure validity. There are three aspects of validity: measurement validity, generalizability, and causal validity (also known as internal validity). Conclusions based on invalid measures, invalid generalizations, or invalid causal inferences will themselves be invalid (Bachman & Schutt, 2014). Validity depends on whether the operation actually captures the concept it is intended to measure. The information collected does not itself make a concept invalid, but how the information is used can cause invalidity. For that reason, validity is necessary to ensure proper policy responses.

Reliability

Achieving the same result on repeated occasions is called reliability. If a sample group answers questions on a questionnaire uniformly and repeatedly, then the data collected are reliable. Reliability is important to ensure that the data collected can be generalizable. Relatedly, Bachman and Schutt (2014) discuss authenticity: the goal of authenticity is to fairly reflect the perspectives of the participants in a study setting, and it is stressed by researchers who focus attention on the subjective dimension of the social world (p. 17).

Generalizability

The generalizability of a study is the extent to which it can be used to inform us about persons, places, or events that were not considered (Bachman & Schutt, 2014). There are two aspects of generalizability according to Bachman and Schutt (2014): sample and cross-population (p. 16). Sample generalizability is a central concern of survey research. For example, political pollsters study a sample of likely voters and then generalize their findings to the entire electorate. Generalizability is essential because no one would show interest in political polls if they represented only the small sample that was surveyed (Schutt, 2015).
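The pollster example above has a standard quantitative counterpart: how far a sample proportion can be trusted to generalize to the electorate. Here is a minimal sketch (with hypothetical numbers, and assuming a simple random sample) of the usual 95% margin-of-error approximation.

```python
# Illustrative sketch with hypothetical numbers. For a simple random
# sample, the approximate 95% margin of error for a sample proportion p
# with sample size n is z * sqrt(p * (1 - p) / n), with z = 1.96.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 1000  # e.g., 52% support among 1,000 sampled likely voters
moe = margin_of_error(p, n)
print(f"{p:.0%} \u00b1 {moe:.1%}")  # the interval generalized to the electorate
```

The point for generalizability is that the sample-to-population inference is not free: it carries a sampling error that shrinks only as the sample size grows.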

Secondary Data Analysis

Secondary data analysis offers versatility in that it can apply to studies designed to understand the present or the past, to understand change, to examine phenomena comparatively, or to replicate previous studies (Goodwin, 2012). Bachman and Schutt (2014) identify four major types of secondary data: surveys, official statistics, official records, and other historical documents (p. 208). Researchers use secondary analysis to reanalyze data that were collected and processed by someone else. In the 1960s, analysts became aware of the potential benefits of archiving survey data for analysis by scholars who had nothing to do with the study design and data collection; even when one researcher had conducted a survey and analyzed the data, those same data could be further analyzed by others with different interests (Babbie, 2014). Replicating critical analyses with alternative indicators of key concepts, testing for the stability of relationships across theoretical subsets of data, and examining the findings of comparable studies can strengthen the results of a secondary analysis (Schutt, 2015). Secondary data analysis is also much more cost-effective than self-collection. A drawback is that the data may not contain all the information a researcher would obtain through primary data collection.

JoAnn Miller utilized secondary analysis in "An Arresting Experiment: Domestic Violence Victim Experiences and Perceptions." In her study, she analyzed Dade County, Florida, police reports and interviews with domestic violence victims. The original survey design focused on the suspect and on explaining how police intervention deterred the recurrence of domestic violence. Miller (2003), in turn, concentrated on the victim; her research was designed to complement Pate and Hamilton's work (p. 701). The data suggested that certain types of women, especially poor and minority women, are more likely to be victimized. Additionally, the data reflected that the only statistical difference in the recurrence of violent episodes was between the control group and the arrest-only treatment group. The study also found that personal power correlated with income status: the higher the income bracket, the more empowered a person felt. From a legal standpoint, it was concluded that police intervention served victims better than nonarrest.

References

Bachman, R., & Schutt, R. K. (2014). Fundamentals of research in criminology and criminal justice (3rd ed.). Thousand Oaks, CA: SAGE Publications, Inc.

Schutt, R. K. (2015). Investigating the social world: The process and practice of research (8th ed.). Thousand Oaks, CA: SAGE.

Babbie, E. R. (2014). The basics of social research (6th ed.). Belmont, CA: Wadsworth, Cengage Learning.

Goodwin, J. (2012). SAGE secondary data analysis: The secondary analysis of qualitative data (Vols. 1-4). Los Angeles, CA: SAGE.

Miller, J. (2003). An Arresting Experiment: Domestic Violence Victim Experiences and Perceptions. Journal of Interpersonal Violence, 18(7), 695-716. doi:10.1177/0886260503251130

 