Michael R. Leming
Professor of Sociology
Thus far we have centered primarily on examples of survey research. But not all sociological inquiry uses the standard survey approach. The process of deciding on the methodology for testing research hypotheses (whether it be survey, experiment, field research, or historical analysis) should not be dictated by one's "favorite" methodology. Rather, the decision for methodological type is influenced by 1) the nature of the research hypotheses, 2) the body of knowledge concerning the relationship between the variables of interest, 3) one's expertise in a given methodology (okay, favoritism may play some role), and 4) the resources at hand for carrying out the research. Therefore, the research hypotheses and the body of knowledge concerning the topic should be the primary factors in the selection of method. The "kosher" researcher does not first decide what method to use and then try to shape the hypotheses to the methodology.
Let me give an example from some research I have been conducting on older dating couples. A number of years ago, when reading through the literature in the area of marriage and family, I was surprised to discover how little was known about dating at any age beyond young adulthood. It became clear that there was a need for information concerning the differences and similarities between later life dating and younger age dating. However, I must also admit that my curiosity was sparked by my personal acquaintance with a 70-year-old woman who was dating an 82-year-old man.
In conducting a thorough literature review, I found that virtually nothing had been written about dating in middle age and later life. What I did find (and it later proved to be beneficial for my research) was a substantial body of literature concerning mate selection, dating, and love for persons aged 16 to 25. Using this knowledge base it was possible to extrapolate from the activities and attitudes of younger people to formulate research hypotheses which could be tested with data collected on older dating couples.
After modifying these research hypotheses (utilizing my knowledge of the literature in social gerontology) it became necessary to select a research methodology. At that point in time, I was most skilled in collecting quantitative survey data. I had worked on several projects that provided me with experience and skills in closed-ended questionnaire design, random sampling strategies, and quantitative data analysis. Yet, quantitative data collection did not appear to be appropriate for the research hypotheses of this study.
Had I been interested in the number of older persons in the population who were actively dating, then perhaps a large random telephone sample would have revealed this information. However, I suspected that many older respondents might have been reluctant to reveal such information in a telephone interview. Furthermore, had there been enough data previously collected on later life dating such that acceptable scales, indices, and measures were available, then a mailed questionnaire to a random sample of elders might have been appropriate. But given the fact that the topic of intimate dating relationships was potentially a very sensitive subject which demanded strong rapport between interviewer and respondent, coupled with the fact that I really did not know exactly what questions to ask, or even if older persons would define "dating" in a manner similar to younger people, I found it necessary to employ an exploratory qualitative methodology. That is to say, I would be utilizing more open-ended questions, and the data I obtained would be used primarily to help reshape the hypotheses rather than serve as an empirical test of proof. Given the nature of the topic, it also meant that I would be probing more deeply into sensitive issues -- adjustment to widowhood or divorce, levels of sexual activity, and feelings of love and romanticism. For these reasons I decided on a methodology involving face-to-face open-ended interviews.
The point I am trying to convey is that the research questions and the body of knowledge in the field will dictate the research methodology employed. No methodology is perfect, even though the researcher may have selected the one most appropriate. Every methodology has some disadvantages. In my study of later life dating, I found the face-to-face interviews very costly in terms of personal time and computer resources. (I used a computer content analysis program to review all 60 verbatim interviews; such programs are at this time more costly than most quantitative statistical packages.) Another problem I encountered with the face-to-face interviews was the possibility that respondents were supplying socially desirable answers to extremely personal and sensitive questions. Respondents may have attempted to give what they considered to be appropriate answers to questions posed by a relatively young researcher who symbolically represented the academic world. Finally, my qualitative methodology required a non-random sampling procedure known as a snowball sample (see end of chapter for a discussion of sampling techniques) which created the possibility of a biased sample.
Despite these concerns, it still seems logical to argue that the qualitative method I selected to study later life dating was the most appropriate approach and yielded the most usable data given the current body of knowledge.
The above example pertains to the conducting of academic research, yet the same considerations apply when deciding upon a method in the applied setting. Whether one is designing a study to assess the needs of a local community, the demographics of pet ownership, or a college campus' attitudes toward apartheid, there are several factors to be considered in the selection of a research methodology:
1) Current knowledge -- Do standardized survey instruments presently exist which can be utilized? Is the study a replication, or near replication, of previous work? Is there a specific directional research hypothesis? Do we know anything empirically or theoretically about the topic of interest?
2) Costs -- How much will specific methods cost in terms of money, time, computer resources, and emotional involvement on the part of the researcher? Does one have a budget which can realistically meet the unanticipated costs of the research project?
3) Ethics -- How ethical is the approach selected? Does the methodology violate respondents' right to privacy and informed consent? Is confidentiality ensured? Who "owns" the data?
4) The Skills of the Researcher -- To what extent does the researcher possess the expertise to carry out the project? Is consultation or direct assistance necessary in order to complete any of the parts of the project? Is the researcher realistic in assessing his or her own abilities and commitment to the research investigation?
As we have discussed, the selection of the research method is not an
easy task. Every method is open to criticism, and it becomes the
researcher's task to determine which method can produce "better" results.
We have already indicated that several research designs are possible.
Now we will examine the specific research methodologies and discuss the
relative strengths and weaknesses of each. While it is not possible
to provide extensive discussion of the designs, we will endeavor to make
key points concerning each method as it relates to "doing" sociological research.
The Experimental Design
The experimental design is not often used by sociologists. Rather, it is in psychology, where there is greater concern for internal validity, that we are more likely to find the experimental approach. That is not to say that sociologists are unconcerned with issues of internal validity, but sociology tends to emphasize issues of external validity and generalizability. This disciplinary emphasis seems to preclude the use of experimentation by most sociologists.
In the true experiment several conditions must be present. First, the true experiment has a control and an experimental group of research subjects. Typically, it is the control group who receives no treatment, test, stimulus, or whatever the independent variable of interest might be. The experimental group, however, is subjected to the independent variable. Furthermore, in the true experimental design, there is a pretest assessment of the dependent variable, followed by the introduction of the independent variable, followed by a post-test assessment of both groups. It is assumed that any differences which exist between the experimental and control groups are a function of the presence of the independent variable and not due to other uncontrolled factors. The experiment also assumes that the two groups were comparable at the time of the pretest. This can be achieved through the random assignment of subjects to the two groups, or by matching subjects on other variables which one suspects may have some impact on post-test outcomes and then assigning one member of each pair to the control group and the other to the experimental group.
As a concrete example of an experiment, let us consider a study to test the effects of TV violence on preschool age children. In contemporary American society it would be quite difficult to find children who have not been exposed to television. Therefore, an experimental design of some type might prove helpful in determining if television viewing of violent acts increases aggressive play behaviors in children. Let's say you decide to go to a large nursery school in an urban area and recruit children to be part of the project. You make certain that the children obtain signed parental consent forms, which explain the nature and purposes of the project to the parents and give the researcher permission to study the children.
Should you obtain permission from 24 parents, your next step is to randomly assign 12 children to the control group and 12 children to the experimental group. You may also decide to opt for some form of assignment based upon the matching of subjects. Here you would match subjects on one or more characteristics -- generally the characteristics on which you choose to match are selected because they are believed to have some impact upon the outcome variable. If it is felt that gender differences are related to aggressive behaviors, then one would want to ensure that there were equal numbers of boys and girls in the experimental and control groups. Or, if it is believed that the age of a child is related to aggressive activities, then matching respondents relative to age is also preferable. It is important to note that the more characteristics on which one chooses to match paired subjects, the more initial cases will end up being discarded due to failure to find counterparts for opposite group assignment.
In general, the random assignment of subjects to the experimental and control groups occurs whenever the two groups are fairly large or when other factors are not suspected to interact with the dependent variable. Matching is used when the groups are small and when comparability between experimental and control groups should be adjusted due to the importance of extraneous factors.
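The two assignment strategies described above can be sketched in a few lines of Python. This is only an illustrative sketch: the child identifiers, the group size of 12, and the pre-matched pairs are hypothetical, following the nursery-school example.

```python
import random

def random_assignment(subjects, seed=None):
    """Shuffle the subject pool and split it into two equal groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (control, experimental)

def matched_assignment(pairs, seed=None):
    """Given subjects pre-matched into pairs (e.g. by gender and age),
    randomly send one member of each pair to each group."""
    rng = random.Random(seed)
    control, experimental = [], []
    for a, b in pairs:
        if rng.random() < 0.5:  # coin flip decides which member goes where
            a, b = b, a
        control.append(a)
        experimental.append(b)
    return control, experimental

children = [f"child_{i}" for i in range(24)]  # hypothetical subject IDs
control, experimental = random_assignment(children, seed=1)
```

Note that matching trades sample size for comparability: every pair that cannot be completed is lost, which is exactly the discard problem mentioned above.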
Returning to the example of television violence as it relates to aggressive play behavior, the next step is to conduct a pretest of the experiment. During this pretest a trained researcher should observe each group of children involved in active play in the laboratory setting. The observer makes coded responses indicating the overall level of aggressive behaviors as demonstrated by each group of children. The control group of 12 children is then required to watch 3 hours of prerecorded Sesame Street episodes on television. (It will be assumed that the incidence of violent behaviors on these Sesame Street episodes is negligible.) The experimental group watches 3 hours of Superfriends cartoons which contain high levels of violence.
Following the TV viewing, the two groups are separately observed again. The post-test scores may be radically different in terms of levels of aggressive behaviors, unlike the pretest scores, which were comparable (due to randomization or the matching of groups). If such post-test divergence exists, then one can feel fairly confident that the observed differences are a result of the television viewed, and not caused by other uncontrolled variables.
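If we assume (hypothetically) that the observer's coded responses can be summarized as a numeric aggression score per observation session, the pretest/post-test comparison amounts to a simple difference of group means -- comparable groups before the stimulus, divergent groups after it. The scores below are invented for illustration.

```python
def mean(scores):
    """Arithmetic mean of a list of coded aggression scores."""
    return sum(scores) / len(scores)

# Hypothetical aggression scores coded by the observer (higher = more aggressive)
pretest = {"control": [2, 3, 2, 3], "experimental": [3, 2, 3, 2]}
posttest = {"control": [2, 3, 3, 2], "experimental": [6, 7, 5, 8]}

# Groups are comparable before the stimulus, divergent after it
pretest_gap = mean(pretest["experimental"]) - mean(pretest["control"])
posttest_gap = mean(posttest["experimental"]) - mean(posttest["control"])
```

A real analysis would also ask whether the post-test gap is larger than chance alone could produce, but the logic of the comparison is just this difference of means.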
This example illustrates the classical experimental design. However, there are many variations on this design (see Campbell and Stanley, 1966), some of which have time sequenced exposure to the independent variables, some with groups which have not been pretested, others with no control groups, and still others with three control groups. We will not elaborate on these variations but instead summarize some of the strengths and weaknesses of the experimental method. The strengths of experimental research design include:
1) Control over the environment and other external variables
2) One can infer causality through the manipulation of the independent variable.
3) Prediction is enhanced.
4) There is greater confidence that the study has internal validity, due to the systematic subject selection and the equivalence of the groups being compared.
5) Replication is very possible.
The negative aspects of experimental design include:
1) Since subjects of experimental designs are not usually representative of general populations, any generalizations are problematic. Social scientists typically say that experimental designs have low external validity or that they have poor generalizability.
2) The artificial settings of experiments may alter subject behaviors or responses.
3) Experimental designs can be costly if special equipment or facilities are needed.
In addition to the several unique weaknesses of experimental designs discussed above, Campbell and Stanley (1966) have listed five intrinsic factors faced by all research designs which may pose a threat to the internal validity of the study.
1) History: confounding events other than the independent variable which occur outside the study but within the subjects' social environment.
2) Maturation: a change in the subject's biological and/or psychological growth due to the passage of time during which the study takes place.
3) Experimental Mortality: attrition or drop out of subjects during the course of the study.
4) Instrumentation: unreliable measurement in pretest and post-test measures.
5) Testing effects: changes caused by the interaction between subjects and measurements -- sometimes referred to as Hawthorne effects.
While the experimental method is usually superior to other research
designs in terms of internal validity and the ability to infer causality,
the sociologist's primary objection to this design is the artificial context
in which the study takes place, the small number of subjects studied, and
lack of representativeness of the subjects selected. In most experiments
relatively few persons are observed, and only then in laboratory or contrived
settings, thus making the generalizability of findings very difficult.
Furthermore, the voluntary nature of subject participation in most experiments
may cause one to suspect that the subjects may not accurately represent
the population. We will now turn our attention to the survey research design
which is usually thought to be superior to other research designs in terms
of these issues related to external validity.
The Survey Design
Sociologists claim survey research as their own. In other words, most sociologists are well versed in survey design, interview and questionnaire formats as they relate to data collection, sampling frames, and the application of descriptive and inferential statistics as the means for evaluating research hypotheses.
Survey research employs group-administered pencil and paper questionnaires, face-to-face and telephone interviews, mailed self-administered questionnaires, and/or other techniques of data collection which produce quantitative assessments. This is not to imply that surveys cannot be qualitative, as demonstrated by my study of later life dating.
The survey method is often referred to as cross-sectional research. The key factor in defining the survey method is that some phenomenon (the major independent variable) is observed, but the researcher does not have control over its variation. Data is usually collected only once, through the administration of a questionnaire or interview schedule to a group of respondents who have been randomly selected. The key difference between the experiment and the survey is that the survey does not require a pretest measure and the independent variable is typically a naturally occurring phenomenon rather than being manipulated by the researcher. Furthermore, the survey does not have a control group for purposes of comparison.
Let's say you were interested in knowing about the attitudes toward female homosexuality on college campuses. You and your fellow researchers might hypothesize that the small private college will have more negative views on female homosexuality than does the larger state college. You suspect that this is due to the higher socioeconomic status of the students who attend private colleges and because many private colleges are church-related. After considering the experimental design, which you reject because it would be impossible to assign students to private and public institutions, you decide to conduct a survey. Random samples of 400 students from two colleges (one public and one private) are drawn with the use of enrollment lists. Questionnaires are then mailed to the 800 students in order to assess their attitudes concerning homosexuality.
An important point to note in the survey design is that the data is collected after the fact or ex post facto. In other words, the attitudes toward female homosexuality were established long before the respondents made their college selection and before the survey was mailed. The survey does not usually attempt to measure behaviors or attitudes prior to the introduction of an independent variable. Instead, the independent variable (in this case, college affiliation) and other naturally occurring variables (age, socioeconomic status, religiosity, gender, and year in school) all may have an impact upon the dependent variable (attitudes toward female homosexuality). The task of the sociologist is to assess which variables are more highly associated with the dependent variable, and to explain why these variables are correlated. This is quite different from the experimental design which (due to a high degree of control over all extraneous variables) attempts to determine the causal effects of one independent variable upon one dependent variable.
The truly objective sociologist will, however, venture to admit that the survey method is not without its shortcomings. The weaknesses most often associated with survey design include the following:
1) The survey design often lacks internal validity because there is limited control over external variables.
2) Due to this lack of control it is often difficult to establish causal relationships.
3) Some forms of interviewing can be very time consuming and costly.
4) Complex behaviors or attitudes are difficult to assess with survey techniques.
The strengths of the survey design include the following:
1) A vast amount of data collected with a questionnaire can be analyzed in a short period of time and with a minimum of expense.
2) Unlike the experimental design, where the research context tends to be artificial, survey research takes place in the "real world."
3) Generalizability is very good with survey research, especially
when samples are representative of the population.
Participant Observation or Field Study Design
Although many other disciplines in the social sciences use direct observation as a means of gathering information about the social world, the field of anthropology claims participant observation as its primary method of inquiry. Participant observation, field research, or what Max Weber called "sympathetic understanding" involves the direct observation of persons in natural settings. Participant observation is a kind of "voyeur" or "window on the world" approach. The key task in field research is to observe "normal" behaviors in "everyday life" by persons performing the social roles which are part of their social environment.
Raymond Gold (cited by Babbie, 1983:247-248) has discussed the following four unique roles that field researchers play in participant observation research: complete participant, participant-as-observer, observer-as-participant, and complete observer. The complete participant is one who participates in a social setting and is viewed by other participants as being only a participant, not a researcher. The participant-as-observer makes his or her researcher role known to other participants but participates in the social setting much as the other participants do, while the observer-as-participant does not participate in the social setting but informs others of the intention of observing behavior in the setting. Finally, the complete observer is an unobtrusive observer -- this researcher neither participates in the social setting nor informs others of the observer role.
There are advantages and disadvantages with each of these roles in field research. When subjects are aware that they are being observed it is possible that participants may alter their behavior and may act according to their expectations of the researcher's views of them. They may also selectively inhibit behaviors which are deviant, non-normative, or merely embarrassing when performed in the presence of a researcher.
On the other hand, when subjects are unaware that their actions are being observed by a researcher (disguised as participant), there may be ethical violations related to the subject's right to privacy and to the failure of the researcher in securing the respondent's informed consent.
When researchers are not involved as participants they are not as likely to understand the subjective intentions of participants in the social interaction. On the other hand, participant researchers have a tendency to "go native." In adopting the perspective of the participant, they may also influence the social situation which they are attempting to observe and describe.
Clearly, it is not easy to decide which role one will adopt in conducting field research. This decision should be based on the nature of the topic at hand, ethical considerations, and accessibility to a particular group or setting.
Let's say you were interested in studying the relationship between stress and friendship formation. You hypothesized that as the intensity of experienced stress increases, the tendency to affiliate with strangers increases (if friends and family members are unavailable). You decide that since "affiliation" and "friendship formation" are difficult to quantify utilizing survey techniques, you will directly observe persons in stressful situations and try to determine if they begin initiating interaction with strangers as the intensity of their stress increases.
You and a colleague decide that a hospital waiting room on a surgical floor would be a very good natural setting to observe such behaviors. Obviously, those waiting would be under great stress as they await outcomes of loved ones' surgeries and they would be surrounded by strangers who were in similar situations.
Let's say your mother is a doctor at this hospital and you have received permission to observe this waiting room. (Gaining entry is an important consideration in the conducting of field research. For more complete examples of the problem of gaining entry see Whyte, 1943; Humphreys, 1970; Hochschild, 1973.) After much debate you decide to enter the setting as a complete participant, and come prepared to play the role of a waiting loved one. Of course some problems stem from this decision -- first, is it ethical to deceive those real participants, and second, how is one to take field notes in an unobtrusive fashion?
You never quite come to closure on the ethical question but decide that as long as the true identity of the participants is not revealed, you have protected your subjects' right to privacy. Your solution to the problem of taking field notes is to excuse yourself from the room every hour under the guise of smoking a cigarette, during which time you step into the restroom and furiously write notes from recall of the past hour's events. In your absence your researcher colleague continues observing. At the conclusion of the data collection phase of the research, you and your colleague exchange research notes as a check for inter-rater reliability.
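The inter-rater check at the end of this example can be made concrete with a simple percent-agreement calculation: each observer independently codes the same observed interactions, and we ask how often the codes match. The coding categories and the two observers' codes below are hypothetical.

```python
def percent_agreement(codes_a, codes_b):
    """Share of observation units on which two raters assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("raters must code the same units")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes for ten waiting-room interactions
rater1 = ["affiliative", "neutral", "affiliative", "avoidant", "neutral",
          "affiliative", "neutral", "avoidant", "affiliative", "neutral"]
rater2 = ["affiliative", "neutral", "neutral", "avoidant", "neutral",
          "affiliative", "neutral", "avoidant", "affiliative", "affiliative"]

agreement = percent_agreement(rater1, rater2)  # 8 of 10 codes match
```

Percent agreement is the crudest reliability index (it ignores agreement expected by chance), but it captures the spirit of exchanging notes as a reliability check.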
As this example points out, there are several advantages and disadvantages in field research. First the disadvantages:
1) The study is time consuming for the researcher. Some field observers spend years in collecting data.
2) Due to the lack of control over external variables, participant
observation is very weak with regard to the demands of internal validity.
3) Typically, there is only one researcher collecting data and therefore the method is often criticized for being too subjective.
4) Given the present emphasis in the discipline of sociology on quantitative methods and computer analysis, field research is not as widely publishable.
Field research also has the following advantages:
1) The study takes place in the real world and therefore is likely to be strong with regard to the demands of external validity.
2) The study can be less expensive because there is no need for special equipment or multiple researchers to collect data.
3) If the researcher is a complete participant, subjects are less likely to give socially desirable responses.
We have claimed that one of the disadvantages of field research is its
subjective approach and qualitative nature. However, with the advent of
computer content analysis, sociological researchers now have the ability
to content analyze field data. Coupled with an ever-increasing dissatisfaction
with the inability to measure many forms of human behavior and attitudes
quantitatively, field research is experiencing increased legitimacy within
the field of sociology. Consequently, we would anticipate that sociologists
will more frequently use qualitative field studies in the coming decades.
The Historical Method
Most sociologists do not acknowledge historical research as a type of methodological approach. It is our contention that, because of the ever increasing number of social scientists trained in multidisciplinary and interdisciplinary programs, and because of the high costs associated with the collection of original data, the historical method of data gathering is gaining acceptance in the field of sociology.
In reality, much of historical research consists of content analysis and statistical evaluation of data originally collected for some other purpose. (Social scientists typically refer to these two methods of historical research by the generic term "secondary analysis.") Secondary analysis is very important because it gives the researcher the opportunity to approximate longitudinal research designs at a fraction of the cost -- while saving a tremendous amount of time. Through the secondary analysis of data sources such as newspapers, diaries, census data, vital statistics, church records, and opinion polls, sociologists can conduct trend studies and assess attitudinal and behavioral changes over time.
With a systematic interpretation of existing documents, one can examine changes in the variable(s) of interest. Let's say, for example, that a sociologist is interested in studying the bereavement process for children who have lost a parent. The researcher hypothesizes that with time the intensity of bereavement for the loss of a parent will subside and approximate a "J" curve. The researcher feels that to interview children employing a longitudinal design would be too time consuming and might influence the children's responses. Therefore, the researcher decides that one very plausible way to collect the information concerning childhood bereavement is to advertise for personal diaries or excerpts from such diaries which have been written by children (ages 8 through 15) who have experienced the death of a parent. By placing ads in newspapers with wide circulation, the researcher may obtain a substantial number of cases, many of which approximate a longitudinal design -- since some children may have kept diaries for consecutive years. These diaries are then systematically analyzed with respect to content in order to discover general trends in childhood bereavement behavior.
This content analysis has, until recently, been a matter of the researcher reading the material and establishing categories for observation or coming to the information with preconceived categories and determining whether or not the material supports the observation categories. Presently content analysis can be accomplished with the use of modified computerized text editing programs (see MacTavish, 1985). With the assistance of a computer, it is possible to type in verbatim texts (such as newspaper articles, entries from diaries, speeches, etc.) and have typologies or conceptual categories emerge from the text. This approach, it is argued, alleviates much of the researcher's subjective biases in interpreting secondary sources.
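As a rough illustration of what such a coding program does, consider a keyword-based coder. The category dictionary and diary excerpt below are invented for illustration; real content analysis systems such as the one cited are far more sophisticated about context and emergent categories.

```python
import re
from collections import Counter

# Hypothetical coding dictionary mapping categories to indicator words
CATEGORIES = {
    "grief": {"miss", "cried", "sad", "lonely"},
    "memory": {"remember", "birthday", "used"},
}

def code_text(text, categories=CATEGORIES):
    """Tally how often each category's indicator words appear in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, indicators in categories.items():
            if word in indicators:
                counts[category] += 1
    return counts

# Invented diary entry, echoing the example discussed below
entry = "Today would have been Daddy's birthday. I cried because I miss him."
tallies = code_text(entry)
```

Even this toy version exposes the validity problem: the coder counts "birthday" as a memory indicator whether or not the child intended any expression of bereavement.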
Returning to our example of content analysis of children's diaries, there are obviously some problems with the historical method. One of the biggest drawbacks in the use of the diary is that children do not keep diaries for the purpose of documenting the bereavement experience. Consequently, much of the information contained within a diary is useless to the researcher who has a specific hypothesis to test.
A second major obstacle in this study is the likelihood of researcher bias in interpreting the material. If one child writes "Today would have been Daddy's birthday," is this an expression of bereavement or a statement of fact? The researcher must decide without any opportunity to probe the subject's intention. This creates problems of reliability and validity of indicators for bereavement. Another problem with content analysis is the time consuming nature of analyzing historical materials. If one were to receive 100 diaries from children (and one would probably want to have many more than this), it would take a great many hours to read and interpret the data (not to mention time necessary to enter the text into a computer).
To summarize, the following are the disadvantages of the historical method:
1. Researchers are likely to be biased in interpreting historical sources.
2. Interpreting sources is very time consuming.
3. Computerized content analysis is costly -- programs of this type take large blocks of computer core time and make analysis much more expensive than the standard statistical procedures used in evaluating survey data.
4. The sources of historical materials may well be problematic -- for example, women are more likely than men to keep diaries; not all records are kept in consistent patterns; and original authors bring their own perspectives and biases to the interpretation of events.
5. Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
Some of the advantages of historical analysis include the following:
1. The historical method is unobtrusive -- the act of research does not affect the results of the study.
2. The historical method is well suited for trend analysis.
3. Compared to longitudinal designs, content analysis is usually less expensive.
4. There is no possibility of researcher-subject interaction.
As we have noted earlier, the use of historical methods in sociology will probably increase in frequency. This increase will be driven by the rising costs associated with other methods of collecting data and the increased availability of less expensive computer-based content analysis programs. One should also remember that content analysis can be employed with other research designs. These analytical techniques can be used as a tool for the interpretation of open-ended responses to questionnaires, or in understanding qualitative data obtained in face-to-face interviews.
Another area which may also gain in importance as a methodology in the social sciences is that of the simulation. Simulations are methods of data collection in which a model is constructed which represents the phenomenon under study. One such model is SIMSOC (Gamson, 1978), which is a model of society. Gamson's goals in SIMSOC are to provide students with the opportunity to understand social control and conflict on the societal level, and to create a utopia. The field of family sociology also makes use of simulations to study such phenomena as family decision-making, family power, and family coping strategies (Tallman, Wilson, and Straus, 19??).
The computer can also be used to generate data for a simulation model. Here the researcher must be very explicit theoretically, developing algorithms that make predictions. The computer is utilized so that very complex theories of human interaction can generate specific predictions regarding social outcomes.
What distinguishes simulations from the other methodologies is that
simulations are concerned with theory elaboration and hypothesis generation.
The experiment, the survey, field research, and historical research all
seek to test hypotheses which have been generated from a scientific body
of knowledge. The simulation, on the other hand, generates hypotheses or
serves as a mechanism by which theory can be elaborated.
Sociology is typically interested in aggregate or multiperson behavior. Given this emphasis on the collective nature of behavior, sociologists tend to study large numbers of individuals in formulating propositions concerning groups, organizations, institutions, and societies. Depending on the nature of the hypotheses, it is usually not possible to observe the behavior of the entire collectivity. Consequently, the researcher will study a representative group of individuals drawn from the larger group of concern, and any research conclusions reached will be generalized to all of the members of the larger collectivity. In other words, if one is interested in knowing about the attitudes of older Americans concerning the proposed changes in Social Security benefits, it is virtually impossible to ask every older person in the United States for his or her views on this subject. Since sociologists do not commonly have access to an entire population, they often rely on samples which are subsets of the total population. The goal is to obtain a sample which is a "good" or "unbiased" representation of the total population. Social scientists have developed statistical means for determining sampling margins of error which they call confidence intervals.
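The confidence-interval idea can be illustrated with a small calculation. The poll figures below are hypothetical, and the formula is the standard large-sample approximation for a proportion drawn by simple random sampling:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion,
    assuming a large simple random sample (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 400 older adults surveyed, 60% favor the proposed change.
moe = margin_of_error(0.60, 400)
print(f"60% plus or minus {moe:.1%}")  # about 4.8 percentage points
```

Note that the margin of error shrinks only with the square root of the sample size: quadrupling the sample merely halves the interval.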
There are two basic types of sampling techniques -- probability sampling and nonprobability sampling. In a probability sample each unit or element in the general population has an equal or known chance of being included in the sample. By contrast, it is not possible to determine the likelihood that an element of the population might be included in the sample if one employs a nonprobability sampling technique. Therefore, most social scientists prefer probability samples, and point out the shortcomings of nonprobability approaches to sampling. However, many research situations make it impossible to employ probability sampling techniques, making nonprobability sampling the method of choice -- even if not the method of preference.
As we discuss the various types of probability and nonprobability samples,
the differences between these two basic types of sampling will become clear.
In general we can say that the major advantage of probability sampling
is that it insures greater generalizability of findings. Nonprobability
sampling has the advantages of convenience and decreased costs, and it
may be the only method possible.
We will begin with the more complex type of sample, the probability sample, since it is the method of preference and because it is the type of sampling upon which most inferential decisions are based in data analysis.
1. Simple Random Sample. By definition, a simple random sample refers to those cases that are selected so that each element in the population has an equal chance of being included in the sample. Typically all elements in the population are listed and assigned a unique number. Then a table of random numbers is used to generate truly random digits, and selection is made on this basis. Sometimes the "Monte Carlo" method is substituted for the use of the random number table. With this method all of the elements are "put in a hat" and drawn at random until the desired number of elements has been selected.
Perhaps few research terms are so widely misused as the term "simple random sample". Researchers often claim that they have employed a "simple random sample"; however, samples of this type are rarely used in the strictest sense. The first reason is that it is relatively rare that all of the elements in the total population can be identified -- it may be impossible to identify all of the residents of a nation, state, or even a community. (Ask yourself whether there exists an accurate list of all students at your university.) The second reason that simple random samples are not often employed is that when non-response occurs (such as failure to complete a questionnaire, subjects dropping out, etc.), the resulting sample is most likely not a random sample. Nonresponse usually produces a biased or non-random sample in the final analysis -- even though the researcher had the best intentions of using a probability approach.
2. Stratified Random Sample. In this sampling technique the population is divided into two or more subgroups or strata. These strata represent those characteristics on which the researcher wishes to insure adequate representation. If one were concerned that male and female perspectives were adequately represented, the population listing (or sampling frame) would be divided into two subgroups, and a simple random sample would be taken from each subgroup using a table of random numbers or the "Monte Carlo" method. Sampling frames can be stratified by one or several social characteristics -- such as gender, educational attainment, religious affiliation, or age.
Within each stratum, elements are selected by the simple random method. Researchers can choose to have an equal number or a proportionate number of individuals selected from each subgroup. For example, if 60% of the students at a given university were female and the researcher wished to draw a sample of 100 students employing a proportionate stratified sample, he or she would randomly select 60 students from the female list and 40 students from the male list. If one wished to use a disproportionate stratified sample, 50 males and 50 females would be randomly selected from the stratified sampling frame.
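The 60/40 proportionate example above can be sketched as follows; the student ID lists are hypothetical:

```python
import random

# Hypothetical sampling frame stratified by gender: 600 women, 400 men.
strata = {
    "female": [f"F{i}" for i in range(600)],
    "male": [f"M{i}" for i in range(400)],
}

def proportionate_stratified(strata: dict, total: int) -> list:
    """Simple random sample within each stratum, with each stratum's
    share of the sample matching its share of the population."""
    pop = sum(len(frame) for frame in strata.values())
    sample = []
    for frame in strata.values():
        k = round(total * len(frame) / pop)  # proportionate allocation
        sample.extend(random.sample(frame, k))
    return sample

random.seed(0)
s = proportionate_stratified(strata, total=100)
# yields 60 women and 40 men, mirroring the 60/40 split in the population
```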
3. Cluster Sampling. As mentioned earlier, there can be enormous problems and costs associated with probability sampling when it is mandatory that particular individuals be interviewed. Both the simple random and stratified random samples would be quite costly to obtain if the population were very large or the distribution of the population was widely scattered.
Researchers have developed the cluster sample as a means of alleviating some of the costs of time and money. In this type of probability sample, the total population is identified as having clusters of elements. These clusters are then randomly selected with either simple random or stratified random techniques.
For example, if you were asked to sample households at random in the City of New York, you could go to all the work of listing all possible households and randomly selecting each on a case by case basis. You would then drive to each designated household and conduct the interview. But because these households are likely to be widely scattered, and because it would be a tedious and boring task to use census maps to identify, list, and randomly select households, a cluster sample would be a more efficient means of collecting representative information.
In the cluster sample, one would create a sampling frame by identifying all neighborhoods or residential areas and then take a random sample of these clusters. These clusters might be selected entirely at random -- hence a type of simple random selection; or they might be identified according to some characteristic deemed to be important, in which case a stratified random technique would be employed.
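A two-stage version of this procedure might look like the following sketch; the neighborhood and household identifiers are hypothetical:

```python
import random

# Hypothetical frame of clusters: each neighborhood maps to its households.
neighborhoods = {
    f"tract_{i}": [f"household_{i}_{j}" for j in range(50)]
    for i in range(200)
}

random.seed(1)
# Stage 1: a simple random sample of clusters, not of individual households.
chosen_tracts = random.sample(list(neighborhoods), k=10)

# Stage 2: interview every household within each selected cluster
# (one could instead subsample within clusters for a larger design).
households = [hh for tract in chosen_tracts for hh in neighborhoods[tract]]
```

Only ten neighborhoods need to be visited, rather than households scattered across the entire city.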
As we mentioned earlier, probability samples (such as the simple random, stratified random, and cluster samples) are preferable to nonprobability approaches because they are more generalizable to larger populations and one can estimate sampling errors. However, probability samples have the following drawbacks:
1. Probability samples are much more expensive.
2. Non-response is a special problem.
3. If one cannot list all of the elements in the population, probability sampling cannot be employed.
4. Probability samples are much more time consuming.
While usually much less costly to construct and implement, the nonprobability sample is subject to many criticisms related to external validity. In this section we will again present several types of nonprobability samples with a brief description of how those samples might prove useful in "doing sociology".
1. Accidental Samples. As the name implies, the accidental sample consists of units which are obtained because cases are readily available. In constructing the accidental sample (which is also referred to as an availability sample), the researcher determines the desired size of the sample and then simply collects data on that number of readily available individuals.
Many studies are constructed through the use of accidental sampling. Data obtained from 350 students in an introductory psychology class is a form of an accidental sample. Television and radio programs also employ accidental samples whenever they ask individuals to call a particular phone number and express their opinions. Keep in mind that the size of the sample has nothing to do with whether or not it is a probability or nonprobability sample!
Obviously the major flaw of accidental sampling is that those elements selected may not truly represent a larger population. Using the above examples, it could be that students enrolled in introductory psychology classes and people who are willing to spend 50 cents to express their opinions are not "typical" of most college students or of members of a general audience.
2. Quota Samples. The quota sample is an attempt to approximate the stratified random sampling technique but in a non-random manner. The researcher first identifies those categories which he or she feels are important to insure the representativeness of the population, then establishes a sample size for each category, and finally selects individuals on an availability basis. For example, if one wished to interview equal numbers of women and men concerning their opinions toward municipal laws governing wages for jobs with comparable worth, employing a quota sample one would interview willing and available individuals until the desired number of individuals in each subgroup had been interviewed.
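The quota procedure can be sketched as a simple filter over whoever happens to be available; the respondent stream below is hypothetical:

```python
from collections import Counter

def quota_sample(available, quotas):
    """Fill each category's quota in arrival order -- nonrandom by design,
    since inclusion depends entirely on who happens to be available."""
    counts = Counter()
    sample = []
    for person, category in available:
        if counts[category] < quotas.get(category, 0):
            sample.append(person)
            counts[category] += 1
        if all(counts[c] >= q for c, q in quotas.items()):
            break  # every quota is filled
    return sample

# Hypothetical stream of willing passers-by, tagged by gender.
stream = [("Ann", "woman"), ("Bob", "man"), ("Cam", "man"),
          ("Dee", "woman"), ("Eve", "woman"), ("Fay", "woman")]
picked = quota_sample(stream, {"woman": 2, "man": 2})
```

The subgroup sizes match a stratified design, but the selection within each subgroup is anything but random.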
3. Purposive Samples. Purposive samples are sometimes called judgment samples, and are employed by the researcher in order to approximate the cluster sample using a nonprobability sample. In this sampling method the researcher selects a "typical group" of individuals who might represent the larger population and then collects data on this group. For example, if a researcher wished to survey the attitudes of freshman college students at a particular university, he or she might survey the students in one or more freshman English classes -- the assumption is that since all students must take freshman English, the students in any class are representative of the entire freshman class.
The judgment sample can also use the individual (rather than a group) as the sampling unit. Using the individual as the sampling unit, the researcher subjectively defines a "typical" case and then tries to select those individuals which best reflect this definition. Here again there is no guarantee that the researcher has identified the most representative cases, nor that his or her definition of "typical" is accurate. The purposive sample is clearly a nonprobability approach, and the possibility exists that the sample is biased because the selection process is not random.
4. Snowball Sampling. Our final form of nonprobability sampling is snowball
sampling. In this approach the researcher selects available respondents
to be included in the sample. After the subject has been surveyed,
the researcher asks for referrals to other individuals who would
represent the population of concern. For example, if you were studying
wealthy persons in Chicago, chances are that you do not have a total list
of all millionaire Chicago residents, but you might know one or more wealthy
persons. You might begin with the wealthy persons you do know, interview
them, and ask if they could each refer you to more Chicago millionaires.
Since "birds of a feather flock together," they could probably supply you
with such names. Through this snowball referral method, you could
eventually obtain a sample of the desired size. It should be noted
that it is very unlikely that this would be an unbiased and representative sample of the larger population.
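The referral chain can be sketched as a breadth-first walk over acquaintances; the referral table below is hypothetical:

```python
def snowball_sample(seeds, referrals, target):
    """Interview the respondents at hand, then follow their referrals
    until the desired sample size is reached (or referrals run out)."""
    sample, queue, seen = [], list(seeds), set(seeds)
    while queue and len(sample) < target:
        person = queue.pop(0)
        sample.append(person)          # "interview" this respondent
        for ref in referrals.get(person, []):
            if ref not in seen:        # avoid re-contacting anyone
                seen.add(ref)
                queue.append(ref)
    return sample

# Hypothetical referral table among wealthy acquaintances.
referrals = {"Alice": ["Bob", "Carol"], "Bob": ["Dana"]}
sample = snowball_sample(["Alice"], referrals, target=3)
```

Because every case enters through someone already in the sample, the result inherits whatever biases the initial seeds and their social circles carry.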
In reality, researchers often employ more than one sampling strategy in a given project. In my study of later life dating, I began with an accidental sample, relying on membership lists of singles clubs for older persons. As I proceeded in my study, I began to believe that those persons who were members of singles clubs may not have been very representative of the population of dating individuals over the age of 60. I then asked for volunteers in senior centers and placed advertisements in the personals column of the newspaper with the purpose of recruiting older dating couples -- this technique was similar to a quota sample. After conducting my interviews, I asked each respondent if he or she could refer another older dater -- the snowball approach to obtaining a sample.
While my study is subject to criticism because the sampling strategy employs a nonprobability approach, my best defense has been the unknown nature of the population at large (i.e., how many older persons are actually dating, and can anyone identify such a population?). I have tried to compensate for the lack of randomization by enlarging the sample size -- with the rationale that a sample of 200 older daters is most likely more representative than a sample of 10. However, "bigger is not always better," and in the final analysis there is no way of insuring the representativeness of a nonprobability sample!
I have also worked on other research projects which initially had well designed probability samples, but due to the problems of nonresponse and inaccurate sampling frames, the final samples may have been as biased and unrepresentative as those produced by nonprobability sampling techniques.
The importance of sampling to the success of a research endeavor cannot be overstated. While the concepts may have been successfully operationalized, the interviewers carefully trained, and the statistical measures appropriately applied and interpreted, an unrepresentative and biased sample can render the results of any study invalid.
IMPORTANT CONCEPTS COVERED IN THIS CHAPTER
Accidental Sample -- Availability Sample -- Content Analysis
Disproportional Sample -- Ex Post Facto Experimental Design -- Field Study
Historical Research -- Judgment Sample -- Nonprobability Sampling
Participant Observation -- Population -- Probability Sampling
Proportional Sample -- Purposive Sample -- Qualitative Methods
Quantitative Methods -- Quota Sample -- Sampling Frame
Simple Random Sample -- Simulation Design -- Snowball Sample
Stratified Random Sample -- Survey Design
IMPORTANT POINTS COVERED IN THIS CHAPTER
1. The decision for methodological type is influenced by 1) the nature of the research hypotheses, 2) the body of knowledge concerning the relationship between the variables of interest, 3) one's expertise in a given methodology, and 4) the resources at hand for carrying out the research.
2. In the true experiment several conditions must be present. First, the true experiment has a control and an experimental group of research subjects. Typically, it is the control group who receives no treatment, test, stimulus, or whatever the independent variable of interest might be. The experimental group, however, is subjected to the independent variable. Furthermore, in the true experimental design, there is a pretest assessment of the dependent variable, followed by the introduction of the independent variable, and followed by a post-test assessment of both groups.
3. While the experimental method is usually superior to other research designs in terms of internal validity and the ability to infer causality, the sociologist's primary objection to this design is the artificial context in which the study takes place, the small number of subjects studied, and lack of representativeness of the subjects selected.
4. Survey research employs group administered pencil and paper questionnaires, face-to-face and telephone interviews, mailed self-administered questionnaires, and/or some other techniques of data collection which would produce quantitative assessment.
5. Participant observation or field research involves the direct observation of persons in natural settings.
6. Historical research consists of content analysis and statistical evaluation of data originally collected for some other purpose.
7. Simulations are methods of data collection in which a model is constructed which represents the phenomenon under study. What distinguishes simulations from the other methodologies is that simulations are concerned with theory elaboration and hypothesis generation.
8. In a probability sample each unit or element in the general population has an equal or known chance of being included in the sample. By contrast, it is not possible to determine the likelihood that an element of the population might be included in the sample if one employs a nonprobability sampling technique.
9. In a simple random sample every unit in the population has an equal chance of being included in the sample.
10. In the stratified random sample the population is divided into two or more subgroups or strata. Within each strata, elements are selected by the simple random method.
11. In a cluster sample the population is divided into areas or clusters. These clusters are then randomly selected with either simple random or stratified random techniques.
12. The accidental sample consists of units which are obtained because cases are readily available.
13. The quota sample is an attempt to approximate the stratified random sampling technique but in a non-random manner. The researcher first identifies those categories which he or she feels are important to insure the representativeness of the population, then establishes a sample size for each category, and finally selects individuals on an availability basis.
14. In the snowball sample the researcher selects available respondents to be included in the sample. After the subject has been surveyed, the researcher asks for referrals to other individuals who would represent the population of concern.
15. The purposive or judgment sample is employed by the researcher in order to approximate the cluster sample using a nonprobability sample. The judgment sample can also use the individual (rather than a group) as the sampling unit.
REFERENCES IN CHAPTER
Babbie, Earl. 1983. The Practice of Social Research (Third Edition). Belmont, California: Wadsworth Publishing Company.
Campbell, Donald T. and Stanley, Julian C. 1966. Experimental and Quasi-experimental Designs for Research. Chicago: Rand-McNally.
Humphreys, Laud. 1970. Tearoom Trade: Impersonal Sex in Public Places. Chicago: Aldine-Atherton.
Whyte, William F. 1943. Street Corner Society. Chicago: University of Chicago Press.
Hochschild, Arlie Russell. 1973. The Unexpected Community. Berkeley: University of California Press.
Gamson, William A. 1978. SIMSOC: Simulated Society. New York: Free Press.
Tallman, I., Wilson, L. and Straus, M. 19??. "SIMCAR: A Game Simulation Method for Cross-National Family Research." Family Sociology, Volume 13, Number 2, pp. 121-144.