Explain what is meant by the term "unobtrusive measures" and give two examples of such measures.
Unobtrusive measures are measures that don't require the researcher to intrude in the research context. Direct and participant observation require that the researcher be physically present, which can lead respondents to alter their behaviour in order to look good in the eyes of the researcher. Similarly, a questionnaire is an interruption in the natural stream of behaviour: respondents can get tired of filling out a survey or resentful of the questions asked.
Unobtrusive measurement (also known as unobtrusive research) is a method of data collection that does not involve direct contact with the research participants. This differs from direct measures such as surveys, interviews, and questionnaires, which involve interaction with the participants. Structured observation is an example of an unobtrusive measure: there is no direct interaction with the participants, only observation from a distance.
unobtrusive measures Techniques for collecting data without the knowledge of respondents. Two types—the covert and the indirect—may be identified. The former include, for example, covert participant observation, undisclosed notetaking, or the use of one-way mirrors. The latter involve the use of personal documents and other records which might offer indirect measures of variables, such that the need for interaction between the investigator and his or her subjects is obviated. (For example, student satisfaction with new educational practices might be assessed by inspecting records of attendance at classes and rates of switching between courses, rather than by direct interview or questionnaire.) The justification for such methods is that, because respondents are unaware of their status as research subjects, their activities are unaffected by certain potential biases in the research situation itself—such as the desire to please the investigator. Although some of these techniques (most notably covert observation) are now frowned upon by professional sociological associations as being ethically suspect, the imaginative use of existing documentary sources for novel research purposes is occasionally very effective, although one is normally working 'against the grain' of the data, since they have usually been collected for purposes other than those embodied in the research. See also INTERVIEW BIAS; RESEARCH ETHICS.
An indirect measure is an unobtrusive measure that occurs naturally in a research context. The researcher is able to collect the data without introducing any formal measurement procedure. The types of indirect measures that may be available are limited only by the researcher's imagination and inventiveness. For instance, let's say you would like to measure the popularity of various exhibits in a museum. It may be possible to set up some type of mechanical measurement system that is invisible to the museum patrons. In one study, the system was simple. The museum installed new floor tiles in front of each exhibit they wanted a measurement on and, after a period of time, measured the wear-and-tear of the tiles as an indirect measure of patron traffic and interest. We might be able to improve on this approach considerably using electronic measures. We could, for instance, construct an electrical device that senses movement in front of an exhibit. Or we could place hidden cameras and code patron interest based on videotaped evidence.
Content analysis is the analysis of text documents. The analysis can be quantitative, qualitative or both. Typically, the major purpose of content analysis is to identify patterns in text. Content analysis is an extremely broad area of research. It includes:
Thematic analysis of text
The identification of themes or major ideas in a document or set of documents. The documents can be any kind of text including field notes, newspaper articles, technical papers or organizational memos.
Indexing
There are a wide variety of automated methods for rapidly indexing text documents. For instance, Key Words in Context (KWIC) analysis is a computer analysis of text data. A computer program scans the text and indexes all keywords. A keyword is any term in the text that is not included in an exception dictionary. Typically you would set up an exception dictionary that includes all non-essential words like "is", "and", and "of". All keywords are alphabetized and listed with the text that precedes and follows them, so the researcher can see each word in the context in which it occurred in the text. In an analysis of interview text, for instance, one could easily identify all uses of the term "abuse" and the contexts in which they were used.
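A minimal sketch of KWIC indexing in Python may make this concrete. The stop-word list, the context window size, and the sample sentence are all illustrative assumptions, not part of any particular KWIC program:

```python
from collections import defaultdict

# Illustrative exception dictionary of non-essential words.
STOP_WORDS = {"is", "and", "of", "the", "a", "an", "to", "in"}

def kwic_index(text, window=3):
    """Index every keyword with the words that precede and follow it."""
    words = text.lower().split()
    index = defaultdict(list)
    for i, word in enumerate(words):
        if word in STOP_WORDS:
            continue  # skip non-essential words
        before = " ".join(words[max(0, i - window):i])
        after = " ".join(words[i + 1:i + 1 + window])
        index[word].append(f"{before} [{word}] {after}")
    # Alphabetize the keywords, as a KWIC listing would.
    return dict(sorted(index.items()))

index = kwic_index("the abuse of power and the abuse of trust")
for context in index["abuse"]:
    print(context)
```

Looking up `index["abuse"]` surfaces every occurrence of the term together with its surrounding words, which is exactly the lookup described above.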
Quantitative descriptive analysis
Here the purpose is to describe features of the text quantitatively. For instance, you might want to find out which words or phrases were used most frequently in the text. Again, this type of analysis is most often done directly with computer programs.
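As a sketch of this kind of frequency count using only Python's standard library (the sample text is made up for illustration):

```python
import re
from collections import Counter

text = ("Content analysis identifies patterns in text. "
        "Patterns in text can be counted, and counts can be compared.")

# Tokenize into lowercase words and tally them.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

# List the most frequently used words in the text.
for word, n in counts.most_common(5):
    print(word, n)
```

On a real corpus the same two lines of tokenizing and tallying scale to thousands of documents, which is why this style of analysis is usually automated.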
Content analysis has several problems you should keep in mind. First, you are limited to the types of information available in text form. If you are studying the way a news story is being handled by the news media, you probably would have a ready population of news stories from which you could sample. However, if you are interested in studying people's views on capital punishment, you are less likely to find an archive of text documents that would be appropriate. Second, you have to be especially careful with sampling in order to avoid bias. For instance, a study of current research on methods of treatment for cancer might use the published literature as the population. This would leave out both the writing on cancer that did not get published for one reason or another as well as the most recent work that has not yet been published. Finally, you have to be careful about interpreting the results of automated content analyses. A computer program cannot determine what someone meant by a term or phrase. It is relatively easy in a large analysis to misinterpret a result because you did not take into account the subtleties of meaning.
However, content analysis has the advantage of being unobtrusive and, depending on whether automated methods exist, can be a relatively rapid method for analyzing large amounts of text.
Secondary Analysis of Data
Secondary analysis, like content analysis, makes use of already existing sources of data. However, secondary analysis typically refers to the re-analysis of quantitative data rather than text.
In our modern world, there is an unbelievable mass of data that is routinely collected by governments, businesses, schools, and other organizations. Much of this information is stored in electronic databases that can be accessed and analyzed. In addition, many research projects store their raw data in electronic form in computer archives so that others can also analyze the data.
Among the data available for secondary analysis are:
census bureau data
standardized testing data
Secondary analysis often involves combining information from multiple databases to examine research questions. For example, you might join crime data with census information to assess patterns in criminal behaviour by geographic location and group.
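A minimal sketch of such a join in Python, using made-up figures and hypothetical county codes as the shared key:

```python
# Hypothetical crime counts and census populations, keyed by county code.
crime = {"001": 420, "002": 95, "003": 310}
population = {"001": 52000, "002": 8700, "003": 41000}

# Join the two sources on the shared key and
# compute a crime rate per 1,000 residents.
rates = {
    county: round(crime[county] / population[county] * 1000, 2)
    for county in crime.keys() & population.keys()
}

for county, rate in sorted(rates.items()):
    print(county, rate)
```

In practice the join would run over database tables or data frames rather than dictionaries, but the logic is the same: match records on a common identifier, then compute the derived measure.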
Secondary analysis has several advantages. First, it is efficient. It makes use of data that were already collected by someone else. It is the research equivalent of recycling. Second, it often allows you to extend the scope of your study considerably. In many small research projects, it is impossible to consider taking a national sample because of the costs involved. Many archived databases are already national in scope and, by using them, you can leverage a relatively small budget into a much broader study than if you collected the data yourself.