of the intended population. For example, there are systematic differences between people who do or do not enter themselves on the electoral register or own a telephone – in these cases, young adults, those in privately rented accommodations, and the unemployed tend to be underrepresented – so that lists of such persons are not completely representative of the adult population. Another common source of sampling bias, also associated with the selection process, is nonresponse error. Nonresponse is a good indicator of response bias in that, as a general rule, the higher the proportion of nonrespondents in a study, the greater the degree of bias among those who participate. Although nonresponse error can occur for a number of reasons, the largest component of nonresponse is usually made up of people who refuse to participate in a survey, either by declining to be interviewed or by not completing a questionnaire. Rules of thumb for acceptable levels of survey response vary, but 60 percent is usually regarded as the bare minimum. Whatever the magnitude of nonresponse, all surveys should identify the nature and extent of any bias and seek to address it. This can be done by a variety of means, the most common being further follow-ups – personal callbacks or additional telephone calls to the sample address – in an attempt to “retrieve” the nonrespondents.

The sample size is the number of cases or individuals in the sample studied. It is usually represented with the symbol n. Usually, the larger the sample size in a study, the more likely it is that the results will accurately reflect the universe, or population, from which the sample was obtained (all else being equal). Nevertheless, there are tradeoffs when sample sizes are increased. In particular, larger samples usually add time and expense to data collection, coding, and data entry. In attempting to balance the increased accuracy of a survey against the additional time and expense that accompanies larger sample sizes, you need to keep in mind the law of diminishing returns. This basic law of probability means that the larger your sample size already is, the less likely it is that there will be a notable increase in accuracy from adding more individuals. By the way, contrary to common sense, the size of the universe basically has no bearing on the size of the sample needed to achieve accuracy at a given level. Normally, samples are, in fact, tiny fractions of the populations from which they are drawn. For example, 3,000 adults is the usual sample size employed by nationally representative opinion poll surveys, such as the General Social Survey (GSS) in the United States.
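The law of diminishing returns, and the near-irrelevance of the size of the universe, can be illustrated with a short Python sketch using the standard formula for the margin of error of a sample proportion with a finite population correction; the sample and population sizes below are chosen purely for illustration.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion p,
    with a finite population correction for a population of size N."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# Accuracy improves quickly at first, then flattens out (diminishing returns).
for n in (100, 500, 1000, 3000, 10000):
    print(n, round(margin_of_error(n, N=50_000_000), 4))

# The size of the universe barely matters: a sample of 1,000 gives almost
# the same margin of error for a medium-sized city as for a whole country.
for N in (100_000, 1_000_000, 50_000_000):
    print(N, round(margin_of_error(1000, N), 4))
```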

There are two general types of samples: probability samples and nonprobability samples. A probability, or scientific, sample is a sample that has been selected using random selection methods so that each individual (element) in the target population is chosen at random and has a known, nonzero chance of selection. The sampling fraction (SF) is the chance of selection of each element of the population. It is calculated from the sample size (n) divided by the population size (N), that is, SF = n/N. It is generally assumed that a representative sample is more likely to be the outcome when this method of selection from the population is undertaken. The aim of probability sampling is to keep sampling error to a minimum. Once the sampling fraction and the sample size are known, probability theory provides the basis for a whole range of statistical inferences to be made about the characteristics of the population from the observed characteristics of the sample drawn from it. For example, the standard deviation of the distribution of sample means, which is referred to as the standard error of the mean for any one characteristic (such as age), can be calculated to assess the reliability of the sample data. Large standard errors reduce our confidence that the sample is fully representative of its target population. In calculating the standard error, the size of the sample is crucial, in that the larger your sample, the smaller your standard error.
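As a minimal sketch of these two quantities, with invented numbers, the sampling fraction SF = n/N and the estimated standard error of the mean of a characteristic such as age can be computed as follows.

```python
import statistics

population_size = 2_500_000                         # N: assumed size of the target population
ages = [23, 31, 45, 52, 38, 29, 61, 47, 35, 42]     # hypothetical sample of ages (n = 10)

n = len(ages)
sampling_fraction = n / population_size             # SF = n / N

# Estimated standard error of the mean: sample standard deviation / sqrt(n).
standard_error = statistics.stdev(ages) / n ** 0.5

print(f"SF = {sampling_fraction:.7f}")
print(f"mean age = {statistics.mean(ages):.1f}, SE = {standard_error:.2f}")
```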

The most common type of probability sample is the simple random sample. As in all probability samples, selection is based on the equal probability of selection method (EPSM), in that each member of the population has an equal chance of being selected. For a long time, the two most common techniques for obtaining a pure random sample involved literally throwing the names of every member of the population into a hat and drawing out names one at a time, or using what is called a “table of random numbers,” which is found in the back of most statistics texts. Today, selection is commonly done by a computer, which is programmed to generate random lists of names from sampling frames. Whatever the method of selection, it is first necessary to obtain a sampling frame, which uniquely identifies every member of the population. This is not as straightforward as it first appears, in that not only is it often difficult to obtain a complete and accurate listing of your intended population, but it can also be surprisingly difficult to define exactly who the target population is. For example, in examining the relationship between religion and attitudes towards political violence in Northern Ireland, how do we define our target population? Is it the total population of Northern Ireland or just the adult population in private households, usually defined as individuals aged eighteen years or over?
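Once a sampling frame exists, drawing a simple random sample is mechanically straightforward. The sketch below assumes a hypothetical frame held as a list of identifiers; Python's random.sample gives every element an equal chance of selection.

```python
import random

# Hypothetical sampling frame: in practice this would be read from, say,
# an electoral register or a staff list.
sampling_frame = [f"person_{i:04d}" for i in range(1, 1001)]   # N = 1,000

random.seed(42)                                   # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=100)     # n = 100, drawn without replacement

print(sample[:5])
print("sampling fraction:", len(sample) / len(sampling_frame))
```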

Because taking a simple random sample can be a long and tedious process, particularly when a large population is to be sampled, researchers often use a modification called systematic or interval sampling. Systematic sampling takes sampling units directly from the sampling frame at designated intervals (such as every tenth name in a telephone directory) or at designated positions (such as the third name from the top of each page). Although systematic sampling cannot be considered random sampling in the strictest sense – once the interval has been designated, most members of the universe no longer have any chance of being chosen – no one seriously questions that systematic sampling methods are as representative as pure random sampling methods.
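A sketch of systematic (interval) sampling, with a hypothetical frame: the interval is the frame size divided by the desired sample size, and a random start within the first interval determines which names are taken.

```python
import random

def systematic_sample(frame, n):
    """Take every k-th element of the frame after a random start,
    where k is the sampling interval (frame size // sample size)."""
    k = len(frame) // n                # sampling interval, e.g. every tenth name
    start = random.randrange(k)        # random starting position within the first interval
    return frame[start::k][:n]

random.seed(1)
frame = [f"name_{i:03d}" for i in range(1000)]   # hypothetical telephone directory
print(systematic_sample(frame, 100)[:5])
```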


Another common variation on the simple random sample is the stratified sample. This is a special type of random sampling that is often undertaken to make sure that groups (such as female professors) with low representation in a target population (for instance, British universities) are adequately (or highly) represented in the sample. Users of this sampling method take a sampling frame (for example, a list of all professors in British universities), divide the constituents up according to one or more characteristics (such as the proportion who are male and female), and then randomly sample individuals from the resulting lists. Of course, a necessary precondition is that the researcher must know how the stratifying variable (gender) is distributed in the target population (British universities). If the sampled proportions fit the population proportions exactly, this sort of sampling is known as proportionate stratified sampling. Alternatively, if extra cases (additional individuals) are selected, this is known as disproportionate stratified sampling, where the sample proportions on the characteristics of interest exceed the population proportions. Disproportionate stratified sampling is usually undertaken if the number that would appear in the sample is too small to allow any reliable conclusions to be drawn. In general, however, stratified random samples are used to ensure that the sample matches the population in all crucial respects.
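Proportionate stratified sampling can be sketched as follows, with invented numbers: the frame is split by the stratifying variable (here gender) and each stratum is sampled at the same overall rate n/N, so that the sample proportions mirror the population proportions.

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical sampling frame of professors with a known gender for each.
frame = [(f"prof_{i:04d}", "female" if i % 5 == 0 else "male") for i in range(1, 2001)]

def proportionate_stratified_sample(frame, n):
    """Sample each stratum at the same rate n/N (proportionate allocation)."""
    strata = defaultdict(list)
    for person, gender in frame:
        strata[gender].append(person)
    sampling_fraction = n / len(frame)
    sample = []
    for gender, members in strata.items():
        k = round(len(members) * sampling_fraction)   # this stratum's share of the sample
        sample.extend(random.sample(members, k))
    return sample

sample = proportionate_stratified_sample(frame, n=200)
print(len(sample))   # about 200, with women represented in their population proportion
```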

Another common variation, which is particularly suitable for studying populations spread over a large geographical area, is cluster sampling. The word “cluster” refers in this context to what might be called naturally occurring groups of subjects, such as church congregations or students in a university. In its most elementary form – a simple cluster sample – a researcher picks a few clusters and then collects data from respondents in each of the clusters. Say, for example, a researcher was interested in studying the views of particular religious groups about whether women should be ordained as priests. Here the researcher selects a number of church congregations geographically dispersed throughout Britain and, at each location, interviews a random sample of members about their opinion on this issue. Thus, although the church members are randomly picked, the congregations to be sampled are not. Nevertheless, this sampling procedure will normally approximate a representative sample.
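A minimal sketch of a simple cluster sample, with made-up congregations and membership lists: a few clusters are picked at random, and a random sample of members is then drawn within each selected cluster.

```python
import random

random.seed(3)

# Hypothetical clusters: congregations and their members.
congregations = {
    f"congregation_{c}": [f"member_{c}_{i}" for i in range(random.randint(40, 120))]
    for c in "ABCDEFGHIJ"
}

def simple_cluster_sample(clusters, n_clusters, n_per_cluster):
    """Randomly pick clusters, then randomly sample members within each."""
    chosen = random.sample(list(clusters), n_clusters)
    sample = []
    for name in chosen:
        members = clusters[name]
        sample.extend(random.sample(members, min(n_per_cluster, len(members))))
    return chosen, sample

chosen, sample = simple_cluster_sample(congregations, n_clusters=3, n_per_cluster=20)
print(chosen, len(sample))
```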

In many situations, there is no obvious sampling frame that can be used or compiled. This means that it is not possible to draw random samples. In these circumstances, nonprobability sampling methods are used. One of the most commonly used forms of nonprobability sampling is the convenience or haphazard sample. This involves building a sample almost by accident from those who are conveniently at hand, such as interviewing friends and neighbors, or just stopping passers-by on a street corner. Because of this reliance on convenience, the method leaves the researcher open to all sorts of bias in the selection of respondents and, thus, should be avoided if at all possible. An improvement on this method is the adoption of a purposive sample. Here, the researcher deliberately seeks out those who meet the needs of the study. This kind of sampling is often associated with so-called snowballing techniques, in which those in an initial sample are asked to name others who might be willing to be approached. This type of sampling is often used in studies of deviant or closed groups, such as heroin users, where the initial respondents themselves provide the names of additional study members. Because purposive or snowball samples are not very useful in most large-scale surveys, a method that tries to approximate random selection procedures is most often used. This is what is known as a quota sample. This method is superficially similar to stratified random sampling, but does not involve any statistically random procedures. In quota sampling, the population is divided into categories that are known to be important, such as gender and age, and for which it is possible to get some basic information, either from a census or a similar source. This method is very widely used in large-scale marketing surveys, as it is an economical and efficient way of achieving a sample that matches the broad and known features of a population. The individuals chosen, however, are not randomly selected and, thus, it is not strictly legitimate to apply probability-based statistical techniques to the results.
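Quota sampling amounts to filling category targets until each is met, with no random selection involved. The sketch below uses invented quotas and an invented stream of passers-by.

```python
import itertools

# Quotas derived from census-style information (hypothetical numbers).
quotas = {("female", "under 40"): 30, ("female", "40+"): 20,
          ("male", "under 40"): 30, ("male", "40+"): 20}

def quota_sample(people, quotas):
    """Accept each person only while their gender/age category still has room."""
    remaining = dict(quotas)
    sample = []
    for person in people:
        cell = (person["gender"], person["age_group"])
        if remaining.get(cell, 0) > 0:
            sample.append(person)
            remaining[cell] -= 1
        if not any(remaining.values()):
            break                      # all quotas filled
    return sample

# Hypothetical stream of passers-by approached on a street corner.
passers_by = ({"gender": g, "age_group": a}
              for g, a in itertools.cycle([("female", "under 40"), ("female", "40+"),
                                           ("male", "under 40"), ("male", "40+")]))
sample = quota_sample(itertools.islice(passers_by, 1000), quotas)
print(len(sample))   # 100 once every quota is filled
```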

BERNADETTE HAYES

sampling error

– see sampling.

sanction

– see norm(s).

Saussure, Ferdinand de (1857–1913)

A professor of general linguistics at the University of Geneva, Saussure is widely regarded as the founding figure of French structuralism. But, significantly, Saussure did not use the term “structure”; he preferred “system” for conceptualizing the relation between language and society. The key elements of his doctrines, set out in his posthumously published Course in General Linguistics (1916 [trans. 1974]), which was reconstructed from students’ notes, include: the distinction between “langue” (language) and “parole” (speech); the arbitrary character of the sign; and the function of difference in the constitution of meaning through the conjunction of signifier and signified.

For Saussure, language rather than speech is the object of the science of structural linguistics. The preexisting language system is assimilated, he argued, by the speaker and reproduced through the heterogeneous production of speech.

The production of meaning does not arise from a pregiven connection between the sign and the real world of objects. Rather, Saussure argued that signs are made up of a signifier (a sound or image) and a signified (the concept or meaning evoked). The meaning of a word arises through its differences from other words: a pencil, for example, is not a pen. Language creates meaning only through an internal play of differences.

Structuralism, and particularly poststructuralism, came to put more emphasis on the productive role of the signifier and less on the signified. The impact of Saussure’s thought is especially evident in the work of Claude Lévi-Strauss, Roland Barthes, and Jacques Lacan.

ANTHONY ELLIOTT

scales


The investigation of attitudes is a prominent area in social research. One of the most common techniques for conducting such an investigation is through the use of scales. A scale consists of answers to a number of statements. The most widely used scale in social research is the Likert scale, named after Rensis Likert (1903–81), who developed the method in the 1930s to provide an ordinal-level measure of a person’s attitude. In constructing a Likert scale, respondents are presented with a number of statements (“items”), some positively and some negatively phrased, and asked to rate each statement in terms of their agreement or disagreement. The reason for including both positively phrased and negatively phrased statements is to avoid the problem of the response set. The response set, otherwise known as the acquiescent-response bias, is the tendency for some people to answer a large number of items in the same way (usually agreeing) out of laziness or a psychological predisposition. Typically, responses are scored using five-point bipolar categories, such as strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. An “alpha coefficient” can be calculated to assess the reliability of the item battery.

The following two statements, designed to measure attitudes towards providing sex education for young people, will illustrate this process.

1 Sex education helps young people prepare for married life.
2 Sex education encourages experimentation and promiscuity among young people.

For each item, respondents are asked whether they: strongly agree, agree, neither agree nor disagree, disagree, strongly disagree, or don’t know. Note that, in this example, the additional category of “don’t know” was included in the response set. Although researchers have debated whether or not to offer a “don’t know,” or no-opinion, category, as a general rule it is better to offer this nonattitude (no opinion) choice, as it allows researchers to distinguish those who hold a genuinely neutral opinion (neither agree nor disagree) from those without any opinion.

Likert scales are called summated-rating or additive scales because a person’s score on the scale is computed by summing the scores of the responses the person gives. In other words, each item is assumed to have equal weight and the final score is determined by simply adding up the various subscores on all the separate items. Thus, while simplicity and ease of use are the Likert scale’s greatest strengths, some important limitations include the assumption of equal intensity of opinion (all items are weighted equally) as well as its lack of reproducibility (different combinations of several scale items can result in the same overall score).
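A sketch of summated Likert scoring under the five-point coding described above (strongly agree = 5 through strongly disagree = 1), with negatively phrased items reverse-coded before summing; the item names and responses are hypothetical versions of the sex-education examples.

```python
# Five-point Likert coding; "don't know" responses are treated as missing here.
CODES = {"strongly agree": 5, "agree": 4, "neither agree nor disagree": 3,
         "disagree": 2, "strongly disagree": 1}

# item id -> True if the item is negatively phrased and must be reverse-coded
NEGATIVE = {"sex_ed_prepares": False, "sex_ed_promiscuity": True}

def likert_score(responses):
    """Sum item scores, reverse-coding negatively phrased items (6 - score)."""
    total = 0
    for item, answer in responses.items():
        if answer not in CODES:
            continue                      # skip "don't know" / missing answers
        score = CODES[answer]
        if NEGATIVE[item]:
            score = 6 - score             # reverse the direction of the item
        total += score
    return total

respondent = {"sex_ed_prepares": "agree", "sex_ed_promiscuity": "strongly disagree"}
print(likert_score(respondent))           # 4 + (6 - 1) = 9: a pro-sex-education attitude
```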

The Bogardus scale, otherwise known as a social-distance scale, is now one of the most widely used techniques in measuring attitudes towards ethnic groups. Developed in the 1920s by Emory Bogardus (1882–1973) to measure the willingness of members of different ethnic groups to associate with each other, it can also be used to see how close or distant individuals feel towards some other group (for example, religious minorities or immigrants). The scale has a simple logic. People respond to a series of ordered statements: those that are the most threatening are at one end, and those that are least threatening are at the other end. The scale is considered unidimensional in that each item is assumed to measure the same underlying concept, and there is no set number of statements required; the usual number ranges from five to nine. An example of a Bogardus scale to assess how willing individuals are to associate with immigrants would ask respondents how comfortable they feel with the following statements:

1 Immigrants entering your country;
2 Immigrants living in your community;
3 Immigrants living in your neighborhood;
4 Immigrants as personal friends; and
5 Immigrants marrying a member of your family.

Social-distance scales are considered cumulative scales in that, unlike Likert scales, there is a relationship between the pattern of item response and the total score. In other words, the logic of the scale assumes that individuals who refuse contact or are uncomfortable with the more socially distant items (immigrants entering your country) will also refuse the socially closer items (an immigrant marrying a member of your family).
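The cumulative logic can be made explicit in a short sketch, with invented responses: the total score is simply the number of contacts accepted, and a response pattern is consistent with the scale if acceptance never resumes once a refusal has occurred as the items move from distant to close contact.

```python
# Bogardus items ordered from the most socially distant to the closest contact.
ITEMS = ["entering your country", "living in your community",
         "living in your neighborhood", "as personal friends",
         "marrying a member of your family"]

def bogardus_score(accepted):
    """Score = number of contacts accepted; also report whether the response
    pattern is cumulative (no acceptance after the first refusal)."""
    pattern = [item in accepted for item in ITEMS]
    score = sum(pattern)
    first_refusal = pattern.index(False) if False in pattern else len(pattern)
    cumulative = not any(pattern[first_refusal:])
    return score, cumulative

# Hypothetical respondent: comfortable up to "as personal friends" but not marriage.
print(bogardus_score({"entering your country", "living in your community",
                      "living in your neighborhood", "as personal friends"}))
# -> (4, True)
```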

Another form of cumulative scaling is the Guttman scale. Developed by Louis Guttman (1916–87) in the 1940s, these scales are considered strictly unidimensional and list items in order of favorableness. A frequently cited example of a Guttman scale measuring attitudes towards abortion is the following:

1 Abortion is acceptable under any circumstances;
2 Abortion is an acceptable mechanism for family planning;
3 Abortion is acceptable in cases of rape;
4 Abortion is acceptable if the fetus is found to be seriously malformed; or
5 Abortion is acceptable if the mother’s life is in danger.

A respondent who agrees with the first statement is assumed to agree with subsequent statements. In fact, if you know a respondent’s final score, it should be possible to reproduce his or her responses to all the scale items. This feature is called reproducibility and, when constructing Guttman scales, a coefficient of reproducibility is calculated to determine the extent to which the scale conforms to this requirement.
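A common way of calculating the coefficient of reproducibility is CR = 1 - errors / total responses, where an error is any item response that departs from the perfect cumulative pattern implied by the respondent's total score. The sketch below, with invented response patterns, follows that convention; by convention a CR of roughly 0.90 or above is usually taken to indicate a scalable set of items, though cut-offs vary.

```python
def coefficient_of_reproducibility(response_patterns):
    """CR = 1 - errors / total responses.  An error is any item response that
    differs from the ideal cumulative pattern implied by the respondent's score
    (all 1s first, then all 0s, for items ordered from easiest to hardest to endorse)."""
    errors = 0
    total = 0
    for pattern in response_patterns:
        score = sum(pattern)
        ideal = [1] * score + [0] * (len(pattern) - score)
        errors += sum(1 for got, want in zip(pattern, ideal) if got != want)
        total += len(pattern)
    return 1 - errors / total

# Hypothetical agree (1) / disagree (0) responses to the five abortion items,
# ordered here from the least to the most permissive statement, so a perfectly
# cumulative respondent agrees up to some point and disagrees thereafter.
patterns = [
    [1, 1, 1, 1, 1],   # agrees with everything: perfectly cumulative
    [1, 1, 1, 0, 0],   # agrees only with the three least permissive items
    [1, 1, 0, 1, 0],   # two deviations from the ideal pattern for a score of 3
]
print(round(coefficient_of_reproducibility(patterns), 3))   # 1 - 2/15 = 0.867
```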

The semantic differential was developed by Charles E. Osgood (1916– ) in the 1950s to provide an indirect measure of how an individual feels about a concept, object, or person. The technique measures subjective feelings towards a person or thing by using adjectives. Because most adjectives have polar opposites – good/bad – it uses polar-opposite adjectives to create a rating measure or scale. To use a semantic differential, respondents are presented with a list of paired opposite adjectives, usually with a continuum of seven to eleven points between them. Each respondent marks the spot on the continuum between the adjectives that expresses their feelings. The adjectives can be very diverse and should be well mixed. In other words, positive items should not be located mostly on either the right or the left side. The semantic differential has been used for many purposes. An example of a semantic differential scale measuring attitudes towards political parties would include the following:

Weak __ __ __ __ __ __ __ Strong
Right-wing __ __ __ __ __ __ __ Left-wing
Loser __ __ __ __ __ __ __ Winner.
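Scoring a semantic differential can be sketched as follows, with hypothetical adjective pairs and seven-point positions: positions run from 1 at the left pole to 7 at the right, pairs whose positive pole sits on the left are reverse-scored, and the mean gives a simple evaluation score for the concept being rated.

```python
# Adjective pairs (left pole, right pole); True means the positive pole is on
# the left and the item must be reverse-scored.  The pairs are hypothetical.
PAIRS = {("weak", "strong"): False,
         ("loser", "winner"): False,
         ("honest", "dishonest"): True}

def semantic_differential_score(ratings):
    """Convert 1-7 markings into scores, reversing items whose positive pole
    is on the left (8 - rating), and return the mean evaluation."""
    scores = []
    for pair, rating in ratings.items():
        score = 8 - rating if PAIRS[pair] else rating
        scores.append(score)
    return sum(scores) / len(scores)

# Hypothetical respondent rating one political party on the three pairs.
party_ratings = {("weak", "strong"): 6, ("loser", "winner"): 5, ("honest", "dishonest"): 2}
print(round(semantic_differential_score(party_ratings), 2))   # (6 + 5 + 6) / 3 = 5.67
```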

BERNADETTE HAYES AND ROBERT MILLER

scarcity

In economics and classical political economy, this concept is central to rational choice theory. Scarcity is the key assumption behind the notion that in the market the competition between individuals over scarce resources forces them to specialize and to economize. The principal criterion of rationality involves the rational allocation of resources in a context of scarcity of means to satisfy human needs in order to maximize outcomes. Marginal utility theory concerns the rational allocation of an extra unit of effort to the satisfaction of a want or need via scarce means, and market equilibrium is achieved when individuals’ wants are satisfied. What causes scarcity? In classical economics, it was assumed that scarcity is a consequence of the fact that nature cannot provide enough resources to satisfy human wants. In the classical demographic argument of Thomas Robert Malthus, nature is a constant, and unregulated population growth results in pestilence and famine. Human beings must exercise sexual restraint in order to match population growth against fixed resources. Scarcity compels people to limit their (sexual) desires. The alternative view, which was implicit in much of the work of the Frankfurt School, is that scarcity is socially produced and that capitalism artificially inflates the human propensity to consume. The crises of the capitalist mode of production are therefore often interpreted as consequences of the unstable relationship between production and consumption. The British economist John Maynard Keynes argued that economic slumps could be resolved by