Ideology and Issues
Updated Mar 9, 2025
Wednesday
Friday
Election forecasts reflect varying combinations of:
Forecasts differ in the extent to which they rely on these components and how they integrate them in their final predictions
The preeminence of polling in modern forecasts reflects the success of Nate Silver and FiveThirtyEight in correctly predicting the 2008 (49/50 states correct) and 2012 (50/50) presidential elections
Any one poll is likely to deviate from the true outcome
Averaging over multiple polls reduces this deviation
Assuming the polls aren’t systematically biased
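The logic of poll averaging can be sketched with a quick simulation, a minimal Python example in which the true support level and sample size are hypothetical:

```python
import random

random.seed(1)
TRUE_SUPPORT = 0.52    # hypothetical true vote share
N_RESPONDENTS = 800    # hypothetical sample size per poll

def one_poll():
    """One unbiased poll: observed share of respondents backing the candidate."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(N_RESPONDENTS))
    return hits / N_RESPONDENTS

polls = [one_poll() for _ in range(50)]

single_error = abs(polls[0] - TRUE_SUPPORT)                  # one poll's miss
average_error = abs(sum(polls) / len(polls) - TRUE_SUPPORT)  # the average's miss
# With unbiased polls, the average's miss shrinks as more polls are included
```

If the polls share a common bias, averaging reproduces the bias rather than canceling it.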
Concerns about the polls reflect the failure of such approaches to predict
Trump’s Victory in 2016
Strength of Trump’s Support in 2020
Some likely explanations
Likely voter models overstated Clinton’s support
Large number of undecided voters broke decisively for Trump
White voters without a college degree underrepresented in pre-election surveys
Polls did a better job
Forecasts correctly call:
However…
Average polling error for the national popular vote was 4.5 percentage points, the highest in 40 years
Polls overstated Biden’s support by 3.9 points in national polls (4.3 points in state polls)
Polls overstated Democratic support in Senate and gubernatorial races by about 6 points
Forecasts predicted Democrats would hold
Unlike 2016, no clear cut explanations for what went wrong
Not a cause:
Potential Explanations
Overall, pretty good
Average error close to 0
Average absolute error ~ 4.5 percentage points
Some polls tended to overstate Republican support (e.g. Trafalgar)
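The distinction between average error and average absolute error above can be shown with a toy calculation (the error values are hypothetical):

```python
# Hypothetical state-level polling errors in percentage points (sign = direction)
errors = [4.5, -5.0, 3.5, -4.0, 4.0, -3.0]

mean_error = sum(errors) / len(errors)                       # signed misses cancel
mean_abs_error = sum(abs(e) for e in errors) / len(errors)   # magnitudes do not
```

A forecast can thus be unbiased on average while still missing each race by several points.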
Harris 2-3 point lead in popular vote
Math of electoral college favors Republican candidates
Polls will be wrong, but hard to predict the direction of the error (this is a good thing)
A lot can happen
Different:
Who’s right?
This week’s readings are HARD
Our goal is to answer the following:
Converse introduces the concept of belief systems and tells us this article is about the contrast between the belief systems held by political elites and the mass public
He gestures towards a hierarchy of belief strata and the importance of belief systems for democratic theory
Kind of slow start
Converse defines his core concepts
Beliefs Systems
Idea elements
Constraint:
Centrality:
Range:
Converse lays out some plausible sources of ideological constraint:
Logical: More spending + Less taxes -> Bigger deficits
Psychological: “the quasi-logic of cogent arguments”
Social: Social diffusion of information -> creates perceptions of what goes with what
Converse also offers a definition of the well-informed person who understands what goes with what but can also articulate why.
Converse argues that as we move from the well-informed to the uninformed, several things happen:
So how does he go about showing this?
Converse considers people’s open-ended responses to questions about whether there is anything they like or dislike about the presidential candidates in 1956 and the political parties, discussed in detail in chapter 10 of The American Voter
Ideologues:
Well, the Democratic Party tends to favor socialized medicine and I’m being influenced in that because I came from a doctor’s family.
Group Benefits:
Well, I just don’t believe they’re for the common people
Nature of the times:
My husband’s job is better. … My husband is a furrier and when people get money they buy furs
No Content:
I hate the darned backbiting
Next Converse compares these levels of conceptualization in 1956 with people’s ability to attach the correct ideological labels to the political parties
Overall, most respondents label Democrats as the liberal and Republicans as the conservative party (Table 2)
But the depth of this understanding appears quite shallow (e.g. spend vs save) (Table 3)
Recognition varies with education (Table 4)
Those with greater levels of recognition are more active in politics (Table 5)
Next Converse considers the degree of constraint (measured by correlations) between issue elements in an elite (congressional candidates) compared to the mass public
People who took a liberal position on one issue did not necessarily take a liberal position on another
the correlations between elites’ issue attitudes were higher than those among the mass public
Read the footnotes!
I believe Converse is using a measure of association for ordinal data that’s built off the cross tabs of a pair of variables: Goodman and Kruskal’s gamma,

$$\gamma = \frac{C - D}{C + D}$$

where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, and “ties” (cases where either of the two variables in the pair are equal) are dropped
Note
As long as you have a basic sense of what correlations are trying to tell us you don’t need to know the technical details of a specific estimator
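For those who want the details anyway, a minimal sketch of a gamma estimator in Python (my illustration, not Converse's code): count concordant and discordant pairs, dropping ties.

```python
from itertools import combinations

def gamma(x, y):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant),
    dropping pairs tied on either variable."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj or yi == yj:
            continue  # tied pairs are dropped
        if (xi - xj) * (yi - yj) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

Two perfectly aligned ordinal rankings give gamma of 1, perfectly reversed rankings give -1, and unrelated rankings fall near 0.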
Converse considers the stability of responses over time, looking at data from 1958–1960; he finds variation in the strength of temporal correlations across surveys and attributes this to the centrality of groups and the party system for the mass public
The final piece of Converse’s argument concerns the stability of a single belief over time (1956, 1958, 1960):
The government should leave things like electrical power and housing for private businessmen to handle
A limiting case: an issue not in the public debate of this period
People appear to answer the question at random
Only a small proportion (~20% fn. 39) held stable attitudes across all three periods
Converse wraps up his argument by
Allowing for the possibility of small “issue publics” on more narrow issues
Offering some comments on cross-national and historical comparisons
Summarizing the “continental shelf” that exists between elites and masses.
Converse (1964) remains one of the most influential articles in American Political Behavior
Framed decades of research on questions of ideology and citizen competence
In the absence of coherent and stable worldviews, how does democracy function?
Ansolabehere et al. pick up a critique made by Achen (1975) and others that the lack of constraint is primarily caused by measurement error
They show that measurement error tends to decrease with the number of items one uses
They offer a simple solution: measure concepts with scales constructed from multiple items
Classic measurement error models assume what we observe, $X^*$, is a measure of some unobserved (latent) truth $X$, plus measurement error $e$ that has mean 0 and is uncorrelated with the latent truth:

$$X^* = X + e, \quad E[e] = 0, \quad \text{Cov}(X, e) = 0$$

One can show that:

$$\text{Var}(X^*) = \text{Var}(X) + \text{Var}(e)$$

And the covariance between our observed and unobserved variables is:

$$\text{Cov}(X^*, X) = \text{Var}(X)$$

With some assumptions and transformations we can show that the square of the correlation describes the reliability of a measure:

$$\text{Corr}(X^*, X)^2 = \frac{\text{Var}(X)}{\text{Var}(X) + \text{Var}(e)}$$
Reliability is the proportion of the variance in the observed variable that comes from the latent variable of interest, and not from random error.
This motivates Ansolabehere et al.’s approach
Panel data from the NES
Principal component factor analysis to scale items together
Correlational analysis of items, both across time and within surveys
Simulations
Sub-group analysis by political sophistication
Regression analysis of issue voting
Find dimensions that explain the maximum variance with the minimum error
A useful tool for data reduction
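A bare-bones illustration in Python (pure stdlib, using power iteration rather than a factor-analysis library; all numbers are hypothetical): three simulated issue items driven by one latent dimension all load on the first principal component.

```python
import random

random.seed(2)
N = 5_000

# Simulate 3 issue items, each driven by one latent ideology dimension plus noise
latent = [random.gauss(0, 1) for _ in range(N)]
items = [[x + random.gauss(0, 0.7) for x in latent] for _ in range(3)]

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

C = [[cov(items[i], items[j]) for j in range(3)] for i in range(3)]

# Power iteration: the leading eigenvector of the covariance matrix
# gives the first principal component's loadings
v = [1.0, 1.0, 1.0]
for _ in range(100):
    w = [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = sum(wi * wi for wi in w) ** 0.5
    v = [wi / norm for wi in w]
# All three items load positively and roughly equally on the first component
```

The first component here is essentially an average of the items, which is why scaling items together recovers the latent dimension better than any single item.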
This is in contrast to what Converse’s “Black and White” model would predict and consistent with general arguments about measurement error
The entry point for Freeder et al.’s critique
Democracy is saved?
It’s the measures, not the public, that are the problem
Use multiple measures and scale them together to study the concepts we’re interested in
The importance of political sophistication may be overstated
Panel data with multiple issue items, measures of general knowledge/sophistication, and specific measures of WGWW proxied by candidate and party placements
Correlations and scale properties
Sub-group analysis
Regression analysis
Simulations
More items reduce measurement error
Constraint doesn’t vary with general knowledge, but does vary with WGWW
True of scales and individual items
WGWW predicts attitude stability
But only for people who agree with their party’s positions
More items won’t fix the problem
Correcting for measurement error alone won’t save democracy
Multiple items are still useful
Where does knowledge of what goes with what come from?
POLS 1140
Social Groupings as central objects in belief systems