A Short History of American Capitalism
 Copyright © 2002 by Meyer Weinberg. All Rights Reserved.  


 

Chapter 11

HUMAN COSTS OF AMERICAN CAPITALISM, 1945-2000

As American capitalism developed, the standard of living gave way increasingly to the standard of dying, especially in the lower reaches of American society. As early as 1850, Dr. Edward Jarvis analyzed death by occupation in a developing capitalist America. (See chapter 6.) A century and a half later, the class nature of death was even more sweepingly present.

In 1973, Evelyn Kitagawa and Philip Hauser wrote the first comprehensive book-length study of the socioeconomics of mortality, under the auspices of the American Public Health Association.1 Their principal interest was "the achievement of equal opportunity for survival."2 Socioeconomic differentials such as occupation, income, and education were inversely related to mortality in the U.S. On a world scale, the United States stood no higher than 16th in life expectancy at birth.3 The authors' research dealt almost wholly with whites since this population record was relatively complete over the years 1930-1960, the study period. During those years "the age-adjusted death rate (over all ages combined) of nonwhite males was 20 percent greater than white, and that of nonwhite females 34 percent greater than white."4 Disparities both racial and socioeconomic indicated that "the biomedical knowledge already available is not effectively within the grasp of the lower socioeconomic components of the population of the nation."5

Disparities in mortality were enormous in some dimensions. For example, "white males 25 to 64 of lower education (less than five years of school) experienced mortality 64 percent above that of men with high education (four years of college)."6 For women it was 105 percent.

Kitagawa and Hauser were among the earliest users of the concept of "excess mortality". By this they meant the deaths that would not have occurred "if the mortality level of white men (or women) of high socioeconomic status had prevailed among all men (or women)."7 Thus, in the United States of 1960, 292,000 deaths could be regarded as "excess". Of this total, 92,000 were male and 200,000 female, constituting 11 percent of all male deaths and 30 percent of all female deaths.8 In Chicago during the study period, "there was no significant decrease in excess mortality among white adults throughout … 1930-1960 … and excess mortality among nonwhites … was still 41 to 46 percent of all deaths."9
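To make the arithmetic of this concept concrete, the following sketch (in Python, using entirely hypothetical populations and death rates rather than figures from the study) shows how excess deaths are computed: each group's observed deaths are compared with the deaths it would have experienced at the reference rate of the best-off group, and the differences are summed.

```python
# Illustrative sketch only; the groups, populations, and rates are hypothetical
# placeholders, not data from Kitagawa and Hauser.

def excess_deaths(groups, reference_rate):
    """Observed deaths minus the deaths expected if every group had
    experienced the mortality rate of the best-off reference group.

    groups: list of (population, observed_death_rate) tuples,
            rates expressed as deaths per 1,000 persons per year.
    reference_rate: death rate (per 1,000) of the high-status reference group.
    """
    excess = 0.0
    for population, observed_rate in groups:
        observed = population * observed_rate / 1000.0
        expected = population * reference_rate / 1000.0
        excess += observed - expected
    return excess

# Hypothetical example: three groups of men with differing death rates,
# compared against a reference rate of 8 deaths per 1,000.
groups = [(1_000_000, 8.0), (2_000_000, 10.0), (1_500_000, 13.0)]
print(excess_deaths(groups, reference_rate=8.0))  # 11500.0 "excess" deaths
```

Applied to real data, the same subtraction underlies the 292,000-death figure cited above.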

Gregory Pappas and his colleagues completed a replication of the Kitagawa-Hauser study, covering the years 1960-1986.10 Their overall finding was as follows:

Poor or poorly educated persons have higher death rates than wealthier or better educated persons, and these differences increased from 1960 through 1986. The disparity in death rates among adults 25 to 64 years of age has widened in relation to income and educational level.11

White men of the lowest educational level had a mortality rate two and a half times that of white men at the highest level. Income differences had effects not very distant from these. At the same time, during 1960-1986, death rates for the upper-educated group fell by 50 percent but for the lower group by only 15 percent.12 Pappas and colleagues observe that "Medicaid … appears to have been insufficient to equalize the chances for survival among the poorest and least educated."13 The researchers remark that a possible explanation of rich-poor differences may lie in the area of life styles. In conclusion they state that "the results of this study raise serious questions about disparities in opportunity and equity in our nation."14

A further study by Eugene Rogot and associates concluded "much the same" as Kitagawa and Hauser, that "at age 25, white men at the low end of the scale, those with family incomes of less than $5,000 could expect to live on the average about 44 more years compared to 54 more years for those with family incomes of $50,000 or more."15 The study covered 1979-1985. Moore and Hayward studied the role of occupations in life expectancy but noted that previous investigators—including Kitagawa and Hauser—had ascribed only a single occupation to each subject in their studies. Upon further research they discovered that "ignoring prior occupational exposure will result in understatements of the effect of hazardous working conditions, insofar as these effects develop insidiously over time."16

A team of researchers led by George Kaplan studied inequality of income and mortality in the various states of the United States during the 1980s.

Strikingly, declines in mortality in the 1980s, experienced by all states, were smaller in states that had greater inequalities in income at the beginning of the decade. When changes in income inequality were examined with respect to the worst off 10% of households in each state, increasing income inequality was associated with smaller declines in mortality over the decade.17

John Lynch and colleagues examined mortality and income inequality across metropolitan areas of the United States during 1989-1991. "It is not only the absolute amount of income that is important for health," they wrote, "but also the relative disparity with which income is distributed in a population."18 Applying the concept of excess mortality to the metropolitan data, the Lynch group found mortality rates ranged from 642.5 to 1092.9 per 100,000 over the various metropolitan areas; the average rate was 849.6. By calculating "the difference in mortality rates between high and low income inequality quartiles" the scope of "the disease burden associated with differences in income inequality" became clear.19 If mortality experience in all areas were equalized to that in the most favorable areas, average mortality would fall from 849.6 to 709.8 per 100,000. "This mortality difference," the Lynch group notes, "exceeds the combined loss of life from lung cancer, diabetes, motor vehicle crashes, HIV infection, suicide, and homicide in 1995."20

Two further studies of excess mortality concerned African-American populations in New York City and Chicago. The first study, conducted by McCord and Freeman, dealt with Central Harlem during 1979-1981. The second was done by a team led by Avery Guest and focused on areas of greatest socioeconomic distress in Chicago during the years 1989-1991.21

In Harlem, as compared with New York City at large, "the rate of hospital admissions is 26 percent higher, the use of emergency rooms is 73 percent higher, the use of hospital outpatient departments is 134 percent higher, and the number of primary care physicians per 1000 people is 74 percent lower."22 Mortality rates in Harlem for persons under 65 years of age were nearly three times those for white males in the city and somewhat less than that for white females; the highest rates were for women 25 to 34 years of age (more than six times the white rate) and for men 35 to 44 years of age (just under six times).23 "If the death rate among U.S. whites had applied to this community, there would have been only 3,994 deaths [rather than 6,415 deaths in 1979-1981.]"24 Living styles in Harlem contributed to the high mortality: "Cirrhosis, homicide, accidents, drug dependency, and alcohol use were considered the most important underlying causes of death in 35 percent of all deaths among people under 65. …"25

In Chicago, "the average death rate of black males in the prime working ages [of 25-54 years] is almost three times that of non-black males and over six times that of non-black females."26 More of these high death rates was explained by varying degrees of unemployment than of education. (Kitagawa and Hauser had not considered unemployment when they studied the Chicago experience.) "The mortality rate for black males aged 35-44 living in Chicago communities with the highest unemployment is more than triple the rate for U.S. black males aged 35-44, and more than nine times the rate for U.S. white males aged 35-44. …"27 Guest and associates also studied infant-death rates in Chicago and found that in both black and white areas, "socioeconomic status has negative effects on infant mortality." In addition, they reported that "infants in the most economically distressed black areas of Chicago are three times more likely to die than are those born in the most affluent non-black neighborhoods."28

Both in Harlem and in Chicago, the researchers observed that black males between 5 and 65 had higher mortality rates than their counterparts in Bangladesh—one of the poorest countries in the world.29 Indeed, death rates of black males aged 35-44 in Chicago were more than five times the rate for Bangladeshi counterparts.30 Guest and associates generalized beyond that single country: "Chicago communities with the highest levels of socioeconomic distress experience mortality rates far exceeding those found in some developing countries."31

In the year 2000, the World Health Organization published the first of a series of reports on worldwide health conditions. Excess mortality was presented as a serious problem: "In 1990, 70% of all deaths and fully 92% of deaths from communicable diseases in the poorest quintile [of the world population] were 'excess' compared to the mortality that would have occurred at the death rates of the richest quintile."32 The report continued:

The denial of access to basic health care is fundamentally linked to poverty—the greatest blight on humanity's landscape. For all their achievements and good intentions, health systems have failed globally to narrow the health divide between rich and poor in the last 100 years. In fact, the gap is actually widening.33

Dr. Gro Harlem Brundtland, director-general of WHO, declares that "the impact of this failure is borne disproportionately by the poor."34 Further, the report itself stresses that "inequalities in life expectancy persist, and are strongly associated with socioeconomic class, even in countries that enjoy an average of quite good health."35 It should be noted that WHO is not voicing a hope for the sudden end of all poor health. By "preventable deaths" they mean "deaths due to causes amenable to medical care."36 More specifically, "the health system … has the responsibility to try to reduce inequalities by preferentially improving the health of the worse-off, wherever the inequalities are caused by conditions amenable to [medical] intervention."37 This was what Kitagawa and Hauser meant in their prescient work of 1973.

As indicated above, in the early 1970s the United States ranked no higher than 16th in the world with respect to life expectancy at birth. In 1999, according to WHO, the U.S. had declined to 29th (males) and 26th (females).38

A special characteristic of official U.S. mortality and morbidity (sickness) statistics is the presence of racial categories and the absence of class categories.39 Yet, of the racial and class differentials in mortality and morbidity, the class differentials are the larger.40 During 1985-1987, for example, the average annual percentage of persons with limitation of activity due to chronic conditions, by race, age, and family income was as follows:41

                        Family Income
                Less than $20,000    $20,000 or more
Black [14.5]          18.3                 7.3
White [13.9]          21.4                 9.7

As explained by Vicente Navarro:

Morbidity rates for blacks making less than $20,000 were much closer to those for whites in the same income group than to those for blacks in income groups greater than $20,000. Similarly, morbidity rates for whites below $20,000 were closer to those of blacks in the same income group than to those of whites in income groups over $20,000.42

Navarro also reports that "class differentials in mortality in the U.S. … are … increasing rather than declining."43

In 1992, according to a careful count by Paul Leigh and colleagues, 66,800 workers died from workplace accidents and illnesses while another 14,057,500 workers experienced non-fatal workplace accidents and illnesses.44 Fatal diseases outnumbered fatal accidents by nearly 10 to 1, while non-fatal accidents outnumbered workplace illnesses by a ratio of over 9 to 1. All the sources of information used by the researchers had shortcomings; over 214 sources were examined. The Leigh group considers the estimates at which they arrived to be on the conservative side. One reason for this is the great likelihood that both workplace accidents and illnesses are underreported. Nevertheless, they estimate that "the total cost of occupational injuries and illnesses appears to be considerably larger than those for Alzheimer's disease and of the same magnitude as those of cancer, of all circulatory disease, and of all musculoskeletal conditions."45

ESTIMATED NUMBER OF WORKPLACE INJURIES AND ILLNESSES IN THE UNITED STATES, 1992

               INJURIES      ILLNESSES          TOTAL
Deaths            6,500         60,300         66,800
Non-Fatal    13,206,500        857,500     14,057,500
TOTALS       13,213,000        917,800     14,124,300

Source: J. Paul Leigh and others, "Occupational Injury and Illness in the United States. Estimates of Costs, Morbidity, and Mortality," Archives of Internal Medicine, 157 (July 28, 1997), p. 1564.

Until the late 1960s, "business-funded institutions were the only organized constituency in occupational safety and health. …"46 The situation at Harvard Medical School was not atypical: "The class of 1968 … received only one lecture on the subject [of occupational health and safety] in four years of training, compared to twelve lectures in 1949. …"47 Two national surveys of medical schools produced the following findings:

Occupational health was specifically taught to medical students in only 50% (1977-78) and 66% (1982-83) of medical schools. Occupational health was part of the required core curriculum in 30% (1977-78) and 54% (1982-83) of the schools. The required median time devoted to occupational health was only 4 hours [a year] in both surveys.48

In 1991-92, another survey of 127 medical schools was made. Occupational health was taught in 68 percent of the schools and was a required course in 60 percent of the institutions. The median number of hours of instruction per year was six.49 Overall, very little substantive change was reported. Burstein and Levy suggest this resulted from the failure of the field of study to generate income comparable to other hospital-based specialties such as surgery and internal medicine.

In the workplaces of the nation, however, endangerments to workers' lives grew. Between 1968 and 1986 injury rates rose.50 While injury rates for black men fell, they remained half again as high as those for white men. At the same time, rates for black women rose. James Robinson concludes that "rates of work-related injuries are high and are not falling." "Occupational safety appears to be the neglected step child of an occupational health policy focused primarily on chronic disease and of injury prevention policies focused primarily on the home and the highway."51

Injury frequency rates fell sharply during 1945-1955. According to David Fairris, the rates "decreased most rapidly … in those manufacturing industries where workers possessed the greatest shopfloor power."52 Recall that the decade marked the peak of the numerical strength of American unions. Also, factory workers had staked out a series of traditional practices and rights which gave them power in the workplace beyond the strict letter of the contracts with employers. When workers felt in control, the workshop became a safer place to work. In the late 1950s and early 1960s, however, new winds began to blow through the shops. Employers, to some degree in concert, began to establish or re-establish their control.

Employers began systematically to reduce the scope of informal custom and practice in the determination of shop floor conditions, centralize management decision-making over such matters, and interpret labor's rights more narrowly as those contained in collective bargaining agreements.53

 

Throughout industry, productivity began to rise again, not so much through new technology as through a speed-up of the machinery. Injury rates headed upward as well. Workers responded with unauthorized strikes, heightened absenteeism, and deliberate slowdowns achieved by working to rule. Productivity declined. Into the 1990s, with advances in lean production, manufacturing industries experienced a deteriorating health and safety record.54

During the early to mid-1990s, according to the Bureau of Labor Statistics, "the proportion of lost-workday injuries and illnesses that involved days away from work dropped from 76.9 percent in 1992 to 64.7 percent in 1996."55 The increasing job insecurity during those years explained, at least in part, the hesitation among workers to endanger their jobs by excessive absenteeism. A survey reported in 1998 found that clauses in union contracts requiring local-level labor-management safety and health committees "were found in 29.4 percent of all contracts reviewed, a figure that was up from 26.5 percent 20 years earlier."56 Less than a three-percent advance over two decades could hardly be regarded as genuine growth, especially during a period of rising workplace injuries and illnesses. In addition, as Rosner and Markowitz put it:

In the conservative political climate of the cold war and McCarthy period, nearly all unions sought to win wage and benefit packages for their membership and to leave issues of workplace control to management. … Unions bargained for financial support of welfare funds and Blue Cross or private health insurance coverage rather than prevention of disease at the workplace.57

This was true even in industries in which workplace illnesses were very serious.

In 1969, a California state Department of Health report declared that "of the 774 [farm] workers examined, 548 or 71 percent displayed one or more symptoms of pesticide poisoning."58 Thirty years earlier, only 32 pesticide products were registered with the federal government, while by 1989 "there were 729 active ingredient pesticide chemicals formulated into 22,000 commercial products."59 "Workers in agriculture face an average risk of skin disease four times higher than workers in other industries."60 Not every labor union was sensitive to this issue. In California, for example, while the United Farm Workers stipulated in its contracts that use of the deadliest pesticides was forbidden, the International Brotherhood of Teamsters permitted farm employers to use whatever pesticides they wished, regardless of their effect on farmworkers.61 Throughout the economy, beyond agriculture, "10% to 33% of all types of lung cancer in men are attributable to occupational exposures."62 With reference to the pollutant dioxin, which is widely present in common sources, a federal agency in 2000 announced that "the chemical is 10 times more likely to cause cancer than previously estimated."63 Among the sources of the new report are "studies of industrial workers."

Industry suppression of research results that demonstrated the lethal effects of asbestos, widely used in the United States, was practiced as recently as the 1960s.64 In December 1934, "outright deception began."65 In 1950, K. Smith, the medical director of Canadian Johns-Manville, wrote in a letter to company officials:

I know of two hazards in our operations which may lead to pulmonary malignancy, namely asbestos and silica. There is ever increasing evidence in the medical literature today that these materials lead to lung cancer.66

This information was not made public; in fact, it contradicted the company's public denials of cancer effects of asbestos. The company's deception notwithstanding, however, in the 1960s medical researchers such as Selikoff and Wagner established the asbestos-cancer tie. "The findings were quickly appreciated by the general medical community. The industry was in a compromised position."67 Lilienfeld wrote in 1991 that "despite the disclosures of suppression and fraud, no mechanisms have been implemented to prevent future such occurrences."68 The leading firms in the industry had been implicated but their very prominence seemed to protect them from the clear responsibility involved.

By the year 2000, the National Institute of Environmental Health Sciences released a report listing "218 substances known or suspected to cause cancer in people."69 These substances included a number that had significant industrial uses in mining, the manufacture of alcohol, lead batteries, phosphate fertilizers, soap and detergents, synthetic ethanol, synthetic rubber, coating and plating, plastic and synthetic products and alloy, sterilization of medical devices, and diesel exhaust particulates.

The 1950s saw the passage of federal laws concerning occupational disability of miners that resulted in significant increases in beneficiaries: "Between 1957 and 1975, the number of beneficiaries increased from 149,850 to 4,264,092; expenditures went from a monthly average of $10,940,000 to $665,684,000."70 Many of these changes were brought about by concerted social movements. "At critical junctures in the political contest over medical ideas, confrontational collective action accomplished what careful scientific investigation and subtle private negotiation could not."71 More happily, not only has new legislation emerged but "the prevalence of dust disease has declined markedly over the past quarter century, as has the incidence of new cases."72

At the same time, new scientific knowledge was emerging toward the end of the 20th century that implied the existence of new endangerments at the workplace. The Environmental Endocrine Hypothesis asserted that

a diverse group of industrial and agricultural chemicals in contact with humans and wildlife have the capacity to mimic or obstruct hormone function, not simply disrupting the endocrine system like foreign matter in a watchworks, but fooling it into accepting new instructions that distort the normal development of the organism. Mostly synthetic organic chemicals, these compounds have been implicated in more than two dozen human and animal disorders, including reproductive and developmental abnormalities, immune dysfunction, cognitive and behavioral pathologies, and cancer.73

Krimsky notes that the growing incidence of breast cancer gives support to the endocrine hypothesis: "Between 1973 and 1989 the incidence of breast cancer increased by 21 percent, almost 1 percent annually. … The lifetime risk of breast cancer has climbed from 1 in 20 at the end of World War II to 1 in 8 by the mid-1990s, more than doubling the risk."74 The chemical industry, by the 1990s, "had launched an ideological offensive against the use of scientific hypotheses implicating chemicals in human disease."75

Although toward the close of the 20th century more government power was placed in the service of protecting workers in the workplace, workers often continued to be subject to employer influence. Thus, in 1969, farm workers, probably the least protected in the country, filed a series of requests with the Occupational Safety and Health Administration (OSHA). These included elementary necessities such as "toilets and drinking water to reduce heat-induced injuries, infectious diseases, and pesticide poisoning." They waited 18 years for OSHA action.76 In 1990, six years after promulgation of the OSHA field sanitation standards, a survey in North Carolina discovered that "only about 4 percent of the farms complied with the requirements for toilets, potable drinking water, and hand-washing facilities."77 In many other cases, individual workers who complained about unsafe working conditions were discharged from their jobs. This was especially the case in non-unionized workplaces. Even when federal laws required certain employer action, the laws were frequently violated. Thus, in 2000, "Congressional investigators say thousands of employers are violating a federal law that requires the level of insurance coverage for mental illness to be similar to that for physical illness."78

Business organizations were nearly unanimous in opposing bills benefiting workers who were endangered by workplace conditions.

In the 100th Congress (1987-1988), the Chamber of Commerce, National Association of Manufacturers, National Federation of Business and other business lobbies cooperated with the Reagan administration to defeat the High Risk Disease Notification and Prevention Act. The purpose of the measure was to establish a twenty-five-million-dollar federal program to identify, notify, and provide medical monitoring of workers who have been exposed to toxic substances in the workplace. The federal government would bear the responsibility for notifying at-risk employees, and employers would be obliged to pay for the monitoring of employees who were exposed to toxic chemicals on the job.79

A filibuster in the Senate, led by Senators Orrin Hatch (Utah) and Dan Quayle (Indiana), defeated the proposed law. Company officials were apparently not to be required to heed federal investigators' warnings about dangerous conditions in workplaces. W.R. Grace and Company operated a vermiculite mine, contaminated with tremolite asbestos, in Libby, Montana. Grace's own files show that state and federal agencies warned the company often, and as early as the 1960s, that conditions at the Libby mine, the transport center and Grace's processing plant posed health hazards.80

J. Robert Oppenheimer, director of the atomic bomb center at Los Alamos, New Mexico, "kept a tight rein on information that might reveal health problems or suggest that Los Alamos was contaminating his beloved desert."81 After World War II the federal government employed companies to operate many federal weapons facilities, both nuclear and otherwise. "For years the contractors, on orders from the Energy Department or the Atomic Energy Commission that preceded it, routinely fought all claims of injury."82 There followed "five decades of denying that anyone had received enough [radiation or other] exposure to be hurt."83 Only in 2000 did the federal government finally concede that serious disabilities or deaths may have occurred to some 3,000 weapons workers and that they or their survivors would be granted "a lump-sum payment or reimbursement of all medical costs and partial compensation for lost wages."84 The entire episode was strongly reminiscent of the experiences of numerous private corporations in the lead, coal mining, and asbestos industries. Initially, denials of corporate responsibility were standard. Claims of workers were denied as unwarranted in the light of existing scientific knowledge. As independent evidence piled up, however, it became more difficult to continue the customary stance. Only then did the companies relent. Of course, they made as little recompense as possible, as did the federal government when it awarded only partial compensation for lost wages.85

At the close of the 1990s, over 43 million Americans lacked health insurance, a figure that had increased steadily over the decade. How did these people, especially the poor, meet their medical needs? And how well were the needs of those who did have insurance met?

Addressing the quality of health care in the United States, a recent study found that "whether the care is preventive, acute, or chronic, it frequently does not meet professional standards."86 Schuster and colleagues specified:

A simple average of the findings of preventive care studies shows that about 50 percent of people received recommended care. … An average of 70 percent of patients received recommended acute care and 30 percent received contraindicated acute care. … For chronic conditions, 60 percent received recommended care and 20 percent received contraindicated care.87

A study of negligent care for uninsured, minority, and poor populations was conducted in New York State in 1984. Some 31,000 medical records were reviewed. The central finding of the research was that "the uninsured are systematically at higher risk for poor-quality care."88 This research "revealed that more negligent injuries among the uninsured happened in emergency departments, compared with patients with sources of insurance."89

Between 1982 and 1986, access to physician care declined for poor, minority, and uninsured persons, "particularly for those in poor health."90 During those same years, a time when persons who were not poor and were in poor health increased their visits to physicians by 42 percent, "physician visit rates for low-income individuals in poorer health declined by 8 percent. …"91 This trend reversed gains made over the previous two decades. A separate study of the health of African-Americans found that between 1980 and 1991, the number of excess deaths among them increased to 66,000 from 60,000.92

In the municipal hospitals of New York City, designed especially to accommodate the city's poor—nearly half the city population—"all of the poor had access to a higher level of care in 1995 than they had in 1965."93 However, adds the team that made this study: "It would be difficult to conclude there was a significant narrowing of the gap between the services available to the Medicaid population and the privately insured."94 Even more broadly, they conclude that all previous positive measures on behalf of the poor "left them short, even far short, of the health care services available to the affluent."95

One of the most calamitous developments in post-World War II health care was the substitution of jails for mental hospitals. Linda A. Teplin, a professor of psychiatry at Northwestern University Medical School, has stated that "on an average day … 9 percent of men and 18.5 percent of women in local jails—about 56,000 people, are severely mentally ill."96 A U.S. Department of Justice report declared: "Unconstitutional conditions exist at the Los Angeles County Jail, including a deliberate indifference to the inmates' serious mental health needs."97 According to a study in the American Journal of Psychiatry, 95 percent of those who commit suicide in jail or prison have a diagnosed mental disorder.98

Some concentrations of ill-health and excess deaths can be accurately described as "disaster areas". Thus, "in 1977 Jenkins et al. pointed out that the number of excess deaths recorded each year in the areas of worst health care in Boston was considerably larger than the number of deaths in places that the U.S. government had designated as natural disaster areas."99 The Wallaces have examined a similar process at work in the Bronx, New York City, during 1970-1980:

The Health Areas of the South-Central Bronx lost between 55% and 81% of their housing units. … Housing loss of such intensity in so short a time must be counted as a catastrophe of a magnitude rarely experienced outside of war or protracted civil conflict.100

They list six other parts of New York City which were struck by "similar catastrophes."

Beginning in about 1970, according to the Wallaces, the city's fire department adopted a policy of service reduction in areas it viewed as "sick neighborhoods". One out of ten fire companies was closed down. There followed "a contagious fire epidemic". People who could afford to move elsewhere did so, while the poorest remained in the now greatly overcrowded housing. "By 1990, the number of extremely overcrowded housing units had reached about double the 1970 number, the initial condition of the fire epidemic."101 "Epidemics of contagious disease and contagious behavior problems arose; these included tuberculosis, measles, substance abuse, AIDS, low-weight births, and violence." "Life expectancy of elderly blacks declined between 1970 and 1980 after decades of increase and in contrast to that of elderly whites which increased uninterruptedly."102 In 1979, after nearly a decade of community destruction, the New York Department of Health warned about a large increase in TB levels and pointed to "impacts similar to famine or war on the population."103 The Wallaces explain that medical measures alone would prove inadequate to meet the challenge. It was necessary to apply new strategies "including significant improvements in the living and working conditions of the poor."104

A federal government initiative for U.S. public health called "Healthy People 2000" was established around 1980. It was based on a series of health objectives or targets to be achieved by the year 2000. In 1999, the New York Times reported the latest annual progress report:

The United States has met only about 15 percent of its health goals for the year 2000, set 20 years ago … but progress has been made on 44 percent. For about 20 percent of the objectives, the nation is getting less healthy and is moving away from the goals. …105

One of the more astonishing backward trends was "mothers who die in childbirth: it was 6.6 per 100,000 live births in 1987; it rose to 7.5 in 1997. The goal is 3.3."106 A social-work journal observed:

The most troubling revelations in "Healthy People 2000" are the persistence of enormous disparities by socioeconomic status, race, and ethnicity. The proportion of key indicators that are not progressing is considerably greater for ethnic minority groups. … Committed as they are to health promotion, risk reduction, and disease prevention, public health advocates with social good objectives are no match for the uneven, racist, and raw forces of capitalism that form the foundation of the U.S. health care system.107

Excess death continues to be built into America's public health programs.

Wealth in the United States in 1953 continued to be highly concentrated, if somewhat less so than in the preceding two centuries. Arranged by quintiles, or fifths, of wealth holders, the distribution that year was as follows:108

Ranking in Quintile     Percent of Wealth
Top                            71
Second                         17
Third                           7
Fourth                          4
Bottom                          1

The top fifth of adults owned over 70 percent of the entire wealth of the country. The bottom three-fifths together owned only about one-sixth as much. The bottom fifth owned practically nothing.

During the 1950s-1970s, governmental statistical agencies were extremely reluctant to facilitate publication of statistics of the very wealthy. Historian Carole Shammas writes that "between the 1950s and 1980, the amount of statistics on the wealthy the government compiled and made public dropped dramatically."109 In 1977, economist James Smith testified before Congress:

Most of what we have learned about the distribution of wealth … has come about … through the rather energetic support of studies of the distribution of wealth and income by the National Science Foundation, in opposition to a rather enormous effort of the Internal Revenue Service to prevent such studies, and, I might add, some other agencies of Government.110

With reference to the Bureau of the Census, Smith added, "it has really shied away from measuring anything that you could call important wealth information."111 Another witness, economist James Morgan, declared that "interpretation of wealth distributions and understanding of the [economic] system depend on data so that we do not rely on unsupported currently rampant theories about the importance of background, luck, success, or thrift."112 A Treasury Department official testified that in a 1966-1967 survey for the Office of Economic Opportunity "all families with larger amounts of wealth evidently underreported their holdings."113 He mused aloud: "Sometimes we think that we should have prepared for our work by taking courses in creative writing rather than in statistics and economics."114 Confession is good for the soul.

The years of depression and World War II were times of "massive government intervention in the marketplace," during which the wealth distribution became more equal.115 Judging by the top one percent of wealthholders, concentration reached a record low in the years 1972-1976, while in the following five years, it recovered somewhat. During the mid-1970s, while concentration receded, the very richest Americans nevertheless maintained a tight hold on the nation's wealth. Referring to the top one percent and one-half of one percent, Smith and Franklin point out the concentration of stock ownership in 1972: "The superrich who own 66.7 percent of it, potentially control all corporate assets. … There is a certain amount of block control by virtue of the fact that many of the superrich are married to each other."116 Trusts, the most concentrated of all forms of wealth, were virtually all (93 percent) in the hands of "the richest 1 percent in 1965."117 Between 1976 and 1989, "the degree of wealth inequality appears to have almost doubled … and it was 17 percent higher in 1992 than in 1972."118 By 1995, "the top 1 percent of families as ranked by net worth [i.e., assets minus liabilities] owned almost 39 percent of total household wealth; the top 20 percent of households held 84 percent."119

At the other end of the wealth scale, the poorest Americans registered deterioration. "The bottom 40 percent of all households have only about 1 percent of all the wealth in the nation."120 Some sources differ, but the magnitude is in any case quite tiny. During 1983-1992, "the poorest 20 percent have no net worth as a whole"; that is, they had no assets left after liabilities were deducted.121 More specifically, "households with negative net worth [i.e., liabilities exceed assets] account for about 25 percent of the [lowest wealth quintile] … in 1983 and 1992, and about 38 percent in 1989 (5 to 8 percent of all households)."122 In the recession years of 1989-1992, the proportion of whites in the lowest wealth quintile increased while that of blacks declined. "The percentage of households reporting zero or negative net worth increased from 15.5 percent in 1983 to 18.5 percent in 1995."123 In 1995, respondents were asked how many months they could live on their financial reserves.124 (These figures exclude the value of owner-occupied housing.)

                    MONTHS
Top quintile         19.0
Fourth quintile       3.5
Middle quintile       1.2
Second quintile       1.1
Bottom quintile       0.0

Some light is thrown upon the preceding by a statement of Shammas in 1987: "Current estimates are that over half of households headed by someone sixty-five or older have no wealth. …"125 Black wealth holding was between 12 and 17 percent of white wealth holding.126

The distribution of income was less concentrated than that of wealth. It was nevertheless quite lopsided. In a study of unpublished 1949 census data for New York State, Daniel Creamer defined low income as "the cost of a standard minimum budget necessary for the maintenance of health and decency."127 About 30 percent of New York's non-institutional population qualified as low-income. However, "50 percent of all low-income families and 69 percent of all low-income individuals were impoverished in 1949."128 Nearly 85% of all the low-income families were white and 15% were non-white. At the same time, the incidence of low income among New York City non-whites was 79% greater than among white families. Creamer stressed that low incomes were more the result of low wages than of any lack of workers in families.

The years 1950-1978 saw significant improvements in family incomes over the entire country and through the entire range of incomes. A completely different experience was recorded in the following years, 1979-1995. The listing below indicates the percentage growth or decline in family incomes during both periods:129

 

                  1950-1978     1979-1995
Bottom fifth         +138           -9
Second fifth          +98           -3
Middle fifth         +106           +1
Fourth fifth         +111           +7
Top fifth             +99          +26

Referring especially to the later period, "young families and workers stuck at the bottom have suffered the equivalent of an economic depression."130 Those years contained many depressive trends: "Average income of families in the bottom 60 percent of the income distribution was actually lower in 1996 than in 1979."131 Over the years 1973-1993, Lynn Karoly writes, "the bottom four-tenths of the population is worse off in real terms than similarly situated persons 20 years earlier."132

Income trends are not directly translatable to well-being. As Mayer and Jencks explain: "Federal statistical agencies have made relatively little effort to track changes in the quality of poor people's food intake, housing conditions, medical care, education, or health."133 Poor people tend to consume more goods and services than seem to be purchasable by the cash incomes they report to investigators. Borrowing, charity, and underreporting may account for the difference. The Mayer-Jencks studies were made in Chicago. A century ago, studies of standards of living in the U.S. almost always included material on budgets (i.e., expenditures) and did not depend merely on cash-income figures. As techniques of inquiry have become more sophisticated, less attention is paid to actual consumption patterns.

Special situations explained many examples of income experience during the period of depressed earnings. Thus, for example, "the only category of families who actually raised their median income between 1970 and 1987 were those in which both spouses were employed."134 In an international study of 18 countries including the United States, children of wealthy American families had higher living standards than wealthy children in the other countries. At the same time, children in poorer U.S. families had lower standards of living than poorer children in nearly all the other countries studied.135 The rewards and penalties of the American standard of living were reproduced on an international scale. From 1986 through 1997, the real after-tax income of the top one percent of Americans rose by 89 percent while that of the bottom 90 percent rose by a mere 1.6 percent.136

Late in 1998, Louis Uchitelle analyzed what was then called the longest run of economic growth in American history. He approached it from the viewpoint of an entire business cycle rather than as an uninterrupted series of years:

But despite the surge [of 1996] economic growth measured over the entire [business] cycle makes the expansion of the 1990s the weakest since World War II. Americans, for the most part, have been running in place for 25 years. … Economic growth in … [the 1960s] averaged 4.7 percent a year, almost double the performance in the 1990s. … For a century … until 1973, the economy expanded at a 3 percent annual rate, or more, most of the time. … The pie, in effect, has grown more slowly than in the past, and the 1990s expansion has failed to break this pattern.137

The 1990s expansion was the longest but also the weakest since 1945. This may be part of the reason why the expansion's benefits failed to extend to the larger working class. (See below for discussion of wages.)

Minority workers saw their incomes suffer during these years. In 1998, the Annual Economic Report of the President referred to the black-white income gap as being as large as 30 years before.138 Marcus Alexis found the same.139 Michael Weinstein cites a relevant study by Sheldon Danziger of the years 1969-1997:

The earnings of these less skilled white workers fell so far that they earned less in 1997 than their black counterparts, a notoriously underpaid group, had earned almost 30 years earlier. And those black male high school graduates fared almost as badly. From 1969 to 1997, their inflation-adjusted earnings fell by 25 percent.140

Mike Davis traced the fortunes of Hispanic workers:

U.S.-born Mexican men … have seen their median incomes decline from 81 percent of non-Hispanic white men in 1959 to 61 percent in 1990. (For male Mexican immigrants, the fall was from 66 percent to 39 percent; for immigrant females, as compared to white women, from 81 percent to 51 percent.)141

Poverty rates for Puerto Ricans in New York City rose to 48 percent in 1988 from 28 percent in 1970.142 Between 1979 and 1989, the income ratio of Native American men to white men fell some 12 percent; for women, the ratio fell from 77.0 percent to 69.8 percent.143

"It is shameful that `prisoner' is just about the fastest-growing `occupation' in the United States, the `trade' of more than 2 percent of American men."144

Spreading automation was a basic factor underlying the deterioration of labor conditions throughout the economy: "In establishments of 500 or more employees, 83 percent used computer-aided design (CAD) or computer-aided engineering (CAE), 70 percent used numerically controlled or computer numerically controlled machines, 36 percent used flexible manufacturing cells or systems, and 43 percent used robots."145 These figures relate to 1987 and help explain why manufacturing productivity in the 1980s "rose … 40 percent faster than in the previous twenty years."146 A sizable portion of the increase was associated with rising military expenditures under Reagan. Nevertheless, the decade of the '80s remained a period of low worker income, depressed wages, and destabilized employment conditions. Profitability was at high levels because rising productivity accrued overwhelmingly to the benefit of employers. These conditions continued to operate in the following decade. Between 1959 and 1995, "average weekly earnings of American workers in the private sector fell (in 1982 dollars) from a level of $260.86 … to $255.90. …"147 Writing in 1999, Ethan Kapstein reported that "some 30 percent of American workers earn poverty-level wages, up from 24 percent in 1979."148 Between 1988 and the late 1990s, profits rose by nearly 50 percent.149 Meanwhile, whereas in 1975 three-quarters of the unemployed received unemployment insurance, by 1995 the figure had fallen to 36 percent.150 At the end of 1994, real wages were back to their level of the late 1950s.151

Economist Lester Thurow seemed astonished by the great and growing inequality in the American economy of the 1990s. In 1996 he wrote: "No country not experiencing a revolution or a military defeat with a subsequent occupation has probably ever had as rapid or as widespread an increase in inequality as has occurred in the United States in the past two decades."152 At the same time, he warns: "It is fair to surmise that if capitalism does not deliver rising real wages for a majority of its participants in a period when the total economic pie is expanding, it will not for long hold on to the political allegiance of a majority of the population."153 But American capitalism remained resistant to such forewarnings. The early 1990s continued to witness declines in real wages; 1996 and 1997 finally saw a reversal; but in the year 2000, the first eight months saw a resumption of real-wage decline.154

At the mention of the word "poverty," many people picture a penniless beggar without a job. But in post-World War II America, an increasing number of people were in poverty even though they were full-time, year-round workers. During the long economic boom of 1961-1969, significant numbers of such working poor moved out of poverty. During the even longer boom of the 1990s, however, there was no diminution in the number of working poor. Writing in 2000, Linda Barrington observed that "the poverty rate among full-time workers is higher now than it was during the last recession [1989-1991]."155 In the Midwest, poverty among ethnic minorities is rising while that among minorities in the South is declining slowly. The working poor among ethnic minorities find themselves more frequently in and out of poverty. Overall, "the number of full-time workers in poverty has doubled since the late 1970s from about 1.5 million to almost 3 million by 1998."156 Adding in dependents makes a total of between four and five million Americans.157 "A non-white full-time worker today is … one-and-a-half times more likely to be poor than is a full-time worker in the population at large."158 For whatever reason, "poverty among Western, white, full-time, year round workers has experienced a distinct upward climb since the 1970s."159

The persistence of poverty among full-time workers is in large part related to structural changes in the economy. This is illustrated by the following trends of employment between 1963 and 1998 (by percent):160

 

                             1963      1998
High-paying industries         28        21
Middle-paying industries       37        16
Low-paying industries          35         6

"Between 1965 and 1998," writes Barrington, "combined employment in the retail and service sectors—the two lowest paying sectors, on average—increased from 30 to 48 percent of all production and non-supervisory employment."161 In the earlier year—1965—a cashier in a supermarket had to punch into the cash register the price of every item. Now, prices are entered by an optical scanner when the cashier moves each item over a small glass plate. The scanner reads the barcode printed on each package, looks up the current price of the item in a central computer, and adds it to the bill. At the same time, the products, identified from the bar codes, are deducted from an inventory record. Information about customer purchase statistics and trends, including items purchased at the same time, can also be compiled automatically and sent to a central office instantly.

Between the years 1967 and 1993, child poverty in the United States rose steadily. Compared with 17 other industrialized countries, the U.S. had the highest rate.162 The average in western Europe is less than half the U.S. rate of 20 percent. The poorest children in the United States have a smaller income than their counterparts in the other 17 countries while the richest children in the U.S. have higher incomes than almost all other counterparts elsewhere. This feature of the American standard of living is its most distinguishing characteristic. The United States is the most favorable location in which to be rich but it is the least favorable rich place in which to be poor.

Over an 18 year period, from 1969 to 1986, the child poverty rate increased … in the United States from 13.1 percent to 22.9 percent, before falling to 21.5 percent in 1991. All of the Scandinavian countries have been able to keep child poverty below five percent. …163

The American "war" on poverty seems to be staffed by pacifists.

In 1992, the rural poverty rate was higher than that of the nation as a whole. In the Mississippi Delta the poverty rate was 44 percent, "but 64 percent of residents are African-American, and for them the poverty rate is at least 60%."164 New factories arose in the El Paso and Ciudad Juarez areas. "Ironically, this [industrial] expansion … has been followed by a significant increase in local poverty rates, due primarily to depressed wages caused by an excess labor supply, which in turn results from increasing immigration from Mexico."165 Growing numbers of fruit and vegetable workers continue to stream to the Northwest and the Plains "where growers have not moved towards mechanization but towards Mexicanization."166

Poverty rates among African Americans and Hispanics drifted somewhat lower but overall "the poverty rate … is still above the rate for any year in the 1970s."167 Blacks are still in poverty at 2½ times the rate for whites.168

During the last quarter of the 20th century, consumer debt became a significant prop supporting the American economy. Previously—as in the 1920s—consumer debt in the form of installment buying supported a significant if still modest sector of the economy. Now, however, new financial institutions and practices emerged to promote the growth of consumer debt. Very rapidly these became major factors on the financial scene.

At the end of 1995, four major credit cards (Visa, MasterCard, Discover, and American Express) represented over $358 billion in outstanding credit. Another $77.6 billion was represented by cards issued by oil companies and retail stores. A trade journal referred to "the extranormal profitability of credit card lending."169 In the years 1971-1982 the profit rate on credit cards was in the vicinity of a 16.67 percent return on equity. Starting in 1983, however, the profitability rate "jumped" to around 75 percent.170 A U.S. Supreme Court ruling in 1978171 facilitated repeal of state usury ceilings, in effect permitting credit card operators to move their facilities to states with the least regulation of permissible interest rates on credit cards. Since over 80 percent of card issuers' profits come from interest payments by card-holders, the interest rate was for all practical purposes deregulated; card issuers' profits soared, as indicated above.172 No wonder that "the return on assets on credit cards equaled roughly four times the return on banking activities generally."173 During 1995, "more than one in eight poor households had credit card debt greater than twice their monthly income, and more than one in six had credit card debt as large as their monthly income or larger."174

Consumer debt was by no means the result merely of thoughtless overreaching. "Many middle-class and poor families," reports Lisa Keister, "are forced to take loans for daily survival and thus erode the small amount of wealth they may have accumulated."175 In 1995, for example, nonwhite families owned only $425 more than their debts; this sum could be pressed down to zero or less by one spell of unemployment.176 "Debt ownership was one reason that the poor grew poorer during the 1980s and 1990s."177 In 1962, in an era called the "Golden Age" by many writers and economists, "the bottom 40 percent of the population actually owned negative financial wealth."178 In popular parlance, they were "in hock". (Financial wealth is the total of savings readily available, not including owner-occupied housing. Negative financial wealth indicates that liabilities exceed readily usable assets.)

Mass layoffs, including plant shutdowns and downsizing, affected millions of industrial workers during the years 1984-1998, even when unemployment was reportedly very low. An authoritative count is difficult to find, but a compilation by Teresa Sullivan and colleagues gives a good idea of the scope of the layoffs.179

1984-1986          600,000     mid- and upper-level executives laid off
1987-1990        1,987,553     separations in mass layoffs
1990               586,690     laid-off workers
1992             5,600,000     displaced workers
1994             4,500,000     28% through 'downsizing'
1995-1997        3,600,000     displaced workers
SUBTOTAL        16,874,243
                 1,200,000     temporary layoffs
TOTAL           18,079,243

 

In addition, some 4.5 million workers were on part-time work for economic reasons, not their choice. Another 12.5 million workers were in non-permanent jobs as follows:180

independent contractors        8,500,000
"on call" workers              2,000,000
temporary help agencies        1,300,000
others                           809,000
TOTAL                         12,609,000

Sullivan also refers to the closing of uncounted small businesses which had been supporting families. It should be emphasized that all the foregoing figures are incomplete.

Many of the unemployed resorted to credit cards and ultimately to personal bankruptcy. As Sullivan puts it, "the middle-class way of life can be maintained for quite a while with smoke and mirrors and many credit cards."181 Personal bankruptcies more than quadrupled between 1979 and 1997; a number of those filing were, indeed, workers, but most were solidly in the middle class. Nevertheless, in a sample population, "a full 32.4 percent of the [bankrupt] debtors fell below the poverty rate, more than two and a half times the national average."182 Banks that issued credit cards continued their profitable ways, despite multiplying personal bankruptcies:

Between 1980 and 1992, the rate at which banks borrow money fell from 13.4 percent to 3.5 percent. During the same time, the average credit card interest rate rose from 17.3 percent to 17.8 percent.183

Some card issuers, however, were not content with such widening profitability. Two cases are relevant.

In 1999, Sears, Roebuck, whose credit cards were held by 63 million persons, pleaded guilty to one count of bankruptcy fraud. "The investigation began [in April 1997] after Sears admitted that since the 1980s it had illegally been collecting from bankrupt Sears card holders debt that had been wiped off the books" in a legal bankruptcy court proceeding.184 U.S. Attorney Donald Stern said:

This was not the haphazard action of a few employees. It represented an outrageous company policy, carried out by those responsible for debt collection, which plainly violated the law.185

"Sears agreed to pay $36 million in cash and issue store coupons worth $118 million to [credit] card holders."186

A second case concerned Providian Financial Corporation, a bank and the country's sixth-largest issuer of credit cards. In 2000, it reached an agreement with the federal Office of the Comptroller of the Currency to settle "accusations that it had misled customers about rates and fees, changed rates without notice and delayed posting payments to accounts to generate late fees."187 Providian agreed to reimburse three million cardholders with $300 million, pay the City of San Francisco $3.5 million, and pay $1.6 million to consumers in the state of Connecticut.188 (The reimbursement is tax-deductible and thus will not be recorded as a loss in the company's reports.)

Housing

During the years 1940-1998, the percentage of American households owning their homes was as follows:189

Year    Percent
1940     44.0
1960     60.0
1970     63.0
1980     65.8
1989     60.1
1990     65.0
1992     63.9
1994     63.5
1995     64.7
1998     66.6

The period of greatest growth was 1940-1960; the 1980 high point was not surpassed for another 18 years, and then by less than one percentage point; between 1990 and 1995, the rate of homeownership declined.

These listed percentages relate to the entire population. The picture is greatly altered when we examine figures for specific groups. In 1998, for example, 72.5 percent of whites were homeowners but only 45.3 and 43.9 percent of African Americans and Hispanics, respectively, owned their homes.190 On the other hand, "home ownership fell for families with less than $25,000 of income and for families headed by those aged 45 to 54."191 Uchitelle reports:

The homeownership rate among blacks … has risen to 45.8 percent from 42 percent in 1993. … But black households have recovered only the ground lost since 1983, when the rate was 45.6 percent.192

According to Gyourko, "the least-well educated households are owning at systematically lower rates than in the past."193 He notes, too, that "virtually no new housing is being produced that is of low enough quality to be affordable to low skill households who want to own."194 Indeed, Gyourko adds, "a comparison of the unadjusted and constant quality prices of lower-income homes … suggests a marked deterioration in the quality of such housing since the mid-1970s."195 Blank and Rosen write that "the incidence of homeownership among the poor has decreased. …"196

Rental housing for poorer Americans was becoming less available.

The supply of low-cost housing units that aren't government-subsidized is decreasing. Significant amounts of public housing are being demolished.197

The federal Department of Housing and Urban Development (HUD) reported in 2000 that "5.4 million low-income families were paying more than half their income for housing or living in dilapidated units, a rise of 12 percent since the economic expansion began in 1991."198 Based on his studies in Philadelphia and New York City, University of Pennsylvania professor Dennis Culhane found that "most poor families are paying 60 percent and 70 percent of their income on rent, twice what it was 25 years ago."199 Blank and Rosen observed that "the proportion of income spent on housing has been growing, especially among low-income families."200 Between 1991 and 1995, "the number of low-cost apartments decreased by 900,000 … while the number of 'very-low income' families … grew by 370,000."201 Families paying rents equal to half or more of their incomes numbered 5.32 million, or nearly one-seventh of all renter families.202 Over the next three years, 1995-1997, such families increased to 8.9 million, according to HUD.203 Nationwide, during 1997-1998, the number of applicant families for public housing in 40 big cities "showed waiting-list increases of 10 percent to 25 percent. …"204 In New York City alone, some 116,000 people were on waiting lists, and the wait was eight years.205

Federal housing vouchers were eagerly sought by poor families. Late in 1999, 1.4 million families were using vouchers, which enabled them to pay an average monthly rent of $623, of which $400 was covered by the voucher.206 In the spring of 1999, some 600,000 families were on waiting lists for vouchers. "The wait for vouchers has stretched to 10 years in Newark and Los Angeles, 7 years in Houston, and 5 years in Chicago and Memphis. …"207 During 1995-1998, Congress refused to provide any additional vouchers. In 1999-2000, however, it authorized 110,000 new vouchers.208

The Urban Institute found that about 3.5 million people were homeless at least once a year. "About 65 percent more Americans had an episode of homelessness annually in 1996, during a sustained boom, than in 1987. …"209 Almost one child in ten experienced homelessness at least once a year. Two months before the Urban Institute report, a study by HUD, based on census data, found:

The homeless were deeply impoverished and most were ill. Two-thirds were suffering from chronic or infectious diseases, not counting AIDS, 55 percent lacked health insurance, and 39 percent had signs of mental illness.210

Mentally ill homeless persons did have an alternative to shelters: jails.

Urban rents rose from the 1970s through the 1990s, aggravating the difficulties of urban residence, including homelessness. Nevertheless, a process of segregation directed more of the poor toward cities in the same years. The following table shows the percentage distribution of the poor in the United States by place of residence:211

Places                     1970   1980   1990
Non-metropolitan areas      44     31     28
Central cities              34     39     43
Suburbs                     22     30     29

Within the central cities, the poor were distributed among the following neighborhood types:212

Neighborhood type    1970   1980   1990
Not poor              45     36     31
Poor                  38     41     41
Very poor             17     23     28

Over these 20 years, the share of the central-city poor living in poor or very poor neighborhoods rose from 55 percent to 69 percent. Douglas Massey observes: "Whether one looks south, north, east, or west, or at whites, blacks, Hispanics, or Asians, America became a more class-segregated society during the 1970s and 1980s."213 To the question whether the average American was living in an increasingly class-dominated society, Massey replies: "Averages tell you little when all the movement is toward the extremes."214

SUMMARY

Death and illness are closely linked with class in capitalist America. In 1960, 292,000 deaths were regarded as "excess", i.e., attributable to factors that did not prevail among persons of high socioeconomic status. During 1960-1986, the gap in death rates between poorer or less educated persons and their wealthier, better-educated counterparts widened. Medicaid had not equalized "the chances for survival among the poorest and least educated." This was true for whites and blacks alike. During the 1980s, it was found that inequality of income within states of the U.S. tended to limit declines in mortality. In 1989-1991, excess mortality in metropolitan areas averaged 849.6 per 100,000. In extremely poor areas of black Harlem and Chicago, mortality rates for males aged 5-65 were higher than those of their counterparts in Bangladesh, one of the poorest countries in the world. According to the World Health Organization, "70% of all deaths and fully 92% of deaths from communicable disease in the poorest quintile were 'excess' compared to the mortality that would have occurred at the death rates of the richest quintile." This gap widened during the last century. In 1992, 66,800 workers died from workplace illnesses and accidents, more than the number of Americans who died in the armed forces during the entire Vietnam War. Farm workers remained probably the least protected from workplace injury and illness.

Industry suppression of research on the lethal effects of asbestos postponed remedial and preventive action for years. It took 18 years, punctuated by industry resistance, to get the federal government to require that farm workers be provided with toilets, drinking water, and hand-washing facilities. Over 43 million Americans are without health insurance, and they continue to receive inferior medical attention. Failures like these rise to the seriousness of natural disasters.

Wealth is highly concentrated, with the top fifth of adults owning over 70 percent of the entire wealth of the country. Government has been reluctant to conduct research on wealth and publicize the results. During the 1990s, most Americans did not have enough savings to live on for more than a month or so; the bottom fifth had no savings at all. The economic expansion of the 1990s was "the weakest since World War II," even though a widely shared misimpression held the contrary. Incomes of the poorest workers and minorities suffered extensive reductions which extended into the 1990s. "At the end of 1994, real wages were back to their level of the late 1950s." Nearly three million year-round workers did not even earn poverty-level wages in the decade of the 1990s. The United States is the most favorable place in which to be rich but, among rich countries, the least favorable in which to be poor. Middle-class and working-class families incurred increasing debt to buy necessities.

Home ownership stalled during the 1960s and rose only slightly thereafter. Affordable rental housing became less available over the years. Public housing was subject to long waiting lists, as were federal housing vouchers. The number of homeless ran to about 3.5 million people a year. American society was becoming ever more class-segregated.

 

NOTES

1. Evelyn M. Kitagawa and Philip M. Hauser, Differential Mortality in the United States: A Study in Socioeconomic Epidemiology (Harvard University Press, 1973).

2. Ibid., p. 1.

3. Ibid., p. 4.

4. Ibid., p. 98.

5. Ibid., p. 151.

6. Ibid., p. 152.

7. Ibid., p. 167.

8. Ibid., p. 168.

9. Ibid., p. 179.

10. Gregory Pappas and others, "The Increasing Disparity Between Socioeconomic Groups in the United States, 1960 and 1986," New England Journal of Medicine, 329 (1993) 103-109.

11. Ibid., p. 104 or 105.

12. Ibid., p. 107.

13. Ibid.

14. Ibid., p. 108.

15. Eugene Rogot and others, "Life Expectancy by Employment Status, Income, and Education in the National Longitudinal Mortality Study," Public Health Reports, 107 (1992), p. 460.

16. David E. Moore and Mark D. Hayward, "Occupational Careers and Mortality of Elderly Men," Demography, 27 (February 1990), p. 49.

17. George A. Kaplan and others, "Inequality in Income and Mortality in the United States: Analysis of Mortality and Potential Pathways," British Medical Journal, 312 (1996), p. 1001.

18. John W. Lynch and others, "Income Inequality and Mortality in Metropolitan Areas of the United States," American Journal of Public Health, 88 (July 1998), p. 1074.

19. Ibid., p. 1075.

20. Ibid., p. 1079.

21. See Colin McCord and Harold P. Freeman, "Excess Mortality in Harlem," New England Journal of Medicine , 322 (January 18, 1990) 173-177, and Avery M. Guest and others, "The Ecology of Race and Socioeconomic Distress: Infant and Working-Age Mortality in Chicago," Demography, 35 (February 1998), pp. 23-34. See also Arline T. Geronimus and others, "Excess Mortality among Blacks and Whites in the United States," New England Journal of Medicine, 335 (Nov. 21, 1996), pp. 1552-1558.

22. McCord and Freeman, "Excess Mortality in Harlem," p. 173.

23. Ibid.

24. Ibid., p. 175.

25. Ibid.

26. Guest and others, "The Ecology of Race," p. 25.

27. Ibid., p. 31.

28. Ibid.

29. McCord and Freeman, "Excess Mortality in Harlem," p. 176.

30. Guest, "The Ecology of Race," p. 31.

31. Ibid.

32. World Health Organization, The World Health Report 2000. Health Systems: Improving Performance (Geneva, Switzerland: WHO, 2000), p. 5.

33. Ibid., p. 4.

34. Ibid., p. viii.

35. Ibid., p. xii.

36. Ibid., p. 9.

37. Ibid., p. 26.

38. See Annex Table 2 in ibid., pp. 156-163.

39. See Nancy Krieger, "The Making of Public Health Data: Paradigms, Politics, and Policy," Journal of Public Health Policy, 13 (1992) 412-427.

40. Vicente Navarro, "Race or Class versus Race and Class: Mortality Differentials in the United States," Lancet, 336 (November 17, 1990), pp. 1238-1240.

41. Ibid., p. 1239.

42. Ibid. See also Nancy E. Adler and others, "Socioeconomic Status and Health. The Challenge of the Gradient," American Psychologist, 49 (January 1994), p. 22.

43. Ibid., p. 1238. See also Susan P. Andrulis, "Access to Care is the Centerpiece in the Elimination of Socioeconomic Disparities in Health," Annals of Internal Medicine, 129 (1998), pp. 412-413.

44. J. Paul Leigh and others, "Occupational Injury and Illness in the United States. Estimates of Costs, Morbidity, and Mortality," Archives of Internal Medicine, 157 (July 28, 1997), p. 1564.

45. Ibid., p. 1566.

46. Daniel M. Berman, Death on the Job. Occupational Health and Safety Struggles in the United States (Monthly Review Press, 1978), p. 75.

47. Ibid., p. 95.

48. Jay M. Burstein and Barry S. Levy, "The Teaching of Occupational Health in U.S. Medical Schools: Little Improvement in 9 Years," American Journal of Public Health, 84 (April 1994), p. 846. A much lower figure is given by Navarro for the early 1980s; see Vicente Navarro, "The Labor Process and Health: A Historical Materialist Interpretation," International Journal of Health Services, 12 (1982), p. 6.

49. Ibid., p. 847.

50. James C. Robinson, "Trends in Racial Inequality and Exposure to Work-related Hazards, 1968-1986," Milbank Quarterly, 65 (1987).

51. Ibid., p. 418.

52. David Fairris, Shopfloor Matters. Labor-Management Relations in Twentieth-Century American Manufacturing (Routledge, 1997), p. 7.

53. Ibid.

54. Ibid., p. 9.

55. Hugh Conway and Jens Svenson, "Occupational Injury and Illness Rates, 1992-96: Why They Fell," Monthly Labor Review, 121 (November 1998), p. 43.

56. Ibid., p. 50.

57. David Rosner and Gerald Markowitz, Deadly Dust. Silicosis and the Politics of Occupational Disease in Twentieth-Century America (Princeton University Press, 1991), p. 212.

58. Robert Gordon, "Poisons in the Fields: The United Farm Workers, Pesticides, and Environmental Politics," Pacific Historical Review, 68 (1999), p. 58.

59. Marion Moses, "Farmworkers and Pesticides," p. 162 in Robert Bullard, ed., Confronting Environmental Racism: Voices from the Grassroots (Boston 1993).

60. Ibid., p. 166.

61. Gordon, "Poisons in the Fields," pp. 52 and 63-64.

62. Leigh, "Occupational Injury and Illness in the United States," p. 1559.

63. Gina Kolata, "E.P.A. Scientists Find Greater Cancer Risk in Dioxin," New York Times, May 18, 2000, p. A24.

64. Ibid., p. 793.

65. Ibid., p. 795.

66. Ibid., p. 795.

67. Ibid., p. 798.

68. Ibid., p. 798.

69. Reuters, "U.S. Report Adds to List of Carcinogens," New York Times, May 16, 2000, p. D4.

70. Edward Berkowitz, "Growth of the U.S. Social Welfare System in the Post-World War II Era: The UMW, Rehabilitation, and the Federal Government," p. 243 in Paul Uselding, ed., Research in Economic History, Vol. 5 (1980).

71. Alan Derickson, Black Lung. Anatomy of a Public Health Disaster (Cornell University Press, 1998), p. xii.

72. Ibid., p. 182.

73. Sheldon Krimsky, Hormonal Chaos. The Scientific and Social Origins of the Environmental Endocrine Hypothesis (Johns Hopkins University Press, 2000), pp. 2-3.

74. Ibid., p. 38.

75. Ibid., p. 181.

76. Thomas O. McGarity and Sidney A. Shapiro, Workers at Risk. The Failed Promise of the Occupational Safety and Health Administration (Praeger, 1993), p. 99.

77. Ibid., p. 173.

78. Robert Pear, "Many Employers Found to Violate Law Requiring Parity for Mental Health Coverage," New York Times, May 18, 2000, p. A18.

79. David Jacobs, "Labor and Social Legislation in the United States: Business Obstructionism and Accommodation," Labor Studies Journal, 23 (Summer 1998), p. 62.

80. Michael Janofsky, "Montana Town Grapples With Asbestos Ills," New York Times, May 10, 2000, p. 1.

81. Eileen Welsome, The Plutonium Files. America's Secret Medical Experiments in the Cold War (Dial Press, 1999), pp.70-71.

82. Matthew L. Wald, "U.S. Outlines Plan to Settle Claims of Bomb Plant Workers," New York Times, April 13, 2000, p. A19.

83. Ibid.

84. Ibid.

85. The federal government's actions in military experiments with soldiers as unknowing human subjects between 1950 and 1975 are instructive in comparing industry and government. See Constance M. Pechura and David P. Roll, eds., Veterans at Risk. The Health Effects of Mustard Gas and Lewisite (National Academy Press, 1993).

86. Mark A. Schuster and others, "How Good Is the Quality of Health Care in the United States?" Milbank Quarterly, 76 (1998), p. 556.

87. Ibid., p. 521.

88. Helen R. Burstin and others, "Socioeconomic Status and Risk for Substandard Medical Care," Journal of the American Medical Association, 268 (November 4, 1992), p. 2387.

89. Ibid.

90. Howard E. Freeman and others, "Americans Report on Their Access to Health Care," Health Affairs, (Spring 1987), p. 7.

91. Ibid., p. 10.

92. David R. Williams and Chiquita Collins, "U.S. Socioeconomic and Racial Differences in Health: Patterns and Explanations," Annual Review of Sociology, 21 (1995), p. 361.

93. Eli Ginzberg and others, Improving Health Care of the Poor. The New York City Experience (Transaction Publishers, 1997), p. 60.

94. Ibid.

95. Ibid., p. 119.

96. Fox Butterfield, "Prisons Replace Hospitals for the Nation's Mentally Ill," New York Times, March 5, 1998, pp. 1 and A18.

97. Ibid., p. A18.

98. Ibid.

99. McCord and Freeman, "Excess Mortality in Harlem," p. 177.

100. R. Wallace and D. Wallace, "The Destruction of U.S. Minority Urban Communities and the Resurgence of Tuberculosis: Ecosystem Dynamics of the White Plague in the Dedeveloping World," Environment and Planning A, 29 (1997), pp. 270-271.

101. Deborah Wallace and Rodrick Wallace, A Plague On Your Houses. How New York Was Burned Down and National Public Health Crumbled (Verso, 1998), p. xvi.

102. Ibid.

103. Wallace and Wallace, "The Destruction of Minority Urban Communities," p. 272.

104. Ibid., p. 90.

105. Philip J. Hilts, "Nation Is Falling Short of Health Goals for 2000," New York Times, June 11, 1999, p. A24.

106. Ibid.

107. Sharon M. Keigher, "Reflections on Progress, Health, and Racism: 1900 to 2000," Health & Social Work, 24 (November 1999), p. 246.

108. Carole Shammas, "A New Look at Long-Term Trends in Wealth Inequality in the United States," American Historical Review, 98 (April 1993), p. 424.

109. Carole Shammas and others, Inheritance in America from Colonial Times to the Present (Rutgers University Press, 1987), p. 145.

110. James D. Smith, testimony in U.S. Congress, 95th 1st session, House of Representatives, Committee on the Budget, Task Force on Distributive Impacts of Budget and Economic Policies, Data on Distribution of Wealth in the United States. Hearings. … (GPO, 1977), p. 9. Rep. Donald Fraser, chairman of these hearings, stated that "the Federal Government collects relatively little data on the distribution of wealth." (p. 1).

111. Ibid.

112. James N. Morgan in ibid., p. 3.

113. Nelson McClurg in ibid., p. 152.

114. Ibid., p. 151.

115. James D. Smith and Stephen D. Franklin, "The Concentration of Personal Wealth, 1922-1969," American Economic Review, 64 (1974), p. 162.

116. James Smith in U.S. Congress, Data on Distribution of Wealth, p. 178.

117. Smith and Franklin, "Concentration of Personal Wealth," p. 164.

118. Edward N. Wolff, "International Comparisons of Wealth Inequality," Review of Income and Wealth, 42 (December 1992), p. 441.

119. Edward N. Wolff, "Recent Trends in the Size Distribution of Household Wealth," Journal of Economic Perspectives, 12 (Summer 1998), p. 135.

120. Vincenzo Quadrini and José-Victor Rios-Rull, "Understanding the U.S. Distribution of Wealth," Federal Reserve Bank of Minneapolis Quarterly Review, 21 (Spring 1997), p. 22. Apparently this was a 1992 figure.

121. John C. Weicher, "The Rich and the Poor: Demographics of the U.S. Wealth Distribution," Federal Reserve Bank of St. Louis Review, (July-August 1997), p. 25.

122. Ibid., p. 33.

123. Wolff, "Recent Trends in the Size Distribution of Household Wealth," p. 134.

124. Ibid., p. 144.

125. Shammas, Inheritance in America, p. 151.

126. Wolff, "Recent Trends in the Size Distribution of Household Wealth," p. 140.

127. Daniel Creamer, "Some Determinants of Low Family Income (Based on Family Income Statistics for New York State for 1949)," Economic Development and Cultural Change, 9 (April 1961), p. 414.

128. Ibid., p. 419.

129. Vernon M. Briggs, Jr., "American-Style Capitalism and Income Disparity: The Challenge of Social Anarchy," Journal of Economic Issues, 32 (June 1998), p. 477.

130. Gary Burtless, "Trends in the Level and Distribution of U.S. Living Standards: 1975-1993," Eastern Economic Journal, 22 (Summer 1996), p. 289.

131. Richard B. Freeman and Joel Rogers, What Workers Want (ILR Press, 1999), p. 13.

132. Lynn Karoly, "Anatomy of the U.S. Income Distribution: Two Decades of Change," Oxford Review of Economic Policy, 12 (Spring 1996), p. 81.

133. Susan E. Mayer and Christopher Jencks, "Recent Trends in Economic Inequality in the United States: Income versus Expenditures versus Material Well-Being," p. 181 in Dimitri B. Papadimitriou and Edward N. Wolff, eds., Poverty and Prosperity in the U.S.A. in the Late Twentieth Century (St. Martin's Press, 1993). See also Mayer and Jencks, "Poverty and the Distribution of Material Hardship," Journal of Human Resources, 24 (1989) 88-113.

134. Marvin E. Olsen, "The Affluent Prosper While Everyone Else Struggles," Sociological Focus, 23 (May 1990), p. 84.

135. Lee Rainwater and Timothy M. Smeeding, Doing Poorly: The Real Income of American Children in a Comparative Perspective. Luxembourg Income Study Working Paper No. 127 (Maxwell School of Citizenship and Public Affairs, August 1995), p. i. See also A.B. Atkinson, "Income Distribution in Europe and the United States," Oxford Review of Economic Policy, 12 (Spring 1996), pp. 21-23.

136. David Cay Johnston, "On a New Map, the Income Gap Grows," New York Times, September 17, 2000, p. BU12.

137. Louis Uchitelle, "Muscleman, Or 98-Pound Weakling?" New York Times, October 18, 1998, p. BU1.

138. Richard W. Stevenson, "Black-White Economic Gap Is Narrowing, White House Says," New York Times, February 10, 1998, p. A16.

139. Marcus Alexis, "Assessing 50 Years of African-American Economic Status, 1940-1990," American Economic Review, 88 (May 1998), p. 368.

140. Michael M. Weinstein, "5 Problems Tarnishing a Robust Economy," New York Times, January 4, 1999, p. C10.

141. Mike Davis, "Magical Urbanism: Latinos Reinvent the US Big City," New Left Review, No. 234 (March-April 1999), p. 37.

142. Ibid., p. 38.

143. Robert G. Gregory and others, "The Individual Economic Well-Being of Native American Men and Women during the 1980s: A Decade of Moving Backwards," Population Research and Policy Review, 16 (1997), pp. 116 and 136.

144. Freeman and Rogers, What Workers Want, p. 13.

145. Eli Berman and others, "Changes in the Demand for Skilled Labor Within U.S. Manufacturing: Evidence From the Annual Survey of Manufactures," Quarterly Journal of Economics, (May 1994), p. 391.

146. Ibid., p. 375.

147. Ethan B. Kapstein, Sharing the Wealth. Workers and the World Economy (Norton, 1999), p. 101.

148. Ibid., p. 103.

149. Thomas I. Palley, Plenty of Nothing. The Downsizing of the American Dream and the Case for Structural Keynesianism (Princeton University Press, 1998), p. 41.

150. Ibid., p. 131.

151. Lester C. Thurow, The Future of Capitalism. How Today's Economic Forces Shape Tomorrow's World (Morrow, 1996), p. 24.

152. Ibid., p. 42.

153. Ibid., p. 268.

154. See Louis Uchitelle, "Those Raises, Adjusted for Oil Inflation, Are a Mirage," New York Times, September 10, 2000, p. BU4.

155. Linda Barrington, Does a Rising Tide Lift All Boats? America's Full-Time Working Poor Reap Limited Gains in the New Economy, Research Report 1271-00-RR (Conference Board, 2000), p. 5.

156. Ibid., p. 7.

157. Ibid., p. 8.

158. Ibid., p. 11.

159. Ibid., p. 13.

160. Ibid., p. 16.

161. Ibid., p. 15.

162. Lee Rainwater and Timothy M. Smeeding, Doing Poorly. The Real Income of American Children in a Comparative Perspective. Luxembourg Income Study Working Paper No. 127 (Maxwell School of Citizenship and Public Affairs, August 1995), p. 1.

163. Ibid., p. 11.

164. Janet M. Fitchen, "Why Rural Poverty Is Growing Worse: Similar Causes in Diverse Setting," p. 249 in E.N. Castle, ed., The Changing American Countryside: Rural People and Places (University of Kansas Press, 1995).

165. Ibid., p. 251.

166. Ibid., p. 252.

167. Louis Uchitelle, "Rising Incomes Lift 1.1 Million Out of Poverty," New York Times, October 1, 1999, p. 1.

168. Ibid.

169. Lawrence M. Ausubel, "Credit Card Defaults, Credit Card Profits, and Bankruptcy," American Bankruptcy Law Journal, 71 (1997), p. 250.

170. Ibid., p. 260.

171. Marquette National Bank of Minneapolis v. First of Omaha Service Corp., 439 U.S. 299 (1978).

172. Teresa A. Sullivan and others, The Fragile Middle Class. Americans in Debt (Yale University Press, 2000), p. 135.

173. Ausubel, "Credit Card Defaults," p. 261.

174. Lisa A. Keister, Wealth in America. Trends in Wealth Inequality (Cambridge University Press, 2000), p. 8.

175. Ibid.

176. Ibid., p. 187.

177. Ibid., p. 126.

178. Ibid., p. 64.

179. Sullivan, The Fragile Middle Class, pp. 84 and 94.

180. Ibid., p. 103.

181. Ibid., p. 2.

182. Ibid., p. 63.

183. Ibid., p. 255.

184. Susan Chandler, "Sears Fraud Fine is $60 Million," Chicago Tribune, February 10, 1999, sec. 3, p. 1.

185. Ibid.

186. Leslie Kaufman, "Sears Settles Suit on Raising of Its Credit Card Rates," New York Times, March 11, 1999, p. C2. This headline is peculiar; the charge of criminal fraud to which Sears pleaded guilty did not concern a practice of raising rates but of fraudulently collecting non-existent charges.

187. "Credit Card Issuer Nears Settlement on Tactics," New York Times, June 21, 2000, p. C10.

188. David Leonhardt, "Credit Card Issuer Will Repay Millions to Some Customers," New York Times, June 29, 2000, p. 1.

189. Alan S. Blinder, "The Level and Distribution of Economic Well-Being," p. 429 in Martin Feldstein, ed., The American Economy in Transition (University of Chicago Press, 1980); Louis Uchitelle, "In Home Ownership Data, a Hidden Generation Gap," New York Times, September 26, 1999, p. BU4; Lisa A. Keister, Wealth in America. Trends in Wealth Inequality (Cambridge University Press, 2000), p. 123; Joseph Gyourko, "The Changing Strength of Socioeconomic Factors Affecting Home Ownership in the United States: 1960-1990," Scottish Journal of Political Economy, 45 (September 1998), p. 466; Arthur B. Kennickell and others, "Recent Changes in U.S. Family Finances: Results from the 1998 Survey of Consumer Finances," Federal Reserve Bulletin, 86 (January 2000), pp. 15-16; and Erik Hurst and others, "The Wealth Dynamics of American Families, 1984-94," Brookings Papers on Economic Activity, 1998, p. 271.

190. Michael Janofsky, "HUD Plans Nationwide Inquiry on Housing Bias," New York Times, November 17, 1998, p. A10.

191. Kennickell and others, "Recent Changes in U.S. Family Finances," p. 18.

192. Uchitelle, "In Home Ownership Data," p. BU4.

193. Gyourko, "The Changing Strength," p. 482.

194. Ibid., p. 487.

195. Ibid.

196. Rebecca M. Blank and Harvey S. Rosen, Recent Trends in Housing Conditions Among the Urban Poor, NBER Working Paper No. 2886 (National Bureau of Economic Research, March 1989), p. 1.

197. Joint Center for Housing Studies, Harvard University, The State of the Nation's Housing, cited in Mary Umberger, "The Dearth of Affordable Housing Is Worrisome," Chicago Tribune, July 1, 2000, sec. 4, p. 1.

198. Irvin Molotsky, "Robust Economy Is Contributing to a Loss of Affordable Housing," New York Times, March 28, 2000, p. A18.

199. Nina Bernstein, "Study Documents Homelessness in American Children Each Year," New York Times, February 1, 2000, p. A12.

200. Blank and Rosen, Recent Trends in Housing Conditions Among the Urban Poor, pp. 1-2.

201. Michael Janofsky, "Shortage of Housing for Poor Grows in U.S.," New York Times, April 28, 1998, p. A14.

202. Ibid.

203. David Stout, "Odds Worsen in Hunt for Low-Income Rentals," New York Times, September 24, 1999, p. A14.

204. Ibid.

205. Ibid.

206. Marc Lacey, "Clinton Plans New Vouchers for Working-Class Housing," New York Times, December 29, 1999, p. A16.

207. Ibid.

208. Molotsky, "Robust Economy," p. A18.

209. Nina Bernstein, "Study Documents Homelessness in American Children Each Year," New York Times, February 1, 2000, p. A12.

210. Nina Bernstein, "Deep Poverty and Illness Found Among Homeless," New York Times, December 8, 1999, p. A13.

211. Douglas S. Massey, "The Age of Extremes: Concentrated Affluence and Poverty in the Twenty-First Century," Demography, 33 (November 1996), p. 397.

212. Ibid., p. 398.

213. Ibid., p. 403. See also Douglas S. Massey and Mitchell L. Eggers, "The Spatial Concentration of Affluence and Poverty During the 1970s," Urban Affairs Quarterly, 29 (December 1993), 299-315.

214. Massey, "The Age of Extremes," p. 427.